As a Commissioner on the U.S. Equal Employment Opportunity Commission, I am a relative newcomer to the discipline of people analytics. I have learned a great deal from the valuable work of experts in the field, many of whom are featured in the Workforce Solutions Review. The more I learn about the discipline, the more I realize that, in many ways, I do people analytics for a living.
People analytics was recently defined as “the analysis of employee and workforce data to reveal insights and provide recommendations to improve business outcomes.” My job as a Commissioner is, in part, to analyze employee and workforce data to reveal insights and provide recommendations on legal compliance. You, the reader, probably use people analytics to make employment decisions for your company. But I use people analytics to determine whether those decisions comply with federal law. In fact, people analytics has become indispensable to me as I focus on the interaction between employment technologies and federal antidiscrimination law.
One of my highest priorities as an EEOC Commissioner is ensuring that AI helps eliminate rather than exacerbate discrimination in the workplace. AI culls and correlates information on a massive scale to make workforce-related predictions, which translates into big business. As a result, AI has the potential to transform HR departments from cost centers to value centers. By next year, nearly half of major corporations will use AI-based technologies in human resources. The market for HR technology alone will be worth $10 billion, more than half the value of the nearly $18 billion global market for HR management. And, according to recent projections, HR technologies will drive growth in the HR management market at a compound annual rate of 12% over the next seven years.
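To put that projection in perspective, here is a quick back-of-the-envelope computation in Python. It simply takes the $18 billion figure and the 12% compound rate above at face value; the result is an implication of those numbers, not an independent forecast.

```python
# Taking the article's figures at face value: an $18 billion market
# growing at a 12% compound annual rate for seven years.
market_now = 18e9              # approximate global HR management market, USD
growth_rate, years = 0.12, 7
projected = market_now * (1 + growth_rate) ** years
print(f"${projected / 1e9:.1f}B")  # prints $39.8B
```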
This explosion is partly due to the pandemic, which has accelerated the rate at which companies are adopting HR technologies. But it also has a lot to do with the potential of AI to improve the way we make decisions, stripping out human biases and relying instead on robust candidate information. When it is well designed and properly deployed, AI has the potential to help workers find their most rewarding jobs and match companies with their most valuable and productive employees. It also can enrich companies’ values and culture by advancing diversity and inclusion in the workplace. But at the same time, if it is poorly designed and improperly deployed, AI can discriminate on a scale and magnitude far greater than any individual HR professional – and that can have devastating consequences both for the victims and the employer.
Some of the laws my agency enforces are over half a century old, but they apply with equal force to decisions made by algorithms as they do to decisions made by individuals. And sometimes, those laws hold employers liable for discrimination regardless of whether they intended to discriminate. To wit, “the algorithm made me do it” is not a defense against discrimination.
Federal law recognizes two types of discrimination: disparate treatment and disparate impact. It tolerates neither. Disparate treatment discrimination is fairly self-explanatory: an employer treats one employee worse than others because of that worker’s race, sex, religion, or another protected characteristic. So, an HR officer who reviews two otherwise identical resumes and then tosses one in the trash because of the candidate’s race engages in disparate treatment discrimination.
But disparate impact is different. Under disparate impact, a business can be legally liable for discrimination even when it had no intention of discriminating. Consider a company whose employees keep arriving at work late because of traffic. It adopts a policy of hiring only people who live in one nearby zip code, because workers who live there are likely to arrive on time every day. Suppose the people who live in that zip code are predominantly members of one race. In that case, the employer’s policy will effectively deny people of other races the opportunity to compete for jobs there. That could amount to disparate impact discrimination under federal law.
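In practice, regulators and analysts screen for this kind of effect with the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: if one group’s selection rate falls below 80% of the highest group’s rate, the practice generally merits closer scrutiny. The Python sketch below shows the computation, using hypothetical applicant counts for our zip code example; it is a screening heuristic, not a legal conclusion.

```python
# A minimal sketch of the four-fifths rule: flag any group whose
# selection rate is below 80% of the most-selected group's rate.
# The applicant counts here are hypothetical.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate (hired / applied)."""
    return {group: hired / applied for group, (hired, applied) in outcomes.items()}

def four_fifths_flags(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate < 0.8 * best for group, rate in rates.items()}

applicants = {"in zip code": (48, 100), "outside zip code": (21, 100)}
print(selection_rates(applicants))   # {'in zip code': 0.48, 'outside zip code': 0.21}
print(four_fifths_flags(applicants)) # {'in zip code': False, 'outside zip code': True}
```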
So, what does this have to do with people analytics? In both cases, data is everything. Data makes the difference between a good hire and a bad one, and between a deserved promotion and an unwanted one. And, when it comes to AI-informed HR technologies, data spells the difference between lawful and unlawful decisions. Here, the case of our resume screener is instructive.
On average, resume screeners spend seven seconds reviewing each resume. This isn’t because all HR professionals have impossibly short attention spans; it’s because of the sheer volume of resumes they have to sift through in a given day. When they discard a resume, it may not be altogether clear why. Maybe they tossed it in the trash because the candidate didn’t meet the basic job qualifications. Maybe they didn’t like the font. Or maybe they tossed it because of the candidate’s race. The human mind is mysterious. Absent communication, we can never know someone’s true motives.
But AI can correct for that “black box” problem. Carefully designed, AI can mask markers of protected classes like race, gender, age, or disability. It can hide proxy terms, like candidates’ names, the names of schools, or memberships in clubs associated with a particular gender or race. It can offset the well-documented confidence gap that leads women to understate their abilities on resumes and men to overstate theirs. It can identify a candidate’s adjacent skills, and it can identify candidates for upskilling opportunities. In short, AI can determine the best candidates based not only on their merit but also on their potential, while stripping out human bias.
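To make the masking idea concrete, here is a toy Python sketch of proxy-term redaction. The pattern list and resume text are hypothetical, and production tools rely on far more sophisticated language analysis than simple pattern replacement; this only illustrates the principle.

```python
import re

# A toy illustration of proxy-term masking, not any production tool:
# the patterns and resume text below are hypothetical.
PROXY_PATTERNS = [
    r"\b(?:women's|men's)\s+\w+",    # gendered teams and clubs
    r"\bsorority\b|\bfraternity\b",  # gender-indicative organizations
]

def mask_proxies(resume_text: str) -> str:
    masked = resume_text
    for pattern in PROXY_PATTERNS:
        masked = re.sub(pattern, "[REDACTED]", masked, flags=re.IGNORECASE)
    return masked

print(mask_proxies("Captain, women's chess club; treasurer of Delta sorority."))
# Captain, [REDACTED] club; treasurer of Delta [REDACTED].
```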
At the same time, AI can replicate and amplify existing bias. Amazon learned this when it tested a resume screening tool between 2015 and 2017. Data scientists fed the program’s algorithm a data set consisting of resumes belonging to the company’s current employees, along with resumes that had been submitted in the prior ten years. Using machine learning, the program identified patterns in the historical data set and then used those patterns to rate new applicants. However, because the vast majority of resumes in the data set belonged to men, the program automatically downgraded resumes containing certain words and phrases, such as the names of women’s sports teams, women’s clubs, and women’s colleges. This was not proof of misogynistic intent on the part of the AI; it was a function of the data fed to the AI in the first place.
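The mechanism is easy to reproduce with synthetic data. The Python sketch below invents a biased hiring history in which no applicant who mentioned a women’s organization was ever hired; a simple model trained on that history learns a strongly negative weight on the mention itself. The data and features are invented for illustration and bear no relation to Amazon’s actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data only; the features and hiring history are invented.
rng = np.random.default_rng(0)
n = 5000
experience = rng.normal(5, 2, n)   # the signal we want the model to learn
womens_org = rng.random(n) < 0.1   # resume mentions a women's organization
# Invented biased history: qualified candidates were hired, unless the
# resume mentioned a women's organization.
hired = (experience + rng.normal(0, 1, n) > 5) & ~womens_org

X = np.column_stack([experience, womens_org])
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # the second weight is strongly negative: the model
                    # has learned to penalize the mention itself
```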
Amazon’s resume screening program is an example of how biased inputs yield biased outputs. It is a cautionary tale about how a facially neutral tool can produce disparate impact discrimination. But it is also an example of a vigilant employer that did not simply trust the algorithms to get things right. Instead, Amazon tested the program, evaluated its performance, and, when it proved unworkable, abandoned it without ever using it to make a hiring decision.
Even if data scientists design what appears to be the perfect HR algorithm, employers cannot simply hand HR functions over to a robot army and call it a day. Human intervention and oversight are key to ensuring that AI is operating in a legally compliant manner. In fact, in some cases, the law requires nothing less.
I gave you an example of how AI can help mask someone’s membership in a protected class in a way that advances equal employment opportunity for all. But there are some instances in which the law requires that employers not treat all workers the same, instances in which employers must be highly sensitive to employees’ disabilities, pregnancies, or religious observances in order to make reasonable workplace accommodations for them.
This issue is particularly salient where AI is used to perform managerial and supervisory functions. For example, at some firms, AI is used to monitor (and report on) employee productivity and safety. It tracks their whereabouts, their time, and even their mood. AI sends automatic reprimands and warnings to employees who fall short of performance benchmarks and, according to some reports, is even used to terminate employees summarily. Ethical concerns aside, from a purely legal perspective, automating employment decisions in this way can be highly problematic under federal antidiscrimination law.
If an employee has a disability, is pregnant, or honors religious observances, the employer is required by law to engage in an interactive process to determine whether and how it can provide reasonable accommodations. Most of the time, an employee initiates the interactive process by notifying the employer of the need for a reasonable accommodation. This conversation can be sensitive, personal, and even difficult for employees. Employees may be reluctant to have that conversation if their primary interface with their employer is an app or a chatbot. In addition, there are some instances in which an employer may be expected to initiate the interactive process without being asked — for example, if the employer knows that the employee is experiencing workplace problems because of a disability. Under those circumstances, the process often starts when a supervisor senses, with their own eyes or judgment, that an employee needs intervention.
If an HR technology applies one-size-fits-all requirements to the entire workforce, irrespective of the unique needs of protected individuals, the risk of employment discrimination increases exponentially. So, striking the right balance between automated decision-making and human decision-making is essential.
When I joined the EEOC last year, I made it my priority to clarify how federal antidiscrimination law applies to technologies that are transforming not only the way we work but the way we manage workers. I want to support employers, employees, and the AI community in their efforts to use AI to make the workplace more fair, diverse, and inclusive. The EEOC is a law enforcement agency, but I believe that enforcement alone will never be sufficient to achieve our mission. Preventing employment discrimination from occurring in the first place is preferable to remedying the consequences of discrimination. It has been my experience that most employers want to do the right thing; they just need the tools to comply. I have found this to be uniquely true in the AI space.
I have encountered a community of engineers, entrepreneurs, and employers determined to get this right. The industry has been clamoring for guidance on developing and deploying AI in ways that are fully compliant with antidiscrimination law. They believe in their algorithms and in their potential to promote equality of opportunity in the workplace.
We cannot fully realize the potential of AI technologies for the American people – and for the global economy – unless those technologies are applied to uphold individuals’ most cherished rights: civil rights. My goal is to help our innovators and employers do precisely that and to ensure that AI helps promote inclusion and helps minimize discrimination in the workplace.