Welcome to the Machine(s): Can AI Save Employers From Discrimination or Retaliation Allegations?
Employees who claim that they were discriminated against or retaliated against by their employer typically must prove that the employer was substantially motivated by their membership in a protected class (such as race, gender, age, or disability) or by their engagement in a protected activity (such as complaining about discrimination or harassment). But what if the employer claims that it disciplined or fired the employee solely for lack of productivity? And what if productivity is measured by an automated tracking process that utilizes Artificial Intelligence (“AI”) to generate warnings or terminations regarding quality or productivity entirely on its own, without input from human supervisors? Is it still possible to show discrimination or retaliation? In September 2018, Amazon successfully relied on such a system in a case before the National Labor Relations Board (“NLRB”).
The employee claimed that she was terminated for engaging in protected concerted activity by making a complaint to human resources about a manager. In response, Amazon claimed that the employee was terminated solely as a result of its automated productivity tracking and termination process. In describing its system, Amazon noted the following:
“Amazon’s system tracks the rates of each individual associate’s productivity and automatically generates any warnings or terminations regarding quality or productivity without input from supervisors. Any system feedback or automatically generated warnings or termination notices are required to be provided to associates within 14 days. If the feedback is not provided for any reason…the notice expires and is no longer valid. Similarly, the [system]…allows for notices to be exempted in the system if there is a delay in delivering feedback, lack of work, mechanical issues, etc…which created some barrier that prevented the associate from being successful. While managers have no control over rates, they can override the automatically generated notices in order to exempt or override the notice if a policy was applied incorrectly….”
So, does this mean that delegating employment decisions to the robots will insulate employers from discrimination or retaliation claims? Of course not. For starters, Amazon was careful to point out that its system generated warnings or terminations without input from supervisors, an area where discriminatory or retaliatory intent could remain in play despite the use of AI. And, as Amazon acknowledged, some level of manager involvement is inevitable when it comes to overriding the robots. This leaves the door open to that discretion being exercised, or withheld, in a discriminatory or retaliatory manner. On top of this, blind reliance on productivity metrics alone, without any room for context, could run afoul of the law if, for example, an employee’s productivity dips due to an ineffective accommodation for a disability.
We can also thank Amazon for reminding us all that an unintended consequence of AI is that it can learn and apply existing human biases. Such was the case when Amazon tested a computer program aimed at analyzing resumes in order to pick the top candidates for a position. Much to Amazon’s chagrin, the machine taught itself to be sexist by learning that the word “women’s” was bad, and it gave lower scores to applicants from women’s colleges.
While AI offers added efficiency and apparent neutrality, discrimination claims do not always require a showing of bad intent. For instance, a disparate impact claim requires only that the plaintiff demonstrate “that application of a facially neutral standard has resulted in a significantly discriminatory hiring pattern.” Thus, an employer could theoretically still face liability if it uses AI to screen candidates and that practice has a disproportionately negative effect on women (even if the program has not, in fact, learned to be sexist).
Finally, employers who utilize AI to help drive employee productivity should be mindful of current and future laws that could pose a risk in this area. For example, in states like California, employers must provide employees with meal and rest breaks during their shift. Employees who skip breaks or cut them short due to pressure to hit productivity targets could later bring wage and hour claims in court, either individually or as part of class actions. Productivity-focused practices at companies like Amazon have also attracted the attention of employee advocates, who claim that these policies and systems threaten the well-being of workers. It remains to be seen whether legislatures in employee-friendly states like New Jersey, New York or California could at some point in the future take up this cause and pass laws limiting computer-tracked productivity requirements in the workplace.
At the end of the day, technology can be a very useful tool to help employers remain objective, and therefore reduce the risk of discrimination or retaliation claims. However, employers and HR professionals should consider AI to be only one of many arrows in their quivers. Human involvement in employment decisions cannot and should not be entirely eliminated from the process, and employers should never rely solely on technology to make employment decisions.
Employers facing this situation, or with questions about the impact of AI in an employment setting, should consult appropriate legal counsel.
The information contained in this publication should not be construed as legal advice, is not a substitute for legal counsel, and should not be relied on as such. For legal advice or answers to specific questions, please contact one of our attorneys.