What a Second Trump Administration Means for Employers’ Use of AI
With the rise of artificial intelligence (“AI”), hiring and employment analytics have improved significantly; however, the technology has notable pitfalls. AI, which is often trained on the internet and other publicly available data sources, has been shown to perpetuate unlawful bias and discrimination in hiring decisions.
While no federal law specifically addresses the use of AI in employment decisions, federal and state non-discrimination laws apply, including Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA). AI may discriminate against workers in ways that violate these laws, as when video interviewing software unfairly scores an applicant with a speech impediment based on an analysis of speech patterns, or an automated hiring program screens out applicants based on gender. AI can also implicate other employment statutes; for example, using AI-driven polygraph technology to detect whether an employee or candidate may be lying can violate the Employee Polygraph Protection Act.
During President Biden’s term, there were significant advances in AI that could potentially be used for recruiting, hiring, promotions, and employee evaluations. To regulate AI use in employment, the Biden Administration issued non-binding guidance and articulated enforcement priorities. For example, in October 2024, the Department of Labor (“DOL”) released a set of Best Practices offering strategies for using AI as a tool for workers and businesses while maintaining a focus on workers’ rights, well-being, privacy, and economic security. The guide is meant to serve in conjunction with President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued in October 2023.
The non-binding DOL guidance enumerates eight guiding principles for the “responsible use” of AI:
- Centering worker empowerment to allow workers to be informed and provide input on the design, testing, training and use of AI in the workplace (the DOL guidance describes this first principle as a “North Star”);
- Ethically developing AI in a way that protects workers;
- Establishing AI governance and human oversight to protect workers;
- Ensuring transparency in AI use with workers and potential job seekers;
- Protecting labor and employment rights, including ensuring that AI systems do not violate or undermine workers’ right to organize, health and safety rights, wage and hour rights, and anti-discrimination and anti-retaliation protections;
- Using AI to enable workers to improve their job quality;
- Supporting workers impacted by AI, including “upskilling” workers during AI-related job transitions; and
- Ensuring responsible use of worker data, including limiting the scope of worker data collection and protecting and handling that data responsibly.
While it is unclear what the incoming Trump Administration will do in the AI employment space, President Biden’s Executive Order, along with non-binding guidance such as the DOL’s, is expected to be rescinded. Given that President-elect Trump is surrounded by private-sector technology executives such as Elon Musk, it is also expected that his administration will focus on AI innovation rather than on the ethics and safety concerns emphasized by the Biden Administration.
During his first term, President Trump issued an executive order, “Maintaining American Leadership in Artificial Intelligence,” which sought to reduce barriers to accessing AI technologies, particularly for the federal government. Over the next four years, expect a greater emphasis on reducing government oversight of AI, with a potential continued emphasis on privacy and end-user-focused design.
If the Trump Administration scales back regulations on AI employment tools, regulation is expected to continue at the state and local levels. New York City, Maryland, Colorado, and Illinois, for example, have recently adopted new rules governing AI use. Regardless of which party controls Washington, D.C., existing laws on the books in states such as California, New York, and New Jersey continue to make unfettered use of AI in the workplace a risky proposition for employers.
As AI continues to evolve, employers need to be mindful of the impact of both old and new laws and regulations at the federal, state, and local levels. Additionally, not all AI is created equal. Even within the subcategory of generative AI (which includes ChatGPT), there are various technologies and predictive models, as well as different approaches to how a model is trained and whether information a user inputs into the AI will remain private or confidential. Employers also need to be aware of how legal risk, including employment law risk, is apportioned in their contracts and terms of service with vendors, and whether employment claims based on AI usage will be covered by Employment Practices Liability (EPL) insurance. HR Legalist will continue to monitor developments in the AI space. In the meantime, employers with questions in this area should contact an attorney in Obermayer’s Labor and Employment Department for more specific guidance.
The information contained in this publication should not be construed as legal advice, is not a substitute for legal counsel, and should not be relied on as such. For legal advice or answers to specific questions, please contact one of our attorneys.