
On October 28, 2021, the Equal Employment Opportunity Commission (EEOC) issued a press release announcing its plan to more closely monitor the use of artificial intelligence (AI) and algorithmic tools in hiring and other employment-related decisions, in order to ensure compliance with federal anti-discrimination laws.
A Need to Examine AI
As more employers use some form of AI in their employment practices, such as steering job ads, recruiting chatbots, and behavioral assessments, more research has been conducted on the ways in which AI may unintentionally discriminate against certain groups, such as women and minorities.
While AI can help employers manage their recruitment and employee life cycle processes more efficiently, improve performance, and reduce turnover, it is necessary for regulators like the EEOC to examine how these tools function to ensure that disparities in employment do not continue to widen and persist.
EEOC’s Initiative
According to the EEOC, the intent of the initiative is to “more closely examine how technology is fundamentally changing the way employment decisions are made by aiming to guide applicants, employees, employers, and technology vendors in ensuring that these technologies are used fairly, consistent with federal equal employment opportunity laws.”
Specifically, the EEOC has laid out the following plan for its newly introduced program:
- Issue technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions;
- Identify promising practices;
- Gather information about the adoption, design, and impact of hiring and other employment-related technologies;
- Launch a series of listening sessions with key stakeholders about algorithmic tools and their employment ramifications; and
- Establish an internal working group to coordinate the agency’s work on the initiative.
That being said, this is not the first time the EEOC has questioned the use of such advanced technology in connection with employment law compliance. In 2016, the agency held a public hearing to discuss the use of data analytics in hiring, performance management, retention, and other employment decisions. The technology has advanced considerably since then, and the EEOC now appears to be taking the necessary steps to ensure that compliance is upheld as it is used in the workplace.
Interaction with State and Local Legislation
While the EEOC has only begun to examine compliance issues surrounding automated and algorithmic tools, some states have already taken, or are considering, steps to address the use of AI in the workplace. For example, Illinois law requires employers to provide notice to and obtain consent from job applicants whose recorded video interviews are analyzed using AI. On a local level, New York City (NYC) has already acted to address the potential compliance challenges. On November 10, 2021, New York City passed Int. No. 1894-A, set to take effect on January 1, 2023, which prohibits NYC employers from using AI programs to make hiring, recruitment, or advancement decisions unless the program has been audited for bias by a third-party auditor within the past 12 months. If the employer's AI technology passes the annual audit, the employer must then notify every external and internal candidate, at least ten business days in advance, of its intent to use such software in the decision-making process. NYC employers who fail to comply with the legislation are subject to a civil penalty of $500 to $1,500 for each violation.
Next Steps for Employers
According to the Harvard Business Review, “many hope that algorithms will help human decision-makers avoid their own prejudices by adding consistency to the hiring process. But algorithms introduce new risks of their own. They can replicate institutional and historical biases, amplifying disadvantages lurking in data points like university attendance or performance evaluation scores. Even if algorithms remove some subjectivity from the hiring process, humans are still very much involved in final hiring decisions.” As a result, employers who use AI in their employment practices should properly vet software vendors and platforms before use, asking vendors to validate and document their position on the ethical use of AI in hiring and the steps taken to ensure the technology complies with employment laws. Employers should also understand how their chosen AI software or platform works at each step of the process in order to account for and mitigate bias.
Additionally, employers should review their current pre-employment and promotional tools, if applicable, to determine whether artificial intelligence or algorithmic software is used, and should prepare for the forthcoming EEOC guidance on employment law compliance when using such programs. Furthermore, employers are encouraged to seek the assistance of legal counsel regarding the applicability of any state or local legislation that may bear on employment decisions and the use of AI.