Long before the explosive rise of ChatGPT adoption, 75% of U.S. companies were already using some form of artificial intelligence in the employment lifecycle. The use of large language models as a component of the employment decision-making toolkit is a sophisticated, albeit less transparent, step forward that extends beyond traditional resume screening.
In its newer iterations, the technology permeates nearly every stage of the employment process, from hiring and compensation to promotion and termination. Regulation has responded accordingly, increasing at local, state, national and global levels. In this environment of technological and regulatory change, it’s imperative that companies examine how AI tools, particularly opaque systems with hidden layers, interact with employment data to mitigate risks and promote beneficial outcomes.
Many organizations are not aware of the extent and nature of AI use across their business. This can be problematic, as algorithms are only as reliable as their training and integrated data allow. Whether intentional or inadvertent, the biases of software engineers can (and do) find their way into software coding, leading to negative outcomes (e.g., employee homogeneity). Regardless of who developed or sold a program, or whether a company is aware of how a program does what it does, it is the company employing AI that will be held responsible for any misuse.
The harm caused by AI is no longer a future concern. It’s a reality today.
A legacy hiring test established by the EEOC, known as the “four-fifths rule” (also referred to as the adverse impact ratio), continues to set the standard for determining adverse impact in hiring practices.
By comparing each group’s selection rate against that of the favored group (the one with the highest selection rate), this rule can also be applied to review discrimination within AI. For example, if a job application asks whether the candidate can lift a certain amount of weight, an AI review of that application could blindly rule out candidates of a certain age or gender. In this example, the four-fifths rule would surface any disparities in selection rates across age or gender groups.
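As a rough illustration, here is a minimal sketch of the four-fifths calculation in Python. The group names and applicant counts are hypothetical; a real audit would use the organization’s actual selection data and appropriate legal review.

```python
# Minimal sketch of the EEOC four-fifths (adverse impact ratio) check.
# Group names and counts below are hypothetical, for illustration only.

hired = {"group_a": 48, "group_b": 24}      # candidates selected, per group
applied = {"group_a": 100, "group_b": 80}   # candidates who applied, per group

# Selection rate = selected / applied, for each group.
rates = {g: hired[g] / applied[g] for g in applied}

# The benchmark is the group with the highest selection rate.
best = max(rates, key=rates.get)

for group, rate in rates.items():
    ratio = rate / rates[best]
    flag = "adverse impact indicated" if ratio < 0.8 else "passes four-fifths"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```

In this sketch, group_b’s selection rate (0.30) is only 62.5% of group_a’s (0.48), falling below the 80% threshold and indicating potential adverse impact that would warrant closer scrutiny of the AI tool’s screening criteria.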
Similarly, the EEOC has flagged concerns over AI automatically rejecting disabled candidates or individuals with employment gaps. More recently, the agency issued guidance on employer use of AI in any aspect of the employee selection process, including hiring, promotion and termination.