These free resources can help you avoid disability discrimination claims when hiring using artificial intelligence


More employers are now using software, algorithms, and artificial intelligence to make smarter hiring decisions. There’s nothing inherently unlawful about that.

Except, consider this:

  • Maybe the algorithm intentionally or unintentionally “screens out” an individual with a disability, even though that individual can do the job with a reasonable accommodation.
  • Or maybe an employer fails to provide a reasonable accommodation necessary for a job applicant or employee to be rated fairly and accurately by the algorithm.
  • Or perhaps the algorithmic decision-making tool poses “disability-related inquiries” or seeks information that qualifies as a “medical examination” before giving the candidate a conditional offer of employment.

Any one of these scenarios would violate the Americans with Disabilities Act.

Fortunately, the U.S. Equal Employment Opportunity Commission and the U.S. Department of Justice just released some technical guidance to help employers navigate these ADA issues.

The EEOC’s release, “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees,” focuses on those three primary concerns in the bullet points above.

For example, did you know that an employer may be responsible for providing reasonable accommodations related to algorithmic decision-making tools, even if the software or application is developed or administered by another entity? Suppose an applicant were to tell the vendor that a medical condition made it difficult to take the test (which qualifies as a request for reasonable accommodation). If the vendor did not provide an ADA-required accommodation, the employer likely would be responsible — even if it was unaware that the applicant reported a problem to the vendor.

Or suppose a chatbot programmed with a simple algorithm rejects all applicants with significant gaps in their employment history. If a particular applicant had a gap in employment, and if a disability caused that gap (for example, if the individual needed to stop working to undergo treatment), the chatbot may function to screen out that person because of the disability. That’s an ADA violation.
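To make that risk concrete, here is a minimal sketch of how such a gap-based screening rule might work. Everything in it is hypothetical: the `has_significant_gap` function, the six-month `MAX_GAP_DAYS` threshold, and the sample dates are invented for illustration, not taken from the EEOC's document or any real product.

```python
from datetime import date

# Hypothetical screening rule of the kind the EEOC example describes:
# reject any applicant whose resume shows an employment gap longer than
# six months. The threshold and field layout are invented for illustration.
MAX_GAP_DAYS = 180

def has_significant_gap(jobs: list[tuple[date, date]]) -> bool:
    """Return True if any two consecutive jobs are separated by more than MAX_GAP_DAYS."""
    ordered = sorted(jobs)  # sort employment history by start date
    for (_, prev_end), (next_start, _) in zip(ordered, ordered[1:]):
        if (next_start - prev_end).days > MAX_GAP_DAYS:
            return True
    return False

# An applicant who stopped working for roughly eight months to undergo
# medical treatment is rejected by this facially neutral rule: the kind
# of "screen out" the EEOC flags as a potential ADA violation.
applicant = [
    (date(2018, 1, 1), date(2020, 6, 30)),   # prior job
    (date(2021, 3, 1), date(2022, 4, 30)),   # job after treatment
]
print(has_significant_gap(applicant))  # True, so the chatbot auto-rejects
```

The point is not the code itself but how little it takes: a rule this simple, applied uniformly to every applicant, can still screen out a qualified individual because of a disability.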

The EEOC’s technical assistance document contains a lot more helpful guidance. There are also some separate tips for workers and a Disability-Focused Listening Session (video) that you can watch here.

Meanwhile, the DOJ’s guidance document, “Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring,” provides a broad overview of rights and responsibilities in plain language. This document:

  • Provides examples of the types of technological tools that employers are using;
  • Clarifies that, when designing or choosing technological tools, employers must consider how their tools could impact different disabilities;
  • Explains employers’ obligations under the ADA when using algorithmic decision-making tools, including when an employer must provide a reasonable accommodation; and
  • Provides information for employees on what to do if they believe they have experienced discrimination.

Bottom line: If you are using software, algorithms, and artificial intelligence to hire, review these resource documents, stay abreast of new developments (hint: read this blog), and consider having outside employment counsel proactively review your processes.

Or wait for the EEOC to get the first shot at scrutinizing your hiring methodologies. That shortsighted decision may end up paying off like my investment in James Harden playoff futures.
