Affirmative Action and AI in the Workplace
With the launch of ChatGPT and the Supreme Court's decision to end affirmative action in colleges and universities, there has been a lot of speculation, and a fair amount of misinformation, about how all this impacts the workplace.
The EEOC (Equal Employment Opportunity Commission) is the federal agency that enforces Title VII of the Civil Rights Act, the law that makes it illegal to discriminate against an employee or job applicant based on race, color, religion, sex (including gender identity, sexual orientation, pregnancy, and related conditions), or national origin. The EEOC also enforces companion laws that protect against discrimination based on age (40 and up), disability, and genetic information. These protections cover hiring, pay, benefits, training, promotions, and firing. The EEOC's job is to make sure everyone receives fair treatment in the workplace.
When the Supreme Court ended affirmative action in higher-education admissions, the EEOC made clear that the decision does NOT apply to DEI initiatives in the workplace.
The EEOC stated that programs may need to be in place within workplaces to help companies achieve the goals of Title VII, specifically "to break down old patterns of segregation and hierarchy and to overcome the effects of past or present practices, policies, or other barriers to equal employment opportunity."
This is what DEI programs are designed to do, through education, policy and benefits updates, and equitable process design that mitigates the bias that can affect decisions in hiring, staffing, mentoring, compensation, and promotions. If your organization is doing this work, you are doing the right thing, both for your people and under the law.
AI has an interesting intersection here, particularly related to hiring and promotion practices, and the EEOC has updated its guidance to address this:
If AI, or any software, has a disproportionate impact on which candidates are selected based on one of the protected classes in Title VII, the employer will be held accountable for discrimination under the law.
AI is particularly challenging here because the model learns from past behavior -- so if you have a flawed process that favors a certain demographic, your AI will learn from that and build on it. You are essentially automating bias.
This does not mean you shouldn't use these tools -- it does mean you need to be intentional about why you are using them, what data and history they are learning from, and how you are doing testing and QA before rolling them out.
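One concrete starting point for that testing is the "four-fifths" rule of thumb from the EEOC's Uniform Guidelines on Employee Selection Procedures: if any group's selection rate falls below 80% of the highest group's rate, that is generally treated as initial evidence of adverse impact. Below is a minimal sketch in Python of that check; the function names and the candidate data are hypothetical, and a real audit would go further (statistical significance testing, intersectional groups, legal review).

```python
# Minimal adverse-impact check using the EEOC's "four-fifths" rule of thumb.
# All data below is hypothetical, for illustration only.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total applicants)."""
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag any group whose selection rate is below `threshold`
    (default 80%) of the highest group's selection rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

# Hypothetical screening results from an AI resume-ranking tool
outcomes = {
    "group_a": (50, 100),  # 50% selected
    "group_b": (30, 100),  # 30% selected -> ratio 0.6, below 0.8
}

print(adverse_impact_flags(outcomes))
# {'group_a': False, 'group_b': True}
```

Running this kind of check on your tool's output before launch, and at regular intervals afterward, is one way to catch automated bias before it becomes a pattern.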
Most organizations are far from achieving parity on any of the dimensions covered by Title VII. Stay committed to your DEI work and you will help bring the vision of Title VII to life, for the benefit of your employees, your business, and your community.
Visit https://www.eeoc.gov/ for more information on the EEOC and Title VII.
Need help determining how to achieve high-impact, measurable results with your DEI work? Contact us here.
Sign up to receive our newsletter and get DEI insights from Equity At Work™ first.