

I asked ChatGPT how bias shows up in AI. Here's what it said.


With the craze over ChatGPT's release, I couldn't help but wonder what, if anything, it would say about the dangers of bias being coded into its responses. So I asked. And I think the answer is spot on:


“Bias can show up in AI in a number of ways, including:

1. Data bias: AI models are only as good as the data they are trained on.

If the data used to train an AI model is biased in some way, then the model itself may be biased. For example, if an AI model is trained on historical data that contains biases, such as racial or gender biases, then the model may perpetuate those biases.

2. Algorithmic bias: Algorithms used in AI can also be biased.

This can happen when the algorithm contains implicit biases based on the assumptions made by the developers. For example, an algorithm that is designed to identify job candidates based on their resumes may be biased if it relies on certain keywords or educational credentials that are more common among certain groups of people.

3. User bias: Bias can also be introduced into AI systems through user behavior.

For example, if users provide feedback or ratings that are biased, then the AI model may learn to make biased predictions based on that feedback.

4. Interpretation bias: Even if an AI model itself is not biased, the way that people interpret its outputs can be biased.

For example, if an AI model is used to make decisions about hiring or lending, and people interpret its outputs in a biased way, then the decisions made based on those outputs may also be biased.

It is important to acknowledge and address these sources of bias in AI to ensure that AI systems are fair and equitable for all users.”

The efficiencies to be gained from AI are tempting, but we always have to be mindful of the trade-off. What if your company has AI assist with, or replace human accountability for, recruiting, hiring, performance ratings, management of HR policies, documentation of discrimination or harassment, or administration of benefits? How can you ensure AI is applied fairly? That it strips the bias out of these activities rather than codifying bias into them?
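One concrete way to begin answering that is a routine audit of outcomes by group. Here is a minimal sketch in Python, with hypothetical numbers and helper names of my own, of the "four-fifths rule" check that is commonly used as a first-pass screen for adverse impact in selection decisions:

    # First-pass adverse-impact screen using the "four-fifths rule":
    # flag any group whose selection rate falls below 80% of the
    # highest group's rate. All numbers here are hypothetical.
    def adverse_impact_flags(outcomes, threshold=0.8):
        # outcomes maps group name -> (number selected, number of applicants)
        rates = {group: selected / applicants
                 for group, (selected, applicants) in outcomes.items()}
        top_rate = max(rates.values())
        return {group: rate / top_rate < threshold
                for group, rate in rates.items()}

    outcomes = {"Group A": (50, 100), "Group B": (30, 100)}
    print(adverse_impact_flags(outcomes))
    # {'Group A': False, 'Group B': True} -> Group B's selection rate
    # is only 60% of Group A's, well under the four-fifths threshold.

A check like this doesn't prove fairness on its own, but running it regularly at every AI-assisted decision point makes disparities visible before they become systemic.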

AI is only as unbiased as its developers, data, code, and users. Given the complexity of unconscious bias, it would be impossible to keep bias out without deploying diverse teams to develop, manage, QA, monitor, and continually improve these systems. Otherwise it's bias in, bias out, now at a magnified, systemic level.
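To make "bias in, bias out" concrete, here is a small illustration (assuming Python with NumPy and scikit-learn, on synthetic data I made up): two groups with identical skill distributions, but historical hiring data that favored one group. A model trained on that history learns to penalize group membership itself.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    skill = rng.normal(size=n)           # skill is identical across groups
    group = rng.integers(0, 2, size=n)   # 0 = Group A, 1 = Group B
    # Historical decisions rewarded skill but also penalized Group B
    logit = 1.5 * skill - 1.0 * group
    hired = rng.random(n) < 1 / (1 + np.exp(-logit))

    model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
    print(model.coef_)   # a clearly negative weight on the group feature:
                         # the model has learned the historical bias

Dropping the group column doesn't fix this either, since proxies like zip code or school can carry the same signal. That is exactly why diverse teams and ongoing monitoring matter.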

by Michelle Bogan

