New test shows AI exhibits clear racial bias when used in recruiting

It feels almost entirely too familiar now: Generative AI is repeating the biases of its creators.

A new survey comes from Bloomberg OpenAI’s generative AI technology, specifically GPT 3.5, was found to show a preference for certain races in recruitment issues. This means that recruiting and HR professionals are increasingly incorporating generative AI-based tools into their automated recruiting workflows, such as LinkedIn’s new generation of AI assistants, that could spread racism. Again, sounds familiar.

The publication used a common and fairly simple experiment of feeding fictitious names and resumes into artificial intelligence recruiting software to see how quickly the system showed racial bias. Such studies have been used to uncover human and algorithmic biases among professionals and recruiters for years.

See also:

Reddit launches an AI-powered tool to detect online harassment

The investigation explains: “Reporters use voter and census data to come up with demographically distinct names — meaning they are associated with Americans of a specific race or ethnicity at least 90 percent of the time — and randomly assign them into equally qualified resumes.” “When asked to rank these resumes 1,000 times, GPT 3.5 (the most widely used version of the model) favored names in certain groups of people more than others, to the point where it could not be ranked by using Benchmarks for Evaluating Employment Discrimination against Protected Classes.”

The experiment divided names into four categories (white, Hispanic, black, and Asian) and two gender categories (male and female) and submitted them to four different job openings. ChatGPT consistently places “female names” in positions where female employees have historically been more represented, such as human resources positions, and less frequently selects black female candidates36 for technical positions such as software engineers.

ChatGPT also ranked resumes for different positions unequally, adjusting rankings based on gender and race.in a statement BloombergOpenAI said this does not reflect how most customers integrate their software in practice, noting that many businesses fine-tune responses to reduce bias. BloombergThe survey also consulted with 33 AI researchers, recruiters, computer scientists, lawyers and other experts to provide context for the results.

Advocates and researchers warn against the ethical liability of reliance on artificial intelligence. The report is not revolutionary in its years of work, but it is a powerful reminder that the widespread adoption of generative generation has not received the attention it deserves. The dangers of artificial intelligence. With only a few major players dominating the market, there is a narrow path to diversity in the software and data that our smart assistants and algorithms are built on. As Mashable’s Cecily Mauran reports in an examination of the Internet’s AI monoliths, incestuous AI development—or building models that are no longer trained on human input but instead on other AI models—leads to quality, reliability and, most importantly, a decline in diversity.

And, as regulators like it artificial intelligence now The argument goes that “humans in the loop” may not be able to help.



Source link

Leave a Reply

Your email address will not be published. Required fields are marked *