There are a number of cautionary tales that highlight how AI systems have gone wrong in the past by exhibiting unintended bias. The best thing we can do is learn from these stories so that the same mistakes are not repeated.

Apple: Credit card

In 2019, Apple Card, launched by Apple and Goldman Sachs, faced allegations of gender bias in its AI-driven credit assessment system. Several high-profile cases surfaced in which women received significantly lower credit limits than their male counterparts, even when they had higher credit scores and similar financial profiles.

The AI model, trained on historical financial data, likely perpetuated existing gender biases, resulting in unfair credit decisions. This incident highlights the risk of bias in AI systems and the importance of transparency and fairness in financial services.

Amazon: Hiring tool

Amazon developed an AI hiring tool to streamline recruitment by analysing resumes and selecting top candidates. However, this system also turned out to exhibit bias against female candidates.

Trained on resumes submitted over a 10-year period, the model favoured male applicants because the data reflected the historically male-dominated tech industry.

The AI downgraded resumes containing terms such as "women's", even in phrases like "women's team" or "women's champion". Amazon eventually scrapped the tool, underscoring the dangers of training AI on biased data.
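To see how this kind of bias creeps in, consider a minimal, purely illustrative sketch in Python using scikit-learn. This is not Amazon's actual system, and all the data below is synthetic: it trains a simple text classifier on made-up "historical" hiring decisions that were skewed against resumes mentioning a women's team, and then shows that the model learns a negative weight for that very word.

    # A minimal, hypothetical sketch of how a classifier trained on
    # skewed historical hiring decisions learns to penalise a gendered
    # token. All data is synthetic; this is not Amazon's actual system.
    import random

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    random.seed(0)

    resumes, hired = [], []
    for _ in range(1000):
        senior = random.random() < 0.5   # a genuine skill signal
        womens = random.random() < 0.5   # an irrelevant attribute
        text = ("senior engineer, " if senior else "engineer, ")
        text += "captain of the women's team" if womens else "captain of the team"
        # Biased historical outcome: seniority helps, but resumes that
        # mention a women's team were hired less often regardless of skill.
        p = 0.7 if senior else 0.3
        if womens:
            p -= 0.25
        resumes.append(text)
        hired.append(1 if random.random() < p else 0)

    vec = CountVectorizer()
    X = vec.fit_transform(resumes)
    model = LogisticRegression().fit(X, hired)

    # The learned coefficient for the token "women" comes out negative:
    # the model has absorbed the historical bias and will downgrade any
    # new resume containing that word.
    for token in ("senior", "women"):
        idx = vec.vocabulary_[token]
        print(token, round(model.coef_[0][idx], 2))

The model has no notion of fairness, only of patterns in the labels it was given; if the historical decisions were biased, the learned weights will be too.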

Optum: Risk assessment

An AI tool from Optum, which many health systems were using to identify high-risk patients for follow-up care, was prompting doctors to pay more attention to white patients than to black patients. Of those identified for follow-up, 82% were white and only 18% were black; but a review of the medical histories of the sickest patients showed that the numbers should have been 53% and 46% respectively.

The Optum AI had been applied to at least 100 million patients. As you might expect, the bias arose because the AI had been trained on historical data in which black patients had received less medical attention than they needed.

Summary

As accountants, you might feel that bias issues within an AI system are less likely to have such a big impact. After all, you are not in recruitment or providing medical care. But these examples highlight the risks involved when an organisation gets something wrong.

Training AI tools on biased historical data will produce biased outcomes. Similar issues arising in finance and accountancy could erode trust and fairness in the profession.

Want to explore the relationship between ethics, AI, and accounting in more depth? Check out Julia Penny’s 4-hour course, AI and Ethics.
