The use of algorithms in decision making

Every accountant who has used a spreadsheet program knows the danger of producing a well formatted spreadsheet, nicely printed and looking impressive, only to find that a formula error means that some or all of the figures are wrong. We've all been there: the total includes the subtotals as well, so there is double counting; the range set for AutoSum misses out the top line; the logic of a formula is flawed so it doesn't do what you planned it to do; and so on. The opportunities for cocking up a spreadsheet are many and varied.
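
By way of a purely illustrative sketch – the figures and the Python below are invented, not taken from any real workbook – here is how easily those two classics, the missed top line and the double counted subtotals, can arise:

```python
# Illustrative only: invented figures showing two classic spreadsheet errors.

line_items = [1200, 850, 430, 975]      # four detail rows
subtotal_a = sum(line_items[:2])        # 2050
subtotal_b = sum(line_items[2:])        # 1405

# Error 1: the AutoSum-style range starts one row too low, missing the top line.
bad_sum = sum(line_items[1:])           # 2255 instead of 3455

# Error 2: the grand total adds the detail rows AND the subtotals.
bad_total = sum(line_items) + subtotal_a + subtotal_b   # 6910, double counted

correct_total = sum(line_items)         # 3455
print(bad_sum, bad_total, correct_total)
```

Both mistakes still produce a spreadsheet that looks perfectly plausible on the page, which is exactly the problem.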

In the nightmare scenario the spreadsheets are proudly presented to the Board, only for some lay person, often the production or sales director, to say 'I don't understand these figures. Why is that…?' and the embarrassment begins as you spot the error and start to waffle your way out of it – or grovel, whichever is more appropriate.

The thing looks right and is initially believed because it looks right – until the detail comes under scrutiny. We are happy to believe the numbers because the presentation encourages us to.

This is a small example, familiar to most, but organisations under pressure are increasingly turning to computer software as an aid to decision making. The results produced by the software can be believed simply because they have been produced by the software. However, one has only to consider the Post Office 'Horizon' scandal to see that slavish adherence to an outcome generated by computer software can lead to flawed decision making, unfair treatment or illogical outcomes.

For example, in 2015 it emerged that, in one three year period, 2,380 benefit claimants who were sick and disabled were declared 'fit for work' by the Department for Work and Pensions and had their benefits withdrawn. Most died shortly after the decision.

The schools examination authority in the UK, Ofqual, used a computer based system which promptly downgraded almost 40% of teachers' assessments of A-level grades. This resulted in the system being scrapped after an outcry and a government U-turn.

In the UK it has been discovered that almost half of local authorities are using automated systems to assist in decision making for benefits, social housing and the like. These have often been implemented without public consultation and, councils claim, are being used to save money. However, one council admitted that the system it used was only 26% accurate, and many others have now discontinued their systems because they created more work resolving difficulties than was saved by implementing the software. Most councils claim that no decisions are made purely on the output of the algorithms, and most claim that failures in applications are the result of people entering information wrongly.

Algorithms used by social media companies have been criticised for pushing vulnerable young people towards content which has adversely affected their mental health and, in some cases, resulted in adolescent suicides. Social media companies have been slow to respond, presumably because doing so can affect profits.

Computer based systems are increasingly used in areas such as recruitment and the processing of visa applications. In 2020 the Home Office in the UK had to suspend the use of a visa monitoring system on the grounds that it was inherently racist. This presents a data protection problem for users of such systems where they are found to be flawed. Selection processes, particularly for jobs, must be seen to be transparent and fair. The use of algorithms for candidate selection may be far from transparent, and may be inherently unfair if the criteria on which the selection is based have been set by a flawed (or biased) human.

Clearly the use of artificial intelligence (AI) can bring great benefits to business and, if used properly, can enhance decision making, speed up routine processes and reduce costs, but it has to be handled carefully and, dare one say it, intelligently. AI is not a panacea, whatever the software houses marketing these programs claim.

So there is a conundrum which has to be addressed. Consider the situation where, in order to save time and cost, an organisation introduces a software solution which can process what it considers to be mostly routine claims.

If the AI is used as a filter, so that anomalous or apparently fraudulent claims are flagged for review, this plays to its strengths. What such systems cannot do is think – they are not creative, and they cannot take account of any factors outside the parameters by which they judge claims or applications. The flagged claims should then go to a human reviewer, as the sketch below illustrates.
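
A minimal, hypothetical sketch of that filtering approach is shown below in Python. The claim fields, thresholds and rules are invented for illustration and merely stand in for whatever scoring a real system would apply; the point is that anomalous claims are referred to a person, never rejected automatically.

```python
# Hypothetical sketch: automated checks flag claims for human review;
# the software itself never rejects anything. All names, fields and
# thresholds here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Claim:
    claimant: str
    amount: float
    previous_claims: int

def looks_anomalous(claim: Claim) -> bool:
    # Crude, invented rules standing in for the system's real parameters.
    return claim.amount > 10_000 or claim.previous_claims > 5

def process(claims: list[Claim]) -> None:
    for claim in claims:
        if looks_anomalous(claim):
            # Flagged, not rejected: routed to a reviewer who has the
            # authority to override the system's assessment.
            print(f"REVIEW  {claim.claimant}: refer to a human decision maker")
        else:
            print(f"PAY     {claim.claimant}: routine claim processed")

process([
    Claim("A. Smith", 420.00, 1),        # routine claim, processed
    Claim("B. Jones", 15_750.00, 0),     # flagged on amount alone
])
```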

This is where the danger can creep in, for two reasons. Firstly, the human agent is, by virtue of their humanity, flawed and biased in some way – everybody is. Secondly, there may be a tendency to believe the computer: the fact that the claim could not be processed automatically is taken to mean that it is probably invalid, which plays into confirmation bias. Claims can be rejected without a thorough review. No claim, no benefit.

The conundrum is that the fail-safe – the human review – can be influenced by the decision made by the machine: the computer says no, and there is no further examination, analysis or consideration. Hence 2,380 rejected benefit claimants who lost their support despite being sick and disabled.

Organisations using these systems must be aware of the implications. Human oversight is extremely important and should be carried out by individuals who have the authority to override the system and make decisions. All too often rejected claims are passed to junior members of staff who have no such authority and who take the easy way out. This is a grave error and leaves the organisation open to claims of inequality, unfairness or bias, and to other reputation damaging cases.

Inevitably algorithms are refined, AI based systems learn and decision making improves, but in the meantime the output of any computer based system should be treated with care, particularly where it affects human lives.

Failing people is never acceptable when the computer says 'No' and nobody knows why.

John Taylor is an author for accountingcpd. To see his courses, click here.

  1. Sylvia B
    Posted 15-Aug-2023
    Hours wasted looking through multiple sheets to find the one broken link - so painful!
  2. Muhammed G
    Posted 14-Aug-2023
    The computer makes work easy but nothing is without complications.
  3. Nicki H
    Posted 13-Jun-2023
    Spreadsheets are wonderful and a tool of the devil at the same time! A second/cold review of reports and outputs is essential, particularly where life impacting decisions are being made on the back of data analysis.
  4. Dzingayi M
    Posted 13-Jun-2023
    The computer is a machine; machines make work easier, but they can never completely replace human beings.
  5. David R
    Posted 12-Jun-2023
    Excellent and timely article. For me, the biggest takeaway is that whether we are using human or AI generated data, a period of review is essential before circulating or presenting the data. With increasing time pressures on organisations, e.g. to shorten month end work, only enough time is factored in for processes, with minimal opportunity for co ...