Two stories in the news this week highlighted how the handling of inaccurate data can lead to very different consequences, depending on who you are. In both reports, poorer people and women are disadvantaged, while the organisations running flawed or discriminatory processes go largely unchallenged.
Some individuals on low incomes are being wrongly targeted with threats to have their benefits withdrawn, on the strength of predictive data analysis with an accepted 20 per cent margin of error, while companies failing to submit accurate statistics on equal pay are let off the hook.
Up to 17% of gender pay gap data is wrong
As a result of legislation passed in 2017, organisations in the UK with 250 or more employees must report on their gender pay gap.
However, the independent statistician Nigel Marriott, quoted last year in Personnel Today, estimated that between 9% and 17% of the gender pay gap data submitted in the UK is wrong. The submission isn’t particularly easy to complete. He cites three categories of mistakes: errors in the mathematics, income quartiles entered the wrong way round, and claims of no pay gap that clearly conflict with the male quartile gap.
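To make those categories concrete, here is a minimal sketch of the kind of automated sanity checks a submission could be run through before it is filed. It is only illustrative: the field names, thresholds and the assumption that male and female shares in each quartile should sum to 100 per cent are mine, not the official reporting schema.

```python
# Illustrative sanity checks for a gender pay gap submission.
# Field names and thresholds are hypothetical, not the official schema.

def check_submission(sub: dict) -> list[str]:
    issues = []

    # 1. Arithmetic errors: within each pay quartile the male and female
    #    shares should sum to (roughly) 100 per cent.
    for q in ("lower", "lower_middle", "upper_middle", "top"):
        male = sub[f"{q}_quartile_male_pct"]
        female = sub[f"{q}_quartile_female_pct"]
        if abs(male + female - 100.0) > 0.1:
            issues.append(f"{q} quartile shares sum to {male + female:.1f}%, not 100%")

    # 2. Quartiles entered the wrong way round: with a sizeable positive pay
    #    gap you would normally expect the male share to rise towards the top
    #    quartile; the reverse pattern is a common sign of swapped columns.
    if (sub["median_pay_gap_pct"] > 10
            and sub["top_quartile_male_pct"] < sub["lower_quartile_male_pct"]):
        issues.append("large positive pay gap but male share falls towards the "
                      "top quartile - quartiles may be reversed")

    # 3. 'No pay gap' claims that conflict with the quartile data: a reported
    #    0% gap is implausible if the top quartile is heavily male and the
    #    lower quartile heavily female.
    if (sub["median_pay_gap_pct"] == 0
            and sub["top_quartile_male_pct"] - sub["lower_quartile_male_pct"] > 30):
        issues.append("0% pay gap reported despite a strongly male-skewed top quartile")

    return issues


if __name__ == "__main__":
    example = {
        "median_pay_gap_pct": 0,
        "lower_quartile_male_pct": 30, "lower_quartile_female_pct": 70,
        "lower_middle_quartile_male_pct": 45, "lower_middle_quartile_female_pct": 55,
        "upper_middle_quartile_male_pct": 60, "upper_middle_quartile_female_pct": 40,
        "top_quartile_male_pct": 80, "top_quartile_female_pct": 20,
    }
    for issue in check_submission(example):
        print("flag:", issue)
```

None of these checks is difficult to run, which is rather the point: the kinds of error Marriott describes are exactly the kind a basic validation step could catch before a return is accepted.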
This week, the Guardian revealed that they had been following up on the progress of gender pay gap reporting with the Equalities and Human Rights Commission (EHRC). They explain: ‘More than 30 companies are yet to file accurate data for the previous 2017 period with the Equalities Office, and a number have filed mathematically impossible figures this year. Analysis also shows a further 725 companies have filed or resubmitted their figures since last year’s deadline.’
Yet despite these errors, and some firms being almost a year late in completing their return, the EHRC have admitted that they are not currently pursuing any companies for the ‘unlimited’ fine that is allowed for in the legislation.
This looks particularly toothless when compared with the millions that audit firms have been fined in recent times for failing to spot financial misdemeanours. Viewed in that context, the contrast suggests that in public life swindling shareholders matters, whereas underpaying female employees does not.
20 per cent may be incorrectly accused of fraud
A report from Sky News has highlighted that 8,000 people in London could be wrongly accused of fraud if a recently trialled predictive algorithm is unleashed. They make up 20 per cent of a 40,000-strong cohort of possible benefit cheats identified by a system targeting single person discount fraud.
This new fraud detection programme, developed by BAE for the London Counter Fraud Hub, crunches data from millions of households. According to the story, following trials in the London Borough of Ealing, an error rate of 20 per cent has been judged acceptable.
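The arithmetic behind the headline figure is simple, but it is worth spelling out how quickly an ‘acceptable’ error rate turns into real people wrongly accused. The sketch below uses the 40,000 and 20 per cent figures from the report; the larger cohort sizes are hypothetical, included only to show how the collateral damage scales if such a system were rolled out more widely.

```python
# How an 'acceptable' error rate scales into wrongly accused people.
# The 40,000 cohort and 20% error rate come from the report;
# the larger cohorts are hypothetical.

def wrongly_accused(flagged: int, error_rate: float) -> int:
    """Expected number of people incorrectly flagged as fraudsters."""
    return round(flagged * error_rate)

for cohort in (40_000, 100_000, 500_000):
    print(f"{cohort:>7,} flagged at 20% error -> "
          f"{wrongly_accused(cohort, 0.20):>7,} wrongly accused")
```

At the reported scale that is the 8,000 people in the Sky News story; the same tolerance applied to a larger rollout would, by the same arithmetic, produce tens of thousands more.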
This raises questions about how the individuals being targeted have consented to their data being used. Under the Data Protection Act, individuals have the right to prevent data processing that is likely to cause damage or distress. The problem in this case is that you won’t know you’re distressed until an accusatory letter has landed on your doormat, threatening that your benefits will be withdrawn. In an environment where local authorities are strapped for cash, this level of collateral damage among people on low incomes, backed by an appeals process, is viewed as acceptable.
Back to basics
Reading some of the hype, you could be forgiven for believing we are on the cusp of a utopian, data-driven future in which algorithms will be able to do everything from curing cancer to managing our cities more effectively. However, the examples above demonstrate there is quite some way to go before this can happen.
First, we need to address a number of basic but critical points. What personal data are we happy to share? How is it being collected? Who can see it? How accurate is it? How is it being processed, stored and used? In today’s digital environment, it is impossible for any individual to find the answer to most of these questions. Worrying at best.
Second, there’s a much bigger discussion to be had. If we want to build a world that, in addition to being more transparent, becomes a fairer place to live, how do we make sure our systems help make that happen, instead of simply reinforcing the structural inequalities of the status quo?