
Data Ethics

12 August, 2020

updated 25 August 2020

The world’s first guide on data ethics for brands – Data Ethics – The Rise of Morality in Technology – has been launched by the World Federation of Advertisers to encourage companies to recognise the vital importance of addressing the gap between what they can do and what they should do, and to go beyond simply following the rules on data privacy. For the industry, prioritising people over data is regarded as essential to brands’ long-term licence to operate. Consumers respond positively to brands that inspire trust through transparency, which, in increasingly competitive markets, makes a significant difference to the bottom line.

For me, data ethics is the new ‘green’. As a society, we have the opportunity to get it right now and to incorporate ethical considerations into the future we’re building. It’s about being aware that the choices we make today will very likely have consequences beyond what we can understand today. Being the trustworthy partner that our customers, employees and partners expect, demands ethical standards and a strong commitment to excellence and transparency – not just compliance – in the field of data use and the way new technologies and artificial intelligence are deployed.

Vera Heitmann, Digital & Growth Leader – Public Affairs at IKEA Retail (Ingka Group)

Data ethics is, however, about more than the bottom line. The big tech firms have come under increasing scrutiny for their collection and use of data, and for the impact that has had on everything from consumer choices in groceries to political candidates. Questions are being raised about who is building the systems, how they are using the data and who gets to make these decisions.

Unconscious bias baked into systems and technology can negatively impact minorities. These biases are difficult to shed. Algorithmic bias is now a widely studied problem that refers to how human biases creep into the decisions made by computers. The problem has led to gendered language translations, biased criminal sentencing recommendations, and racially skewed facial recognition systems.
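How bias creeps in can be made concrete with a deliberately simplified sketch. The scenario, data and function names below are hypothetical, invented purely for illustration: a naive hiring model trained only on historical approval rates will faithfully reproduce whatever disparity existed in its training data, automating the past bias rather than correcting it.

```python
# Hypothetical illustration of algorithmic bias: a naive model that
# predicts "hire" from past approval rates per group simply reproduces
# the disparity already present in its (biased) training data.

from collections import defaultdict

# Invented historical records: (group, hired). Group B was hired far
# less often, reflecting past human bias rather than merit.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 20 + [("B", False)] * 80

def train(records):
    """Learn per-group approval rates from historical outcomes."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += hired
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def predict(model, group, threshold=0.5):
    """Recommend 'hire' when the group's historical rate clears the bar."""
    return model[group] >= threshold

model = train(history)
print(model)                # {'A': 0.8, 'B': 0.2}
print(predict(model, "A"))  # True  -- the old bias is now automated
print(predict(model, "B"))  # False
```

Nothing in the code is malicious, and the group attribute could even be a proxy (a postcode, a school name) rather than an explicit label; the skew comes entirely from the data the system learned from, which is why audits of training data and outcomes matter as much as audits of code.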

In August 2020, councils in the UK began quietly scrapping the use of computer algorithms to help make decisions on benefit claims and other welfare issues, as critics called for more transparency on how such tools – most of which were introduced without consultation with the public – are being used in public services.

The use of artificial intelligence and automated decision-making came into sharp focus again after an algorithm used by the exam regulator Ofqual downgraded almost 40% of the A-level grades assessed by teachers. That system was also scrapped.

Currently, the people who have the power to make ethical decisions about the use of data are typically white males from high-earning, well-educated families, which makes workplace diversity a powerful and necessary tool for catching unsuspected bias before it has a chance to cause damage. Ethical leaders have an important role to play in ensuring that their organisations have that diversity – of thought, gender, culture – in their leadership and workforce, to reflect the world in which they operate. Leaders – in business, community and government – set the tone for how society operates and the values we live by. Their choices can build better communities.

Data ethics is more than just what we do with data, it’s also about who’s doing it. James Arvanitakis, Andrew Francis & Oliver Obst, The Conversation, 22 June 2018.

Your Big Data Responsibility: The Rise In Data Ethics. Christian Ofori-Boateng, Forbes, 8 June 2020.

Councils scrapping use of algorithms in benefit and welfare decisions. Sarah Marsh, The Guardian, 24 August 2020.

Photo by Markus Spiske on Unsplash