Privacy is possibly the single biggest data-related concern of our times and certainly one of the most debated ones. What is the right balance between safeguarding individual privacy and having companies access user data for business ends? How much of the responsibility is with regulatory bodies and how much with organizations’ own data governance teams?
Data ethics is tricky even in the hands of intelligent humans. Even if an organization takes a clear stand and creates data governance policies to protect the interests of its shareholders, stakeholders, and customers, the actual implementation of those policies rests with humans. What can and cannot be done may be laid down in writing, yet interpreted differently by the people executing business decisions.
AI and the ethics of data
Stepping into an AI-led future, we are adding another layer of complexity to data ethics. By its very nature, AI takes the human element out of certain decision-making processes. In such a world, it is all the more important for organizations to be clear about where they stand on data ethics. That means not only spelling this position out and communicating it org-wide, but also setting up processes and guardrails to ensure that what an AI system does with data is in accordance with the organization’s policies.
“AI forces us to think in a different way. You have to take a ‘possibilities’ or ‘non-human’ mindset and overlay that with the ethical mindset,” says Deborah Adleman, Data Protection leader and regulatory ethics expert at EY. “You can’t say the AI did this wrong or I developed this model and I didn’t know if I fed it a gargantuan amount of data it would do something that has harmed the rights and privileges of natural persons.”
Guidelines around ethical AI
Just as the GDPR has transformed the data privacy world, organizations and regulators across the world are putting in place guidelines around AI ethics. An important one among these is the EU’s Ethics Guidelines for Trustworthy AI, first published in draft form in 2018, which call for “trustworthy AI that is lawful, ethical and robust.”
The EU’s expert group has identified 7 key requirements for trustworthy AI systems, 4 of which emphasize governance mechanisms. These are:
1. Human agency and oversight
While AI may sometimes be used to influence human behaviour, users must be empowered to make informed decisions about AI systems and retain their autonomy of decision making. To ensure that AI does not compromise human autonomy, there must be an element of human oversight and scope for human intervention in AI systems. This could be through approaches such as Human-In-Command (HIC), Human-In-The-Loop (HITL) or Human-On-The-Loop (HOTL).
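The human-in-the-loop idea can be sketched in code. In this minimal, hypothetical example (all names and the threshold are illustrative, not drawn from any standard), a model’s output is acted on automatically only when its confidence is high; otherwise the decision is escalated to a human reviewer, preserving human oversight:

```python
# Illustrative human-in-the-loop (HITL) gate. Low-confidence model
# outputs are routed to a human review queue instead of being acted
# on automatically. Function and field names are hypothetical.

def hitl_decision(prediction: str, confidence: float,
                  threshold: float = 0.9) -> dict:
    """Return an automated decision only when confidence clears the
    threshold; otherwise escalate to a human reviewer."""
    if confidence >= threshold:
        return {"decision": prediction, "decided_by": "model"}
    return {"decision": None, "decided_by": "human_review_queue"}

# High confidence: the model decides; low confidence: a human does.
print(hitl_decision("approve_loan", 0.97))
print(hitl_decision("approve_loan", 0.62))
```

The threshold itself becomes a governance lever: tightening it shifts more decisions to humans, loosening it shifts more to the AI system.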
2. Technical robustness and safety
AI systems are also vulnerable to attacks and need to be protected so that data is not misused or corrupted. There must be not only robust technical systems but also governance measures in place to ensure safety, security, fallback options, and recovery in case of an attack.
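One way to picture a fallback option is a governed degradation path: if the AI component fails, or has been taken offline after an attack, the system falls back to a conservative rule-based decision rather than failing open. This is a hedged sketch under assumed names, not a prescribed implementation:

```python
# Hypothetical fallback path for an AI-driven decision. If the model
# is unavailable or errors out, the system degrades to a conservative
# rule-based default instead of failing open.

def rule_based_fallback(request: dict) -> str:
    # Conservative default: never auto-approve; route to manual review.
    return "manual_review"

def classify(request: dict, model=None) -> str:
    """Use the AI model when it is healthy; otherwise fall back."""
    try:
        if model is None:
            raise RuntimeError("model unavailable")
        return model(request)
    except Exception:
        return rule_based_fallback(request)

print(classify({"amount": 100}))  # no model supplied, so the fallback runs
```

The design choice here is that the fallback is deliberately more restrictive than the model, so an outage or attack degrades safety margins upward, not downward.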
3. Privacy and data governance
The EU guidelines recommend that AI systems guarantee privacy and data protection to their users throughout the lifecycle, for all information generated and shared through the system.
4. Transparency

At all times, the processes and sources the AI system uses to gather, label, analyze, or process data, and the decisions made at each stage, must be documented. These steps and decisions must be not only traceable but also explainable by humans, and auditable should the need arise.
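In practice, this documentation requirement often takes the shape of an audit trail: every gathering, labelling, or processing step is recorded with its source, operation, responsible actor, and timestamp. The sketch below is illustrative; the field names are assumptions, not any standard schema:

```python
# Minimal audit-trail sketch for an AI data pipeline. Each step is
# appended to a log so that decisions stay traceable and auditable.
# All field names and example values are hypothetical.

import json
from datetime import datetime, timezone

audit_log: list[dict] = []

def record_step(stage: str, source: str, operation: str, actor: str) -> None:
    audit_log.append({
        "stage": stage,          # e.g. "gather", "label", "analyze"
        "source": source,        # where the data came from
        "operation": operation,  # what was done to it
        "actor": actor,          # the system or person responsible
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

record_step("gather", "crm_export.csv", "ingest", "etl_job_42")
record_step("label", "crm_export.csv", "manual_annotation", "analyst_7")
print(json.dumps(audit_log, indent=2))
```

A real deployment would write these records to append-only, access-controlled storage so the trail itself cannot be quietly edited.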
The theory of data-related ethics is easy to understand. In practice, though, it is complex and confusing. As James Cotton, International Director of the Data Management CoE at Information Builders, says, “You can’t preach…using data in an ethical way if you don’t know what you have, where it came from, how it’s being used, or what it’s being used for.” (Source: Datanami)
To know this, you need data governance. Perhaps we can help: get in touch with our data experts.