AI gives organizations immense power to create scalable solutions, and that kind of power comes with great responsibility. Today’s AI has evolved enough to understand and shape many aspects of people’s lives: AI apps converse with people, chatbots seamlessly serve customers, and deep learning models scan medical images to detect cancer.

As the AI story unfolds further, it is becoming increasingly essential for businesses to build their AI foundation on strong ethics, since a technology this potent can also cause harm. AI ethics refers to a set of guidelines that advise on the design and outcomes of artificial intelligence. These guidelines serve as a moral compass for AI-driven systems, given that such systems have the potential to amplify many of our biases and vices.

Establishing principles for AI ethics

In 2019, the Los Angeles City Attorney’s office sued a tech giant for allegedly misappropriating data collected through its weather app, claiming that the company’s use of the data amounted to fraudulent and deceptive business practices. Regulators also investigated a financial behemoth after its AI allegedly discriminated against women by granting men better credit limits. The case of a leading social network giving a political consulting firm access to personal user data is also well known. These are only a few of the myriad events that have forced businesses to confront the issue of “AI ethics” or “data ethics.”

Some foundational principles I consider essential for practicing responsible AI are interpretability (understanding what AI does and why), beneficence (using AI for good), accountability (knowing who is responsible when AI causes harm), fairness (AI should be fair and non-discriminatory), privacy (keeping data secure), and reliability (being able to trust these systems with important decisions).

Primary concerns of AI today

1. Technological singularity – how will machines affect human interactions?

One of the main concerns about AI today is that it may one day surpass our own intelligence, the point often called the technological singularity. Whether or not that point ever arrives, the prospect has far-reaching implications for how we approach AI and how machines reshape human interactions.

2. AI’s impact on jobs – can it replace humans in the workplace?

We cannot ignore the fact that AI is capable of rapid self-improvement and can operate with incredible accuracy and speed. While we already deploy AI to take over mundane, repetitive tasks, it has the potential to replace humans in many roles, such as customer service, writing, legal and financial research, and coding.

What AI can do is part scary, part intriguing. For instance, a digital creator took a stab at an AI-generated music video for the classic “We Didn’t Start the Fire” and did an exceptional job. Meanwhile, Microsoft’s new voice-cloning AI, VALL-E, can simulate a speaker’s voice with incredible accuracy: a 3-second sample of a person talking is all it takes to clone their voice. Used for good, such technology could create synthetic voices for ALS patients or let people connect with the voices of deceased loved ones.

3. Privacy – how do we protect AI from hackers?

The hacking of AI systems can have dangerous consequences: attackers can poison training data or misuse algorithms to scale illicit activities such as cyber theft. AI is a double-edged sword, and stakeholders need to protect this powerful technology from being misused by bad actors.

4. Bias and discrimination – how do we get rid of AI bias?

Biases demonstrably exist in AI systems. A UNESCO study found that a simple search for the terms “school girl” and “school boy” returned starkly different, gender-biased results. How we can reduce bias and discrimination in AI systems remains a major challenge.

5. Accountability – who is responsible for AI’s mistakes?

When an AI system causes harm, how much of it is “machine error”? It is becoming increasingly important to define roles and responsibilities: who, within an organization, answers for compliance with established AI principles.

How to establish AI ethics – 5 frameworks to achieve an ethical AI ecosystem

Despite rapid technological advances, most companies still grapple with establishing a framework for AI ethics. What we need is a concrete foundation for AI systems based on ethical principles.

The first and foremost thing companies can do is ensure proper governance and controls that establish accountability for their AI systems. Once they have achieved that, they can look at the following five basic frameworks:

1. Transparency & Interpretability: It is essential for people to understand how AI models will use their data. AI is often seen as a black box rather than a transparent technology. AI systems need to be easily interpretable as to how they make decisions, i.e., based on which criteria. For e.g Microsoft AI, VALL-E generates audio that sounds remarkably like the original speaker from a voice sample just three seconds long. This raises the question on how the data is then going to be leveraged and the cybersecurity measures this could breach. Another example to cite, is chatbot ‘Worm GPT’,  an AI tool that is comparable to ChatGPT but has “no ethical boundaries or limitations,” and it is providing hackers with a method to conduct large-scale attacks.

2. Reliability & Robustness: AI systems must be able to operate reliably and consistently in both normal and abnormal situations. 

3. Privacy & Security: AI should be able to ensure data privacy and also safeguard against potential cyber attacks. This is especially true for AI systems used in financial services and healthcare, where data is highly sensitive.

4. Accountability: Organizations deploying AI must retain responsibility for their AI systems and take accountability for how those systems are used and misused.

5. Fairness: AI systems learn from huge amounts of data, and ML models are likely to inherit biases. Deliberate effort must go into minimizing bias so that systems treat everyone the same, regardless of gender, race, income level, and so on (see the fairness-check sketch below). For instance, Facebook’s advertising algorithm was accused of gender discrimination in how it displayed job listings.
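
To make the interpretability point concrete, here is a minimal sketch in Python. It is illustrative only: the model, the synthetic data, and the feature names are all assumptions rather than a real credit system. It uses scikit-learn’s permutation importance to ask which inputs a trained model actually leans on when it makes a decision.

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data; the feature names are illustrative assumptions.
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["income", "age", "tenure", "open_accounts"]

model = LogisticRegression().fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how
# much the model's score drops. Bigger drops mean the model relies on it more.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>14}: {score:.3f}")
```

Reports like this, surfaced to the people a decision affects, are a small step toward treating AI as something other than a black box.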
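
A fairness check can start equally small. The sketch below, again illustrative with made-up predictions and a hypothetical protected attribute, compares a model’s approval rates across two groups, one of the most common first-pass bias audits.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical audit data: model decisions (1 = approved, 0 = rejected)
# and a protected attribute (e.g., gender), recorded for auditing only.
preds = rng.integers(0, 2, size=1000)
group = rng.choice(["A", "B"], size=1000)

# Selection rate per group: what fraction of each group gets approved?
rate_a = preds[group == "A"].mean()
rate_b = preds[group == "B"].mean()
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"approval rate, group A: {rate_a:.2f}")
print(f"approval rate, group B: {rate_b:.2f}")
# The informal "four-fifths rule" flags a disparate-impact ratio below 0.8.
print(f"disparate-impact ratio: {ratio:.2f} -> {'review' if ratio < 0.8 else 'ok'}")
```

Passing a check like this does not prove a system is fair, but failing it is a clear signal to dig into the training data and the model before deployment.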

End note

The potential of AI can be truly maximized only if it is used ethically. As AI touches more aspects of our lives, I hope the conversation around AI ethics guides us toward devising better AI frameworks for the greater good.

ABOUT THE AUTHOR

Rohit Adlakha is a thought leader, commercial strategist, entrepreneurial visionary, and sustainability evangelist. His expertise spans business transformation, the AI and digital revolution, high-performance culture, and inspirational leadership. He is a former Chief Digital and Information Officer at Wipro Limited and Business Head of HOLMES™ (Wipro's artificial intelligence platform). With over 15 years of leadership experience, he is a board member at the School of Inspired Leadership, a vanguard of change in shaping the future of India through education. He is also a trusted advisor to Vector Center, a company that harnesses the power of AI to deliver real-time, decision-grade water intelligence, where he serves as an environmental, social, and governance champion through technology.
