AI is a technology that can have both positive and negative effects on society, depending on how it is used. We need to make sure that AI does not introduce bias, discrimination, or harm into the systems we use to make decisions, solve problems, or create things. We also need to address the questions of job displacement, responsibility, and liability that AI raises. These are ethical questions that affect society and humanity. Ethics here is not only a matter of philosophy or morality; it is also a matter of practice and law.
As AI grows more widespread and powerful, we need clear rules and standards for how it is created and used. We also need to educate and empower users, developers, and stakeholders to understand the benefits and risks of AI and to hold it accountable for its actions and outcomes.
How is AI different from other complex automated solutions we’ve built in the past?
We have built autonomous drones and complicated solutions that can fly planes. All companies have ethics training in place, so what more do we need? What makes AI different from all the other complicated systems we have built in the past? I tell people that AI is the next step in the evolution of automation. But with AI, we are now building solutions that can mimic the cognitive reasoning and learning skills of humans. Bringing a virtual AI assistant onto your future IT team will be similar to bringing on a new hire. You will need to learn the skills and capabilities of your new AI system. You will need to learn to trust your new AI assistant when you give it a task. And you will need to understand how your AI assistant learns and how to train it.
What are the ethical concerns about AI?
Liability
One of the main concerns around AI is: Who is liable when an AI solution makes a mistake? Think of it like a future with self-driving cars. Right now, if there’s a car accident, the driver is the one liable for damages. But with self-driving cars involved, who will be at fault? Will it be the person using the car or the manufacturer? As AI takes the wheel, there’s increasing debate on who is liable for mistakes with negative consequences.
My point of view is that, just as with all the complicated solutions we have built in the past, from planes to cars to medical drugs, the same liability laws will apply, and the same product compliance marks, such as CE and UL, will apply, albeit with some new AI-specific requirements.
Ethics
We also want to make sure that AI has the same ethics as the humans it’s working for. Our business standards need to apply to every AI assistant. Take an AI tool helping an HR department, for example. If it’s involved in the hiring process, we don’t want any discrimination to be built into our AI solutions.
Governance
Potential customers are also likely to ask about AI governance. AI developers need to have a set of principles when they’re creating their solutions because customers are going to want to know and understand them. For example, Juniper’s principles for AI innovation are:
- Mission-Driven: Juniper’s AI solutions will further our mission of solving difficult problems in networking and/or security to the benefit of society
- Transparent: Juniper will be transparent about when it uses AI, including in which products
- Explainable: Juniper will design AI-based products and solutions with a goal toward having explainable decision-making processes and intended impact
- Inclusive and Empowering: Juniper will strive toward AI capabilities that minimize unintended bias toward people
- Intentional Machine Learning: Juniper believes AI should be used to inform decision making and to achieve desired objectives. It should not seek to manipulate human experiences or allocate essential resources or necessities without appropriate ability for intervention
- Data Privacy and Security: Juniper will adopt recommended practices so that AI systems behave as intended, even when attackers try to interfere. Juniper will apply secure development techniques to minimize the possibility that machine learning models will violate or reveal underlying private data
What is explainable AI?
Ultimately, trusting AI starts just like hiring someone new into your organization; you want to understand:
- What they can do
- Their skills
- What they’re capable of
And, just like a human, they’ll start adapting and changing their behavior as they get more data and learn. But unlike humans, we don’t necessarily have to learn their capabilities by witnessing them over time. That’s where explainable AI (XAI) comes in.
Explainable AI is the ability for humans to understand the decisions, predictions, or actions made by AI. Once an AI solution is built, the manufacturer can give potential customers transparency into the workings of the tool by providing a breakdown of which features are relevant to each task the AI performs, so anyone can see how the tool arrives at its decisions.
When it comes to some of the larger, more complex deep learning models (like deep neural networks), it can be very difficult to understand how a model arrives at an answer to a question. Also, its behavior can undergo subtle changes over time based on new training data or scenarios it processes.
One technique Juniper employs to help explain the results of deep learning models is the use of Shapley values, which identify which network features contributed most to an outcome such as a poor Zoom experience. Was it a low RSSI, indicating poor Wi-Fi signal? Or perhaps it was a misconfigured VPN that routed traffic to a server that was far away. Shapley values help explain the results an AI produces.
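To make that idea concrete, here is a minimal sketch of the Shapley-value approach using the open-source Python shap library. It is not Juniper's actual pipeline: the feature names (rssi, vpn_latency_ms, retry_rate, server_distance_km), the synthetic data, and the simple tree model are all hypothetical, chosen only to show how per-feature contributions to a "poor experience" score can be computed and ranked.

```python
# Hypothetical sketch: attributing a poor video-call experience to network
# features with Shapley values. Feature names and data are invented for
# illustration; this is not Juniper's production model.
import numpy as np
import pandas as pd
import shap                                      # pip install shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 2000

# Hypothetical per-session network measurements.
X = pd.DataFrame({
    "rssi": rng.normal(-65, 10, n),             # Wi-Fi signal strength (dBm)
    "vpn_latency_ms": rng.exponential(30, n),   # extra latency from VPN routing
    "retry_rate": rng.uniform(0.0, 0.3, n),     # fraction of retransmitted frames
    "server_distance_km": rng.uniform(5, 5000, n),
})

# Synthetic target: a "poor experience" score driven mostly by weak RSSI and
# high VPN latency, so the attribution has a known ground truth.
y = (X["rssi"] < -75).astype(float) + (X["vpn_latency_ms"] > 80).astype(float)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)

# Pick the session the model scores as worst and rank feature contributions.
bad_session = X.iloc[[int(np.argmax(model.predict(X)))]]
contributions = explainer.shap_values(bad_session)[0]

for name, value in sorted(zip(X.columns, contributions), key=lambda kv: -abs(kv[1])):
    print(f"{name:>20s}: {value:+.3f}")
```

Running a sketch like this prints each feature's signed contribution to the prediction for that one session, so an operator can see at a glance whether weak Wi-Fi signal or VPN-induced latency is what pushed the score toward "poor."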
Conclusion
The AI genie is out of the bottle, and it is not going back in. We need to learn to use AI to make our lives and society better, and to discourage those who would misuse it.
Watch the full Bob Friday Talks: AI Ethics episode here.