Real innovations transform our entire mindset and approach to challenges. Once we outgrow an operational hamster wheel, we rarely want to get back on it. Changing trajectories may involve some friction and doubt, but with clarity, trust can be built and new habits form. Aviation took a couple of decades to earn the public’s trust, a process that led to the creation of the Federal Aviation Agency (now the FAA) in 1958. Today, the use of Artificial Intelligence (AI) and Machine Learning (ML) and their promise of a better future are going through a similar process. Regulations requiring AI platforms to explain how they arrive at their answers will eventually make AI a mainstay of our future.
One major key to the trust and adoption of real AI is the concept of “explainability”: the ability to explain to users, AI practitioners and customers how an AI solution arrives at answers on par with those of human domain experts. This is crucial to gaining confidence and thus facilitating AI adoption. Explainable AI (XAI) is a fulcrum on which trustworthy AI and AI for IT Operations (AIOps) pivot. To be recognized as a true force multiplier in an industry, AI must not only gain trust but also maintain it, just like any new technology or team member.
AI and ML Primer
For a quick recap of the basics of AI/ML, check out Juniper Networks’ whiteboard series of videos and blogs, then come back here to learn more about how explainable AI (XAI) is the missing link in the industry.
Simply put, AI is a technological solution that performs tasks on par with human domain experts, tasks that previously required human cognition. ML comprises the algorithms and methods used to deliver AI.
What is Explainable AI (XAI)?
Explainable AI or “explainability” is the ability to explain, in human terms, the why and how of the decisions or actions that an AI service or platform takes. AI and ML are commonly considered to be opaque and unexplainable, but this is predominantly due to implementation and usability challenges. The capacity to reveal or deduce the “why” and the “how” is pivotal for the trust, adoption and evolution of AI technologies. Like a newly hired employee, a new AI assistant must earn trust and get progressively better at its job while humans teach it.
Explainable AI allows domain experts and non-experts alike to reason about so-called AI “black boxes”. By shining a light on the data, models and processes, operators gain deeper insight into and observability of the system. The controllability of the system (or platform) can then be optimized or altered based on more effective reasoning. Most importantly, any flaws, risks or biases can be communicated more easily and then mitigated, reduced or removed.
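As a concrete illustration of that reasoning, the sketch below uses permutation feature importance from scikit-learn to rank which inputs a black-box classifier actually relies on when it flags a poor client experience. The wireless-style feature names and the synthetic data are illustrative assumptions, not Juniper Mist telemetry or its production models.

```python
# Minimal sketch: attaching a post-hoc "why" to a black-box classifier
# with permutation feature importance. Feature names and data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["rssi", "snr", "retry_rate", "channel_util", "roam_count"]

# Synthetic telemetry: "poor experience" is driven mainly by snr and retry_rate.
X = rng.normal(size=(2000, len(feature_names)))
y = ((X[:, 1] < -0.5) | (X[:, 2] > 0.8)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# How much does shuffling each feature hurt held-out accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda kv: kv[1], reverse=True):
    print(f"{name:>13}: {score:.3f}")
```

A ranked list like this doesn’t fully open the black box, but it gives operators a defensible, human-readable starting point for the “why” behind a prediction.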
Why is Explainable AI Important?
Explainable AI helps to remove fear, uncertainty and doubt. It engenders trust and confidence in AI platforms and solutions. In terms of safety, security and service assurance, explainable AI helps to banish the practice of “AI washing” while surfacing an AI’s relevant strengths and weaknesses. Explainability also accelerates due diligence and empowers AI practitioners and customers alike.
When tasking any system to find answers or make decisions, especially those with real-world impacts, it’s crucial to be able to explain how the system arrives at a conclusion or influences an outcome, and why it performed an action in the first place. Explainability becomes increasingly relevant for decisions or actions that have the potential to adversely impact people, the planet or, in commercial terms, profit.
Safe Spaces for AI
The network and IT operations of today’s service providers and enterprises are the arteries and heart of the internet. They are the foundation of our digital economy, and in this digital age the productivity of these IT departments is tightly coupled to a country’s economic growth. The growing complexity of data systems and their dependencies is outstripping human efforts to respond manually, in real time and at scale. Although automation can address some of the issues around human toil and growing scale, it still relies on explicit pattern matching and manual maintenance. What’s required is real AI that can continually learn, train and optimize itself while assisting human teams. The goal is to build and operate networks in such a way that they become wholly transparent to end users.
IT operations are concerned with a mix of both centralized and distributed systems. Everything from client to cloud relies on layers of physical connectivity, IP transport and application stacks. As these tiers overlap and interact, their resilience and security become increasingly untenable for traditional monitoring, management and orchestration systems.
New tools and approaches are required, and this is where explainability facilitates the responsible and accelerated adoption of AIOps. With well-defined protocols and data structures, in a domain that’s wholly concerned with connectivity, reachability and service assurance, AI can make incredible headway without fear of discrimination or human bias. When tasked with such a neutral problem space, troubleshooting, assurance and optimization are well-bounded challenges that AIOps can responsibly embrace.
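To make that bounded problem space concrete, here is a minimal sketch of unsupervised anomaly detection on interface telemetry using an off-the-shelf Isolation Forest. The metrics and data are synthetic assumptions chosen for illustration and do not represent any particular vendor’s AIOps implementation.

```python
# Minimal sketch: flagging anomalous interface telemetry with IsolationForest.
# The metrics and values are synthetic placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Normal behaviour: throughput and error counts clustered around typical values.
normal = np.column_stack([
    rng.normal(500, 50, size=1000),   # interface throughput (Mbps)
    rng.poisson(2, size=1000),        # CRC errors per minute
])
# A handful of abnormal samples: throughput collapse with an error spike.
anomalies = np.column_stack([
    rng.normal(50, 10, size=5),
    rng.poisson(200, size=5),
])
telemetry = np.vstack([normal, anomalies])

# Fit on known-good data, then flag outliers across everything observed.
detector = IsolationForest(contamination=0.01, random_state=1).fit(normal)
flags = detector.predict(telemetry)   # -1 = anomaly, 1 = normal
print("flagged sample indices:", np.where(flags == -1)[0])
```

Pairing a detector like this with the explainability techniques described above is what turns a raw alert into a recommendation an operator can actually trust.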
Trustworthiness in AI
How Can Organizations Build Trust in AI?
Trust is not binary. Trust is a continuum. It’s built up in layers over time and it’s largely based upon an accumulation of previous experiences and outcomes. Trust often demands a willingness to increase scope in subtle steps rather than leaps and bounds. Trust can be fragile and fleeting if actions or outcomes are unexpected, and it tends to diminish if no reasoning for previous actions can be offered. Trust also encompasses a degree of risk tolerance that hinges on the probability of success or failure. Trust is a complex and continuous journey irrespective of promises made or well-meaning intent.
Transparency may foster trust, but it’s explainability that facilitates it. Humans fear the unknown and uncertainty induces anxiety. Confidence and peace of mind come with clarity. Developing trust in an AI platform implies getting to know and understanding it. From the data used, to the feature engineering and training, to the scope and impact of decisions or actions, the more consistent and explainable an AI solution is, the less we fear the unknown and distrust it. So, if an AI platform can teach us about itself while also demonstrating consistent and explainable actions, we are more likely to engage with it and embrace it.
One of our deepest natural and existential conditions is wanting to know what is happening and why. It’s core to our being and how we engage and move forward in the world. We often strive to feel in control, but when this feeling is lacking, it’s worrying and unsettling. If we outsource actions or decisions to a third party or AI, we want the ability to get at the what and, more importantly, the why to retain trust. As we learn about any new technology or innovation, we begin by attempting to understand the basics, its strengths and weaknesses and any potential risks. If AI is not explainable, there are intrinsic and immediate doubts about its usability and trustability. If commercial entities are not embracing AI principles like explainability, then where should we place our trust and who is accountable for what?
Interpretable AI
Related to explainability is the concept of “interpretability”. Interpretability is not so much the ability to explain a model’s results after the fact as it is a property of the model itself: how readily a human can trace its outputs back to a cause and an effect. Simpler ML techniques like linear regression and decision trees are deemed more interpretable than complex neural network models, which combine many input features and learned weights to make their predictions.
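A minimal sketch of that distinction: a shallow decision tree is interpretable because its learned rules can be printed and read directly as thresholds. The feature names and synthetic data below are illustrative assumptions only.

```python
# Minimal sketch: an intrinsically interpretable model whose rules are legible.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
feature_names = ["snr_db", "retry_rate", "clients_per_ap"]

# Synthetic labels: poor experience (1) when SNR is low or retries are high.
X = rng.uniform(low=[0, 0.0, 1], high=[40, 1.0, 120], size=(1000, 3))
y = ((X[:, 0] < 15) | (X[:, 1] > 0.6)).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=2).fit(X, y)
print(export_text(tree, feature_names=feature_names))
```

The printed rules read as cause-and-effect thresholds that a network engineer can sanity-check directly, which is exactly what interpretability offers over a more accurate but opaque model.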
Growing Trust and Accountability
IT and network managers have many motivations to reduce toil on operational teams. Reducing low-value, repetitive work frees human talent for more interesting and impactful projects and, as a bonus, decreases churn and increases employee retention. Leaders who embrace innovative new tools and methods for their teams benefit in many ways, yet with AI there can be a reluctance to do so because of over-zealous marketing and AI washing. Unfortunately, we’ve reached a new tipping point of compounding complexity in IT, and real AI provides a timely solution that augments teams with smarter and more effective tools. These tools must be fit for purpose and deliver on their promises, but how would an operator or buyer know if they’re not explainable?
We may personify AI to make it more palatable, but the underlying question is, who is ultimately responsible for the decisions or actions an AI takes or contributes to? Who or what is it that we are being asked to extend our trust to? These questions can become traditional organizational issues, but they also relate to the domain and scope in which an AI is permitted to operate. By reserving the ability to keep a “human-in-the-loop,” we can always ensure there is an accountable entity that can approve or step in as needed. Manual gating can be selectively reserved for high-impact decisions or actions, yet this only works if the insights and actions are explainable to a human operator, who can subsequently make an informed decision.
In decision-making, as we move from observe through orient to decide and act in the OODA loop, we should always question where and when to insert or reserve human gatekeepers in systems that interact with AI. Human operators, now potentially AI-augmented, can still be responsible and accountable for an overall system, including its successes and failures, just like any other system owner. The relationship between human and machine agents continues to grow in importance, and it revolves around trust and its relationship to transparency and explainability.
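As a sketch of how such gating might look in an automation workflow, the snippet below auto-applies low-impact recommendations and holds high-impact ones for explicit human approval. The action names, risk policy and approval mechanism are hypothetical, chosen purely to illustrate the pattern.

```python
# Minimal sketch of a human-in-the-loop gate for AI-recommended actions.
# The action types, risk policy and approval flow are hypothetical.
from dataclasses import dataclass

HIGH_IMPACT = {"reboot_ap", "change_radio_power", "push_config"}

@dataclass
class Recommendation:
    action: str      # what the AI proposes to do
    target: str      # which device or site it affects
    rationale: str   # the explainable "why" shown to the operator

def execute(rec: Recommendation, approver=input) -> bool:
    """Auto-apply low-impact actions; ask a human to approve high-impact ones."""
    if rec.action in HIGH_IMPACT:
        answer = approver(
            f"Approve '{rec.action}' on {rec.target}? Reason: {rec.rationale} [y/N] "
        )
        if answer.strip().lower() != "y":
            print("Action held for human review.")
            return False
    print(f"Applying {rec.action} on {rec.target}")
    return True

# A low-impact recommendation is applied immediately; a high-impact one would
# pause here and wait for an operator's informed decision.
execute(Recommendation("adjust_channel", "ap-12", "co-channel interference detected"))
```

The key point is not the code but the contract: the gate only works if the rationale passed to the approver is genuinely explainable.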
Some quick due diligence for AI solutions may include:
- What algorithms comprise and contribute to the solution?
- What data is ingested and how is it cleaned?
- Where is the data sourced from (is it customized per tenancy, account or user)?
- How are parameters and features engineered from the network space?
- How are models trained, re-trained and kept fresh and relevant?
- Can the system itself explain its reasoning, recommendations or actions?
- How is bias eliminated or reduced?
- How does the solution or platform improve and evolve automatically?
Explainable AI in Action
With Juniper Mist, we build AI around core principles that engender trust through transparency and explainability. We’ve also written extensively about AI/ML and our unique AIOps approach, including data and primitives, problem-solving, interfaces and intelligent chatbots, all of which help to surface and correct network anomalies while improving operations with a better set of tools in your toolbox.
Our ongoing innovations in AI will make your teams’, users’ and customers’ lives easier, and explainable AI helps you start your AI adoption journey. Start today and join us in one of our weekly webcasts to see how our AI-driven Virtual Network Assistant (VNA), Marvis, interacts using natural language, troubleshoots and offers to perform remediation actions, all while showing users the what and the why, with relevant data and graphs to back its insights and actions.