As AI solutions become more powerful and are implemented across industries, concerns about their safety and other risks have become paramount. While AI offers numerous benefits, many policymakers are flagging the need for caution, the protection of privacy and other rights, and the mitigation of biases and even potential dangers.
There is no shortage of global government attention to AI. In the United States alone, the White House, the National Institute of Standards and Technology, and the Senate have issued frameworks to guide AI developers and users around common principles.
No one can seriously deny that protections and guardrails must be in place, but are more regulations the answer? Yes, but only if they are well-defined.
We can learn from past attempts at balancing innovation and regulation
As with the Digital Millennium Copyright Act, regulators are figuring out how to regulate AI while avoiding unintended consequences that could impede technological progress. For example, the European Union has been working to pass its own AI Act. Various drafts classified AI as "high risk" when used as a safety component in transportation, but the language was so broad that even AI used in networking for transportation customers would have been classified as high-risk and subjected to stringent regulations and oversight, even if that AI was not operating or managing trains. In June 2023, the European Parliament passed an updated AI Act draft recognizing that not all AI impacts people or safety. This reflects a risk-based approach, with obligations scaled to the level of risk the AI can generate in a given environment.
While the outcome was beneficial, there was a risk that any AI solution used in transportation – even one used to manage IT networks and maintain uptime – would be heavily regulated without any corresponding gain in safety or privacy. This demonstrated the potential pitfalls of broad regulations drafted without careful consideration of their impact. Even the prospect of such broad regulation can discourage innovation in technologies built to keep things running smoothly.
Is the US government headed in the right direction?
Let’s come back to our own shores. As noted previously, various arms of the US government have made strides in addressing AI concerns and fostering an ecosystem conducive to innovation. It is encouraging that the White House released its Blueprint for an AI Bill of Rights just as generative AI was becoming popular and widespread. Around the same time, respected technical organizations like the National Institute of Standards and Technology (NIST) were developing and releasing a framework for discussing and analyzing AI-related risk. NIST’s AI Risk Management Framework is a well-crafted document that gives AI developers, consumers and policymakers a shared taxonomy for discussing evolving technologies and promoting safety and accountability.
Many companies already in the AI space – or looking to join it – have reason to feel confident that the Senate’s SAFE Innovation Framework, which has the potential to codify aspects of the White House and NIST structures, will contribute positively to the landscape. All of us will benefit from clear rules and enhanced privacy and security. Still, regulatory efforts should focus on the features and use cases that could actually create privacy and security risks.
As AI continues to evolve and permeate society, addressing the concerns surrounding its safety, privacy implications and potential biases is critical. Striking the right balance between enabling innovation and ensuring adequate safeguards is a complex task that requires collaboration between policymakers, industry experts and other stakeholders. By proactively shaping regulations, fostering dialogue and promoting responsible practices, we can navigate the AI landscape with confidence.

In the end, we’ll likely be okay. If the Digital Millennium Copyright Act is any guide, companies that develop new AI solutions and those that use them (and everyone in between) will arrive at a place that works for everyone.

Juniper has been at the forefront of addressing these issues and alleviating customer and consumer concerns. We released our AI Innovation Principles in 2022 to guide our own development and use of AI technologies. Regardless of the outcome of US government and EU policy decisions, responsible industry actors will need to be proactive and transparent on these matters in order to maintain their credibility with regulators, customers and their own employees.