The state of AI regulation – lessons learned from the EU
Victoria Hanscomb
Dave McCarthy, Program Manager, Government Relations at Axis Communications discusses the ripple effect of AI regulation from the EU on the US.
What do you think about the EU’s adoption of the AI Act? Any key takeaways?
The EU strives to be at the forefront of this sort of regulation in order to protect its citizens – just as it took the lead on data privacy when the GDPR took effect in 2018.
Accordingly, the EU’s adoption of the AI Act is no surprise.
It was first proposed by the EU Commission in April 2021 and took a thoughtful, deliberate course to its adoption in March 2024.
While many areas of AI have been discussed in the EU for a long time, it’s interesting to note that large language models like ChatGPT were a late addition to the AI Act.
This fact underscores how quickly AI development is occurring and the challenges that this presents to regulators.
What impact if any does the EU AI Act have on the Americas, specifically the US?
No nation wants to be behind in the race to govern a groundbreaking technology like AI.
The EU AI Act served as a green light for countries across the Americas to begin formulating their own legal guardrails around AI.
In general, people in the US tend to be skeptical of overarching regulations, so a targeted approach to AI regulation is likely.
In addition, as the US begins formulating its own approach to AI regulation, it is clear legislators want to avoid being burdensome, lest they stifle innovation.
While the US has been slower to act on AI regulation than the EU, expect to see major federal legislation progress in 2025.
What other international developments should we be paying close attention to?
It’s important to monitor regulatory and legislative developments in all markets, especially those places where your company has a presence.
In certain jurisdictions, such as Canada and Singapore, legal frameworks have been introduced but not yet enacted into formal legislation.
Of particular interest is the proposed Canadian Artificial Intelligence and Data Act, which positions Canada as one of the pioneering nations in AI regulation.
What should companies know about the US Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence?
President Biden’s Executive Order (EO) is the most comprehensive approach by the US government to regulate AI to date.
The EO establishes eight principles for government agencies to adhere to when addressing the use of AI.
Those principles and AI policies will be overseen by new Chief AI Officers (CAIOs), a role to be established within every federal agency.
Ultimately, the EO was a signal to companies developing and deploying AI for the public sector to prepare to work with and comply with the AI policies set by each agency.
Such companies should prepare to provide AI impact assessments, operational tests and deployment reports, and to incorporate government feedback.
How is the US EO on AI being implemented and enforced? Are there any results so far?
The government is still preparing its enforcement mechanisms for AI governance.
President Biden issued the EO in October 2023, and the Office of Management and Budget published its implementation guidance to agencies this spring.
The deadline for implementation of the EO was pushed from August to December 2024.
As with many government initiatives, progress is being made, albeit slowly.
It is important to remember how challenging it is for government regulations to keep pace with the speed at which technology develops.
It’s also key to remember that technology regulation requires balance between encouraging innovation and protecting the public.
What do users and end customers of AI need to know about these AI regulations?
Any company developing or providing AI solutions should be prepared to meet compliance mandates related to its use case.
End customers should firmly understand their liability, as responsibility for the use of AI is divided between developers and deployers.
From generative AI to multi-modal models, different AI techniques produce different outputs, each offering value and carrying some degree of risk across various industries.
For customers, AI compliance will be a marathon – not a sprint – with requirements evolving as the technology changes and improves.
As the debate on AI regulation continues, what’s your sense of whether regulation can strike a balance between promoting innovation and ensuring ethical development and deployment?
Business and industry leaders typically view with healthy skepticism the federal government’s claim that it can regulate without hindering innovation.
However, companies in the AI space should be cautiously optimistic about the US Senate’s AI Roadmap, which offers a gradual approach to AI regulation.
In Congress, a bipartisan group of US Senators has been holding AI insight forums to fully understand the needs of government and industry related to AI.
Until recently, it appeared the group would introduce an overarching AI regulatory bill akin to the EU AI Act.
However, in May the group announced a framework for Senate committee chairs to follow as they introduce AI legislation tailored to the industries in their committees’ jurisdictions.
Since each industry is different from the next, each industry’s approach to AI will vary.
By taking a targeted approach to how AI should be governed within each industry, the Senate Roadmap offers a sound approach to establishing regulatory guardrails without stifling innovation.
This article was originally published in the August edition of Security Journal Americas.