
Are You Ready to Navigate the AI Regulatory Landscape?


As an IT leader, you are frequently tasked with staying ahead of the newest technology trends, particularly the evolution of artificial intelligence (AI) and its accompanying regulatory challenges. With global efforts to mitigate the risks associated with AI, particularly generative AI, now taking center stage, there’s much to understand and much to prepare for.

AI Regulations: A Global Effort

Risks associated with AI, such as misinformation, deepfakes, and threats to elections, have led to regulatory responses worldwide. Different nations are developing unique AI ethical and legal frameworks. The E.U. has proposed the AI Act, the U.S. has issued Executive Order 14110, and the U.K. is developing its own AI regulatory framework.

These responses are just the tip of the iceberg. Australia, Canada, China, and other countries are also working on regulations.

Closer to Home

This global effort is grounded in responsible AI guiding principles, which aim to promote ethical AI practices that avoid harm and ensure transparency. These principles should be central to any organization's AI strategy, with an organization-wide risk management culture built around them.

Likewise, your organization should be prepared for AI conformity assessments, which verify that AI systems align with accepted ethical and legal principles. Your risk management strategy should incorporate these assessments as well as tools such as NIST's AI Risk Management Framework. Initiatives such as the establishment of the U.K.'s AI Safety Institute and the Bletchley Declaration are also worth noting, as they will shape the future of AI safety and security.

Stay Informed on AI Regs

CIOs and other IT leaders must stay informed of state and municipal AI regulation initiatives and related key actions and deliverables. These initiatives and actions could serve as bellwethers for broader regulatory trends and give organizations a head start in preparing for the AI-driven future.

More than 40 U.S. states have introduced AI-related bills, with California leading in the number of proposed laws. California SB-1047, the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," would require companies developing AI models to conduct thorough testing to identify and mitigate potentially "unsafe" behaviors before the technology reaches the public.

Recommendations from Info-Tech include high-level guidance such as:

  • Cultivate a culture of risk management
  • Establish responsible AI principles
  • Prepare for conformity assessments for high-risk AI applications
  • Expect AI regulations to affect other policy areas, such as cybersecurity, data, and digital content

Stay ahead, stay informed, and prepare for the regulatory landscape to come so that AI works for you, not against you.
