Regulation: The new scapegoat for the AI literacy gap

Opinion
Oct 2, 2025 | 4 mins
Compliance | Legal | Regulation

We don’t need a heap of new AI laws — we need leaders who actually understand the ones already on the books.

Earlier this year, several state legislatures announced plans to establish task forces focused on governing AI, with headlines framing the move as a turning point in US tech oversight. At first glance, these initiatives seemed like momentum toward stronger accountability. In practice, however, they added little to the regulatory frameworks already in place.

The deeper challenge isn’t an absence of rules, but the lack of AI literacy among both policymakers and business leaders. Too often, decisions are made based on fear-driven narratives or overhyped promises rather than a grounded understanding of how AI systems work, where they add value and what risks they truly pose. Without that baseline knowledge, even well-intentioned policies become redundant or ineffective.

Regulation is already here

Much of the debate around new AI laws assumes AI operates outside existing legal frameworks. That's misleading. In critical industries, companies already face strict oversight that governs how AI can be deployed. For example:

  • Healthcare: HIPAA doesn’t reference AI explicitly, but any healthcare organization or clinical practice using AI-driven diagnostics must still meet the same data privacy, integrity and consent standards. AI tools can’t sidestep rules about how patient data is secured or shared.
  • Finance: AI-powered trading systems fall under SEC rules, just like traditional algorithms. The Fair Credit Reporting Act also restricts the data financial institutions can use, even when models are machine learning-driven.
  • Enterprise standards: The National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, while voluntary, has become a widely adopted baseline. For many Fortune 500 organizations, NIST isn’t just guidance — it’s policy.
  • Platform-level restrictions: Major providers — AWS, Google Cloud and Microsoft — embed responsible AI standards into their contracts. Azure’s policies, for example, ban biometric surveillance or disinformation campaigns outright. These self-enforced restrictions often have more immediate impact than waiting for government oversight.

Why policymakers get distracted

AI systems are already covered by a patchwork of existing compliance, risk and ethical frameworks. What's missing is the awareness and ability to consistently apply them. As evidenced by the Senate's vote earlier this year to strike the proposed moratorium on state AI laws, a move treated as a watershed moment for AI safety, too many decision-makers misunderstand what's actually at stake.

The numbers tell the story. In 2024 alone, state legislatures introduced nearly 700 AI-related bills, 31 of which were enacted, and dozens more were introduced at the federal level. On top of this, US organizations already contend with international frameworks like the EU AI Act, the OECD AI Principles and Canada's proposed AIDA.

Instead of regulatory scarcity, businesses are facing regulatory fatigue. Leaders and lawmakers often view AI as a unique challenge requiring bespoke legislation, rather than an extension of existing compliance categories. The result? Symbolic, piecemeal rules that signal urgency but lack impact.

The literacy gap

The real bottleneck is literacy — both technical and contextual. This isn’t limited to government entities. Corporate boards, compliance teams and even some technology executives often fail to grasp how AI intersects with existing governance frameworks. For CIOs, this gap translates directly into compliance risk, operational inefficiency and missed opportunities to integrate AI responsibly.

The question isn’t whether companies, states or the federal government should regulate AI; in practice, they already do. And although enforcement and oversight must continually adapt, we don’t have to keep reinventing the wheel. A more productive use of time is ensuring your AI is compliant with existing frameworks: meeting the regulations already in force, staying on top of new legislation and enforcing responsible AI practices.

AI literacy falls on leadership

AI governance isn’t a future concern; it’s a present-day operational requirement. Misuse typically occurs not because of regulatory voids, but because organizations ignore or misapply the laws already implemented.

Investing in AI literacy across leadership teams, compliance functions and development pipelines is more urgent than lobbying for new rules. More regulations won’t matter if organizations and policymakers don’t understand how to apply the ones already in place.

The aforementioned Senate vote to end the moratorium may not materially alter the AI regulatory environment. What it does reveal, however, is how far policymaking still lags behind technological reality. Progress won’t come from symbolic votes like these, but from informed, pragmatic leadership that can bridge innovation and governance at enterprise scale.

This article is published as part of the Foundry Expert Contributor Network.