It feels like just yesterday we were looking ahead to 2024, anticipating a surge in AI regulation. That anticipation was well-placed, and as we peer into 2025, the momentum shows no signs of slowing down. If anything, the gears are turning even faster, with regulators moving from broad strokes to more intricate details.
Looking back at 2023, the U.S. really homed in on specific applications of AI. President Biden's Executive Order 14110 was a big one, directing federal agencies to get a handle on AI's risks within their domains. While the order directly governs federal use, it sends a clear signal to companies working with the government and those in key industries. Areas like fair competition, worker protection, privacy-enhancing technologies, and civil rights were highlighted as future legislative targets, setting the stage for what's to come.
We also saw state-level action. Colorado, for instance, stepped up with regulations for life insurers using big data, algorithms, and predictive models. These stem from a 2021 law aimed at preventing unfair discrimination based on protected classes. The new rules, kicking in fully by December 2024, mean insurers need to conduct diligent assessments, remediate any discriminatory practices, and report annually. California, meanwhile, has been proposing rules for automated decision-making with significant effects on consumers, requiring notice, opt-out options, and transparency about the technology used. New York City has also issued rules requiring bias audits of automated employment decision tools, a crucial step toward fairness in hiring and workforce management.
Internationally, the big news was the political agreement on the EU's AI Act. This comprehensive piece of legislation takes a risk-based approach: stricter rules for high-risk AI, a lighter touch for lower-risk applications, and outright bans on unacceptable uses. While the final text is still being ironed out and likely won't fully take effect until 2026, it's a landmark moment that sets a global precedent. Elsewhere, China has introduced interim measures for generative AI, focusing on consent, service agreements, content moderation, and the labeling of AI-generated content. Canada, by contrast, has opted for a voluntary code of conduct for generative AI, emphasizing principles like accountability, fairness, transparency, and human oversight. It's fascinating to see these different approaches emerge.
So, what does this all mean for 2025? We can anticipate a continued push towards implementation and enforcement of these foundational regulations. Expect more detailed guidance from agencies, potentially new legislative proposals building on existing frameworks, and a growing emphasis on practical compliance for businesses. The focus will likely shift from 'what should we regulate?' to 'how do we effectively regulate and ensure accountability?' The conversation is evolving, and staying informed will be key for anyone involved in developing or deploying AI technologies.
