We're in Triage Mode for AI Policy
We lost a lot of valuable time in 2025, but can still 80/20 things.
In late 2024, I wrote this:
AI that exceeds human performance in nearly every cognitive domain is almost certain to be built and deployed in the next few years. … If there are ways that you can help improve the governance of AI in these and other countries, you should be doing it now or in the next few months, not planning for ways to have an impact several years from now. … By the end of 2025, if not earlier, we need comprehensive AI regulation in the US[.]
We are running well behind on that goal, after losing a lot of valuable time in 2025. So we have a lot of work to do, but we also need to focus, and recognize that we aren’t going to totally nail this AI policy thing. At best, we’ll 80/20 it — mitigating 80% of the risks with 20% of the effort that we would have applied in a world with slower AI progress and an earlier start on serious governance. It’s not a sure thing that that’s even possible, but given the pace of things, we have to try.
Below, I say more about what I mean by 80/20ing AI policy, take stock of where things stand today, and lay out some steps you might take to improve the pace of progress in 2026.
80/20ing Is the Only Realistic Goal
It will already be tricky to get significant, rapid progress on the basics of AI safety and security, and on avoiding extreme concentration of power due to AI. We need to accept that, at best, we will just barely avoid some of the worst-case scenarios (e.g., an AI-enabled biological weapon that kills billions, a rogue AI takeover, or stable global totalitarianism enabled by AI), given the current pace of AI capabilities relative to the pace of governance.
Unfortunately, triaging is needed. Terminator image from Terminator 2: Judgment Day; comic from here.
We will probably not be able to totally avoid all of the very bad but not quite worst-case scenarios (e.g., a deeply unjust economic transition that causes a lot of suffering and suicides, rampant AI psychosis and addiction, a poorly planned and rushed transformation of the education system, smaller-scale AI-enabled terror attacks that “only” kill thousands or millions, and stable AI-enabled totalitarianism in at least a few countries for a few decades). The rapid pace of AI progress also means that we will fail to pluck a lot of very low-hanging fruit in terms of societal benefits, since we haven’t been trying very hard to pursue them actively.
To be clear, I’m not saying AI will end up being bad overall even if we get our act together starting today. “We aren’t going to be totally on top of things” does not necessarily imply “we’re totally screwed,” and I’ve always thought that AI has a ton of positive potential, which can offset a very large amount of harm. There are a few reasons why we aren’t necessarily screwed in an 80/20 scenario. Most people don’t want to kill everyone, which gives some wiggle room in terms of how good safeguards need to be. Some degree of minimal caution on safety is in AI companies’ basic self-interest (i.e., not wanting to turn themselves into paperclips). AI safety is not as hard as some people claim. A lot of people will do beneficial things with AI (for themselves and for others) without being forced to, just because it advances their preexisting goals. And AI will help with its own governance to some extent (e.g., by speeding up some kinds of societal defenses and safety research).
But if we don’t do much more than we’re currently doing, we’ll almost certainly stumble into a lot of bad things that we could have avoided, and possibly hit one of the worst-case scenarios. This is because (as my colleagues and I have argued for many years) there are often incentives to cut corners, and while AI safety isn’t basically impossible, as some claim, it’s not trivial either. History has shown, extensively, that even when something could be done safely, people in a hurry and left to their own devices will find all sorts of ways to f*** things up. And boy, are AI companies in a hurry: by their own admission, they cut corners here and there in order to avoid being left behind.
I of course hope that we can do better than the bare minimum here, and that we find great solutions to everything from AI addiction to job displacement. But it’s not clear we’ll do the bare minimum, so I am focused on that for now, and I encourage policymakers to do the same.
The Union Carbide pesticide factory in Bhopal, India, where tens of thousands died in an accident. Image from here.
The Volkswagen emissions cheating scandal (“Dieselgate”) contributed to tens to hundreds of premature deaths. Image from Alexander Koerner, Getty Images.
Some Ships That Sailed in 2025
I don’t think dwelling on missed opportunities is usually that productive, so I’ll keep this brief. But I do want to concretely illustrate what I mean when I say we’ve wasted time, and can only 80/20 things at this point. Some ships that sailed in 2025 are:
Timely strong regulation: Perhaps most importantly from my perspective, the AI regulations that actually materialized in 2025 all have fatal flaws. They were either watered down a lot due to industry lobbying (e.g., SB 53 and the RAISE Act), missed the point entirely by being counterproductive or focusing on less urgent risks (i.e., many other state laws), or aren’t necessarily durable in their impact (e.g., the EU AI Act, the enforcement of which is vulnerable to political pressure from the US). 2025 also saw the rise of Super PACs specifically focused on stopping (what I would consider) meaningful AI regulation. This means that right now, there is no real requirement for frontier AI companies to have a good safety and security policy: a company just needs to publish some sort of policy per SB 53, even if it’s garbage, and next year the policy will have to start being “detailed” per the RAISE Act, though it can be detailed garbage. Also, the information that AI companies have to share is very limited. No one has to share the details of their AI’s “constitution” or how well its behavior aligns with it… even as these systems are being put in charge of ~everything, day by day. And no one outside of an AI company has to check whether the company is actually doing what its (possibly very bad) policy says. Coming out of 2025, we’re not starting from zero, but we are much closer to zero than I’d like, and much closer than most people realize.
Robust misuse prevention: In 2025, many public health investments were gutted in the US and around the world, leaving us less prepared for a pandemic enabled by AI (or arising from other causes) than we could have been if we had a stronger sense of urgency on this front. We did not make major investments to drive down the costs of far-UVC, pre-commit to purchasing tons of vaccines in order to make sure they’re ready in time, etc. On the cybersecurity front, we didn’t really get started on proactively patching open source code and hardening critical infrastructure to defend against AI-enabled (or conventional) cyberattacks. It’s not that things are totally hopeless in either domain, and companies are belatedly starting to push a bit harder on AI cyberdefense as of early this year, but we’re not really crushing these things the way we could have if we’d started earlier.
Coverage from Axios of Anthropic’s report on Chinese hackers leveraging Claude Code. There are many such cases reported by OpenAI, Anthropic, and Google DeepMind, and notably, this is just the tip of the iceberg in terms of real-world AI misuse, today and in the near future. It seems extremely unlikely, given public reporting on the safety culture at xAI, that they have robust protections against this kind of misuse, or against potentially even more concerning misuse, such as the creation of biological weapons.
Large-scale safety and security research: I think there is a lot of good AI safety research happening. But much of it is very “mom and pop” and underfunded compared to research on AI capabilities, and it is often stymied by a lack of access to non-public information that companies are reluctant to share. There are efforts to better fund this research (see, e.g., this one), but in general, at a societal level, this is not being treated with a high degree of urgency. Many ideas proposed in 2025 for how to scale up work on AI risk mitigation have not yet been followed up on.
Large-scale AI auditing: There is a growing and very promising ecosystem of third parties capable of doing safety audits. I think frontier AI auditing (covering both safety and security) is both urgently needed and doable, and it can help avoid extreme concentration of power, since it introduces external oversight into the decision-making of people with a “country of geniuses in a datacenter” at their disposal. But there’s a quantity/quality tradeoff: there aren’t enough qualified auditors right now to rigorously audit hundreds of AI companies across a full range of risks, so we have to prioritize. If we had started building demand for auditing sooner (e.g., through state or federal policy requirements) and spent last year rapidly scaling the ecosystem, maybe we could have gotten to that point by later this year. Instead, we need to triage: cover just the top few dozen (or maybe even fewer) AI companies, and really “nail” a pretty narrow set of risks. If you think the 70th most capable AI company will still be capable of posing significant risks in a year or so, given the rapid pace of AI progress, or you think there are lots of risks to worry about – well, I totally agree, and if you have a time machine, you can help us make sure that company is covered. In the real world, I am focused on making sure that (e.g.) the 7th most capable company has its act together on the biggest risks, and it’s not clear we will pull that off at this rate.
What You Can Do
If you have skills that you think could be helpful in improving AI outcomes, now is the time to deploy them. A while ago, I wrote some related career advice, with a focus on AI policy, here.
If you are a policymaker or have some kind of influence over policymakers, you could try to (1) advance state-level policies that build on SB 53 and the RAISE Act in specific ways, or (2) increase the quality and likelihood of a near-term federal compromise on AI regulation that can pass before the elections later this year. In terms of what good regulation looks like exactly, I talk a bit about that here and here, and tweeted various related links here. I also generally recommend reading Dean Ball’s writing on related topics (and that of others).
If you can afford to do so, you could consider donating to one or more of the various organizations that are working on this stuff. I will of course plug my own organization, AVERI, but there are many other organizations working on AI safety, security, and policy, of many shapes and sizes. Reach out if you need help getting oriented on the options, and I can try to help or connect you with someone better suited to do so.
If you work at an AI company, you could be more vocal in asking what the company is up to lobbying-wise, and put in some time to engage on the details with the policy team. You probably have more influence than you think! You can also volunteer to help with briefing policymakers about the urgency of all this.
And almost everyone reading this could be a bit more engaged on AI policy in everyday life — in person, on social media, etc. Despite how it may look on social media to the kind of person reading this post, most people don’t know or care about most of this stuff, and that needs to change in order for policymakers to “feel the heat” and not cave to threats from industry lobbyists.
Why I Say All This Right Now
There is a common chain of thought rattling around in the heads of dozens of key “AI policy elites” at AI companies and in government right now, and it has various associated catchphrases: “Congress is broken,” “Nothing happens in an election year,” “Things are too polarized,” etc. The details vary, but the upshot is that we have no choice but to kick the can down the road another year.
I understand this perspective (Congress is indeed broken, and as we get closer to the election, the appetite for bipartisanship will plummet), but these views also seem like a self-fulfilling prophecy to me. We are burning time that we have very little of to begin with. Getting real AI policies in place is hard but nevertheless necessary, and it is the responsibility of all of us.
It would have been great to make more progress than we did last year, but all we can do is make better decisions now. Given the uncertain (but certainly rapid) pace of progress in AI, which I promise you is not even close to hitting any walls, the time for 80/20ing is now.