The bread and butter problem in AI policy
There is too little safety and security "butter" spread over too much AI development/deployment "bread."
I’ve been using a specific metaphor a lot lately, and would like to share it for feedback.
Some folks have told me it’s not the clearest metaphor, but please bear with me while I try to explain it, and if you have a better way of thinking about this issue, please let me know.
Bilbo Baggins in The Lord of the Rings: The Fellowship of the Ring (2001), WingNut Films. Screencap taken from this tweet.
This line (“I feel thin, sort of stretched, like butter scraped over too much bread”), from The Lord of the Rings: The Fellowship of the Ring, refers to Bilbo’s personal sense of fatigue and weariness after carrying the One Ring for a long time. I think it’s also a good (initial) metaphor for one of the biggest issues in AI policy today.
The issue is that there is too little safety and security “butter” spread over too much AI development/deployment “bread.” That is, for each “unit” of AI development and deployment — whether you divide things up by company, by AI system, by product, etc. — only a tiny fraction of the industry’s talent is applied towards making it safe and secure, and only a tiny amount of time is spent doing so, compared to what would arguably be societally optimal considering the risks in question. At a global level, the people working to ensure that AI systems are safe and don’t get stolen are stretched thin, and new companies popping up all the time make this worse. Policymakers will also soon be stretched thin as they assess risks related to, and implement reporting requirements on, zillions of companies, models, and products.
This situation doesn’t seem to be getting better despite an influx of talent and investment into safety, security, and governance, because the bread is growing even faster: capabilities are advancing (making AI systems riskier and more tempting to steal), companies and products are popping up all over the place, and the pressure to ship products keeps growing. This isn’t an ideal situation.
The bread/butter analogy is just one of many ways to argue for the urgency of AI regulation — there are others, like theoretical arguments about the potential misalignment between corporate/military and global interests and about unpriced externalities, the fact that companies regularly disclose various risks that they haven’t fully mitigated (implying they are under time pressure), etc. But the reason I find it compelling is that it tees up a natural question: if we’re already stretching the available talent thin and the situation is getting worse, why don’t we just consolidate AI development and deployment more (e.g. via a “CERN for AI”) so that there’s more butter per unit of bread, or slow development down a bit, which could have the same effect (by allowing talented people more time to do their work)?
I don’t think that this conclusion obviously follows from the bread and butter metaphor — there are many assumptions being smuggled in. E.g., perhaps consolidating would be efficient in terms of allocating safety/security talent, but would make AI innovation itself less efficient due to reduced competition, hurting other important goals in the process (e.g., ceding leadership to authoritarian countries). On that line of thinking, you may end up with well-buttered but foul-tasting bread. And there may be serious political/legal barriers to consolidating AI development/deployment, even if it would in fact be better. Or perhaps consolidating would speed up AI capabilities so much that this offsets the benefits of safety and security talent being applied in a more focused way — so the bread is shrinking on one dimension (number of projects) but growing even more on another (speed of progress).
I think this framing merits more discussion, and I would be curious to hear people’s thoughts on it (including what a better metaphor would be). Digging a few levels deeper here might yield interesting conclusions/discussions. For example — just illustratively — maybe consolidating at the level of datacenter security is a good idea, but consolidating at the level of products/companies is less so for some reason? Perhaps there are ways of consolidating to some extent without going all the way and hurting competitiveness (e.g., mandating more sharing of safety and security lessons between companies)? And how does the possibility of AI itself “churning new butter” (e.g., helping automate security engineering) fit into the equation?