Emergency blog post - reflections on VP Vance's speech at the AI Action Summit
Brief thoughts from Paris Aéroport on the more surprising parts of his speech.
In his highly anticipated remarks at the AI Action Summit, Vice President Vance articulated several components of an emerging American AI policy agenda. More details will come into focus in the coming weeks and months, especially when OSTP delivers the action plan that they were tasked with (for which there is an active request for information here).
Some of the main themes were roughly in line with what was expected (e.g., discouraging overregulation, emphasizing the upsides of AI, underscoring the US’s commitment to leading in AI, and raising concerns about Big Tech and authoritarian governments imposing a limited set of values on people through AI). I’m not going to provide a full summary, and would recommend that you just watch it here (his remarks start at about an hour and 15 minutes in).
Below, I will highlight a few themes and nuances that were harder to predict since they are areas where the DC consensus isn’t yet clear. In some cases I was encouraged to see these bits, and in other cases I have concerns about what these comments may portend.
AI security
One of the reasons I was in Paris to begin with was to attend a side-event called the AI Security Forum. So it was heartening to hear the Vice President talk about the need to protect both AI technologies and semiconductors from theft and misuse. By distinguishing between AI technologies and semiconductors, he presumably meant to refer to protecting model weights and/or algorithmic secrets and code. As discussed elsewhere, I think security is in some respects “the new safety”: it has been neglected for far too long, it requires urgent investment due to long lead-times (e.g., for secure chips and datacenters), and it should in theory have bipartisan support due to its non-ideological nature. While safety has gotten a bit of a bad reputation in some circles, which I’ll return to below, I think security may have more staying power.
I hope that Vance’s remarks are a sign that there will indeed be a bipartisan consensus on this issue, since I’ve heard a lot of concern about it from folks on the left as well — no one wants American AI to be trivially stolen by China and Russia (other than China and Russia). There are big challenges in actually executing on the vision he outlined. The rubber will meet the road in, e.g., how security requirements are baked into federal contracts; execution on (or replacement of) the late Biden executive order on datacenter permitting (which tied energy and security together); and, more generally, the US government sending a very strong signal to industry that this is an urgent priority.
Safety and regulatory capture
While this requires reading between the lines a bit, one plausible interpretation of Vance’s remarks is that he thinks safety regulations (if any) should be very lightweight in their implementation, targeted on the most extreme risks, and should not be captured by Big Tech.
The emphasis on only extreme risks can be inferred via process of elimination (i.e., from the other risks he explicitly or implicitly minimized), and the concern with efficient regulation can be inferred from his discussion of European regulatory overreach. He didn’t quite explicitly say that he’d be open to new regulation of any sort (or to keeping existing regulations such as the datacenter conditions discussed above, which to my knowledge haven’t yet been repealed). But he did say that safety can’t be entirely thrown to the side, which was in some respects a stronger statement (actually acknowledging there’s a “there there”) than the leaked summit statement.
As discussed elsewhere, I think that some AI safety regulations aren’t quite as onerous as some believe — e.g., I think the AI Act’s Code of Practice is trending in a fairly reasonable direction as it gets revised, though I also share some of Vance’s concerns about overly bureaucratic processes and broad scope. And I also agree that we should be wary of industry capture — indeed, being able to be taken more seriously in what I say on policy is part of why I left OpenAI. But it’s very important to bear in mind that it is not just Big Tech saying that there are serious risks coming in the very near term that we need to be prepared for. And while vigilance about regulatory capture is sensible, we also need to take seriously the fact that CEOs and company staff don’t have particularly strong incentives under the current administration to raise the alarm about catastrophic risks, and yet they continue to talk about them — and there is now a very credible international report on these risks which seems to have been well-protected from industry capture.
Workers having a seat at the table
As I and others have discussed elsewhere, there are some interesting politics on the horizon at the intersection of AI and jobs. Vance was notably optimistic in his remarks about the impact of AI on the economy, and emphasized that American policy will aim for AI being a tool for, rather than a replacement of, humans. But this is easier said than done, and while I’m running out of time before my flight, I’ll just say very briefly that, having helped build a team focused on this topic, there is no plan to ensure such an outcome right now. It is notable that Vance lands on the economic-populism side rather than the tech-right end of the spectrum on this issue, and said explicitly that we should not just think of AI as a disruptive technology, but that it will and should be well-integrated with the world of atoms, rather than just occurring in the world of bits. Again, what this means in practice is TBD.
__
Sorry I couldn’t say more or edit this further (I have to run for my flight), but I hope some folks find this helpful.