AI Policy Considerations for the Trump Administration: Part I - Navigating Innovation and Safety
The first of three essays discussing how the Trump administration can balance multiple, sometimes competing, policy objectives.
This is the first of three essays discussing how the Trump administration can implement policies that foster AI advancement and economic growth while also addressing other policy objectives such as ensuring safety and international stability. The goal of the series is to provide an exploration of the administration’s objectives and encourage responsible US leadership in AI. This first essay discusses how to navigate the potential tensions between advancing American AI innovation and ensuring safety. The next essay will discuss how to navigate the potential tensions between competing with China and avoiding war, and the last essay will explore ways to ensure rapid AI-enabled economic transformation doesn’t leave the working and middle classes behind.
Introduction to the series
The second Trump administration’s AI policy agenda has not yet been articulated, and it may evolve over time as different perspectives are considered and circumstances change. Actors within and outside of the administration are advocating a range of views, and it remains unclear which policy proposals will ultimately take shape.
That being said–and focusing on statements from President Trump and Vice President Vance rather than their advisors–here is our best assessment of the next administration’s AI policy ambitions:
A very clear principle is likely to be maintaining American leadership in AI.
A likely action is to repeal or amend Biden’s AI executive order.
Vance has indicated an intent to support open source AI, although it is unclear whether this will be reflected in broader administration policy.
Trump has expressed the view that AI poses significant safety risks.
Outside of the context of AI (but quite relevant, as we discuss below), Trump has said that we should be wary of new wars.
In this blog post series, we explore potential tensions between these objectives and ways that those tensions can be navigated. In this post, we focus on potential tensions between advancing American AI innovation and ensuring safety. These tensions may require either modifying some aspects of the emerging plan or supplementing it with additional actions.
Tensions between different policy objectives are not unique to the next administration, making this discussion relevant for a wide range of countries and contexts facing similar challenges. But we hope this exploration is especially timely for those thinking through the next administration’s AI policy, which will be particularly critical in influencing global AI outcomes. Note that we are not necessarily endorsing all of the policy ideas discussed below, since the details may matter a lot and the overall portfolio should “hang together” well – rather, we aim to spark consideration of options that might otherwise be overlooked.
Potential tensions between AI innovation and safety
We take one goal of repealing Biden’s executive order (besides reversing perceived leftist overreach) to be ensuring American companies can innovate freely and outcompete other countries’ AI sectors. This can create new and cheaper goods and services, allowing a higher standard of living and improving the US’s position vis-à-vis China and other countries.
However, there is a potential tension between this goal and Trump’s stated concern about AI safety. The issue is that even with the current (weak) guardrails in place, cutthroat competition is leading to corner-cutting. Companies routinely disclose known issues such as factually incorrect outputs in high-stakes contexts like health, uneven performance across different types of users, and even the ability to (for now, only somewhat) make it easier to create dangerous weapons. While considerable efforts are being made to address these issues, the industry standard is to leave much undone, with the subtext being that companies don’t have time to fix everything given the competitive landscape. This situation becomes more and more dangerous as AI systems become more capable, which the next administration will need to account for.
If companies had to internalize the costs they impose on the world through the risks they are taking with humanity’s fate, the market might address these issues. However, this is not currently the case. Liability as it relates to AI remains unclear, and increasingly advanced AI systems may be capable of causing billions of dollars in damages through accidents or through being used as powerful weapons to attack adversaries, particularly in the cyber domain. As AI becomes a more critical technology for national security purposes, the U.S. government needs to closely monitor its development to ensure that these powerful systems are robust and safe tools that secure American dominance.
The concern with truly “scary” AI safety risks–as Trump described them–is that they could lead to catastrophic accidents and, in the worst case, to extinction. Alarmingly, the leaders of all of the major AI companies acknowledge this possibility. In such a scenario, there would be no one left to bear the costs or address the consequences. Therefore, it is essential that governments are fully aware of ongoing developments to manage these risks effectively.
There is also the matter of open source, which raises some tensions if it indeed turns out to be a major element of the next administration’s approach. Open source AI has many benefits, fostering innovation and potentially advancing the US’s competitive position. However, it also gives increasingly powerful capabilities to everyone, including adversary countries, and at some point we expect this may become intolerable to the administration given the desire to outcompete China, though we consider ways to at least somewhat reduce the risks from open source below.
Alleviating the tensions
In this section, we discuss a range of ideas for ensuring that American AI leadership and safety do not work at cross-purposes, and indeed ways in which they can be synergistic.
This is not exhaustive, and again, the devil is in the details.
Driving federal legislation on AI safety that preempts a growing patchwork of state laws and clarifies liability. This helps ensure that there are reasonable guardrails for the otherwise-vigorous competition. An existing starting point is the Bipartisan Senate AI Working Group's Roadmap for AI Policy, which emphasizes the importance of federal legislation to establish consistent AI safety standards. Such clarity can help businesses plan better and innovate faster within well-defined guardrails. Legislation could also call for using federal purchasing power as a financial incentive: companies would need to adopt certain norms in order to win certain contracts (e.g., driving adoption of personhood credentials to address deepfakes and related issues), which is a more market-friendly approach than blanket requirements across the economy.
Any informed US policy-making needs to be built on a foundation of good information, and the Biden executive order – while controversial among Trump’s allies for other reasons – did begin to require information sharing with the government on, e.g., large datacenters and compute-intensive AI systems. Legislation should consider ways to put this kind of government information-gathering on a solid footing.
Federal AI legislation could also give key AI-related agencies the authority to hire more quickly and offer higher salaries than is possible today, which relates to our next point.
Investing in government capacity. In order to pull off any of Trump’s agenda effectively, including fostering economic prosperity, his administration will need AI experts who understand what’s going on, can make sense of the information companies give them, and can ask the right follow-up questions. Moreover, for efficiency, it is very helpful for the government to have a dedicated AI team that other parts of the federal government working on AI-related projects (e.g., the Department of Defense, the Department of Energy) can consult for questions, advice, and feedback on approaches.
Currently, the US government’s “AI brain” is the US AI Safety Institute in the Department of Commerce. Since this initiative started under Biden, it’s unclear what will happen to it under the next administration. Additionally, Biden’s executive order, among other things, set in motion a demand signal from the top of government to start building out more capacity. US AISI currently comprises only about 10 research scientists, a sign that the government has a long way to go in building capacity, but also an indication that it would not be difficult to move AISI personnel to another part of government. Whether Trump ultimately keeps, rebrands, or replaces the US AI Safety Institute, his administration will need to ensure that the flow of talent into government accelerates rather than decelerates. Otherwise, efforts to advance innovation and build out a robust domestic semiconductor supply chain will falter due to limited capacity and poor decision-making within various agencies. Further discussion of AI infrastructure, including semiconductors and energy, will be provided in the third essay of this series.
Launching prizes, purchase guarantees à la Operation Warp Speed, an ARPA-AI, and other efforts to accelerate AI safety research. Among other goals, this work could explore ways to reconcile open weight AI with societal resilience against the misuse of AI. Funding could target “d/acc”-style investments that could allow open sourcing of models while baking in guardrails that are more resistant to fine-tuning than is possible today. Targeted investments could also be made in areas like cyber-defense and bio-defense. These could include both AI-related investments and more general investments like physical defenses against biological threats. In combination, these investments could help make AI systems and society safer amidst widespread AI innovation and adoption. Given the national security overlap with Chemical, Biological, Radiological, and Nuclear (CBRN) defense, DARPA would be a suitable agency to house these initiatives, a new civilian ARPA-AI could be created, or a combination of the two could be pursued.
Pursuing and piloting more surgical approaches to export controls (e.g., exporting chips to authoritarian countries only if the chips have location trackers and China agrees to verification processes to limit misuse). This would allow American semiconductor companies to continue succeeding while limiting the potential damage caused by both exports and open weight models. Innovative approaches to export controls could also be piloted in countries that sit between the US and China, like the UAE. It is becoming increasingly clear that leadership in AI is not just a matter of training big models but also of investing a lot of computing power into running them, both to scale up the number of users and to throw a lot of (digital) brain power at very hard problems. That requires chips, not just the weights for an AI model, so doing an excellent job on export controls can give the US more “breathing room” to be permissive on open source with fewer repercussions.
Conclusion
It’s hard to say how far the suggestions above would go in addressing the tensions in the next administration’s potential AI policy agenda, but we think they would be a good start. These are early reflections, and our thinking may evolve as Trump’s actual AI policy agenda comes into more focus and evolves – sometimes policies that make sense in isolation don’t make sense when combined, so this blog post is a bit of a speculative exercise. We hope some find it helpful, and we’re curious to hear what people think.
Acknowledgments: Thanks to Dean Ball, Adrian Weller, Nathan Calvin, Michael C. Horowitz, John Bailey, Larissa Schiavo, Chris Painter, and Herbie Bradley for related discussions and feedback, though they do not necessarily endorse any of the content here. All remaining errors are those of the authors.