I give advice to folks about careers in AI policy several times a week, and I’ve noticed that some questions come up pretty often. I figured that writing this document would help scale up my advice-giving to more people (hopefully helping them!) and also make the 1:1 conversations I do have more focused.
Please note that these are just my personal opinions. These opinions are informed in part by my experiences at OpenAI, in academia at ASU and Oxford, in government at the US Department of Energy, and as an affiliate of various think tanks. Those experiences probably bias me in various ways, though I am trying to speak about the field generally rather than any specific organization. I also may be making some mistakes/giving bad advice, so use your own judgment on all of these things.
As discussed under the first question, AI policy means various things to various people, and while I hope this post will be useful to folks interested in AI policy practice and engagement, I default to focusing on the AI policy research side of things.
FAQs
What does a day in the life of AI policy look like?
Let’s start with what AI policy means, since the short answer to the question is “it depends on what you mean by AI policy, and I can’t speak for everyone.”
“AI policy” means a lot of different things to different people, and the work involved varies a lot even within a specific organization or a specific team within that organization. But roughly, to me it means the theory and practice of authoritative decision-making about AI by public and private actors, and it involves various “modes” of work which different people/organizations/teams tend to specialize in — AI policy research, AI policy implementation, and AI policy engagement.
Some roles are pretty clear-cut (e.g. someone may exclusively focus on researching the design of international institutions for governing AI from academia, or on leading engagement with a specific set of stakeholders on behalf of a company). Other times AI policy is hard or impossible to distinguish from AI ethics, safety, etc., and research, implementation, and engagement are intermixed. For example, a “trust and safety” team at an AI company might sit at the intersection of these various threads, and involve a lot of implementation plus some research and some engagement (and some specific projects will be a mix of these things).
With that context, I can’t really speak to what AI policy people do in general, but I can speak to my own experience. My time is spent on a combination of meeting with people about research- and deployment-related topics, reading about the latest developments on other teams on Slack, meeting with folks outside of the organization about various things, reading work on AI policy produced by my team/other teams/other organizations, reading about stuff outside AI policy which may be relevant, writing, and giving feedback on writing.
As a manager I spend a disproportionate amount of my time in meetings, but I also set aside a fair bit of time to read/explore (on which more below) and spend a fair bit of time writing internal memos, writing and providing feedback on papers, etc. Before I was a manager I spent a larger fraction of my time going deep on specific topics and leading projects end to end (with such projects typically taking months, though sometimes weeks or years depending on the project).
What kind of backgrounds are you looking for in people who might join [org]? / how can I make myself a more attractive candidate for [role]?
There isn’t really a single profile that I personally look for, and I think other folks in industry would say similar things — my team currently includes folks with backgrounds in political science, law, computer science, science and technology studies, tech policy, economics, philosophy, etc. I think of educational credentials as signals of grit and skill in a certain area (though they are not necessarily the only ways to signal those things), and for many disciplines I can assume that with a degree in that area you have reasonable writing skills.
But there isn’t any formal credential which would assure me that someone would add value. Hence hiring processes involve interviewing people for various skills not necessarily well captured in CVs (e.g. experience with and attitudes towards collaboration), and one’s written output counts for a lot; it’s in some sense the currency of AI policy and of AI policy careers (even people who focus on the technical side of AI policy need to be able to communicate the results of their work clearly and connect it to wider considerations, etc.).
What should/can I do if I’m “non-technical”?
I don’t like that term since it’s ill-defined, often applied in biased or inaccurate ways to people from underrepresented backgrounds, and conveys a static mentality re: one’s skills.
Everyone should invest in areas in which they are currently weak, and plan to keep learning new skills throughout their lives. Folks who come from “relatively” “non-technical” backgrounds should think of technicalness not as a single trait but as a messy cluster of skills with various sub-components, some of which nominally technical people aren’t themselves strong in:
- (most importantly) an intuitive understanding of the state of the art, derived from a lot of experimentation with various AI systems;
- fluency in relevant jargon (which comes in degrees);
- math skills;
- general coding skills (on the lower end, you’d ideally at least be comfortable using the terminal sometimes to install something on your laptop or to use an API, plus some Python familiarity; note that the skills required to achieve any given task are rapidly changing with AI, which is part of why I encourage experimentation);
- familiarity with the “lay of the land” of AI as a field: who’s working on what, what is easy/hard and with what engineering investment, etc.;
- understanding of the machine learning literature; and
- understanding of machine learning practice.
There’s no one-size-fits-all blueprint for investing just the right amount of time in the areas above, but I would again emphasize the importance of practical experience, and I think a general “diminishing returns” mentality is applicable across these skills: if you are at zero in one of them, you will probably get a lot of value from learning the basics.
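To make the “lower end” of the coding bar above a bit more concrete, here is a minimal sketch of the kind of thing I have in mind: installing a package from the terminal and calling a web API from a few lines of Python. (The requests library is real; the endpoint URL, environment variable, and prompt below are made-up placeholders for illustration, not any particular product’s API.)

```python
# Install the dependency from the terminal first:  pip install requests
import os

import requests

# Placeholder endpoint and credentials -- swap in whichever API you are experimenting with.
API_URL = "https://api.example.com/v1/generate"
API_KEY = os.environ.get("EXAMPLE_API_KEY", "")

# Send a simple request and inspect what comes back.
response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"prompt": "Summarize the main provisions of a recent AI bill in two sentences."},
    timeout=30,
)
response.raise_for_status()  # fail loudly if the request didn't succeed
print(response.json())       # look at the raw structure the API returns
```

If you can comfortably run something like this, tweak it, and make sense of the error messages when it breaks, you’re already well past zero on the coding dimension above.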
What are the pros/cons of working in [industry, academia]?
People don’t often ask explicitly about civil society (e.g. non-profit advocacy groups, think tanks, etc.) or government (including national level, state/local, and international/regional organizations) but they should. So I’ll comment on those as well.
This is perhaps counterintuitive, but below I’ll lump industry and government together and civil society and academia together, since I think there are many similarities within each of these two clusters; in a few cases I’ll draw more fine-grained distinctions.
Industry and government are great options if you want the opportunity to engage with cutting-edge capabilities and day-to-day decision-making about AI by companies, and are willing to accept some related tradeoffs. The opportunity is somewhat self-explanatory — it’s exciting and informative to be “on the inside” of developments in technology and governance thereof. A few years ago that mostly meant being in industry (which of course raised a bunch of issues) and now it increasingly means being in industry or government, as governments get more involved.
In industry and government, there is more of a presumption that you will engage with the cutting edge (be it a new model in industry, or a new executive order, piece of legislation, etc. in government); otherwise there’s less reason for you to be there in the first place, since non-model-specific/non-legislation-specific research can in theory happen anywhere. In practice, even though some research could be done anywhere, empirically it won’t be done outside certain labs, because certain issues become salient sooner to those in industry given their proximity and access to proprietary information. (I’m summarizing a loose framework here that people, myself included, sometimes use explicitly or implicitly.)
You may have some degree of autonomy in what you work on in industry or government, but it will generally be less than in academia, since your value add will generally be seen as “helping the organization make better decisions and take better actions” rather than simply as adding to the overall body of knowledge available to everyone (which an organization will sometimes value, but won’t always). It will also generally be harder to publish your findings and personal ideas while in industry or government (since your papers may be seen, intentionally or not, as reflecting the official position of your organization), and you will generally be expected to focus on understanding and shaping developments at your own organization (be it a company or a government agency) rather than in the wider world, i.e. whatever AI system, AI policy issue, etc. you are most interested in, even if it is happening elsewhere.
Academia and civil society are great options if you want to understand, comment on, and critique state-of-the-art developments in technology and governance, and to be (perceived as) a relatively impartial voice in doing so. If you’re in industry, it is hard for you to be seen as neutral (maybe informed, but not neutral), whereas this is possible to a greater extent in academia and civil society. Civil society is attractive for those who want to “keep their ears to the ground” of public opinion and different stakeholders’ views on and interests in AI, while actively shaping public debate in a way that is often seen as more legitimate than when industry aims to do so. Academia and civil society are critical places for developing new knowledge and perspectives, and civil society is critical for holding both industry and governments accountable to a wide set of interests.
If your theory of change largely involves deep scholarly understanding of some AI policy issue and you are not super fussed about implementation details, academia is hard to beat. It’s the place to be if you enjoy going to academic conferences and doing cross-disciplinary collaborations, and if you don’t particularly need state-of-the-art information; and of course it’s the clear winner if you want to teach in a university setting (though mentorship opportunities abound elsewhere).
The biggest drawbacks to being in academia and civil society are resourcing (including salary, computing power, etc.) and access to information about state-of-the-art technical developments. While various organizations are taking steps to make state-of-the-art AI capabilities more widely available, and some collaborations occur across organizational boundaries, generally speaking there is a lot of information asymmetry between industry and the rest of the world, and governments also have a big informational advantage. Companies know more about the state of the art of the technology, and governments know more about what’s feasible and likely policy-wise. Governments also aggregate information from the many people they talk to, so they “drink from a firehose” of information from across society.
These are not totally clear-cut distinctions. You can be more or less proximate to industry while in academia or civil society. The main thing to take away here is that different paths make more/less sense depending on the kind of work you want to do.
Generally, I encourage most folks to try to get experience in multiple sectors and to make the most of the sector they’re currently in (on which more below in the general advice section), but it’s hard to give universal advice here. For some folks it may make sense to be a “lifer” in a particular sector: if you care a ton about publishing autonomy, academia may be right for you, or if your theory of change involves activism, then civil society may make more sense.
What are you thinking about these days?
Again, as a reminder, this is all just my personal opinion, but much of the time I am struggling with the tension between (1) making the most of the current moment in AI policy (learning what I can about the real impact of current models and the risks/opportunities of the next ones, increasing the quality of public discourse on, and implementation of, industry norms and public policymaking, etc.) and (2) trying to think ahead to ways in which the constraints/affordances of AI policy could or should change a lot in the future. These roughly fall into the buckets of asking myself/others “What’s next?” and “What’s missing?”, respectively. As a manager I think a lot about what the right portfolio is within and across those buckets.
I’m pretty transparent on Twitter, in papers, etc. re: what I think about things, so there aren’t very many big clusters of issues that I think are important but haven’t said anything about (there’s always some gap, but I try to close it as quickly as possible). A recent summary of how I see the “lay of the land” in AI governance (using that term synonymously with “policy” above) can be found here, and more generally see my publications.
Where are the jobs?
I don’t really get this question explicitly very often, since there are almost always some jobs listed, but I am using this pseudo-question as a prompt to speculate on some dynamics in the AI policy job market that are worth thinking about if you’re pursuing a career in this area.
I am not that familiar with the data here (I’d guess no one has looked at it carefully, but let me know if I’m wrong), but my anecdotal impression is that there is an under-supply of senior AI policy researchers who can autonomously pursue impactful research and mentor others, and an over-supply of junior AI policy researchers who need a fair amount of supervision. I mean “over-supply” in the sense of there being a lot of applicants per job rather than implying that this is more people than “the world needs”; I think we need a bigger AI policy field, it’s just a question of how best to organize and fund it.
My theory for what’s going on there (assuming the claims above are true) is that the field is still somewhat young in its current “shape” and that experience takes a while to gain, so there just hasn’t been enough time to “grow” a lot of senior talent. As a result, a lot of job listings are fairly senior, and there is a growing ecosystem of new think tanks/startups etc. trying to “absorb” this talent. Governments are ramping up hiring, which may help create new junior opportunities but doesn’t help on the senior side of things (since some people who would be good fits in senior government roles are currently supervising people elsewhere).
AI policy isn’t totally unique in these respects. You should get experience where you can, so don’t worry too much about the above (and don’t get deterred from applying; just do it, on which more below). You’ll be better off e.g. having done an AI engineering job, or a product management job in another sector, or research on a different topic, etc. vs. not having done those things, so find ways to keep yourself busy and learning while sharpening your AI policy skills/portfolio (you can probably publish on AI policy even if your current job isn’t in AI policy). Also, building mentorship/management experience can be valuable in helping address the bottleneck discussed above.
General Advice
Here are some additional bits of advice I give pretty often which aren’t really answers to a specific question above.
Don’t over-anchor on specific roles
This is a very dynamic field, which is why my old career advice doc is basically obsolete. OpenAI didn’t exist when I was first starting out, Anthropic didn’t exist until just a few years ago, many non-profits and startups didn’t exist until a few months ago, there were many fewer AI policy-specific roles in government until very recently, etc.
The upshot is that planning for a very specific “dream job” is impractical and unwise — impractical because it’s very unlikely that the precise role you want will be available at the exact time you need a job and that you’ll be the one to get it, and unwise because you may miss out on much better opportunities for fulfillment and impact.
It’s better to be flexible re: the details of your future role(s) and focus on building strong general-purpose skills, establishing a solid portfolio of work, and getting experience where you can (more on that below).
Have as much fun as you can
Whether you just dabble in or seriously pursue AI policy, you should try to enjoy the process as much as you can and be grateful that you’re alive at a time when very fundamental aspects of the future of humanity’s relationship with technology are being debated and steered, and where you can make some small contribution to that.
If the work you’re doing feels boring, maybe it’s because it’s the wrong career for you, or maybe you’re “doing it wrong” in the sense of being too generic (see below) and not pushing the Overton window enough. Having a big impact often benefits from exploring widely and doing things that haven’t been done before, which in turn can be fun.
Expect a lot of failure
You should expect (and hope) to fail a lot. It happens to everyone, even folks who are successful by various measures — e.g. in the past few months I’ve had a paper rejected, missed a deadline I really wanted to hit, had various misunderstandings with coworkers that took effort to resolve, abandoned projects that weren’t going well, etc. A lot of things I do somewhat reliably now, I totally failed to do, over and over, for a while, and I’m still finding new things to fail at. If you’re not failing a lot, maybe you aren’t being ambitious enough.
Maybe you actually aren’t being particularly ambitious, e.g. you’re just applying for entry-level jobs, but you still may fail a lot through no fault of your own. Don’t be too discouraged, but do learn from the experience. You may worry (as I used to) that people are going to gossip about you behind your back and share all of your failure stories amongst each other, but actually people aren’t really thinking about you (or anyone else) that often and are mostly focused on their own stuff, so just focus on doing the next thing.
Timing is important
A lot of AI policy papers do not seem that helpful to me for timing reasons (note that those papers may still be helpful for someone other than me — see below). Work can be too early, in that it’s too abstract to be useful for decision-making by any particular actor; and it can be too late, in that it’s specific but it missed the window of opportunity during which it would have been the most helpful. A paper published in February may not be as helpful as if it had been published in January, or it may not be as helpful as if it had come before another similar publication. Timing is one reason why the access advantages discussed above can be important — it makes it easier to see when a window of opportunity is coming up and when there is a specific challenge looming that isn’t yet getting much attention (note that these points are somewhat related to the theory of punctuated equilibrium in policy change and Collingridge’s dilemma).
This is one reason why it’s critical that you keep abreast of the state of the technology, especially by trying it out directly. Even if you don’t have special information, you can still spend more time than others understanding what’s possible with AI, what issues it raises in various domains, how safety mitigations are evolving over time on a given product or how they vary between products, etc.
Think carefully about the purpose of your writing
Sometimes writing is just for experience or “getting your name out there.” Other times it’s to build up a set of things you can refer to in job applications, or to show that you are making a career pivot. Other times you are writing to synthesize information or explain it better than has been done to date, to introduce a new idea or normalize an existing one, to flesh out a vague idea, etc.
These are all valid reasons to write, but they can trade off against one another, so it’s good to have an explicit understanding (in your head, if not written down in an explicit personal career plan) of what exactly you’re trying to achieve.
Write multiple kinds of things
There’s a spectrum of formality in writing (from tweets to tweet threads to blog posts to lightly reviewed or non-peer-reviewed workshop papers to conference and journal papers). Don’t put all your time into the “fully formal” end of the spectrum: having a larger number of outputs (which is easier as you get closer to the informal end) will help you iterate on feedback faster and help with getting the timing of outputs right, since less lead time is required. There are always plenty of things to write about that don’t require a big new insight, e.g. a book review, a summary of a conference you attended (even virtually), a comment on a recent paper or news story, etc.
Just apply
If you’re interested in a role, just apply. There’s basically no downside other than the time involved and you’ll get more efficient at it over time. Anecdotally, I get the impression that people from underrepresented backgrounds tend to sell themselves short more in terms of what they’re “qualified” to apply for. Don’t take the listed requirements too literally — your potential competitors won’t, and the expectations for the role may just be illustrative/subject to change etc.
Also, don’t spend too long on your application — you may miss the window, and while maybe you can get in touch and have them consider you as a special case, it definitely doesn’t help to be late.
Don’t be too generic
Explore widely when it comes to reading material, types of projects, etc. The most impactful ideas often come from connections and analogies between very “distant” ideas/literatures/disciplines. If you work on and read about all the same things as everyone else, you will say similar things to those people.
There are some risks of exploring too much (being a “dilettante” with no substance) but many people don’t explore enough.
Have a niche. This isn’t actually inconsistent with exploring widely, though it sounds like it is. You should be known as someone with wide interests, who knows all the basics (or is deliberately teaching themselves the basics), and who is carving their own unique path on a topic or set of topics that they were the first (or best) to tackle.
Get practice with public speaking
This will help you get more comfortable giving presentations in meetings, etc., and it is important even if you don’t plan to give lots of (or any) public talks. Presentations and meetings, in turn, are important for communicating your ideas to decision-makers, getting feedback, and demonstrating the value you can add to potential employers.
“Public” speaking need not mean recorded, fully public to anyone, etc. — it could be a small group, a presentation to a single person on Zoom, a speech at Toastmasters, etc.
Play to the strengths of your current sector
Do things and build experiences that are impossible in other sectors while you can. For example, it’s hard to build a foundation in areas far from your discipline once you’re busy with a full-time job, so take advantage of the opportunity to explore a lot of disciplines while you’re in school. If you get an opportunity to work with or in industry, take advantage of the access you have to cutting-edge developments on the technical, product, etc. sides by chatting with lots of folks (though in any sector, you should do your homework before reaching out and before meeting, to be respectful of folks’ time).
Optimize for experience and impact, not credit
The policy world is different from academia in that fair (in an academic sense) credit assignment cannot be assumed and is in fact uncommon, since many outputs don’t have authors listed, or may be credited to e.g. a policymaker who didn’t actually write them from scratch but drew on many unnamed staff and other stakeholders. Also, a lot of the most important developments in AI policy are not really covered in papers, and may not look like “real work” or “real research” from an academic perspective.
Some of your accomplishments will not only not be visible publicly, but will also be very “partial” in nature (e.g. helping influence one part of a much larger document/project).
Fortunately, potential employers are aware of these phenomena and look at more than one’s public profile when considering candidates, and you can talk about things that aren’t obvious from e.g. Twitter or Google Scholar when you apply for jobs. But you may still want to take steps to establish a personal “brand” outside of your official work, and keep track of the various things you have “fingerprints” on, even if you aren’t on the author page.
Expertise, confidence, and taste come from practice
Excellence is mundane. Expect to gradually improve in a bunch of specific areas (writing, public speaking, research taste, technical skills, etc.), constantly practice in areas where you’re weak, and continue investing in areas where you’re strong. In the aggregate, these will build your confidence over time as you accumulate a growing number of “wins” (both things that are publicly recognized as such, and things that aren’t but which you know were well done). Some people are more confident than others, and you should avoid getting overconfident (except perhaps in the specific sense of “being willing to make bets and psych yourself into believing in them, since that’s the way to find out if they actually can work out, even though you’re actually not sure”), but ideally your confidence will increase gradually over time as your skills improve.
People don’t always write down all of the insights from their experience. When they do, it can be worth a bit of time to read it, e.g. this great piece by Tom Kalil. But even then, it’s no substitute for experience. You may “know” (have read) various tips and tricks, but knowing how to apply them in practice (and when to ignore them), and actually “feeling them in your bones” so that they deeply inform your planning — those will take more time, and it’s OK to feel like you don’t know what you’re doing (this is true of everyone to some extent, if they’re honest with themselves).
Solicit feedback and sit on it before responding
People won’t necessarily give you feedback by default. If they do (even if you don’t agree with it) and they aren’t being super rude, you should be grateful for the opportunity to learn and improve, and take some time to process it before responding. Don’t take it too personally and try to look for the truth in what they’re saying even if it feels super wrong to you.
You should also go out of your way to solicit feedback (while being conscious of people’s time).
Don’t over-anchor on any particular piece of feedback, either, and remember that no one’s perfect/everyone had to start somewhere. There’s a saying in the context of writing that negative feedback typically has a point, though specific suggestions for fixing the problem are rarely correct. I’m not sure if that applies to AI policy careers but I have learned over time that feedback which seems wrong often has more truth to it than I initially thought.
PhDs aren’t all or nothing
If you have time to apply to a PhD program and are somewhat seriously interested in doing it, go ahead and apply. You can always change course later and drop out, and you will still be better off both skill-wise and credential-wise by virtue of being a PhD student or PhD candidate compared to being neither.
Use social media but take it with a grain of salt
A lot of AI policy debate, discovery of new ideas/framings, etc. happens on social media. And social media is a great way to get “discovered” and to get feedback on your ideas. But don’t take it too seriously, either — Twitter in particular does not give a very representative impression of what kind of work is happening in AI policy (it’s easier to tweet about e.g. discrete papers/announcements vs. more continuous projects, and of course some projects are not yet discussable publicly). Social media also gives the impression that things are a bit more polarized than they are “on the ground” (e.g. there is something to the “AI ethics vs. AI safety” thing but it’s overstated). If you find it to be pretty toxic — well, you’re right, though that doesn’t necessarily mean you should stay away from AI policy as a field or that you can safely ignore the discussions happening there.
Always be learning
You can learn something from everything you read/do/hear, though what value you get from different activities will change over time. For example, you will gradually learn less and less in terms of direct knowledge from reading about AI policy since there is so much repetition and genericness, and you will have already thought about a lot of things before they are written about. But you still may get value from reading such things (or skimming them) at a higher level of abstraction: e.g. you will gain knowledge about who knows what, how things are being framed in different disciplines, how people are reacting publicly to that publication/why they may find it more interesting than you, etc.
Your public output should ideally show that you are learning constantly. You should have a niche at any given time, but your niche can evolve/grow/sharpen etc., and you should avoid stagnation at all costs. Potential employers will generally want someone who is already able to add some value but will grow over time in terms of the value they can add. Ideally you will kill two(+) birds with one stone, i.e. produce outputs that show you are growing, give you valuable research/writing experience, and also give you some information about how people react to them.
I hope you found this somewhat useful, and definitely let me know if you have any feedback (I will certainly make small tweaks based on things that folks flag; I may or may not do a larger refresh down the road).
Acknowledgments: Thanks to Larissa Schiavo and Girish Sastry for helpful feedback. Any remaining mistakes are my own, and again, these are all just my personal opinions.