I'm Eva, and I work in AI policy and regulation in London. I want to work out what humanity needs to do to build a safe and prosperous future with AI, and how to make it happen.
Current work:
I lead operations at Conjecture, a London-based AI safety start-up, making sure the company runs smoothly day-to-day.
I share additional essays, commentary and ideas on my Substack.
Past work:
Until January 2025, I researched policy solutions for making AI safe with the International Center for Future Generations (ICFG). During my time with ICFG, I also co-authored a policy paper on internationalising AI governance with the Oxford Martin School.
Before ICFG, I worked as a Policy Fellow with the Center for Data Innovation, which is part of ITIF, where I focused on European tech policy. I advised Pour Demain on their policy priorities for their EU AI policy work, and I helped the OECD Observatory of Public Sector Innovation refine their Anticipatory Innovation framework. I dove into trade and economic policy with the Federation of German Industries and streamlined internal procurement databases for Andritz Oy.
I had a great time participating in EF's inaugural cohort of the Polaris Fellowship in London. I'm also a part of the Talos network, and a UCL alumna.
Publications:
In 2025, at Conjecture, I co-authored three papers on AI safety and geopolitics: The Three Main Doctrines on the Future of AI (published on arXiv and presented at the 2026 IASEAI Conference), Modeling the Geopolitics of AI Development (also published on SSRN), and How Middle Powers may Prevent the Development of Artificial Superintelligence (also published on SSRN).
In 2024, I presented a research report on AI risk thresholds at the UK government's AI Security Institute's Conference on Frontier AI Safety Frameworks. The report, of which I am the lead author, compares AI safety standards to safety standards in other high-risk industries, such as the nuclear sector and aviation.
For the AI Seoul Summit in May 2024, I wrote three concrete policy recommendations that would constitute necessary and significant steps towards making advanced AI safe.
With the Oxford Martin School, I co-authored a 2024 paper on which aspects of AI governance should be internationalised.
I am also the lead author for a commentary on international compute governance for the Istituto Affari Internazionali.
I contributed to A Narrow Path, a comprehensive set of policy proposals to build a future in which humanity is in control of AI, and avoid the current default outcome of uncontrollable, inscrutable superintelligent AI systems that determine humanity's future.
On the ICFG blog, I wrote about tricky AI governance challenges, such as the emergence of unexpected capabilities (also here) and ensuring post-deployment safety (also here). I also published an explainer on what makes advanced AI so different (also here) in the wake of the 2023 AI Safety Summit in Bletchley Park, UK.
I wrote an essay on why the 'AGI race' between the US and China doesn't exist, which you can find on my Substack and on LessWrong.
For the Center for Data Innovation, I wrote policy commentaries about European tech policy, on social media and trust in public institutions, eIDs, energy efficiency standards, and anti-SLAPP legislation. I also had the pleasure of conducting and publishing interviews with inspiring AI start-up founders.
At the Federation of German Industries, I co-authored a leading policy report on German-Chinese trade relations (in German).
You can reach me at eva.behrens.eb [at] gmail.com. Please don't hesitate to reach out.