Defense in Depth

An Action Plan to Increase the Safety and Security of Advanced AI

For media inquiries, contact [email protected]

Contributor Bios

Edouard Harris
Action Plan Lead

Edouard Harris is the CTO and co-founder of Gladstone AI. Edouard is a machine learning engineer and AI researcher, serial startup founder, and angel investor who holds a PhD in Physics. In 2017, he co-founded and ran an AI training company (now acquired) that raised capital from top Silicon Valley investors. At Gladstone AI, Edouard tracks the real-time development of advanced AI systems and conducts technical research into AI safety and AI alignment. As part of this work, he has collaborated with researchers at the world's top AI safety organizations, including the Center for Human-Compatible AI at UC Berkeley, Google DeepMind, and OpenAI. Edouard has advised senior U.S. policymakers on advanced AI risk, including officers in the U.S. DOD and a U.S. Cabinet Secretary. Edouard led Gladstone's work developing the Action Plan, and contributed to the Survey of AI Technologies and AI R&D Trajectories as a domain expert in both AI national security and technical AI safety.

Jeremie Harris
Survey of AI Technologies and AI R&D Trajectories Lead

Jeremie Harris is the CEO and co-founder of Gladstone AI. He is an experienced AI startup founder with deep technical expertise in AI safety and security, who has co-founded and sold companies backed by top Silicon Valley investors. He has led historic workshops and briefings on AI national security risk for some of the most senior national security and defense officials in the United States and around the world, as well as cabinet secretaries, central bank governors, and generals. He has also trained senior DOD executives and leaders in AI and its national security implications. Jeremie led Gladstone’s work developing the Survey of AI Technologies and AI R&D Trajectories, and contributed to the Action Plan as a domain expert in both AI national security and technical AI safety.

Mark Beall

Mark Beall formerly led the Strategy and Policy Directorate of the Department of Defense Joint Artificial Intelligence Center. In this capacity, Mark led DOD participation in national AI policy development and implementation, including policy for safeguarding AI capabilities. He engaged with over 70 allies and partners, as well as China, across a range of bilateral and multilateral fora on AI and proliferation issues, and was the architect of the AI Partnership for Defense, a group of 14 like-minded allies focused on AI issues. He contributed to the assessment as a domain expert in defense policy, and is a former co-founder of Gladstone AI.

Benjamin Isaacoff
Project Lead

Benjamin Isaacoff leads a team of AI & ML scientists at General Motors researching, developing, and deploying AI systems for diverse applications spanning the business. He previously worked in technology public policy in the U.S. Senate and U.S. State Department, where he led the drafting and passage into law of two bills on space policy and led the development and implementation of the State Department Strategic Framework for International Engagement on Artificial Intelligence. He contributed to the assessment as a Project Lead and a domain expert in State Department policy.

Jonathan Askonas
History Team Lead

Jonathan Askonas is an assistant professor of Politics at the Catholic University of America, where he works on the connections between the American political tradition, technology, and national security. He is currently working on two books: A Muse of Fire: Why the U.S. Military Forgets What It Learns in War, on what happens to wartime innovations when the war is over, and The Shot in the Dark: A History of the U.S. Army Asymmetric Warfare Group, the first comprehensive overview of a unit that helped the Army adapt to the post-9/11 era of counterinsurgency and global power competition. Jonathan led Gladstone's history team, which drafted its Historical Survey of Technology Control Regimes.

Alexander Falbo-Wild
Historical Survey of Technology Control Regimes Lead

Alexander Falbo-Wild is a historian, researcher, and professional military educator. From 2014 to 2018, he was a Case Method Teaching Fellow at Marine Corps University and an Honorary Historian in Residence with USMC History Division. He then served as Chief Archivist to the Maryland National Guard's Office of the Command Historian from 2018 to 2021. He has provided instruction at the U.S. Naval Academy, the Canadian Forces College, and the British Army's Education Training Service. He is currently a history PhD student at Temple University. He led the drafting of Gladstone's Historical Survey of Technology Control Regimes.

Caroline Pitman

Caroline Pitman is a recent graduate of the Catholic University of America, where she studied international politics, theology, and history. She is currently serving as a Refugee Resettlement Specialist in Atlanta, Georgia as part of the Jesuit Volunteer Corps. Caroline previously worked at the Brookings Institution as a research assistant within Governance Studies and held internships with the United States Agency for International Development, the Senate Foreign Relations Committee, and Freedom House. She supported the assessment as a domain expert in military history.

Joseph Youngberg

Joseph Youngberg is Gladstone’s Chief of Operations. He is an Air Force veteran with an extensive background in security and management. He oversees operations involved in Gladstone’s delivery of one of the first GPT-4-powered applications ever deployed in the U.S. Department of Defense, as well as Gladstone’s Foundations of AI course, which has been used to train hundreds of DOD senior leaders and executives. Joseph supported the assessment by leading Gladstone’s operations.

[anonymous]

An anonymous AI capabilities researcher at a leading frontier lab also directly supported the assessment by drafting, reviewing, and editing components of several deliverables. Their name is encoded in this cryptographic hash: e5b63409d76da1d33c49e44cf55f8b7ebd52baa6e624a0b53dcd9a04377ce7dec5cabed82c831c10dc10bab942def414eb904f320129be7f41102611283aa02a

Press

Exclusive: U.S. Must Move ‘Decisively’ to Avert ‘Extinction-Level’ Threat From AI, Government-Commissioned Report Says, Billy Perrigo - TIME Magazine

Employees at Top AI Labs Fear Safety Is an Afterthought, Report Says, Billy Perrigo - TIME Magazine

AI could pose ‘extinction-level’ threat to humans and the US must intervene, State Dept.-commissioned report warns, Matt Egan - CNN

Media Clip: Matt Egan’s LinkedIn Post

State Dept-backed report provides action plan to avoid catastrophic AI risks, Ben Dickson - VentureBeat

US-funded report issues urgent AI warning of 'uncontrollable' systems turning on humans, Steph Sorace - Fox Business

Podcast: AI: More DANGEROUS Than We Could EVER Imagine | feat Gladstone AI's Jeremie & Eduardo Harris - Bryan Callen Podcast

Press Release: Gladstone.AI Announces the First-Ever AI Action Plan for United States National Security Commissioned by the U.S. State Department