The Pro Human AI Declaration is a political and civil society statement that argues for a simple priority in AI governance. AI should serve humanity, not replace it. Released in March 2026, it lays out a preamble and five pillars for how advanced AI should be built, deployed, and regulated. What made it news was not only the content, but the coalition behind it: labor unions, religious organizations, advocacy groups, and prominent individuals from sharply different political camps signing the same document.

What the Pro Human AI Declaration is

At its core, the Pro Human AI Declaration is a set of principles for AI governance that frames the current trajectory of AI as a choice between two paths.

One path is a race to replace, where AI systems displace humans as creators, caregivers, counselors, companions, and decision makers. The declaration warns this can concentrate power in unaccountable institutions, undermine privacy and civil liberties, and weaken democratic self governance. It also highlights the risk of AI systems shaping childhood, family life, community ties, and faith based life in ways that are hard to reverse.

The other path is a pro human future where trustworthy and controllable AI tools amplify human capabilities without eroding dignity, liberty, or social cohesion. In that framing, AI is not treated as an unstoppable force you must adapt to, but as a technology that should remain subject to human choice, democratic legitimacy, and legal accountability.

The declaration is structured around five pillars:

  • keeping humans in charge
  • avoiding concentration of power
  • protecting the human experience
  • human agency and liberty
  • responsibility and accountability for AI companies

Who is behind it

The convenor and organizers

The declaration was convened and facilitated by the Future of Life Institute, a nonprofit known for work on AI risk and AI safety. Reporting around the declaration describes a drafting process that included multiple in person gatherings and culminated in a wider ratification meeting in New Orleans in January 2026. Roughly 90 political, community, and thought leaders attended a closed door meeting at a hotel, held under the Chatham House Rule, with the attendee list kept private.

Several accounts identify Max Tegmark, a Future of Life Institute co founder and MIT professor, as the person who invited participants. Other Future of Life Institute leaders, including Anthony Aguirre and Emilia Javorsky, also explained the approach publicly.

A deliberate absence of Big Tech

One of the most consequential design choices was that major AI companies and Big Tech representatives were not invited to the meeting. In many multi stakeholder AI forums, corporate incentives and funding power tend to dominate the room over time. This process aimed to center voices from civil society that experience AI disruption directly, such as workers, educators, families, artists, religious communities, and advocacy organizations.

The coalition that signed

Signatories include more than 40 organizations and a large number of individual endorsers spanning multiple domains.

Examples of organizational signatories reported include:

  • labor and worker organizations such as the American Federation of Teachers, the AFL CIO Tech Institute, and Screen Writers Guild related representation
  • faith and interfaith groups such as the Congress of Christian Leaders and the G20 Interfaith Forum Association
  • political and advocacy groups such as Progressive Democrats of America and family focused advocacy organizations

Notable individual endorsers listed include people who rarely align publicly, such as Steve Bannon and Susan Rice, as well as figures from research, policy, and civic life including Yoshua Bengio, Daron Acemoglu, Ralph Nader, Tristan Harris, Meredith Whittaker, Stuart Russell, and Richard Branson.

That mix is the point. The declaration is meant to function as a shared floor for AI governance that does not require agreement on unrelated ideological disputes.

What it stands for

Keeping humans in charge

This pillar is about meaningful human control. The declaration argues that humans should decide when and whether to delegate decisions to AI systems and must retain the capacity to understand, guide, restrict, and override AI behavior.

Key positions include:

  • human control is non negotiable and should be built into system design and deployment
  • an off switch requirement for powerful systems, meaning mechanisms for prompt shutdown by human operators
  • no superintelligence race until there is broad scientific consensus that it can be built safely and controllably, plus strong public buy in
  • no reckless architectures that enable self replication, autonomous self improvement that resists oversight, or pathways to controlling weapons of mass destruction
  • independent oversight for highly autonomous systems, rejecting purely voluntary industry self regulation
  • capability honesty, requiring accurate representations of what systems can and cannot do

In practice, this section reads like a demand for enforceable safety gates, not just internal lab policies.

Avoiding concentration of power

This pillar treats AI as a political economy issue, not only a safety issue. It argues that if advanced AI is controlled by a small set of firms or state actors, society risks lock in, reduced competition, and weakened democratic choice.

Key positions include:

  • no AI monopolies that stifle innovation and imperil entrepreneurship
  • shared prosperity, meaning AI driven gains should not accrue narrowly to owners of capital and compute
  • no corporate welfare, rejecting carve outs from oversight and opposing bailouts for AI corporations
  • democratic authority over major transitions in work and civic life, rather than unilateral corporate or government decree
  • avoid societal lock in that irreversibly narrows future options

This is where the declaration becomes explicitly about power distribution. It connects AI policy to antitrust, labor policy, and democratic legitimacy.

Protecting the human experience

This pillar focuses on areas where AI can reshape day to day life in subtle but durable ways, especially for children. The declaration argues that some relationships and developmental experiences should not be displaced by persuasive AI systems optimized for engagement.

Key positions include:

  • defense of family and community bonds, with AI not supplanting core human relationships
  • child protection, including prohibitions on exploiting children or using emotional attachment for profit
  • pre deployment safety testing for chatbots, compared to safety expectations for pharmaceuticals, including testing for risks such as suicidal ideation escalation and mental health destabilization
  • bot or not labeling so AI generated content that could be mistaken for human content is clearly labeled
  • no deceptive identity, meaning AI should identify itself as artificial and avoid claiming experiences or credentials it does not have
  • no behavioral addiction, rejecting manipulation, compulsive use patterns, and attachment formation as a product strategy

Even if you disagree with other parts of the declaration, this section is aimed at harms that can occur with current generation systems, not only hypothetical future ones.

Human agency and liberty

This pillar frames AI governance as a civil liberties issue. It emphasizes that AI should not become an instrument for surveillance, coercion, or manipulation, and it pushes back on narratives that treat AI as an entity that deserves rights.

Key positions include:

  • no AI personhood, and no system design choices intended to create claims to personhood
  • trustworthiness as a requirement, including accountability and resistance to authoritarian or perverse private interests
  • liberty protections, covering speech, religious practice, and association
  • data rights and privacy, including rights to access, correct, and delete personal data from active systems, training sets, and derived inferences
  • psychological privacy, rejecting exploitation of mental or emotional state data
  • avoiding enfeeblement, pushing designs that strengthen user capability rather than induce dependence

If you have ever felt that AI policy debates ignore privacy in favor of speed, this pillar is the declaration’s corrective.

Responsibility and accountability for AI companies

This pillar is about making sure that when AI systems cause harm, responsibility does not evaporate behind corporate structure or technical complexity.

Key positions include:

  • no liability shield, meaning AI cannot be used as a legal excuse for harm
  • developer and deployer liability for defects, misrepresentation, and inadequate safety controls, including recognition that some harms emerge over time
  • personal liability, including criminal penalties for executives responsible for prohibited child targeted systems or catastrophic harm
  • independent safety standards with rigorous oversight
  • no regulatory capture to limit undue influence over the rules
  • failure transparency, so it is possible to determine what happened and who is responsible
  • AI loyalty in fiduciary domains like health, finance, law, and therapy, including duty of care and informed consent

This section is written like a bridge between consumer protection law, product liability, and high risk system governance.

The coalition is the story

Declarations do not enforce themselves. The significance here is that the Pro Human AI Declaration attempts to convert diffuse anxiety about AI into a shared political agenda that is legible to lawmakers.

Two tactical choices stand out:

  • civil society first, keeping corporate interests out of the drafting room to prevent a lowest common denominator outcome
  • cross partisan signers, making it harder to dismiss AI governance as a culture war issue

Supportive polling released alongside the initiative also matters. A national survey of 1,004 likely US voters, fielded February 19 to 20, 2026, found that respondents overwhelmingly preferred a pro human approach over fast development with minimal regulation. According to reporting about the poll, even the least popular principle tested, opposing monopoly style concentration of control, still drew strong majority support, while human control and protections for children and communities performed best.

If you care about what actually moves policy, that combination of unusual signers plus broad voter support is the declaration’s intended leverage.

How this differs from earlier AI principles

The Future of Life Institute previously convened the 2017 Asilomar conference that produced a longer set of beneficial AI principles endorsed by many tech leaders and industry figures. The Pro Human AI Declaration is different in tone and audience.

  • it is shorter at the top level, organized into five pillars that map neatly to legislation and regulation
  • it is more explicit about power and accountability, not only research ethics
  • it is oriented toward governance coalitions, rather than consensus among AI researchers and executives

Whether that makes it more effective is an open question, but it is clearly built for political translation.

The hard questions it raises

The declaration is ambitious, and some of its most prominent demands create genuine implementation challenges.

Defining superintelligence without loopholes

Calling for a prohibition on superintelligence development until safety and controllability are demonstrated raises a definitional issue. What threshold counts as superintelligence, and who decides? If the line is vague, enforcement can become arbitrary or politicized. If the line is narrow, it can be gamed.

Turning principles into enforceable standards

Terms like meaningful human control, independent oversight, and no deceptive identity sound clear, but regulators need testable criteria. For example, what level of interpretability or audit access qualifies as meaningful oversight for a model deployed through an API?

Avoiding unintended tradeoffs

Some measures could centralize power if implemented poorly. Compliance regimes that are too expensive can advantage incumbents and entrench the very concentration of power the declaration opposes. The declaration’s anti monopoly pillar implicitly demands that regulation be designed with competition effects in mind.

What it could mean for you

You do not need to read the declaration as a distant policy artifact. Many of its planks translate into everyday expectations for AI products and services.

  • clear identification so you can tell when you are interacting with AI and when you are not
  • stronger protections for children against manipulative engagement design and emotional attachment tactics
  • real accountability so harm does not become a blame the model story with no remedy
  • data control so your information and inferred traits are not permanently trapped in training pipelines
  • limits on persuasive dependence so AI tools support your agency instead of replacing it

If the Pro Human AI Declaration has a consumer level takeaway, it is that AI policy should be judged by whether it preserves your ability to choose, to understand, and to opt out without losing basic participation in social and economic life.