
When AI Decides Who Gets Help: Why Social Welfare Needs a ProSocial Reset


Featured article from the Global Newsletter, January 2026
Written by Dr. Cornelia C. Walther, Senior Fellow, Wharton/Harvard.

Artificial intelligence is quietly becoming part of how societies decide who gets help, when, and under what conditions. From benefits eligibility to housing support, from disability assessments to unemployment services, algorithms are increasingly embedded in systems that touch the most vulnerable moments of people’s lives.

This is not a future scenario. It is already happening.

And that makes one question unavoidable: What kind of intelligence do we want guiding our social safety nets?

AI can reduce paperwork, speed up decisions, and help governments manage complex systems under pressure. But it can also do something far more consequential: it can reshape how dignity, fairness, and participation are experienced in everyday life. Used carelessly, it risks turning social support into automated surveillance. Used wisely, it could become one of the most powerful tools for social inclusion we have ever designed.

That difference does not lie in the technology itself. It lies in the intent we build into it.

AI Is a Means, Not a Moral Compass

Every major technological revolution has promised progress. And every one has also produced new inequalities.

The industrial age created gigantic wealth – while locking in harsh labor conditions. The digital age connected the world – while turning attention, behavior, and personal data into commodities. Each time, social policy arrived late, scrambling to repair damage that could have been prevented.

AI marks a more delicate threshold. Unlike previous technologies, it does not just amplify physical labor or information flow. It increasingly mediates human judgment itself – how decisions are made, how people are categorized, how risk and deservingness are defined.

That is why AI cannot be guided by efficiency alone. Speed and accuracy are not enough when the outcome affects someone’s ability to eat, access healthcare, or remain housed.

What must guide AI, instead, is Natural Intelligence: our human capacity to reason ethically, feel empathy, understand context, and take responsibility for consequences. AI can support those capacities – but it cannot replace them.

This is where the idea of ProSocial AI comes in.

From “Ethical AI” to ProSocial AI

Much of today’s conversation focuses on ethical AI: reducing bias, protecting privacy, improving transparency. These safeguards matter. But they are defensive. They aim to prevent harm, not to define what systems should actively enable.

A system can be technically ethical and still deeply misaligned with human needs.

Another popular frame, AI for Good, applies existing tools to positive causes – healthcare, climate, education. Again, valuable. But the “good” often lives in the use case, not in the architecture. The same system can easily be redeployed for extractive or exclusionary ends.

ProSocial AI goes further.

It starts from a different question: What kind of society are we trying to support? And then builds systems that are deliberately designed to reinforce human dignity, agency, and participation – by default, not as an afterthought.

In practical terms, ProSocial AI means AI systems that are:

  • Tailored to real social contexts, rather than imposed as one-size-fits-all solutions
  • Trained on data that counteracts exclusion instead of amplifying it
  • Tested for their impact on dignity, trust, and lived experience – not just performance metrics
  • Targeted at democratically defined goals, such as maximizing access to support rather than minimizing payouts (see the sketch after this list)
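
To make the last two points tangible, here is a minimal, purely illustrative sketch of an evaluation that reports an access-oriented metric (how often eligible people are wrongly denied) alongside conventional accuracy. The function name, data fields, and example records are hypothetical, not drawn from any real welfare system.

```python
# Hypothetical evaluation sketch: judge a benefits-eligibility model not only by
# overall accuracy, but by how often it denies support to people who are in fact
# eligible. All field names and records are illustrative placeholders.

def evaluate_access(records):
    """records: iterable of dicts with 'eligible' (ground truth) and
    'approved' (model decision), both booleans."""
    total = correct = eligible = wrongly_denied = 0
    for r in records:
        total += 1
        correct += r["eligible"] == r["approved"]
        if r["eligible"]:
            eligible += 1
            wrongly_denied += not r["approved"]
    return {
        "accuracy": correct / total,                     # conventional metric
        "false_denial_rate": wrongly_denied / eligible,  # access-oriented metric
    }

# A model can look "accurate" overall while still failing many eligible people.
sample = [
    {"eligible": True,  "approved": True},
    {"eligible": True,  "approved": False},   # an eligible person wrongly denied
    {"eligible": False, "approved": False},
    {"eligible": False, "approved": False},
]
print(evaluate_access(sample))  # {'accuracy': 0.75, 'false_denial_rate': 0.5}
```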

In social welfare, that shift is decisive.

Why Welfare Systems Are the Real Test Case

Social welfare is where values become operational.

When a benefits application is rejected, when support is delayed, when an appeal is ignored, the consequences are not abstract. They show up as stress, instability, shame, and loss of trust in institutions.

Yet many welfare systems today use technology primarily to create friction – to deter claims, detect fraud, and manage scarcity. Automation often shifts the burden of proof onto individuals least able to navigate complex digital systems.

The result has been well documented: wrongful benefit terminations, opaque decisions, and entire groups excluded because their data does not “fit” the model.

These failures are not inevitable. They are design choices.

ProSocial AI proposes a different logic: use intelligence to remove friction, not add it. Use systems to identify who is missing out, not who to exclude.

Design processes that assume eligibility and help people claim their rights, rather than forcing them to prove worthiness repeatedly.

To do this well, we need to look beyond single systems and consider how AI operates across different layers of society.

A Four-Level View of Impact

AI in welfare does not affect only individuals. Its effects ripple across institutions, national policy, and even global power dynamics. A ProSocial approach therefore looks at four interconnected levels:

1. Individuals (micro level)
This is where AI is felt most directly. For many people, automated systems already feel impersonal, intimidating, and uncontestable.

A ProSocial approach asks: Does this system strengthen or weaken a person’s sense of agency?

That requires more than digital access. It requires what can be called double literacy:

  • understanding oneself – one’s needs, limits, emotions, and values
  • understanding how algorithms work, where they fail, and how to challenge them

At this level, good systems should foster:

  • awareness of how decisions are made
  • appreciation for human vulnerability rather than exploitation of it
  • acceptance that AI is a tool, not an authority
  • accountability on both sides – clear responsibility and meaningful recourse

If interacting with a system leaves people feeling smaller, confused, or powerless, the system has failed – no matter how “efficient” it is.

2. Organizations and communities (meso level)
Social workers, case managers, and frontline staff often experience AI as something imposed from above. Rigid systems can strip away professional judgment and turn care into compliance.

ProSocial AI supports Hybrid Intelligence: machines handle routine tasks, while humans focus on relationships, context, and moral judgment.

Instead of replacing professionals, AI should function like an administrative exoskeleton – reducing paperwork, summarizing information, translating bureaucratic language – so people can do the work only humans can do.

Crucially, this requires co-design. Systems must be built with those who use them and those affected by them, not merely for them.

3. National policy (macro level)
At the level of governments, AI can either entrench surveillance or enable smarter, fairer welfare states.

Used well, AI can:

  • detect non-take-up of benefits and proactively reach out (see the sketch after this list)
  • anticipate social stress (job loss, inflation, housing pressure) and adjust support early
  • move systems from reactive crisis management to preventive stabilization
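
As a purely hypothetical illustration of the first point, the sketch below flags households that appear eligible for a benefit they are not receiving, producing a list for human outreach rather than automated decisions. The eligibility rule, field names, and income ceiling are placeholder assumptions, not any real scheme's criteria.

```python
# Hypothetical sketch of non-take-up detection: flag households that appear
# eligible for a benefit they are not currently receiving, so caseworkers can
# proactively reach out. The eligibility rule and field names are placeholders.

from dataclasses import dataclass

@dataclass
class Household:
    household_id: str
    monthly_income: float
    household_size: int
    receives_benefit: bool

def appears_eligible(h: Household, income_ceiling_per_person: float = 800.0) -> bool:
    # Placeholder rule: income below a per-person ceiling. A real scheme would
    # encode the actual legal criteria and be reviewed by domain experts.
    return h.monthly_income < income_ceiling_per_person * h.household_size

def flag_non_take_up(households: list[Household]) -> list[str]:
    """Return IDs of households that look eligible but receive nothing --
    a list for human outreach, never for automated decisions."""
    return [h.household_id for h in households
            if appears_eligible(h) and not h.receives_benefit]

outreach_list = flag_non_take_up([
    Household("A", 1200.0, 3, receives_benefit=False),  # looks eligible, not claiming
    Household("B", 2600.0, 2, receives_benefit=False),  # above the placeholder ceiling
    Household("C", 900.0, 2, receives_benefit=True),    # already supported
])
print(outreach_list)  # ['A']
```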

Used poorly, it normalizes suspicion and erodes trust.

A ProSocial welfare state evaluates success not just by cost savings, but by a quadruple bottom line:

  • purpose: are systems aligned with shared social goals?
  • people: do they improve wellbeing and cohesion?
  • prosperity: are they financially sustainable?
  • planet: do they respect environmental limits?

4. Global systems (meta level)
AI capabilities are highly concentrated. Most models are built in, and for, a small number of countries and corporations.

Without intentional correction, this risks exporting biased systems worldwide – what some have called digital or data colonialism.

A ProSocial approach demands:

  • investment in local capacity and data sovereignty
  • smaller, more energy-efficient models where appropriate
  • global standards that protect dignity without imposing cultural uniformity

Welfare AI should not serve one society today by undermining planetary stability or global equity tomorrow.

The Limits We Must Acknowledge

ProSocial AI is not a magic solution.

Technology cannot fix poverty, inequality, or injustice on its own. Without political will, institutional reform, and civic participation, even well-designed systems can become empty gestures.

There is also the risk of commercial capture: public institutions becoming dependent on opaque private platforms. Building ProSocial AI requires public investment, open standards, and procurement rules that reward transparency and participation.

Finally, “prosocial” has no single, universal definition. What counts as dignity or fairness must be shaped locally, through democratic processes, within shared human rights boundaries.

Acknowledging these limits is not a weakness. It is what keeps ProSocial AI grounded.

A Choice, Not a Fate

AI has begun to reshape societies – and social welfare systems with them.

Whether this dynamic deepens exclusion or expands participation – whether it automates distrust or rebuilds the social contract – is up to us.

The decisive factor is not technical sophistication. It is human leadership – the willingness to align systems with values before they harden into infrastructure.

ProSocial AI is not about making machines more human. It is about making our institutions more humane.

Practical Takeaways

For citizens

  • Ask not only what a system decides, but how and why
  • Expect explanations and challenge opaque decisions
  • Build basic algorithmic literacy alongside self-awareness

For professionals

  • Treat AI as support, not authority
  • Insist on systems that preserve judgment and relational work
  • Document when technology undermines dignity or trust

For policymakers

  • Measure success by access, dignity, and stability – not just savings
  • Use AI to detect exclusion and non-take-up, not only fraud
  • Require co-design, transparency, and clear accountability

For technologists

  • Design for vulnerability, not ideal users
  • Test emotional and social impact, not just accuracy
  • Build systems that slow down high-stakes decisions when needed (see the sketch below)
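
The last point can be read as a routing rule: when the stakes are high or the model is unsure, the system refuses to decide automatically and hands the case to a person. The sketch below is illustrative only; the case categories and confidence threshold are assumptions, not recommendations.

```python
# Illustrative sketch: route high-stakes or low-confidence cases to human review
# instead of deciding automatically. Categories and thresholds are placeholders.

HIGH_STAKES = {"benefit_termination", "eviction_risk", "disability_reassessment"}

def route_decision(case_type: str, model_confidence: float,
                   confidence_floor: float = 0.95) -> str:
    """Return 'auto' only for routine, high-confidence cases;
    everything else waits for a human."""
    if case_type in HIGH_STAKES:
        return "human_review"      # never automate the most consequential calls
    if model_confidence < confidence_floor:
        return "human_review"      # uncertainty is a reason to slow down
    return "auto"

print(route_decision("address_update", 0.99))        # auto
print(route_decision("benefit_termination", 0.99))   # human_review
print(route_decision("address_update", 0.70))        # human_review
```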

For society at large

  • Remember: AI reflects the priorities we encode
  • If we want welfare systems that support participation and dignity, we must design them that way – on purpose

Everything is connected. Artificial and natural intelligence increasingly influence each other. As we transition to a hybrid era, we must ensure that AI remains a means to an end – and leave the ends themselves to be defined by natural intelligence.