
Artificial Intelligence, Digital Vulnerability, and Social Work: The Challenges of Yesterday's Future


Featured article from Global Newsletter April 2026
Written by Chaime Marcuello-Servós, Universidad de Zaragoza. 

Introduction
The rapid advancement of artificial intelligence (AI) is outpacing the social frameworks designed to manage its consequences, deepening structural inequalities in access to computational power that fall disproportionately on the most vulnerable. Building on the concept of digital vulnerability —encompassing data privacy, unequal access, digital literacy, online harms, and algorithmic discrimination— this essay argues that Social Work must move beyond reactive intervention to actively shape the co-design of AI-driven systems, advocate for inclusive digital policies, and reposition digital well-being as a central axis of professional practice across the life course.

Yesterday's Future
The future arrived before we finished imagining it. What social thinkers, policymakers, and practitioners envisioned as the technological horizon of tomorrow has already become the operational reality of today, and it is accelerating still (Marcuello-Servós, 2024). The AI that yesterday seemed a distant prospect is now embedded in hiring algorithms, welfare eligibility systems, healthcare diagnostics, and educational platforms. Yet the social imagination, the regulatory frameworks, and above all the professional tools of Social Work have not kept pace.

This gap —between the speed of AI deployment and the speed of social response— is itself a form of inequality. It is not merely a technical lag; it is a structural asymmetry that falls hardest on those with least power to absorb its consequences. Access to AI's computational power is profoundly unequal: while corporations and wealthy nations leverage vast processing capacities to optimize decisions, predict behaviours, and generate value, individuals and communities in vulnerable situations interact with the outputs of those systems without understanding them, without influencing them, and without meaningful recourse (Ottmann and Noble, 2026). In the field of social intervention and the construction of social well-being, this asymmetry translates directly into a deepening of existing inequalities; those who most need the benefits of technological progress are precisely those least equipped to access, navigate, or contest its mechanisms.

Digital effects
Digital vulnerability, as a framework, moves well beyond the classical notion of the digital divide. As López-Peláez and Marcuello-Servós define it, «Digital vulnerability in the context of digital Social Work can be defined as the increased susceptibility and risk faced by individuals or communities in the digital realm» (López-Peláez & Marcuello-Servós, 2026, p. 250). This definition captures a multidimensional reality that unfolds across five interconnected domains. First, threats to data privacy and security place personal information, digital identities, and sensitive records at risk of exposure, misuse, or breach, undermining individual autonomy and well-being. Second, unequal access to digital resources and opportunities deepens pre-existing socioeconomic inequalities, preventing entire groups from participating fully in digital society. Third, insufficient digital literacy and competence leave many people ill-equipped to navigate online environments critically and safely, making them more susceptible to misinformation, manipulation, and exploitation. Fourth, exposure to online risks and harms (including cyberbullying, harassment, and hate speech) carries serious consequences for mental health and social development. Fifth, algorithmic bias and discrimination embedded in AI-driven systems can systematically reproduce and intensify the marginalization of already vulnerable populations.

When AI amplifies these dynamics —through opaque decision-making systems, biased training data, or automated gatekeeping in public services— vulnerability is no longer an individual condition. It becomes structurally produced and institutionally reinforced. Social Work cannot afford to treat this as a peripheral concern.

In the short term, one of the most pressing challenges is the digitization of public administration and welfare services. As governments deploy AI-driven platforms to allocate benefits, assess eligibility, or manage casework, the risk of reproducing and deepening existing inequalities is substantial. Individuals with low digital literacy, limited internet access, or distrust toward institutions face compounded barriers. Social workers must become active participants in the co-design and evaluation of these systems, advocating for transparency, human oversight, and genuine citizen participation. Projects oriented toward reducing bias in digitized administrations —such as those strengthening participatory mechanisms— point in the right direction and deserve both institutional support and systematic replication.

In the medium term, the profession faces the challenge of integrating digital well-being into the life-course perspective that has long been central to Social Work practice. Digital vulnerability is not a fixed condition confined to specific populations; it is dynamic and context-dependent, affecting different people at different biographical moments: childhood, youth transitions, working life, old age. Social policies should be redesigned with this temporal dimension in mind, ensuring that interventions address vulnerability as it evolves across the life span rather than targeting isolated episodes. Research agendas, training curricula, and service models all need to incorporate this longitudinal sensitivity.

Final remarks
Equally important is the development of a critical and proactive stance toward artificial intelligence itself. Social Work has historically engaged with the unintended consequences of social and technological change. That tradition must be reinvigorated. The profession should contribute to ethical debates about AI design, press for regulatory frameworks that protect the most vulnerable, and build alliances with digital rights organizations, researchers, and policymakers.

Ultimately, the goal is not to resist technological change, but to ensure that it is inclusive, equitable, and reversible where necessary. AI can support human flourishing, but only if Social Work helps shape the conditions under which it is deployed. The window for meaningful influence is open now; the decisions made in the short and medium term will define the digital landscape for generations to come.

References
Ottmann, G. and Noble, C. (eds.) (2026). AI and the Disruption of Welfare: Challenges for Social Work Education and Practice. Routledge, Taylor & Francis Group.

López-Peláez, A. and Marcuello-Servós, C. (2026). Digital vulnerability, Artificial Intelligence and coercive practices: Contributions from digital Social Work. In G. Ottmann and C. Noble (eds.), AI and the Disruption of Welfare: Challenges for Social Work Education and Practice (pp. 242–254). Routledge, Taylor & Francis Group.

Marcuello-Servós, C. (2024). Artificial intelligence and Sociocybernetics: New perspectives for social theory and practice. Paper presented at the 18th International Conference of Sociocybernetics, "Artificial Intelligence as a Social Issue". Krakow University of Economics, Poland, 24–29 June.