AI in Social Care: Why “Mostly Works” Is Not Acceptable

AI is moving quickly into social care, from documentation support and risk prediction to decision support in safeguarding and care coordination. For overstretched care systems, the promise is compelling: reduced admin burden, earlier risk detection, and better allocation of limited resources.

But in social care, AI systems do not operate in a neutral environment. They are embedded in decisions affecting elderly people, individuals with disabilities, and vulnerable families. In this context, “mostly works” is not good enough.

In healthcare and public services, even low-frequency errors can cause disproportionate harm, a concern increasingly reflected in global health AI guidance. The World Health Organization (WHO) has warned that AI systems in health settings must meet strict standards of safety, transparency, and human oversight because failures directly affect patient and citizen wellbeing.


When AI Errors Affect Human Lives

In many digital sectors, AI mistakes are inconvenient. In social care, they can be life-altering.

An incorrect risk flag may lead to unnecessary intervention in a family’s life. A missed signal may delay support for someone at risk of neglect or abuse. A flawed prioritisation system may mean limited resources are directed away from those in greatest need.

Healthcare AI safety research emphasises that AI systems used in care contexts must be treated as safety-critical, meaning they require higher assurance, monitoring, and governance than general-purpose tools.

Unlike consumer AI, these systems influence decisions about housing support, safeguarding, medication adherence, and care planning. When AI hallucinations (plausible but incorrect outputs) enter these processes, they introduce risks that frontline professionals may not easily detect.


The Risk of Bias in Social Care Systems

Social care data reflects structural inequalities. Historical patterns of intervention, reporting, and access to services can encode bias into datasets. When AI models learn from this data, they risk perpetuating or amplifying disparities.

Research on algorithmic fairness in public sector AI highlights that predictive systems can disproportionately affect marginalised communities when historical data reflects unequal treatment.

In social care, this can mean:

  • Certain groups being flagged as higher risk more often
  • Over-surveillance of already disadvantaged populations
  • Under-recognition of needs in underrepresented communities

This is why algorithmic fairness in healthcare and public services is not just a technical issue; it is an ethical and governance concern.

In social care, biased systems do not simply distort data; they shape real-world interventions. They influence who is flagged as high risk, who receives attention first, and who may be overlooked. Over time, this can reinforce existing inequalities rather than reduce them, particularly for communities that already experience structural disadvantage. Addressing bias therefore requires more than model tuning: it demands governance structures that include impact assessments, diverse stakeholder input, and ongoing monitoring of outcomes, not just model performance. Without this, AI risks becoming a mechanism that scales historical inequities under the appearance of objectivity.
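
To make "ongoing monitoring of outcomes, not just model performance" concrete, here is a minimal sketch of one check a team might run: comparing how often a risk model flags people in different demographic groups. The group labels and data are illustrative assumptions only, and a gap in flag rates is a prompt for human investigation, not proof of bias on its own.

```python
from collections import defaultdict

def flag_rates_by_group(records, flag_key="flagged", group_key="group"):
    """Compute the share of flagged cases per demographic group.

    `records` is a list of dicts, e.g. {"group": "A", "flagged": True}.
    This is a monitoring aid, not a fairness guarantee: a large gap in
    flag rates should trigger human review of the underlying cases.
    """
    counts = defaultdict(lambda: {"flagged": 0, "total": 0})
    for record in records:
        group = record[group_key]
        counts[group]["total"] += 1
        counts[group]["flagged"] += int(bool(record[flag_key]))
    return {g: c["flagged"] / c["total"] for g, c in counts.items() if c["total"]}

# Invented data: two groups with visibly different flag rates.
sample = (
    [{"group": "A", "flagged": True}] * 30 + [{"group": "A", "flagged": False}] * 70
    + [{"group": "B", "flagged": True}] * 55 + [{"group": "B", "flagged": False}] * 45
)

for group, rate in sorted(flag_rates_by_group(sample).items()):
    print(f"Group {group}: {rate:.0%} of cases flagged as high risk")
```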


Why Accuracy Metrics Hide Real-World Harm

AI systems are often evaluated using aggregate accuracy metrics. But high overall performance can conceal serious failures in specific subgroups. A landmark study published in Science found that a widely used healthcare risk algorithm systematically underestimated the needs of Black patients because it used healthcare spending as a proxy for health status, showing how a model can appear accurate overall while producing unequal outcomes in practice.

In social care, the implications are similar. A model that is “90% accurate” may still consistently fail individuals with complex needs, rare conditions, or atypical living situations: precisely the people who rely most on support. This is why AI governance and ethics guidance increasingly stresses the importance of evaluating system performance across demographic and contextual segments, rather than relying solely on overall accuracy metrics. For example, global health AI guidance from the World Health Organization highlights fairness, bias monitoring, and subgroup performance analysis as key safety requirements.
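
As a rough, invented illustration of why subgroup analysis matters, the sketch below shows how a model can report 90% accuracy overall while performing far worse for a smaller group with complex needs. The group labels and counts are assumptions chosen purely to make the arithmetic visible.

```python
def accuracy(pairs):
    """Fraction of (prediction, actual) pairs that agree."""
    return sum(pred == actual for pred, actual in pairs) / len(pairs)

# Invented cases tagged with a subgroup label: (group, prediction, actual).
cases = (
    [("typical", 1, 1)] * 86 + [("typical", 0, 1)] * 4
    + [("complex_needs", 1, 1)] * 4 + [("complex_needs", 0, 1)] * 6
)

overall = accuracy([(pred, actual) for _, pred, actual in cases])
print(f"Overall accuracy: {overall:.0%}")  # 90%

for group in sorted({g for g, _, _ in cases}):
    subgroup = [(pred, actual) for g, pred, actual in cases if g == group]
    # typical: ~96%; complex_needs: 40%
    print(f"  {group}: {accuracy(subgroup):.0%} on {len(subgroup)} cases")
```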


Human-in-the-Loop as a Safety System

In social care, AI should support professional judgment, not replace it. Human-in-the-loop design, where professionals review, interpret, and contextualise AI outputs, is a core safeguard.

WHO guidance on AI in health stresses that human oversight must remain central, especially where AI outputs influence clinical or care decisions.

But human oversight is only effective if:

  • Professionals understand system limitations
  • Outputs are explainable
  • Workflows allow time for review
  • Responsibility remains clearly assigned

Otherwise, AI can create “automation bias,” where professionals over-trust system recommendations.

In social care, this risk is amplified by workload pressure, staffing shortages, and time constraints. When systems present outputs with an appearance of authority, such as risk scores, prioritisation flags, or recommended actions, professionals may default to agreement, especially in high-demand environments. Human-in-the-loop only functions as a safety system if the “human” remains an active decision-maker rather than a passive confirmer. That requires training on AI limitations, clear visibility into how recommendations are generated, and workflows that allow space for professional judgment. Without these conditions, human oversight becomes symbolic rather than protective, and AI shifts from decision support to silent decision influence.
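
One way to keep the “human” an active decision-maker is to make the review step explicit in both the workflow and the record it leaves behind. The sketch below is a hypothetical illustration under assumed field names, not a reference design: each AI recommendation is stored alongside the professional’s own decision and rationale, and a review with no human reasoning attached is rejected.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReviewedRecommendation:
    """An AI recommendation paired with a professional's explicit decision."""
    case_id: str
    model_output: str        # e.g. "high_risk"
    model_rationale: str     # explanation surfaced to the reviewer
    reviewer_id: str
    reviewer_decision: str   # "accept", "override", or "escalate"
    reviewer_rationale: str
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def record_review(case_id, model_output, model_rationale,
                  reviewer_id, reviewer_decision, reviewer_rationale):
    """Refuse to log a review that carries no human reasoning."""
    if not reviewer_rationale.strip():
        raise ValueError("A review must include the professional's own rationale.")
    return ReviewedRecommendation(case_id, model_output, model_rationale,
                                  reviewer_id, reviewer_decision, reviewer_rationale)

# Example: the professional overrides a risk flag and records why.
review = record_review(
    case_id="case-042",
    model_output="high_risk",
    model_rationale="Missed appointments and a prior safeguarding referral.",
    reviewer_id="sw-117",
    reviewer_decision="override",
    reviewer_rationale="Appointments missed due to a hospital stay; family support in place.",
)
print(review.reviewer_decision, "-", review.reviewer_rationale)
```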


Governance for Vulnerable Populations

AI in social care operates in a governance environment that must prioritise protection of vulnerable populations. This includes:

  • Transparency about how AI systems are used
  • Clear accountability structures
  • Ongoing monitoring for bias and drift
  • Mechanisms for appeal or human review

Public sector AI governance research emphasises that systems affecting citizens’ rights and wellbeing require stronger safeguards than typical enterprise AI deployments.

For healthcare tech leaders, this means designing AI governance not as an afterthought, but as part of system architecture. Documentation, traceability, and oversight processes are not compliance burdens; they are safety mechanisms.
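
As a small illustration of what traceability can look like in practice, here is a sketch (field names are assumptions, not a standard schema) that writes each AI-assisted decision as an auditable record tying the output to the model version, the inputs it saw, and the time it ran.

```python
import json
from datetime import datetime, timezone

def trace_entry(case_id, model_name, model_version, inputs_summary, output):
    """Build a traceability record for one AI-assisted decision so it can be
    audited later: which model, which version, which inputs, which output, when."""
    return {
        "case_id": case_id,
        "model": model_name,
        "model_version": model_version,
        "inputs_summary": inputs_summary,
        "output": output,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: one prioritisation decision, serialised as an auditable JSON record.
entry = trace_entry(
    case_id="case-042",
    model_name="priority-scorer",
    model_version="2.3.1",
    inputs_summary={"referral_source": "gp", "open_assessments": 2},
    output={"priority": "review_within_48h", "score": 0.71},
)
print(json.dumps(entry, indent=2))
```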


When “Mostly Works” Is a Risk, Not a Result

AI can reduce administrative burden and improve coordination in social care. But the context matters. These systems operate where people are vulnerable, resources are scarce, and errors carry human consequences.

“Mostly works” may be acceptable in content recommendation or internal productivity tools. In social care, it is not.

Healthcare technology leaders have a responsibility to ensure AI systems in social care are:

  • Safe
  • Fair
  • Transparent
  • Governed with care

Because when AI supports decisions about vulnerable lives, performance is not just a technical metric; it is a matter of trust.
