Developing ethical AI for social work

Looks at the systemic risks associated with the "privatisation" of AI development.

Overview

As the government moves to integrate Artificial Intelligence (AI) into public services, social work faces a critical juncture. This briefing explores the systemic risks associated with the "privatisation" of AI development, including reliance on international tech giants and the potential erosion of professional values. It contrasts these risks with emerging UK-based initiatives designed to foster "home-grown," ethically aligned AI through collaboration between computer scientists and social work experts.

Learning outcomes

As a result of engaging with this resource, participants will be able to:

  • Understand the risks of unregulated AI development, including the tension between profit motives and social work values.

  • Identify the strategic importance of the UK's AI Opportunities Action Plan for creating sovereign, sector-specific tools.

  • Consider the necessity of interdisciplinary collaboration to prevent "techno-solutionism."

  • Explore key UK initiatives (the Digital Care Hub, SCALE, the Oxford Institute for Ethics in AI) currently shaping the evidence base.

Introduction: The development dilemma

AI development is often framed as a technical challenge, yet for social work, it is fundamentally an ethical one. The central concern is not just what is being developed, but who is developing it and how.

Historically, there has been a disconnect between computer science and social care, with technology often imposed on the workforce without adequate input from end-users. This has led to "failed experiments", such as predictive analytics tools that collapsed because engineers did not understand the nuance of real-world practice. Today, as the government’s AI Opportunities Action Plan accelerates adoption, the sector must scrutinise the "black box" of development to ensure that algorithmic systems embody the core values of the profession.

Key messages: Risks in the current landscape

During the research for ‘Understanding the emerging use of artificial intelligence in social work education and practice’ [link to SWE report once published], subject matter experts from computer science, social work and social work education raised a range of concerns about AI development. The following risks and ethical challenges were identified.

The privatisation of AI and personal information in social work

Partnerships with big and small tech companies offer opportunities as well as risks and ethical challenges. AI development is a highly competitive and potentially profitable business, and the government’s commitment to integrating AI into public services has attracted the attention of private providers. There are concerns about what this could mean for the safe and responsible development of more sophisticated AI models.

The risks of unregulated AI development and a lack of standards

AI development could happen without proper ethical consideration, without understanding of the sensitivity of the information social care manages, or without regard for sector-specific needs. Examples were given of providers claiming predictive and decision-making abilities for their products without evidence of reliability or error rates, and without transparency about how the algorithm reaches its conclusions. Collaborating with technology companies offers opportunities, but there are concerns about the privatisation of AI in public services and the potential exploitation of personal information. This may reflect trust issues and a perceived conflict of values (profit versus social good).

Over-dependence on private international companies

There are concerns about over-reliance on international AI providers that predominantly operate from the US, where data privacy and regulatory frameworks are more relaxed than in the UK. This raises questions about security and accountability, especially in a politically unstable climate, and about what would happen if providers went out of business or failed to meet expectations.

Home-grown talent

The allocation of public sector funding to major international companies also presents a range of ethical challenges. Several subject matter experts emphasised the need for more home-grown talent and signalled the potential for developing a UK-based, social care-specific AI model that would provide better oversight. This approach aligns with the government’s AI Opportunities Action Plan.

Cross-discipline collaboration

The relationship between developers and subject matter experts is essential for mitigating risks associated with AI development. Subject matter experts noted that, historically, there has been a disconnect between computer scientists and social care professionals, with technology sometimes being developed without adequate input from end-users. Some previous AI developments, such as predictive analytics experiments, failed because engineers did not understand real-world applications and user needs. A senior computer science educator noted that computer science education should place more emphasis on ethics and social good, as the existing curriculum primarily focuses on technical aspects. Establishing mutual understanding and a shared purpose is crucial to ensuring that AI applications embody core social work values.

A spotlight on collaborative innovation

To bridge the gap between technologists and practitioners, several "incubators" and initiatives have emerged. These projects aim to co-design AI tools with a deep understanding of the social work context.

Focus: Sector-led guidance and principles

The Digital Care Hub (formerly Digital Social Care) leads the sector’s response to digital transformation in adult social care, ensuring AI is developed collaboratively and used responsibly.

  • The Oxford Statement: In partnership with the Institute for Ethics in AI, they convened the summit that produced the "Oxford Statement," outlining what the sector requires to use AI safely.

  • Practical tools: They have published Ethical Principles for the use of AI in Social Care and specific guidance for care workers, helping commissioners and providers navigate the ethical challenges of procurement and deployment.

Focus: Ethical governance and policy

The Oxford University Institute for Ethics in AI brings together philosophers, humanities experts, and technical developers to scrutinise the societal impact of AI.

  • Partnership: Works with the Digital Care Hub to translate high-level ethics into practical guidance for the sector.

  • Scope: Investigates broad challenges ranging from algorithmic bias in facial recognition to the impact of AI on human well-being and democratic governance.

Focus: Co-production and workforce development

Launched in April 2025, the Centre for Social Care and Artificial Intelligence Learning (SCALE) fosters direct collaboration between computer science and social care.

  • Mission: To place people with lived experience and practitioners at the centre of AI development.

  • Educational innovation: Cardiff University is introducing a new module on "AI for Social Good" for its Master’s in AI, requiring computer science students to engage with social care ethics before they graduate.

Implications for practice

For commissioners and strategic leads

  • Procurement principles: Leverage resources such as the AI Playbook for the UK government, and other guidance and tools from the Government Digital Service and the Department for Science, Innovation and Technology (DSIT), to ensure ethical, practical, and effective adoption of AI in public sector settings.

  • Demand evidence: Move beyond vendor sales pitches. Require evidence of reliability, error rates, and "explainability" before procuring AI tools.

  • Prioritise co-design: Favour "home-grown" or collaborative solutions that demonstrate active involvement of social workers and service users in the design phase.

For educators

  • Interdisciplinary curriculum: Social work education should include digital literacy, and computer science education should likewise include social ethics. The Cardiff University model offers a blueprint for this interdisciplinary approach.

Supporting effective practice

Reflective questions

Use these questions to reflect upon new AI proposals or partnerships.

  1. Transparency: How clearly can the AI tool explain its recommendations to practitioners and service users, and are these explanations accessible to non-technical audiences?
  2. Accountability: Who is responsible if the AI system makes an error that adversely affects a service user, and how are grievances or appeals handled?
  3. Bias and fairness: What measures are in place to identify, monitor, and mitigate potential biases within the AI system, especially those that could disproportionately affect vulnerable or marginalised groups?
  4. Resilience: If the provider of this AI system ceased trading tomorrow, what would be the impact on our statutory duties?