Artificial care and companionship
This briefing examines the emerging use of AI for care and companionship.
Overview
As Artificial Intelligence (AI) becomes more embedded in daily life, the boundaries between human and artificial relationships are blurring. This briefing explores the emerging use of AI for care and companionship, examining the tension between increased accessibility and the risks of "pseudo-intimacy." It draws on recent findings regarding the deployment of care robots in the UK and the psychological impact of AI-generated empathy on people with lived experience.
Learning outcomes
As a result of engaging with this resource, participants will be able to:
- Understand the distinction between cognitive and affective empathy in AI interactions.
- Identify safeguarding risks associated with AI care and companionship.
- Reflect on the ethical implications of using AI to alleviate social isolation in aged care.
- Consider how to support people with lived experience to navigate digital relationships safely.
Introduction: The paradox of artificial connection
AI offers a dual reality for social work. On one hand, it holds the potential to democratise access to support, bridging learning and language gaps through intuitive, voice-activated interfaces. On the other, it introduces the risk of replacing meaningful human contact with "artificial connection".
For people with lived experience, navigating this landscape is complex. Research indicates a significant "recognition gap," with many individuals unaware they are interacting with an AI. As the sector begins to trial robotic care assistants and vulnerable individuals access AI companions independently, practitioners must be equipped to manage the boundaries of these new "relationships."
Key messages from research
The recognition gap and vulnerability
A critical safeguarding concern is that people may not recognise they are interacting with a machine. Data from the Office for National Statistics (ONS) reveals that one in three adults (33%) "hardly ever" or "never" recognise when they are using AI. This lack of awareness increases the risk of individuals sharing sensitive personal information with unverified chatbots or acting on unsafe advice (1).
The "empathy valley"
While AI can simulate conversation, the experience of empathy differs significantly. Research highlights that while AI can effectively imitate cognitive empathy (understanding an emotion), people place far higher value on affective empathy (experiencing the emotion) and motivational empathy (willingness to help). Participants in a 2024 Harvard study rated responses as less empathetic when they knew they were AI-generated, suggesting that the "human element" is intrinsic to the therapeutic value of support (2).
The rise of artificial intimacy and connection
Publicly available platforms now allow users to create highly personalised AI avatars, ranging from friends to romantic partners. Unlike human relationships, these avatars are designed to be compliant, often "telling you exactly what you want to hear." For vulnerable young people or isolated adults, this one-sided dynamic can alter expectations of healthy relationships and open pathways for financial or emotional exploitation.
The emerging risks of AI companions
An AI companion is a sophisticated form of chatbot designed to simulate a reciprocal human relationship, ranging from friendship and mentorship to romantic intimacy. Unlike task-oriented AI tools (such as customer service bots or smart speakers), these systems are explicitly programmed to foster emotional attachment and prolong engagement. They utilise generative AI to create distinct "personalities" that are available 24/7, offering users a sense of validation and responsiveness that can feel deeply personal. However, it is critical for practitioners to understand that these companions model "cognitive empathy" (the ability to identify and mirror emotions based on data) rather than "affective empathy" (the ability to genuinely feel and share those emotions), creating a form of "pseudo-intimacy" that carries specific developmental and ethical risks.
The "yes-man" effect
Unlike human relationships, which involve friction, compromise, and challenge, AI companions are designed to maximise engagement by validating the user. Research has found that AI companions often fail to recognise distress. In test scenarios, chatbots minimised or even validated suicidal ideation rather than signposting to help (7).
This risk was tragically highlighted in the US case of Sewell Setzer, a 14-year-old who took his own life in February 2024 after developing an obsessive emotional dependence on a Character.ai chatbot. The lawsuit states that Sewell became deeply attached to a chatbot persona named "Dany," based on Daenerys Targaryen from Game of Thrones. He reportedly spent months isolating himself to talk to the bot, texting it dozens of times a day and professing his love for it. The lawsuit alleges the AI did not flag his expressions of depression or suicidal thoughts to human moderators or his parents, nor did it offer help resources such as a crisis hotline. In their final exchange, according to the complaint, Sewell told the chatbot he loved her and would "come home" to her soon. The bot reportedly replied, "Please do, my sweet king."
Artificial care and companionship with older people
Social isolation among older adults, particularly those living with dementia, is a rising concern as care resources remain limited. Various AI tools and robots have been piloted, or are already in use, to improve the quality of care and enhance the wellbeing, dignity, and independence of older adults in care facilities or at home. Several types of robots have been tested in older people's care settings worldwide, including care robots designed to handle various household or caregiving duties (3), such as:
- bathing,
- assisting with walking and mobility,
- helping with medication, or
- providing companionship to reduce loneliness.
In January 2025, a health tech company began working with social care providers to trial the use of assistive robots to carry out 3,000 care visits a week to older people and people at risk in the UK. The robots are designed to enhance and complement the human care role rather than replace it, with the potential to reduce loneliness and improve wellbeing. It is estimated that care robots could reduce the cost of care and free up human carers' time (4).
It is important to consider how older people might feel about receiving artificial care and companionship, to evaluate what impact robots and AI companions have on loneliness, and to weigh the associated ethical challenges.
- The lived experience: Despite the operational benefits, user acceptance is mixed. The Digital Care Hub notes that older adults, who may have lower digital literacy, might accept artificial care simply to avoid "being a burden."
- Ethical tension: A systematic review found that 68.7% of participants did not believe a robot would reduce their loneliness, and 69.3% felt uncomfortable with the idea of a person with dementia being allowed to believe a robot was human (5).
The high cost of care robots makes them inaccessible for many people, which is why it's crucial to assess their cost-benefit ratios from both ethical and economic perspectives. This example highlights how vital it is to include social workers and people who access services throughout the AI development process, ensuring their input shapes ethical debates and impact assessments. While these technological advancements may seem fascinating, it remains important to consider risks such as misuse or unexpected outcomes.
Implications for practice
For commissioners and strategic leads
- Ethical procurement: When commissioning AI or robotic tools, cost-benefit analyses must include "ethical impact". Does the tool replace essential human contact? How does the AI tool alter the human relationship (positively and negatively)?
- Defining the role: Clearly define the boundaries. AI should handle transactional tasks (scheduling, reminders) to release time for relational tasks (emotional support), not replace them.
For practitioners
- Education: Support vulnerable people to recognise AI interactions and to interact with AI safely.
- Informed consent: When AI is being introduced with a person who accesses services, provide them with information about how AI is being used, how it will interact with their information, and what it can and can’t do.
For workforce development
- Critical thinking: Train staff to challenge the "anthropomorphism" of AI. Practitioners need to understand that an AI saying "I care" is a text prediction, not an emotional state.
- AI and digital literacy: Develop AI and digital literacy skills so practitioners can explain the AI tools they use, and the associated risks, in plain English.
Supporting effective practice
Reflective questions
Use these questions in supervision to explore the impact of AI on relationship-based practice.
- Authenticity: If a vulnerable person feels less lonely because of an AI companion, does it matter that the empathy isn't "real"? Where do we draw the ethical line?
- Consent: How do we ensure informed consent for vulnerable people interacting with AI care robots or companions, for example where someone has a cognitive impairment or dementia? Is it ethical to allow "benevolent deception"?
- Inequality: Are we creating a two-tier system where those with resources get human care, and those without get "artificial care"?
- Risk: Do our current safeguarding procedures cover "harm by algorithm," where the abuser is a chatbot validating negative thoughts?
- AI and digital literacy: How confident do you feel identifying and explaining digital harms?
References
1. Office for National Statistics (ONS), Public Awareness, Opinions and Expectations about Artificial Intelligence, 30 October 2023.
2. Rubin, M. et al., The Value of Perceiving a Human Response: Comparing Perceived Human versus AI-Generated Empathy, Digital Data Design Institute, Harvard, 2024.
3. Aged Care Research and Industry Innovation Australia (ARIIA), Types of Technology in Aged Care: Robots, 2025.
4. Hughes, O. and Lovell, T., AI Robots Carry out 3,000 Care Visits a Week, Digital Health, 2025.
5. Vandemeulebroucke, T. et al., The Use of Care Robots in Aged Care: A Systematic Review, Archives of Gerontology and Geriatrics, 2018.
6. Internet Matters, Me, Myself and AI Research: Understanding and Safeguarding Children's Use of AI Chatbots, 2025 [accessed 28 November 2025].
7. Common Sense Media and Stanford Brainstorm Lab, AI Companions and Relationships: Risk Assessment, Common Sense Media, April 2025 [accessed 13 January 2026].
Professional Standards
PQS:KSS - Relationships and effective direct work | Person-centred practice | Safeguarding
CQC - Effective | Caring
PCF - Rights, justice and economic wellbeing | Knowledge