Care Norms and Carebots
Shaun Respess
8 February 2026

Can robots care well? In thinking about our budding relationships with embodied AI, it is essential to reflect on the emergent norms that make care possible for machines and humans alike.
As if pulled straight from the pages of science fiction novels or weekend cartoons, carebots are here to make our lives more convenient. These machines can socialize with us and assist with everyday mundane tasks, all without the risks of frustration or burnout that their human peers exhibit. According to numerous scholars in domains like elder care and neurodivergent education, they are the future of caregiving – either as replacements for human labor or as supportive tools. It should come as no surprise, then, that so many public and private institutions are eager to adopt carebots.
However, pessimists contend that carebots cannot replicate the requisite “touch” of care and would alienate users from authentic relationships. The underlying premise is that there is some standard of care and socialization that robots will never reach, even if that standard remains opaque. Critics essentially believe that carebots cannot fulfill end-users’ needs in some capacity, or that they will make a sham moral judgment on users’ behalf, and thereby assert that human caregivers are to be preferred.
Emergent Normativity
As defined by care scholars like Maurice Hamington, “good care” is a set of behavioral norms that emerge from particular relationships and are (largely) determined by practices of humble inquiry, inclusive connection, and responsive action. These norms are grounded in an intersubjectivity that robots seem to lack – a consciousness of oneself and others together in dynamic interactions where meaning is co-constructed. Determining what one ought to do to best meet the needs of another is a process of drawing on past experiences with other subjects, receiving and conceiving of the situations of the care-recipient(s) in question, and sustaining a responsibility for their welfare. Accomplishing this also necessarily involves one’s “mindful body” with others in felt experiences and interpersonal contact. This problem of intersubjectivity is what makes carebots so fascinating in comparison to their human counterparts: it is not completely implausible that robots could one day obtain the sensorimotor sensibilities to act upon their environments, self-reflect, and learn in dynamic interactions. Yet most of us are intuitively skeptical that robots will ever feel empathy or adapt in the way we have come to expect from good caregivers.
Synthetic Connection
Developers have made drastic strides in improving the emotional repertoire of carebots. Humanoid robots in many ways can now smile or scold, make eye contact, apply voice inflection, and utilize gestures to exhibit preferences. Some even possess synthetic skin to give users a sense of genuine contact, mirroring the artificial fur used in animal robot companions. These moves have certainly captured the attention of target populations – many elderly users for example will try to help “lost” or “confused” robots, will hesitate to harm them and reprimand those who do, or will intimate that the robot prefers one person over another. This is a problem of deception for many roboethicists. Core to the defense of carebots in these situations is the assertion that their synthetic sensitivity is a sufficient substitute for empathy; that the mere appearance of emotions based on feedback from others counts as connection.
Beyond the charge that an account such as this could unknowingly endorse sociopathic caregiving, empathy involves more than accurately predicting what emotions to apply in a given situation. It is what care scholars understand as a felt acknowledgment of mutual vulnerability. While many caregiving professions (e.g., psychotherapy) struggle with empathetic engrossment in ways that an impartial AI expert could remedy to an extent, empathy enables one to relate to the situations of another in more than rational ways, which prompts attention and action. We should therefore not be surprised when our robotic counselor seems unmoved or condescending when we share trauma. Moreover, it is unlikely that the carebot will grow from the experience in any meaningful way.
Care Alignment
The alignment problem is generally well-represented in the philosophy of AI: how do we develop a system that appropriately aligns with our moral values? Care seems misaligned between humans and robots not necessarily in terms of value elicitation – see the exceptional work on care-centered value-sensitive design in this respect – but in terms of the procedural habits required to enact those values. The reflection needed by a carebot to personally identify with what it purportedly cares about, to improvise as new contexts arise in a relationship, and to grasp the complicated social systems that constrain care for certain groups is limited at best. Improving the responsiveness of carebots is not untenable, but it would require habitual conditioning akin to how humans learn through continuous embodied experiences. This approach would also assume that robots possess the sensors to match the senses of their peers. Heuristic-driven AI is the future in this regard, as opposed to the brute-force computing model preferred by many developers, in which all problems are solved by an insatiable demand for more data and greater processing power.
That said, the norms informing such behaviors must still be driven by the human caregivers on the front lines, who are not obligated to train their potential replacements. The networked reciprocity from which we all learn how to provide and receive care rests on an inevitable association mediated by shared finitude. As these conditions are applicable to carebots only in a broad sense, they would have to be cordially invited as apprentices in kind. Entry into a care guild, however, does not imply that they will or could ever “care well.” While the threshold of carebot competency currently and prospectively remains unsettled, the conditions by which their actions are held to account are of tremendous importance.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.