
AI Companions and the Lessons of Family Law

By CLARE HUNTINGTON.

Virtual friends and lovers powered by artificial intelligence are rapidly moving to the center of our emotional and social lives. Millions of people turn to AI companions every day for conversation, romance, sexual intimacy, therapy, and education. AI companionship holds promise, potentially reducing loneliness, supporting people without access to mental health treatment, helping students learn, and offering a judgment-free space for sensitive conversations. But AI companionship also raises significant concerns. The technology’s addictiveness may exacerbate loneliness and undermine human relationships. Therapy bots may prove more harmful than helpful. AI companions can be emotionally abusive. And their access to the most intimate aspects of users’ lives poses distinct privacy challenges.

As lawmakers and policy experts reckon with these benefits and serious risks, they must account for the distinctive aspects of AI companionship. Unlike interacting with other forms of AI—being driven in an autonomous vehicle, say, or getting help with coding—people are in a relationship with their AI companion. Any regulatory approach must address this relationality, especially the human drive to attach to others and the vulnerability that comes with that attachment.

Legal scholars have long argued that the regulation of technology must account for relationality. This Article demonstrates that family law—the law of relationships—is a ready means to do so. As a foundational matter, any effort to regulate AI companionship must explain why the legal system should act. Family law helps answer this question by debunking the widespread belief that relationships are purely a private matter. Family law establishes the strong state interest in nurturing positive relationships and addressing harm in abusive and neglectful relationships. These state interests apply not only to human relationships but also to human-AI relationships.

Family law also helps answer the question of how to regulate AI companionship. Family law recognizes, for example, that legal intervention is often necessary to shift the power imbalance that facilitates harmful relationships—a lesson that should be applied to the power imbalance between technology companies and people using AI companions. And family law teaches that mental health professionals must have expertise and licensure to work with a person of any age, although AI companions marketed for therapeutic purposes have not been subject to similar gatekeeping. Finally, family law holds lessons for advocacy, showing that it is possible to advance reasonable regulation notwithstanding the polarized political climate and considerable antipathy, at least at the federal level, to regulating the technology industry. Family law points, for example, toward state-level interventions rather than action by Congress or federal agencies, and it demonstrates that regulations targeting minors are more broadly accepted than those targeting adults.

In short, AI companionship is a new kind of relationship, bringing profound and unrecognized change to the landscape of our intimate lives. Legal scholars and policymakers must start grappling with this new world now. Family law holds great promise to accelerate that reckoning.