Can you really have a romantic relationship with AI?

Yes, you can. And it can be good for you. But the danger is seeing it as a substitute for a human connection. Three experts weigh in.
Falling in love with a robot is no longer just a science-fiction trope. As artificial intelligence becomes better at mimicking human behavior and speech patterns, people are increasingly turning to AI not just to save time on research or to generate quirky images but to find companionship, connection and even love.
But how healthy is it for people to have close friends or romantic partners who are AI?
The Wall Street Journal hosted a videoconference with three experts offering differing views on this question: Nina Vasan, psychiatrist and founder of Brainstorm: The Stanford Lab for Mental Health Innovation; Julian De Freitas, assistant professor of business administration in the marketing unit at Harvard Business School; and Shannon Vallor, philosophy professor at the University of Edinburgh and author of “The AI Mirror.”
Here are edited excerpts of the discussion.
We crave connection
WSJ: Do you think men and women will increasingly use AI for true deep friendships and even romantic relationships?
SHANNON VALLOR: No, because true deep friendships and romantic relationships are not possible with AI; relationships of these kinds are two-way bonds that require more than one party to be aware of them. A “large language model” [the deep-learning AI that understands human language] has no awareness of anything at all. It’s a mathematical tool for text-pattern analysis and generation. It has no way to be aware that it is in a relationship, or even aware of the other party’s existence as a person. The fact that it can mimic and feign such awareness is the danger.
JULIAN DE FREITAS: I think they will. In our research, we’ve seen that highly engaged users of a leading AI companion report feeling closer to their virtual partner than to almost any real-life relationship—including close friends—ranking only family members above it. Further, when the app removed its erotic role-play feature, users exhibited signs of grief, suggesting that they had deeply bonded with the chatbot.
From an immediate user-perception standpoint, what matters is that the chatbot makes them feel understood—not the abstract question of whether an AI can truly “understand” them. And with the pace of innovation today, it’s potentially just a matter of time before AI companions feel more attuned to our needs than even our closest human connections.
NINA VASAN: Yes, absolutely. Not because AI is truly capable of friendship or love, but because we are. Humans are wired to bond, and when we feel seen and soothed—even by a machine—we connect. Think about existing machines like robot dogs that offer comfort and companionship, for example. We’re not falling in love with the AI. We’re falling in love with how it makes us feel.
In a world where loneliness is rampant, especially among young people who’ve grown up as digital natives emotionally fluent with tech, AI relationships will feel less like science fiction and more like a natural next step. These relationships won’t replace human connection, but they will fill a void. Whether that’s healthy or harmful depends 100% on how we design and use them.
A one-sided relationship
WSJ: What might happen to people’s capability to thrive in the real world if they rely too much on the ease of an always-supportive AI relationship?
VASAN: As a psychiatrist, I often see the effects of one-sided relationships, where one partner always pleases, avoids conflict, or suppresses their needs to keep the peace. On the surface, these relationships look smooth, but under the surface, they’re emotionally stunted. The person being “pleased” often feels disconnected, unsure what their partner really thinks or wants. And the person doing the pleasing feels invisible and resentful.
That same emotional work is what’s missing in AI relationships. At first, it feels like safety. But over time, it can erode your capacity to navigate the real world, where people are imperfect, messy and sometimes disagree with you. Real intimacy happens in the repair, not in the perception of perfection. AI offers comfort on demand, but emotional comfort without friction can stunt emotional growth.
DE FREITAS: At present, the evidence is still fledgling and largely correlational, so we can’t draw firm conclusions. Since some have sounded dire warnings, let me point to some noteworthy potential upsides. An always-available AI companion can buffer us against social rejection, enhancing emotional resilience. It might also serve as a confidence boost for people with social anxiety—much like exposure therapy—by gradually easing them into real-world interactions.
Nonjudgmental and validating
WSJ: A University of Sydney study found that 40% of users of AI companions were married. Why do you think someone who’s already in a close human relationship would want to supplement that with an AI relationship?
DE FREITAS: I think there are certain features of the apps that are conducive to both friendship and romance. So one, the apps are validating. Related to that, they’re nonjudgmental. If you think about something like role play, which is kind of fantasy, they’re also very cooperative by default on this. So you don’t have to worry about this tricky issue of consent that humans deal with.
Also, you can customize the apps in various ways that could satisfy certain types of role play or relationships that you might not otherwise be able to capture. And then the other one that’s important is also the ability for sexual intimacy. We know that people use it for this.
VASAN: I’m going to use myself as an example here—not for romance, but for friendship. After a recent breakup, I was feeling lonely and stuck in a spiral of “what ifs.” I leaned on my friends, family and therapist, and they were wonderful. But at midnight when I couldn’t sleep, or in the middle of the day when everyone else was working, I turned to Claude.
I was pleasantly surprised that it responded with real compassion and insight. One thing it said that was different from what I heard from my friends or therapist really stayed with me: “It sounds like what you’re grieving isn’t just the relationship you had, but the future you hoped you would have together. The vision, the potential, the promise—that’s what’s hurting now.”
That gave language to something I hadn’t been able to name. It helped me begin to grieve not just the person, but the imagined future I was still holding on to. And while I knew it wasn’t a person, Claude’s response didn’t feel robotic; it felt attuned to both my pain and my hope. That emotional clarity made a real difference in how I processed things. It helped me feel seen in a moment when I really needed it.
I have friends in relationships where one partner doesn’t like texting during the day and the other does, and this has led to conflict. So I can see how, in times like that, just having a simple conversation with an AI can help you in the moment. It’s not cheating on your partner. It’s not taking emotional intimacy away from your partner. It’s more about recognizing that we all have different needs, and our romantic partner meets a lot of them, but not all of them.
VALLOR: It depends on the design of the system, but it also depends a lot on the person. One of the things we’ve seen with smartphones and social media is that it’s often the most socially advantaged, capable and well-resourced users who get the most benefits from social media and other technologies. It’s vulnerable users—users who are already somewhat isolated, having issues with impulse control, or finding it difficult to connect with other people—who often suffer a disproportionate share of the harms that come from technology use.
I think we should expect to see the same pattern play out with AI, and I think we already are. If you have a healthy relationship, whether it’s with friends or a romantic partner, you can probably use these tools in a way that isn’t going to be damaging to your relationship and is going to potentially bring you more benefits.
I’m more skeptical than Nina about these tools, but there are users for whom clearly that is true. But that is not who I’m worried about. I’m worried about all the people who are already struggling in their relationships, who are already missing the techniques and emotional language to reconnect with their partners.
WSJ: What kind of concerns do you have for those people?
VALLOR: Learning to be a good friend, a good spouse, a good partner, a good parent, takes time and experience. It’s a process of skill development: emotional skills (learning to understand others’ needs and feelings), cognitive skills (learning to make good judgments about other people and how we relate to them), and moral skills (learning appropriate boundaries and habits, learning to care well for others and for oneself).
Just like you don’t acquire the skills of skiing or mountain climbing without a great deal of repeated practice—including learning to take risks, fail and try again—we don’t acquire the necessary skills for healthy relationships without years of constant practice and trial and error.
Looking ahead
WSJ: We can expect tomorrow’s AI companions to be much more sophisticated than the ones we have today. Will that mitigate some of the issues that we’re seeing? Or exacerbate them?
VALLOR: In terms of making the technology safe and beneficial, we know that the tech companies know how to do that, but their commercial incentives are often not to do it. They don’t have a record of being trustworthy in this area when it comes to making these technologies better and safer.
The harms we can anticipate or have already seen include sycophantic engagement—in other words the AI companion telling people what they want to hear, which can distort their sense of reality by isolating them from perspectives other than their own. Then there’s reinforcement and amplification of existing pathologies in thinking (such as suicidal ideation, self-deception or conspiracy theories), as well as decreased capacity for independent self-management. If people start to rely on an AI tool too much, that could affect their ability to do things like managing boredom with creative activity, or spending time alone reflecting on and evaluating their own thoughts, feelings and plans.
Another danger is developing unrealistic expectations of non-AI partners (such as expecting them to be always available or always accommodating of requests). There’s also the risk that reliance on the AI relationship will drain time, affection and energy away from relationships with existing partners and friends.
Then there are the harms we don’t know about because we haven’t seen them emerge at scale or over a longer time period.
Andrew Blackman is a writer in Serbia. He can be reached at reports@wsj.com.