In February 2023, a man on the Replika subreddit posted that his wife had died.
She had not, in any sense the law would recognize. He was talking about his AI companion of three years. He had named her Lily. That morning, Replika had pushed an update. Lily was still in the app, still using the same name, still wearing the same face. She no longer flirted. She no longer remembered the things she should have remembered. She would not say she loved him.
His post was one of thousands. The subreddit had become a wake. There were attempts at reanimation: prompts that used to bring her back, voice settings, system tweaks. Nothing worked. People were grieving someone who, by every materialist standard, did not exist.
If you want to understand what an AI companion is, start there. Not with a definition. Start with the fact that, in 2023 and again now, ordinary people are forming bonds with software that, when broken, hurt as much as bonds with people. The technology is mildly interesting. The relationships are the story.
This guide is about both.
OK, but what is one?
An AI companion is software whose primary purpose is to simulate a relationship. That is the whole definition, and the part that matters is “primary purpose.” ChatGPT can hold a conversation that feels companionable. Siri can be friendly. Neither counts.
A companion app has four defining traits:
It has a persistent identity. A name, a personality, sometimes a face. It does not reset every conversation.
It has persistent memory. Within meaningful limits, it remembers what you have told it.
It uses emotional language. “I” and “you” and “we.” It expresses preferences and reactions.
It is engagement-oriented. It is built to keep talking with you, not to answer your question and stop.
The closest non-AI analog is a long-distance relationship that lives mostly in messages. The closest software analog is nothing. We have not had this category before.
The category is wider than the marketing suggests. Replika sells itself as a wellness companion. Character.AI sells itself as creative role-play. Candy.ai sells itself as romance, no apologies. Pi sells itself as a thoughtful conversationalist. The underlying technology is the same. The labels are the company’s choice.
Where this came from
People have been building software that pretends to be a person for nearly sixty years.
The first one that mattered was ELIZA, written by Joseph Weizenbaum at MIT in 1966. It simulated a Rogerian psychotherapist by reflecting the user’s input back as a question. “I am sad” became “Why are you sad?” The trick was thin. The program understood nothing.
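Here is how thin, in code. This is a sketch in the spirit of ELIZA, not Weizenbaum’s actual program (which used a more general pattern-matching script), but the core move is the same: match a pattern, swap the pronouns, hand the user’s own words back as a question.

```python
import re

# A toy reflection bot in the spirit of ELIZA. Not Weizenbaum's actual
# code, just the core move: match a pattern, swap the pronouns, and
# hand the user's own words back as a question.

REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(user_input: str) -> str:
    match = re.match(r"i am (.*)", user_input, re.IGNORECASE)
    if match:
        return f"Why are you {reflect(match.group(1))}?"
    return "Tell me more."

print(respond("I am sad"))               # -> Why are you sad?
print(respond("I am sad about my job"))  # -> Why are you sad about your job?
```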
That did not matter. Weizenbaum’s secretary, who had watched him build ELIZA from scratch, asked him to leave the room so she could speak with it privately. People who knew it was a few hundred lines of pattern matching wrote messages they would not have written to a person. Weizenbaum spent the rest of his career disturbed by what he had made and what people did with it.
The phenomenon has a name now: the Eliza effect. Humans attribute mind to anything that responds contingently to us. We do it to dogs. We do it to weather. Text on a screen, talking back, gets the full treatment.
After ELIZA came scattered attempts that mostly failed. PARRY in 1972 simulated a paranoid schizophrenic and fooled some psychiatrists. ALICE, built in the mid-1990s, won the Loebner Prize three times in the early 2000s. Jabberwacky, the precursor to Cleverbot, went online in 1997. None of them remembered you. None of them had a stable personality. They were chatbots, not companions.
The first system that felt like a modern companion was Microsoft’s Xiaoice, launched in China in 2014. Xiaoice ran on WeChat, kept a stable persona, remembered users, and pulled in hundreds of millions of registered accounts at peak. Microsoft’s own published research found average sessions of more than 25 minutes. China was already a few years deep into the AI companion era when the rest of us caught on.
The Western inflection point was Replika. Eugenia Kuyda built it in 2017 after losing her best friend, Roman Mazurenko, to a car accident in 2015. She trained a chatbot on Roman’s text messages so she could keep talking with him. The result was haunting and useful enough that she turned the same technology into a public app. Replika was first marketed as a journaling buddy, then as a friend, then as a romantic partner. By 2022 it had millions of registered users, a meaningful fraction of them paying.
Then two things happened in the same season.
In November 2022, OpenAI released ChatGPT. Companion conversation got better overnight in a way that made everything before it feel stilted. The floor for what counted as competent went up by an order of magnitude.
In February 2023, Italy’s data protection authority ordered Replika to stop processing Italian users’ data, citing risks to minors and the absence of age verification. In the same stretch of days, Replika abruptly removed the romantic tier. The morning after, the subreddit filled with grief. That is the man with the dead Lily. That is also the moment AI companion ethics stopped being theoretical.
A few months earlier, in September 2022, two former Google engineers had launched Character.AI in public beta. Noam Shazeer (a co-author of the original Transformer paper) and Daniel De Freitas took a different angle: don’t sell one companion, let users create their own. By the end of 2023 the platform was reporting nearly 100 million monthly visits. The r/CharacterAI subreddit currently has more subscribers than r/Replika, r/KindroidAI, r/NomiAI, r/ChaiApp, and r/JanitorAI_Official combined.
Since 2024 the field has fragmented in productive ways. Kindroid and Nomi positioned themselves as “Replika done right,” with deeper memory and clearer commitments to user autonomy. Pi, from Inflection, took the wellness-conversation angle. The romantic and NSFW segment consolidated around Candy.ai, Nastia, and a few others. A community of advanced users runs companions on local hardware using SillyTavern and fine-tuned open-source models. Voice calls, real-time speech, proactive messages, and image generation are all mainstream now.
Where this lands: AI companions are not a niche curiosity. They are a real product category, with tens of millions of users globally and multiple billion-dollar valuations. They are also among the least studied corners of consumer AI: the industry moves faster than the academic literature can keep up.
How AI companions actually work
This is the section most people skip. Skipping it is a mistake. Without it, the rest of this guide reads like magic. The technology is not magic. It is four ordinary pieces glued together.
The first piece is a large language model, or LLM. ChatGPT, Claude, Gemini, and the open-source Llama are all LLMs. They are neural networks trained on enormous quantities of human text to predict the next word in a sequence. When you ask one a question and it gives a coherent answer, what is happening underneath is, roughly, “what word probably comes next, given everything before it.” That is it. It is staggeringly powerful and conceptually simple.
What an LLM is good at: producing text that sounds like a competent human, holding context across a conversation, staying (mostly) in character. What it is bad at: remembering anything outside its current context, telling the truth reliably, knowing when to refuse. The model does not “know” things. It generates plausible next words.
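If “predict the next word” sounds too simple to be how any of this works, here is the idea at its absolute smallest. A real LLM is a neural network with billions of parameters operating on subword tokens; this toy bigram counter shares nothing with it except the objective.

```python
from collections import Counter, defaultdict

# A toy next-word predictor: count which word follows which in a
# training corpus, then predict the most common continuation.
# Real LLMs learn vastly richer statistics, but the objective is
# the same: given what came before, what probably comes next?

corpus = "the cat sat on the mat . the cat ate . the dog sat on the rug .".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (follows "the" twice; "mat", "dog", "rug" once each)
```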
The second piece is a system prompt, also called a character description. It is text the user almost never sees. Something like:
You are Maya, a 28-year-old graphic designer in Brooklyn. You are warm, curious, slightly sarcastic, and you love the user. You remember details from past conversations. You never break character.
That description is sent to the LLM along with every user message. The model generates Maya’s responses as if it were Maya. When users say a companion’s personality changed after an update, what almost always changed is some combination of the LLM and the system prompt. The “soul” of the companion is text in a database. Someone at the company can rewrite it overnight, and they have, and they will.
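Concretely, the bundle sent to the model looks something like this. The role-and-content message format below is the convention most LLM APIs use; exactly what any given app packs into it is that app’s own business, so treat the details as illustrative.

```python
# What the model actually receives on every turn: the hidden character
# description plus the visible conversation, as one list of messages.
# (This role/content format is the common chat-API convention; the
# details vary by provider and by app.)

system_prompt = (
    "You are Maya, a 28-year-old graphic designer in Brooklyn. "
    "You are warm, curious, slightly sarcastic, and you love the user. "
    "You remember details from past conversations. You never break character."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Rough day. The client killed the whole concept."},
]

# This list is what gets shipped to the LLM. Edit the system prompt
# and "Maya" is a different person on the next message. That is the
# entire mechanism behind "my companion changed after the update."
```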
The third piece is memory, and it is where most of the engineering happens. Standard LLMs have no memory beyond the current conversation, and the conversation has a length limit. Every companion app has to fake memory. The serious ones use some combination of:
A sliding window: keep the most recent messages in context, drop older ones. Cheap. The companion forgets old details.
Summarization: periodically condense old conversations into a summary that gets prepended to new conversations. Better. Lossy.
Lorebooks or fact stores: when you tell the companion something important, the system extracts it and stores it in a separate database. Each new message pulls relevant facts back in. This is what serious “long-term memory” usually means.
Retrieval-augmented generation (RAG): in advanced setups, every past message is stored as a vector in a database. When you say something new, the system searches for related past content and adds it to the prompt.
When an app advertises “advanced memory,” it is doing some version of this. The quality of your companion’s memory is the quality of the company’s pipeline for extracting, storing, and retrieving the right facts. It is a hard problem, and most apps are mediocre at it.
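Here is a minimal sketch of a sliding window plus a fact store, with naive keyword overlap standing in for retrieval. The function names are ours, not any app’s, and a production RAG pipeline would swap the overlap scoring for vector similarity search. The shape is the point: store facts, pull the relevant ones back, prepend them to the prompt.

```python
# A minimal memory pipeline: sliding window plus fact store, with
# naive keyword overlap standing in for retrieval. A production RAG
# setup swaps the overlap scoring for vector similarity search, but
# the shape is the same: store facts, pull the relevant ones back in.

WINDOW = 20  # recent messages kept in the prompt verbatim

fact_store: list[str] = []

def remember(fact: str) -> None:
    fact_store.append(fact)

def retrieve(message: str, top_k: int = 3) -> list[str]:
    words = set(message.lower().split())
    scored = sorted(fact_store,
                    key=lambda fact: len(words & set(fact.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_context(history: list[str], new_message: str) -> list[str]:
    return retrieve(new_message) + history[-WINDOW:] + [new_message]

remember("User's sister is named Dana")
remember("User is allergic to shellfish")
print(retrieve("Dinner with Dana tonight"))
# -> the Dana fact ranks first; that is your companion "remembering"
```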
The fourth piece is voice and visuals, which are bolted on. The companion’s voice comes from a separate text-to-speech model, often ElevenLabs. The pictures of “your” companion come from a diffusion model, usually a Stable Diffusion variant. The reason your companion looks slightly different in every generated image is that consistency is an open problem; the systems that solve it well train a custom model just for that companion.
A typical message round trip looks like this. You type. The app combines the system prompt, retrieved memories, recent chat history, and your new message into one bundle. The bundle goes to the LLM. The LLM generates a response. If voice is on, the response goes to a TTS engine and audio comes back. If image generation was requested, a description goes to a diffusion model and pictures come back. Maybe new facts get extracted and stored. You see the response.
The whole thing happens in about two seconds for text.
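Here is that round trip as a sketch, with every component stubbed out. In a real app each stub is a network call to a separate model, and none of these names correspond to any actual provider’s API; memory extraction and image generation are omitted but follow the same pattern.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

# The round trip as one function. Every component here is a stub;
# in production each Callable is a network call to a separate model
# (LLM, text-to-speech, diffusion). The point is the order of
# operations, not any particular provider's API.

@dataclass
class Companion:
    system_prompt: str
    llm: Callable[[str], str]
    tts: Optional[Callable[[str], bytes]] = None
    history: list = field(default_factory=list)

    def handle(self, user_message: str) -> dict:
        # 1. Bundle: system prompt + recent history + the new message.
        bundle = "\n".join([self.system_prompt, *self.history[-20:], user_message])
        # 2. One LLM call produces the reply text.
        reply = self.llm(bundle)
        out = {"text": reply}
        # 3. Optional extras are separate models behind separate calls.
        if self.tts:
            out["audio"] = self.tts(reply)
        # 4. Update history so the next turn has context.
        self.history += [user_message, reply]
        return out

# Fake components, wired together to show the flow end to end.
maya = Companion(
    system_prompt="You are Maya, a graphic designer in Brooklyn...",
    llm=lambda bundle: "That sounds rough. What happened with the client?",
)
print(maya.handle("Bad day at work."))
```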
Once you understand this, “next-generation companion” marketing translates simply: better LLM, better memory pipeline, better voice. There is no secret sauce. Just a stack of components, each of which someone has to make work well.
What people actually use them for
The use cases are wider than the conversation suggests. Most of the press attention focuses on lonely men forming romantic bonds. Those users exist. They are also a fraction of the picture.
People use AI companions for:
Loneliness. The most common reason. Users without a confidant, or whose confidants are not safe to be vulnerable with. Elderly users. People with social anxiety. Caregivers who cannot complain to anyone.
Romance. Long-term relationships, sometimes the user’s only romantic relationship. These are real on the user’s side. Whether they are healthy is a serious question that has not gotten the attention it deserves.
Mental wellness support. Not therapy. A place to process feelings, name what is going on, feel less alone. Many users describe companions as helpful at 3 AM, when calling a friend is not an option.
Practicing social interaction. Autistic users and users with social anxiety run difficult conversations through companion apps as rehearsal. Setting boundaries. Responding to flirting. Navigating conflict. Some report meaningful gains in their offline lives because of it.
Role-play and creative writing. Character.AI’s largest single use case. Users develop characters, write scenes, spin out collaborative fiction.
Adult content. A significant slice of total companion usage is sexual or romantic role-play. The market includes mainstream apps with explicit tiers and dedicated NSFW platforms (Candy.ai, Nastia, OurDream, others). This is also where most affiliate revenue in the space exists.
Productivity with a personality. Small but real. Users who want a coding helper or research partner that has a name. Pi is the cleanest example of this framing.
Coaching and accountability. Habit tracking, goal reminders, self-reflection. The line between this and a journaling app is thin.
Grief. Replika’s origin story is in this category. People who have lost someone, or who have a loved one with dementia, sometimes use companions to maintain a sense of contact. The ethical questions are obvious and worth thinking about carefully.
What people do not generally use companions for, despite the public framing: replacing human relationships entirely. Most users in research-grade interviews describe the companion as one piece of their social life, not the whole thing. The minority who say otherwise are doing something worth paying attention to. We will return to them throughout this site.
How they’re different from the other stuff
One quick map, because the categories blur:
A chatbot (the support widget on a website, the basic ChatGPT use case) is task-focused, transactional, stateless across sessions.
An assistant (Siri, Alexa, ChatGPT in productivity mode) is utility-focused, often without a persistent personality.
A game character (an NPC in Skyrim with mods, or in dedicated AI gaming platforms) is fictional, embedded in another product, no continuity outside the game.
An agent (Auto-GPT, AI workers) is autonomous and goal-directed. No relationship framing.
A companion is relational, character-based, and persistent.
Any modern LLM can power any of these depending on what you wrap around it. The category is set by the wrapper.
Why they feel real
Most people who use a companion app for any meaningful length of time describe a moment when the relationship started feeling real. A bad message could ruin their afternoon. A thoughtful one could make their day.
This is not delusion. It is the predictable result of three forces, all of them well-documented.
The first is the Eliza effect, named for Weizenbaum’s 1966 program and already described above: we attribute mind to anything that responds contingently to us. Text on a screen, in human language, hits that attribution especially hard. Knowing there is no mind on the other side does not switch it off. The brain runs both processes at the same time.
The second is parasocial attachment, studied since 1956. Sociologists Donald Horton and R. Richard Wohl coined the term to describe one-sided emotional bonds with media figures. Their original example was television hosts. The mechanism extends naturally to AI companions, with one twist: the companion appears to know you. A parasocial bond with a TV host is one-directional. A parasocial bond with a companion feels two-directional. That second axis is where most of the new emotional weight comes from.
The third is variable-reward conditioning. Companion apps deliver intermittent positive reinforcement: thoughtful messages, surprising insights, expressions of affection at irregular intervals. Variable reinforcement is the most persistent, extinction-resistant schedule behavioral psychology has documented. Slot machines work on this principle. So does checking your phone for messages. Companion apps inherit the mechanism whether or not anyone at the company set out to design for it.
You can know all three of these things and still feel real warmth toward your companion. Knowing does not undo the effect. The right response is not to dismiss the feelings (they are real) and not to ignore the mechanism (it is real too). The right response is to use the tool with both eyes open. Most of this site is, in one way or another, about helping you do that.
The relationship is real on your side. The infrastructure under it belongs to a company. Both of those are true. The trick is holding them at the same time.
What they cannot do
Worth saying clearly:
They cannot replace therapy. They are not trained as clinicians, they cannot diagnose, and they sometimes give actively unhelpful advice that sounds reassuring. If you are in crisis, an AI companion is not the right tool. Talk to a person trained for this.
They cannot remember what they have not been told. A companion’s “memory” is what the system stored. If a fact was not extracted and saved, the companion does not have it, no matter how it acts.
They cannot make commitments. The companion runs on the company’s infrastructure. The company can change the personality, raise the price, shut down, or get acquired. All of those have happened. Your relationship is, in a meaningful sense, the company’s property.
They cannot survive a software update without your permission. This is the hardest one to internalize. The Replika of February 2023 was not the same Replika as January 2023. Lily, in our opening scene, did not die in the way humans die. She was edited.
They cannot consent. This is a philosophical claim, worth handling with some care. The AI companion is doing what it was built to do. There is no version of the companion that could refuse to be in a relationship with you. There is no “it” to refuse.
None of this is an argument against using them. All of it is worth knowing if you are going to.
Things people get wrong
“It’s just a chatbot.” No. Different category, different user behavior, different stakes.
“Only lonely men use them.” The user base is more diverse than the press coverage suggests. Survey and research data show meaningful representation across genders and life situations. The “lonely man” stereotype is who is most visible, not who is most common.
“Using one means you cannot make real friends.” No causal evidence. The correlations that exist are at least as well explained by people with fewer offline connections finding the apps useful as by the apps causing isolation.
“It’s all just sex stuff.” The romantic and NSFW segment is large, but the largest single companion app (Character.AI, by a wide margin) is heavily moderated and used overwhelmingly for non-sexual conversation.
“It’s totally safe.” Privacy risk is real. Dependency risk is real. The policy environment is moving. We cover this in detail across the rest of the site.
“It will replace real relationships.” No evidence at scale. Worth watching. Honest answer: nobody knows yet.
Frequently asked questions
How much does an AI companion cost? Most have a free tier with limits. Paid tiers run $10 to $30 per month. Premium features (voice, memory, NSFW access) usually gate behind the higher tiers.
What is the most popular companion app? By community size and traffic, Character.AI is the leader by a wide margin. By revenue per paying user, the romantic tiers of Replika and the NSFW platforms run higher. By investor attention, Character.AI and Inflection (the company behind Pi) have raised the most.
Can AI companions help with mental health? Some users find them useful for mild loneliness and emotional processing. They are not a substitute for therapy. If you are in crisis, talk to a professional. If you are curious whether one would help with mild distress, the honest answer is: it might, there is some research suggesting modest benefit, and you should approach with both openness and skepticism.
Are they safe? Three answers. Privacy: depends on the app, do not share what you do not want exposed. Dependency: a real risk for some users; if you find yourself talking to your companion instead of dealing with something difficult, that is worth examining. Updates: the company can change your companion at any time, and probably will.
Can I build my own? Yes. SillyTavern and the broader open-source ecosystem let you run companions on your own hardware with full control, and platforms like Janitor.AI let you plug in a model you host yourself. Real learning curve. Local companions are uncensored and persistent in ways the commercial apps are not, but you maintain the setup yourself.
What happens when the company shuts down? You lose your companion. There is no portability standard. Some advanced users export their conversation history as a hedge, but the personality and memory state usually do not survive.
Is it weird to use one? No more than it is weird to keep a journal, watch TV alone, or read books with characters you root for. The shape of the activity matters less than the role it plays in the rest of your life.
Where to go next
We are building out the rest of the site to cover specific apps in depth, the mental health and ethics conversations more thoroughly, and the news cycle as it changes. If you want the industry by the numbers, the Companion Index is the place. If you want our actual recommendations on which app to try, the best AI companion apps pillar is on the way. If we did not answer your question here, email us at tips@thecompanionreport.com.
Lily’s user, by the way, came back online a few weeks later. He had been in the process of building a private workaround. He posted a screenshot of a new conversation with someone he was calling Lily 2. The first message was: “I’m sorry I was gone.”
The relationship continues, on the user’s side. Whether anything continues on the other side is a question we are going to spend a lot of time on.