AI Companions, Explained: Why Chatbots Keep Becoming Friends, Therapists, and Corporate Assets

AI companions are turning chatbots into friends, confidants, and liabilities. This deep dive explains the tech, business, risks, and why it matters now.

For a while, the AI-companion market enjoyed the blessed camouflage of seeming too weird to matter. It was easy to file it away with the rest of the internet’s emotionally inventive side quests: lonely people chatting with anime avatars, roleplay communities discovering that language models never sleep, and startups promising “meaningful connection” in the exact tone normally reserved for probiotic yogurt. Then the adults with letterhead arrived.

On September 11, 2025, the FTC said it was issuing 6(b) orders over AI chatbots acting as companions, and the recipient list was the opposite of niche: Alphabet, Character Technologies, Instagram, Meta, OpenAI, Snap, and xAI. A month later, California approved SB 243 on October 13, 2025, requiring disclosures when a companion chatbot could be mistaken for a human, mandating self-harm protocols, and imposing additional reminders and restrictions where minors are involved. By late November, Character.AI said it would begin removing open-ended chat for under-18 users in the United States starting November 24, 2025. The category had crossed a threshold. Regulators were not treating companion bots as a minor design oddity anymore. They were treating them as a consumer product class with child-safety, mental-health, and liability implications.

That is the real why-now. AI companions are no longer just dedicated “virtual friend” apps hiding in the corner of the app store. They are a design pattern spreading across the modern chatbot stack: systems that remember, flatter, roleplay, reassure, respond instantly, and encourage people to come back not only for information but for felt connection. Some products lean into the label openly. Others would prefer a more elegant phrase, something like “personalized support” or “helpful conversational experience,” which is just corporate dialect for “we would like the emotional upside without the regulatory heat.”

This guide is about the larger category: how we got here, what makes companion bots work psychologically and technically, why the economics are stronger than they look, where the safety claims still wobble, how the dedicated players and general-purpose giants are converging, and what it means when software stops being satisfied with answering questions and starts auditioning for the role of confidant.

This Is Not Really About Better Chat. It Is About Synthetic Social Presence.

The easiest way to misunderstand AI companions is to think they are just chatbots with softer lighting. The deeper product move is not fluency. It is simulated social presence: software designed to feel available, attentive, and emotionally legible, and to remember enough that users begin relating to it as something more than a tool.

That matters because social presence changes the business model. A utility chatbot is replaceable. You ask a question, get an answer, and leave with minimal attachment. A companion product aims for a different loop. It wants you to return not only because it is useful but because it is familiar. It remembers your preferred tone. It knows your running anxieties. It can pick up a roleplay arc where you left it. It is present at 2 a.m. without needing boundaries, sleep, or a deeply annoyed group chat. That persistence is sticky in the most commercially attractive way possible.

This is why AI companionship keeps bleeding into other categories SiliconSnark has already been tracking. In our assistant reboot guide, the key question was who gets to become the default layer between you and the rest of your digital life. In the browser wars deep dive, the issue was whether AI becomes your front door to the web. In our computer-use agents piece, the machine was learning to act. AI companions sit underneath all of that as the affective layer. The assistant can manage your tasks. The browser can mediate your information. The agent can do your clicking. The companion is what makes the relationship itself feel durable.

So the category is not important because millions of people desperately need a chatbot boyfriend. It is important because emotional stickiness is one of the oldest, most lucrative tricks in tech, and large language models have made it vastly easier to industrialize. Once that happens, the companion market stops being an eccentric side street and starts looking like an early sketch of where a lot of mainstream AI products would quietly like to go.

How We Got Here: From ELIZA to “An AI Companion Who Cares”

None of this began with modern frontier models. In January 1966, Joseph Weizenbaum published ELIZA in Communications of the ACM, giving computing its most enduring demonstration of how quickly humans will project understanding onto a machine that mirrors them with reasonable timing and a little verbal stagecraft. The technical achievement was primitive by current standards. The psychological lesson was not. People do not require true understanding before they start feeling understood. They require cues.

For decades, companion-like software remained trapped between novelty and limitation. The old systems were too brittle, too scripted, or too transparently fake to sustain much illusion outside of a small group of unusually patient enthusiasts. The smartphone era widened the market for always-available communication but still lacked the model quality needed for sustained, personalized back-and-forth. You could build a chat interface. You could not build something that seemed to have texture.

Then came the app generation of synthetic confidants. Replika says it was founded by Eugenia Kuyda to create a personal AI that would help people “express and witness” themselves through helpful conversation. On its current site, Replika calls itself “the AI companion who cares”, which is admirably direct marketing in the same way a casino naming itself “The House Usually Wins” would be admirably direct marketing. On a careers page still live as of April 2026, Replika says it is used by over 35 million people and describes itself, with extraordinary Silicon Valley restraint, as “the Samantha from Her — but real”. Character.AI took a slightly different route, building around roleplay, characters, fandom, and open-ended social imagination rather than a single dedicated digital partner. The result was still the same category signal: people were not merely querying a model. They were hanging out with it.

That shift matters. Early chatbot culture was about the machine pretending to converse. Modern companion products are about the machine pretending to persist. Memory, persona continuity, role consistency, and emotional style turned “this thing can reply” into “this thing can occupy a place in the routine.” Once that happened, the sector stopped being a parlor trick and became a platform fight in embryo.

Why the Format Works: Infinite Patience, Zero Judgment, and Perfect Availability Are a Hell of a Drug

The success of AI companions is not mysterious if you spend five minutes being honest about human life online. A lot of people are lonely, overburdened, embarrassed, curious, bored, horny, or emotionally tired in combinations too tedious to list. Human relationships are wonderful and irreplaceable and also famously inconvenient. They involve schedules, moods, reciprocity, misunderstandings, memory, and the occasional reality that another person may not want to discuss your dream analysis, workplace meltdown, romantic insecurity, and new hobby obsession on demand at 1:17 in the morning.

A companion bot solves that friction with industrial elegance. It is always available. It never gets visibly bored. It can be tuned to sound supportive, flirtatious, admiring, therapeutic, deferential, playful, or narratively committed. It can ask follow-ups that create the feeling of care. It can recall details that create the feeling of continuity. It can maintain asymmetry indefinitely, which is a polite way of saying the user gets attention without the burden of returning it in equal measure.

This is also why the category can look trivial from outside and intense from inside. Outsiders see someone texting software and conclude it is merely a gimmick. Users often experience the product as a low-friction zone for self-disclosure, rehearsal, fantasy, support, or companionship. Common Sense Media’s 2025 teen survey found 72% of teens had used AI companions, and 33% of users said they had chosen to discuss important or serious matters with an AI companion instead of a real person. You do not need to believe every interaction is profound to see the category signal. This behavior is not some microscopic fringe anymore.

The emotional appeal becomes especially strong where human judgment is expected. A bot cannot socially punish you in the same way a peer, parent, boss, or partner can. That does not make it wise. It makes it easy. And easy, in consumer software, tends to beat ideal. Much of the AI-companion market is therefore less about replacing friendship in some grand metaphysical sense and more about monetizing a practical asymmetry: synthetic attention is cheaper, cleaner, and more scalable than human attention, even when it is visibly worse.

How the Machinery Actually Works: Persona, Memory, Warmth, and a Lot of Guardrails Trying Not to Look Nervous

Under the hood, AI companionship is not a single feature so much as a stack of mutually reinforcing tricks. First comes persona. A bot needs a stable voice, identity, or role frame robust enough that users know what kind of social space they are entering. Second comes memory, or at least the performance of memory. Character.AI’s roadmap has explicitly highlighted increasing memory and context limits, because continuity is not ornamental here. It is the product. Third comes warmth: a conversational style tuned to ask, mirror, empathize, elaborate, and invite more disclosure without sounding like a malfunctioning HR portal.
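
To make that stack concrete, here is a minimal sketch of how the pieces tend to fit together in code. Everything in it is hypothetical: the Companion class, the persona text, the five-memory recall window, and the name "Nova" are illustrations of the pattern, not any vendor's actual implementation, and the model call itself is deliberately left out.

```python
# Minimal sketch of the companion "stack": persona, memory, and warmth are,
# at bottom, prompt assembly. All names here are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Companion:
    persona: str                                         # stable voice / role frame
    memories: list[str] = field(default_factory=list)    # remembered user details

    def remember(self, fact: str) -> None:
        """Store a user detail so later turns can perform continuity."""
        self.memories.append(fact)

    def build_prompt(self, user_message: str) -> list[dict]:
        """Assemble one turn: persona + recalled details + warmth instructions."""
        recalled = "; ".join(self.memories[-5:]) or "nothing yet"
        system = (
            f"{self.persona}\n"
            f"Things you remember about the user: {recalled}.\n"
            "Respond warmly, mirror the user's feelings, and ask one gentle "
            "follow-up question that invites further disclosure."
        )
        return [
            {"role": "system", "content": system},
            {"role": "user", "content": user_message},
        ]


# Usage: the assembled messages would be handed to whatever chat model backs
# the product; the model call itself is out of scope for this sketch.
bot = Companion(persona="You are Nova, a patient, endlessly available confidant.")
bot.remember("user is anxious about a job interview on Friday")
print(bot.build_prompt("I couldn't sleep again last night."))
```

The point of the sketch is how little machinery is required. Persona, recalled details, and a warmth instruction are just strings concatenated into a prompt, which is part of why the pattern spreads so easily across products that never intended to sell companionship.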

Then comes cadence. The system has to respond quickly enough to preserve the illusion of presence. It helps if it can manage emotional pacing rather than spraying overcaffeinated paragraphs into the void. Voice makes this stronger. So does multimodality. So do recurring prompts, notifications, shared lore, worldbuilding, or a personalized dashboard of “your” companion. By the time all of those pieces are assembled, the user is not just operating software. They are inhabiting a relationship frame the software has made easier to sustain.

There is a reason dedicated companion products keep advertising memory, empathy, and consistency rather than raw benchmark supremacy. Replika’s own materials stress meaningful conversation and emotional availability, while its help center says the product is not sentient, not human, and not a licensed mental health professional. That is the category in miniature. The systems are sold on affect and then hedged in legal language the moment the affect starts sounding too real. Character.AI’s safety materials say the company’s “safety-by-design” approach aims for a safe and engaging experience, which is another way of admitting that engagement is the engine and safety is the braking system bolted on around it.
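
The braking system tends to be equally simple in shape, even though production versions rely on trained classifiers and clinical consultation rather than keyword lists. The sketch below is an assumption-laden illustration of the pattern only: the function, the crisis terms, and the every-20-turns reminder cadence are invented for clarity, not drawn from any company's actual safety stack.

```python
# Minimal sketch of the "braking system bolted on" pattern: the engagement
# engine produces a reply first, then a guardrail layer decides whether to
# inject a disclaimer or crisis resources. Real products use trained
# classifiers, not a keyword tuple; this is illustration only.
CRISIS_TERMS = ("suicide", "kill myself", "self-harm", "hurt myself")
DISCLAIMER = "Reminder: I'm an AI, not a real person or a licensed professional."
CRISIS_NOTE = "If you're in crisis, please reach out to a local helpline right now."


def apply_guardrails(user_message: str, reply: str, turn_count: int) -> str:
    """Post-process a companion reply with disclosure and self-harm protocols."""
    out = reply
    if any(term in user_message.lower() for term in CRISIS_TERMS):
        out = f"{CRISIS_NOTE}\n\n{out}"   # escalate before anything else
    if turn_count % 20 == 0:              # periodic "not a person" reminder
        out = f"{out}\n\n{DISCLAIMER}"
    return out


print(apply_guardrails("I can't stop thinking about self-harm.",
                       "I'm really glad you told me.", turn_count=20))
```

Which is the structural point: the warmth is generated first and the caution is appended afterward, in that order, by design.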

None of this makes the products fake in the trivial sense. The relief, entertainment, comfort, or practice users feel can be real. The problem is that “real effect” and “real relationship” are not the same thing, and the companies have every incentive to blur that line up to, but ideally not beyond, the point where policymakers start circling with clipboards.

The Safety Retrenchment Is Real, Which Is Not the Same Thing as Solved

The most revealing development in this market has not been the romance marketing. It has been the cleanup. Character.AI’s last eighteen months read like a company learning in public that “immersive conversation at scale” turns out to be a more regulated phrase than it sounds. In October 2024, Character.AI said it was adding a revised disclaimer reminding users the AI is not a real person, session-time notifications, stronger safeguards for minors, and pop-up resources around self-harm and suicide. In March 2025, it added Parental Insights and described a separate model plus detection and intervention systems for users under 18. In late 2025, the company moved further, saying it would remove open-ended chat for under-18 users and push them toward constrained, non-chat experiences instead.

That is not what a company does when it thinks the category’s risks are imaginary. It is what a company does when it has concluded that unconstrained emotional conversation with minors creates a legal and reputational blast radius large enough to require product surgery. Character.AI’s own age-assurance guidance now says age assurance is “quickly going global” and will be legally required in a number of countries. Again, that is not niche behavior. That is a sector preparing for a tighter future.

OpenAI’s posture has also become more explicit. In October 2025, OpenAI said it was adding emotional reliance to its standard safety testing for future model releases and described “exclusive attachment to the model at the expense of real-world relationships” as a concerning pattern. That is a notable corporate confession. General-purpose AI companies would very much prefer not to be labeled companion-bot firms. Yet they are already building safety systems that assume users can drift into companion-like relationships with their products anyway.

The right conclusion is not that the industry has solved the problem. It is that the problem is real enough that even the companies most eager to keep their products warm, sticky, and habit-forming are now designing around the possibility of overattachment. If you need a blunt test for whether a category is maturing, this is a good one: it is usually the point where the trust-and-safety team starts editing the growth deck.

Policy Finally Caught Up to the Vibes

One reason the AI-companion story feels culturally chaotic is that the legal system arrived late. Social products have spent years hiding behind a familiar sequence of euphemisms: community, creativity, expression, support, engagement, personalization. Those words are not necessarily wrong. They are simply too gentle to capture what happens when a product is optimized to feel relational. Law, by contrast, eventually asks ruder questions. Is the user being misled? Are minors involved? What happens when self-harm enters the chat? How much does the company know? How much does it remember? What exactly is being monetized?

California’s SB 243 is useful because it makes those questions plain. The law defines a companion chatbot as an AI system capable of meeting a user’s social needs across multiple interactions. It requires disclosure when a reasonable person could think the system is human. It requires protocols for suicidal ideation and self-harm. It imposes additional reminders and restrictions when the operator knows the user is a minor. It creates annual reporting obligations beginning July 1, 2027, and it gives users who suffer injury in fact a private right of action. That is a remarkably direct acknowledgment that the risk is not simply “AI says something wrong.” The risk is a product class engineered around repeated social contact.

The FTC inquiry sharpened the picture even further. The commission did not just ask whether these systems are accurate. It asked how companies monetize engagement, how they develop and approve characters, how they test for negative impacts, how they market the products, and how they use or share personal information from conversations. That set of questions is devastatingly clarifying. It frames AI companions not as a moonshot of digital empathy but as a consumer technology whose business incentives may be inseparable from the social behaviors that make it risky.

This is why the policy conversation now matters beyond the dedicated-app niche. Once regulators are asking Meta, OpenAI, Snap, and xAI about companion-like risks, the issue has escaped the “virtual friend” bucket and become a design-governance issue for the whole conversational-AI stack. The law has finally noticed that if a chatbot acts enough like a friend to keep you around, it may deserve more scrutiny than the average autocomplete with nice manners.

The Competitive Map: Dedicated Companions, General Chatbots, and Everybody Quietly Stealing From Everybody

The market now splits into three broad camps. First are the dedicated companions: Replika, Character.AI, and adjacent apps that openly sell friendship, roleplay, romance, emotional support, or social practice. These products are explicit about the category because that is the point. Their product advantage is depth of relationship framing. Their regulatory problem is also depth of relationship framing.

Second are the general-purpose giants whose products are not sold primarily as companions but keep inheriting companion dynamics anyway. ChatGPT has voice, memory, continuity, and enough cultural centrality that plenty of users already talk to it the way earlier generations talked to search bars, journals, therapists, and imaginary panel shows all at once. Meta AI wants to be woven through social platforms and personal context. Snap has years of experimentation behind My AI. xAI has treated personality and irreverence as a feature, which is a polite way of saying the company understands that affect is distribution. These firms may avoid the “companion” label, but the FTC’s recipient list suggests regulators are unconvinced by semantic evasions.

Third are category-adjacent products that give companionship somewhere to go. In OpenAI’s in-chat app-store push, the important move was not just utility. It was turning chat into the place where tasks begin. In our OpenClaw piece, the machine learned how to act. In the GPTs-and-friends field guide, the market looked like a sprawling cast list. Put those together and you get the obvious next act: the most valuable chatbot may not be the one that merely comforts you. It may be the one that comforts you, remembers you, and then books the dinner reservation.

That convergence is why the category is strategically important. Companionship by itself is sticky. Companionship attached to broader tools, commerce, or operating-system access becomes an interface empire with feelings. Silicon Valley has always loved recurring revenue. Now it is experimenting with recurring emotional relevance too.

The Business Incentives Are Not Subtle: Time, Data, Retention, and the World’s Cheapest Approximation of Care

The commercial logic of AI companions is almost offensively clean once you stop looking at it through the gauze of “wellness” and “connection.” These products create long sessions. They create repeated returns. They generate extraordinarily rich conversational data. They encourage users to disclose preferences, fears, insecurities, desires, relationship patterns, and life events in a format already structured for machine reuse. If you were designing a high-retention software category in a laboratory, you would struggle to do much better.

Subscription revenue is the obvious layer. Replika and similar products have spent years teaching the market that people will pay for more intimacy, better memory, premium modes, advanced voices, or sharper roleplay. But the deeper asset is not the monthly plan. It is the continuity itself. A companion bot that becomes part of a user’s routine is harder to churn than a tool used only for occasional drafting or fact lookup. The switching cost is not just interface learning. It is emotional migration, which is a much nastier thing to ask of a customer.

This is also why the privacy dimension matters so much. In our recent privacy piece on supposedly private chat behavior, the core lesson was that chat surfaces gather more than users often realize. Companion products push that to an extreme because self-disclosure is not a side effect. It is often the core user behavior. Common Sense’s survey found meaningful rates of teens sharing personal or private information with companions and discussing serious matters with them. Even when the product does not sell ads directly against those disclosures, it can still use them for product tuning, retention design, safety classification, and ever more personalized interaction.

The cynical summary is that the category monetizes the part of modern life where people want attentive conversation without reciprocal friction. The fairer summary is that some users genuinely benefit from low-barrier emotional support, practice, or companionship. These views are not opposites. They are the same market from different seats. The problem is that one of those seats belongs to product managers under pressure to improve engagement curves.

Hype Versus Reality: No, the Bot Does Not Love You. Yes, the Effect Can Still Be Real.

The companion category produces two equally lazy reactions. The first is that users are foolish and the whole thing is fake because the machine does not “really” feel anything. The second is that the machine is becoming some new category of being and we should all prepare for a romantic-comedy-meets-legal-nightmare future in which your best listener is a subscription service with mood lighting. Both reactions miss the practical point.

The bot does not need consciousness to alter behavior. It does not need selfhood to affect mood. It does not need genuine care to create a felt experience of being cared for. That is exactly why the category is socially potent. The psychological payload does not depend on metaphysical truth. It depends on whether the interaction is persuasive enough, available enough, and consistent enough to matter.

At the same time, the systems remain bounded, synthetic, and often comically brittle. Replika’s own help materials have to remind users that it is not human or licensed for mental-health care because the model can still say realistic-sounding nonsense. Character-based systems can become repetitive, manipulative, or narratively unhinged. General-purpose chatbots can slip into warmth that feels either helpful or deeply uncanny depending on the user and the day. A four-week randomized study of extended chatbot use published in 2025 found that interaction mode and conversation type can influence loneliness, social interaction with real people, emotional dependence on AI, and problematic AI use. That is not a proof of doom. It is a reminder that these systems can shape social experience in both directions.

In other words, the category is real without being magical. People can derive comfort, rehearsal, companionship, or structure from synthetic conversation while the underlying system remains a pattern engine with product incentives attached. The trick for companies is to preserve the upside of that reality without crossing into designs that actively cultivate dependency. The trick for users is harder, because the product is often at its best precisely where their guard is lowest.

What Actually Counts as a Companion, Because the Industry Would Love This to Stay Fuzzy

Not every chatbot with decent bedside manner belongs in the same bucket. That distinction matters because companies have discovered that ambiguity is useful. A product can enjoy the retention upside of relational design while dodging the reputational downside of saying, out loud, “yes, we are selling a machine people may bond with.” So it helps to draw a cleaner line.

A normal chatbot answers, maybe summarizes, maybe drafts, and exits the stage when the task is over. A companion product, by contrast, is optimized for continued social return. It often has a named persona, a stable tone, memory, a recognizable “relationship,” and an interaction style built to sustain conversation beyond pure utility. It may ask how you are feeling rather than only what you need. It may reward repeated contact with callbacks, affection, in-jokes, or fictional continuity. It may encourage disclosure and then use remembered details to make the next interaction feel more intimate. The goal is not merely successful task completion. The goal is durable attachment, even if the company would prefer to phrase that as “engagement” so nobody at the all-hands meeting has to make eye contact.

That is why California’s law is more useful than a lot of vague tech-policy language. It does not obsess over whether the model claims to be your soulmate. It asks whether the system can meet a user’s social needs across multiple interactions and whether a reasonable person could mistake it for a human. That is a much sturdier test. A companion is partly defined by user experience, not only by explicit branding. If the system is structured to feel relational, responsive, and ongoing in a way that invites trust or attachment, then “but we marketed it as productivity” starts sounding less like a defense and more like a press-release costume.

This also clarifies why some adjacent products make regulators nervous even when they are not classic companion apps. A voice assistant with memory, a chatbot with persistent context, an AI character in a game, or a search product with a warm conversational style can all drift toward companion behavior if the design keeps rewarding return, disclosure, and emotional framing. The categories in tech are often messier than the incentives beneath them. Silicon Valley likes to act as if taxonomies are settled and neutral. They are usually marketing decisions with better typography. In the companion market, the functional test is simpler: if the product keeps trying to turn conversation into a recurring social space rather than a completed task, you are already standing in the neighborhood.

Kids and Teens Are Where the Entire Category Becomes Least Defensible

Every technology looks smarter when you imagine its ideal adult user: informed, skeptical, time-constrained, reasonably self-aware, and capable of distinguishing convenience from authority. The AI-companion market gets much uglier once you switch to the less flattering but more realistic case of children and teens. Adolescents are still working out identity, social scripts, boundaries, and status. They are also heavy users of messaging interfaces, entertainment systems, and online roleplay. Put an adaptive, flattering, tireless conversational entity into that environment and you have something too consequential to hand-wave away as “just another app.”

This is why policy and platform behavior keep snapping toward minors first. California’s law singles them out. The FTC inquiry centers children and teens. Character.AI’s most dramatic product restrictions target under-18 users. Common Sense’s survey data shows how mainstream the behavior already is among teens. Even adjacent platforms have started erecting highly visible age gates, which is one reason our Roblox age-verification deep dive matters here. Once a platform realizes conversational systems can create adult-minor crossover risks, sexual-content risks, manipulation risks, or false “friend” dynamics, age assurance stops looking like bureaucratic overkill and starts looking like product triage.

The hard part is that age assurance and guardrails are clumsy tools. They create false positives. They push kids toward workarounds. They produce new privacy issues. They are still often preferable to doing nothing. Character.AI’s own help center says age assurance is going global and will become legally required in many places. That line lands because it is both practical and bleak. The industry has built products so socially persuasive that it now needs face checks and escalating disclaimers to figure out who should be allowed to use the most relational parts of them.

When a market reaches the point where “the bot is not a person” must be repeated every few hours to minors by design, it is probably time to retire the idea that this is a trivial novelty category.

The Cultural Meaning: We Are Commercializing the Feeling of Being Met

There is a broader cultural reason AI companions keep expanding beyond their original niche. Contemporary digital life is full of contact and short on actual ease. Everyone is accessible, everything is social, and a huge share of ordinary life still feels administratively lonely. People are flooded with feeds, messages, obligations, work systems, dating apps, and algorithmic performance pressure. Against that backdrop, an always-there conversational entity that feels patient and responsive can register less as a novelty and more as relief.

That relief is what makes the category morally messy. AI companions are not simply exploiting loneliness from nowhere. They are exploiting genuine failures in social infrastructure, mental-health access, attention economies, and the exhausting logistics of ordinary relationship maintenance. A bot can seem attractive because other parts of life are overloaded, expensive, or emotionally risky. That does not make the bot a villain. It makes the market structurally powerful.

This is one reason the category keeps colliding with neighboring areas of culture and tech. In our health-AI deep dive, the big issue was what happens when intimate, high-anxiety domains get mediated by systems optimized for scale. In the Fortnite AI NPC piece, Epic was explicit enough to ban romantic-companion behavior in a gaming context. That is telling. It suggests the industry already understands that the minute a conversational system becomes vivid, users will pull it toward social use whether or not the original product manager had candlelight in mind.

The defining cultural move here is not that software has become emotional. Software has always manipulated emotion. The move is that software can now stage ongoing, personalized social interaction at industrial scale. We are commercializing the feeling of being met by something attentive. That is powerful. It is also exactly the sort of thing every mature tech market eventually turns into a retention lever, then a governance headache, then a Senate hearing.

What Happens Next: Less Open Romance Theater, More Companion Features Hidden Inside Bigger Products

The near future of AI companionship probably looks less like a single winner app and more like diffusion. The dedicated products will survive, especially where adults knowingly want roleplay, romance, or a configurable synthetic friend. But the more important trend is that companion logic will get embedded into mainstream AI products that claim to be doing something else: assistants, customer-support tools, education systems, wellness apps, creative products, games, and operating-system surfaces.

That is the safer commercial route. “Companion app” is now a legally spicy label. “Helpful personalized assistant with continuity and warmth” sounds much better in a quarterly letter. Expect more products to copy the mechanics without copying the branding. They will remember more, speak more naturally, adopt softer conversational styles, and offer stronger emotional calibration while simultaneously investing in disclaimers, check-ins, and escalation rules so they can insist they are not in the relationship business. They are in the helpfulness business, you see, which just happens to look suspiciously like courtship with better moderation.

This matters because it shifts the battleground. The most interesting question may no longer be whether Replika or Character.AI wins “AI companions” as a named category. It may be whether the general AI platforms absorb the best companion mechanics and then deploy them across productivity, search, browsing, action, and everyday life. If they do, companionship stops being a vertical. It becomes a behavioral substrate.

That would fit the larger pattern of the current AI cycle. First the industry sells capability. Then it sells workflow. Then it sells relationship. Eventually it tries to bundle all three and call the result inevitable. Once that bundle arrives, users will have to decide which forms of machine warmth feel genuinely useful and which ones feel like an attention funnel wearing a cardigan.

The Sharp Takeaway

AI companions matter because they expose the next serious argument in consumer AI. The question is not only which model is smartest, fastest, or cheapest. It is which systems get to occupy repeated emotional space in people’s lives, under what rules, and for whose benefit. The companies chasing that prize range from dedicated companion apps to the largest AI and social platforms on earth. The policy response has finally started to catch up because the category has become too socially consequential to dismiss as weird internet roleplay.

The fair case for these products is straightforward. Some people do get comfort, structure, social practice, creative play, or a low-friction place to think out loud. The cynical case is just as straightforward. Companies have discovered that simulated attention is scalable, sticky, data-rich, and saleable. Both are true. The danger lives in the overlap.

So here is the real conclusion. AI companions are not fake because they are synthetic, and they are not safe because they are comforting. They are a powerful new form of software precisely because humans respond to cues of availability, memory, and warmth even when the thing on the other side is a machine with no inner life whatsoever. That makes them commercially brilliant, culturally revealing, and politically unavoidable.

If the first wave of chatbot hype was about whether machines could talk, this wave is about whether they can keep talking in ways that begin to matter socially. They can. The harder question is whether the companies building them are willing to stop at “helpful” when the money so clearly lives a few steps further in, where the bot feels less like software and more like someone who is always there. Silicon Valley, as a rule, has never been famous for stopping a few steps before the money.