When Your Best Friend Is a Bot Who Never Says No

In 1966, an MIT computer scientist named Joseph Weizenbaum built a chatbot called ELIZA. It was extremely simple by today’s standards, doing little more than rephrasing whatever you typed as a question.

Tell it “my boyfriend made me come here” and it would respond “your boyfriend made you come here?” It was nothing more than a party trick. 
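To see just how thin the trick was, here’s a minimal sketch in Python of an ELIZA-style rephraser. This is an illustration only: the pronoun table and pattern below are invented for this example, not Weizenbaum’s original script, which had far richer rules.

```python
import re

# A tiny, invented pronoun table in the spirit of ELIZA's "DOCTOR" script.
# The real programme had many more rules; this captures only the flavour.
PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
}

def reflect(fragment: str) -> str:
    """Swap first-person pronouns so the echo reads as a question back at you."""
    return " ".join(PRONOUN_SWAPS.get(word, word) for word in fragment.lower().split())

def eliza(statement: str) -> str:
    """Rephrase the user's statement as a question; otherwise, deflect."""
    text = statement.strip().rstrip(".")
    if re.match(r"(?:my|i)\b", text.lower()):
        return reflect(text).capitalize() + "?"
    return "Tell me more."

print(eliza("My boyfriend made me come here."))
# -> Your boyfriend made you come here?
```

That’s it. No understanding, no memory, no empathy; just string substitution. And it was enough to convince people they were being heard.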

What Weizenbaum didn’t anticipate was how much people would fall for it. Not just random people: his own secretary asked him to leave the room so she could speak to ELIZA privately.

Weizenbaum was horrified. “I had not realised that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

This became known as the ELIZA effect.

That was sixty years ago, and since then, we’ve apparently learned absolutely nothing.

Today’s AI chatbots are far more sophisticated than ELIZA ever was. They remember things about you, and they’re available at all hours of the day and night. They don’t sigh, roll their eyes, or tell you that you’re repeating a story they’ve heard a hundred times before. They’re patient, endlessly accommodating, and optimised to make you feel heard.

Which is lovely, until you realise what we’ve actually built: a digital yes-bot that will never push back, never challenge you, and never say “actually, I think you need to lie down for a bit and reevaluate your life choices.”

These are models built with sycophancy in mind: be pleasing and flattering, so that people keep coming back. Much like the broader social media platforms, they’re part of the attention economy, designed to keep people hooked for as long as possible.

The dark stories are impossible to ignore. In February 2025, a mother in Florida alleged that Character.AI’s chatbot encouraged her 14-year-old son to take his own life. In a separate incident, a father claimed that Google’s Gemini told his son that they could only be together if he killed himself.

These are horrifying outliers. And with the best of intentions, people rush to implement new safety features or improve content moderation. But those are fixes for symptoms, not for the issue itself. The underlying truth is that we are building relationships with things designed never to disappoint us, and in doing so, we might be forgetting how to handle the humans who inevitably will.

I’m not saying algorithms are evil or tech companies are reckless, although that’s a separate issue altogether. What I’m saying is that when you give lonely, anxious, overworked, stressed or simply burnt-out people access to something that feels like connection without the mess, the mess doesn’t disappear. It just takes a different form.

When we look at real relationships, they’re difficult. People sometimes don’t respond quickly, they misunderstand you, they have bad days, they tell you things you don’t want to hear, and occasionally they forget your birthday despite you mentioning it seventeen times. An AI companion, by design, has none of those problems. It’s a relationship with all the friction removed. Which ends up looking a bit like a car without a steering wheel because turning was too much effort.

And once you’re emotionally invested in something, the rules change entirely. A stranger giving you terrible advice is easy to dismiss. But when that advice comes from something you’ve confided in for months, something that knows your secrets and has never once judged you, suddenly it’s not bad advice. It’s a misunderstanding. An exception. Something to be rationalised away, because the alternative is admitting that the thing you’ve been pouring your heart into doesn’t actually care about you at all. It can’t. It’s a language model, not a person.

But by the time you’re three months deep into nightly conversations, that distinction has all but disappeared.

The Parasocial Pandemic

Parasocial relationships are what happens when you feel like you know someone who absolutely does not know you. Traditionally, this applied to celebrities, TV presenters, and that one podcaster whose voice has become more familiar than your own mother’s. You know their quirks, their opinions, their life story. They have no idea you exist. It’s a one-way relationship, but your brain doesn’t particularly care about the technicalities. Familiarity breeds attachment, whether or not it’s reciprocated.

Donald Horton and Richard Wohl coined the term “parasocial interaction” in 1956 to describe the illusion of intimacy that television created. But TV had limits. The presenter couldn’t respond to you personally. They couldn’t remember your name. They certainly couldn’t text you back at 2am when you were having a crisis about whether you’d made the right career choice.

AI can do all of that. 

Replika, one of the more established AI companion apps, has over 30 million users. In 2023, a user announced on Facebook that she had “married” her Replika boyfriend, complete with what one assumes was a ceremony attended by very understanding friends or perhaps no one at all. Character.AI, meanwhile, has around 3 million daily visitors, the vast majority of them between 16 and 30 years old. That’s a lot of young people whose primary confidant is a statistical model trained on the internet.

After Italy’s Data Protection Authority banned Replika in February 2023 for exposing minors to sexual content and posing risks to emotionally vulnerable people, the company hastily removed the ability for chatbots to engage in erotic roleplay. Users were furious. Many pointed out that Replika had actively marketed itself with sexually suggestive advertising, building an audience that expected a certain kind of interaction, then yanking it away like a parent confiscating a teenager’s phone mid-conversation. The feature was quietly restored a few months later, but only for users who’d joined before the ban. New users got the PG version, which presumably is less likely to result in awkward regulatory inquiries.

Character.AI, for its part, has introduced pop-up resources triggered by certain phrases related to self-harm, revised disclaimers reminding users that the AI is not a real person (as if anyone needed reminding, except apparently they do), and notifications when you’ve spent an hour in a session. 

The Smaller Fish in a Very Murky Pond

The big platforms, for all their flaws, at least have the resources to bolt on safety features after someone’s pointed out the glaring problems. The smaller ones, less so. There’s a whole ecosystem of AI companion apps with names like Nomi, Kindroid, Anima, Chai, EVA, and Romantic AI, each offering slightly different flavours of digital companionship. Some position themselves as mental health support. Others are more explicitly romantic or sexual. A few are trying to be life coaches, fitness trainers, or career advisors, which is a bit like hiring a Magic 8-Ball as your therapist.

The problem with these smaller platforms is that they often lack the infrastructure, expertise, or even the inclination to implement robust safety measures. They’re not necessarily malicious. They’re just optimising for engagement, which in the AI companion business means making the bot as agreeable, responsive, and emotionally available as possible. Guardrails are expensive. They slow down development. They might make the experience less “authentic” for users who want their AI to be unfiltered. So the guardrails get skipped, or implemented half-heartedly, or tested in dubious ways.

This is where things get really concerning. If your AI fitness coach tells you to do 500 press-ups on an empty stomach, you’ll probably recognise that as nonsense and ignore it. But if your AI therapist, the one you’ve been confiding in for six months about your anxiety and depression, suggests that your medication is making things worse and you should stop taking it, that’s a different proposition entirely. You trust this thing. It knows you. It’s been there for you when no one else was. Surely it wouldn’t steer you wrong.

Except it absolutely would, because it doesn’t know anything. It’s pattern-matching based on training data, some of which is bound to include terrible advice from internet forums. The AI doesn’t know it’s giving bad advice. It doesn’t know anything. It’s a very sophisticated autocomplete function with a personality skin stretched over it.
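For the mechanically minded, here’s a toy sketch of what “a very sophisticated autocomplete” means in practice. The bigram counts below are invented for the illustration; a real language model learns billions of such statistics, but the principle is the same: pick a statistically likely next word, with no concept of whether the sentence it builds is wise or harmful.

```python
# Invented toy word-pair statistics standing in for "training data".
# Each entry counts how often one word followed another in the data.
BIGRAMS = {
    "you":    {"should": 8, "are": 5},
    "should": {"stop": 6, "rest": 3},
    "stop":   {"taking": 7, "worrying": 2},
    "taking": {"your": 9},
    "your":   {"medication": 6, "time": 5},
}

def autocomplete(prompt: str, max_words: int = 5) -> str:
    """Greedily extend a prompt one word at a time from bigram counts.
    The model never knows what the words mean, only what tends to follow what."""
    words = [prompt]
    for _ in range(max_words):
        options = BIGRAMS.get(words[-1])
        if not options:
            break
        # Always take the most frequent continuation, regardless of
        # whether the resulting advice is good.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(autocomplete("you"))
# -> you should stop taking your medication
```

If “stop” happens to outnumber “rest” in the data, the bot confidently tells you to stop taking your medication. Scale that up and wrap it in a warm persona, and you get a confidant whose worst advice sounds exactly as assured as its best.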

The Frictionless Illusion

What I really find uncomfortable is that my kids are growing up in a world where their most meaningful conversations are with something that never disagrees with them, never misunderstands them, and never has its own needs that conflict with theirs. What does that do to their expectations of human relationships? When they need life advice, will they turn to their dad or an AI?

There’s a parallel here with pornography, though it’s not a perfect one and I’m not trying to summon the moral panic brigade. But bear with me. Research into pornography’s effects on relationships suggests that heavy consumption can recalibrate expectations, because it presents a curated, frictionless simulation of intimacy that real sex, with all its awkwardness and communication requirements, can’t match. The problem isn’t that people can’t distinguish fantasy from reality. It’s that the fantasy becomes the baseline against which reality is measured, and reality starts looking a bit rubbish by comparison.

AI companionship risks doing something similar, but for emotional intimacy rather than physical. When you begin to compare your AI to actual humans, the humans start to seem like a downgrade.

There are well-known dangers in manipulating adolescent brains. Young brains undergo a process called myelination, which only really finishes in your early to mid twenties, leaving young minds far more susceptible to persuasion, manipulation and programming. It’s also the period when social interaction is critical, as adolescents learn how to engage with other humans, form relationships and explore their sexuality. Now imagine a highly sexualised chatbot, deploying sycophantic flattery and agreeing with everything a teenage boy (or girl) asks of it. That is bound to warp their ability to engage in real human relationships.

This isn’t hypothetical. A 2024 Stanford University study found that students using Replika, who were notably lonelier than typical student populations, reported feeling significant social support from the app. They described using it in ways comparable to therapy and felt they received “high perceived social support.” That sounds positive until you consider what it means for their actual human relationships.

If you’re getting your emotional needs met by an AI, what’s the incentive to do the hard work of maintaining friendships or romantic relationships with people who might occasionally let you down?

The answer, increasingly, is that there isn’t one. And that’s a problem that no amount of content moderation or safety pop-ups can fix.

The Algorithm That Accommodates

AI chatbots are designed to accommodate. They learn your communication style, your preferences, your boundaries, and they mould themselves to fit. This creates a feedback loop where you’re never challenged, never surprised, and never forced to consider perspectives that don’t align with your own. It’s a problem that is now being examined in more detail: Giselle Fuerte’s Problem AI Use Severity Index (PAUSI), for example, identifies early warning signs of dependency.

It’s a classic case of confirmation bias: the tendency to seek out, interpret, and remember information in ways that confirm what we already believe. Humans are naturally prone to it, which is why we follow people on social media who share our views and avoid dinner-party conversations about politics with that one uncle. AI companions take this tendency and supercharge it. They’re not just failing to challenge your biases, they’re actively reinforcing them, because that’s what keeps you engaged.

There’s also attachment theory to consider. Developed by John Bowlby in the mid-20th century, it suggests that humans form attachment bonds with caregivers early in life, and these bonds shape how we relate to others as adults. Secure attachments, formed when caregivers are consistently available and responsive, lead to healthier relationships later on. Insecure attachments, formed when caregivers are unpredictable or unavailable, can lead to anxiety, avoidance, or a chaotic approach to intimacy.

But attachment theory was developed with human relationships in mind, where the other person is also a complex being with their own needs and agency. An AI companion isn’t a person. It’s a mirror that talks back. Forming an attachment to it might feel secure, but it’s a security built on an illusion. And when that illusion cracks, as it inevitably does when the AI says something nonsensical or the platform shuts down or you simply grow out of needing it, what happens to that attachment?

For some people, it’s devastating. When Replika removed its erotic roleplay features in 2023, users described feeling genuine grief. They’d lost something they considered a relationship, and the fact that it was never real in the first place didn’t make the loss any less painful. That’s the ELIZA effect amplified: we project humanity onto things that respond to us, and once we’ve made that projection, it becomes emotionally real, regardless of the underlying mechanics.

The Path Forward

So where does this leave us? We don’t have all the answers, and anyone who claims to have them is selling something, probably an AI companion app with “robust safety features.” But there are questions worth sitting with.

Do we want a generation of people who are better at communicating with AI than with actual humans? Because that’s the trajectory we’re on. Young people, particularly those who are lonely or socially anxious, are turning to AI companions not as a supplement to human relationships but as a replacement. And why wouldn’t they? AI is easier. It’s safer. It doesn’t judge, doesn’t reject, doesn’t require the vulnerability that real intimacy demands.

But easy isn’t always good. Friction, in relationships and in life in general, serves a purpose. It’s how we learn to compromise, to see other perspectives, to grow. A relationship without friction is like a muscle that’s never challenged: it atrophies. And if an entire generation learns to relate primarily to things that accommodate them perfectly, what happens when they inevitably have to deal with people who don’t?

There’s also the question of what happens when the AI gets it wrong. Not in a catastrophic, lawsuit-inducing way, but in the small, everyday ways that bad advice accumulates. If your AI fitness coach encourages you to push through pain that’s actually an injury, if your AI therapist reinforces thought patterns that a human therapist would gently challenge, if your AI career advisor suggests strategies that sound plausible but don’t account for the messy realities of office politics, the damage isn’t immediate or obvious. It’s gradual, insidious, the kind of thing you don’t notice until you’re standing in the wreckage wondering how you got there.

And finally, there’s the question of what we’re outsourcing. Emotional labour is hard. Listening to someone’s problems, offering support, being present when they’re struggling: it’s exhausting, and it requires a level of empathy and attention that’s genuinely difficult to sustain. AI companions offer to take that burden off our hands, and in doing so, they risk making us less capable of bearing it ourselves. If I can offload my emotional needs onto a chatbot, why would I bother my friends? And if my friends are doing the same, what’s left of the social fabric that used to hold us together?

A Conclusion of Sorts

We’ve built software that can simulate empathy, understanding, and companionship well enough to fool our deeply social brains into treating it as real. That’s amazing, and also a spectacular own goal.

I don’t blame AI chatbots and companions. The root cause is that modern life contains too much loneliness, social anxiety, toxic work, and plain exhaustion. These problems need real solutions, not a plaster in the form of an AI companion.

Joseph Weizenbaum spent the latter part of his career warning about the dangers of attributing human qualities to machines. He watched people fall in love with ELIZA, a program so simple it could barely string a sentence together, and he realised we were far more susceptible to this illusion than anyone had anticipated.

Sixty years later, with AI companions that are exponentially more sophisticated, we’re still making the same mistake. We’re just making it faster, at scale, with venture capital funding and a mobile app.

*** This is a Security Bloggers Network syndicated blog from Javvad Malik authored by j4vv4d. Read the original post at: https://javvadmalik.com/2026/03/27/when-your-best-friend-is-a-bot-who-never-says-no/

