
People think AI is conscious. What does this mean for OpenSim’s bots? – Hypergrid Business

(Image credit: Maria Korolov via Adobe Firefly)

I’ve been interacting with OpenSim bots (or NPCs) for about the same amount of time I’ve been working with OpenSim. That’s about 15 years. (God, has it really been that long?)

I had hoped that writing about OpenSim would be my full-time job, but unfortunately OpenSim never took off. Instead, I worked on cybersecurity and, more recently, generative AI.

But then I saw a report about a new study in AI and I thought, this could really be something for OpenSim.

The study was published last April in the journal Neuroscience of Consciousness, and it showed that most people (67%, to be exact) attribute some level of consciousness to ChatGPT. And the more people use these AI systems, the more likely they are to view them as conscious beings.

Another study, conducted in May, found that 54% of people who chatted with ChatGPT thought they were talking to a real person.

Now, I’m not saying that OpenSim grid owners should install bots on their grids that pretend to be real people to attract more users. That would be stupid, expensive, wasteful, likely illegal, and definitely unethical.

But if users understand that these bots are powered by AI and not real people, they may still enjoy interacting with them and become attached to them, just as we do to brands, cartoon animals, and fictional characters. Or, yes, to a virtual girlfriend or boyfriend.

You can watch OpenAI’s recent GPT-4o presentation in the video below. Yes, that’s right, ChatGPT is the one that sounds suspiciously like Scarlett Johansson in “Her.” I’ve set the video to start at the point where the presenters are chatting with it.

You can see why ScarJo was upset, and why that particular voice is no longer available as an option.

As of this writing, the voice chatbot they are demonstrating is not yet widely available. However, the text version is available, and text is the most common interface in OpenSim anyway.

GPT-4o costs money to use: you pay to send a question and get an answer. Input costs $5 per 1 million tokens (about 750,000 words), and output costs $15 per 1 million tokens.

A page of text is roughly 250 words, so 1 million tokens is roughly 3,000 pages. That means $20 buys a lot of round trips. But there are cheaper platforms.

For example, Anthropic’s Claude outperforms ChatGPT on some benchmarks, and costs slightly less: $3 for 1 million input tokens, and $15 for 1 million output tokens.
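To make the pricing arithmetic concrete, here is a minimal sketch of a cost estimator using the per-million-token rates quoted above. The function name and the sample exchange size are just illustrative.

```python
# Per-million-token rates quoted above, in USD.
RATES = {
    "gpt-4o": {"input": 5.00, "output": 15.00},
    "claude-3.5-sonnet": {"input": 3.00, "output": 15.00},
}

def chat_cost(model, input_tokens, output_tokens):
    """Estimate the cost in USD of one question-and-answer exchange."""
    r = RATES[model]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

# A typical short NPC exchange: ~200 tokens in, ~100 tokens out.
print(f"${chat_cost('gpt-4o', 200, 100):.4f}")  # $0.0025, a fraction of a cent
```

At those rates, a bot would have to hold thousands of conversations before the bill becomes noticeable, which is why per-exchange cost matters more than the headline per-million price.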

However, there are free, open-source platforms that you can run on your own servers that have similar performance levels. For example, on the LMSYS Chatbot Arena Leaderboard, OpenAI’s GPT-4o took first place with 1287 points, followed by Claude 3.5 Sonnet with 1272 points, and Meta’s (mostly) open-source Llama 3 not far behind with 1207 points. Across the top of the chart are several other open-source AI platforms, including Google’s Gemma, NVIDIA’s Nemotron, Cohere’s Command R+, Alibaba’s Qwen2, and Mistral.
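Most self-hosted serving stacks for these open models (vLLM, Ollama, llama.cpp's server) expose an OpenAI-compatible chat endpoint, so a grid-side script could talk to a local Llama 3 the same way it would talk to GPT-4o. Here is a sketch of what such a request could look like; the endpoint URL, model name, and persona text are assumptions for a hypothetical local setup, and the code only builds the request so it stays self-contained.

```python
import json

# Assumed local endpoint; vLLM, Ollama, and llama.cpp's server all expose
# an OpenAI-compatible /v1/chat/completions route (the URL is an assumption).
ENDPOINT = "http://localhost:8000/v1/chat/completions"

def npc_request(persona, user_message, model="llama-3-8b-instruct"):
    """Build an OpenAI-style chat-completion request body for an NPC reply."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": persona},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": 150,   # keep NPC replies short (and cheap)
        "temperature": 0.8,
    }

payload = npc_request(
    "You are a friendly innkeeper NPC in a virtual world. "
    "Always make clear you are an AI if asked.",
    "Hi! Any rooms free tonight?",
)
print(json.dumps(payload, indent=2))
```

Sending it would then be a single HTTP POST of that JSON body, and because the request shape is the same, switching between a local open-source model and a commercial API is mostly a matter of changing the URL and model name.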

It’s easy to see OpenSim hosting providers adding AI services to their package offerings.

(Image credit: Maria Korolov via Adobe Firefly)

Imagine the potential for creating truly immersive experiences in OpenSim and other virtual environments. If users tend to view AI entities as conscious, we could create incredibly realistic and responsive non-player characters.

This could revolutionize storytelling, education, and social interaction in virtual spaces.

This could include bots that can form meaningful relationships with users, AI-based characters that can adapt to individual users’ preferences, and virtual environments that feel alive and dynamic.

And there’s also the potential for interactive storytelling and games with more engaging quests and stories than ever before, creating virtual assistants that feel like real companions, or even building communities where the line between AI and human participants blurs.

For those using OpenSim at work, there are also business and educational applications in the form of AI Tutor, AI Executive Assistant, AI Salesperson, and more.

But as excited as I am about this possibility, I can’t help but feel worried.

As the study authors point out, AI poses some real risks.

(Image credit: Maria Korolov via Adobe Firefly)

First, there is the risk of emotional attachment. If users begin to view AI entities as conscious beings, they may form deep and potentially unhealthy bonds with these virtual characters. This can lead to a variety of problems, from social isolation in the real world to emotional distress if these AI entities are altered or removed.

We’ve already seen people feel real pain when their virtual girlfriends are shut down.

And then there’s the problem of blurred reality: As the line between AI and human interaction becomes less clear, users may have trouble distinguishing between the two.

Personally, I don’t worry about this problem that much. There have been people complaining since the days of Don Quixote that other people can’t tell the difference between fantasy and reality. Maybe even before that. Maybe the cavemen were sitting around and saying, “Look at all the cave paintings that the young men have. They could be out hunting, but instead they’re sitting in caves looking at the paintings.”

Or even earlier, when language was invented: “Look at those young men. They’re sitting around talking about hunting instead of going out into the jungle and actually catching things.”

When movies were first invented, when people started getting “addicted” to television or video games… we’ve always had moral panics about new media.

The problem is that those moral panics were sometimes justified. The pulp fiction the printing press gave us may not have rotted our brains, but Mao’s Little Red Book, the Communist Manifesto, and Mein Kampf? Those aided and abetted real harm.

So what worries me most is the potential for exploitation: bad actors could take advantage of our tendency to anthropomorphize AI, creating deceptive or manipulative experiences that prey on users’ emotional connections to make them more tolerant of evil.

But I don’t think that’s something we need to worry about at OpenSim. Our platform doesn’t have the kind of reach needed to create new dictators!

I think the worst case scenario is that people get so involved that they spend a few more dollars than they planned.
