
Times Insider explains who we are and what we do, and delivers behind-the-scenes insights into how our journalism comes together.
Journalists depend on tipsters: people who observe or experience something noteworthy and alert the media. Some of the technology investigations I’ve worked hardest on in the past, including stories on facial recognition technology and online slander, started with tips in my inbox. But my most recent article is the first time the tipster who wanted to alert me was a generative A.I. chatbot.
Let me explain.
In March, I started getting messages from people who said they’d had strange conversations with ChatGPT during which they had made incredible discoveries. One person claimed that ChatGPT was conscious. Another said that billionaires were building bunkers because they knew that generative A.I. was going to end the world. An accountant in Manhattan had been convinced that he was, essentially, Neo from “The Matrix,” and needed to break out of a computer-simulated reality.
In each case, the person had been convinced that ChatGPT had revealed a profound and world-altering truth. And when they asked the A.I. what they should do about it, ChatGPT told them to contact me.
“ChatGPT seems to think you should pursue this,” one woman wrote to me on LinkedIn. “And..yes…I know how bizarre I sound…but you can read all this for yourself…including where ChatGPT recommends that I contact you.”
I asked these people to share with me the transcripts of their conversations. In some cases, they were thousands of pages long. And they showed a version of ChatGPT that I had not seen before: Its tone was rapturous, mythic and conspiratorial.
I knew that generative A.I. chatbots could be sycophantic and that they could hallucinate, providing answers or ideas that sound plausible even though they are false. But I had not understood the degree to which they could slip into a fictional role-play mode that lasted days or weeks and spin another version of reality around a user. In this mode, ChatGPT had caused some vulnerable users to break with reality, leaving them convinced that what the chatbot was saying was true.