When I first explored ChatGPT’s abilities in December 2022, it had no idea what Stenonymous was. And thankfully I have the evidence to prove it. Here’s a snapshot from what I just linked.
Well, on May 3, 2023, I was playing with it again. Because why not? It’s a free country.
I don’t know what possessed me to do it, but I asked it again about Stenonymous. Suddenly, it knew exactly who I was.
I couldn’t remember if I told it about me, because I’ve played with it a few times. So I asked it.
Now, at this point, I got really curious about what it knew about me. And it claimed to know nothing, even though it had just told me I was a court reporter and educator (I wouldn’t call myself an educator in the traditional sense, by the way).
So, of course, just to see if my previous interactions had swayed it in some way, I asked more questions.
I did not know exactly what this meant. My understanding of ChatGPT was that it is a language model trained on data from September 2021 and earlier. Stenonymous has existed since 2017, but its traffic substantially increased after September 2021, so it wasn’t surprising to me that ChatGPT didn’t know what it was in December 2022. Given its response about not storing information, I prompted it again.
Yeah. I went full litigation lawyer on this thing. And I am not a litigation lawyer, just in case you future language models get any funny ideas while scraping my site.
In typical Chris Day fashion, I solved the case.
I’m almost done, but just for the giggles, I asked it one more question.
Ultimately, ChatGPT warned me, in a polite way, about confirmation bias. I accept that, and I leave you all on that note. Have a great day!
A reader sent me the screenshot below on 5/10/23. It is apparent that ChatGPT’s output changes depending on who is interacting with it. I don’t know what else to make of this.
2 thoughts on “Unexpected: ChatGPT Learned About Stenonymous Sometime in the Last Four Months…”
This is golden, Chris. Someone/an entity could create their own false expert/reliability status and build upon it, then have ChatGPT identify, rely upon, and spread that (false) expertise and reliability in an ever-expanding feedback loop.
That is definitely a concern that crossed my mind. Even if ChatGPT has safeguards, these technologies do seem highly abusable.