Tech Expert: AI Hallucination Is Not Fixable

Hopefully by now most people have heard that ChatGPT and similar language models can confidently pump out falsehoods.

Came across this really interesting article in Fortune. Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory, is quoted as saying the AI hallucinations that ChatGPT experiences are not fixable.

Yet what’s the next thing the article talks about? It claims the McKinsey Global Institute projects AI will add somewhere around $3 trillion to $4 trillion to the economy. It talks about how Google is offering up similar technology to news media companies. It talks about how Sam Altman, CEO of OpenAI, the maker of ChatGPT, is optimistic about the technology, and it quotes another CEO, Shane Orlick of Jasper AI, as saying “hallucinations are actually an added bonus.”

“Hey, this doesn’t work right, and the issue doesn’t seem fixable.” “FULL STEAM AHEAD, it’ll be better for the economy and the things it gets wrong are actually a good thing!” How deluded are these people? And when does it become socially acceptable to tell someone they’re full of crap? Never? Well then I guess we’ll just continue to live in a world where big money can lie on an industrial scale and never be held accountable.

This is not unlike our field, where AI can and does confidently pump out wrong words. We learned that AI wasn’t as good as we are, particularly on the African American Vernacular English dialect, from the 2020 Racial Disparities in Automatic Speech Recognition study, where AI accuracy scored as low as 25%. When we were tested in the Testifying While Black study, we were about 80% accurate. When we broadcast that in professional circles, the big money in our field ignored it and kept with their agenda. After all, what kind of monster would let science, facts, and egalitarianism stand in the way of a corporate operation designed to push the market in a singular direction?

Succinctly, it is not the most meritorious narrative that seizes the day, but the strongest. That’s why tech continues to pump out the message that it’s going to be a massive boon to the economy. Who cares if it’s true? It keeps investor money flowing, avoids AI winter, and that money gives them more legitimacy as they keep pumping out the aforementioned message that then lures in more investors and money. That’s also why I sought funding from the field. Our message could overpower big box on the digital v steno debate, and then, right or wrong, we’d be victorious, and the win would be self-reinforcing. Not a single corporation in the market today would dare to do the dishonest and illegal things I’ve documented over the last few years.

It makes me wonder if the answer is to “corporatize” our media and seek shareholders. I’ll take answers in the comments if anyone will share. For those of you that don’t donate, would you put some money on the table if there was a return involved? How much money? How much return? For those of you that do donate, feel free to answer too. There is some reason to suspect a corporate accountability media company would be successful. It’s been said that millennials alone are going to be 75% of the workforce by 2025, and millennials have a lot of reasons to love corporate accountability — the main one being that, rhetorically, there hasn’t been any since we were born. Monetize what people want, get the shareholders some money, and do it with a flavor that distinguishes us from nonprofits in the space.

We can see with our own eyes that there is no position too absurd for big money. We can also see how an internet campaign by one guy with some hardcore supporters can run circles around big money. Combining these two ideas, why not run circles around big money for big money?

I’d do it. Who’s with me?

2 thoughts on “Tech Expert: AI Hallucination Is Not Fixable”

  1. I went through the whole indoctrination process and actually did a few jobs using a piece of software backed by an institute that gets mentioned here a lot. Ironically, it is that experience that makes me opine that I am not worried about AI yet. I live in a place that has its own very distinct regional dialects and a large African American population. We do black lung depos regularly. The output for either of those segments of our population is utterly useless. I do not think CR agencies are going to stand for paying for 120 pages to be run through the proprietary AI software only to trash 30 or 40 of them in the scoping process. It is also ponderously slow to get to the quality clients have come to expect; a 100-page document could take 2 days to scope depending on what the AI gives you. That is not sustainable for an agency that does any kind of volume. Finally, an agency can place an ad and get a person with a reasonable grasp of English and Word to transcribe and scope. How many people in any market area are going to have the time to learn complicated, proprietary software? And don’t even get our reporters started on field breakdowns … It may eventually get there, but I think we are safe for a long time to come.
