I recently read a super interesting yet slightly terrifying study by researchers at the University of Texas at Austin, Texas A&M, and Purdue, which shows that large language models (LLMs) can suffer lasting cognitive damage when they're trained on low-quality, high-engagement social media posts.
Imagine listening to your favourite podcast. You rewind it to go over something you missed, but each time you replay it, it’s somehow different.
That sounds frustrating, right? Yet it's likely what would happen if we simply stuffed large language models into screen readers in a lazy attempt to avoid publishing accessible content.
I've developed a weird fascination with challenging language models, thinking up edge-case questions or scenarios to work out whether a model can really do what's being claimed, or whether it's just reasonably convincing on the surface.
Most language models are useful, but they tend to falter once you drift outside the realm of common knowledge and into nuanced territory.
So, since the launch of axe Assistant, I've spent a few days testing it out. This post documents my findings!
The problem with AI, especially large language models (LLMs), is that most people don't really understand how they work. This is not an accident! It's in many companies' interests to keep the mystery alive, using smoke and mirrors to market their products as smarter, safer, or more "human" than they actually are.
As humans, we love to personify things, because our only reference point is often ourselves. When something talks like us, we assume it thinks like us too.
I'm seeing more and more people using ChatGPT, yet being weirdly secretive about it. They're augmenting their own work, increasing their knowledge and productivity, but not really sharing how they suddenly appear to have stepped up a gear.
There appears to be a common fear that by admitting you use ChatGPT, your work no longer has value, that you somehow look less smart, or that you're now cheating when giving advice. But this couldn't be further from the truth, because ChatGPT is just a tool!