Post last updated 2 months, 3 weeks ago
A bit of history
ChatGPT was released a little more than 3 years ago, and I was one of the first among my peers to try it, and oh man, was I blown away. Not just me: the whole world entered the AI craze, funneling hundreds of billions into it and promising the new technology would bring Heaven to Earth. The Big Tech Bros kept saying almost every day that there would be no need for software engineers, artists, or basically anyone, hoping to replace the pesky workers with a new type of slave.
The wake-up
In the meantime, I kept trying different AIs1 for many tasks: searching, coding, generating images (I know, I know, bear with me), creating PowerPoints, or debugging issues. After the initial WOW moment, I started to come back down to Earth: for every impressive thing the AI spit out, unless it was some brain-dead simple task, you'd have to wrestle it for the right answer. Whether it made up facts about the world or produced code that almost works, in many cases you'd have to keep holding its hand: everyone suddenly became a prompt engineer.
Case in point: I was trying to adjust the CSS of this blog. I knew how I wanted it laid out, which colors and fonts I'd want to use, and so on, but I'm not the biggest fan of CSS, and even using the paid version of Gemini, it kept fucking up, either deleting already-working parts or giving correct-looking but broken code. I was dumbfounded, as web dev is one area AI is generally good at, and I probably wasted more time telling it it's stupid and to redo the answer than if I had handwritten everything myself.
Another subject of strong polarization
Nowadays, there seem to be two sides to the AI question. You have the AI evangelists, who believe AGI (Artificial General Intelligence) is right around the corner and will solve every problem we have. Usually, the Venn diagram of these people and the ones holding direct stakes in some AI company is pretty overlapping. The other side is viciously against it, arguing that it's a plague upon us and that we should fight it as much as we can2.
Another instance of a hallucination wasting a lot of my time was at work. I had to add some code to a new project following the template of old ones, basically copy-paste work. I thought I'd ask GPT or Claude to do it, so I gave it the initial code, pointed it to the template, and told it I needed it copy-pasted but with some other names. After about 5 minutes, it gave me something that looked decent3. I tested it briefly and pushed to production. A day later, when reviewing the overnight runs with the new code, I discovered the parameters had not been copy-pasted from the relevant template but from some other part of the initial file I had pointed it to. Now I had to clean up.
He keeps yapping
I'm somewhere in the middle. I don't despise it, but I don't see it as a savior either. I think ThePrimeagen gave a pretty good illustration of what AI is good at: for narrow things, it's often fantastic, but the larger the scope it works with, the more it goes haywire. Searching, especially, has become quite good with things like tool usage and RAG; boilerplate code is also almost always on point, and I've seen a few stories of it being helpful for suggesting which medical specialist to contact, among other things.
It's also fucking expensive, and for now only Nvidia is really making money off it. OpenAI, Anthropic, and all the others who actually train and run these models run on VC money and circular investments. Many people agree that we are in a bubble, and I guess when it bursts it will be disastrous for the economy. As an example, ChatGPT is used monthly by 800 million users, but fewer than 50 million pay for any subscription, and when subscriptions inevitably become more expensive, I doubt more people will rush to pay.
Another issue with AI is the social and political implications it brings up. I recently read some great pieces by tante and ava about how AI is just widening the class divide. While the effects on critical thinking of relying too much on AI already look concerning, the overuse of AI in education might really put a nail in the proverbial coffin of society. Harari has some good points about how AI could be used to influence us without us ever realizing we're being steered in one direction or another.
"OK dingus, your stance is?"
Bottom line is that I do use AI, but I try to limit the usage to speeding some things up rather than letting it do all the thinking. On this blog, I'm only using it for proofreading and the occasional CSS touch-up; everything else is my own work and thinking.
At work, I use it for code and parsing data, with varying degrees of success.
By AI I mean generative models for text and images/video.↩
Not necessarily in the Terminator sense, but rather in a "we'll lose a big part of our humanity by delegating everything, from art to thinking, to it" sense. I think they have quite a few good arguments for this, even if I don't fully subscribe to their stance.↩
That's another pitfall with AI: it says everything with such confidence, or it generates so much, that you don't even bother reviewing or double-checking it.↩