All AI is doing is more, not better
AI is producing more content, more images, more videos and, if you will, more code
But better?
Better content, better images, better videos, better code?
No
More, more things, more friends, more of everything: it has never led to a better life
Think of all the things you’ve got more of, maybe to the point where you even had too much of them
Didn’t they make your life worse?
Answer that question for yourself. For me, every time it was more, more, more, it diminished the quality of my life
What you want is better things, not more things
Can AI deliver that?
No
That’s all you need to know about the hype, about all the lies being told to get another round of venture capital, and all the lies told to sell you something
Use it to spam even more (this will make the Internet a worse place), but don’t expect it to make your life better
Not going to happen
I recently saw a video by some “Luke Smith”. A based nerd.
He said something about AI that I very much like, and it resonates with me:
Calling it ‘AI’ is a genius marketing move that turned it into a billion-dollar industry on the name alone.
It’s not AI. It’s just an LLM.
And this LLM is just a software program that takes input and produces output, using a model trained on millions of data points and some super-powerful computers to do the computation.
The genius stroke is that when you ask it a question in the chat interface, it doesn’t deliver its output in an instant, the way we programmers write our programs to do. Instead, the marketing folks decided to stream the output like a chat conversation between two humans.
It could give the answer in a flash, in one big chunk, but they added a final method call that converts the output into a human-like conversational chat.
THAT IS NOT HOW IT WORKS INTERNALLY! But it’s enough to fool millions of people into thinking that it is AI and that it is really doing the ‘thinking’. It is not, and it never will be.
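The delivery trick described above, a finished answer dripped out word by word so it reads like live typing, can be sketched in a few lines of Python. All names here are hypothetical; this is only the cosmetic mechanism the quote alleges, not a claim about how any particular product is actually implemented:

```python
import time

def fake_stream(full_answer, delay=0.0):
    """Take an answer that already exists in full and emit it
    word by word, with an artificial pause between chunks."""
    chunks = []
    for word in full_answer.split():
        time.sleep(delay)  # the "typing" pause; purely cosmetic
        chunks.append(word)
    return chunks

# The complete answer is computed up front; only the delivery is staggered.
answer = "The answer was ready the whole time."
for chunk in fake_stream(answer, delay=0.0):
    print(chunk, end=" ")
```

With `delay=0` the illusion disappears and the whole answer arrives at once, which is exactly the quote’s point.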
LLMs are a subset of AI, so it’s kind of AI, even though it’s pretty dumb for an ‘intelligence’.
There’s an unbelievable hype around LLMs, like building AGI from that, which will never happen. I think what LLMs are currently really great at is translations, text summaries, and in general understanding commands in spoken language and keeping the context. For most other use cases they still have to prove that they can actually create value. Just because an LLM randomly spits out something that looks good doesn’t make the output valuable. It becomes valuable when its output is not random but what you had in mind. And I think we are still pretty far away from that.
I even believe most of the promises won’t be fulfilled by LLMs, simply because LLMs have no clue what they are talking about. If you look at the development since ChatGPT 3, you can see that a lot of the effort went into fixing the glitches that made it look ‘dumb’, like the strawberry problem. The underlying problem is still there: ChatGPT has no clue what it is talking about; there is zero intelligence, only probabilities. They ‘simply’ put a patch on it so that you don’t see the underlying problem anymore.
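For context, the strawberry problem refers to LLMs famously miscounting the letter ‘r’ in “strawberry”. The correct count is three, and the task is trivial for ordinary, deterministic code:

```python
# Counting letters is a mechanical string operation, not a guess.
word = "strawberry"
count = word.count("r")
print(f"{word} contains {count} r's")  # prints: strawberry contains 3 r's
```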
And what you wrote: that’s good enough to fool millions, if not billions, of people into believing they have some intelligence in front of them.