Let’s be honest — a lot of people (including me, until recently) interact with ChatGPT or AI tools like they’re magic.
You type something, it gives you answers, and you move on.
But once I slowed down and started looking under the hood, I realised:
AI isn’t smart. It’s just really, really good at guessing.
So what exactly is it doing?
When you ask a question, the AI doesn’t “know” the answer.
It simply predicts what the next word — or more accurately, the next token — should be.
Think of tokens as chunks of text — not always full words. For example:
- The word “investment” might be 1 token.
- But “investor-friendly” could be 2 or 3, depending on the tokenizer (there’s a quick sketch below if you want to check for yourself).
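Here’s a minimal sketch of how to see the split, assuming the tiktoken library and its cl100k_base encoding (my assumption; every model family has its own tokenizer, so your exact counts may differ):

```python
# Quick way to see how text actually gets split into tokens.
# Assumes `pip install tiktoken`; cl100k_base is one OpenAI encoding,
# other models use other tokenizers, so exact splits will vary.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for text in ["investment", "investor-friendly"]:
    token_ids = enc.encode(text)
    pieces = [enc.decode([t]) for t in token_ids]
    print(f"{text!r} -> {len(token_ids)} token(s): {pieces}")
```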
Now, here’s what blew my mind:
These models are not pulling answers from a knowledge bank.
They’re calculating probabilities: “What’s the next most likely token based on everything I’ve seen in training?”
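To make “calculating probabilities” concrete, here’s a toy sketch. The numbers are made up purely for illustration; a real model scores every token in a vocabulary of tens of thousands, but the mechanics are the same: turn scores into probabilities, then pick from that distribution.

```python
import math

# Toy illustration of next-token prediction. The scores below are invented;
# a real model produces one score (a "logit") per vocabulary token,
# conditioned on all the text that came before.
context = "The best credit card for first-time users is"
candidate_scores = {
    " the": 2.1,
    " a": 1.8,
    " probably": 1.2,
    " banana": -3.0,   # grammatically possible, but very unlikely
}

# Softmax turns raw scores into probabilities that sum to 1.
total = sum(math.exp(s) for s in candidate_scores.values())
probs = {tok: math.exp(s) / total for tok, s in candidate_scores.items()}

for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok!r}: {p:.2%}")

# Greedy decoding: take the most likely token, append it, repeat.
next_token = max(probs, key=probs.get)
print("Predicted next token:", repr(next_token))
```

(Real models don’t always take the top guess; sampling settings like temperature control how adventurous the pick is.)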
Why does this matter?
Because once you understand that it’s just playing a giant guessing game — you stop treating it like a person.
And your prompts get sharper. You stop asking “What’s the best credit card?” and instead say:
“List 5 credit cards in India for first-time users. Include annual fees and approval speed.”
Now the model has structure. You’ve reduced ambiguity. You’re working with the machine, not against it.
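If you ever call a model through code, the same lesson applies. Here’s a sketch using OpenAI’s Python client; the model name is just a placeholder, and any chat-style API works the same way:

```python
# Sketch: the same question asked vaguely vs. with structure.
# Assumes the official `openai` Python client (pip install openai) and an
# OPENAI_API_KEY in your environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

vague_prompt = "What's the best credit card?"
structured_prompt = (
    "List 5 credit cards in India for first-time users. "
    "For each one, include the annual fee and typical approval speed. "
    "Format the answer as a numbered list."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": structured_prompt}],
)
print(response.choices[0].message.content)
```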
Another surprise? Token limits.
Every AI model has a limit to how much it can “hold in mind” at once: the total number of tokens in the conversation, usually called the context window.
It’s like a whiteboard that can only fit so much text; once it fills up, the earliest notes get wiped to make room.
If you’ve ever asked ChatGPT something and it suddenly forgets what you said 10 minutes ago…
Now you know why.
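Here’s a rough sketch of what chat apps do behind the scenes: count the tokens in the conversation and drop the oldest messages once the total goes over budget. The 8,000-token limit and the tiktoken tokenizer are my assumptions for illustration; real context windows vary by model.

```python
# Sketch: keeping a conversation inside a fixed token budget by dropping
# the oldest messages first. Assumes `pip install tiktoken`; the limit
# below is an arbitrary example.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
CONTEXT_LIMIT = 8_000

def trim_to_fit(messages: list[str], limit: int = CONTEXT_LIMIT) -> list[str]:
    """Drop messages from the start until the total token count fits."""
    kept = list(messages)
    while kept and sum(len(enc.encode(m)) for m in kept) > limit:
        kept.pop(0)  # the earliest message falls off the "whiteboard" first
    return kept

history = ["(imagine a long chat message here)"] * 2_000
print(f"Messages kept: {len(trim_to_fit(history))} of {len(history)}")
```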
I’m still exploring all of this. But it’s shifted how I look at these tools.
Less like a brain. More like a probability engine with a really powerful memory (that sometimes forgets).
Does that make sense?
Ever thought about AI that way?