After learning how language models work, I shifted gears into how we actually talk to them. Good prompts aren't about being clever — they're about being clear and structured.
If you ask "What's the best way to save money?" you'll get generic advice. But if you ask "Give me a monthly savings plan for someone earning ₹60,000, with rent of ₹15,000, and a goal to save ₹10,000 per month" — now the model has constraints. Context. Structure.
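A minimal sketch of that idea in code: instead of sending a vague question, assemble the prompt from explicit constraints. The function name and fields here are illustrative, not any real API.

```python
# Build a structured prompt from explicit constraints instead of a vague question.
# build_budget_prompt and its parameters are hypothetical, for illustration only.

def build_budget_prompt(income: int, rent: int, savings_goal: int) -> str:
    """Assemble a prompt that gives the model concrete numbers to work with."""
    return (
        f"Give me a monthly savings plan for someone earning ₹{income:,}, "
        f"with rent of ₹{rent:,}, and a goal to save ₹{savings_goal:,} per month. "
        "List spending categories with amounts that add up to the income."
    )

prompt = build_budget_prompt(60_000, 15_000, 10_000)
print(prompt)
```

The constraints live in code, so you can reuse the same template for any income and goal instead of retyping the question each time.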
RAG — Retrieval-Augmented Generation
Imagine AI as a smart intern. Instead of telling them everything in one go, you give them access to a clean Google Drive with all the info they might need. They can fetch, read, and respond — without you repeating everything every time. That's RAG.
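The "fetch, then respond" loop can be sketched in a few lines. This is a toy version under loud assumptions: the retrieval step is plain word overlap (a real system would use embeddings), the documents are made up, and the final LLM call is left out — we only build the prompt the model would receive.

```python
# Toy RAG loop: retrieve the most relevant document, then wrap it into a prompt.
# Word-overlap scoring and the sample docs are placeholders, not a real pipeline.

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(docs, key=lambda doc: len(query_words & set(doc.lower().split())))

docs = [
    "Refund policy: customers can request a refund within 30 days.",
    "Shipping: orders ship within 2 business days.",
]

query = "How long do customers have to request a refund?"
context = retrieve(query, docs)

# The retrieved text becomes part of the prompt, so nothing is repeated by hand.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

The point is the shape, not the scoring: retrieval picks the relevant "file from the Drive," and the prompt carries it to the model automatically.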
Fine-tuning isn't always needed
You can often get 80% of the result with better prompts and retrieval logic alone. No heavy training, no engineering headaches.
I don't see AI as a chatbot anymore. I see it like a Lego system — modular, flexible, logical.