If you want better responses from AI tools (ChatGPT, Claude, Gemini, etc.), stop typing vague one-off messages and start prompting with structure.
These 5 tips turn AI into an engineer-in-training you mentor instead of a random output generator.
1️⃣ Give Context Before the Question
❌ Bad:
“Fix this code.”
✔️ Good:
Context: Next.js app with Firebase auth.
Problem: Login works locally but not in production.
Task: Identify the cause and list 2 fixes.
💡 Context = clarity.
No context = confusion.
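If you build prompts in code (say, a small CLI helper or a test harness), the same Context / Problem / Task structure can live in a template. A minimal TypeScript sketch; buildPrompt and its field names are illustrative, not any tool's API:

```typescript
// Hypothetical helper: assembles a context-first prompt string.
// Field names mirror the Context / Problem / Task structure above.
interface PromptParts {
  context: string; // tech stack, environment, relevant constraints
  problem: string; // the observed behaviour, not your guess at the cause
  task: string;    // exactly what you want back
}

function buildPrompt({ context, problem, task }: PromptParts): string {
  return [
    `Context: ${context}`,
    `Problem: ${problem}`,
    `Task: ${task}`,
  ].join("\n");
}

// Usage: paste the result into your chat tool or send it to whichever LLM API you use.
const prompt = buildPrompt({
  context: "Next.js app with Firebase auth.",
  problem: "Login works locally but not in production.",
  task: "Identify the cause and list 2 fixes.",
});
console.log(prompt);
```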
2️⃣ Speak Like You're Writing a Function
❌ Bad:
“Make a dashboard.”
✔️ Better:
Task: Create a dashboard layout in React.
Requirements:
- Sidebar + header layout
- Dark mode toggle (state)
- Placeholder for data list
Output: Code only, no explanation.
This gives the AI inputs, constraints, and expected output.
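For a sense of what those constraints steer toward, here is roughly the kind of component such a prompt asks for. This is a hedged sketch only; no model is guaranteed to return exactly this, and the class names are placeholders:

```tsx
// One possible shape of the dashboard the prompt above describes:
// sidebar + header layout, dark mode toggle held in state, placeholder list.
import { useState } from "react";

export default function Dashboard() {
  const [dark, setDark] = useState(false);

  return (
    <div className={dark ? "app dark" : "app"}>
      <aside className="sidebar">Sidebar</aside>
      <div className="main">
        <header className="header">
          <h1>Dashboard</h1>
          <button onClick={() => setDark((d) => !d)}>
            {dark ? "Light mode" : "Dark mode"}
          </button>
        </header>
        {/* Placeholder for the data list */}
        <ul className="data-list">
          <li>Item 1</li>
          <li>Item 2</li>
        </ul>
      </div>
    </div>
  );
}
```

Because the prompt said "code only, no explanation", output like this drops straight into a file and you iterate from there.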
3️⃣ Define What You Don’t Want
LLMs fill gaps. If you don’t set boundaries, they’ll invent things.
✔️ Example boundary prompt:
Do NOT:
- Add libraries I didn't mention
- Invent API endpoints
- Add animations or UI styling
🧠 Telling AI what not to do is just as important as telling it what you want.
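If you assemble prompts programmatically, boundaries can be appended like any other section. A minimal sketch; withBoundaries is a hypothetical helper, not an existing API:

```typescript
// Hypothetical helper: appends an explicit "Do NOT" section to any prompt.
function withBoundaries(prompt: string, doNot: string[]): string {
  if (doNot.length === 0) return prompt;
  const rules = doNot.map((rule) => `- ${rule}`).join("\n");
  return `${prompt}\n\nDo NOT:\n${rules}`;
}

const bounded = withBoundaries("Task: Create a dashboard layout in React.", [
  "Add libraries I didn't mention",
  "Invent API endpoints",
  "Add animations or UI styling",
]);
console.log(bounded);
```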
4️⃣ Ask for the Smallest Fix First
Don’t let AI rewrite your entire project when you only need one line changed.
Error:
TypeError: req.user is undefined
Give me:
1. Why it happens
2. A one-line patch
3. A long-term, scalable fix
This keeps your codebase safe from guesswork.
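To make the three-part answer concrete, this is what it often looks like for this specific error in an Express app. The sketch assumes a requireAuth middleware that sets req.user; the names and the cause shown (middleware never mounted on the route) are one common case, not the only one:

```typescript
import express, { RequestHandler } from "express";

// Hypothetical type augmentation so req.user is typed; in a real app this
// usually comes from your auth library's type definitions.
declare global {
  namespace Express {
    interface Request {
      user?: { id: string };
    }
  }
}

const app = express();

// Hypothetical auth middleware that verifies the session or token
// and attaches req.user. Name and logic are illustrative.
const requireAuth: RequestHandler = (req, _res, next) => {
  req.user = { id: "demo-user" }; // pretend verification succeeded
  next();
};

// Why it happens (commonly): the route runs without any auth middleware,
// so nothing ever set req.user:
// app.get("/profile", (req, res) => res.json({ id: req.user.id })); // TypeError

// 1-line patch: mount the middleware on the route.
app.get("/profile", requireAuth, (req, res) => res.json({ id: req.user?.id }));

// Long-term fix: apply auth once per protected router so routes can't forget it,
// e.g. app.use("/api", requireAuth, apiRouter);
```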
5️⃣ Force Output Format
Pick a format: code, JSON, Markdown, or steps.
Output Format:
{
  "issue": "What caused it",
  "fix": "1-3 line patch",
  "test": "How to verify the fix",
  "future": "How to prevent it again"
}
You get structured answers instead of paragraphs of noise.
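Forcing a format also pays off because the reply becomes machine-readable. A minimal TypeScript sketch that parses and loosely validates a reply matching the JSON shape above (models don't always comply, hence the null fallback):

```typescript
// Shape that mirrors the Output Format above.
interface FixReport {
  issue: string;
  fix: string;
  test: string;
  future: string;
}

// Parse and loosely validate the model's reply before trusting it.
function parseFixReport(reply: string): FixReport | null {
  try {
    const data = JSON.parse(reply) as Partial<FixReport>;
    if (
      typeof data.issue === "string" &&
      typeof data.fix === "string" &&
      typeof data.test === "string" &&
      typeof data.future === "string"
    ) {
      return data as FixReport;
    }
  } catch {
    // Not valid JSON: fall through and signal failure.
  }
  return null;
}

const report = parseFixReport(
  '{"issue":"Auth middleware missing","fix":"Add requireAuth to the route","test":"Hit /profile with a valid token","future":"Mount auth at the router level"}'
);
console.log(report?.issue);
```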
🧱 Final Thought
AI doesn’t replace developers.
It replaces the repetitive parts of development — if you guide it correctly.
Prompt like a developer.
Not like a user.