Prompting With Craft
Before we chase advanced hacks, let's refresh the fundamentals.
Prompting is still a fundamentals problem. And someone who masters one basic step properly will outperform someone who is slightly good at everything.
AI did not suddenly become intelligent. It is still predicting patterns. The same goes for humans: if you don't explain something well, nobody understands it.
So clarity still wins.
The Basics
1. Give It a Role
Not for cosplay. For constraints.
Most people write:
"Explain Kubernetes."
And then complain the answer is generic. Instead, define who the model is and who it speaks to. Example (technical):
"You are a senior backend architect. Explain Kubernetes to a junior developer who has only worked with monolith applications. Focus on deployment and scaling, not infrastructure theory."
Now the model knows:
- level of abstraction
- what to ignore
- what to emphasize
Example (business):
"You are a solar energy advisor. Analyze the following installation report and summarize whether the investment is financially justified for a household in Germany. Highlight ROI, risks, and break-even point."
That’s very different from:
"Summarize this solar report."
Or another one:
"You are a financial analyst preparing a short executive briefing for a CFO. Summarize this quarterly report in 5 bullet points, focusing on risk and growth."
Same data. Completely different output.
Role defines:
- vocabulary
- depth
- assumptions
- perspective
- decision criteria
Without a role, the model defaults to “neutral Wikipedia mode”. With a role, you reduce randomness. That’s the goal.
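The role framing above can be sketched as a small template. This is a minimal, illustrative helper, not any particular library's API; the function name and fields are my own.

```python
def role_prompt(role: str, audience: str, task: str, focus: str) -> str:
    """Assemble a role-framed prompt: who the model is, who it speaks to,
    what to do, and what to emphasize or ignore."""
    return (
        f"You are {role}. "
        f"Your audience is {audience}. "
        f"{task} "
        f"Focus on {focus}."
    )

prompt = role_prompt(
    role="a senior backend architect",
    audience="a junior developer who has only worked with monolith applications",
    task="Explain Kubernetes.",
    focus="deployment and scaling, not infrastructure theory",
)
```

The point is not the code; it is that role, audience, task, and emphasis are four separate decisions, and a template forces you to make all four explicitly.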
2. Define the Goal
Bad:
"Write about observability."
Better:
"Write a short technical explanation of why observability matters in distributed systems. Focus on practical impact, not history."
No goal → generic, blog-flavored hot air.
3. Define the Output Format
This is where real power starts.
I’ve seen “AI-powered” systems that ask:
- Financial results for Company A
- Financial results for Company B
- Financial results for Company C
That’s automation.
Real power is:
"Create a comparison table for Companies A, B, and C for the last 3 years. Include revenue, net profit, and YoY growth. Then summarize key differences and trends."
Now you’re not retrieving. You’re structuring and synthesizing.
Format forces thinking.
4. Add Constraints
Constraints improve quality.
Try:
- "Max 5 bullet points."
- "No buzzwords."
- "One concrete example."
- "Avoid metaphors."
- "Keep it under 300 words."
Freedom increases fluff. Structure increases signal.
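Constraints work well as an explicit block appended to any prompt. A minimal sketch (the helper name is illustrative):

```python
def constrain(prompt: str, constraints: list[str]) -> str:
    """Append explicit constraints to a prompt as a bullet list."""
    lines = [prompt, "", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

p = constrain(
    "Write a short technical explanation of why observability matters "
    "in distributed systems.",
    ["Max 5 bullet points.", "No buzzwords.", "One concrete example."],
)
```

Keeping constraints in a separate list also makes them easy to reuse across prompts.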
5. Tone Matters
AI mirrors you. Statistically.
If you write:
"Short answer."
You’ll get minimal output.
If you write:
"Please explain step by step and include edge cases."
You’ll get deeper structure.
Models were trained on human discussions. They continue the pattern you initiate.
And yes — “please” matters. Not emotionally. Structurally.
Politeness usually equals clearer instruction.
6. Provide Examples (Few-Shot Prompting)
If you care about style or format, show it.
"Here is an example of my writing style: …
Rewrite the following in a similar tone."
AI mirrors patterns. Weak example → polished weakness. Strong example → strong imitation.
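Few-shot prompting maps naturally onto the chat-style message list that most LLM APIs accept: an instruction, then input/output example pairs, then the real query. A sketch, assuming that message format:

```python
def few_shot_messages(
    instruction: str,
    examples: list[tuple[str, str]],
    query: str,
) -> list[dict]:
    """Build a chat-style message list: system instruction, then
    user/assistant example pairs, then the actual query."""
    messages = [{"role": "system", "content": instruction}]
    for user_text, model_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": model_text})
    messages.append({"role": "user", "content": query})
    return messages

msgs = few_shot_messages(
    "Rewrite the input in a terse, technical tone.",
    [("We are thrilled to announce our new feature!",
      "New feature released.")],
    "Our amazing team has been working super hard on performance!",
)
```

The examples carry the style; the instruction only names it.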
7. Be Explicit About Assumptions
If something matters, state it.
"Assume the reader has basic SQL knowledge but no experience with distributed systems."
Unstated assumptions create generic answers.
Tips & Tricks
8. Ask It to Ask You Questions
"Before answering, ask me 3 clarifying questions."
Better context → better output.
Many bad answers are just under-specified problems.
9. Separate Thinking from Writing
Don’t request the final output immediately.
Try:
- "List assumptions."
- "Propose 3 possible structures."
- "Now write the final version."
First draft is rarely the best draft.
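The three stages above can be chained as a sequence of prompts to the same model. `ask` here is a placeholder for whatever LLM call you use; the stub below just records prompts so the sketch runs on its own.

```python
def staged_draft(ask, topic: str) -> str:
    """Separate thinking from writing: assumptions, then structures,
    then the final version."""
    assumptions = ask(f"List the key assumptions behind writing about: {topic}")
    structures = ask(
        f"Given these assumptions:\n{assumptions}\n"
        f"Propose 3 possible structures for a piece on: {topic}"
    )
    return ask(
        f"Using the best of these structures:\n{structures}\n"
        f"Now write the final version about: {topic}"
    )

# Stub in place of a real LLM call, for illustration only.
calls = []
def echo(prompt: str) -> str:
    calls.append(prompt)
    return f"[model output {len(calls)}]"

result = staged_draft(echo, "observability in distributed systems")
```

Each stage sees the previous stage's output, so the final prompt is grounded in the model's own earlier reasoning.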
10. Ask for Critique
After receiving the answer:
"Critique this response. What is vague, weak, or missing?"
LLMs are often better reviewers than initial writers.
Use that.
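The critique pass can be closed into a draft–critique–revise loop. Again, `ask` is a placeholder for your LLM call; the loop structure is the point.

```python
def critique_and_revise(ask, prompt: str) -> str:
    """Draft, critique, revise: use the model as its own reviewer."""
    draft = ask(prompt)
    critique = ask(
        f"Critique this response. What is vague, weak, or missing?\n\n{draft}"
    )
    return ask(
        f"Revise the response to address this critique.\n\n"
        f"Response:\n{draft}\n\nCritique:\n{critique}"
    )

# Stub in place of a real LLM call, for illustration only.
log = []
def fake_ask(prompt: str) -> str:
    log.append(prompt)
    return f"answer-{len(log)}"

out = critique_and_revise(fake_ask, "Explain microservices trade-offs.")
```

Keeping draft and critique in the final prompt matters: the reviser needs both the text and the complaint.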
11. Force Trade-Offs
Instead of:
"Explain microservices."
Try:
"When are microservices a bad idea?"
Or:
"Compare A and B. Highlight risks, not benefits."
Contrast reduces marketing-style fluff.
12. Force Specificity
If the answer feels generic:
- "Give a concrete example."
- "Provide numbers."
- "Show a real-world scenario."
Abstraction is where AI becomes empty.
13. Control Confidence
You can say:
- "If uncertain, say so."
- "Mark assumptions clearly."
- "Separate facts from speculation."
Fluency ≠ correctness. Confidence ≠ accuracy.
14. Use AI to Compress, Not Only Expand
Everyone uses AI to generate more text.
It is often more useful for:
- summarizing long documents,
- extracting decisions from meeting notes,
- cleaning chaotic drafts.
Reduction is underrated.
15. Iterate
Prompting is not one-shot.
Draft. Refine. Constrain. Critique. Repeat.
Treat AI like a thinking surface, not a magic box.
Final Rule
Prompting is not about tricks.
It is structured thinking.
Vague in → generic out.
Structured in → useful out.
Master the basics. That alone already puts you ahead.