Prompting fundamentals: what actually matters in 2026
Forget the prompt engineering theater. These five principles explain 90% of what makes a good prompt — backed by the actual behavior of today's models.
Half of "prompt engineering" advice from 2023 is wrong now. Models got smarter. The tricks became unnecessary. Here's what still matters.
1. Say what you want, not what you don't
Positive instructions outperform negatives. "Write concisely" works better than "don't be verbose."
Models are associative — when you say "don't mention pricing," you've now primed the concept of pricing.
2. Examples > explanations
One good example is worth three paragraphs of description. Instead of describing the format, show it:
Turn this into a tweet:
[input]
Format like this:
[example tweet]
Now do this one:
[new input]

Every hour you spend writing a better example is an hour you save in iterations.
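The pattern above is mechanical enough to automate. Here's a minimal sketch of a few-shot prompt builder; the function name and the sample input/output pairs are hypothetical, made up for illustration:

```python
def build_few_shot_prompt(task, examples, new_input):
    """Assemble a prompt that shows the desired format instead of describing it.

    examples: list of (example_input, example_output) pairs.
    """
    parts = [task, ""]
    for example_input, example_output in examples:
        parts += [example_input, "", "Format like this:", example_output, ""]
    parts += ["Now do this one:", new_input]
    return "\n".join(parts)


# Hypothetical inputs, just to show the assembled shape.
prompt = build_few_shot_prompt(
    task="Turn this into a tweet:",
    examples=[(
        "Our Q3 report is out; revenue grew 40%.",
        "Big quarter: revenue up 40%. Full Q3 report in the thread below.",
    )],
    new_input="We just shipped dark mode after two years of requests.",
)
print(prompt)
```

The point of the structure: the example output sits between "Format like this:" and "Now do this one:", so the model sees the target format immediately before the new input.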
3. Set context before the task
The order matters more than you'd think. State who the AI is and what it's working on before asking it to do the thing.
You are a senior technical editor at a tech publication for non-developers.
Your job is to rewrite the following article to be clearer without making it condescending.
Here's the article: [...]
Compare to dumping the article in with "rewrite this to be clearer." You'll get noticeably different output.
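In API terms, "context before task" usually means putting the persona in the system message and the task ahead of the material in the user message. A minimal sketch using the standard chat-message shape (the helper name is made up; the role/content structure is the common convention):

```python
def make_messages(persona, task, document):
    """Put persistent context (who the model is) in the system slot,
    and state the task before dumping in the material."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": f"{task}\n\nHere's the article:\n{document}"},
    ]


messages = make_messages(
    persona=("You are a senior technical editor at a tech publication "
             "for non-developers."),
    task=("Rewrite the following article to be clearer without making "
          "it condescending."),
    document="[...]",
)
```

Same words, different slots: the system message persists across turns, so the persona doesn't have to be repeated with every request.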
4. Let it think
For anything complex, ask for reasoning before the answer:
Think through the problem step by step. Then give me your final answer.
Today's models are trained to reason when asked to. Reasoning produces better output even if you throw away the reasoning.
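If you're doing this programmatically, "throw away the reasoning" is a one-function job. A sketch, assuming a hypothetical convention where the model is asked to mark its answer with a "Final answer:" line:

```python
REASONING_SUFFIX = (
    "Think through the problem step by step. Then give me your final "
    "answer on a line starting with 'Final answer:'."
)


def with_reasoning(task):
    """Append the step-by-step instruction to any task prompt."""
    return f"{task}\n\n{REASONING_SUFFIX}"


def extract_final_answer(response):
    """Keep only the answer; the reasoning gets discarded."""
    for line in response.splitlines():
        if line.startswith("Final answer:"):
            return line.removeprefix("Final answer:").strip()
    return response.strip()  # model ignored the format; keep everything


# Fake model response, for illustration only.
fake_response = "Step 1: halve it.\nStep 2: halve again.\nFinal answer: 42"
print(extract_final_answer(fake_response))  # → 42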
5. Know when to stop prompting and just do it yourself
If you've iterated more than three times, your prompt isn't the problem — your task is ambiguous, or it's not a task the model can do well yet. Edit the output by hand and move on.
Things that used to matter and don't anymore
- "Act as an expert in..." — models already assume expertise
- "Take a deep breath" — urban legend
- Elaborate JSON schema declarations — modern models handle plain descriptions
- Threatening the model — please stop
Things nobody talks about but should
- The system prompt matters more than the user prompt. Time spent on the persistent context gets compounded.
- Temperature 0 for anything you'll parse, temperature ~0.7 for creative. Most people leave it on default.
- Long contexts decay. Once you're past 100k tokens, the model gets worse at retrieving mid-context info. Summarize and restart.
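The last two points translate directly into code. A sketch of both knobs; the helper names are invented, and the four-characters-per-token estimate is a rough rule of thumb, not a real tokenizer:

```python
CONTEXT_DECAY_TOKENS = 100_000  # rough threshold from the rule of thumb above


def pick_temperature(task_kind):
    """Temperature 0 for anything you'll parse, ~0.7 for creative work."""
    return 0.0 if task_kind == "parse" else 0.7


def estimate_tokens(text):
    """Very rough: ~4 characters per token for English prose."""
    return len(text) // 4


def should_summarize_and_restart(conversation):
    """Past ~100k tokens, mid-context retrieval degrades; start fresh."""
    return estimate_tokens(conversation) > CONTEXT_DECAY_TOKENS


# Explicitly set temperature instead of leaving it on default.
request = {
    "temperature": pick_temperature("parse"),
    # model, messages, etc. would go here
}
```

The summarize-and-restart check is cheap enough to run on every turn; when it fires, compress the conversation so far into a summary and open a new context with it.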
TL;DR: Say what you want, show an example, set context, let it think. That's 90% of it.