How to Get Better Results from ChatGPT, Gemini, and Claude (5+ Prompt Engineering Frameworks)

According to an Adobe survey, 53% of Americans have used generative AI; of those, most (81%) use it in their personal lives, followed by work (30%) and school (17%). The same survey found the most popular use cases were research and brainstorming, drafting written content, creating visuals or presentations, summarizing text, and writing code.

Generative AI is clearly widely used and makes people's lives easier. However, we have all, at some point, found that tools like ChatGPT, Gemini, or Claude don't quite give us the output we want. That can happen for several reasons, the most common being vague prompts that don't convey your intent clearly.

To get better outputs from generative AI tools like ChatGPT, Gemini, and Claude, you need to learn how to communicate with them. In this article, we share 5+ prompting techniques/frameworks that will help you turn mediocre outputs into production-grade results.

Here are 5+ prompting frameworks to help you get better results from generative AI tools (ChatGPT, Gemini, Claude):

Technique 1: Constraint-Based Prompting

Most prompts are too open-ended. By adding hard constraints, you force the model into a narrower solution space, eliminating most bad outputs before they are ever generated.

  • How it works: It defines exactly what must be included and what isn't allowed.
  • When it helps: Writing product copy, summaries, templates, policy text, or anything where format and tone matter.

Template

Generate [output] with these non-negotiable constraints:

  • Must include: [requirement 1], [requirement 2]
  • Must avoid: [restriction 1], [restriction 2]
  • Format: [exact structure]
  • Length: [range]
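If you fill this template programmatically, a small helper keeps the constraints consistent across calls. Here is a minimal plain-Python sketch; the helper name build_constraint_prompt and the standing-desk example are our own illustrations, not part of any library.

```python
def build_constraint_prompt(output, must_include, must_avoid, fmt, length):
    """Fill the constraint-based template with concrete values."""
    return (
        f"Generate {output} with these non-negotiable constraints:\n"
        f"- Must include: {', '.join(must_include)}\n"
        f"- Must avoid: {', '.join(must_avoid)}\n"
        f"- Format: {fmt}\n"
        f"- Length: {length}"
    )

prompt = build_constraint_prompt(
    output="product copy for a standing desk",
    must_include=["free shipping", "30-day returns"],
    must_avoid=["superlatives", "exclamation marks"],
    fmt="one headline plus two short paragraphs",
    length="80-120 words",
)
print(prompt)  # paste into ChatGPT, Gemini, or Claude, or send via an API
```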

Technique 2: Multi-Shot with Failure Cases

Examples are useful, and standard prompting techniques use them to show the AI what to do; adding failure cases alongside them works even better. With this technique, you show the model both what to do and what not to do, and explain why the bad example is wrong. That creates clearer boundaries than good examples alone.

  • How it works: Give the AI a "Good Example" and a "Bad Example," then explain specifically why the bad example failed.
  • When it helps: Technical explanations, style-sensitive writing, customer support replies, and data formatting.

Template

Task: [goal]

  • Good example: [correct output]
  • Bad example: [incorrect output]
  • Reason it fails: [specific explanation]
  • Now do this: [your request]
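This template is just as easy to assemble in code. A minimal plain-Python sketch follows; the helper name and the support-reply example are hypothetical.

```python
def build_multishot_prompt(task, good, bad, reason, request):
    """Pair a good and bad example with the reason the bad one fails."""
    return (
        f"Task: {task}\n\n"
        f"Good example: {good}\n"
        f"Bad example: {bad}\n"
        f"Reason it fails: {reason}\n\n"
        f"Now do this: {request}"
    )

prompt = build_multishot_prompt(
    task="Write a one-sentence customer support reply",
    good="Thanks for flagging this. We've reset your password and emailed you a new sign-in link.",
    bad="Your issue has been noted.",
    reason="It is vague, offers no next step, and sounds dismissive.",
    request="Reply to a customer whose invoice shows a duplicate charge.",
)
print(prompt)
```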

Technique 3: Metacognitive Scaffolding

This technique asks the AI model to explain its thinking process before giving an answer. Forcing that planning step helps the model catch and fix logical mistakes before they reach the final output.

  • How it works: Ask the model to list assumptions, identify edge cases, and explain its approach before generating the final output.
  • When it helps: Coding, regex, analysis, strategy docs, plans, and anything with hidden edge cases.

Template

Before you [generate output], first:

  • List 3 assumptions you're making
  • Identify potential edge cases
  • Explain your approach in 2 sentences

Then provide [output]
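This scaffold also works well through an API. Below is a minimal sketch assuming the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in your environment; the model name and the regex task are illustrative assumptions, and the Gemini and Claude SDKs accept an equivalent message.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The scaffold: force the model to surface assumptions and edge cases first.
scaffold = (
    "Before you write the regex, first:\n"
    "1. List 3 assumptions you're making\n"
    "2. Identify potential edge cases\n"
    "3. Explain your approach in 2 sentences\n\n"
    "Then provide a regex that matches ISO 8601 dates (YYYY-MM-DD)."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model works here
    messages=[{"role": "user", "content": scaffold}],
)
print(response.choices[0].message.content)
```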


Technique 4: Differential Prompting

A good prompt engineer rarely settles for the first option. Ask for two answers that focus on different goals, then compare or combine them. This takes advantage of the model's ability to explore multiple solutions at the same time.

  • How it works: Request Version A and Version B with different optimization criteria.
  • When it helps: Code performance tradeoffs, tone variations, marketing vs. compliance, and short vs. detailed.

Template

Generate two versions of [output]:

  • Version A: optimized for [criterion 1]
  • Version B: optimized for [criterion 2]

For each, explain the tradeoffs you made.
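Scripting this one is just a matter of filling in two criteria. Here is a plain-Python sketch, with a hypothetical deduplication task as the example.

```python
def build_differential_prompt(output, criterion_a, criterion_b):
    """Request two versions optimized for competing criteria."""
    return (
        f"Generate two versions of {output}:\n"
        f"- Version A: optimized for {criterion_a}\n"
        f"- Version B: optimized for {criterion_b}\n\n"
        "For each, explain the tradeoffs you made."
    )

print(build_differential_prompt(
    output="a function that deduplicates a 10M-row CSV",
    criterion_a="memory usage",
    criterion_b="runtime speed",
))
```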

Technique 5: Specification-Driven Generation

This technique mirrors how good software and content actually get made: specification first, then build. It separates "what to build" from "how to build it": you first ask the model to write a specification, approve or adjust it, and only then have it generate the final work.

  • How it works: Ask the model to define inputs, outputs, constraints, and edge cases, and wait for your approval before it executes the task.
  • When it helps: Functions, workflows, dashboards, long-form content outlines, policies, and SOPs.

Template

First, write a specification for [task] including:

  • Inputs and their types
  • Outputs and format
  • Constraints/requirements
  • Edge cases

Ask me to approve before implementing.
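Because this is a multi-turn exchange (spec, then approval, then implementation), it maps naturally onto a chat API. Here is a rough sketch, again assuming the OpenAI Python SDK; the model name and the CSV task are our own stand-ins, and in practice you would read and edit the spec before approving it.

```python
from openai import OpenAI

client = OpenAI()
messages = [{
    "role": "user",
    "content": (
        "First, write a specification for a CSV-deduplication function including:\n"
        "- Inputs and their types\n"
        "- Outputs and format\n"
        "- Constraints/requirements\n"
        "- Edge cases\n\n"
        "Ask me to approve before implementing."
    ),
}]

# Turn 1: get the specification and review it yourself.
spec = client.chat.completions.create(model="gpt-4o", messages=messages)
print(spec.choices[0].message.content)

# Turn 2: approve (or correct) the spec, then request the implementation.
messages.append({"role": "assistant", "content": spec.choices[0].message.content})
messages.append({"role": "user", "content": "Approved. Implement it exactly as specified."})
final = client.chat.completions.create(model="gpt-4o", messages=messages)
print(final.choices[0].message.content)
```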

Technique 6: Chain-of-Verification

Generative AI models often hallucinate or miss details. With this technique, you make the model grade its own output against your criteria and regenerate it if any check fails. This simple self-correction loop often catches missing requirements.

  • How it works: Instruct the model to verify its output against a checklist of criteria and regenerate if it fails any check.
  • When it helps: SQL, code, formatted deliverables, checklists, and anything with clear acceptance criteria.

Template

[your request]

"Once the output has been generated," verify the output against:

  • [check 1]
  • [check 2]
  • [check 3]

If any check fails, regenerate with corrections.
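A small helper can bolt this checklist onto any request. Plain-Python sketch; the SQL task and the three checks are hypothetical examples.

```python
def build_verified_prompt(request, checks):
    """Append a self-verification checklist to any request."""
    checklist = "\n".join(f"- {check}" for check in checks)
    return (
        f"{request}\n\n"
        "After generating the output, verify it against:\n"
        f"{checklist}\n\n"
        "If any check fails, regenerate with corrections."
    )

print(build_verified_prompt(
    request="Write a SQL query returning each customer's total 2024 spend.",
    checks=[
        "Uses only the tables customers and orders",
        "Filters order_date to 2024",
        "Returns exactly two columns: customer_id and total_spend",
    ],
))
```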

In Conclusion:

These techniques significantly reduce ambiguity and help you turn mediocre outputs into production-grade results. Every major model is capable in 2025; what matters is control and clarity, and by adding constraints you push generative AI tools toward better outputs with fewer wrong answers. A bad prompt simply orders the AI what to do, whereas a good prompt engineer has the tool plan how it will perform the task, perform it, and then verify the result. Resource credit for this article goes to God of Prompt on X (Twitter).


About the author

AI Tools Club: Find the Most Trending AI Agents and Tools
