Reasoning models like OpenAI’s o1 and o3-mini are different from standard AI models. They automatically break problems down into steps internally before giving you an answer, which means you need to adjust how you write prompts.

What Makes Reasoning Models Different

Regular AI models respond immediately with their best guess. Reasoning models pause to think through the problem systematically before responding. They:
  • Analyze multi-step problems without being told how
  • Work through logical deductions on their own
  • Check their own work for consistency
  • Handle complex analysis better than standard models
The trade-off is they take longer to respond, but the quality is usually worth the wait for complex legal analysis.

Key Prompting Changes

Be Direct and Simple

Reasoning models don’t need extensive instructions about how to think. They already know how to break down problems. Standard model prompt:
First, identify all liability provisions.
Then, assess each for unlimited exposure.
Finally, suggest appropriate caps.
Reasoning model prompt:
Review the liability provisions and suggest appropriate caps where we have unlimited exposure.
The reasoning model will automatically identify, assess, and suggest – you don’t need to spell out the steps.

Skip the Examples

These models work best with zero-shot prompting (no examples). Adding examples can actually confuse them or make them overthink simple tasks.
Don’t do this: “Here are three examples of good liability caps…”
Do this: “Ensure liability caps align with industry standards for SaaS vendors.”

Control Output Formatting Explicitly

By default, recent reasoning models avoid markdown formatting unless you specifically ask for it. If you want formatted output, say so. To get formatted output:
Provide your analysis with markdown formatting enabled. Use headers, bullets, and tables where appropriate.
Or simply include “Formatting re-enabled” in your prompt.
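If you call these models through an API, the “Formatting re-enabled” instruction above is typically placed in a developer message ahead of your task. A minimal sketch of that message structure, assuming a chat-style API with `developer` and `user` roles (the helper name and task text are illustrative, not part of any SDK):

```python
# Sketch: re-enable markdown output via a developer message placed before
# the user's task. `build_messages` is a hypothetical helper; the task
# string is a placeholder.
def build_messages(task: str) -> list[dict]:
    return [
        {
            "role": "developer",
            "content": (
                "Formatting re-enabled - use markdown headers, "
                "bullets, and tables where appropriate."
            ),
        },
        {"role": "user", "content": task},
    ]

messages = build_messages("Summarize the indemnification provisions.")
```

The developer message sets the formatting expectation once, so individual task prompts stay focused on the legal question.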

Provide Essential Context Only

These models can handle large documents, but don’t dump unnecessary background. Give them what matters for the specific task.
Too much: full company history, all previous negotiations, entire email chains
Just right: current document, your role, key constraints, specific question

Best Practices

Structure Your Input Clearly

Use XML tags or clear sections to organize different parts of your prompt:
<context>
We're a startup vendor reviewing an enterprise MSA.
Low leverage situation.
</context>

<task>
Identify terms that could prevent us from raising our next funding round.
</task>
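If you assemble prompts programmatically, a small helper keeps the tag structure consistent across tasks. A minimal sketch mirroring the `<context>`/`<task>` layout above (the helper name is hypothetical):

```python
# Hypothetical helper that wraps labeled sections in matching XML tags,
# producing the <context>/<task> structure shown above.
def build_prompt(**sections: str) -> str:
    parts = [
        f"<{name}>\n{text.strip()}\n</{name}>"
        for name, text in sections.items()
    ]
    return "\n\n".join(parts)

prompt = build_prompt(
    context=(
        "We're a startup vendor reviewing an enterprise MSA.\n"
        "Low leverage situation."
    ),
    task=(
        "Identify terms that could prevent us from raising "
        "our next funding round."
    ),
)
```

Because every section is wrapped the same way, the model can reliably tell background context from the actual instruction.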

Specify Output Preferences

Be explicit about what you want back:
Provide:
- Three bullet points with the critical issues
- One paragraph explaining the business impact
- Table of suggested redlines

Request Self-Checking

These models are good at verification. Ask them to check their own work:
After your analysis, verify there are no contradictions between your recommendations.

Control Reasoning Effort

Some platforms let you specify how hard the model should think:
Use high reasoning effort to analyze this complex indemnification structure.
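On platforms that expose it as an API parameter (OpenAI’s o3-mini, for example, accepts a `reasoning_effort` field of `"low"`, `"medium"`, or `"high"`), effort is set in the request rather than the prompt text. A sketch of the request construction, with the model name as a placeholder:

```python
# Sketch of request parameters for a reasoning-effort setting, assuming a
# chat-completions-style API with a `reasoning_effort` field. The model
# name is a placeholder; check your platform's documentation.
def build_request(prompt: str, effort: str = "medium") -> dict:
    if effort not in {"low", "medium", "high"}:
        raise ValueError(f"unsupported reasoning effort: {effort}")
    return {
        "model": "o3-mini",
        "reasoning_effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_request(
    "Analyze this complex indemnification structure.", effort="high"
)
```

Reserve `"high"` for genuinely hard analysis; higher effort means longer latency and, on metered platforms, higher cost.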

Handle Ambiguity Upfront

Tell the model what to do with unclear situations:
If any provisions are ambiguous, state your assumptions before analyzing.

What to Avoid

  • Don’t Micromanage the Thinking Process The model already knows how to reason. Don’t write “First think about X, then consider Y.”
  • Don’t Provide Excessive Examples Unlike standard models that learn from examples, reasoning models work better figuring things out themselves.
  • Don’t Assume Formatting If you want bullets, tables, or bold text, explicitly request it. Otherwise, you’ll get plain text.
  • Don’t Rush Complex Analysis These models take longer but produce better results. Don’t try to shortcut the process with oversimplified prompts.

When to Use Reasoning Models

Perfect For:

  • Complex multi-party agreements
  • Regulatory compliance analysis
  • Untangling contradictory provisions
  • Risk assessment across multiple documents
  • Novel legal issues without precedent

Use Standard Models For:

  • Simple document summaries
  • Basic information extraction
  • Routine playbook applications
  • Quick yes/no questions

Practical Comparison

Here’s the same task for both model types:
Standard Model Approach:
Step 1: Extract all termination provisions
Step 2: Identify which party can terminate
Step 3: Check notice requirements for each
Step 4: Flag any missing cure periods
Step 5: Draft summary table
Reasoning Model Approach:
Analyze the termination provisions and create a summary table showing which party can terminate, notice requirements, and any missing cure periods.
The reasoning model handles all the steps internally.

Working with Large Documents

Reasoning models excel at large document analysis. Instead of breaking documents into chunks:
Review this entire merger agreement for provisions that could delay closing. Focus on conditions precedent and termination rights.
The model will systematically work through the entire document.

Common Pitfalls

  • Over-prompting: Writing long, detailed instructions when a simple request would work better.
  • Fighting the model: Trying to force a specific reasoning path instead of letting it find the best approach.
  • Impatience: Not giving the model enough time to think through complex problems.
  • Format assumptions: Forgetting to request formatted output when you need it.

The Key Insight

Reasoning models are like senior attorneys who already know how to approach legal analysis. Don’t treat them like junior associates who need step-by-step instructions. Give them the problem clearly and let them apply their training. The shift is from “how to think” to “what to think about.” Focus your prompts on defining the problem and desired output, not the process in between.

Remember

Less is more with reasoning models. Clear, concise prompts that define the task and context will outperform lengthy instructions about methodology. Trust the model to handle the reasoning – your job is to frame the question well.