What Makes Reasoning Models Different
Regular AI models respond immediately with their best guess. Reasoning models pause to think through the problem systematically before responding. They:
- Analyze multi-step problems without being told how
- Work through logical deductions on their own
- Check their own work for consistency
- Handle complex analysis better than standard models
Key Prompting Changes
Be Direct and Simple
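As an illustrative contrast (the prompt wording below is my own, not a prescribed template), compare an over-instructed prompt with a direct one:

```python
# Two ways to ask for the same contract review.
# The over-instructed version prescribes the thinking steps;
# the direct version only states the task and the constraint.

over_instructed = (
    "First, read the contract. Second, identify each clause. "
    "Third, think carefully about the indemnification terms. "
    "Fourth, consider whether they are unusual for a SaaS vendor. "
    "Fifth, summarize your thinking step by step."
)

direct = (
    "Review this contract for indemnification risks and flag "
    "anything unusual for a SaaS vendor."
)
```

The direct prompt states *what* to do, not *how* to think; the model supplies the steps itself.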
Reasoning models don’t need extensive instructions about how to think. They already know how to break down problems. Where a standard model prompt might spell out every analysis step, a reasoning model prompt can simply state the task.

Skip the Examples
These models work best with zero-shot prompting (no examples). Adding examples can actually confuse them or make them overthink simple tasks.

Don’t do this: “Here are three examples of good liability caps…”
Do this: “Ensure liability caps align with industry standards for SaaS vendors.”

Control Output Formatting Explicitly
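For instance, a formatting request can be appended explicitly to the task (the wording here is my own illustrative example):

```python
# Without an explicit request, expect plain text back.
task = "Summarize the termination provisions in this agreement."
format_request = (
    "Format your response as a markdown table with columns: "
    "Provision, Notice Period, Penalty."
)
prompt = f"{task}\n\n{format_request}"
```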
Reasoning models now avoid markdown formatting unless you specifically ask for it. If you want bullets, tables, or headings in the response, request them explicitly.

Provide Essential Context Only
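One way to picture the trimming is as a filter over everything you could include (the field names below are hypothetical examples, not a required schema):

```python
# Everything available vs. what the task actually needs.
available_context = {
    "full_company_history": "...",    # too much
    "all_prior_negotiations": "...",  # too much
    "entire_email_chain": "...",      # too much
    "current_document": "<contract text>",
    "your_role": "counsel for the buyer",
    "key_constraints": "close within 30 days; cap liability at 12 months of fees",
    "specific_question": "Which clauses conflict with our constraints?",
}

# Keep only what matters for this specific task.
essential = {
    k: available_context[k]
    for k in ("current_document", "your_role", "key_constraints", "specific_question")
}
```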
These models can handle large documents, but don’t dump unnecessary background. Give them what matters for the specific task.

Too much: Full company history, all previous negotiations, entire email chains
Just right: Current document, your role, key constraints, specific question

Best Practices
Structure Your Input Clearly
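A minimal sketch of this structure (the tag names are my own; any consistent scheme works):

```python
# Wrap each part of the prompt in a clearly labeled XML tag so the
# model can tell the document, your role, and the question apart.
def build_prompt(document: str, role: str, question: str) -> str:
    return (
        f"<document>\n{document}\n</document>\n"
        f"<role>\n{role}\n</role>\n"
        f"<question>\n{question}\n</question>"
    )

prompt = build_prompt(
    "<contract text>",
    "counsel for the buyer",
    "Which provisions conflict with a 12-month liability cap?",
)
```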
Use XML tags or clear sections to organize different parts of your prompt.

Specify Output Preferences
Be explicit about what you want back: the length, format, and level of detail you need.

Request Self-Checking
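For example, a verification step can be appended to the task (the wording is my own illustrative suggestion):

```python
# Ask the model to verify its own findings before answering.
self_check = (
    "Before answering, re-check your conclusions: confirm every cited "
    "clause number exists in the document and that no two findings "
    "contradict each other."
)
prompt = "Identify conflicting obligations in this agreement.\n\n" + self_check
```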
These models are good at verification. Ask them to check their own work for consistency before finalizing an answer.

Control Reasoning Effort
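For example, OpenAI’s reasoning models accept a `reasoning_effort` parameter; other platforms use different knobs (such as a thinking-token budget), so check your provider’s documentation. Shown here only as a request payload, with a hypothetical model choice and no network call:

```python
# An OpenAI-style request payload; "reasoning_effort" accepts
# "low", "medium", or "high". Built as a plain dict for illustration.
payload = {
    "model": "o3-mini",          # hypothetical model choice
    "reasoning_effort": "high",  # spend more thinking time on hard problems
    "messages": [
        {
            "role": "user",
            "content": "Untangle the contradictory termination clauses in sections 4 and 9.",
        }
    ],
}
```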
Some platforms let you specify how hard the model should think, trading speed and cost against depth of analysis.

Handle Ambiguity Upfront
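One illustrative way to do this (the rule’s wording is my own) is to attach an explicit instruction for ambiguous cases:

```python
# Tell the model how to treat ambiguity instead of letting it guess.
ambiguity_rule = (
    "If a provision is ambiguous or information is missing, do not guess: "
    "flag it, state the possible readings, and say what you would need "
    "to resolve it."
)
prompt = "Review this amendment against the master agreement.\n\n" + ambiguity_rule
```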
Tell the model what to do with unclear situations, such as flagging them rather than guessing.

What to Avoid
- Don’t Micromanage the Thinking Process: The model already knows how to reason. Don’t write “First think about X, then consider Y.”
- Don’t Provide Excessive Examples: Unlike standard models that learn from examples, reasoning models work better when they figure things out themselves.
- Don’t Assume Formatting: If you want bullets, tables, or bold text, explicitly request it. Otherwise, you’ll get plain text.
- Don’t Rush Complex Analysis: These models take longer but produce better results. Don’t try to shortcut the process with oversimplified prompts.
When to Use Reasoning Models
Perfect For:
- Complex multi-party agreements
- Regulatory compliance analysis
- Untangling contradictory provisions
- Risk assessment across multiple documents
- Novel legal issues without precedent
Use Standard Models For:
- Simple document summaries
- Basic information extraction
- Routine playbook applications
- Quick yes/no questions
Practical Comparison
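The contrast can be sketched with two illustrative prompts for the same review task (the wording is my own, not from this guide):

```python
# Standard model: walk through each step explicitly.
standard_model_prompt = (
    "Step 1: List the parties. Step 2: Extract each party's obligations. "
    "Step 3: Compare obligations for conflicts. Step 4: Rate each "
    "conflict's severity. Step 5: Summarize the top three risks."
)

# Reasoning model: state the goal and let the model plan the analysis.
reasoning_model_prompt = (
    "Identify the three biggest risks in this agreement and explain "
    "your reasoning. Flag anything ambiguous rather than guessing."
)
```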
The same task looks different for each model type: a standard model prompt walks through every step explicitly, while a reasoning model prompt states the goal and constraints and lets the model plan the analysis.

Working with Large Documents
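A minimal sketch of the whole-document approach (the helper name and tag scheme are my own):

```python
# Pass the complete document with one focused question, rather than
# splitting it into chunks and stitching partial answers together.
def whole_document_prompt(document: str, question: str) -> str:
    return f"<document>\n{document}\n</document>\n\n{question}"

full_contract = "..."  # the complete agreement, not a chunk
prompt = whole_document_prompt(
    full_contract,
    "Across the entire agreement, where do the indemnification and "
    "limitation-of-liability provisions interact?",
)
```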
Reasoning models excel at large document analysis. Instead of breaking documents into chunks, provide the complete document along with a focused question.

Common Pitfalls
- Over-prompting: Writing long, detailed instructions when a simple request would work better.
- Fighting the model: Trying to force a specific reasoning path instead of letting it find the best approach.
- Impatience: Not giving the model enough time to think through complex problems.
- Format assumptions: Forgetting to request formatted output when you need it.