Reasoning models are different from standard AI models. They automatically break a problem down into steps internally before giving you an answer, and most major AI providers now offer reasoning-capable models. This means you need to adjust how you write prompts.
What Makes Reasoning Models Different
Regular AI models respond immediately with their best guess. Reasoning models pause to think through the problem systematically before responding. They:
- Analyze multi-step problems without being told how
- Work through logical deductions on their own
- Check their own work for consistency
- Handle complex analysis better than standard models
Key Prompting Changes
Be Direct and Simple
Reasoning models don’t need extensive instructions about how to think; they already know how to break down problems. State the goal and let the model choose the method.
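As an illustration (the prompt wording below is ours, not taken from any playbook), compare a step-by-step prompt written for a standard model with the leaner version a reasoning model needs:

```text
Standard model prompt:
"Review this contract. First, list each clause. Second, compare each clause
to our playbook. Third, flag any deviations. Fourth, suggest redlines."

Reasoning model prompt:
"Review this contract against our playbook and suggest redlines for any
deviations."
```

The second version defines the outcome and leaves the decomposition to the model.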
Skip the Examples
These models work best with zero-shot prompting (no examples). Adding examples can actually confuse them or make them overthink simple tasks. Don’t do this: “Here are three examples of good liability caps…” Do this: “Ensure liability caps align with industry standards for SaaS vendors.”
Control Output Formatting Explicitly
Reasoning models now avoid markdown formatting unless you specifically ask for it. If you want formatted output, say so.
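For instance, a formatting request might look like this (wording is illustrative):

```text
"Summarize the termination provisions. Format your answer as a markdown
table with columns for Clause, Notice Period, and Penalty."
```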
Provide Essential Context Only
These models can handle large documents, but don’t dump unnecessary background. Give them what matters for the specific task.
Too much: Full company history, all previous negotiations, entire email chains
Just right: Current document, your role, key constraints, specific question
Best Practices
Structure Your Input Clearly
Use XML tags or clear sections to organize the different parts of your prompt.
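A structured prompt might look like the sketch below; the tag names are illustrative, not required by any model:

```text
<role>You are reviewing this MSA on behalf of the customer.</role>
<document>
[paste contract text here]
</document>
<task>Flag any indemnification terms that deviate from market standard.</task>
```

Clear boundaries keep the model from confusing your instructions with the document under review.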
Specify Output Preferences
Be explicit about what you want back.
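For example (illustrative wording):

```text
"Return at most five issues, ordered by severity. For each issue, give the
clause number, a one-sentence risk summary, and a proposed redline."
```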
Request Self-Checking
These models are good at verification. Ask them to check their own work.
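One way to phrase a self-check (illustrative wording):

```text
"After drafting your analysis, re-read the contract and verify that every
clause you cited actually says what you claimed. Note any corrections."
```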
Control Reasoning Effort
Some platforms let you specify how hard the model should think.
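As a concrete sketch: OpenAI’s o-series models accept a `reasoning_effort` parameter. The model name below is an assumption, and other platforms expose this control differently, so check your provider’s documentation.

```python
# Illustrative request payload for an o-series reasoning model.
# `reasoning_effort` accepts "low", "medium", or "high" -- the exact
# parameter name and supported models vary by provider (assumption).
payload = {
    "model": "o3-mini",            # assumed model name
    "reasoning_effort": "high",    # think harder on complex analysis
    "messages": [
        {
            "role": "user",
            "content": "Untangle the conflicting termination clauses in sections 4 and 9.",
        }
    ],
}
```

Higher effort costs more time and tokens, so reserve it for the genuinely complex tasks listed below.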
Handle Ambiguity Upfront
Tell the model what to do with unclear situations.
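For example (illustrative wording):

```text
"If a clause is ambiguous, do not guess. List the possible readings, say
which you consider most likely and why, and flag it for human review."
```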
When to Use Reasoning Models
Perfect For:
- Complex multi-party agreements
- Regulatory compliance analysis
- Untangling contradictory provisions
- Risk assessment across multiple documents
- Novel legal issues without precedent
Use Standard Models For:
- Simple document summaries
- Basic information extraction
- Routine playbook applications
- Quick yes/no questions
Working with Large Documents
Reasoning models excel at large-document analysis. Instead of breaking a document into chunks, provide the whole document in a single prompt, along with your role and a specific question.
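A single-pass prompt over a long document might look like this (illustrative wording):

```text
"Here is the full services agreement. Acting as counsel for the vendor,
identify every provision that shifts liability to us."
```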
Common Pitfalls
- Over-prompting: These models already know how to reason. Don’t micromanage the steps — define the problem and let the model find the best approach.
- Too many examples: Unlike standard models, reasoning models work better figuring things out themselves. Examples can make them overthink.
- Assuming formatting: If you want bullets, tables, or bold text, explicitly request it. Otherwise you’ll get plain text.
- Rushing complex analysis: These models take longer but produce better results. Don’t shortcut the process with oversimplified prompts.