

Reasoning models are different from standard AI models. They automatically break down problems into steps internally before giving you an answer. Most major AI providers now offer reasoning-capable models. This means you need to adjust how you write prompts.

What Makes Reasoning Models Different

Regular AI models respond immediately with their best guess. Reasoning models pause to think through the problem systematically before responding. They:
  • Analyze multi-step problems without being told how
  • Work through logical deductions on their own
  • Check their own work for consistency
  • Handle complex analysis better than standard models
The trade-off is they take longer to respond, but the quality is usually worth the wait for complex legal analysis.

Key Prompting Changes

Be Direct and Simple

Reasoning models don’t need extensive instructions about how to think. They already know how to break down problems.
Standard model prompt:
First, identify all liability provisions.
Then, assess each for unlimited exposure.
Finally, suggest appropriate caps.
Reasoning model prompt:
Review the liability provisions and suggest appropriate caps where we have unlimited exposure.
The reasoning model will automatically identify, assess, and suggest – you don’t need to spell out the steps.

Skip the Examples

These models work best with zero-shot prompting (no examples). Adding examples can actually confuse them or make them overthink simple tasks.
Don’t do this: “Here are three examples of good liability caps…”
Do this: “Ensure liability caps align with industry standards for SaaS vendors.”

Control Output Formatting Explicitly

Reasoning models avoid markdown formatting unless you specifically ask for it. If you want formatted output, say so:
Provide your analysis with markdown formatting enabled. Use headers, bullets, and tables where appropriate.
Or simply include “Formatting re-enabled” in your prompt.

Provide Essential Context Only

These models can handle large documents, but don’t dump unnecessary background. Give them what matters for the specific task.
Too much: Full company history, all previous negotiations, entire email chains
Just right: Current document, your role, key constraints, specific question

Best Practices

Structure Your Input Clearly

Use XML tags or clear sections to organize different parts of your prompt:
<context>
We're a startup vendor reviewing an enterprise MSA.
Low leverage situation.
</context>

<task>
Identify terms that could prevent us from raising our next funding round.
</task>
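If you assemble prompts programmatically, the same structure can be built with a simple string template. A minimal Python sketch (the tag names mirror the example above; the helper function is illustrative, not part of any library):

```python
def build_prompt(context: str, task: str) -> str:
    """Wrap background and instruction in XML-style tags so the
    model can tell the two apart."""
    return (
        f"<context>\n{context.strip()}\n</context>\n\n"
        f"<task>\n{task.strip()}\n</task>"
    )

prompt = build_prompt(
    context=(
        "We're a startup vendor reviewing an enterprise MSA. "
        "Low leverage situation."
    ),
    task=(
        "Identify terms that could prevent us from raising our "
        "next funding round."
    ),
)
```

Keeping the tags in one place makes it easy to add sections later (for example, a <documents> block) without rewriting every prompt.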

Specify Output Preferences

Be explicit about what you want back:
Provide:
- Three bullet points with the critical issues
- One paragraph explaining the business impact
- Table of suggested redlines

Request Self-Checking

These models are good at verification. Ask them to check their own work:
After your analysis, verify there are no contradictions between your recommendations.

Control Reasoning Effort

Some platforms let you specify how hard the model should think:
Use high reasoning effort to analyze this complex indemnification structure.
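On some platforms this is a request parameter rather than prompt text. As an illustration only (parameter names vary by provider; OpenAI's o-series models, for example, accept a `reasoning_effort` setting of "low", "medium", or "high"), a request might be configured like this:

```python
# Sketch of a request payload with an explicit reasoning-effort setting.
# `reasoning_effort` follows OpenAI's o-series parameter; other providers
# expose similar controls under different names.
request = {
    "model": "o3-mini",              # illustrative model name
    "reasoning_effort": "high",      # spend more internal thinking on hard problems
    "messages": [
        {
            "role": "user",
            "content": "Analyze this complex indemnification structure.",
        }
    ],
}
# The payload would then be sent with the provider's SDK, e.g.:
# client.chat.completions.create(**request)
```

Check your platform's documentation for the exact control it exposes; in Pincites or a chat interface, stating the request in the prompt (as above) is the portable option.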

Handle Ambiguity Upfront

Tell the model what to do with unclear situations:
If any provisions are ambiguous, state your assumptions before analyzing.

When to Use Reasoning Models

Perfect For:

  • Complex multi-party agreements
  • Regulatory compliance analysis
  • Untangling contradictory provisions
  • Risk assessment across multiple documents
  • Novel legal issues without precedent

Use Standard Models For:

  • Simple document summaries
  • Basic information extraction
  • Routine playbook applications
  • Quick yes/no questions

Working with Large Documents

Reasoning models excel at large document analysis. Instead of breaking documents into chunks:
Review this entire merger agreement for provisions that could delay closing. Focus on conditions precedent and termination rights.
The model will systematically work through the entire document.

Common Pitfalls

  • Over-prompting: These models already know how to reason. Don’t micromanage the steps — define the problem and let the model find the best approach.
  • Too many examples: Unlike standard models, reasoning models work better figuring things out themselves. Examples can make them overthink.
  • Assuming formatting: If you want bullets, tables, or bold text, explicitly request it. Otherwise you’ll get plain text.
  • Rushing complex analysis: These models take longer but produce better results. Don’t shortcut the process with oversimplified prompts.

The Key Insight

Reasoning models are like senior attorneys who already know how to approach legal analysis. Don’t treat them like junior associates who need step-by-step instructions. Give them the problem clearly and let them apply their training. Focus your prompts on defining the problem and desired output, not the process in between.