Pincites uses large language models (LLMs) from Anthropic, OpenAI, and Google to analyze contracts, generate redlines, answer legal questions, and draft language. We select models based on their performance for legal work, including reasoning ability, accuracy, and reliability with complex documents.

Providers

Anthropic

Pincites integrates models from Anthropic, including the Claude family. These models excel at tasks requiring safety and reliability, such as analyzing complex contract language and generating context-aware legal guidance. Your data sent to Anthropic models remains private, isn’t used for training, and is subject to our zero data retention agreement.

OpenAI

Pincites uses models from OpenAI, including the GPT series, known for advanced text understanding and generation. These models power features like in-depth document review and nuanced legal drafting. Your data processed by OpenAI models is kept confidential, is not used for training, and is never stored.

Google

Pincites incorporates Google’s Gemini models for tasks like understanding complex legal queries and summarizing regulations. Google does not use your data for training purposes.

How we select models

We evaluate models based on four criteria, in order of priority:
  1. Accuracy: Does the model produce correct, reliable outputs for legal work?
  2. Transparency: Can we understand how the model reached its conclusions?
  3. Latency: How quickly does the model return results?
  4. Cost: What’s the expense relative to performance?
We continuously evaluate model performance and update our selections as capabilities improve.
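To illustrate what "in order of priority" means, the ranking above can be sketched as a lexicographic comparison: accuracy is compared first, and later criteria only break ties. This is a hypothetical sketch, not Pincites' actual evaluation pipeline; the model names and scores below are invented placeholders.

```python
# Hypothetical sketch of priority-ordered model selection.
# Model names and scores are illustrative, not real benchmark data.
candidates = {
    "model-a": {"accuracy": 0.92, "transparency": 0.80, "latency_s": 3.0, "cost": 1.0},
    "model-b": {"accuracy": 0.92, "transparency": 0.90, "latency_s": 5.0, "cost": 2.0},
}

def rank_key(name: str) -> tuple:
    c = candidates[name]
    # Criteria in priority order: accuracy, transparency, latency, cost.
    # Higher accuracy/transparency are better; lower latency/cost are better,
    # so the latter two are negated for a max() comparison.
    return (c["accuracy"], c["transparency"], -c["latency_s"], -c["cost"])

best = max(candidates, key=rank_key)
```

Here both candidates tie on accuracy, so the higher-transparency model wins even though it is slower and more expensive, reflecting the stated priority order.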

How we use models

Different tasks may use different models depending on what performs best:
  • Contract analysis: Models optimized for reading and understanding long documents
  • Redline generation: Models with strong instruction-following for precise edits
  • Legal research: Models capable of synthesizing information from multiple sources
  • Drafting: Models that produce clear, professional legal language
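Conceptually, this task-based selection is a routing table that maps each task type to whichever model currently performs best for it. The sketch below is purely illustrative; the task keys and model identifiers are hypothetical placeholders, not Pincites' actual configuration.

```python
# Illustrative task-to-model routing table. All names are hypothetical.
TASK_MODEL_ROUTING = {
    "contract_analysis": "long-context-model",   # long-document comprehension
    "redline_generation": "instruction-model",   # precise instruction following
    "legal_research": "synthesis-model",         # multi-source synthesis
    "drafting": "drafting-model",                # clear, professional prose
}

def select_model(task: str) -> str:
    """Return the model configured for a task; reject unknown tasks."""
    try:
        return TASK_MODEL_ROUTING[task]
    except KeyError:
        raise ValueError(f"Unknown task: {task!r}")
```

Keeping the mapping in one place makes it easy to swap in a better model for a single task as provider capabilities change, without touching the rest of the system.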

Data privacy

Your data is never used to train AI models. When Pincites sends your documents or queries to a model provider:
  • The content is processed and returned, then deleted
  • No provider retains your data after processing
  • Your documents are not used to improve or train any models
  • We maintain zero data retention agreements with all providers

Performance and reliability

Legal work demands accuracy. We prioritize:
  • Consistency: Same inputs should produce similar outputs
  • Precision: Redlines and analysis should be specific, not vague
  • Reasoning: Models should explain their conclusions when asked
  • Safety: Outputs should be professionally appropriate
When models make mistakes (and they do), your feedback helps us improve. Use the thumbs-down button to flag issues.

Learn more

For details on how Pincites handles your data, see our Security Overview and Data Protection documentation.