Applications

Instructor

A Python library that makes it trivial to get structured, validated outputs from LLMs using Pydantic models — the simplest path from LLM text to typed Python objects.

Validation: Pydantic v2
Retry logic: built-in
Backends: OpenAI / Anthropic / Gemini / Ollama

SECTION 01

What Is Instructor?

Instructor wraps LLM clients (OpenAI, Anthropic, Gemini, Cohere, Ollama) and patches them to accept a response_model parameter. You pass a Pydantic model as the expected return type; Instructor prompts the LLM to return JSON matching that schema, validates the output, and automatically retries on validation failures. The result: every LLM call returns a typed Python object.

SECTION 02

Basic Usage

from pydantic import BaseModel
import instructor
from openai import OpenAI

client = instructor.from_openai(OpenAI())

class UserInfo(BaseModel):
    name: str
    age: int
    email: str

user = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=UserInfo,
    messages=[{"role": "user", "content": "Extract: John Smith, 34 years old, john@example.com"}],
)

print(user.name)   # "John Smith"
print(user.age)    # 34
print(user.email)  # "john@example.com"
print(type(user))  # <class '__main__.UserInfo'>

SECTION 03

Validation & Retries

Pydantic validators run on the extracted output. If validation fails, Instructor retries with the error message appended to the prompt, so the LLM sees its mistake and corrects it.

from pydantic import BaseModel, field_validator

class ProductReview(BaseModel):
    sentiment: str
    score: int
    summary: str

    @field_validator("score")
    @classmethod
    def score_range(cls, v):
        if not 1 <= v <= 5:
            raise ValueError("Score must be between 1 and 5")
        return v

    @field_validator("sentiment")
    @classmethod
    def sentiment_values(cls, v):
        if v not in ("positive", "negative", "neutral"):
            raise ValueError("Must be positive, negative, or neutral")
        return v

review = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=ProductReview,
    max_retries=3,  # retry up to 3 times on validation error
    messages=[{"role": "user", "content": "Review: Great product, works perfectly! 5 stars."}],
)

SECTION 04

Nested Models

Instructor handles arbitrarily nested Pydantic models, so LLMs can reliably produce complex structured outputs.

from typing import Optional
from pydantic import BaseModel

class Address(BaseModel):
    street: str
    city: str
    country: str

class Company(BaseModel):
    name: str
    industry: str
    founded_year: Optional[int] = None
    headquarters: Address

company = client.chat.completions.create(
    model="gpt-4o",
    response_model=Company,
    messages=[{"role": "user", "content": "Extract company info: OpenAI, founded 2015, San Francisco, AI research"}],
)

print(company.headquarters.city)  # "San Francisco"

SECTION 05

Streaming Structured Output

Instructor supports streaming partial objects as they're generated — useful for progressive rendering in UIs. Use instructor.Partial[YourModel] as the response_model to receive partial objects as each field completes.
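Under the hood, instructor.Partial makes every field optional so that incomplete JSON still validates. This pydantic-only sketch simulates the sequence of partial objects a UI would receive during streaming (the Article model and the hand-written snapshots are illustrative stand-ins for what Instructor yields):

```python
from typing import Optional
from pydantic import BaseModel

class Article(BaseModel):
    title: Optional[str] = None
    summary: Optional[str] = None
    word_count: Optional[int] = None

# Hand-written stand-ins for the stream of partial objects
# Instructor yields as the LLM completes each field:
snapshots = [
    {},
    {"title": "Q3 Report"},
    {"title": "Q3 Report", "summary": "Revenue grew 12%."},
    {"title": "Q3 Report", "summary": "Revenue grew 12%.", "word_count": 480},
]
partials = [Article.model_validate(s) for s in snapshots]
print(partials[1].title)        # "Q3 Report"
print(partials[-1].word_count)  # 480
```

Each snapshot is a valid object, so a UI can render whatever has arrived so far without waiting for the full response.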

SECTION 06

When to Use Instructor

Use when you need: typed LLM outputs in Python, zero-boilerplate JSON extraction, automatic validation and retries, or complex nested data structures from unstructured text. It's the fastest way to go from "I want structured data from an LLM" to working code.

SECTION 07

Prompt Optimization and Validation

Instructor excels at extracting structured data from unstructured text. Define your output format as a Pydantic model. Instructor validates the LLM output against the schema automatically, retrying with corrected prompts if validation fails. This guarantees type safety without manual parsing.

Practical example: extract contact information from an email. Define the schema (email, phone, name, company). Pass the email text to Instructor. It returns a validated Contact object. No manual regex parsing, no type errors, no hallucinated fields outside the schema.

from pydantic import BaseModel, Field
from instructor import from_openai
import openai

class Contact(BaseModel):
    email: str = Field(description="Valid email address")
    phone: str = Field(description="Phone number in E.164 format")
    name: str = Field(description="Person's full name")
    company: str = Field(description="Company name if mentioned")

client = from_openai(openai.OpenAI())

email_text = "Hi, this is Jane Doe from Acme Corp. Reach me at jane@acme.com or +14155550123."

resp = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": email_text}],
    response_model=Contact,
)
print(resp)  # Guaranteed to be a valid Contact object

Validation benefits: type safety, schema enforcement, automatic retries, clear error messages. Drawback: adds latency from validation loop. For most use cases, one retry is sufficient (~10% of the time). Set max_retries to control retry behavior.

resp = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": text}],
    response_model=Contact,
    max_retries=3,
)
# Instructor will retry up to 3 times if validation fails
Use Case             | Benefit                   | Cost                     | Complexity
Data extraction      | High (guaranteed valid)   | Small (1 retry typical)  | Low (define schema)
Entity recognition   | High (no hallucinations)  | Small (1-2 retries)      | Low
Classification       | Medium (valid categories) | Tiny (rare retry)        | Minimal
Free-form generation | Low (not suitable)        | N/A                      | N/A
SECTION 08

Advanced Patterns and Integration

Nested models: define complex hierarchies. A Contact contains an Address, which contains coordinates. Instructor validates the entire tree. This is powerful for document parsing: extract a document structure with nested sections, fields, and relationships.

Union types: define multiple possible output formats. The LLM chooses the right format based on input. Instructor validates against all possible types and returns the correct one. Useful for multi-domain applications.
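A pydantic-level sketch of how union validation picks the matching type; the model names and the Literal discriminator tag are illustrative, and with Instructor you would pass the same Union as response_model:

```python
from typing import Union, Literal
from pydantic import BaseModel, TypeAdapter

# Two possible output shapes; a Literal tag lets Pydantic discriminate.
class BugReport(BaseModel):
    kind: Literal["bug"]
    component: str
    severity: int

class FeatureRequest(BaseModel):
    kind: Literal["feature"]
    description: str

Ticket = TypeAdapter(Union[BugReport, FeatureRequest])

# Validation picks the union member that matches -- the same mechanism
# that routes LLM output to the right model.
parsed = Ticket.validate_python(
    {"kind": "bug", "component": "auth", "severity": 2}
)
print(type(parsed).__name__)  # BugReport
```

The Literal tag keeps discrimination unambiguous even when the members share field names.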

Custom validation: add Pydantic validators to enforce business logic. Example: validate that phone numbers are in correct format, emails are valid, dates are reasonable. Instructor reruns the LLM with validation error messages if custom logic fails.
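A minimal sketch of that feedback loop: the ValidationError text printed below is the kind of message Instructor appends to the retry prompt (the Contact model and the E.164 check here are illustrative):

```python
from pydantic import BaseModel, ValidationError, field_validator

class Contact(BaseModel):
    phone: str

    @field_validator("phone")
    @classmethod
    def e164(cls, v: str) -> str:
        # Business-logic check the schema alone can't express
        if not (v.startswith("+") and v[1:].isdigit()):
            raise ValueError("phone must be E.164, e.g. +14155550123")
        return v

# Simulate a bad LLM extraction and capture the error the model would see:
try:
    Contact(phone="415-555-0123")
    msg = ""
except ValidationError as e:
    msg = str(e)
print(msg)
```

Because the error string names the field and the expected format, the model usually corrects itself on the first retry.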

Instructor bridges the gap between LLMs and structured output. It solves the fundamental problem of LLM unpredictability: guaranteeing output format and validity. This enables reliable integration of LLMs into production systems. Use Instructor whenever you need structured, validated output from language models. It pays for itself through reduced error handling and debugging.

Production reliability: Instructor adds latency from validation and retries. In practice, 90-95% of responses pass validation on first try. Add ~100-200ms for retry logic. For most applications, this is acceptable. If latency-critical, use simpler schemas or disable retries.

Cost considerations: retries mean extra API calls. A response that requires 2 retries costs 3x API tokens. Optimize schemas to maximize first-pass success rate. Use simple types when possible. Add detailed field descriptions to help the LLM understand requirements. These small changes reduce retries and save costs.
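Field descriptions work because they land in the JSON schema Instructor sends to the model. A quick way to inspect exactly what the LLM sees (the Invoice model is hypothetical):

```python
from pydantic import BaseModel, Field

class Invoice(BaseModel):
    total: float = Field(description="Grand total in USD, numeric only")
    due_date: str = Field(description="ISO 8601 date, e.g. 2024-07-01")

# The descriptions travel with the schema, guiding the model's output:
schema = Invoice.model_json_schema()
print(schema["properties"]["total"]["description"])
# Grand total in USD, numeric only
```

Reviewing this schema before deployment is a cheap way to catch vague descriptions that would otherwise surface as retries.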

Combining with RAG: use Instructor to extract information from documents as part of retrieval-augmented generation. Extract entities, relationships, facts. Store structured outputs for downstream processing. This creates knowledge graphs from unstructured text, enabling powerful query and reasoning capabilities.

SECTION 09

Scaling and Production Deployment

Batch processing: process multiple documents in parallel. Instructor handles concurrent validation. Use async/await for non-blocking I/O. Build queues for high-throughput scenarios. Process thousands of documents per minute with proper infrastructure. Each document gets validated independently, failures don't block others.
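The fan-out pattern can be sketched with the Instructor call stubbed out; extract below is a placeholder for a real async client call (e.g. one built with instructor.from_openai(AsyncOpenAI())), so the sketch runs without an API key:

```python
import asyncio

async def extract(doc: str) -> dict:
    # Placeholder for the real async Instructor call and its network I/O
    await asyncio.sleep(0)
    return {"doc": doc, "length": len(doc)}

async def process_batch(docs: list[str]) -> list[dict]:
    # gather runs extractions concurrently; return_exceptions=True
    # keeps one failed document from blocking the others.
    results = await asyncio.gather(
        *(extract(d) for d in docs), return_exceptions=True
    )
    return [r for r in results if not isinstance(r, Exception)]

docs = ["ticket one", "ticket two", "ticket three"]
results = asyncio.run(process_batch(docs))
print(len(results))  # 3
```

In production you would bound concurrency with an asyncio.Semaphore to stay under provider rate limits.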

Monitoring and observability: track validation success rates. Monitor retry frequencies. Alert on pattern changes (sudden increase in retries indicates distribution shift). Log all validation failures for debugging. Use observability to understand production behavior and improve schemas over time.
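A minimal sketch of counting validation outcomes; the Severity model is illustrative, and in production these counters would feed your metrics backend rather than a local dict:

```python
from collections import Counter
from pydantic import BaseModel, ValidationError

metrics = Counter()

class Severity(BaseModel):
    level: int

def validated(payload: dict):
    """Record each validation outcome so retry spikes show up in dashboards."""
    try:
        obj = Severity.model_validate(payload)
        metrics["valid"] += 1
        return obj
    except ValidationError:
        metrics["invalid"] += 1
        return None

validated({"level": 3})
validated({"level": "high"})  # not coercible to int -> counted as invalid
print(dict(metrics))
```

A rising invalid count is an early warning that the input distribution has shifted and the schema or prompt needs revisiting.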

Fine-tuning for your domain: use Instructor with fine-tuned models for specialized domains. Medical document extraction, legal contract parsing, scientific paper analysis. Fine-tuning reduces hallucinations and improves accuracy. Combine domain-specific models with structured validation for state-of-the-art reliability.

Integration patterns: use Instructor as the output layer in RAG systems. Extract from retrieved documents. Feed extracted structured data to downstream processors. Build knowledge graphs, update databases, trigger workflows. This architecture is more reliable than trying to extract information from freeform LLM outputs.

Future of structured extraction: Instructor is part of a broader trend toward constrained generation. As LLMs improve, structured output becomes increasingly valuable. Expect tooling to evolve toward more sophisticated schema validation and error recovery. Tools like Instructor will become standard infrastructure for production LLM systems.

Getting started: begin with simple schemas. Validate basic extractions from text. Iterate on schema design based on real errors. Add custom validators incrementally. Start with one use case, expand to others once you master the technique. Instructor unlocks reliable structured output from LLMs, essential capability for production systems that need guaranteed formats.

Real-world example: a company uses Instructor to extract structured data from customer support tickets. They define a schema with fields such as issue type, severity, required action, and customer sentiment; Instructor validates each extraction, and failed validations trigger manual review. The success rate improved from 70% (manual parsing) to 99% (Instructor), reducing the support team's burden and improving response quality and downstream data quality. This is typical of Instructor adoption: a small investment in schema design pays massive dividends in production quality and cost savings.

Structured output guarantees are essential for production AI systems. Hallucinations become manageable when confined to optional fields, type violations become impossible, and invalid enums get automatically corrected. Instructor makes this reliability possible without custom parsing logic. As LLM applications mature from prototypes to production, structured output becomes non-negotiable, and Instructor is the de facto solution in the Python ecosystem. Adoption is accelerating as teams discover the value of guaranteed output formats and schema enforcement for production reliability and cost control. Adopt it for structured extraction from language models: it transforms unstructured LLM outputs into reliable, validated data with guaranteed quality and format.