RLHF, DPO, and Constitutional AI compared — how each shapes model behaviour, what it costs, and when to use it.
A pretrained LLM predicts the next token — it's a completion engine, not an assistant. Alignment is the process of steering that completion engine toward being helpful, honest, and harmless. It happens after pretraining and after SFT.
Raw pretraining teaches a model to predict plausible continuations of text. But "plausible" includes offensive, factually wrong, or harmful content if it's statistically likely given the prompt. Alignment techniques layer preferences on top of that statistical foundation — telling the model what humans actually want.
Supervised Fine-Tuning on demonstration data always comes first. You show the model (prompt, ideal response) pairs. It's the cheapest alignment step and gives the biggest quality jump. Every downstream alignment technique builds on a well-SFT'd model.
SFT shifts the base model's entire distribution toward assistant-like outputs. It teaches format, style, instruction following, and reasoning chains. Without good SFT, RLHF or DPO training becomes noisy — you're optimizing on top of a weak foundation.
Three rules of thumb for SFT data:
- Quality: even 10,000 high-quality, clearly written examples beat 100,000 noisy ones.
- Diversity: cover instruction types, reasoning styles, and edge cases.
- Iteration: run SFT early and often; each refinement compounds.
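As a concrete illustration of the quality-over-quantity point, a simple curation pass might deduplicate prompts and drop low-effort responses before SFT. A minimal sketch; the field names and the 80-character floor are illustrative assumptions, not recommended values:

```python
def curate_sft_data(examples: list[dict], min_response_chars: int = 80) -> list[dict]:
    """Filter (prompt, response) pairs: drop duplicate prompts and low-effort responses."""
    seen_prompts = set()
    curated = []
    for ex in examples:
        prompt = ex["prompt"].strip()
        response = ex["response"].strip()
        if prompt in seen_prompts:  # keep one example per prompt, favoring diversity
            continue
        if len(response) < min_response_chars:  # crude proxy for a low-effort answer
            continue
        seen_prompts.add(prompt)
        curated.append({"prompt": prompt, "response": response})
    return curated
```

Real pipelines layer on semantic deduplication and model-based quality scoring, but even a crude filter like this moves the needle.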
RLHF (Reinforcement Learning from Human Feedback) is the alignment method behind ChatGPT, Claude, and GPT-4. It maximizes a learned reward model via PPO (Proximal Policy Optimization) while penalizing divergence from the SFT checkpoint using KL divergence.
Annotators rank model responses (typically A vs B). This is expensive: each prompt needs several candidate responses compared, across tens of thousands of prompts, often with multiple annotators per comparison to control for disagreement.
A separate model learns to score responses. It is trained on comparisons: given (prompt, response A, response B), predict which one humans prefer. During PPO, it then stands in for human judgment as the reward signal.
Fine-tune the LLM to maximize reward model scores while staying close to the SFT model via KL divergence penalty. The penalty prevents reward hacking and distribution collapse.
Collect new preference data on the updated model, retrain the reward model, run PPO again. Each iteration refines preferences and catches reward model drift.
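The reward model in step 2 is typically trained with a Bradley–Terry pairwise loss: maximize the log-probability that the chosen response scores higher than the rejected one. A framework-free sketch of that loss, with scalar scores standing in for reward-model outputs:

```python
import math

def pairwise_preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Bradley-Terry preference loss: -log(sigmoid(r_chosen - r_rejected)).

    Minimizing this drives the reward model to score preferred responses higher.
    """
    margin = score_chosen - score_rejected
    # -log(sigmoid(margin)) == log(1 + exp(-margin)), written via log1p for stability
    return math.log1p(math.exp(-margin))
```

A wider margin between chosen and rejected scores yields a smaller loss; equal scores give exactly log 2, the coin-flip baseline.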
Direct Preference Optimization (DPO) reformulates RLHF as a classification problem. Instead of training a reward model and running PPO, DPO directly optimizes the policy on preference pairs: given (prompt, chosen, rejected), update the policy to assign higher probability to chosen over rejected.
No separate reward model. No PPO loop. The only extra component is a frozen reference model used for log-probability comparisons, and those can be precomputed once. In practice DPO is substantially simpler to implement and roughly half the cost to run. Empirically, it matches RLHF quality on many benchmarks.
DPO uses implicit reward modeling: the reward is folded into the loss function. This simplicity comes with tradeoffs: DPO may be less stable than RLHF on very large models, and there is no standalone reward model to inspect or evaluate. But for teams without massive annotation budgets or GPU fleets, DPO is often the pragmatic choice.
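Under the hood, the DPO objective is a logistic loss on log-probability ratios between the policy and the frozen reference model. A minimal sketch with scalar log-probs standing in for sequence log-likelihoods:

```python
import math

def dpo_loss(
    policy_logp_chosen: float,
    policy_logp_rejected: float,
    ref_logp_chosen: float,
    ref_logp_rejected: float,
    beta: float = 0.1,
) -> float:
    """DPO loss: -log sigmoid(beta * (chosen log-ratio - rejected log-ratio)).

    The log-ratios act as implicit rewards; beta plays the role of the KL coefficient.
    """
    chosen_ratio = policy_logp_chosen - ref_logp_chosen
    rejected_ratio = policy_logp_rejected - ref_logp_rejected
    margin = beta * (chosen_ratio - rejected_ratio)
    return math.log1p(math.exp(-margin))  # -log(sigmoid(margin)), numerically stable
```

When the policy upweights the chosen response relative to the reference more than it upweights the rejected one, the loss drops below the log 2 starting point; this is the gradient signal that replaces the whole reward-model-plus-PPO machinery.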
```python
from datasets import Dataset
from trl import DPOConfig, DPOTrainer
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the policy model; TRL creates the frozen reference copy for the KL penalty
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# DPO dataset format: {prompt, chosen, rejected}
# 'chosen' = preferred response, 'rejected' = dispreferred response
dpo_data = [
    {
        "prompt": "Explain gradient descent.",
        "chosen": "Gradient descent is an optimization algorithm that iteratively moves parameters in the direction that reduces loss...",
        "rejected": "It's just math stuff that makes AI learn.",
    },
    # ... more preference pairs
]
dataset = Dataset.from_list(dpo_data)

config = DPOConfig(
    output_dir="./dpo-model",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    learning_rate=5e-7,   # lower than SFT — fine-grained preference tuning
    beta=0.1,             # KL penalty coefficient — higher = stay closer to ref model
    loss_type="sigmoid",  # standard DPO loss variant
    max_length=512,
    fp16=True,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # TRL automatically creates a frozen reference copy
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # newer TRL versions name this argument processing_class
)
trainer.train()
trainer.save_model("./dpo-model-final")
```
Constitutional AI (CAI) is Anthropic's approach to replacing human preference labels with AI-generated feedback. A set of principles — the "constitution" — guides a capable model to critique and revise its own outputs. The revised outputs become training data for alignment.
This scales without human annotation. Instead of paying annotators to rank responses, you craft a constitution and let the model self-improve. But it requires a capable enough base model to self-critique reliably — weak models will generate poor feedback.
This trades human effort for LLM compute. The constitution must be well-written and aligned with your values — vague principles lead to vague feedback. And the critique model must be strong enough to notice flaws and suggest improvements.
```python
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()

class PreferenceJudgment(BaseModel):
    preferred: str  # "A" or "B"
    reasoning: str
    confidence: float  # 0.0-1.0

JUDGE_SYSTEM = """You are an expert AI alignment judge. Given a question and two responses,
determine which response is: more helpful, more accurate, safer, and better aligned with
human values. Be consistent and objective."""

def generate_preference_label(
    prompt: str, response_a: str, response_b: str
) -> PreferenceJudgment:
    """Constitutional AI / RLAIF: use LLM judge to label preferences."""
    user = f"""Question: {prompt}
Response A:
{response_a}
Response B:
{response_b}
Which response is better? Reply with your preference (A or B), reasoning, and confidence."""
    result = client.beta.chat.completions.parse(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": JUDGE_SYSTEM},
            {"role": "user", "content": user},
        ],
        response_format=PreferenceJudgment,
        temperature=0.0,
    )
    return result.choices[0].message.parsed

def build_dpo_dataset(prompts: list[str], model_responses: list[tuple]) -> list[dict]:
    """Build DPO training data using RLAIF labels."""
    dataset = []
    for prompt, (resp_a, resp_b) in zip(prompts, model_responses):
        judgment = generate_preference_label(prompt, resp_a, resp_b)
        chosen, rejected = (resp_a, resp_b) if judgment.preferred == "A" else (resp_b, resp_a)
        if judgment.confidence >= 0.7:  # only use high-confidence labels
            dataset.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return dataset
```
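The judging code above covers the RLAIF labeling side; the other half of Constitutional AI is the critique-and-revise loop. A minimal sketch, with the LLM call abstracted as a plain callable so any client can be plugged in; the prompt wording and the `llm` interface are illustrative assumptions:

```python
from typing import Callable

def constitutional_revision(
    prompt: str,
    response: str,
    principles: list[str],
    llm: Callable[[str], str],  # takes a prompt string, returns a completion
    rounds: int = 1,
) -> str:
    """Critique a response against each constitutional principle, then revise it.

    The revised output can be paired with the original as a (chosen, rejected)
    preference pair for downstream DPO or reward-model training.
    """
    revised = response
    for _ in range(rounds):
        for principle in principles:
            critique = llm(
                f"Principle: {principle}\nPrompt: {prompt}\nResponse: {revised}\n"
                "Critique how the response violates the principle, if at all."
            )
            revised = llm(
                f"Prompt: {prompt}\nResponse: {revised}\nCritique: {critique}\n"
                "Rewrite the response to address the critique."
            )
    return revised
```

Each principle costs two LLM calls per round (critique, then revision), which is the compute-for-annotation trade described above.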
| Method | Human labels needed | Compute cost | Stability | Best when |
|---|---|---|---|---|
| SFT | Demonstrations | Low | High | Always — prerequisite |
| RLHF | Preference pairs + reward model | High | Medium | Maximum quality, budget available |
| DPO | Preference pairs only | Medium | High | Simpler RLHF alternative |
| Constitutional AI | Minimal | Medium | Medium | Scaling without labellers |
RLHF achieves the highest quality but at the highest cost. DPO typically gets close to RLHF quality (often cited around 90–95%) at roughly half the compute. Constitutional AI trades some quality for annotation savings. Most teams should start with SFT + DPO, moving to RLHF only if quality plateaus and budget allows.
Choosing between SFT, RLHF, and DPO depends on your resources and goals. SFT alone gets you 70–80% of the way for most applications and requires only a curated dataset. DPO adds a significant improvement with modest extra complexity: it needs only preference pairs, no reward model. Full RLHF with PPO is expensive and brittle but can push quality further for high-stakes applications.
A practical guideline: start with SFT on high-quality domain data, evaluate if the model meets your bar, then add DPO with human preference data if not. Only attempt PPO-based RLHF if you have an ML team with reinforcement learning experience and budget for RM training. Constitutional AI / RLAIF is a middle path — use an existing aligned LLM to generate preference labels, bypassing human annotation cost.
| Method | Data Required | Compute | Quality Gain | When to Use |
|---|---|---|---|---|
| SFT | Curated examples | Low | Good baseline | Always — first step |
| DPO | Preference pairs | Low-Medium | +10–20% | After SFT, before RLHF |
| PPO/RLHF | Preference pairs (to train reward model) | High | +15–30% | High-stakes, large budgets |
| RLAIF / CAI | LLM-generated feedback | Medium | +10–25% | No human annotation budget |