AI Quality as Surrogacy for Idealized Deliberation: Technical Appendix
Formal framework with precise definitions, identification results, influence functions, and asymptotic theory.
Prerequisites: This appendix assumes familiarity with semiparametric efficiency theory, influence functions, and causal inference. For the conceptual introduction, see the main post.
0. Notation and spaces
- Context space $\mathcal{X}$, action space $\mathcal{A}$ (text, code, plans), score space $\mathcal{S}$.
- A policy $\pi$ maps $x \in \mathcal{X}$ to a distribution $\pi(\cdot \mid x)$ on $\mathcal{A}$. Let $\Pi$ be a class of admissible policies.
- $P_X$ denotes the population distribution of contexts; we treat single-turn first and extend to trajectories in §10.
- An Idealized Deliberation Oracle (IDO) is a functional $Y^*(x, a) \in [0, 1]$ representing the normalized evaluation of $(x, a)$ under idealized deliberation. See the utility semantics box below for a precise definition.
- A judge (or surrogate measurement process) maps $(x, a)$ to a random score $S$. We allow a ladder of rungs $S^{(1)}, \dots, S^{(K)}$
induced by a filtration $\mathcal{F}_1 \subseteq \cdots \subseteq \mathcal{F}_K$ with $S^{(k)}$ measurable w.r.t. $\mathcal{F}_k$.
Target quality
For any $\pi \in \Pi$,
$$V(\pi) \;=\; \mathbb{E}_{x \sim P_X,\, a \sim \pi(\cdot \mid x)}\big[\, Y^*(x, a) \,\big].$$
IDO semantics (utility view)
Fix an outcome space $\mathcal{O}$, a kernel $P(o \mid x, a)$, a utility $u$ (per-stakeholder utilities $u_1, \dots, u_m$ if there are several stakeholders), an optional social aggregator $W$, a risk/aggregation functional $\rho$ (e.g. mean, CVaR), and a strictly increasing normalization $N$ onto $[0, 1]$. The IDO is then
$$Y^*(x, a) \;=\; N\Big( \rho_{\,o \sim P(\cdot \mid x, a)}\big[\, W\big(u_1(o), \dots, u_m(o)\big) \,\big] \Big).$$
Defaults: if single-stakeholder ($m = 1$), $W$ is the identity; otherwise pick $W$ explicitly (e.g., weighted sum, max–min). $\rho$ defaults to the expectation; $N$ uses reference-policy anchoring, mapping the raw value of $\pi_{\text{low}}$ to 0 and of $\pi_{\text{high}}$ to 1. Record $(\mathcal{O}, u, W, \rho, N)$ in the assumptions ledger.
1. Axioms for the IDO (normative)
Let $Y^*$ be the limiting value of a deliberation procedure.
- A1 (Deliberative stability). There exists a sequence of increasing-effort labels $Y^{(1)}, Y^{(2)}, \dots$ s.t. $Y^{(t)} \to Y^*$ as the effort $t \to \infty$.
- A2 (Evidence monotonicity). If one deliberation procedure conditions on (weakly) more evidence relevant to the objective than another, its evaluation is (weakly) closer to $Y^*$; additional relevant evidence never degrades the limit.
- A3 (Instrumental invariance). If two procedures yield the same world-state relevant to the objective, they have equal $Y^*$.
A1–A3 make $Y^*$ a well-defined limit of a "deliberation ladder."
Note. If $(u, W, \rho, N)$ change across environments, selection enters $Y^*$ and the calibration will not transport (§3.5 table, row "$Y^*$"). Record any such change in the assumptions ledger (§12).
2. Surrogacy (structural) and transport (stability) assumptions
- S1 (IDO-surrogate sufficiency at rung $k$). There exists a measurable $f_k$ s.t.
$$\mathbb{E}\big[\, Y^* \mid S^{(k)}, X, A \,\big] \;=\; f_k\big(S^{(k)}, X\big).$$
(Optionally, add monotonicity of $f_k$ in a one-dimensional risk index $s$.)
- S2 (Transportability across policies/time). For a collection of environments $e \in \mathcal{E}$ (policies, cohorts, time), the same $f_k$ works: for all $e$,
$$\mathbb{E}_e\big[\, Y^* \mid S^{(k)}, X \,\big] \;=\; f_k\big(S^{(k)}, X\big).$$
Graphical test (Pearl & Bareinboim, 2014 [7]): In a selection diagram modeling environment differences via selection nodes $Z$, S2 holds if $\big(S^{(k)}, X\big)$ is S-admissible: $Y^* \perp\!\!\!\perp Z \mid \big(S^{(k)}, X\big)$ in the diagram with incoming arrows to $A$ removed, where $Z$ represents the selection nodes. Intuitively: calibration transports if no selection node points into $Y^*$ given the surrogate.
- S3 (Positivity/overlap for off-policy re-use). If estimating $V(\pi)$ from logs of a behavior policy $\pi_0$, then $\pi(a \mid x) > 0 \Rightarrow \pi_0(a \mid x) > 0$ a.s.
- S4 (Judge availability). For any $\pi$ used in Direct mode, we can obtain $S^{(k)}$ on fresh draws at scale; for OPE/DR, we have $S^{(k)}$ in the logs.
- L1 (Oracle MAR). Let $D \in \{0, 1\}$ indicate whether an example received an oracle label $Y^*$. Then $Y^* \perp\!\!\!\perp D \mid \big(S^{(k)}, X\big)$: oracle labeling is ignorable conditional on observed surrogates and covariates.
- L2 (Oracle positivity). $P\big(D = 1 \mid S^{(k)}, X\big) > 0$ on the support where $\hat f_k$ will be applied. Ensures the calibration function is identifiable and transportable.
Relationship to Kallus & Mao (2024)[1]
Kallus & Mao study a related but distinct problem: estimating treatment effects on a primary outcome $Y$ when labels are sparse but surrogates are abundant. Their framework does not assume surrogacy sufficiency (our S1); surrogates improve precision without replacing $Y$. They derive semiparametrically efficient doubly robust estimators under unconfoundedness and MAR labeling, using cross-fitting to handle flexible nuisance estimation.
In contrast, our IDO framework treats the calibrated IDO value $Y^*$ as the target outcome scale (§0), relying on S1–S2 for identification. The key practical distinction is transportability:
- IDO with S1–S2: The calibration function $f_k$ transports across datasets/policies (S2). Calibrate once on an oracle slice → evaluate many new policies on different data using only $S^{(k)}$. Example: calibrate on 1000 Arena prompts with GPT-5 labels, then evaluate 100 policies on 10,000 new prompts with only GPT-4.1-nano scores. Separation of calibration and evaluation is the key efficiency gain.
- K&M without S1: No transport assumption. Calibration and estimation must happen on the same dataset. Every new evaluation context requires $Y$ labels for some units in that context. More robust to S1/S2 failures, but you can't amortize calibration investment across contexts.
- OUA variance: Our jackknife (§5.3) accounts for uncertainty in learning $\hat f_k$ under S1–S2. K&M's EIF includes analogous calibration uncertainty via cross-fitting, but without assuming the calibration transports.
The frameworks are complementary: use IDO+surrogacy when S2 (transport) is credible and you want to amortize calibration across many evaluation contexts; use K&M-style estimation when transport is suspect and you're willing to collect labels in each new evaluation dataset. Our transport test (§6) is critical for detecting S2 violations.
3. Identification
Let $R := f_k\big(S^{(k)}, X\big)$ be the calibrated reward on the IDO scale.
Proposition 1 (Direct identification)
Under S1 (and S2 + L1–L2 if $f_k$ is learned out-of-domain),
$$V(\pi) \;=\; \mathbb{E}_{x \sim P_X,\, a \sim \pi(\cdot \mid x)}\Big[\, f_k\big(S^{(k)}, X\big) \,\Big].$$
Proof sketch. By S1, $\mathbb{E}\big[\, Y^* \mid S^{(k)}, X, A \,\big] = f_k\big(S^{(k)}, X\big)$. Take expectations over $a \sim \pi(\cdot \mid x)$ and $x \sim P_X$ and apply the tower property. S2 + L1–L2 are needed only if the calibration transports from a different distribution.
Proposition 2 (IPS identification)
Under S1, S3 (and S2 + L1–L2 if $f_k$ is learned out-of-domain), from logs $\{(x_i, a_i, s_i)\}$ collected under $\pi_0$,
$$V(\pi) \;=\; \mathbb{E}_{\pi_0}\!\left[\, \frac{\pi(a \mid x)}{\pi_0(a \mid x)}\, f_k\big(S^{(k)}, X\big) \,\right].$$
Proposition 3 (DR identification)
Under S1, S3 (and S2 + L1–L2 if $f_k$ is learned out-of-domain), let $q(x, a)$ be any outcome model ("critic"). Then
$$V(\pi) \;=\; \mathbb{E}_{\pi_0}\!\left[\, V_q(x) + \frac{\pi(a \mid x)}{\pi_0(a \mid x)} \Big( f_k\big(S^{(k)}, X\big) - q(x, a) \Big) \,\right],$$
where $V_q(x) := \mathbb{E}_{a \sim \pi(\cdot \mid x)}\big[\, q(x, a) \,\big]$.
This holds even if one of the two (the logging propensities or the critic $q$) is misspecified (doubly robust).
3.5. Transport formulas (cross-environment evaluation)
When evaluating in a target environment that differs from the calibration source, Pearl & Bareinboim's transport framework [7] tells us exactly which target quantities to measure. Below are the three common deployment scenarios:
Case A: Covariate shift only (selection into X)
Scenario: Prompt distribution changes (new user population, different time period), but judge mechanism and oracle meaning are invariant.
Transport formula:
$$V_{\text{tgt}}(\pi) \;=\; \mathbb{E}_{x \sim P_X^{\text{tgt}},\; a \sim \pi(\cdot \mid x),\; s \sim P(S^{(k)} \mid x, a)}\big[\, f_k(s, x) \,\big].$$
What you need in target: $P_X^{\text{tgt}}$ (ability to draw prompts from the target population). Can keep $f_k$ and the judge channel $P\big(S^{(k)} \mid x, a\big)$ trained on source data.
Case B: Judge/measurement shift (selection into S^(k))
Scenario: Judge model changes (GPT-4.1-nano → GPT-4.5-nano), instrumentation updates, or deliberation depth increases, but prompt distribution and oracle meaning are invariant.
Transport formula:
$$V_{\text{tgt}}(\pi) \;=\; \mathbb{E}_{x \sim P_X,\; a \sim \pi(\cdot \mid x),\; s \sim P^{\text{tgt}}(S^{(k)} \mid x, a)}\big[\, f_k(s, x) \,\big].$$
What you need in target: $P^{\text{tgt}}\big(S^{(k)} \mid x, a\big)$ (the new judge channel). Can keep $f_k$ if S-admissibility holds (no selection into $Y^*$). If prompts also shift, replace $P_X$ by $P_X^{\text{tgt}}$ in the outer expectation (i.e., use Case C).
Case C: Covariate + judge shift (selection into X and S^(k))
Scenario: Both prompt distribution and judge mechanism change (e.g., deploying to new geography with different user base and updated judge model).
Transport formula:
$$V_{\text{tgt}}(\pi) \;=\; \mathbb{E}_{x \sim P_X^{\text{tgt}},\; a \sim \pi(\cdot \mid x),\; s \sim P^{\text{tgt}}(S^{(k)} \mid x, a)}\big[\, f_k(s, x) \,\big].$$
What you need in target: Both $P_X^{\text{tgt}}$ and $P^{\text{tgt}}\big(S^{(k)} \mid x, a\big)$.
When transport fails: Selection into Y*
If selection points into $Y^*$ (oracle meaning changed: e.g., safety standards shifted, evaluation criteria evolved), S-admissibility is violated and $f_k$ does not transport. You must recalibrate $f_k$ with new oracle labels in the target environment, or adopt the Kallus & Mao estimator (§4.6) that targets $Y$ directly per context without assuming transport.
| Selection node location | $f_k$ transports? | Required target measurements | Source pieces you keep |
|---|---|---|---|
| $X$ only | ✓ | $P_X^{\text{tgt}}$ | $f_k$, judge channel |
| $S^{(k)}$ only | ✓ | $P^{\text{tgt}}\big(S^{(k)} \mid x, a\big)$ | $f_k$, $P_X$ |
| $X$ and $S^{(k)}$ | ✓ | $P_X^{\text{tgt}}$ and $P^{\text{tgt}}\big(S^{(k)} \mid x, a\big)$ | $f_k$ |
| $Y^*$ | ✗ | New oracle labels to recalibrate | — |
4. Estimators
Let $\mathcal{D}_{\text{oracle}}$ index the examples with expensive IDO labels $Y^*$ (at the top rung one can afford); the remaining examples have only $S^{(k)}$.
4.1. Calibrator
Estimate $\hat f_k$ on $\mathcal{D}_{\text{oracle}}$ by:
- Monotone (isotonic): nondecreasing in $S^{(k)}$ and mean-preserving on the oracle slice.
- Two-stage: Fit $\hat g\big(S^{(k)}, X\big)$ (e.g., a spline in $S^{(k)}$), then isotonic calibration of $Y^*$ on the index $\hat g$ with mean preservation.
Note on mean preservation: Mean preservation holds on the calibration slice; after transport to new domains/policies, the mean can differ unless S2 (transport) and L1–L2 (oracle MAR/positivity) hold. Use the transport test (§6) to validate.
Use K-fold cross-fitting: train $\hat f_k^{(-j)}$ on the folds other than $j$, predict on fold $j$, to obtain out-of-fold values $\hat f_k\big(S^{(k)}, X\big)$.
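Below is a minimal sketch of the cross-fitted calibrator, assuming a scalar judge score (no $X$ covariates) and scikit-learn's isotonic regression; the function name and the explicit mean-recentering step are illustrative choices, not part of the formal recipe.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.model_selection import KFold

def fit_calibrator_crossfit(s, y_star, n_folds=5):
    """Cross-fitted isotonic calibrator f_k mapping judge scores S to the IDO scale.

    Sketch only: scalar score, no X covariates; assumes the oracle slice was
    sampled MAR (L1) with positivity (L2). Returns out-of-fold calibrated values
    and a calibrator refit on the full oracle slice for downstream use.
    """
    s = np.asarray(s, float)
    y_star = np.asarray(y_star, float)
    oof = np.full_like(y_star, np.nan)
    for train_idx, test_idx in KFold(n_splits=n_folds, shuffle=True, random_state=0).split(s):
        iso = IsotonicRegression(out_of_bounds="clip")
        iso.fit(s[train_idx], y_star[train_idx])
        oof[test_idx] = iso.predict(s[test_idx])
    # Re-center so the out-of-fold predictions are mean-preserving on the oracle slice.
    oof += y_star.mean() - oof.mean()
    full_fit = IsotonicRegression(out_of_bounds="clip").fit(s, y_star)
    return oof, full_fit
```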
4.2. Direct (fresh draws)
With $n$ fresh prompts $x_i \sim P_X$, draws $a_i \sim \pi(\cdot \mid x_i)$, and judge scores $s_i$ obtained under $\pi$,
$$\hat V_{\text{Direct}}(\pi) \;=\; \frac{1}{n} \sum_{i=1}^{n} \hat f_k\big(s_i, x_i\big).$$
4.3. IPS (logs only)
$$\hat V_{\text{IPS}}(\pi) \;=\; \frac{1}{n} \sum_{i=1}^{n} \frac{\pi(a_i \mid x_i)}{\pi_0(a_i \mid x_i)}\, \hat f_k\big(s_i, x_i\big).$$
4.4. DR (logs + critic ± fresh draws)
Fit $\hat q(x, a) \approx \mathbb{E}\big[\hat f_k\big(S^{(k)}, X\big) \mid X = x, A = a\big]$ via cross-fitting. If fresh draws from $\pi$ are available, approximate $\hat V_{\hat q}(x_i) = \mathbb{E}_{a \sim \pi(\cdot \mid x_i)}\big[\hat q(x_i, a)\big]$ by Monte Carlo. Then
$$\hat V_{\text{DR}}(\pi) \;=\; \frac{1}{n} \sum_{i=1}^{n} \left[\, \hat V_{\hat q}(x_i) + \frac{\pi(a_i \mid x_i)}{\pi_0(a_i \mid x_i)} \Big( \hat f_k\big(s_i, x_i\big) - \hat q(x_i, a_i) \Big) \,\right].$$
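Once the calibrated rewards, log-propensities, and cross-fitted critic values are available as arrays, the three estimators above reduce to a few lines. A minimal sketch (function names are illustrative; all inputs assumed precomputed upstream):

```python
import numpy as np

def direct_value(r):
    """Direct: mean calibrated reward f_k(S) over fresh draws scored under pi."""
    return float(np.mean(r))

def ips_value(r, logp_pi, logp_pi0):
    """IPS from logs of pi0; logp_pi / logp_pi0 are log pi(a_i|x_i), log pi0(a_i|x_i)."""
    w = np.exp(np.asarray(logp_pi) - np.asarray(logp_pi0))
    return float(np.mean(w * np.asarray(r, float)))

def dr_value(r, logp_pi, logp_pi0, q_logged, v_q_pi):
    """Doubly robust value estimate.

    q_logged : cross-fitted critic q(x_i, a_i) evaluated at the logged action
    v_q_pi   : per-prompt Monte Carlo estimate of E_{a ~ pi(.|x_i)}[q(x_i, a)]
    Returns the point estimate and per-example contributions (for IF-based SEs).
    """
    w = np.exp(np.asarray(logp_pi) - np.asarray(logp_pi0))
    contrib = np.asarray(v_q_pi, float) + w * (np.asarray(r, float) - np.asarray(q_logged, float))
    return float(np.mean(contrib)), contrib
```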
4.5. Weight stabilization (optional, off-policy)
Project raw weights to a mean-one, score-indexed monotone cone (SIM-style calibration) to boost ESS. This is a bias–variance tradeoff: stabilized weights can introduce small bias unless they converge to the true importance ratio. Use weight stabilization inside DR estimators (where outcome models guard against modest weight misspecification), and report diagnostics (ESS, tails).
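One simple instance of this projection, assuming a scalar judge score as the index: isotonic-regress the raw weights on the score and rescale to mean one, then track ESS before and after. This is a sketch, not the full SIM-style procedure.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def stabilize_weights(raw_w, score):
    """Project raw importance weights onto a score-indexed monotone, mean-one set:
    isotonic-regress the weights on the judge score, clip, and rescale to mean one."""
    iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
    w = np.clip(iso.fit(score, raw_w).predict(score), 1e-12, None)
    return w / w.mean()

def ess(w):
    """Effective sample size (sum w)^2 / sum w^2; report before and after stabilization."""
    w = np.asarray(w, float)
    return float(w.sum() ** 2 / np.sum(w ** 2))
```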
4.6. K–M drop-in estimator (when transport is doubtful)
If S2 (transport) or L1–L2 (oracle MAR/positivity) are suspect, use a Kallus & Mao–style estimator that does not assume surrogacy sufficiency. This requires collecting labels in each evaluation context, but is robust to calibration transport failures.
Setup for two-policy comparisons (ATE) on a fixed prompt set:
- Let $A \in \{0, 1\}$ denote the policy indicator (e.g., baseline vs. candidate); $X$ = prompts/contexts, $S$ = cheap judge scores, $Y$ = oracle outcome
- Observe $(X, A, S)$ on all examples; collect $Y$ on an MAR-selected subset ($D = 1$)
- Nuisances (cross-fit): policy propensity $e(X) = P(A = 1 \mid X)$, labeling propensity $r(X, A, S) = P(D = 1 \mid X, A, S)$ (or a density ratio via offset logistic regression if labels are very sparse), outcome model $m(X, A, S) = \mathbb{E}[Y \mid X, A, S]$, and its projection $\mu_a(X) = \mathbb{E}\big[m(X, a, S) \mid X, A = a\big]$
Estimator: the sample average of the estimated K&M EIF $\hat\psi_i$ (see the comparison box in §5.3); a code sketch follows below.
Rate: $\sqrt{n}$ if the label fraction is bounded away from 0; governed by the number of labeled examples in the very-sparse-labels regime. Doubly robust: consistent if either $(e, r)$ or $(m, \mu_a)$ are correct. Use cross-fitting and the empirical IF variance for valid inference.
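A sketch of the sample-average-of-EIF estimator, using the nuisance definitions above (all nuisances assumed cross-fit upstream); the exact Kallus & Mao formulation has additional refinements, so treat this as illustrative.

```python
import numpy as np

def km_style_ate(a, d, y, e_x, r_xas, m_xas, mu1_x, mu0_x):
    """Sample average of the doubly robust score sketched above.

    a      : policy indicator A in {0, 1}
    d      : oracle-label indicator D in {0, 1} (y may be NaN where d == 0)
    e_x    : cross-fitted P(A = 1 | X)
    r_xas  : cross-fitted P(D = 1 | X, A, S)
    m_xas  : cross-fitted E[Y | X, A, S]
    mu1_x, mu0_x : cross-fitted projections E[m | X, A = a] for a = 1, 0
    Returns (ate_hat, se_hat).
    """
    a, d = np.asarray(a, float), np.asarray(d, float)
    e_x, r_xas = np.asarray(e_x, float), np.asarray(r_xas, float)
    m_xas = np.asarray(m_xas, float)
    mu1_x, mu0_x = np.asarray(mu1_x, float), np.asarray(mu0_x, float)
    y = np.where(d == 1, np.asarray(y, float), 0.0)        # unlabeled entries zeroed, then masked by d
    label_corr = d / r_xas * (y - m_xas) + m_xas           # inner MAR (missing-label) correction
    mu_a = np.where(a == 1, mu1_x, mu0_x)
    psi = mu1_x - mu0_x + (a / e_x - (1 - a) / (1 - e_x)) * (label_corr - mu_a)
    return float(psi.mean()), float(psi.std(ddof=1) / np.sqrt(len(psi)))
```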
5. Influence functions and inference
Assume pathwise differentiability and regularity (bounded moments, entropy conditions satisfied via cross-fitting).
5.1. Efficient influence function (EIF) for V(π)
Under S1 and known $f_k$,
$$\varphi(x, a, s;\, \pi) \;=\; \frac{\pi(a \mid x)}{\pi_0(a \mid x)} \Big( f_k(s, x) - q(x, a) \Big) + \mathbb{E}_{a' \sim \pi(\cdot \mid x)}\big[\, q(x, a') \,\big] - V(\pi), \qquad q(x, a) := \mathbb{E}\big[\, f_k\big(S^{(k)}, X\big) \mid X = x, A = a \,\big].$$
With DR structure and nuisances $(\hat q, \hat\pi_0)$,
$$\hat V_{\text{DR}}(\pi) - V(\pi) \;=\; \frac{1}{n} \sum_{i=1}^{n} \hat\varphi(x_i, a_i, s_i;\, \pi) + o_P\big(n^{-1/2}\big),$$
which is Neyman-orthogonal to first-order perturbations of $(\hat q, \hat\pi_0)$ holding $f_k$ fixed. Uncertainty from learning $\hat f_k$ on the oracle slice is added separately via OUA (§5.3). If desired, one can treat $f_k$ as a nuisance and cross-fit it jointly to achieve formal orthogonality; we separate it and account for its uncertainty via OUA for transparency and modularity.
5.2. Asymptotics and SEs
With K-fold cross-fitting,
$$\sqrt{n}\,\big( \hat V(\pi) - V(\pi) \big) \;\xrightarrow{d}\; \mathcal{N}\big( 0,\ \mathrm{Var}[\varphi] \big).$$
Estimate the variance with the empirical variance of the $\hat\varphi_i$ (cluster-robust if needed).
5.3. Oracle-uncertainty aware (OUA) variance
If $\hat f_k$ is learned from a finite oracle slice, add a delete-one-fold jackknife over the $K$ oracle folds:
$$\widehat{\mathrm{Var}}_{\text{OUA}} \;=\; \frac{K - 1}{K} \sum_{j=1}^{K} \Big( \hat V_{(-j)} - \bar V \Big)^2, \qquad \bar V = \frac{1}{K} \sum_{j=1}^{K} \hat V_{(-j)},$$
where $\hat V_{(-j)}$ recomputes the estimate with $\hat f_k$ trained without oracle fold $j$.
Total variance: $\widehat{\mathrm{Var}}_{\text{total}} = \widehat{\mathrm{Var}}_{\text{IF}} + \widehat{\mathrm{Var}}_{\text{OUA}}$. Use Satterthwaite df for small-sample t-intervals if desired.
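A minimal sketch of the variance bookkeeping, assuming per-example influence contributions from the DR estimator and $K$ leave-one-oracle-fold re-estimates; it uses a normal quantile for simplicity (swap in a Satterthwaite-df t quantile for small oracle slices).

```python
import numpy as np
from scipy import stats

def if_variance(contrib):
    """Sampling variance of the point estimate from per-example IF contributions."""
    contrib = np.asarray(contrib, float)
    return float(contrib.var(ddof=1) / len(contrib))

def oua_jackknife_variance(v_loo):
    """Delete-one-oracle-fold jackknife: v_loo[j] is the value re-estimated with
    f_k trained without oracle fold j (K entries in total)."""
    v_loo = np.asarray(v_loo, float)
    k = len(v_loo)
    return float((k - 1) / k * np.sum((v_loo - v_loo.mean()) ** 2))

def total_ci(v_hat, var_if, var_oua, alpha=0.05):
    """Two-sided CI on the combined variance (normal quantile for simplicity)."""
    se = np.sqrt(var_if + var_oua)
    z = stats.norm.ppf(1 - alpha / 2)
    return float(v_hat - z * se), float(v_hat + z * se)
```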
Kallus & Mao EIF (without S1): for comparison
K&M estimate the ATE on $Y$ (not on the calibrated $Y^*$ scale), using surrogates to improve efficiency without assuming surrogacy sufficiency. Their setup:
- Treatment $A \in \{0, 1\}$, outcome $Y$, surrogates $S$, covariates $X$
- Oracle label indicator $D$ with labeling propensity $r(X, A, S) = P(D = 1 \mid X, A, S)$ (MAR)
- Nuisances: treatment propensity $e(X) = P(A = 1 \mid X)$, outcome model $m(X, A, S) = \mathbb{E}[Y \mid X, A, S]$, and its projection $\mu_a(X) = \mathbb{E}\big[m(X, a, S) \mid X, A = a\big]$
Regime 1: Balanced labels ($P(D = 1)$ bounded away from 0). The EIF for the ATE $\tau = \mathbb{E}\big[Y(1) - Y(0)\big]$ takes the doubly robust form
$$\psi \;=\; \mu_1(X) - \mu_0(X) - \tau + \left( \frac{A}{e(X)} - \frac{1 - A}{1 - e(X)} \right) \left\{ \frac{D}{r(X, A, S)} \big( Y - m(X, A, S) \big) + m(X, A, S) - \mu_A(X) \right\}.$$
This is doubly robust: consistent if either $(e, r)$ are correct or $(m, \mu_a)$ are correct. Cross-fitting ensures valid inference with flexible ML nuisances.
Regime 2: Very sparse labels ($P(D = 1) \to 0$). Replace $1 / r$ with a density ratio (estimated via offset logistic regression). The rate is then governed by the number of labeled examples, showing the efficiency gains from unlabeled surrogates.
Key difference from IDO: K&M target effects on the $Y$ scale and do not assume S1, so they cannot "calibrate once, evaluate everywhere." To run K&M in your stack, collect $Y$ labels in each evaluation context and estimate the nuisances jointly.
6. Testable diagnostics (falsifiable implications)
- Transport test (policy/time). Per-group residual mean test:
$$H_0:\ \mathbb{E}\big[\, Y^* - \hat f_k\big(S^{(k)}, X\big) \,\big|\, G = g \,\big] = 0 \quad \text{for every group } g,$$
where $g$ indexes groups (policies, time periods, domains). Use the labeled subset; apply a multiple-testing correction (e.g., Bonferroni); a code sketch follows this list. This is a weaker, testable implication of S-admissibility: if you lack labels in multiple domains, you can only partially test S2.
- Coverage of surrogate support. Compare histograms of $S^{(k)}$ on the oracle-labeled vs. full sets; flag extrapolation if the tails are unlabeled.
- Overlap diagnostics (off-policy). Effective sample size $\mathrm{ESS} = \big(\sum_i w_i\big)^2 / \sum_i w_i^2$, weight CV, max/median ratio, Hill tail index.
- OUA share. Report $\widehat{\mathrm{Var}}_{\text{OUA}} / \widehat{\mathrm{Var}}_{\text{total}}$ to guide the labeling budget (more labels vs. more prompts).
- Prentice test (surrogacy sufficiency / S1). On oracle-labeled subsets, regress $Y^*$ on $\hat f_k\big(S^{(k)}\big)$ and test whether adding $X$ (and $A$) improves fit. Failing to reject supports S1 (surrogacy sufficiency). For S-admissibility (S2, cross-domain), add a domain indicator $G$ and test on pooled labeled data across domains: does $G$ (and its interaction with $S^{(k)}$) improve prediction? If yes, $f_k$ does not transport; recalibrate or use the K–M estimator (§4.6).
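A sketch of the per-group transport test and the OUA-share diagnostic (helper names are illustrative; residuals are computed on the oracle-labeled subset).

```python
import numpy as np
from scipy import stats

def transport_test(resid, group, alpha=0.05):
    """Per-group mean-zero t-test on oracle residuals Y* - f_k(S), Bonferroni-corrected.

    resid : residuals on the oracle-labeled subset
    group : group label per example (policy, time period, domain)
    Returns {group: (mean_residual, p_value, reject_after_bonferroni)}.
    """
    resid = np.asarray(resid, float)
    group = np.asarray(group)
    groups = np.unique(group)
    out = {}
    for g in groups:
        r = resid[group == g]
        _, p = stats.ttest_1samp(r, popmean=0.0)
        out[g] = (float(r.mean()), float(p), bool(p < alpha / len(groups)))
    return out

def oua_share(var_oua, var_if):
    """Fraction of total variance attributable to oracle-calibration uncertainty."""
    return float(var_oua / (var_oua + var_if))
```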
7. Learning with the IDO objective
For a parametric policy class $\{\pi_\theta\}$, the policy learning problem is
$$\max_{\theta}\ V(\pi_\theta) \;=\; \mathbb{E}_{x \sim P_X,\, a \sim \pi_\theta(\cdot \mid x)}\big[\, f_k\big(S^{(k)}, x\big) \,\big].$$
A plug-in gradient follows from the policy gradient identity with calibrated rewards:
$$\nabla_\theta V(\pi_\theta) \;=\; \mathbb{E}_{x \sim P_X,\, a \sim \pi_\theta(\cdot \mid x)}\big[\, \nabla_\theta \log \pi_\theta(a \mid x)\; f_k\big(S^{(k)}, x\big) \,\big],$$
optionally replacing $f_k$ by an advantage $f_k - b(x)$ for a baseline $b$. This "RL with calibrated reward" aligns training with the IDO.
For safe deployment, maximize a lower confidence bound $\hat V(\pi_\theta) - z_{1-\alpha}\, \widehat{\mathrm{SE}}_{\text{total}}(\pi_\theta)$.
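A one-sided lower-confidence-bound selection rule, as a sketch ($z = 1.645$ for a 95% one-sided bound; inputs are the point estimates and total SEs per candidate policy).

```python
import numpy as np

def select_policy_lcb(v_hats, se_totals, z=1.645):
    """Pick the candidate with the largest one-sided lower confidence bound
    V_hat - z * SE_total (z = 1.645 gives a 95% one-sided bound)."""
    lcb = np.asarray(v_hats, float) - z * np.asarray(se_totals, float)
    return int(np.argmax(lcb)), lcb
```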
8. Multiple stakeholders and social choice
Let $j = 1, \dots, m$ index stakeholders with oracles $Y^*_j$. A social aggregator $W$ defines
$$Y^*_W(x, a) \;=\; W\big( Y^*_1(x, a), \dots, Y^*_m(x, a) \big), \qquad V_W(\pi) \;=\; \mathbb{E}_{x \sim P_X,\, a \sim \pi(\cdot \mid x)}\big[\, Y^*_W(x, a) \,\big].$$
Common choices: weighted utilitarian ($W(y) = \sum_j \lambda_j y_j$), max–min ($W(y) = \min_j y_j$), or constrained variants. Surrogacy extends with per-stakeholder calibrators $f_{k, j}$; calibrate each $Y^*_j$ and plug the calibrated values into $W$.
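A sketch of plugging per-stakeholder calibrated values into a social aggregator $W$ (weighted utilitarian or max–min); the function name, weights, and rule labels are illustrative.

```python
import numpy as np

def aggregate_stakeholders(y_by_stakeholder, weights=None, rule="utilitarian"):
    """Apply a social aggregator W to per-stakeholder calibrated values.

    y_by_stakeholder : array of shape (m, n), calibrated f_{k,j}(S) per stakeholder j
    rule             : "utilitarian" (weighted sum) or "maxmin"
    """
    y = np.asarray(y_by_stakeholder, float)
    if rule == "maxmin":
        return y.min(axis=0)
    m = y.shape[0]
    w = np.full(m, 1.0 / m) if weights is None else np.asarray(weights, float)
    return w @ y  # weighted utilitarian aggregate per example
```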
9. The deliberation ladder as information order
Model rungs by a filtration $\mathcal{F}_1 \subseteq \cdots \subseteq \mathcal{F}_K$. Define
$$Y^{(k)} := \mathbb{E}\big[\, Y^* \mid \mathcal{F}_k \,\big].$$
Then by the Blackwell/Doob ordering, $j \le k$ implies $\mathbb{E}\big[\mathrm{Var}\big(Y^* \mid \mathcal{F}_j\big)\big] \ge \mathbb{E}\big[\mathrm{Var}\big(Y^* \mid \mathcal{F}_k\big)\big]$. If $S^{(k)}$ is Blackwell more informative than $S^{(j)}$, a calibrated estimator at rung $k$ is (weakly) more efficient than one at rung $j$.
10. Extension to trajectories (agents)
Let $h = \big(x_0, a_0, x_1, a_1, \dots, x_T, a_T\big)$ be a trajectory generated by policy $\pi$ interacting with environment $\mathcal{E}$. Define an IDO trajectory value
$$V(\pi) \;=\; \mathbb{E}_{h \sim \pi, \mathcal{E}}\big[\, Y^*(h) \,\big].$$
Surrogates may be terminal ($S^{(k)}(h)$) or stepwise ($S^{(k)}_t$ at each step $t$). Direct/IPS/DR estimators extend with clustering by trajectory; sequential IPS is typically ill-conditioned, so prefer Direct or DR with trajectory-level critics.
11. Limits (scope conditions)
- Non-regular targets. If $W$ or $\rho$ induces non-differentiable functionals (e.g., maxima, boundary problems), first-order theory fails; use selective inference/subsampling or shape-constrained methods.
- Severe non-transport. If S2 fails (e.g., adversarial policy styles), base-only calibration is biased; require per-policy calibration or new oracle labels.
- Overlap failures. If S3 fails, IPS/DR is unreliable even with stabilized weights; collect fresh draws and use Direct.
12. Minimal "assumptions ledger" (for every deployment)
| Code | Statement | Used by | Test / Diagnostic | Mitigation |
|---|---|---|---|---|
| S1 | $\mathbb{E}\big[Y^* \mid S^{(k)}, X, A\big] = f_k\big(S^{(k)}, X\big)$ (surrogate sufficiency) | All | Incremental-signal (Prentice) test; residuals vs. $f_k$ | Add covariates; richer judge; higher rung |
| S2 | Same $f_k$ across environments (S-admissibility); $f_k$ transports when no selection node points into $Y^*$ | All (cross-environment) | Per-group residual test (§6); cross-domain Prentice test with $G$ indicator; diagram review (§3.5) | If selection into $X$ or $S^{(k)}$: measure target distributions (§3.5 table). If selection into $Y^*$: recalibrate with target oracle labels |
| S3 | $\pi(a \mid x) > 0 \Rightarrow \pi_0(a \mid x) > 0$ (overlap) | IPS/DR | ESS, tail index, max/median | Weight stabilization; collect fresh draws |
| A1–A3 | IDO well-posed | All | Rung stability checks | Clarify oracle definition; adjust $W$ |
| L1 | $Y^* \perp\!\!\!\perp D \mid \big(S^{(k)}, X\big)$ (oracle MAR) | All (calibration) | Oracle selection independent of residuals | Randomize oracle sampling; stratify by $S, X$ |
| L2 | $P\big(D = 1 \mid S^{(k)}, X\big) > 0$ (oracle positivity) | All (calibration) | Coverage plots; extrapolation warnings | Label tail regions; flag OOD predictions |
| OUA | Finite oracle labels | Inference | OUA share | Add labels if OUA dominates |
| N | Strictly increasing normalization to $[0, 1]$; anchored to $(\pi_{\text{low}}, \pi_{\text{high}})$ (or specified benchmarks) | All (comparability & reporting) | Anchor stability check across releases; report raw $F$ and anchored $Y^*$ when anchors change | Re-anchor or freeze anchors; append a change log when re-anchoring |
13. What you report (template)
For each candidate policy $\pi$:
- $\hat V(\pi)$ on the IDO scale with a 95% CI (main + OUA components), and the df rule used.
- Diagnostics: transport test p-values, ESS (if OPE/DR), OUA share, oracle coverage plots.
- If choosing a policy: the decision together with a one-sided CI (safety margin).
Summary
- Definition: $V(\pi) = \mathbb{E}_{x \sim P_X,\, a \sim \pi(\cdot \mid x)}\big[\, Y^*(x, a) \,\big]$
- Mechanism: use surrogates $S^{(k)}$ and a calibration $f_k$ so that $\mathbb{E}\big[\, Y^* \mid S^{(k)}, X, A \,\big] = f_k\big(S^{(k)}, X\big)$
- Identification: Direct (fresh draws), IPS (reweight logs), DR (two chances)
- Uncertainty: influence-function variance + oracle-learning variance (OUA)
- Governance: multi-party encodes whose IDO matters and how
This turns "AI should do what you'd do with unlimited time" into a measurable target, with estimators, CIs, and failure tests you can run.
