Keywords mapped to chapters where they are substantively discussed. Chapter numbers only; see the table of contents for titles.
| Keyword | Chapters |
|---|---|
| agreement scoring | 2, 5, 8 |
| arbitration (structured) | 2, 5 |
| ATO (Authority to Operate) | 13, 14 |
| automated resolution rate | 2, 14 |
| batch economics | 6, 14 |
| batch processing | 6, 7, 14 |
| bounded agency | 1, 11, 13 |
| Bradley-Terry aggregation | 4 |
| checkpoint | 6, 7 |
| cloud parity gap | 13 |
| Cohen’s kappa | 2, 5 |
| compaction | 9, 11 |
| confabulation | 1, 4, 8, 9 |
| confabulation graph | 4 |
| confidence laundering | 3, 4 |
| confidence routing | 2, 3, 5 |
| config-driven architecture | 7, 9, 10 |
| context window | 1, 9, 11 |
| convergence ceiling | 1 |
| cost comparison (AI vs. manual) | 2, 14 |
| counterbalancing (ABBA notation) | 2, 4, 5 |
| data residency | 12, 13 |
| data wrangling | 3 |
| disagreement as signal | 2, 3, 4, 5 |
| disclosure review assistance | 4, 12 |
| dual-model assignment | 2 |
| dual-model cross-validation | 2, 3, 5 |
| ensemble | 2, 5 |
| error classification (transient/permanent/data) | 7 |
| evaluation framework | 8, 12 |
| evaluation harness | 8, 13, 14 |
| evidence chain | 1, 2, 10 |
| EvoScore | 7, 8, 11 |
| exponential backoff | 6, 7 |
| extraction pipeline | 4, 10 |
| FCSM | 8 |
| FCSM/NIST crosswalk | 8, 12 |
| Federal Survey Concept Mapper | 2, 5, 6, 8, 10, 14 |
| FedRAMP | 12, 13, 14 |
| fine-tuning | 2, 3 |
| fine-tuning cost trap | 3 |
| Five Safes | 12, 13 |
| Fleiss’ kappa | 2, 5 |
| format extraction | 3 |
| format normalization | 6 |
| generator-critic loop | 5 |
| golden test set | 2, 7, 8, 9 |
| governance | 1, 12, 13 |
| governance-as-gate vs. governance-as-enabler | 13 |
| handoff document | 7, 10 |
| human-in-the-loop | 1, 2, 5, 10 |
| idempotent operation | 7 |
| imputation | 3 |
| inference-time degradation (self-refinement) | 1, 5 |
| institutional overhead | 14 |
| inter-rater reliability | 2, 5 |
| iterative refinement trap | 1, 5 |
| knowledge graph | 4 |
| last mile problem | 11, 13 |
| LLM-as-judge | 5 |
| maturity levels (AI automation) | 1, 13 |
| MCP (Model Context Protocol) | 11 |
| model card | 12 |
| model collapse | 1 |
| model mix (per-stage selection) | 14 |
| model provenance | 12 |
| model transience | 2, 6, 9 |
| model version pinning | 2, 6, 7, 9 |
| NAICS coding | 3 |
| 90/10 rule | 11 |
| NIST AI 600-1 | 8, 9, 12 |
| NIST AI RMF | 7, 8, 12 |
| “offline isn’t offline” | 12 |
| opportunity cost | 14 |
| pairwise comparison | 4, 5 |
| parallel consensus | 2, 5 |
| parallel processing | 6 |
| pilot specification | 13 |
| position bias | 2, 5 |
| procurement (government) | 13 |
| provenance chain | 4, 9, 10 |
| quantization | 3 |
| rate limiting | 6, 7 |
| recursive stochasticity | 7, 11 |
| regression testing | 7, 8 |
| reproducibility | 1, 7, 10 |
| response code correction | 3 |
| reward hacking (self-refinement) | 1, 5 |
| schema inference | 3 |
| self-bias amplification | 1, 5 |
| Semantic Drift (T1) | 9, 10 |
| Session Continuity (SC) | 9 |
| SFV (State Fidelity Validity) | 9, 10 |
| small language model (SLM) | 3 |
| smoke test | 7 |
| SOC coding | 2, 3 |
| soup spoon principle | 8 |
| State Coherence (SCoh) | 9 |
| State Discontinuity (T5) | 9 |
| State Provenance (SP) | 4, 9, 10 |
| State Supersession Failure (T4) | 9 |
| statistical disclosure limitation | 4, 12 |
| stochastic liabilities | 1 |
| stochastic tax | 1, 2, 9 |
| supply chain (model/software) | 12 |
| Terminological Consistency (TC) | 4, 9 |
| TEVV | 7, 8 |
| tiered governance | 13 |
| token budget | 11, 14 |
| training cutoff | 7, 9 |
| unit test | 7 |
| valid and reliable (NIST) | 8 |
| variance amplification | 1 |
| vendor diversity | 2, 5 |
| workflow orchestration | 11 |