Dashboard table headers (column layouts only; data rows not captured):
- # | Authority | Risk Score | Predictive | Tier | A | B | C | D | Flags
- # | PHA Code | Authority Name | State | HUD Designation | Flags | View A (HUD Official) | View B (Scraped Operational) | Gap / Divergence
- # | Authority | Predictive | Descriptive | Predicted Status | Ground Truth | Trend
- # | Authority | Predictive | Status | Ground Truth | Early Warning
- # | Authority | Descriptive | Predictive | Tier | Predicted Status | Trend | Ground Truth
PRISM operates four parallel analytical layers, each producing an independent score so the user can see where the layers agree and where they diverge:
- Descriptive layer (PRISM Composite) — a 100-point reconstruction of a PHA's current compliance, financial, physical, audit, judicial, transparency, and governance posture, derived from federal datasets.
- Predictive layer — a reweighted score using EWMA-smoothed financial trend, physical trajectory, and audit recurrence to project which PHAs are most likely to be designated Troubled in the next reporting cycle.
- Three-View framework (A / B / C) — A is the score derivable solely from official HUD records; B is the score derivable solely from scraped operational signals; C is the divergence between them. Divergence flags PHAs that look healthy on paper but show operational distress, or vice versa.
- HUD Oversight Defect Model (HODM v1.0) — a defect-pattern model derived from documented HUD OIG audit findings (2009–2018), assuming structural defects remain in place absent published, evidence-based reform. Generates the “Action-Gap” signals that surface PHAs at high risk of HUD inaction.
Trend_Boost = +5 if a 2-year worsening trend spans ≥ 3 components; +3 if a 3-year worsening trend spans ≥ 2 components. Defined in the v3 scoring module but currently held at 0 in the live API pending the next pipeline pass. Source: pha_dashboard/pha_full/pha_scoring_v3.py (lines 271–275).
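A minimal sketch of the boost rule as stated above. The function name and arguments are illustrative, not the actual pha_scoring_v3.py signature:

```python
def trend_boost(worsening_2yr_components: int, worsening_3yr_components: int) -> int:
    """Sketch of the v3 Trend_Boost rule (currently held at 0 in the live API).

    Each argument is the count of score components showing a worsening
    trend over that window.
    """
    if worsening_2yr_components >= 3:
        return 5  # sharp short-term deterioration across many components
    if worsening_3yr_components >= 2:
        return 3  # slower, sustained deterioration
    return 0
```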
Sources: _compute_predictive in pha_dashboard/pha_full/dashboard/app.py (lines 263–289); v3 spec in pha_scoring_v3.py (lines 278–293); component logic in pha_scoring_v3.py (lines 324–436).
| COMPONENT | REPRESENTING | MAX | LOGIC |
|---|---|---|---|
| A | HUD Compliance | 30 | Graduated points for “Troubled” (18–26) or “Substandard” (12) designations + chronic-troubled multiplier based on years on list. |
| B | Physical Condition | 20 | Graduated points by PASS bucket (<15 to <35) or REAC inspection average. Penalty if >20% of units fail. |
| C | Financial Distress | 20 | Blend: 55% heuristic (FASS missing, low score, vendor concentration) + 45% EWMA-normalized trend. |
| D | Audit & Oversight | 15 | Material weakness (8) + repeat findings (4) + questioned costs >$500k (3) + OIG findings. |
| E | Judicial Pathway | 5 | Active receivership (5.0), federal monitorship (4.0), historical receivership (3.0). |
| F | Transparency | 5 | Document-availability gap ratio (missing vs. expected public records). 5 pts if gap ≥ 80%. |
| G | Board Governance | 5 | Missing minutes (max 3) + ED vacancy ≥ 6 mo (2) + emergency resolutions ≥ 3 (1). |
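The component table above implies a capped sum to the 100-point composite. A minimal sketch under that reading (COMPONENT_CAPS and composite_score are illustrative names, not the actual pha_scoring_v3.py API):

```python
# Caps mirror the MAX column of the component table (A through G).
COMPONENT_CAPS = {"A": 30, "B": 20, "C": 20, "D": 15, "E": 5, "F": 5, "G": 5}

def composite_score(raw_points: dict) -> float:
    """Clamp each component to its cap, then sum to the 100-point composite.

    Missing components contribute 0; over-cap raw points are truncated.
    """
    return sum(min(raw_points.get(k, 0.0), cap) for k, cap in COMPONENT_CAPS.items())
```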
Three independent views are computed for every PHA so divergence between official and operational data can be surfaced. Source: pha_dashboard/pha_full/pullers/risk_rank_engine.py (lines 430–443).
Selected view-specific rules:
- C_hud: 10 if FASS < 40; 6 if FASS < 60; 2 if FASS < 75
- F_hud: 1 if MASS < 10
- F_scraped: 4 if 0 docs found; 2 if < 3
- C_scraped: 3 if vendor concentration > 40%
- Divergent = true when B ≥ 3.0 AND A ≤ 40.0
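The view-specific rules can be sketched directly from the thresholds above. Function names are illustrative; the actual logic lives in risk_rank_engine.py:

```python
def c_hud(fass_score: float) -> int:
    """Financial-distress points from the official FASS score (View A)."""
    if fass_score < 40:
        return 10
    if fass_score < 60:
        return 6
    if fass_score < 75:
        return 2
    return 0

def f_scraped(docs_found: int) -> int:
    """Transparency points from scraped document availability (View B)."""
    if docs_found == 0:
        return 4
    if docs_found < 3:
        return 2
    return 0

def is_divergent(view_a: float, view_b: float) -> bool:
    """Healthy on paper (A low) but operationally distressed (B high)."""
    return view_b >= 3.0 and view_a <= 40.0
```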
Financial scores are processed through a three-layer pipeline before they enter Component C. Source: pha_dashboard/pha_full/pha_normalization.py (lines 180–234).
Stale financial data is down-weighted by the exponential decay factor e^(−0.25 · years_stale). Early warning fires when the EWMA rises by more than 1.5 over 3 years and the absolute score exceeds 2.0.

| COMPOSITE TIER | SCORE |
|---|---|
| CRITICAL | ≥ 70 |
| HIGH | 55–69 |
| ELEVATED | 40–54 |
| MODERATE | 20–39 |
| LOW | < 20 |
| PREDICTED STATUS | PREDICTIVE SCORE |
|---|---|
| TROUBLED_HIGH_CONFIDENCE | ≥ 65 |
| AT_RISK | ≥ 55 |
| WATCHLIST | ≥ 45 |
| LOW_RISK | < 45 |
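The staleness decay, early-warning rule, and composite tiering can be sketched together. This is an illustrative reading of the thresholds above, not the pha_normalization.py implementation:

```python
import math

def staleness_weight(years_stale: float) -> float:
    """Exponential down-weighting of stale financial data: e^(-0.25 * years)."""
    return math.exp(-0.25 * years_stale)

def early_warning(ewma_3yr_window: list, absolute_score: float) -> bool:
    """Fires when the EWMA rose by more than 1.5 over the 3-year window
    and the absolute score exceeds 2.0."""
    rise = ewma_3yr_window[-1] - ewma_3yr_window[0]
    return rise > 1.5 and absolute_score > 2.0

def composite_tier(score: float) -> str:
    """Map the 100-point composite onto the tier table above."""
    if score >= 70:
        return "CRITICAL"
    if score >= 55:
        return "HIGH"
    if score >= 40:
        return "ELEVATED"
    if score >= 20:
        return "MODERATE"
    return "LOW"
```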
A PHA is flagged active_receivership when a record exists in the pha_receivership table. Component E automatically assigns 5.0 points for active receivership, 4.0 for federal monitorship, and 3.0 for historical receivership. The Receivership view scrapes each authority's site for current operational artifacts (board minutes, recovery plans, transition reports). Source: pha_dashboard/pha_full/pullers/risk_rank_engine.py (lines 128–152).
PRISM ingests six fiscal years of HUD's Troubled PHA reports to Congress (FY2020–FY2025) plus the April 2026 PHAS_Troubled spreadsheet. A PHA is “Chronic Troubled” when it appears on the list for 8 or more years and is “Newly Troubled” when it first appears in the FY2024 or FY2025 reports without prior PRISM flagging. Source: pha_dashboard/pha_full/data/troubled_history.json + pha_scoring_v3.py (lines 80–88).
Three pools are tracked over time: Troubled, At-Risk, and Worsening. Historical recovery rate is 17.9% per year. Tipping rate (At-Risk → Troubled) ranges from 15% (best case) to 40% (HUD-inaction scenario, derived from the HODM patterns below). Source: pha_dashboard/pha_full/prism_forecast.py.
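A one-year pool projection under the stated rates can be sketched as follows. The helper is hypothetical; prism_forecast.py's actual interface may differ:

```python
def project_troubled_pool(troubled: float, at_risk: float,
                          recovery_rate: float = 0.179,
                          tipping_rate: float = 0.15) -> float:
    """Project next year's Troubled pool.

    recovery_rate: fraction of Troubled PHAs recovering per year (17.9%).
    tipping_rate:  fraction of At-Risk PHAs tipping into Troubled,
                   from 0.15 (best case) to 0.40 (HUD-inaction scenario).
    """
    recovered = troubled * recovery_rate
    tipped = at_risk * tipping_rate
    return troubled - recovered + tipped
```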
PRISM's other models score PHA risk — the likelihood that an authority is or will be operationally distressed. HODM scores the orthogonal question: HUD-inaction risk — the likelihood that, even when distress is documented, HUD will not act in time. The model is built from a corpus of HUD Office of Inspector General audit reports (2009–2018) that share a consistent set of structural defects in HUD's receivership, recovery, and enforcement processes. Each defect becomes a signal computed for every active PHA.
- Long lag between known deterioration and HUD action — observed at HANO, New London, East St. Louis, Chester, Alexander County (5–26 year gaps).
- Five-phase receivership process exists on paper only — situation assessment, stabilization, recovery plan, implementation, transition rarely produced as artifacts.
- ORO leadership chronically vacant or “Acting” — receivership oversight not in any official's performance plan.
- Geographic mismatch between receiver and PHA — East St. Louis receiver 300–370 miles from the PHA while a HUD office sat 5 miles away.
- Self-reported / falsified documentation undetected — Richmond's $2.2M misspending surfaced via outside complaint, not HUD review.
- HQS inspection failures even at “recovered” PHAs — Chester: 94% of voucher units failed HQS; contracted inspectors missed 93 24-hour life-safety violations.
- Sponsor-government capture — PHA funds and decisions co-mingled with sponsor city/county (Richmond pattern).
- Cross-program enforcement only when multi-office team assembled — PIH alone never moved on Alexander County; cross-office team in 2014 led to action in 2016.
- Senior leadership characterizes long-known issues as “recent” — ACHA email review documented 18+ months of field warnings before HQ acknowledgment.
- PIH hesitates to declare substantial default for fear of weak administrative record — explicitly cited in the ACHA email follow-up.
- Statutory maximum recovery period exceeded with no remedy invoked — New London required OIG to recommend HUD notify its own Assistant Secretary that statutory action was due.
| SIGNAL | FORMULA | FIRES WHEN |
|---|---|---|
| HUD Inaction Clock | years_since_first_troubled | > 3 years troubled with no receivership |
| Stalled Receivership | years_since_receivership_started | > 5 years with no transition-to-local-control event |
| Statutory Recovery Overdue | days_past_max_recovery_window | > 0 days past statutory maximum |
| Five-Phase Compliance [v1.0: PENDING] | phases_with_public_artifact / 5 | score ≤ 2/5 for receivership PHAs — defined but currently held pending; does not fire in v1.0 until per-receivership artifact corpus is backfilled |
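The Tier-1 signals in the table above can be sketched as a single evaluation function. Names are illustrative; Five-Phase Compliance is omitted because it is held pending in v1.0:

```python
from typing import Optional, Set

def tier1_signals(years_troubled: float,
                  years_since_receivership: Optional[float],
                  days_past_max_recovery: int) -> Set[str]:
    """Return the set of Tier-1 HODM signals that fire for one PHA.

    years_since_receivership is None when no receivership exists.
    """
    fired = set()
    # HUD Inaction Clock: > 3 years troubled with no receivership.
    if years_troubled > 3 and years_since_receivership is None:
        fired.add("HUD_INACTION_CLOCK")
    # Stalled Receivership: > 5 years with no transition-to-local-control event.
    if years_since_receivership is not None and years_since_receivership > 5:
        fired.add("STALLED_RECEIVERSHIP")
    # Statutory Recovery Overdue: any days past the statutory maximum.
    if days_past_max_recovery > 0:
        fired.add("STATUTORY_RECOVERY_OVERDUE")
    return fired
```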
- Field-vs-HQ Lag — earliest public complaint / news article / FHEO charge to date of formal HUD enforcement.
- Sponsor-Government Capture — sponsor city/county supplies shared services + PHA shows financial-control weakness (Richmond pattern).
- Inspection-Contractor Failure proxy — outsourced HQS inspections + above-baseline tenant complaint rate (Chester pattern).
- Cross-Programmatic Risk composite — active risk in 3+ of: PIH, FHEO, OIG, FAC, DEC, Labor Standards (Alexander County pattern).
- Imminent-Threat Trigger — surfaces evidence meeting HUD's emergency-action threshold (life/health/safety + criminal/fraudulent activity), removing the PHA's right to a cure period.
- Administrative-Record Strength Index — counts public, discoverable evidence per PHA (news, FHEO charges, OIG memos, FAC findings, lawsuits). Inverts the ACHA failure mode by publishing the record HUD said it lacked.
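Of the Tier-2 signals, the Cross-Programmatic Risk composite reduces to a simple membership count. A sketch of that one rule (constant and function names are illustrative):

```python
PROGRAM_OFFICES = {"PIH", "FHEO", "OIG", "FAC", "DEC", "LABOR_STANDARDS"}

def cross_programmatic_risk(active_risks: set) -> bool:
    """Fires when a PHA has active risk in 3+ of the six program offices
    (the Alexander County pattern)."""
    return len(active_risks & PROGRAM_OFFICES) >= 3
```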
PRISM exposes three independent analytical lenses on the same PHA universe. The Triangulated Assessment combines them using set-membership classification, not a weighted-average score. There are no analyst-tunable weights; the classification is fully determined by whether each track flags a given PHA.
This design is deliberate. The three lenses have different failure modes (model under-fit, designation lag, evidence-corpus depth). Combining them with a weighted score would invent a precision none individually possesses. Set-membership preserves the source of signal at every step, so an analyst can drill from a Triangulated finding back to the originating track.
| TRACK | FIRES WHEN | FAILURE MODE |
|---|---|---|
| PRISM | composite tier ∈ {CRITICAL, HIGH} | model under-fit; unmeasured causes; scoring lag |
| HUD | on FY2025 troubled list or HUD designation contains “Troubled” | designation lag (18–24 month signal latency); political-cycle suppression |
| HODM | any Tier-1 signal fires (Inaction Clock, Statutory Recovery Overdue, Stalled Receivership) | corpus scope (audit reports 2009–2018); OIG reporting cycle |
| CLASS | DEFINITION | CONFIDENCE | USE |
|---|---|---|---|
| TRIPLE-CONFIRMED | flagged by all three tracks | HIGH | priority intervention — multi-source agreement |
| DUAL-CONFIRMED | flagged by exactly two tracks | MEDIUM | action warranted; specify the missing track |
| LONE-SIGNAL | flagged by exactly one track | ANALYTIC | novel finding; investigate; lowest confirmation, highest novelty |
| UNFLAGGED | none of the three fire | — | excluded from the Triangulated Assessment |
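Because the fusion is pure set-membership, the classifier is a count of fired tracks. A minimal sketch (function name is illustrative; see _compute_hodm_and_fusion in app.py for the real implementation):

```python
def fusion_class(prism_fired: bool, hud_fired: bool, hodm_fired: bool) -> str:
    """Classify a PHA by how many of the three independent tracks fired."""
    fired = sum([prism_fired, hud_fired, hodm_fired])
    if fired == 3:
        return "TRIPLE-CONFIRMED"
    if fired == 2:
        return "DUAL-CONFIRMED"
    if fired == 1:
        return "LONE-SIGNAL"
    return "UNFLAGGED"
```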
What this fusion does NOT do. It does not combine the three tracks into a single numeric score. Doing so would require analyst-defined weights for which there is no defensible empirical basis — the three tracks measure different things. The Triangulated Assessment is a set-membership classification. Reading the chip on a PHA tells you exactly which tracks fired; it does not tell you a synthesized magnitude. For magnitude, consult the standalone PRISM Risk Report.
What this fusion preserves. Independence. An analyst reviewing a TRIPLE-CONFIRMED PHA can see which three tracks agreed; an analyst reviewing a LONE-SIGNAL PHA can see precisely which one track is the lone alarm. Compare this to a weighted-average score, where the source of signal is collapsed into a single number and the analyst loses the ability to evaluate the chain.
Cross-system enrichment. The fusion class is computed once per PHA and stored on the score record. It is therefore available to every other PRISM view (Risk Rankings, Three-View, Watchlist, individual PHA profiles), not only to the Triangulated Assessment. The Triangulated Assessment is one consumer of the classification; it is not the only one.
Implementation. See _compute_hodm_and_fusion() in pha_dashboard/pha_full/dashboard/app.py. The classification is computed once at application startup over the in-memory PHA list (after composite scoring) and served from that cache. Restarting the dashboard process refreshes the inputs; per-request recomputation is not currently performed.
The HUD Oversight Defect Model is derived from six core HUD Office of Inspector General reports plus one follow-up data review, listed below. All are publicly available at www.hudoig.gov.
PRISM is not an official HUD system. It has not been reviewed, endorsed, validated, or calibrated by HUD, and is not intended for operational, compliance, or enforcement decision-making.
HODM is a historical-defect model, not a current-conditions claim. The model assumes the structural defects documented in the source audits remain in place absent published, evidence-based reform. It does not assert that any specific HUD official is currently failing in any specific way.
All scoring is exploratory. Weights, thresholds, and tiers reflect the analyst's judgment of what is most informative given available public data, not a peer-reviewed or statutorily defined methodology.
Falsifiable by design. If HUD publishes evidence that a documented defect was reformed, the source catalog and signal weights will be updated and versioned (HODM v1.x).