PRISM
PHA Risk Intelligence System Monitor
Independent analytical tool based on publicly available data. Not an official HUD system.
NEWS FEED // LIVE
3
In Receivership
Critical
High
Elevated
At-Risk
Watchlist
Monitored
Scored
About
Methodology
Data Sources
In Receivership
Risk Rankings
Predictive
Watchlist
Early Warning
Trend View
6-Year Reports
Validation
v2→v3
States
Forecast
Feed Status
Investigative Flags
Three Views
SNA Graph
In Receivership
3
HUD Monitorship · Click for details
PHAs under direct federal monitorship or receivership. Substantial Default declared by HUD; Cure Monitors appointed.
Critical Risk
Score ≥ 70 · Click to filter
Composite of 7 risk components is 70 or above. All major indicators — management, physical, financial, audit — near maximum.
High Risk
Score 55–69 · Click to filter
Multiple risk components elevated. Typically Troubled-designated PHAs with poor physical or financial scores.
Elevated
Score 40–54 · Click to filter
Two or more risk components flagged. May include low FASS scores, inspection failures, or audit findings.
Early Warning
Rising EWMA trend · Click to filter
Financial trend rising ≥0.3 year-over-year or EWMA ≥4.0. Flags PHAs before they reach Troubled status.
Data Errors
Requires review · Click to filter
FASS or PASS scores outside valid ranges. Data may be stale, misreported, or missing from HUD systems.
Ranked PHAs
Normalization Explorer
# Authority Risk Score Predictive Tier A B C D Flags
PHA Deep Dive
Full Analysis
Select a PHA from the table to view detailed analysis
PRISM // OFFICIAL USE // ANALYTIC ASSESSMENT
PRISM Intelligence Assessment
Risk to the U.S. Public Housing Portfolio
A standing assessment of risk and impact across the high-risk PHA list.
Date of Assessment:
Prepared by: PRISM Analytic Cell
Reporting cycle: Standing
PHAs in scope:
Compiling assessment…
PRISM // OFFICIAL USE // ANALYTIC ASSESSMENT
Congressional Watchlist
HCFS Feb 2026 — 32 Troubled PHAs
Loading…
The 32 — Click any row for full dossier
Tier Risk Score Code PHA State Units HUD Designation
Loading 32 PHAs…
Cohort Risk Ranking
loading…
Loading…
Cross-Cohort Vendor Network
loading…
Loading…
HCFS Dossier
Loading…
Risk Distribution by State
EWMA Score Trends — Top Risk PHAs
Normalization Explorer
Cross-PHA Investigative Flags
Data Feed Status
Intel Signal Coverage — last 90 days
Refresh
Loading…
Corpus Coverage — At-Risk Cohort
Refresh
Per-PHA Coverage Detail
PHA Code Authority State Cohort Coverage Docs Signals Last Crawl Website
Seed Inbox — Curated URL Ingestion
Paste one row per line: PHA_CODE,URL[,DOC_TYPE_HINT][,SOURCE_LABEL]
Pending: · Processed: · Failed:
Recent Seeds
ID PHA URL Type Source Status Added
Scheduler — Background Jobs
Refresh
Recurring scrape + ingestion jobs. Toggle a row off to pause it without restarting the server. Set PRISM_SCHEDULER_DISABLED=1 as an env var to globally pause every job. Use Run Now for an immediate one-off run.
Job Cadence Enabled Last Run Status Duration Summary Next Run Action
Divergent PHAs
PHAs Shown
PHAs Crawled
Total Scored
Three-View Comparison — HUD Official vs. Scraped Operational
Divergent PHAs (scraped risk HUD does not see) ranked first; click any row for the full View A / B / C breakdown.
# PHA Code Authority Name State HUD Designation Flags View A
HUD Official
View B
Scraped Operational
Gap
Divergence
Troubled High Confidence
Predictive ≥65
At-Risk
Predictive 55–64
Watchlist
Predictive 45–54
Ground Truth Match
Currently-troubled PHAs captured
Predictive Rankings — v3 Scoring (B+C+D+F+G weighted)
Export CSV
# Authority Predictive Descriptive Predicted Status Ground Truth Trend
Cohort-Targeted Site Crawl
Each button kicks off an adaptive crawl of the 5 stalest sites in the selected Build-4 cohort. Results return inline.
Watchlist — Predictive Score ≥45
Export CSV
# Authority Predictive Status Ground Truth Early Warning
Congressional Troubled PHA Reports — 6-Year Analysis (FY2020–FY2025)
PHAs Missed by PRISM v2 (Now Caught by v3)
Model Validation — PRISM v3 vs Congressional Ground Truth
v2 → v3 Model Comparison
Early Warning Active
Rising EWMA trend
High Risk + Warning
Critical/High with rising trend
Predictive ≥55 + Warning
At-Risk with worsening trend
Intervention Window
Pre-troubled, actionable
Early Warning PHAs — Rising EWMA Trend Detected
Export CSV
# Authority Descriptive Predictive Tier Predicted Status Trend Ground Truth
Optimistic (FY2031)
Best case scenario
Baseline (FY2031)
Expected trajectory
Adverse (FY2031)
Stress case estimate
5-Year Troubled PHA Forecast — FY2026 to FY2031
PRISM · ANALYTICAL METHODOLOGY · v3.1
Models, Weights, and Sources
PRISM (PHA Risk Intelligence System Monitor) is an independent analytical model built exclusively from publicly available U.S. Department of Housing and Urban Development (HUD) reports, federal datasets, and open-source intelligence (OSINT). This page documents every scoring model, normalization step, and oversight-defect pattern used to produce the risk assessments shown elsewhere in the system. Every formula, weight, and threshold below is reproducible from the cited source files.
§ 1
Overview — What PRISM Models

PRISM operates four parallel analytical layers, each producing an independent score so the user can see where the layers agree and where they diverge:

  • Descriptive layer (PRISM Composite) — a 100-point reconstruction of a Public Housing Agency's (PHA) current compliance, financial, physical, audit, judicial, transparency, and governance posture, derived from federal datasets.
  • Predictive layer — a reweighted score using Exponentially Weighted Moving Average (EWMA)-smoothed financial trend, physical trajectory, and audit recurrence to project which PHAs are most likely to be designated Troubled in the next reporting cycle.
  • Three-View framework (A / B / C) — A is the score derivable solely from official HUD records; B is the score derivable solely from scraped operational signals; C is the divergence between them. Divergence flags PHAs that look healthy on paper but show operational distress, or vice-versa.
  • HUD Oversight Defect Model (HODM v1.0) — a defect-pattern model derived from documented HUD Office of Inspector General (OIG) audit findings (2009–2018), assuming structural defects remain in place absent published, evidence-based reform. Generates the “Action-Gap” signals that surface PHAs at high risk of HUD inaction.
§ 2
PRISM Composite Risk Score (v3)
DESCRIPTIVE FORMULA
PRISM = min(100, round(A + B + C + D + E + F + G, 1))
source: pha_dashboard/pha_full/pha_scoring_v3.py (lines 271–275)
PREDICTIVE FORMULA (LIVE API)
predictive_score_adjusted = Predictive_Base + Trend_Boost
Predictive_Base = weighted sum of components B, C, D, F, G rescaled to 100 (weights: B 30%, C 25%, D 20%, F 15%, G 10%).
Trend_Boost = +5 if ≥ 3 components show a 2-year worsening trend; +3 if ≥ 2 components show a 3-year worsening trend — defined in the v3 scoring module but currently held at 0 in the live API pending the next pipeline pass.
source: pha_dashboard/pha_full/dashboard/app.py _compute_predictive (lines 263–289); v3 spec in pha_scoring_v3.py (lines 278–293)
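The weighted-sum rescaling above can be sketched as follows. This is an illustrative reconstruction, not the actual `pha_scoring_v3.py` code: the normalization of each component by its §3 maximum before weighting is an assumption, and `predictive_base` is a hypothetical helper name.

```python
# Sketch of the §2 predictive base score. Weights come from the text above;
# dividing each component by its §3 maximum before weighting is an assumption.
WEIGHTS = {"B": 0.30, "C": 0.25, "D": 0.20, "F": 0.15, "G": 0.10}
MAXES = {"B": 20, "C": 20, "D": 15, "F": 5, "G": 5}  # component maxima from §3

def predictive_base(components: dict) -> float:
    """Weighted sum of components B, C, D, F, G rescaled to a 0-100 scale."""
    total = sum(WEIGHTS[k] * (components[k] / MAXES[k]) for k in WEIGHTS)
    return round(100 * total, 1)

# A PHA at half of every component maximum lands exactly at 50:
half = {k: MAXES[k] / 2 for k in WEIGHTS}
print(predictive_base(half))  # 50.0
```

Because the weights sum to 1.0, a PHA maxing out all five components scores exactly 100 under this sketch.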
§ 3
Components A–G

source: pha_dashboard/pha_full/pha_scoring_v3.py (lines 324–436)

COMPONENT | REPRESENTING | MAX | LOGIC
A | HUD Compliance | 30 | Graduated points for “Troubled” (18–26) or “Substandard” (12) designations + chronic-troubled multiplier based on years on list.
B | Physical Condition | 20 | Physical Assessment Subsystem (PASS) bucket (<15 to <35) or Real Estate Assessment Center (REAC) inspection avg. Penalty if >20% of units fail.
C | Financial Distress | 20 | Blend: 55% heuristic (Financial Assessment Subsystem (FASS) missing, low score, vendor concentration) + 45% EWMA-normalized trend.
D | Audit & Oversight | 15 | Material weakness (8) + repeat findings (4) + questioned costs >$500k (3) + OIG findings.
E | Judicial Pathway | 5 | Active receivership (5.0), federal monitorship (4.0), historical receivership (3.0).
F | Transparency | 5 | Document-availability gap ratio (missing vs. expected public records). 5 pts if gap ≥ 80%.
G | Board Governance | 5 | Missing minutes (max 3) + Executive Director (ED) vacancy ≥ 6 mo (2) + emergency resolutions ≥ 3 (1).
§ 4
Three-View Comparison Model

Three independent views are computed for every PHA so divergence between official and operational data can be surfaced. Source: pha_dashboard/pha_full/pullers/risk_rank_engine.py (lines 430–443).

VIEW A
HUD Official
min(100, A + B + C_hud + D + E + F_hud)

C_hud: 10 if FASS<40, 6 if <60, 2 if <75
F_hud: 1 if Management Assessment Subsystem (MASS)<10
VIEW B
Scraped Operational
min(100, F_scraped + C_scraped)

F_scraped: 4 if 0 docs found, 2 if <3
C_scraped: 3 if vendor concentration >40%
VIEW C
Divergence
Gap = max(0, B − A/10)

Divergent = true when
B ≥ 3.0 AND A ≤ 40.0
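The three views above can be transcribed directly into code. A sketch, assuming the thresholds in the table; function and parameter names are hypothetical, not those in `risk_rank_engine.py`:

```python
# Illustrative recomputation of the §4 Three-View formulas.

def view_a(A, B, D, E, fass, mass):
    """View A: HUD Official. C_hud/F_hud derive from FASS and MASS scores."""
    c_hud = 10 if fass < 40 else 6 if fass < 60 else 2 if fass < 75 else 0
    f_hud = 1 if mass < 10 else 0
    return min(100, A + B + c_hud + D + E + f_hud)

def view_b(docs_found, vendor_concentration):
    """View B: Scraped Operational."""
    f_scraped = 4 if docs_found == 0 else 2 if docs_found < 3 else 0
    c_scraped = 3 if vendor_concentration > 0.40 else 0
    return min(100, f_scraped + c_scraped)

def view_c(a_score, b_score):
    """View C: divergence gap plus the divergent flag."""
    gap = max(0, b_score - a_score / 10)
    divergent = b_score >= 3.0 and a_score <= 40.0
    return gap, divergent

# A PHA with no public documents and >40% vendor concentration but a
# modest official score is flagged divergent:
b = view_b(docs_found=0, vendor_concentration=0.55)  # 7
print(view_c(a_score=20, b_score=b))  # (5.0, True)
```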
§ 5
Normalization Pipeline

Financial scores are processed through a three-layer pipeline before they enter Component C. Source: pha_dashboard/pha_full/pha_normalization.py (lines 180–234).

1. Months Expendable Net Asset Ratio (MENAR) Winsorization
Caps extreme outliers at the 5th and 95th percentiles to prevent a single year of anomalous data from dominating a multi-year trend.
2. Bayesian Audit Weight
adj_score = weight · raw_score + (1 − weight) · population_mean
Weights: Audited = 1.0, Unaudited = 0.5, Missing = 0.4. Pulls self-reported (unaudited) values toward the population mean to discount unverified data.
3. EWMA Temporal Smoothing
EWMA_t = α · value_t + (1 − α) · EWMA_{t−1},   α = 0.40
Identifies sustained deterioration and discounts old data via the staleness multiplier e^(−0.25 · years_stale). Early warning fires when EWMA rises > 1.5 over 3 years and the absolute score is > 2.0.
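A minimal end-to-end sketch of the three layers. The percentile caps, audit weights, and α come from the text above; the function names, nearest-rank percentile method, and data shapes are assumptions rather than the actual `pha_normalization.py` implementation:

```python
# Sketch of the §5 normalization pipeline (winsorize -> Bayesian weight -> EWMA).
import math

AUDIT_WEIGHT = {"audited": 1.0, "unaudited": 0.5, "missing": 0.4}

def winsorize(values, lo_pct=5, hi_pct=95):
    """Cap extreme MENAR outliers at the 5th/95th percentiles (nearest-rank)."""
    s = sorted(values)
    lo = s[max(0, math.ceil(lo_pct / 100 * len(s)) - 1)]
    hi = s[max(0, math.ceil(hi_pct / 100 * len(s)) - 1)]
    return [min(max(v, lo), hi) for v in values]

def bayes_adjust(raw_score, audit_flag, population_mean):
    """Pull unverified (unaudited/missing) scores toward the population mean."""
    w = AUDIT_WEIGHT[audit_flag]
    return w * raw_score + (1 - w) * population_mean

def ewma(series, alpha=0.40):
    """EWMA over a chronological series: EWMA_t = a*x_t + (1-a)*EWMA_{t-1}."""
    out = series[0]
    for v in series[1:]:
        out = alpha * v + (1 - alpha) * out
    return out

# A self-reported 90 with a population mean of 60 is discounted to 75:
print(bayes_adjust(90, "unaudited", 60))  # 75.0
```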
§ 6
Risk Tier Thresholds
DESCRIPTIVE TIERS
CRITICAL · ≥ 70
HIGH · 55–69
ELEVATED · 40–54
MODERATE · 20–39
LOW · < 20
PREDICTIVE STATUS
TROUBLED_HIGH_CONFIDENCE · ≥ 65
AT_RISK · ≥ 55
WATCHLIST · ≥ 45
LOW_RISK · < 45
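The two threshold tables are a direct transcription into lookup functions; the real mapping lives in `pha_scoring_v3.py` and may differ in edge handling:

```python
# §6 tier thresholds as functions (scores assumed on the 0-100 scale).

def descriptive_tier(score: float) -> str:
    if score >= 70: return "CRITICAL"
    if score >= 55: return "HIGH"
    if score >= 40: return "ELEVATED"
    if score >= 20: return "MODERATE"
    return "LOW"

def predictive_status(score: float) -> str:
    if score >= 65: return "TROUBLED_HIGH_CONFIDENCE"
    if score >= 55: return "AT_RISK"
    if score >= 45: return "WATCHLIST"
    return "LOW_RISK"

print(descriptive_tier(62), predictive_status(48))  # HIGH WATCHLIST
```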
§ 7
Receivership & Monitorship Tracking

A PHA is flagged active_receivership when a record exists in the pha_receivership table. Component E automatically assigns 5.0 points for active receivership, 4.0 for federal monitorship, and 3.0 for historical receivership. The Receivership view scrapes each authority's site for current operational artifacts (board minutes, recovery plans, transition reports). Source: pha_dashboard/pha_full/pullers/risk_rank_engine.py (lines 128–152).

§ 8
Troubled-History Analysis

PRISM ingests six fiscal years of HUD's Troubled PHA reports to Congress (FY2020–FY2025) plus the April 2026 PHAS_Troubled spreadsheet. A PHA is “Chronic Troubled” when it appears on the list for 8 or more years and is “Newly Troubled” when it first appears in the FY2024 or FY2025 reports without prior PRISM flagging. Source: pha_dashboard/pha_full/data/troubled_history.json + pha_scoring_v3.py (lines 80–88).

§ 9
5-Year Forecast (Stock-and-Flow)

Three pools are tracked over time: Troubled, At-Risk, and Worsening. Historical recovery rate is 17.9% per year. Tipping rate (At-Risk → Troubled) ranges from 15% (best case) to 40% (HUD-inaction scenario, derived from the HODM patterns below). Source: pha_dashboard/pha_full/prism_forecast.py.
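A toy version of the stock-and-flow mechanics, using the rates quoted above (17.9% recovery; tipping rate 15–40%). This sketch ignores the Worsening pool and any inflow of newly at-risk PHAs, both of which the real `prism_forecast.py` tracks, so it understates pool sizes:

```python
# Simplified §9 stock-and-flow projection. Pool sizes and the no-inflow
# assumption are illustrative only.

def project(troubled, at_risk, years=5, tip_rate=0.25, recovery_rate=0.179):
    """Advance the Troubled / At-Risk pools one year at a time."""
    for _ in range(years):
        tipped = at_risk * tip_rate           # At-Risk -> Troubled
        recovered = troubled * recovery_rate  # Troubled -> recovered (exits pool)
        troubled = troubled + tipped - recovered
        at_risk = at_risk - tipped
    return round(troubled, 1), round(at_risk, 1)

# One year at the mid-range tipping rate, starting from 32 Troubled / 100 At-Risk:
print(project(32, 100, years=1))  # (51.3, 75.0)
```

Varying `tip_rate` between 0.15 and 0.40 reproduces the spread between the best-case and HUD-inaction scenarios.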

§ 10 · HODM v1.0
HUD Oversight Defect Model
Modeled on HUD OIG audit findings, 2009–2018. Defects documented in those reports are assumed to remain structurally in place absent published, evidence-based reform.

PRISM's other models score PHA risk — the likelihood that an authority is or will be operationally distressed. HODM scores the orthogonal question: HUD-inaction risk — the likelihood that, even when distress is documented, HUD will not act in time. The model is built from a corpus of HUD Office of Inspector General audit reports (2009–2018) that share a consistent set of structural defects in HUD's receivership, recovery, and enforcement processes. Each defect becomes a signal computed for every active PHA.

FAILURE-PATTERN CATALOG
  1. Long lag between known deterioration and HUD action — observed at the Housing Authority of New Orleans (HANO), New London, East St. Louis, Chester, Alexander County (5–26 year gaps).
  2. Five-phase receivership process exists on paper only — the five phases (situation assessment, stabilization, recovery plan, implementation, transition) are rarely produced as public artifacts.
  3. Office of Receivership Oversight (ORO) leadership chronically vacant or “Acting” — receivership oversight not in any official's performance plan.
  4. Geographic mismatch between receiver and PHA — East St. Louis receiver 300–370 miles from the PHA while a HUD office sat 5 miles away.
  5. Self-reported / falsified documentation undetected — Richmond's $2.2M misspending surfaced via outside complaint, not HUD review.
  6. Housing Quality Standards (HQS) inspection failures even at “recovered” PHAs — Chester: 94% of voucher units failed; 93 life-safety violations requiring correction within 24 hours were missed by contracted inspectors.
  7. Sponsor-government capture — PHA funds and decisions co-mingled with sponsor city/county (Richmond pattern).
  8. Cross-program enforcement only when multi-office team assembled — HUD's Office of Public and Indian Housing (PIH) alone never moved on Alexander County; cross-office team in 2014 led to action in 2016.
  9. Senior leadership characterizes long-known issues as “recent” — the Alexander County Housing Authority (ACHA) email review documented 18+ months of field warnings before HQ acknowledgment.
  10. PIH hesitates to declare substantial default for fear of weak administrative record — explicitly cited in the ACHA email follow-up.
  11. Statutory maximum recovery period exceeded with no remedy invoked — New London required OIG to recommend HUD notify its own Assistant Secretary that statutory action was due.
DERIVED SIGNALS (TIER 1, COMPUTED FROM EXISTING DATA)
SIGNAL | FORMULA | FIRES WHEN
HUD Inaction Clock | years_since_first_troubled | > 3 years troubled with no receivership
Stalled Receivership | years_since_receivership_started | > 5 years with no transition-to-local-control event
Statutory Recovery Overdue | days_past_max_recovery_window | > 0 days past statutory maximum
Five-Phase Compliance [v1.0: PENDING] | phases_with_public_artifact / 5 | score ≤ 2/5 for receivership PHAs — defined but currently held pending; does not fire in v1.0 until the per-receivership artifact corpus is backfilled
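The three live Tier-1 signals reduce to simple predicates over the score record. A sketch with hypothetical field names standing in for the real record schema:

```python
# §10 Tier-1 signals as predicates; thresholds from the table above,
# record fields assumed.

def tier1_signals(rec: dict) -> dict:
    return {
        "hud_inaction_clock":
            rec.get("years_since_first_troubled", 0) > 3
            and not rec.get("in_receivership", False),
        "stalled_receivership":
            rec.get("years_since_receivership_started", 0) > 5
            and not rec.get("transitioned_to_local_control", False),
        "statutory_recovery_overdue":
            rec.get("days_past_max_recovery_window", 0) > 0,
    }

# A PHA troubled for 6 years with no receivership fires exactly one signal:
flags = tier1_signals({"years_since_first_troubled": 6})
print(sum(flags.values()))  # 1
```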
DERIVED SIGNALS (TIER 2, REQUIRE LIGHT SCRAPING)
  • Field-vs-Headquarters (HQ) Lag — earliest public complaint / news article / Office of Fair Housing and Equal Opportunity (FHEO) charge to date of formal HUD enforcement.
  • Sponsor-Government Capture — sponsor city/county supplies shared services + PHA shows financial-control weakness (Richmond pattern).
  • Inspection-Contractor Failure proxy — outsourced HQS inspections + above-baseline tenant complaint rate (Chester pattern).
  • Cross-Programmatic Risk composite — active risk in 3+ of: PIH, FHEO, OIG, Federal Audit Clearinghouse (FAC), Departmental Enforcement Center (DEC), Labor Standards (Alexander County pattern).
DERIVED SIGNALS (TIER 3, STRATEGIC)
  • Imminent-Threat Trigger — surfaces evidence meeting HUD's emergency-action threshold (life/health/safety + criminal/fraudulent activity), removing the PHA's right to a cure period.
  • Administrative-Record Strength Index — counts public, discoverable evidence per PHA (news, FHEO charges, OIG memos, FAC findings, lawsuits). Inverts the ACHA failure mode by publishing the record HUD said it lacked.
§ 11 · FUSION LOGIC
Triangulation — All-Source Fusion
Set-membership classification across three independent analytical lenses. No analyst-tunable weights. The fusion is fully determined by which tracks fire on a given PHA.

PRISM exposes three independent analytical lenses on the same PHA universe. The Triangulated Assessment combines them using set-membership classification, not a weighted-average score. There are no analyst-tunable weights; the classification is fully determined by whether each track flags a given PHA.

This design is deliberate. The three lenses have different failure modes (model under-fit, designation lag, evidence-corpus depth). Combining them with a weighted score would invent a precision none individually possesses. Set-membership preserves the source of signal at every step, so an analyst can drill from a Triangulated finding back to the originating track.

THE THREE TRACKS
TRACK | FIRES WHEN | FAILURE MODE
PRISM | composite tier ∈ {CRITICAL, HIGH} | model under-fit; unmeasured causes; scoring lag
HUD | on FY2025 troubled list or HUD designation contains “Troubled” | designation lag (18–24 month signal latency); political-cycle suppression
HODM | any Tier-1 signal fires (Inaction Clock, Statutory Recovery Overdue, Stalled Receivership) | corpus scope (audit reports 2009–2018); OIG reporting cycle
FUSION CLASSIFICATION
CLASS | DEFINITION | CONFIDENCE | USE
TRIPLE-CONFIRMED | flagged by all three tracks | HIGH | priority intervention — multi-source agreement
DUAL-CONFIRMED | flagged by exactly two tracks | MEDIUM | action warranted; specify the missing track
LONE-SIGNAL | flagged by exactly one track | ANALYTIC | novel finding; investigate; lowest confirmation, highest novelty
UNFLAGGED | none of the three tracks fire | n/a | excluded from the Triangulated Assessment

What this fusion does NOT do. It does not combine the three tracks into a single numeric score. Doing so would require analyst-defined weights for which there is no defensible empirical basis — the three tracks measure different things. The Triangulated Assessment is a set-membership classification. Reading the chip on a PHA tells you exactly which tracks fired; it does not tell you a synthesized magnitude. For magnitude, consult the standalone PRISM Risk Report.

What this fusion preserves. Independence. An analyst reviewing a TRIPLE-CONFIRMED PHA can see which three tracks agreed; an analyst reviewing a LONE-SIGNAL PHA can see precisely which one track is the lone alarm. Compare this to a weighted-average score, where the source of signal is collapsed into a single number and the analyst loses the ability to evaluate the chain.

Cross-system enrichment. The fusion class is computed once per PHA and stored on the score record. It is therefore available to every other PRISM view (Risk Rankings, Three-View, Watchlist, individual PHA profiles), not only to the Triangulated Assessment. The Triangulated Assessment is one consumer of the classification; it is not the only one.

Implementation. See _compute_hodm_and_fusion() in pha_dashboard/pha_full/dashboard/app.py. The classification is computed once at application startup over the in-memory PHA list (after composite scoring) and served from that cache. Restarting the dashboard process refreshes the inputs; per-request recomputation is not currently performed.
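Because the fusion is pure set-membership, it reduces to counting which tracks fired. A sketch, not the actual `_compute_hodm_and_fusion()` code; the record fields (`tier`, `fy2025_troubled`, `designation`, `tier1_signals`) are hypothetical stand-ins:

```python
# §11 set-membership fusion: no weights, just track counts.

def fuse(prism_fired: bool, hud_fired: bool, hodm_fired: bool) -> str:
    n = sum([prism_fired, hud_fired, hodm_fired])
    return {3: "TRIPLE-CONFIRMED", 2: "DUAL-CONFIRMED",
            1: "LONE-SIGNAL", 0: "UNFLAGGED"}[n]

def classify(pha: dict) -> str:
    prism = pha.get("tier") in {"CRITICAL", "HIGH"}
    hud = pha.get("fy2025_troubled", False) or "Troubled" in pha.get("designation", "")
    hodm = any(pha.get("tier1_signals", {}).values())
    return fuse(prism, hud, hodm)

print(classify({"tier": "HIGH", "designation": "Troubled",
                "tier1_signals": {"hud_inaction_clock": True}}))
# TRIPLE-CONFIRMED
```

Note that the classification preserves provenance: given a class, the analyst can always recover which tracks fired by re-evaluating the three booleans, which is exactly what a weighted average would destroy.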

§ 12
Source Audit Catalog (HODM Corpus)

The HUD Oversight Defect Model is derived from six core HUD Office of Inspector General reports plus one follow-up data review, listed below. All are publicly available at www.hudoig.gov.

REPORT NO. DATE SUBJECT & KEY DEFECTS DOCUMENTED
2009-AO-0003 Sep 2009 Housing Authority of New Orleans (HANO) Receivership. No clear chain of command; no periodic reporting required; no adequate recovery plan; 4 board appointees and 8 receivers since 2002; last formal Memorandum of Agreement (MOA) expired 2003 with undocumented period through 2008; Public Housing Assessment System (PHAS) scores remained Troubled throughout.
2010-BO-0001 Jan 2010 New London, CT Housing Authority. Overall Troubled since May 2004 with HUD failing to act; $1.7M unpaid utility bills; $524K Federal funds misused for State programs; $99K Federal capital misused for State security; statutory maximum recovery period exceeded.
2012-KC-0003 2012 East St. Louis Housing Authority Receivership. 26+ years in receivership; no receivership plan specific to the Authority; receiver located 300–370 miles from PHA while a HUD office sat 5 miles away; receiver role not in performance plans; five-phase process never fully implemented.
2016-LA-1006 2016 Richmond, CA Housing Authority. Misleading documentation submitted to HUD; $2.2M misspent + $944,910 unsupported; lack of independence from City of Richmond; Public Housing Assessment Recovery System (PHARS) recovery agreement signed Feb 2013 yet weak controls persisted; HUD found out via outside complaint, not its own monitoring.
2017-PH-1007 2017 Chester, PA Housing Authority Voucher Program. Just regained control after 20 years in receivership (2014); 61 of 65 voucher units failed HQS inspection (94%); 22 in material noncompliance; 217 violations missed by contractors; 93 violations needed correction within 24 hours; ~$2.6M annual HUD payments to substandard units.
2017-OE-0014 Jun 2018 Alexander County (Cairo, IL) Housing Authority. HUD aware of negative conditions since at least 2010; took until Feb 2016 to take possession; 200 children/families lived with peeling paint, pest infestations, inoperable appliances; PIH limited expertise on receivership; cross-programmatic enforcement only after multi-office team assembled.
2017-OE-0014 (follow-up) Dec 2018 Review of Data Related to ACHA Evaluation. 1.2M emails reviewed (142,082 selected); Region V Public Housing Director knew of “downward spin” in July 2013; senior PIH did not seriously consider receivership until November 2015; FHEO Director said receivership was needed ASAP in Nov 2014, action came Feb 2016; PIH front office hesitated due to an inadequate “documented administrative record.”
§ 13
Limits & Non-Claims

PRISM is not an official HUD system. It has not been reviewed, endorsed, validated, or calibrated by HUD, and is not intended for operational, compliance, or enforcement decision-making.

HODM is a historical-defect model, not a current-conditions claim. The model assumes the structural defects documented in the source audits remain in place absent published, evidence-based reform. It does not assert that any specific HUD official is currently failing in any specific way.

All scoring is exploratory. Weights, thresholds, and tiers reflect the analyst's judgment of what is most informative given available public data, not a peer-reviewed or statutorily defined methodology.

Falsifiable by design. If HUD publishes evidence that a documented defect was reformed, the source catalog and signal weights will be updated and versioned (HODM v1.x).

PRISM METHODOLOGY v3.1 · HODM v1.0 · LAST UPDATED 2026-04-29
Pipeline Control
Pipeline controls load from Replit scheduler.
PHAs Under HUD Receivership / Monitorship
Most severe enforcement category. Federal Cure Monitors appointed; direct HUD oversight of operations.
Total in Receivership
3
All Previously Troubled
3 of 3
Avg PHAS Score (last)
53
States Affected
AR, KS, NJ
Last Scrape Run
Pending
Combined Network Analysis — Receivership Cohort
SNA across all 3 receivership PHAs detects shared vendors, consultants, board overlap, and follow-the-money patterns. The engine cross-references entities extracted from scraped board minutes, audit reports, and procurement records to flag concentration risk that may have contributed to systemic failures.