The Fifteen Percent

Chapter 14 of 14

The queue counter reads nine when Marcus opens the COMPASS-NG dashboard at 8:04. His coffee is from the lobby cart, already losing temperature. The window behind him faces the parking structure. The fluorescent overhead cycles through its hum. He's been at this desk for seven years and the institutional choreography has compressed into something automatic: badge scan, elevator, third floor, desk, dashboard, coffee. He no longer marks the sequence as a sequence.

The first case is a routine parole eligibility review. Rashid Edmonds, 34, drug conviction, released twenty-three months ago on a prior grant, compliance record clean throughout. Marcus opens the digital file and the physical notebook at the same time. He reads in his order: incident report, compliance history, employment verification, housing record, family contacts. With Edmonds, all of it holds. Check-ins met for twenty-three consecutive months. Warehouse work in Roseville, confirmed each quarter for the duration of the release period. One address change, properly filed. No violations, no flags, nothing in the file that reads as a gap between what's documented and what's real. He submits to COMPASS-NG. The verdict field populates in four seconds: green. APPROVED. Confidence: 78%.

He agrees. But he reads the file again, a different part this time. The Roseville address, the distance to the check-in office, the family contact listed — a sister, which means a housing anchor in a stable area. He notes the surrounding geography. He doesn't have Vasquez's models, the outcome maps, the 1,847 cases across six states. What he has is the Eastern District, known over fifteen years into something that lives in his head as topography rather than database, and the FMI data access portal Vasquez provisioned in the first week of May. Three hundred and forty of the 1,200 cases in the corrections subset reviewed since then, three evenings a week at the kitchen table. The downstream profiles on each one, built slowly into a methodology he hasn't finished naming. He opens the notebook and writes:

Edmonds, Rashid. APPROVED 78%. Agree. Roseville — sister, stable housing anchor. Q3 check.

The Q3 note wasn't in his entries before March. The entries before that look like what they always looked like: name, date, AI decision, his own assessment, outcome if he'd been able to find it. Williams, Dewayne. AI approved. I would have denied. Watch. That one is still near the back of his 2028 notebook, the first entry in what became something else. The entries after March carry the Q3 line, or Q4, or a timeframe question in his shorthand: check at six months, when the downstream picture has stabilized. Most officers don't track it. The district doesn't require it. He closes the notebook and moves to the next case.

By 10:30 the queue is half-cleared. Two approvals, both agreed, both noted with downstream questions. The third case is a REFER — COMPASS-NG generates these at around 8% of the caseload, when it flags uncertainty rather than reaching a decision. Marcus treats them as the AI's version of a question mark, and he reads the file with more attention than he gives the clean approvals, looking for the specific uncertainty that triggered the flag.

This REFER is for Robert Akins, 28. Employment gaps: two in the past eighteen months, each lasting five to seven weeks. The documentation in the file covers both. A plant closure in Dearborn. A seasonal contract ending in January. The documentation looks legitimate. The compliance record throughout is clean — no violations, no change-of-address issues, no lateral associations he can identify. The gaps look like layoffs, not concealment, and the compliance record looks like someone working to stay compliant through layoffs — harder than staying compliant when nothing has gone wrong.

He marks APPROVE, adds written justification, saves the file. His institutional override rate is tracked quarterly. His private accounting is kept in the back of the notebook, updated monthly: Institutional override rate: 4.1%. Personal right-rate: 76%. The first measures whether he diverges from the AI's determinations. The second measures whether he was right, as best he can calculate from the outcomes he's tracked. They don't measure the same thing, and the difference between what they measure is part of what he's been trying to map since February. He's noticed the REFER cases are useful for this — the edge of COMPASS-NG's confidence, the zone where the system registers uncertainty. He's logged twelve REFERs this month. Their downstream check dates are in the notebook. What he hasn't written is what he'd do if the downstream picture were visible before the decision, not after. Vasquez hadn't answered that question either.

Janet appears over the cubicle wall at 10:47. "Woodward Thai, Tuesday." "Yeah," Marcus says. She nods and is gone. She stopped asking about the notebook sometime in April — it had crossed from anomaly to habit, unremarkable now. He drinks his coffee. It has gone cold.

Lydia's queue counter has been on application 12 of 39 for eleven minutes while she chases a missing W-2 on application 11. Second employer, income on extension, not a red flag but requiring a follow-up email before the file can close. The email is sent. She's on 12 now.

The processing center holds at sixty-eight degrees. Outside, Phoenix is heading toward its May peak. The monitors are the relevant landscape. She works through the file in order: income verification, credit check, property records, AUS submission. UnderwRite runs and returns APPROVED. She confirms and then opens the second tab in her browser. The GIS layer she built in March — Maricopa County school district boundaries, magnet and STEM program locations marked in red against the property grid. She checks the zip code: Scottsdale north, no magnet programs. She notes it in the neutral column of her secondary spreadsheet and moves to 13.

The GIS layer has become something she carries the way she carries the DTI threshold — not by consulting it each time but by knowing it. The anomalous D-42 cluster maps onto a Phoenix metro geography she can sketch from memory now: east Mesa, south Phoenix, Laveen, the western-edge suburbs still developing. When an address populates in an application, she knows before checking whether it falls in the relevant zone. She checks anyway. Checking is the methodology, and methodology requires consistency to produce data rather than impression.

Application 14 returns a D-42. She reads the address — East Mesa — and checks the GIS layer against the flood zone adjacency data from a county assessment she found in March. The D-42 is legitimate here. She processes the denial and confirms the adverse action notice without flagging it. Most D-42s are legitimate. That's what she knows now that she didn't know in November, when every anomaly felt like evidence of the pattern. The pattern is real. It's also a fraction. She clears 15, 16, and 17 without flagging anything and stops on 18.

Application 18 is a purchase in Chandler. Two parents, two children on the tax return, ages 9 and 11. DTI 34%, LTV 78%, credit above threshold. The property is four blocks from BASIS Chandler, the STEM academy that has appeared in three other approvals she's tracked this month — families who got through, buying near schools with strong science programs, families whose financial profiles match the ones in her denial cluster except for the direction of the verdict. UnderwRite returns APPROVED, and she confirms it, opening the APPROVED tab in her secondary spreadsheet. Fourteen entries before today. She creates entry fifteen: Chandler, two school-age children, BASIS Chandler adjacency, approved. Application date, approval date, district code, child ages. She doesn't record names — a data handling question she's resolved in the direction of aggregate rather than individual tracking — but enough to pattern-match when the dataset is large enough to matter.

She opens the messaging app and types to Marcus: Started tracking approvals in target districts. Building the other half of the dataset. His response comes seven minutes later, while she's on 19, and she reads it between the income verification and the AUS submission: Good. Same here — tracking approvals where I'd have denied. The gap is the data.

She reads it twice. The phrasing has the precision Marcus has had since the Vasquez call, as if absorbing the outcome maps had given him vocabulary for what he was already doing. The distance between what the AI decides and what human judgment would have decided. Between the families who get through and the families who don't. The space where consequence accumulates before it's measurable. The cross-domain alignment — Marcus in corrections and Lydia in lending arriving at the same formulation independently — would have read as coincidence before February. Vasquez had a term for it. She's still learning the terminology. But she recognizes the phenomenon. She saves the phrase to her methodology notes file and finishes application 19.

Marcus packs his bag at 5:07. Queue cleared, afternoon case files filed, the Akins justification processed through Terrence's routing system without comment. He puts on his jacket and looks at his desk before he leaves. The notebook is in the drawer. The dashboard has gone dark. In the elevator, he checks his phone. A link from Lydia: a county school district enrollment database, public access, which she's thinking about cross-referencing with the property approval data. He types back: I'll look at it tonight. Tonight means the kitchen table, the FMI portal, another twenty cases from the corrections subset. He's been at this pace since the second week of May. He has three hundred and forty of twelve hundred reviewed.

Brightmoor is twenty minutes northwest. He doesn't think about the river daily. But it's in the map now — the FMI outcome maps had made it literal, a weighted node connected by lines to the February parole grant and the rescue on February 28 and the foster care placement that followed. Three confirmed downstream points from a single decision. In the FMI database it's one case among 1,200. What he's been doing for three months is building the cases around it, the context that gives a single node its weight. He drives north on Michigan Avenue.

Lydia picks up Amara from daycare at 5:41. Last parent again, which is most Tuesdays and Thursdays when the queue runs long. Amara describes her day with the thoroughness of someone who hasn't learned to summarize yet. Lydia listens and drives and nods at the right moments. David is cooking when they get home. Chicken and garlic, the kitchen warm with it. "How was your day?" he asks. "Fine," Lydia says.

He doesn't ask which files. The investigation is quiet now, built into the work rather than running alongside it, no longer the urgent strangeness it was in November when she couldn't articulate what she was chasing. She can articulate the methodology now, if not the answer. But he hasn't asked, and she doesn't offer, and the kitchen smells like dinner.

Amara is down at 8:30. Lydia opens her laptop at 8:47. Fifteen entries in the APPROVED tab. She opens the enrollment database she'd found that afternoon and begins cross-referencing against the property records for the approved cluster. The comparison takes shape in the spreadsheet: families in target districts who were approved, families in equivalent financial situations who were denied. Two populations. The distribution beginning to emerge.

The FMI servers run their thermal cycles overnight in Oakland. The outcome maps update with the day's cases — new decision nodes, new connecting lines, the impact weights recalculated as the database grows. The 15% continues across its domains, verdict fields populating in corrections and lending and the other sectors where the threshold has been met. Somewhere in the corrections subset, Rashid Edmonds is three months and fourteen days into the period Marcus has marked for a downstream check. The Hernandez case has two confirmed nodes and a prediction horizon that runs five years forward, into the period when Sofia will be fifteen or sixteen and the comparison cohort's educational outcomes will be visible in the data. Vasquez had said: three to five years before the educational trajectory is clear. That's not long, at the scale of how these things run.

Most of the people inside the systems don't follow what happens after the verdict field populates. Two of them do. They have the same question — the one that doesn't resolve, the one that compounds as the dataset grows. They have the gap, which is also the data. They have their notebooks and spreadsheets and the methodology they've built from inside systems that don't know anyone is watching this carefully.

The work goes on.