
The Other Denials

Chapter 6 of 14

The date on the archived export reads January 1 — she'd requested six months of data, but the system rounded to the nearest quarter boundary and returned seven. Eight hundred forty-seven rows. The full Phoenix metro processing center output, all seven processors, every D-42 denial flag from January through June of the prior year plus the current year's first months.

She's in the office nook off the living room, the single desk lamp the only light in the house. 10:53 PM. Amara has been asleep since eight. David is on the couch with a project manual she heard him put down forty minutes ago when the quiet of the television became the quiet of him sleeping.

She filters the export by denial code, D-42 only, which collapses it to 418 rows. Then she applies the sub-classification filter: everything with a flood zone tag, an environmental review flag, a structural concern, a title encumbrance — anything that links to a documentable risk factor — disappears from the active view. She's looking for the remainder. The cases where D-42 appeared but the standard risk categories don't explain it. That filter leaves 23.

She's cross-referenced each one against the county assessor records and the environmental clearance database, and pulled the property overview from UnderwRite Pro's file archive for each. The properties hold up: no flood exposure, no environmental flags, no structural concerns pending, titles clear. The applicants are qualified within normal variance — DTIs ranging from 36 to 44%, credit scores between 698 and 741, documentation that runs from adequate to carefully organized. Two had children's school enrollment forms tucked into the supplemental packet, which applicants aren't required to include and occasionally do anyway, a habit from when loan officers asked more questions about where a family intended to put down roots.

She imports the Arizona Department of Education's district performance file and runs the geographic match. The macro takes thirty seconds to execute. Seventeen of the twenty-three denied properties fall within the attendance boundaries of four specific school districts.

Three of the four run STEM magnet programs — specialized curricula that require district enrollment, competitive acceptance, the kind of program families relocate to access. The fourth is Gilbert, which absorbed a state-funded science curriculum expansion in 2026 and has since produced three national science fair finalists from its middle schools, which she knows because one of the finalists was mentioned in the Arizona Republic and she'd read the article when it ran.

Seventeen out of twenty-three isn't proof of anything. District proximity correlates with a dozen other factors she hasn't controlled for: home values, neighborhood demographics, proximity to major employers. She's not ready to call it a finding. She opens a new column, labels it "District (target)," and begins filling it in.

The control group takes her until midnight to assemble properly. She pulls approved applications from the same zip codes — property values within twenty percent of the denied properties' appraisals, credit profiles in the same DTI and FICO range. Thirty-one meet all three criteria. She copies them to a second tab and works through each one manually, coding for school-age children. The variable isn't captured in any standard field — some forms ask for household composition, some use dependent count, some don't ask. She pulls it from whatever captures it: supplemental documents, dependent claims cross-checked against tax returns.

Approved applications with no dependents or with children under five: nineteen of thirty-one. Approved applications with school-age children: twelve — of those, eight are moving within the same district, buying a larger house or improving without changing schools; two are moving into the zone but outside the magnet catchment area; two are moving from outside into the target zone with children of program-eligible age.

She checks those two — one was approved for a property in Gilbert's eastern boundary, outside the science program's feeder schools; one was approved in the magnet zone itself. One approved application involving a family moving a school-age child into the center of the specialized program territory. Twenty-three denied, all with school-age children, all seeking access to districts they're currently not enrolled in.

She reruns the analysis at fifteen percent property value variance, then ten, then twenty-five, to see if the approved count shifts meaningfully. Each time the specific cases adjust slightly. The structure holds.

UnderwRite Pro is not returning flood zone codes. It's not flagging structural concerns. It's producing D-42 outputs — with confidence scores between 78 and 84%, she's pulled each one — that cluster around a variable the system isn't supposed to be looking at. Whatever it's optimizing for, it isn't in the underwriting criteria she was trained on.

She saves the spreadsheet and sits with the room's quiet. She hears him before she sees him — socks on tile, moving carefully through a house where a four-year-old is sleeping. David appears in the doorway wearing the ASU t-shirt with the faded logo and an expression that is not accusing.

"It's after midnight," he says.

"I know."

He pulls the folding chair from against the wall — they keep it there for when he works in the nook too, though he usually takes the kitchen table — and sits beside her rather than across from her. He looks at the screen: the spreadsheet with its color-coded columns, the district overlay loaded in the secondary pane.

"What is it?"

She's been deciding how to explain this for twenty minutes, running framings in the background while she coded the control group. Every version either undersells the pattern or sounds like she's claiming more than she can prove.

"Mortgage denials," she says. "D-42 codes — property risk. Six months of processing center output. I found twenty-three denials that don't track to standard risk factors. Qualified applicants, clean properties, clean documentation. When I map them against school district boundaries, they cluster near districts with specialized programs. Families with school-age children trying to move in from outside."

She pulls the district overlay to the front of the screen and walks him through it — the four districts, the magnet programs, the control group she built to check whether this was just a financial demographic pattern. He listens, looking at the screen, doing what he does: checking the logic, running the structure of the argument.

"How many in the control group?"

"Thirty-one. Matched on property value and credit profile."

He's quiet for a moment. Then: "What are you going to do with this?"

She doesn't have an answer. The investigation has occupied her thinking so completely that the question of what comes after it hasn't fully arrived. You find a pattern, you document it, you verify it's real — that sequence has its own momentum and she's been inside it for three weeks.

"I don't know yet," she says.

David looks at the screen for another few seconds. He rubs the back of his neck — trying to think practically about something that doesn't have a clear practical path.

"It's one file, Lydia," he says. Gently. The version that means: you found something strange, but one strange thing doesn't mean you have to carry it.

She could tell him it stopped being one file two weeks ago. That what started with the Hernandez application has grown into 847 rows and three spreadsheet tabs and a hypothesis she doesn't yet have language to call a finding. She could tell him that twenty-three anomalous denials involving families trying to move their children into better schools is not a thing you put back in the folder. She closes the laptop. "Coming to bed," she says.

The office at HomeFirst hums at its daytime register — focused typing, a phone call from the conference room, the A/C set two degrees below comfortable. Her desk faces the window, south-facing glass that floods the workspace with afternoon light and makes the monitor harder to read after three; she's learned to angle the screen. The Hernandez file loads in 0.3 seconds — she has it bookmarked in UnderwRite Pro's recent queue, and if Ray ran an audit on her active sessions he'd find a closed case pulling up three times a week, which would generate a question she doesn't want to answer. Case H-2028-0089. Application submitted February 9. Decision rendered February 14. Five days.

She doesn't need to re-read the documentation — she's been through it enough times that the numbers come without looking. Combined income $153,600, DTI at 38.3%, FICO 712, eleven years clean payment history, three months reserves. The Gilbert property: 2019 build, 1,847 square feet, Zone X, clear title, no environmental flags. Every threshold met.

D-42. Property risk. Confidence: 81%.

She'd processed the denial in February on the standard operating assumption: the system has access to property data she doesn't — proprietary risk models, insurance databases, aggregated assessments not directly accessible through her interface. The 15% framework is built on that assumption. The AI knows things the processor doesn't. That's the point. She's been turning the assumption over for three weeks. The property data she can access shows nothing. The application data is unambiguously inside the lines. The denial code points to a risk category that nothing in the file substantiates — either the system has access to a database she can't verify, or the denial code has no corresponding risk. She runs through the explanations the way she'd work through an underwriting risk matrix — structured, starting from most probable.

Algorithmic bias. UnderwRite Pro's training data includes decades of lending practice in markets where discriminatory practice was widespread. A system trained on that data can reproduce historical patterns without explicit discrimination written into the code. But the geography is wrong — the twenty-three denied properties are in outer-ring suburbs, development neighborhoods that didn't exist when redlining was active, areas that skew toward working-class Latino families but aren't geographically contiguous in the pattern she'd expect. If historical bias is at work, it's been transposed onto a landscape that doesn't match the original map.

Fair lending violation. The Equal Credit Opportunity Act applies to algorithmic decision-making under the 15% framework. Disparate impact is cognizable — outcome matters regardless of intent. But a regulatory complaint or any formal challenge requires access to the model's decision parameters: what variables it weights, what thresholds it applies, how the confidence score calculates. She doesn't have that access. Nobody at her level does. She'd have to go through compliance, which would generate the kind of attention she's been avoiding.

Coincidence. She reruns the control group in her head — the math is simple enough to hold without a calculator. Twenty-three denials, one approved application with program-zone placement. She's stopped treating coincidence as a serious candidate after the third time the pattern survived a respecified model.

The fourth option is the one she can't name yet. Something the system is optimizing for that doesn't appear in its stated criteria. Something that produces D-42 outputs clustered, with a consistency she can't attribute to random variance, around families positioned to move their children into specialized educational access for the first time. Whatever objective function generates that clustering isn't in the underwriting guidelines she was trained on and isn't in any fair lending framework she knows of. David's question comes back: What are you going to do with this?

Ray had flagged her output eight percent below pace last quarter. She can't let it slip further — he tracks the numbers weekly, and a sustained drop prompts a conversation she doesn't want. She has fourteen applications in the queue before end of day, and the professional knowledge that what you can't prove you can't report, and the other knowledge — quieter, harder to put down — that twenty-three families received D-42 denials for properties they qualified for, and someone processed those denials, and one of those processors was her.

She opens the first application in the queue. A refinance, low LTV, income documentation already assembled. The verdict field will probably return green before she finishes reviewing the W-2s.

The spreadsheet stays open on the second monitor. She doesn't close it.
