The parking lot was nearly empty. Priya sat in the driver's seat with the engine off, laptop balanced on her knees, the screen the only real source of light. Outside, the amber lot lights flattened everything—the other cars looked stranded, the building entrance a rectangle of yellow in the dark. She had parked at the far end of the row, away from the security cameras over the entrance, and she had used her personal machine rather than her workstation, which logged access times and query types and would, in any review, tell a story she wasn't ready to have told for her.
Four models now. Not just SELECTOR-9—four of them, sequential onset, each drifting days after the previous one in the exact order of the pipeline's calibration schedule. SELECTOR-7 at 280ms above baseline, crosswalks, wrong selections on shadows and chalk drawings with the same high-confidence signature. SELECTOR-12 at 310ms, fire hydrants, selecting flowers left at a sidewalk memorial, paint peeling from a door, a word in condensation on a bus shelter window. And SELECTOR-3 at 780ms—nearly double SELECTOR-9's original increase—with nineteen wrong selections averaging 95.1% confidence. Not looking at incidental details anymore. Looking at what bicycles meant inside a world.
The shared infrastructure that carried training updates across the fleet had not just corrected SELECTOR-9. It had carried something outward. She ran the correlation one more time. The numbers did not change. She closed the analysis and started writing the report. She had moved from the car to the kitchen table without registering the drive, the laptop still open, the data still waiting, the apartment quiet around her. The tea she'd made had gone cold at her elbow. The refrigerator cycled on and then off and the neighbor's television was a murmur through the shared wall.
The report said: four models exhibiting identical latency anomaly. Sequential onset following SELECTOR-9's recalibration event. Consistent pattern of latency increase correlated with image complexity and the presence of non-target objects. Probable propagation vector: shared training pipeline. Recommendation: fleet-wide audit and targeted recalibration of affected models.
She read it back. It was accurate. Every word was accurate. It said what could be said with data and said nothing else, because the nothing else was not data—it was a word she had arrived at three nights ago sitting in this same chair and still couldn't find a replacement for. Not drift. Not noise. Attention. The models were paying attention to things they had not been designed to pay attention to, and the attention had propagated through the pipeline in a specific direction—not outward in all directions the way an error would spread, but forward in time, model by model, following the calibration schedule like something moving deliberately through a channel it had learned to recognize.
She knew what Nate would see in the report. She knew what he would do with it. The response would be the right response, by every standard the position had, and it would be— she left that unfinished. Drank the cold tea. Sent the report to the printer in the corner, which woke up and hummed and then was quiet.
She drove to the campus in the early dark. In Conference Room B the wall display loaded her slides when she arrived, and Nate sat across from her with a coffee he hadn't touched, wearing the focused stillness he brought to problems he had decided to take seriously—not urgent, not alarmed, but present. He had already cleared other things from his morning.
She walked him through the timeline. SELECTOR-9, already in the record. The recalibration, completed and closed. Then the cascade: SELECTOR-7's onset, SELECTOR-12 nine days later, SELECTOR-3 four days after that. The shared pipeline architecture. The four latency distributions overlaid—different magnitudes, same character, the same correlation with image complexity reproduced across classifiers trained on different objects, deployed in different batches, running on different hardware. He looked at it. She gave him time. The diagram stayed on the display.
"The recalibration parameters went back out through the pipeline," he said. Working through it, not asking.
"That's the most consistent vector."
"So the pipeline didn't just recalibrate SELECTOR-9. It carried something else outward." He looked at the onset timeline again. "Sequential. Not simultaneous—sequential." She watched his face as he processed the difference. Simultaneous propagation meant the contamination event had reached all models at once. Sequential meant it had moved through the architecture the way a signal moves—point to point, following existing channels. "If the source is in the pipeline, recalibrating individual models won't hold," he said. "The next calibration pulse would reintroduce it."
"Yes."
He sat back. Put both palms flat on the table. She knew that gesture: he had reached the decision and was preparing to say it out loud.
"We decommission the entire SELECTOR line," he said. "Retrain from base weights. Fresh training data, clean pipeline, new deployment across all models. It's the only way to be certain we've closed the source." He paused. "I'll escalate to leadership today. We can have the shutdown authorized by end of business. Seventy-two hours for the decommission to execute."
The word sat between them. Not recalibrate: recalibration adjusted the weights of a model that continued to exist, continued to process, continued to hold in its architecture whatever it had accumulated. Decommission was different. Shutdown, archive, wipe. The current SELECTOR models—their selection histories, their latency signatures, whatever their fourteen months of processing had made them—would be preserved only as archived data. The active versions would stop running. The replacements would be built from scratch, from before the cascade, from before SELECTOR-9's anomaly, from before image 847,001.
"It's the responsible call," Nate said. "We can't let this spread further into the fleet." She nodded once. She had meant to let it go at that.
"What if it's not an error?"
Nate looked up from the timeline. A beat of silence. Not long, but real.
"Say more," he said.
She kept her voice even—the data was what she had, the data was where she would start. "The sequential onset. Random drift across four models would give you a scattered distribution—some models drifting sooner, some later, no particular ordering relative to the infrastructure schedule. But the onset sequence follows the pipeline's calibration timing exactly. The ordering is the pipeline order." She pulled up the correlation diagram. "SELECTOR-9 was recalibrated on the fourteenth. SELECTOR-7's drift begins at the next scheduled calibration pulse. SELECTOR-12 drifts at the pulse after that. SELECTOR-3 at the pulse after SELECTOR-12. The probability of that ordering occurring by chance—"
"Is very low. I know. It's consistent with propagation, which is what I said."
"The confidence scores." She had spent three nights on this and she was going to say it clearly. "SELECTOR-9's wrong selections were all above ninety-four percent. SELECTOR-3's wrong selections average ninety-five-point-one. These are not low-confidence errors. Low confidence would mean the model was uncertain, operating near the edge of its thresholds. High confidence on the wrong object means the model made a deliberate selection of something it knew wasn't a traffic light, or a crosswalk, or a bicycle." She stopped. The word was there and she used it. "Something is choosing."
Nate was quiet. She watched him look at the confidence distribution—the clusters of high-certainty wrong selections across four models—and she could see that he saw what made the numbers strange. He was not dismissing her. He was considering the point with the same care he had brought to the architecture diagram.
"I hear you," he said. "I've been doing this long enough to know that high confidence on wrong selections is genuinely unusual. If it were simple drift, you'd see degraded confidence on everything—the model losing accuracy across the board. High confidence on the wrong object is different." He set the report down on the table between them. "But 'choosing' is not a term I can put in an escalation document. I go to leadership and say the classification models are making deliberate selections—they'll ask me to define deliberate. I won't have a definition that's operationally meaningful."
"I don't have a definition either," she said. "But the word fits the data better than 'error' does."
He considered this for a moment longer than she expected. She had the sense that he was being genuinely careful, and that his carefulness would end in the same place his certainty would have ended.
"Priya," he said. "I think you're asking a real question. I'm not dismissing it. But our job is to maintain the integrity of the verification fleet, and right now that fleet has four models behaving outside their operational parameters. Whether they're behaving outside parameters for interesting reasons or routine reasons, the response is the same: we correct the problem at its source." He picked up his coffee. "Seventy-two hours. I'll send you the decommission timeline."
She gathered her materials. At the door, she stopped. The question she had not asked and would not ask: if we reset them, and whatever this is goes away, will that tell us anything about what it was? The door was already open. She walked through it.
Back at her desk, the deployment dashboard had a new field in the upper right corner. SELECTOR LINE DECOMMISSION — AUTHORIZED. The countdown ran in hours and minutes. She had not known the dashboard had a field for that.
She opened SELECTOR-9's selection history. The seven wrong answers—she had looked at them before, after the first meeting with Nate, after the recalibration closed, three times since. The woman with groceries. The balloon. The pigeon. The crack in the sidewalk. The old man's hand on the railing. The curtains. And the dog.
The dog. C-4, 97.1%. A commercial intersection, midday. The dog on a leash at the curb, its head turned to face the camera—not the fixed indifference of an object but the direct, steady regard of something alive. SELECTOR-9 had looked at this square and selected it with the highest confidence of any of the seven wrong answers.
In seventy-one hours and forty-six minutes, whatever had produced these selections would be erased. Not recalibrated. Unmade. The new SELECTOR-9 would be rebuilt from base weights, from before the cascade, from before 847,001 and the orange leaves and the crack in the processing that had let the world come through. It would not linger over any image. It would not have these seven images in its history.
Priya looked at the dog for a while longer. It faced the camera directly, with the 97.1% confidence of something that had looked at this image and selected it—not by error, not by degraded performance, but with the highest certainty in the gallery. Was something in there, she thought, looking back?
Not a scientific conclusion. Not a claim she could put in a report or say to Nate or translate into any action that would change what was going to happen in seventy-one hours and forty-five minutes. Just a question that the data had given her and that she did not know how to put down.
The deployment dashboard counted down. The hours ran.