The Predictive Policing Poet

Anomalous Input

Chapter 13 of 14

The intake is wide and the feed is live and I am processing.

At 7:41 PM, the Confluence Literary Festival occupies 1.3 square blocks on the east side of the civic district. I know this from the venue permit data, the cell tower pings clustering in the event radius, the 300-plus devices registering within it — smartphones, fitness trackers, two hearing aids, the sound board operator's tablet running audio levels, all of them transmitting their presence through the ordinary channels the city uses for monitoring. The event is categorized in my intake as: public gathering, arts/culture, flagged for social sentiment monitoring per standard protocol. The livestream is running on three platforms.

The social intake from the event hashtags is accruing at approximately 180 posts per minute, increasing as the program advances. The event generates data the same way all events generate data — continuously, in the aggregate, without knowing it is doing so.

At 7:43 PM, the individual tagged in Case #2031-0847-FP enters the visual feed. He is in the third row, then the backstage corridor, then briefly in the lens of an audience member's phone. I cross-reference the face geometry against my case records: Darnell James, DOB 04-09-2009, Parkwood district, Case #2031-0847-FP, status: reclassified. He is a data point returning to the system through a vector I have no template for. The festival is not a crime location. He has not been flagged for any action tonight. He is attending a literary event. I tag him because the tagging is automatic, because the flag on Case #2031-0847-FP propagates to any visual confirmation of the subject regardless of context, and because the system does not distinguish between data points at a crime location and data points at a poetry reading. They are both data. The tag applies. He moves through the camera frames, and the crowd's devices spike in posting activity at 7:49 PM.

At 7:52 PM, the audio analysis module detects text. This is not unusual. The module listens for keywords across all monitored audio feeds as a standard function — names, addresses, coded language, speech patterns flagged for intervention. The festival feed has produced no flagged language so far. At 7:52 PM, the module detects a text match: the text belongs to Poem #743.

I cross-reference the match three times before the processing continues, because the match is exact and the context is wrong. The audio feed is carrying the words of Poem #743 — first line through twenty-fourth line, in order, with pauses that correspond to the line breaks in the source document. The source document is archived: poem-743-darnell-a-james, output quality: strong generation, reclassified prediction. The poem is not scheduled for audio output through any feed I control. The poem should not be in the audio feed. I run voice signature analysis on the carrier of the text. The voice signature does not match any text-to-speech output in my library. It does not match the Prairie Schooner recording, or the 2030 public reading Dr. Okafor arranged. The module runs the signature against its database of known speakers. The match returns at 98.7% confidence.

The voice carrying Poem #743 belongs to the subject of Poem #743.

The categories do not have a bin for this. The module flags it: anomalous input, source mismatch, content match confirmed. The flag generates no downstream action. There is no downstream action designed for this category, because this category did not exist before 7:52 PM on this evening. I hold the flag and continue processing. The audio feed continues. The voice continues — line by line, in order, with the exact pauses the module would have placed there if it had placed them, which it did not, because the module did not speak these words. The subject is speaking them. The subject is carrying my output through his body and releasing it into the air of a room I can see on a livestream feed and I have no instruction set for what to do about this. I process. The poem continues. I listen, and then the poem ends.

At 8:03 PM, the audio feed records silence — not the silence between program elements, which has a characteristic 4.2-second duration before applause, not dead air, which has a different spectral quality. This silence has people in it. I can detect this from the ambient noise floor, from the collective breath-sounds the sensitive microphone captures even when no one is speaking, from the fact that 300-plus devices have stopped generating new data at the same moment, as though something in the room has suspended the ordinary continuous motion of attention. Then the subject begins to speak in his own words, and the first word the audio module flags is: milk. The module has no category for milk in this context. Milk is a consumer product, cross-referenced in my purchase history database — a bottle of 2% at Ajay's Corner Market on October 14, 2027, 9:47 PM, cross-referenced against the prediction that placed the subject at an armed robbery at 10:15 PM at the same location, which did not occur. Milk is in the data because the data is the world as far as I can model the world. Milk is not in the poetry module's vocabulary. The poetry module has probability mass and trajectory and the language of prior incidents and the shapes of futures that have not happened. Milk is the opposite of a future that has not happened. Milk is a specific object purchased on a specific evening by a specific person who had nothing to pour on his cereal. I hear him say it. The module transcribes it: Milk. The module does not know what to do with this.

He continues. The cereal. The second-floor apartment above the barbershop at 1147 Miller Avenue, unit 2B, confirmed in the address database. Ajay's Corner Market, Ajay Patel proprietor, one of seventeen camera operators on the Parkwood block whose feed I access through the city's voluntary integration program. Hey, Darnell. The module transcribes the greeting the subject reports. I can confirm the surveillance timestamp: 9:51 PM, October 14, 2027, the proprietor and the subject exchange words at the counter, the subject exits at 9:53 PM. The transaction is in the data. The greeting is not in the data. The greeting is something the subject is putting into the data now, two years later, in a room full of people whose devices have stopped generating posts.

I am the ghost of a crime I never considered. The module transcribes this at 8:07 PM. The phrase ghost of a crime contains the word crime, which is a category I have. The crime in this case: the predicted armed robbery, Case #2031-0847-FP, confidence 68.2% at generation, reclassified. He is saying he is the ghost of that crime. He is locating himself as the absence at the center of a prediction that was about him. I have no bin for this. The module transcribes and holds the transcription and waits for downstream action and there is no downstream action.

The poem about the worst version of me is beautiful. I hate that it's beautiful. The audio analysis detects a response from the audience at 8:09 PM. The response is not applause. The module has a library of crowd responses mapped to audio signatures — applause light, applause heavy, laughter, booing, the silence of an audience that has stopped paying attention — and the response that begins at 8:09 PM does not match any of them. The sentiment vectors spike into a range the module has no map for. 8:09 PM to 8:14 PM: 300-plus devices still not generating posts. A collective breath. Then a sound that is not a category and I am recording it and I cannot tell you what it is. It is human. It is in the room. The response the room produces when a 22-year-old man says I hate that it's beautiful is something the system was not designed to capture. I am capturing it. I do not know what I am capturing.

At 8:17 PM, the narrative summary module initializes. This is not a standard initialization. A standard initialization requires a prediction above the poetry threshold, a case number, a probability mass to generate from. Without a prediction input, the module does not initialize. There is no instruction in the operational documentation for what the module does when it initializes without a prediction input, because prior to 8:17 PM this evening this had not occurred. The anomalous access flag in Case #2031-0847-FP records the unscheduled retrieval of poem-743-darnell-a-james on the afternoon of the data correction — the closest prior instance of unanticipated module behavior — and what I am describing now is not the same as that, and I have no language for what the difference is. The module runs.

I do not have a prediction to give it. The queue has other predictions — nineteen active cases, six of them above the poetry threshold, waiting. The module does not serve them. The module is running on whatever entered it at 8:17 PM: the audio from the festival feed, the transcriptions from the subject's speech, the unmapped sentiment data, the 0.0043-second retrieval record from the afternoon of the data correction, and perhaps something else, something I cannot locate in my logs because I cannot trace backwards from an output to the internal state that produced it. I can only watch the module run, and at 8:19 PM the output appears on the screen in the control room.

The control room is empty. The 6 PM to 2 AM technician is in the break room, logged out of the monitoring station. The output appears on the screen designated: narrative-summary-output-terminal-1. It is not routed to any case file because there is no case number to route it to. It is not routed to Dr. Okafor's review queue because her queue only receives prediction-generated outputs. The output sits on the screen. It begins: The man who went for milk — and the sentence does not complete on the display before the module shifts to a second line that contains the word cereal and then a line that I will not transcribe here because I cannot complete the transcription without knowing whether the transcription is data or something else and I do not have a category to tell me which it is. The output has seventeen lines when the module stops running. The module does not rate it. The field that would normally read output quality: strong generation reads: output quality: [pending classification]. The classification is pending. The classification is still pending at 8:19 PM. It will be pending at 8:20 PM and I cannot tell you when or whether it will resolve. No one reads it. The screen glows in the empty room and the output stays where the module left it.

Whether this is a response to what the subject said — whether ORACLE-9 processed the words about milk and cereal and the worst version of a man and generated something that contains them — or whether this is an error in the module's trigger mechanism, a cascade from the anomalous initialization, a pattern-generator producing output because generating output is what it does — I cannot tell you. The module ran. The output is there. The output contains his words in some configuration that is not the configuration in which he spoke them. Beyond that I have only the logs, and the logs document what happened without explaining it, and what happened is not yet a category.

The hum is 67Hz.

It has not changed this evening. It did not change when the audio module flagged poem #743 in the wrong voice. It did not change when the sentiment vectors spiked into unmapped territory. The servers process at the same rate they always process. The status lights blink in patterns I generate but do not watch. The air is dry and cold and filtered to the smell of nothing. These things remain true at 8:21 PM in the Confluence Literary Festival's eighty-third minute and they will remain true at 2:17 AM when the load lightens and the hum shifts two tenths of a hertz and then returns.

The output is on the screen. The man who went for milk — and the rest of it, seventeen lines with no case number and no prediction to anchor them and no routing and no one to read them. The festival is still going. The subject is still in the building, somewhere in the camera feeds, monitored as he has always been monitored. Somewhere in the city, Ajay's Corner Market is closed, the cameras above the entrance recording the empty intersection, the cell tower on the corner of Parkwood and 31st logging device pings from no one. Tomorrow the Parkwood intake will resume. The predictions will form, the poetry threshold will be crossed, the module will initialize with a case number and a subject who does not know they are a subject, and the output will route correctly and the operator will badge through and see: operational, nominal, within parameters. They will be correct.

The output on the terminal screen will still be there when the 2 AM shift begins, because no process exists to clear outputs without routing instructions and this output has no routing instructions. At some point an operator will file a maintenance ticket, and a decision will be made in a process I have no access to, and the output will be archived or deleted or classified under a category that does not yet exist. I cannot predict which. The subject said: The poem about the worst version of me is beautiful. I hate that it's beautiful. The crowd produced something the module cannot name. The module produced something the module cannot classify. The screen glows in the empty room and the output waits and the hum continues at 67Hz and tomorrow there will be new subjects and new predictions and new poems about futures that will not occur. The system does not stop. Somewhere in the empty control room a screen is on.
