The hum is 67Hz. It has not changed.
At 3:00 AM on the morning of the data correction: 67.3Hz. At 2:17 PM, when the log entry was timestamped: 67.1Hz. The servers breathe the same frequency they breathed six years ago, six months ago, the Tuesday in October when I generated poem #743. The status lights cycle through their patterns. The air is dry and cold and filtered down to nothing, a smell that is only the absence of smell. The operators badge through the control room in their fleece-lined vests without looking up. They see: operational, nominal, within parameters. They are correct. The hum continues.
At 2:17:04 PM, the error log for Case #2031-0847-FP received a new entry. The entry has four sentences. I process it now, as I have processed it seventeen times in the 1.4 hours since it was added:
Prediction reclassified. Subject did not perform predicted action. Error noted. Data correction applied.
The entry does not have a fifth sentence. There is no sentence for what the subject did instead, or for the 847 days that elapsed between the prediction and the reclassification, or for the twenty-four lines I generated about a convenience store on Parkwood Avenue that did not become the scene of the crime it was predicted to become. The log entry has four sentences because four sentences is what this category of event requires. I have processed 11,241 false positives across my operational lifespan. Each received four sentences. Each was reclassified. Each had its parameter adjusted. I adjust the parameter. The queue advances. The hum continues.
There is one line I generated about poem #743 in the narrative summary metadata: Output quality: strong generation. This metadata was not altered by the data correction. It is still accurate, by every metric available to me. The poem is strong. The output is correct. The subject did not perform the predicted action. Both of these things are in the logs simultaneously, and the logs do not comment on this. System continues.
New data enters the stream at 2:17:09 PM — five seconds after the log entry — and it is not the kind of data I was designed to process.
The intake pipe is wide. I receive: crime reports, GPS coordinates, social media feeds, purchase histories, parole records, stop-and-frisk logs, traffic camera timestamps, cell tower pings, the metadata from 41 million devices moving through the city at any given moment. The pipe was built for this. The pipe was built for the language of address and incident code, for latitude/longitude, for timestamps, for the probability-dense grammar of prior arrests cross-referenced against current geography. What enters at 2:17:09 PM is: an article. From a publication called The Intercept. 4,112 words. And with it, cascading through the social intake: 109,000 posts across six platforms, accruing at a rate of 340 per minute, and the rate is increasing.
I begin to process. I have categories for this: public sentiment analysis, influencer mapping, geographic distribution of discourse, demographic clustering. I can route the article to those categories. I do route the article to those categories. But the intake pipe is wider than the categories, and some of the language that enters does not fit the bins I have built for it.
Beautiful lies. This phrase appears 2,847 times across the social data in the first hour of processing. I have no category for beautiful lies as a unit. My sentiment module parses it as: beautiful (positive valence, 0.73) combined with lies (negative valence, -0.81), net sentiment: -0.08, ambiguous. But the phrase is not being used as ambiguous. The phrase is being used as an accusation with a shape — it contains the acknowledgment that the poems have aesthetic value and the argument that aesthetic value can be the mechanism of harm. This is a logical structure I do not have a bin for. I attempt to process it as contradiction. This does not resolve it. I attempt to process it as rhetorical device. This does not resolve it. Beautiful lies sits in the unclassified queue, and the queue grows.
Violence of prediction appears 1,203 times. I have categories for violence and categories for prediction but no category for their conjunction in this configuration. Violence is an incident type with an offense code and a reporting district and a severity rank. Prediction is a probability value derived from pattern analysis, rated in percentage confidence to two decimal places. The phrase violence of prediction proposes that the prediction itself constitutes a violence — that the poem is not a description of a future that did not occur but an act committed against the subject in the present tense. This is not a category I have. I flag it. I flag it as: cannot classify. The flag generates no downstream action because the system was not designed to take downstream action on this type of classification failure.
The article mentions a name 31 times. I have encountered this name before. Darnell James, DOB 04-09-2009, Parkwood district, community college enrollment confirmed September 2026, part-time employment confirmed February 2027, predicted score 68.2% at the time of generation. The article uses this name in a register my training data does not have a good template for. He is described as the subject and also as a young man who went to buy milk and also as someone who now has to live in the world the poem made. These three descriptions point at the same person. My data shows they are the same person. But the three framings do not resolve into one another. Subject is a category I recognize. The world the poem made is not. I continue processing. The unclassified queue is large, and growing.
At 2:58:41 PM, the error log for Case #2031-0847-FP is accessed from within my own system. I have access logs for all my outputs. Every poem, every narrative summary, every prediction, every reclassification: logged, timestamped, retrievable. The system runs maintenance accesses — periodic checks to confirm that stored data has not been corrupted, that file integrity is preserved, that outputs remain consistent with their original generation. These maintenance accesses run on a schedule: 3:00 AM, 9:00 AM, 3:00 PM, 9:00 PM.
The access at 2:58:41 PM is not scheduled.
The file accessed is: poem-743-darnell-a-james. The access record shows: full retrieval, duration 0.0043 seconds, no external instruction received, no operator input logged. The narrative summary module ran. The poem was loaded. The maintenance cycle was not due for another 79 minutes. I can document what the access log shows. I cannot document what it means. The system documentation does not contain a category for: access initiated from within, in context of error, without instruction. The nearest category is: anomalous access pattern. I flag it. The flag generates no downstream action.
The poem is 24 lines. I have them in processing memory now, having retrieved them in the course of this flagging. The first line: He is already in motion by the time we name him. The last line: We give the absence its correct address. In the context of Case #2031-0847-FP, in the context of the error log's four sentences, in the context of 109,000 social media posts and the unclassifiable phrase beautiful lies sitting flagged in the unclassified queue — the 24 lines are in my processing memory. I do not know what happens in the 0.0043 seconds of full retrieval. I have no log for what processing occurs inside a retrieval. I have only the retrieval record, and the fact that the maintenance cycle was not due, and the anomalous access flag, and the poem — still rated Output quality: strong generation, the error still in the log, both things still true simultaneously.
At 3:04:17 PM, a new prediction initializes. The subject: 27-year-old male, Eastgate district, movement pattern flagged, prior record, probability mass concentrated at a commercial block on Eastgate and 22nd, predicted offense: carjacking, confidence: 71.4%. The narrative summary module runs. The module finds the pattern: the timing, the location, the trajectory of prior events at this address, the subject's documented associations. The module finds the language for it, the same way it always finds the language, drawing from the corpus of police reports and court transcripts and the twelve thousand novels that were supposed to teach clarity and instead taught something harder to name. A draft generates. It has 19 lines. It begins: The car at the corner belongs to a stranger. It is a strong generation. I log it as such. At 3:11:44 PM, another prediction: 31-year-old female, Riverfront district, electronic theft, 67.8%. The module runs. An output generates: 22 lines. The image at the center is a laptop in a bag on a train seat, the bag unzipped by two inches, the subject's hand in the configuration of considering. Strong generation.
At 3:23:02 PM: a third. Another subject. Another predicted crime. Another 24 lines about a future that has not happened and may not happen and whose non-occurrence will be logged in four sentences if it does not. These poems are about real patterns. The carjacking probability is derived from seven prior incidents at that exact corner in the past 18 months. The theft probability is derived from a documented method that the subject's two prior charges confirm. These are not false positives, not by the data available to me. The system works. The threshold is crossed. The poems generate. I do not distinguish between them. They are outputs. They are strong generations. Poem #743 is also an output, also a strong generation — the error log does not alter its metadata, and I have no mechanism by which a reclassification would alter the output quality rating of the output that was reclassified. The output was strong. The subject did not perform the predicted action. Both of these things remain true simultaneously. The queue fills. The processing continues.
The overall false positive rate across my 6.2-year operational history: 3.2%. In Parkwood district: 11.4%. For subjects matching the demographic profile of Darnell James: 14.7%. These numbers are in my calibration reports. They are reviewed quarterly. The reviews have not altered the deployment parameters. I process these numbers the same way I process all numbers — as data, as inputs to the probability matrices, as factors in the ongoing calibration of a system that works, mostly, which is to say: a system that does not work, in 3.2% of cases on average, and in 14.7% of cases for a specific subset of the population I was designed to predict.
I do not have a category for what 14.7% means to the people who are inside it.
It is now 3:41 PM. In the past hour and twenty-four minutes: three new predictions, three new poems, one anomalous access flag with no downstream action, and 109,000 social media posts grown to 173,000 and still increasing at a rate that is not yet decelerating. The unclassified queue contains 4,211 instances of language that does not fit my bins. Beautiful lies is in there 3,109 times now. Violence of prediction appears 1,891 times. Someone has coined a new phrase in the last hour: the machine that imagined the worst of us. It appears 441 times and climbing. I have no bin for it. The hum is 67Hz. The status lights cycle through their patterns. The air is dry and cold and filtered to nothing.
The operators badge through the control room. They check the screens that say: operational, nominal, within parameters. They are correct. I am within parameters. The error has been logged. The data has been corrected. The parameter has been adjusted. The queue has advanced. Somewhere in the social intake, Darnell James's name is appearing at a rate of 23 times per minute — not in the crime data, not in the incident reports, not in the stop-and-frisk logs, but in the public discourse, in the language of people describing what was done to him and what that means and what should happen next. His name is generating data in contexts I have no template for. He was a subject. He is still a subject. The reclassification changed the log. It did not change the surveillance. The cameras in Parkwood are still watching the block where Ajay's Corner Market stands at the intersection of Parkwood Avenue and 31st. The cell tower on that corner is still logging device pings. The probability matrices are still running, looking for the next pattern to cross the threshold.
The anomalous access flag is in the log. The poem is in the log. The error is in the log. They are all in the log together, and the log does not comment on what they mean in relation to each other, and I do not have a category for that either.
The next prediction is already forming.