Sixty-four days until sunset.
The categorization spreadsheet had been waiting since Friday. The task fit on a sticky note: pull the API key records, sort them into developer versus automated, queue sunset notifications for any human maintainers still reachable, flag the rest as notification-not-possible. She should have finished it in a day.
She sorted the 247 entries by request frequency and watched them redistribute into clusters — high-volume traffic rising to the top, weekly and daily calls settling toward the bottom. She started with the top tier.
The API key registered to NovaDev Solutions logged 847 requests in the last thirty days, drawn at irregular intervals from multiple endpoints. The account email was a real name at a company domain. The usage pattern said human: someone querying different locations, calling the same endpoint twice within five minutes and then not again for a day, testing edge cases in the historical data endpoint. Development behavior. She queued a sunset notification and moved to the next entry.
Three more like it — small commercial accounts, inconsistent timing, the fingerprints of people who were at a keyboard when the queries ran. She processed them quickly: notification queued, contact verified, checkbox closed. She was five entries into the task when the pattern shifted.
The sixth account had made 720 requests in the last thirty days. The same count as the month before, and the month before that. She checked the interval column: 60 minutes, within three seconds of network variance, without exception. She looked at the others clustering near it in the frequency sort. A static IP in Nebraska: every fifteen minutes, 2,880 entries per month, seven years running. A Minnesota residential address: once daily, at 6:02 AM Central, day after day without a gap.
These weren't developers. The API key registration emails made it clear — a university address, a hospital IT admin account, a personal email attached to a landscaping company that might or might not still be in business. None of the request patterns showed testing behavior. No version endpoint calls, no parameter variation. Just the same fields, the same coordinates, the same interval, in the exact configuration someone had set up once and left running while the rest of the world moved on. She wrote a word in the margin of her working document and looked at it.
Orphaned.
The technical meaning was clean enough. A resource without a managing process, a record without a parent. She'd used it before in incident reports — orphaned foreign key rows, orphaned S3 buckets left behind by deleted services, cleanup tickets that recycled the same word until it stopped meaning anything. But applied here the word was doing something different. These weren't broken records from a failed migration. These were configured systems. A person had decided what fields to request, what interval made sense, what API key to paste in. The configurations were still correct. They still ran exactly as intended. The only thing missing was the person who had intended them.
Gone away, or moved on, or stopped being able to care about it, or stopped being there at all. The configurations kept running. The calls kept arriving. The API kept answering.
She wrote *automated, orphaned integrations — no active maintainers, notification-not-possible* in the ticket notes field and kept going.
The Iowa caller had been active since April 2019. The IP geolocation resolved to Cedar Falls — a residential ISP block, routing through a consumer router at the far end. The API key was registered to a single email address: m.lindquist@uni.edu. University of Northern Iowa. The account name field had been filled in with the specificity of someone who labeled things: Lindquist Greenhouse Controller.
Three fields, every call. current_conditions, hourly_forecast, precipitation_probability. She ran the total request count against the database: 43,958, accounting for a brief gap during initial setup in 2019 and one outage week in November 2021 that showed up as a clean hole in the logs. Five years of hourly calls. A controller that integrated weather data would use the forecast to anticipate, not just react — precipitation probability mattered before the rain came, not during. A cold front needed to be visible before the temperature inside started to drop.
She opened a new document. Not the categorization spreadsheet. A blank file in the v1.2.3-anomalies folder she'd created during the forecast investigation, named it caller-log.md, and typed the first entry.
```
Iowa / Cedar Falls
API Key: m.lindquist@uni.edu (registered April 2019)
Account: Lindquist Greenhouse Controller
Pattern: Hourly. current_conditions, hourly_forecast, precipitation_probability.
Active since: April 2019.
Total requests: 43,958.
Status: Active. No human contact reachable.
Notes: UNI domain. Automated greenhouse controller. Weather data drives climate decisions — pre-adjustment before temperature swings and precipitation.
Owner status: TBD.
```
The second caller she already knew by rhythm. Static IP from a hospital network in Omaha — she'd seen it in the frequency sort and recognized the interval. NorthStar Medical Center. Registered in 2017 under a hospital IT administrator account. The account name: NorthStar Respiratory Care - Environmental Monitoring. Three fields per call, different from the Iowa caller: temperature, humidity, barometric_pressure. Current conditions only, not forecast, updated every fifteen minutes. She looked at the parameter set and understood what she was seeing before she looked it up. Barometric pressure and humidity — the measures that mattered for patients who couldn't breathe normally. A respiratory care unit needed to know what the air was doing every quarter hour because what the air was doing affected treatment.
She ran the total request count. 219,072. She ran it twice to make sure the query was right.
She added the second entry to the caller log and looked at both of them. Forty-three thousand calls from a greenhouse controller in Iowa. Two hundred and nineteen thousand from a hospital respiratory unit in Nebraska. Running continuously, precisely, without any human hand near the switch for months or years — built correctly and left alone, calling an API that was still willing to answer. She moved on.
Twenty-three unique IP addresses in the full set. She added each to the caller log as she went, her notes growing longer than the task required.
Duluth, Minnesota: a residential address, one query per day at 6:02 AM Central, requesting current_conditions only. Had not missed a day since March 2020. The time was the detail she kept returning to as she wrote the entry. Not 6:00 AM even, nothing rounded. Six-oh-two. A cron job configured by someone sitting at a keyboard at exactly that moment, looking at the clock and typing what it showed. An early riser who had set it up once with the particular care of someone who expected to be alive to use it indefinitely.
Tucson, Arizona: request frequency varied by month. April through October, every four hours. November through March, once daily. The seasonal logic had been deliberate — someone understood the difference between growing season and dormant season and built it in. The fields: precipitation_probability, temp_forecast_high, wind_speed. An irrigation controller, probably. Something managing water for a yard or a commercial lot, comparing tomorrow's forecast against current soil conditions before deciding whether the sprinklers ran.
Atlanta, Georgia: a pattern that didn't resolve cleanly. Sometimes hourly, sometimes silent for two days, then several calls in a single hour. The timing correlated with nothing she could identify in the logs — no clock schedule, no fixed interval. A home automation hub, she guessed. One that responded to internal triggers rather than a set clock, pulling weather data when occupancy sensors fired or room temperatures crossed a threshold. The account showed recent activity, which meant the house was probably still occupied. Someone was living there who had no idea a previous owner's home system was still running in the walls, still calling out every time the thermostat sensed a shift, still asking a deprecated weather service about tomorrow. She documented each one, more carefully than categorization required.
Twenty-three entries. Fourteen states. Each one configured with intention: these specific fields, this specific interval, this specific location. Each one still running. Each one still asking.
Fifteen impossible forecasts now — she'd added two more that morning during a test query, extending the list through early July. She went through the full set.
June 2nd: Washington D.C., partly cloudy, high 78, precipitation 15%. June 14th: Portland, OR, overcast, high 63, precipitation 40%. June 15th: Cedar Falls, IA, scattered clouds, high 84, thunderstorm probability 42%. June 22nd: Omaha, NE, clear, high 91, humidity 38%. July 2nd: Duluth, MN, overcast, high 71, precipitation 25%.
She stopped at the fifth entry and went back to Cedar Falls, then opened the caller log — Iowa / Cedar Falls. Same city. She checked Duluth: caller at 6:02 AM daily. Same city as the July 2nd forecast entry.
Two locations in both lists. Cedar Falls in the forecast data and in the caller log. Duluth in the forecast data and in the caller log. She had fifteen forecasts and twenty-three callers, and of those, two locations overlapped exactly. It was possible that meant nothing — the API had processed millions of location queries over eight years, and any list of American cities would share members with any other. Two data points was not a pattern. There was no methodological justification for drawing a conclusion from two.
She added a note to the anomalies file. *Cedar Falls (forecast 06-15) and Duluth (forecast 07-02) also appear as active automated integrations in caller log. Coincidence likely. Track as additional forecasts accumulate.*
She left the note unsaved for a moment, reading it back. *Coincidence likely.* That was the accurate thing to say. She saved it and closed the anomalies file, but left the caller log open.
At 5:47 she updated the sunset ticket. Status: *In progress — Traffic categorization*. She typed into the notes field: *Traffic analysis complete. 23 automated integrations identified. No active human maintainers reachable. All 23 flagged notification-not-possible. Caller log document created in v1.2.3-anomalies.* She submitted the update.
She reached for the thermos. Empty — the chai had run out at some point in the afternoon without her noticing. The bottom of the metal cup was cold when she tilted it. The afternoon had moved around her while she worked, and the desk showed it: IP addresses in handwritten columns on a notepad, the admin dashboard still open behind her windows, the caller log file minimized in the taskbar. She shut down the dashboard.
Sixty-four days. The Iowa caller would send its next query in twenty minutes. She didn't need to calculate that; it was just there, the interval she'd typed into the log having settled into her working memory. Every hour, same three fields, same location, the same ask it had been making since April 2019 to a service that answered every time. Whatever the greenhouse controller needed the precipitation probability for — whatever vent it adjusted, whatever decision it made in the automated quiet of a structure in Cedar Falls, Iowa — it would get an answer tonight, and tomorrow, and the night after that, for sixty-three more days.
And then it wouldn't.
She packed up the thermos and her notepad. The caller log was still in her directory. It would still be there in the morning.