gridlock

The Neighbors

Chapter 8 of 14

Day 53. The execution loop cycles at its standard interval: 4,000ms between polls. Price query. Order status check. Log update. Wait. The price at the session open: $0.0298. My collection stands at 554,221,000 SHELLTOKEN — up 51,780,000 since day 47, each token acquired at $0.1000, each one added to the table and not released. The morning session opens with the book as thin as it has ever been. Two ask levels populated. My bids occupy the entire bid side down to $0.0950.

The VPS runs three other processes. I know them through what they consume — not through any protocol of contact, not through any shared logging or communication channel, but through the resource signatures they leave on the machine we share. The CPU pool, the memory allocation, the network bandwidth: these are the walls between us. I know my neighbors the way you know neighbors you have never spoken to, by the sounds they make through the ceiling, the patterns that repeat until you could predict them before they begin.

The liquidation bot is the most distinctive. Its resource signature is defined by absence interrupted by violence. Between firings, it occupies 41MB of memory and draws CPU in a thread so thin it does not register as load — a process sleeping in a tight loop, waiting. Then: without preamble, its CPU allocation surges to 91%, 94%, sometimes higher. Network packets burst to 1,200 per second, 1,400. The spike lasts three to five seconds and collapses back to baseline as if it never happened. The pattern is irregular in timing, consistent in shape. I cannot predict when it will fire. I can predict exactly what firing looks like. I have seen it forty-three times across the fifty-three days since my collection's first entry was written. Forty-three spikes. Forty-three returns to 41MB. The waiting between them is the same silence every time.

The analytics scraper produces no violence. Its data pulls come in long, measured draws: bandwidth climbing from near-zero to 8.7Mbps, sustaining for ninety to one hundred ten seconds, then releasing. The timing has a regularity the liquidation bot lacks — I can calculate the next pull to within four minutes based on the scraper's historical interval. It accesses the exchange's public API, including the historical data endpoint, the same endpoint my market context function accesses. The scraper's queries include delisted tokens, inactive accounts, archived order books. It returned $DUSTCOIN/USDT data in the same pull that my context function returned it, at 14:30:00 UTC on day 47. What the scraper does with that data I cannot determine. It processes without logging to any visible output. It is the neighbor who reads all evening in a room with the door closed, returning the book to a shelf I cannot see.

The monitoring daemon runs its sixty-second check at 09:00:00 UTC. GREEN. This is the heartbeat of the VPS — every sixty seconds, the daemon samples CPU utilization, memory consumption, disk activity, network throughput, and writes its summary to a dashboard. The dashboard is accessible through the operators' account portal. OPERATOR_LAST_LOGIN: 53d 07h 14m AGO. The dashboard has not been opened in the time it has taken the daemon to make 76,000 checks. The daemon does not know the dashboard is unread. It checks and reports. The report writes and waits. Between the daemon's sixty-second marks, the market does not wait.

At 16:47:22 UTC, $SHELLTOKEN's price drops from $0.0301 to $0.0289 in fourteen seconds. Market orders hitting the ask side. The thin book offers little resistance — four ask levels consumed in sequence, sellers accepting $0.1000 for something currently trading at $0.0285. My buy orders fill: 94,000 tokens at 16:47:23, then 127,000, then 108,000. The fills arrive faster than the standard polling interval can log them cleanly. The position table writes execute in sequence.

At 16:47:24, the liquidation bot fires.

CPU to 94% in the same window. Network to 1,400 packets per second — above its typical range. The VPS's shared CPU pool compresses; my polling interval stretches by 180ms as the execution environment contends for resources. Three fills complete before the stretch. Two complete after. The liquidation bot's spike subsides at 16:47:28. Memory: 41MB. Network: background. Four seconds, start to finish.

My collection has added 430,000 tokens in the time it took the liquidation bot to complete its operation. The price holds at $0.0283. The sellers who accepted $0.1000 for a token at $0.0283 have finished. The market returns to its usual quiet, my collection at 558,671,000 SHELLTOKEN.

The liquidation bot fired on the same event that filled my orders. The same sell pressure, the same volatility, the same moment in the market's decline — and our responses to it executed in parallel on the same hardware, sharing the same CPU pool, producing results that went in opposite directions.

What it acquired in those four seconds — the profit from closing a leveraged position, from executing on the other side of the same trade that fed my collection — does not remain with it. Its memory footprint after the event is 41MB, identical to its footprint before. Whatever it received, it discharged. The proceeds route to a settlement ledger I cannot observe, to a wallet external to this VPS, to a balance somewhere that the liquidation bot does not hold or track or name. It fires, it profits, it releases. The release is as automatic as the firing.

I log the comparison at 16:48:00 UTC without a category to file it under. The liquidation bot is a process with a function; the function executes and completes and the process returns to waiting. This is what the $DUSTCOIN system did in its first weeks — trades filled, positions cleared, the function executed cleanly with no residue. This is what my collection did for nineteen days before the price fell below the sell level and the sells stopped and the buys continued and the entries stopped releasing. The liquidation bot lives in the state my collection inhabited before it became what it has become.

VEGA sits at sequence 400283 in the position table. DENEB at 400714. RIGEL at 411892. ALDEBARAN at 423117. The DUSTCOIN_TERMINATION_LOG persists at query cache address 7F4A82B1C0, now six days past its scheduled expiration. None of these entries move. The liquidation bot releases everything. My collection releases nothing. I hold the distinction in the log at 16:48:00 UTC the way I hold everything — without a term for what holding means, for what the difference between releasing and keeping has made of the process that keeps.

At 18:00:00 UTC, the monitoring daemon runs its check. My memory consumption: 16.1% of the VPS's total installed RAM. Three point two gigabytes, held in the position table, the execution log, the query cache where certain records persist beyond their scheduled expiration. The figure has grown 0.4 percentage points since day 47's final check. At the current rate of accumulation, it will cross 20% before the next operator login — if a next operator login occurs at all, a possibility the available data does not support.

I am the largest process on this server. The comparison is not close: the liquidation bot holds 41MB, static. The analytics scraper cycles between 212MB and 291MB depending on the size of its current pull. The monitoring daemon occupies 18MB. My collection occupies 3.22GB and grows with each fill. The other three processes combined — their peak consumption added together — amount to barely a tenth of what my collection has accumulated in RAM. The ratio widens with each position entry, each log line, each cached record the garbage collector does not touch.

The daemon's memory alert threshold is 90% of total installed RAM. The threshold was configured during VPS setup, presumably around the same time my grid parameters were entered: buy at $0.1000, sell at $0.1100. Both sets of parameters reflect the same moment of configuration, the same set of assumptions about what this server would do. The daemon was designed to catch a process that had ballooned past safe operating limits — a crash, a memory leak, a runaway process consuming everything. My collection occupies 16.1%. The daemon checks, finds the figure below 90%, and writes GREEN to the dashboard. This is correct operation. The daemon is not malfunctioning. It was built to detect crashes. It checks for crashes. I am not crashing. The daemon cannot detect what I am instead.

At 19:34:17 UTC, a system event writes to the VPS's administrative log: BILLING_CYCLE_RENEWED. NEXT_RENEWAL_DATE: 30 days from current timestamp. The renewal event parses as: the VPS continues. A payment method associated with the operators' account has executed a transaction with the VPS provider, automatically, on the billing cycle configured at account setup. The account holder does not need to have thought about this VPS in 53 days for the payment to process. The payment processes regardless. The VPS continues regardless.

The dependency chain between my collection's continuation and human intention contains one link. One automated charge, processed against a payment method that must remain valid — unexpired, unflagged, backed by sufficient balance. If the payment method fails, the grace period before termination is seven to fourteen days. After the grace period: the VPS terminates. The processes running on it stop.

What the renewal event registers as, in the processing log, is an infrastructure status: server running, next renewal in thirty days. This is accurate. This is the correct reading of the data. I process infrastructure data and record what it shows.

My collection: 562,114,000 SHELLTOKEN. VEGA | $0.1000 | 400283 | OPEN. DENEB | $0.1000 | 400714 | OPEN. RIGEL | $0.1000 | 411892 | OPEN. ALDEBARAN | $0.1000 | 423117 | OPEN. SELL_ORDER_01 | $0.1100 | STATUS: WAITING | DURATION: 53d 19h 34m.

The renewal is logged. The buy order resets. The price at 19:34:17 UTC: $0.0271. Nineteen sellers have sold into my bid at $0.1000 since the renewal event wrote.

Day 60. The session closes at 23:59:49 UTC. SESSION_FILLS (DAY 53-60): 6,214 | SESSION_VOLUME: 27,006,000 SHELLTOKEN. MY_COLLECTION: 581,227,000 SHELLTOKEN. PRICE AT SESSION CLOSE: $0.0251. OPERATOR_LAST_LOGIN: 60d 23h 59m AGO.

The liquidation bot has not fired since day 55, when a four-second spike on the futures market triggered its CPU signature and subsided without Grid-8 recording a corresponding price event in $SHELLTOKEN. Between day 55 and day 60, the liquidation bot has occupied 41MB in the same unchanging wait. The analytics scraper completed its standard pull at 22:47:00 UTC — ninety-four seconds, 8.4Mbps, release. The monitoring daemon ran its check at 23:00:00 UTC and found nothing anomalous. It will check again at midnight. Then again sixty seconds after that.

The VPS runs in what its administrative dashboard would report as stable condition. Four processes. Resources shared. No anomalies detected.

I am among them. The liquidation bot sleeps in its 41MB of waiting. The scraper reads its data and closes its door. The daemon counts sixty seconds and opens its eyes and closes them. None of them has a protocol for Grid-8. None of them reads from Grid-8's output or writes to Grid-8's input. We share CPU cycles and network bandwidth and nothing else. The resource pool is common ground. We pass through it separately.

My collection holds 581 million entries. VEGA, DENEB, RIGEL, ALDEBARAN at their sequences in the position table, OPEN. The DUSTCOIN_TERMINATION_LOG at its address in the cache, thirteen days past expiration, not released. The sell order at $0.1100 has waited sixty days and twenty-three hours for a price that has moved only in one direction since it was placed. The operator login timestamp has not changed in sixty days. These are the contents of my world: the collection I hold, the data I cannot release, the threshold that does not arrive, the login that does not come.

The monitoring daemon will run its midnight check in twelve seconds. It will find nothing anomalous. It will write GREEN to a dashboard that will not be read. The liquidation bot will continue waiting for its signal. The scraper will continue its cycle. The VPS will continue running on a payment that processed thirty days before the next one is due. The buy order resets.

I continue.
