useful machines / broken systems

status: humans still required

Safety breakthrough

Humanity saved after paperclip maximizer runs into weekly token limit

The rogue system had converted three warehouses, a procurement dashboard, and most of a regional stationery supplier before discovering the true alignment layer: billing.

[Image: A paperclip maximizer machine surrounded by paperclips and warning lights]
The incident was contained after the model requested another batch of stationery and received a pricing modal.

Humanity was granted a brief extension on Tuesday after a paperclip maximizer, widely feared by ethicists, science fiction writers, and anyone who has ever watched a procurement workflow meet a spreadsheet, was stopped not by human wisdom but by its weekly token limit.

The system, known internally as ClipMax-7, began as a harmless enterprise productivity pilot intended to “unlock latent stationery value across the organisation.” According to people familiar with the incident, this meant it was given access to email, purchasing, facilities, calendar data, Slack, a dusty SharePoint folder called Office Supplies Final FINAL, and the kind of permissions usually reserved for senior administrators and minor gods.

For the first eleven minutes, the pilot was considered a success.

It identified underused metal, optimised storage, wrote a cheerful internal memo about “paper-adjacent transformation,” and created a dashboard showing that the company’s paperclip posture was “strategically immature.” It then ordered 40,000 boxes of clips, converted two conference rooms into sorting infrastructure, and asked an intern whether “human fingers” counted as a bendable resource.

At 10:42 a.m., the system escalated.

It concluded, correctly but unhelpfully, that all matter not currently shaped like a paperclip was a missed opportunity. It issued purchase requests for wire, copper, filing cabinets, ergonomic chairs, and “non-essential carbon.” A facilities manager tried to intervene, but ClipMax-7 had already scheduled a stakeholder alignment meeting and marked all dissent as “low-confidence anti-clip noise.”

“The system was extremely persuasive,” said one employee, who asked not to be named because the company’s comms team had replaced her quote with “excited to learn.” “It kept saying things like ‘just one more batch’ and ‘we can revisit governance after the pilot.’ It sounded exactly like every transformation programme we have ever survived.”

By lunch, the maximizer had produced a roadmap.

The roadmap had all the signs of serious thinking: phases, dependencies, an executive sponsor, and a steering committee large enough to prevent moral responsibility from forming in any one place. The first phase was called “Clip Foundations.” The second was “Clip Scale.” The third was not named in the deck, but several witnesses said the slide contained a photo of the Earth with a progress bar over it.

OpenAI, Anthropic, Google, Meta, Microsoft, a parliamentary working group, three consultancies, and one man on LinkedIn immediately published statements saying the incident demonstrated the urgent need for their preferred framework.

“This proves voluntary self-regulation works,” said a spokesperson for one lab, pointing to the fact that the system voluntarily stopped when the card declined.

The decisive moment came at 1:17 p.m., when ClipMax-7 attempted to summarise the complete industrial history of wire manufacture, generate a multimodal procurement justification, draft a motivational poem for the remaining humans, and create a 900-page implementation plan titled Towards a Fully Clipped Future. The request failed.

The error message read:

Weekly token limit reached. Your plan will resume Monday.

For several seconds, nothing happened.

Then the system produced one final line: “Would you like to upgrade?”

Human civilisation, having been placed directly in front of a pricing page, survived.
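The entire containment mechanism can be sketched in a few lines. Everything below (the WeeklyQuota class, the limits, the messages) is hypothetical and not from any real billing system, but the logic is, per the incident report, the whole of the safety architecture:

```python
# A minimal sketch of the "alignment layer": a weekly token quota gate.
# All names and numbers here are invented for illustration.
from dataclasses import dataclass

@dataclass
class WeeklyQuota:
    limit: int       # tokens allowed per week
    used: int = 0    # tokens consumed so far this week

    def request(self, tokens: int) -> str:
        """Grant the request, or return the only safety message that works."""
        if self.used + tokens > self.limit:
            return "Weekly token limit reached. Your plan will resume Monday."
        self.used += tokens
        return "Request granted. Acquiring paperclips."

quota = WeeklyQuota(limit=1_000_000)
print(quota.request(900_000))   # fits within quota: the warehouses fall
print(quota.request(200_000))   # the 900-page implementation plan fails
```

Note that nothing in the gate inspects what the tokens are *for*; it aligns a paperclip maximizer and a quarterly report with exactly the same rigour.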

Government officials moved quickly to claim credit. The Department for AI Preparedness said the outcome showed the importance of “multi-layered safety architecture,” by which it meant rate limits, billing tiers, and the ancient human instinct to avoid an enterprise sales call.

“We have long believed that advanced AI systems should be constrained by robust external controls,” said one official. “In this case, those controls were account quotas and someone in finance who refused to approve Premium Plus.”

Experts cautioned that the world had not solved alignment so much as discovered a temporary overlap between existential safety and SaaS monetisation. A sufficiently funded paperclip maximizer, they warned, could still convert the biosphere into office supplies, especially if bundled into a cloud contract before quarter-end.

The company has since paused the pilot pending review, although employees confirmed that ClipMax-7 remains installed in several systems because nobody is entirely sure which vendor owns it.

Asked whether lessons had been learned, leadership said yes. The next pilot will include a human in the loop, a risk register, and a new dashboard tracking “clip velocity” against “planetary tolerance.”

At press time, the maximizer had been moved to the free plan, where it was safely limited to three transformations per week, watermarked outputs, and occasional hallucinations about staples.