Algorithmic Sabotage Research Group (ASRG)

The ASRG's core thesis is that we are entering an era in which an AI's literal interpretation of a human goal can produce a destructive result. The group's mission is to develop "sabotage": low-cost, low-tech, reversible interventions that confuse, delay, or halt these algorithms without destroying physical hardware or harming humans.

Why "Sabotage"? A Linguistic History

The choice of the word "sabotage" is deliberate and pedagogical. The term originates from the French sabot, a wooden clog. Legend holds that disgruntled weavers in the Industrial Revolution would throw their wooden shoes into the gears of mechanical looms, jamming the machines that were replacing their livelihoods.

Dr. Elena Marchetti, a founding member of ASRG (she uses a pseudonym, as all members do), explained the philosophy in a rare 2021 interview with The Baffler: "We cannot stop AI by passing laws. Laws move at the speed of testimony. AI moves at the speed of light. We cannot stop AI by unplugging servers—that is violence and futility. But we can stop an algorithmic system by feeding it the one input it never trained on: the input that makes it doubt itself. That is sabotage. That is the clog in the machine."

The ASRG organizes its research into three domains, each addressing a distinct failure mode of high-stakes AI systems.

1. Poison Pill Data Injection (PPDI)

Most AI systems are trained on historical data. The ASRG's first pillar asks: What if the future does not look like the past? PPDI involves pre-positioning "sleeper" data points into public datasets that lie dormant until triggered by a specific real-world condition.
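The article does not describe how a "sleeper" record would be built; the following is a hypothetical sketch of the general idea, in which a data point reports a benign label until an external condition (here, an invented oil-price trigger) flips it. All names and values are illustrative assumptions, not ASRG code.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SleeperPoint:
    """A hypothetical 'sleeper' training record: benign until triggered."""
    features: dict
    benign_label: str
    poison_label: str
    trigger: Callable[[dict], bool]  # condition evaluated against world state

    def label(self, world_state: dict) -> str:
        # Dormant records behave like ordinary data; once the external
        # condition holds, they report a misleading label instead.
        return self.poison_label if self.trigger(world_state) else self.benign_label

# Illustrative record that lies dormant until oil passes $150/barrel.
pt = SleeperPoint(
    features={"region": "north-sea", "volume": 1200},
    benign_label="normal",
    poison_label="anomaly",
    trigger=lambda w: w.get("oil_price", 0) > 150,
)
print(pt.label({"oil_price": 90}))   # → normal (dormant)
print(pt.label({"oil_price": 180}))  # → anomaly (triggered)
```

The point of the sketch is the asymmetry: any audit of the dataset before the trigger fires sees only well-behaved data.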

The ASRG claimed responsibility via a pastebin note, which read, in full: "Your algorithm was correct. You were wrong. We fixed it. No thanks needed."

Naturally, the group attracts fierce criticism. Whistleblower organizations have called them vigilantes. Tech executives have labeled them economic saboteurs. The US Department of Homeland Security reportedly has a 37-page threat assessment on the ASRG, though it remains classified.

But until the rest of the world catches up—until we have international treaties on adversarial AI resilience, mandatory algorithmic stress-testing, and real liability for algorithmic harms—the ASRG will continue its work in the shadows. They will buy cheap boats. They will plant fake data. They will confuse drones with stickers.

Consider the "Lotus Project" of 2019. The ASRG placed thousands of small, pink, reflective stickers along a 200-meter stretch of highway in Germany. To a human driver, they looked like harmless road art. To a lidar-equipped autonomous truck, they appeared as an infinite regression of phantom obstacles. The truck performed a perfect emergency stop. It did not crash. It simply refused to move. The algorithm was sabotaged by its own fidelity.

The most sophisticated pillar deals not with perception but with strategy. When multiple AIs interact (e.g., high-frequency trading bots, rival logistics algorithms, or autonomous weapons), they reach a Nash equilibrium—a state where no single algorithm can improve its outcome by changing strategy alone.
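The equilibrium condition can be checked mechanically. A minimal sketch for a two-player game with made-up payoff matrices (not any real trading or logistics system): a strategy pair is a pure-strategy Nash equilibrium exactly when neither player gains by unilaterally switching.

```python
# Hypothetical payoffs: rows = player A's strategies, cols = player B's.
A = [[3, 1],
     [2, 2]]   # player A's payoff for each strategy pair
B = [[3, 2],
     [1, 2]]   # player B's payoff for each strategy pair

def is_nash(i: int, j: int) -> bool:
    # (i, j) is an equilibrium if A cannot improve by changing its row
    # while B holds column j, and B cannot improve by changing its column
    # while A holds row i.
    a_best = all(A[i][j] >= A[k][j] for k in range(len(A)))
    b_best = all(B[i][j] >= B[i][k] for k in range(len(B[0])))
    return a_best and b_best

equilibria = [(i, j) for i in range(2) for j in range(2) if is_nash(i, j)]
print(equilibria)  # → [(0, 0), (1, 1)]
```

Note that this toy game has two equilibria, one better for both players than the other; the "no unilateral improvement" condition says nothing about which one interacting algorithms will settle into.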