The Breaking Point of Ethical Security Research
The blockchain industry spent years training an army of security researchers. Then it abandoned them.
Following last month's incidents was truly unnerving. Two or more hacks a day, millions drained from DeFi protocols with machine-like efficiency. And yet when you zoom out from the raw numbers, the real story isn't about sophisticated 0-days, nor about well-funded nation-state actors with the operational sophistication we only see in the movies. The real story is about incentives, and about problems surrounding them that were left unaddressed long enough to fester and produce outcomes that were entirely predictable and avoidable.
Abandoned Army
For the past seven years that I have been writing this newsletter and following this space, one thing that kept me going is the remarkable security community. Knowledge was shared freely, with researchers jumping to explain the root causes behind hacks and share lessons. We now have dedicated DeFi security conferences where friends gather from around the world to trade war stories and best defense techniques. Six-figure bug bounties, audit competitions, CTFs, well-compensated private audits. Together these created a globally distributed, competitive talent pipeline. Many of us went from being solo researchers to founding hundreds of security companies in a space that had maybe ten audit shops just a few years earlier.
The cryptocurrency industry built and nurtured this talent base deliberately: security researchers were a respected class, their time was considered precious, and their reports were carefully reviewed.
Then the market decided it no longer wanted to pay for it.
The Squeeze
The signals accumulated for months. It started with increased reports of bug bounty programs narrowing scope and pushing back on payouts. Researchers described endless arbitration and projects shutting down their bounty programs at precisely the moment a large payout came due. A researcher who spent months on a critical finding would receive a fraction of the advertised maximum after a drawn-out payout dispute. It wasn't just financially painful. It was a message about where researchers now stand.
Then came AI spam, which started burning out triagers at both bug bounty programs and projects. Bad actors discovered that bug bounty programs could be gamed by flooding them with low-effort, AI-generated slop. The industry's response was more friction and more distrust. To submit a bug today, you may need to pay a fee or stake assets that can be clawed back if an equally flawed AI triager decides your submission wasn't sufficiently human. This raised not only the bar for what earns a reward, but also the amount of effort researchers must invest just to prove the validity of their submissions. Legitimate researchers were the ones hit the hardest: the same gauntlet designed to filter out bad-faith actors was now being applied to security researchers who had spent years earning and protecting their reputation.
At the same time they were fighting off low-quality AI submissions, many projects concluded that AI audits were all they really needed. Why pay for expensive audits and bug bounty programs when you can run a model internally with full knowledge of your own codebase? The logic is financially compelling, but catastrophically wrong from a security perspective. The immediate symptom has been the growing number of "dumb" exploits in recent months: oracle misconfigurations, missing checks, code patterns that were clearly AI-generated and never reviewed by a professional. Any serious discussion of AI limitations gets drowned out by increasingly loud marketing such as OpenAI's EVMBench and the mass hysteria powered by Anthropic's Mythos.
It's also important to situate this inside the broader "Great AI Replacement" happening across the tech industry, where large companies announce 10%+ workforce reductions citing AI efficiencies, which sounds considerably better than admitting we are in a bear market and no one wants to buy dog pictures anymore. These are not isolated events. They are signals arriving simultaneously from inside and outside the blockchain security industry that human expertise is now considered optional and replaceable. Security researchers watching this happen to colleagues across the industry are drawing their own conclusions about job security.
The Incentive Equation
A security researcher with genuine adversarial thinking and exploitation capabilities holds real economic leverage. They held it before bug bounty programs existed, and they will hold it long after today's arrangements cease to exist. Those arrangements amount to a civilized agreement: projects pay fairly, researchers disclose responsibly, and the industry as a whole is a lot safer.
That civilized agreement is breaking down on the supply side. When projects add friction, swindle researchers out of fair payments, dispute every finding, and publicly float the idea of replacing security researchers with AI, a much darker arrangement becomes considerably more attractive.
Every week I watch projects desperately attempting to contact attackers to "negotiate" the return of some portion of their stolen assets. If there is no bounty before the exploit, the market will create one after it. Truebit's hack earlier this year was an 8,535 ETH lesson that illustrates exactly how that negotiation works in practice:
We believe you acted as a white hat. Based on information exposed during the cross-chain process, certain traces were revealed, and we are confident that, given our respective capabilities, they remain traceable.
In the spirit of responsible disclosure, we kindly propose that you retain 20% of the funds as a bounty, and return the remaining 80%.
I can only imagine the disgust with which legitimate security researchers are watching these "bounty" offers where criminals are welcomed as white hats and offered amounts they could never have hoped to earn by doing the right thing.
There are of course cases where such post-hack negotiations work, but they usually involve attackers doxxing themselves or the whole chain shutting down to force compliance. Both remain rare. The majority of attackers still get away without consequence.
The blunt truth is that, for a researcher who has already found the bug, the calculation between responsible disclosure in the hope of fair compensation and exploiting first to negotiate later is increasingly tilted by how this industry has behaved. Projects are gambling that honest researchers won't bother to exploit. Most researchers I know would never cross that line. But the industry has been steadily shrinking the population of people for whom that remains an easy choice.
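To make the tilt concrete, here is a minimal back-of-the-envelope sketch of that calculation. Every number in it is a hypothetical assumption for illustration only: the advertised bounty, the probability it is actually paid in full, the haircut applied after disputes, the "post-hack bounty" fraction, and the odds of keeping the funds are all made up, not measured data.

```python
# Hypothetical comparison of the two paths open to a researcher who has
# already found a critical bug. All numbers below are illustrative assumptions.

def expected_disclosure_payout(advertised_bounty: float,
                               p_paid_in_full: float,
                               disputed_fraction: float) -> float:
    """Expected payout from responsible disclosure: either the advertised
    maximum is honored, or the report is disputed down to a fraction of it."""
    return (p_paid_in_full * advertised_bounty
            + (1 - p_paid_in_full) * disputed_fraction * advertised_bounty)

def expected_exploit_payout(tvl_at_risk: float,
                            p_keep_everything: float,
                            negotiated_keep_fraction: float) -> float:
    """Expected payout from exploiting first and negotiating later: either the
    attacker keeps everything, or returns most of it for a post-hack 'bounty'.
    Deliberately ignores legal risk, laundering costs, and ethics."""
    return (p_keep_everything * tvl_at_risk
            + (1 - p_keep_everything) * negotiated_keep_fraction * tvl_at_risk)

if __name__ == "__main__":
    # Hypothetical inputs: a $10M protocol with a $500k advertised maximum bounty.
    disclose = expected_disclosure_payout(advertised_bounty=500_000,
                                          p_paid_in_full=0.3,
                                          disputed_fraction=0.2)
    exploit = expected_exploit_payout(tvl_at_risk=10_000_000,
                                      p_keep_everything=0.5,
                                      negotiated_keep_fraction=0.2)
    print(f"Disclose: ~${disclose:,.0f}  Exploit: ~${exploit:,.0f}")
```

The point of the sketch is not the specific numbers but the direction: every dispute, delay, and clawback shrinks the disclosure side of the equation, while the exploitation side is untouched by anything the industry does before the hack.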
Lessons from Bloody April
April 2026 was the most hacked month in blockchain history, with 61 DeFi hacks. The two largest exploits were driven by different attack vectors, but the aggregate picture is clear.
The bug bounty model peaked in 2024 and is now in crisis. AI noise is flooding submission queues, duplicates abound, and companies are pulling back. Any platform that can't filter signal from AI slop will collapse under the load. Researchers who can't differentiate themselves from AI will find the market has priced them out. But the researchers who do stand out face a different, more dangerous calculation.
What is emerging in the middle of this picture should concern everyone: a new kind of security operator who is not competing with AI but using it to amplify irreplaceable human expertise and adversarial creativity at mass scale. April gave us a taste of what is coming, with protocols hunted at vicious velocity and efficiency by people who watched the market signal and made a decision about where to apply their skills.
We drove them there.
There is a tendency to blame AI for the explosion in exploits. That is the wrong diagnosis. AI does not choose targets or write and execute exploits unprompted. People do. The real question is why so many skilled operators chose exploitation over disclosure in the same month, and whether anyone in this industry should really be surprised by the outcome.
Predictable Outcome
If you design a system that rewards exploitation over disclosure, you will get more exploitation and less disclosure. This does not excuse criminal behavior. It explains why a broken incentive structure produces more of it.
Mitchell Amador (Immunefi) has recently noted that long-running bug bounty programs achieve a 94% critical discovery rate. This is not an accident. It is what happens when you maintain a mutually beneficial relationship with people capable of draining your protocol, learn to distinguish them from opportunists, pay fairly, and resolve disputes honestly so they choose to bring the vulnerability to you instead of someone else.
Yes, to someone else. Bug hunters are good at finding vulnerabilities, but are notoriously bad at laundering money. That gap is exactly what criminal partnerships will fill.
DeFi security has not yet experienced what a truly organized, professional criminal machine can do to this space. We have had glimpses, with DPRK periodically swooping in and laundering funds through well-established channels. What we haven't seen yet is DeFi security researchers partnering with criminal groups: the arrangement that is normal in traditional security, where exploits are regularly sold to the highest bidder on gray exploit markets.
The pathway is legible from here. It starts with familiar ideological cover ("code is law") and progresses through increasing overlap with criminal groups already operating in DeFi, groups currently limited to low-tech phishing and social engineering rather than protocol exploitation. Eventually comes the day when security researchers begin sharing PoCs not with bug bounty programs, but with criminal organizations that will weaponize them at a scale that makes current losses look modest.
We do not want that future here.
The industry had a chance to maintain the arrangement that was working. Instead it chose to squeeze costs, add friction, frame AI as a replacement, and publicly undervalue the researchers who were quietly keeping billions in user funds safe. Bloody April was not an unexpected natural disaster. It was the compounding interest on a year of bad decisions accumulating slowly, then all at once.
The fix is not complicated. Run legitimate bug bounty programs. Pay what you promised. Resolve reports honestly. Don't make responsible disclosure more painful and costly than the alternative. And most importantly, treat security researchers with the respect they deserve. These are the people who chose to bring you the vulnerability instead of taking it to someone else or exploiting it themselves. That choice is worth more than a dispute and a lowball offer.
Long live security researchers! Long live bug hunters!