
The $290M Lesson: How Your Unverified Blockchain Data is the New Attack Surface
The crypto industry has spent years securing blockchains, but the KelpDAO incident exposed a new upstream vulnerability. As systems become more automated, data integrity is as critical as the blockchains themselves. Data providers like Edge & Node are adapting to this shift and making data verifiable.
A system can be technically secure and still fail if it relies on the wrong data.
In fact, one of the biggest blockchain security incidents of 2026 didn't start with a broken smart contract or a flaw in the blockchain itself. It started with a data infrastructure failure.
In April, attackers exploited infrastructure connected to a LayerZero Decentralized Verifier Network (DVN), a configuration tied to KelpDAO. The incident contributed to a loss estimated at roughly $290 million. Early reporting and incident analysis suggest the attackers compromised and manipulated RPC endpoints, a widely used data access technology that feeds the protocol’s data verification layer (SecurityWeek).
This means the issue was not a flaw in core cryptography. It was an issue in how data was sourced, interpreted, and validated before use.
That incident reinforced a growing concern across digital asset infrastructure: the industry's biggest security blind spot may now lie in the data access layer.
To understand that concern, it helps to know that many digital asset applications don’t connect to blockchains directly. Instead, they depend on supporting infrastructure that makes blockchain data easier to access: RPC endpoints, indexing systems, query APIs, and cross-chain messaging networks.
These services power nearly every aspect of the ecosystem, including wallets, exchanges, bridges, treasury systems, and automated trading platforms.
The Trust Assumption
The industry has largely solved the problem of blockchain data availability. Blockchain data can now be indexed, queried, streamed, and consumed across financial systems. The harder problem has been determining whether that data can be trusted by automated systems.
Most applications access blockchain data through RPC providers and accept the returned data at face value. In doing so, they give up one of blockchain’s core advantages: the ability to cryptographically verify data accuracy. Instead, the data is treated more like a conventional database response and assumed to be correct.
But in practice, that trust assumption creates significant risk of loss. Blockchain-native events like chain reorganizations (reorgs) can change previously accepted state, while compromised infrastructure can surface incorrect data to downstream systems. This highlights the gap between data availability and data integrity.
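To make the reorg risk concrete, here is a minimal sketch of reorg-aware data acceptance. The names (`Block`, `is_final`, `detect_reorg`) and the 12-confirmation policy are illustrative assumptions, not any specific provider's API: a block is treated as final only after enough blocks build on top of it, and a recorded hash is re-checked to detect when previously accepted state has been reorged out.

```python
from dataclasses import dataclass

# Illustrative policy choice; appropriate depth varies by chain and risk tolerance
CONFIRMATIONS_REQUIRED = 12

@dataclass
class Block:
    height: int
    hash: str

def is_final(block: Block, chain_tip_height: int) -> bool:
    """Treat a block as final only once enough blocks have been built on top of it."""
    return chain_tip_height - block.height >= CONFIRMATIONS_REQUIRED

def detect_reorg(recorded: Block, current_hash_at_height: str) -> bool:
    """If the hash at a given height changes, previously accepted state was reorged out."""
    return recorded.hash != current_hash_at_height

# Example: a block accepted at height 100 whose hash later differs
accepted = Block(height=100, hash="0xabc")
print(is_final(accepted, chain_tip_height=105))   # False: only 5 confirmations
print(is_final(accepted, chain_tip_height=120))   # True: 20 confirmations
print(detect_reorg(accepted, "0xdef"))            # True: state changed underneath us
```

A workflow that applies even a simple check like this refuses to act on data that is structurally valid but no longer part of the canonical chain.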
The KelpDAO incident demonstrated how that gap can be exploited. A workflow accepted data that appeared structurally valid but was sourced from compromised infrastructure. According to public reporting, attackers allegedly manipulated upstream infrastructure feeding the verification environment rather than directly exploiting the underlying blockchain (Hypernative).
For institutions and enterprises, this changes how risk is assessed. The question is now whether data entering automated workflows can be independently validated before actions are executed.
Bigger Risks with Automation
The risk becomes significantly larger as financial systems become more autonomous:
- Cross-chain systems now automatically move billions of dollars across environments.
- Agents are beginning to initiate transactions independently.
- Treasury systems increasingly depend on automated execution layers.
In these environments, compromised data becomes a transactional risk.
If compromised infrastructure feeds incorrect information into an automated system, the system may execute irreversible financial actions before humans ever intervene. This introduces a new type of operational risk and a need for verifiability.
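One common mitigation for this class of risk is a fail-closed execution guard: the automated action runs only if its inputs pass validation, and anything unverifiable halts the workflow instead of triggering an irreversible transaction. The sketch below uses hypothetical names and toy validators to show the pattern, not any production system.

```python
# Fail-closed guard: execute only when every validator accepts the input.
def guarded_execute(payload: dict, validators: list, execute) -> str:
    """Run `execute` only if all validators pass; otherwise halt for review."""
    for validate in validators:
        if not validate(payload):
            return "halted: input failed validation, escalating to review"
    return execute(payload)

# Toy validators and action, purely for illustration
def has_source_attestation(p): return p.get("attested", False)
def within_limits(p): return p.get("amount", 0) <= 1_000_000

action = lambda p: f"executed transfer of {p['amount']}"

print(guarded_execute({"amount": 500, "attested": True},
                      [has_source_attestation, within_limits], action))
print(guarded_execute({"amount": 500, "attested": False},
                      [has_source_attestation, within_limits], action))
```

The key design choice is the default: when validation cannot succeed, the system stops rather than proceeds, so compromised data stalls a workflow instead of draining a treasury.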
The Shift Toward Verifiable Data
The industry is moving toward a more explicit verification model, not by replacing existing infrastructure but by strengthening it.
At a high level, this means:
- Independent validation paths
- Cross-checking mechanisms across data sources
- Verifiable guarantees attached to queried data
- Systems that can reject inconsistent or unverifiable inputs
Teams that prioritize accuracy typically combine several of these approaches.
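The second and fourth items above can be sketched together: query the same fact from independent sources and reject it unless a quorum agrees. The provider names and quorum threshold below are assumptions for illustration, not any specific product's API.

```python
from collections import Counter

def cross_check(responses: dict[str, str], quorum: int) -> str:
    """Return the value reported by at least `quorum` sources, else raise."""
    counts = Counter(responses.values())
    value, votes = counts.most_common(1)[0]
    if votes < quorum:
        raise ValueError(f"no quorum: best value has {votes}/{len(responses)} votes")
    return value

# Three independent endpoints report a balance; one is compromised.
responses = {
    "provider_a": "1000",
    "provider_b": "1000",
    "provider_c": "999999",  # inconsistent: possibly manipulated
}
print(cross_check(responses, quorum=2))  # majority value survives the bad source
```

Cross-checking raises the bar from "compromise one endpoint" to "compromise a quorum of independent endpoints," which is the same intuition behind decentralized verifier networks.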
Edge & Node’s Amp takes a different approach by cryptographically verifying the accuracy of raw data extracted from each block. This method provides the strongest guarantee of data integrity without requiring redundant validation checks.
Amp reflects a broader shift across blockchain infrastructure. As more institutions adopt automation, auditability and governance requirements become increasingly important.
Where Edge & Node Fits
As blockchain infrastructure continues to become more integrated in financial systems, compliance and risk management increasingly need data that can be verified and audited.
The goal is no longer just to make blockchain data accessible, but to make it reliable enough for production systems that require trustworthy inputs, auditability, and operational accountability.
Edge & Node is focused on strengthening the reliability and verifiability of data. Amp is designed to help teams move beyond implicit trust assumptions in blockchain infrastructure by structuring blockchain data into verifiable, compliant, and queryable datasets that can be screened before automated systems act on them.
Amp supports workflows such as:
- Automated treasury operations
- Cross-chain financial applications
- Analytics and monitoring systems
- Risk and compliance infrastructure
As automation expands, these systems need to enforce data governance and reproducibility, and Amp helps them do this.
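One generic way to support reproducibility and audit requirements (a sketch of the general technique, not Amp's actual mechanism) is to fingerprint pipeline inputs: canonicalize each batch of records and hash it, so two runs over the same data produce the same digest for the audit log.

```python
import hashlib
import json

def fingerprint(records: list[dict]) -> str:
    """Deterministic digest of a batch: sorted keys, stable separators."""
    canonical = json.dumps(records, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

batch = [{"block": 100, "tx": "0xabc"}, {"block": 101, "tx": "0xdef"}]
print(fingerprint(batch))  # same input always yields the same digest
```

A digest like this lets an auditor confirm that the data a system acted on is byte-for-byte the data in the record, without re-running the pipeline.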
This ongoing shift isn’t only about improving infrastructure performance; it’s also about making data pipelines more reliable. Amp helps reduce both compliance and audit risk, which are top concerns for financial institutions. If data becomes corrupted, it can create seven-figure liabilities for stablecoin issuers under frameworks like the GENIUS Act. Even when working with third-party data providers, those liabilities can’t be transferred away.
Amp mitigates that risk through certifiable data guarantees for teams that cannot accept that level of exposure.
A New Infrastructure Priority
The KelpDAO incident did not introduce a new category of risk. It made an existing one harder to ignore. As systems become more automated and interconnected, the integrity of data inputs will increasingly define the reliability of the entire stack.
Amp is designed to address the next phase of blockchain infrastructure by enabling teams to build systems with verifiability, rather than implicit trust in upstream infrastructure providers.
Explore how Amp supports verifiable data pipelines for production systems or talk to our advisory team.