Smart Contract Audits: Why 'Audited' Doesn't Mean Safe
The protocol was audited by three firms. It still got hacked for $100M. Here's what audits actually do, and what they don't.
"Don't worry, we're audited by Trail of Bits."
Famous last words.
Cream Finance was audited. Lost $130M. Wormhole was audited. Lost $320M. Ronin was audited. Lost $625M.
Audits are necessary. They're not sufficient.
Let me explain why.
What auditors actually do
An audit is a code review by security experts.
They:
- Read through the smart contract code
- Look for known vulnerability patterns
- Test edge cases
- Check logic errors
- Verify the code does what it claims
Duration: typically 2-6 weeks for complex protocols.
Cost: $50,000 to $500,000+ for top firms.
Output: A report listing findings by severity.
That's it. It's a snapshot review by humans who have limited time.
What auditors can find
Known patterns. Reentrancy, overflow, access control issues. Well-documented vulnerabilities with clear signatures.
Logic bugs. "This function should require X but doesn't." Requires understanding intent.
Gas issues. Inefficiencies that could make the contract unusable.
Centralization risks. Admin keys that could drain funds. Upgrade mechanisms that bypass security.
Good auditors find real bugs. That's valuable.
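Reentrancy, the best-known of those patterns, is worth seeing concretely. Below is a minimal Python sketch, not real Solidity; `VulnerableVault`, `Attacker`, and the deposit amounts are invented for illustration. The bug is ordering: the vault makes an external call before updating its own state.

```python
class VulnerableVault:
    def __init__(self):
        self.balances = {}
        self.total = 0

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.total += amount

    def withdraw(self, who):
        amount = self.balances.get(who, 0)
        if amount > 0:
            who.receive(self, amount)   # external call FIRST -- the bug
            self.total -= amount
            self.balances[who] = 0      # state updated LAST -- too late

class Honest:
    def receive(self, vault, amount):
        pass

class Attacker:
    def __init__(self):
        self.stolen = 0
        self.reentered = False

    def receive(self, vault, amount):
        self.stolen += amount
        if not self.reentered:
            self.reentered = True
            vault.withdraw(self)        # re-enter while our balance is still recorded

vault = VulnerableVault()
alice, mallory = Honest(), Attacker()
vault.deposit(alice, 100)
vault.deposit(mallory, 10)
vault.withdraw(mallory)
print(mallory.stolen)   # 20: the 10-unit deposit was withdrawn twice
```

The standard fix, "checks-effects-interactions", is simply to zero the balance before making the external call. Auditors catch this pattern reliably because it has a clear signature.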
What auditors miss
Novel attack vectors. If nobody's seen this type of attack before, auditors might not think of it.
Economic attacks. The code works perfectly, but incentives are broken. Flash loan manipulation, governance attacks, oracle exploits.
Integration issues. Contract A is secure. Contract B is secure. A + B together? Exploitable.
Post-audit changes. Code changed after audit. New bugs introduced. Audit report now misleading.
Time pressure. 4 weeks to audit 10,000 lines. Some things get missed.
Auditors are good. They're not omniscient.
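The flash-loan oracle attack mentioned above can be made concrete with a toy model. Everything here is an assumption for illustration: the `AMM` class, the 50% loan-to-value ratio, and all the numbers. The point is that each component behaves correctly in isolation, yet one large swap against a constant-product pool skews the spot price, and with it the borrowing power of unchanged collateral.

```python
class AMM:
    """Toy constant-product pool; spot price of TOKEN in USD = usd / token."""
    def __init__(self, token_reserve, usd_reserve):
        self.token = token_reserve
        self.usd = usd_reserve

    def spot_price(self):
        return self.usd / self.token

    def buy_token(self, usd_in):
        k = self.token * self.usd       # x * y = k invariant
        self.usd += usd_in
        new_token = k / self.usd
        out = self.token - new_token
        self.token = new_token
        return out

def max_borrow(collateral_tokens, oracle_price):
    # Lender credits 50% of oracle-valued collateral (illustrative LTV)
    return 0.5 * collateral_tokens * oracle_price

pool = AMM(1_000_000, 1_000_000)        # fair price: 1.0 USD per token
attacker_collateral = 100_000

before = max_borrow(attacker_collateral, pool.spot_price())
pool.buy_token(1_000_000)               # flash-loaned USD skews the pool
after = max_borrow(attacker_collateral, pool.spot_price())

print(before, after)   # 50000.0 200000.0 -- same collateral, 4x the credit
```

No line of this code is "buggy" in the sense an auditor flags; the vulnerability lives in the economic design choice of trusting a manipulable spot price.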
The audit theater
Some protocols treat audits as checkboxes.
"Get an audit → put badge on website → claim security"
Red flags:
- Audit was 6+ months ago, code changed since
- Only one audit from unknown firm
- Critical findings marked "acknowledged" (not fixed)
- Audit scope didn't cover all contracts
- No bug bounty program
An audit badge means "someone looked at this once." That's all.
The economics problem
Top audit firms are backlogged for months.
Demand > supply means:
- Rushed timelines
- Junior auditors on complex projects
- Less thorough reviews
- Higher prices
New protocols launching need audits NOW. Quality suffers.
Meanwhile, audit firms compete on speed and price, not just quality.
The incentives are misaligned with security.
Bug bounties: The complement
Smart protocols run bug bounties alongside audits.
"Find a bug, get paid."
Immunefi hosts bounties up to $10M for critical vulnerabilities.
Why bounties work:
- Continuous coverage (not snapshot)
- Global talent pool
- Aligned incentives (finder gets paid, protocol avoids hack)
- Covers post-audit changes
Why they're not enough:
- Only covers what's reported (attackers might not report)
- Payout < hack profit (sometimes attackers prefer to exploit)
- Requires active security community attention
Audits + bounties together > either alone.
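The payout-versus-exploit tradeoff is simple arithmetic. In this sketch the bounty size, the exploitable value, and the attacker's assumed probability of keeping stolen funds are all made-up illustrative numbers:

```python
# A rational attacker weighs the guaranteed bounty against the
# risk-adjusted value of exploiting. All numbers are illustrative.
bounty = 10_000_000            # max critical payout
exploitable_tvl = 130_000_000  # value at risk if exploited
p_keep = 0.4                   # assumed chance of escaping with the funds

exploit_ev = p_keep * exploitable_tvl
print(exploit_ev, exploit_ev > bounty)   # 52000000.0 True
```

When the expected exploit value exceeds the bounty, the bounty only deters attackers who also price in legal risk or prefer legitimacy; it does not deter a pure profit-seeker. That's why bounties complement audits rather than replace them.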
Reading an audit report
If you actually read audits (you should), look for:
Severity levels. Critical/High findings fixed? Or just "acknowledged"?
Scope. What was audited? Was everything covered?
Findings count. Many findings = code quality concerns.
Response quality. Did the team fix issues properly? Or quick patches?
Date. How old is this audit? Has code changed?
Most people never read audit reports. This is how scams pass off fake audits.
Famous audited hacks
Ronin ($625M). Audited bridge. Social engineering attack on validators. Audit couldn't catch human error.
Wormhole ($320M). Audited. Bug was in Solana-specific code that auditors may have deprioritized.
Nomad ($190M). Audited. Vulnerability introduced in a "routine" update after audit.
Cream Finance ($130M). Multiple audits. Flash loan attack via oracle manipulation. Economic attack, not code bug.
Pattern: Audits catch code bugs. Many hacks aren't code bugs.
What actually keeps protocols safe
Multiple audits. Different firms find different things.
Bug bounties. Continuous coverage.
Formal verification. Mathematical proofs that code does what it should. Expensive but thorough.
Time. The longer a protocol runs without an exploit, the more confident you can be.
Simplicity. Less code = less attack surface.
Monitoring. Real-time detection of suspicious activity.
Incident response. Plan for when (not if) something goes wrong.
Security is a process, not a checkbox.
How to evaluate security
Before using a protocol:
- Is it audited? By whom? When? What scope?
- Are critical findings fixed? Read the report.
- Is there a bug bounty? How large? Active submissions?
- How long has it run? Battle-tested > newly launched.
- How complex is it? Simple and boring often beats clever and complex.
- What's the team's track record? Previous hacks? Transparent communication?
- What's at risk? Even "safe" protocols can fail. Don't deposit more than you can lose.
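Those questions can be expressed as a rough pre-deposit checklist in code. The thresholds (6 months for audit staleness, 6 months of live operation) are arbitrary assumptions for illustration, not an industry standard:

```python
def risk_flags(protocol):
    """Return a list of red flags for a protocol, per the checklist above."""
    flags = []
    if protocol["audits"] == 0:
        flags.append("no audit")
    if protocol["months_since_audit"] > 6:     # assumed staleness threshold
        flags.append("stale audit")
    if protocol["critical_findings_open"] > 0:
        flags.append("unfixed critical findings")
    if not protocol["bug_bounty"]:
        flags.append("no bug bounty")
    if protocol["months_live"] < 6:            # assumed battle-testing threshold
        flags.append("not battle-tested")
    return flags

example = {
    "audits": 2, "months_since_audit": 8, "critical_findings_open": 0,
    "bug_bounty": True, "months_live": 3,
}
print(risk_flags(example))   # ['stale audit', 'not battle-tested']
```

No score like this substitutes for reading the audit reports yourself, but an empty list is a reasonable minimum bar before depositing anything.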
The uncomfortable truth
No protocol is provably safe.
Every audit has limitations. Every bug bounty has gaps. Every formal verification has assumptions.
The question isn't "is this safe?" It's "what's the risk level and am I comfortable with it?"
Audits help answer that question. They don't eliminate risk.
Use audited protocols. Understand what audited means. Don't confuse "audited" with "guaranteed."
Next: Rug pulls - how to spot them before you're the victim.