The Economics of Attacking a Secure Element

Why an attack only makes sense if it scales or hits the right target, and what that means for you.

Patrick Dike-Ndulue

Security is often discussed in absolute terms: an attack is either possible or it isn't. This framing is technically precise but practically misleading. In the world of cybersecurity, the relevant question is rarely "can this be done?" Almost anything can be done given enough resources. The relevant question is: "Does it make economic sense to do it?"

Every attack on a hardware wallet has a cost structure: upfront investment in tools, infrastructure, and expertise; operational execution costs; the probability of success; and the expected return per target. An attacker is, in economic terms, running a business. Like any business, it only survives if revenues exceed costs, which means the attack must either work at scale or target victims whose assets justify the individual expense.

This article, the fourth in our secure element series, analyzes the major attack categories targeting secure elements through an economic lens. For each attack type, we examine its cost, the number of targets required to be profitable, which actor categories are realistically positioned to execute it, and what that means for the protection a certified secure element actually provides.
 

The attacker's balance sheet

Before examining individual attack types, let’s lay out the variables that determine whether an attack makes economic sense.

Fixed costs

Fixed costs are paid once, regardless of how many targets are hit. They include: acquiring or developing attack tools and malware, purchasing laboratory equipment, bribing or socially engineering insiders, and investing time in research and development.

For software-based attacks, fixed costs are low; a competent developer can build clipboard-hijacking malware for a few hundred to a few thousand dollars. For hardware-based attacks, fixed costs can reach millions before a single wallet is touched.

Variable costs

Variable costs scale with the number of targets. Distributing malware to an additional million devices costs almost nothing incrementally. Running a focused ion beam analysis on one additional secure element costs tens of thousands of dollars in machine time, specialist labor, and consumables. This is the single most important economic difference between mass and targeted attacks: mass attacks have near-zero marginal cost per victim, while targeted attacks have enormous marginal cost per victim.

Expected return per target

Crypto holdings are highly unequally distributed. The median hardware wallet user holds a few thousand dollars. Whales, a small fraction of users, hold millions. An attack that can only be run individually must be aimed at the right individual. An attack that runs at scale harvests whatever it finds across the entire population, making it viable even when most targets hold modest amounts.

Probability of success

Technical difficulty affects the probability of success, thereby directly discounting the expected return. A malware campaign that succeeds on 2% of infected devices still generates revenue at scale. A fault injection attack that has a 40% probability of destroying the target device while attempting extraction, and is being run against a single person whose holdings you may have overestimated, has a very different risk profile.

The economic logic of attacks is simple: cost divided by expected return must be less than one. The secure element dramatically increases the cost side of that equation for targeted attacks. It does almost nothing to raise costs for mass attacks, which go around the chip entirely.
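That break-even condition can be made concrete with a toy expected-value model. All figures below are illustrative, not drawn from real incident data:

```python
# Toy expected-value model of an attack campaign.
# All numbers used here are illustrative, not real incident data.
def attack_is_rational(fixed_cost: float, cost_per_target: float,
                       n_targets: int, success_rate: float,
                       avg_return_per_success: float) -> bool:
    """True when expected revenue exceeds total cost."""
    total_cost = fixed_cost + cost_per_target * n_targets
    expected_revenue = n_targets * success_rate * avg_return_per_success
    return expected_revenue > total_cost

# Mass malware: near-zero marginal cost, tiny hit rate, modest returns.
assert attack_is_rational(fixed_cost=5_000, cost_per_target=0.001,
                          n_targets=1_000_000, success_rate=0.02,
                          avg_return_per_success=500)

# Targeted lab attack on a median holder: enormous per-target cost.
assert not attack_is_rational(fixed_cost=2_000_000, cost_per_target=50_000,
                              n_targets=1, success_rate=0.5,
                              avg_return_per_success=5_000)
```

The same function captures both regimes: the mass attack is rational despite a 2% hit rate, while the targeted attack fails unless the single victim's holdings are enormous.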

Attack Type 1: Malware and clipboard hijacking

Malware-based attacks against cryptocurrency users are software operations. The most prevalent variant is the clipboard hijacker: a background process that monitors the system clipboard and, when it detects a string matching a cryptocurrency address format, silently replaces it with an attacker-controlled address. The user copies a legitimate address, pastes what they believe is that address, and the funds go elsewhere.
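The format matching at the heart of this attack is trivial to implement, which is part of why it is so cheap. A minimal sketch (the patterns here are simplified; real malware covers many more chains and formats):

```python
import re

# Common address-format patterns a clipboard hijacker scans for.
# Simplified for illustration; real malware covers many more formats.
PATTERNS = {
    "bitcoin_legacy": re.compile(r"^[13][a-km-zA-HJ-NP-Z1-9]{25,34}$"),
    "bitcoin_bech32": re.compile(r"^bc1[a-z0-9]{25,59}$"),
    "ethereum": re.compile(r"^0x[a-fA-F0-9]{40}$"),
}

def looks_like_crypto_address(text: str) -> bool:
    """Return True if clipboard text matches a known address format.
    A hijacker polls the clipboard and swaps any match for its own
    address of the same format."""
    return any(p.match(text) for p in PATTERNS.values())
```

Because any plausible-looking address will be accepted by the network, the only reliable countermeasure is comparing the address shown on the hardware wallet's screen against the intended recipient.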

More sophisticated variants include transaction-intercepting browser extensions that modify destination addresses inside web interfaces before they are displayed, screen overlay attacks that superimpose fake UIs over legitimate wallet software, and keyloggers that capture seed phrases if a user is ever careless enough to type one. 

Entire malware-as-a-service ecosystems exist on darknet markets, and criminal organizations can purchase or rent these tools without writing a line of code.

Cost structure

  • Developing clipboard hijacking malware from scratch: roughly $500–$3,000 for a competent developer.
     
  • Purchasing it from a darknet marketplace: as low as a few hundred dollars.
     
  • Distribution (embedding it in fake wallet software, malicious browser extensions, cracked applications, or phishing downloads) adds operational cost, but campaigns can reach hundreds of thousands of devices for under $10,000 in total investment.

Variable cost per additional infected device is effectively zero. The marginal cost of infecting device number one million is the same as that of device number one: the malware is already written.

Who can execute this attack?

This is the operational territory of organized criminal groups and darknet developers. Nation-states are generally overqualified for this attack category; it doesn't require their capabilities. At the lower end, individual criminals with basic technical skills can purchase complete toolkits. At the upper end, sophisticated criminal organizations operate an infrastructure that rivals legitimate software companies in scale and professionalism.

What the secure element does, and doesn't do

Against key theft: the secure element is a complete defense. Malware running on an infected PC cannot reach inside the chip. The private key is not in any memory location accessible to the operating system.

Against the clipboard attack: the secure element does not help directly. The attack does not attempt to steal the key. It substitutes the destination address before the transaction is submitted. The key signs exactly what it is asked to sign (in this case, a payment to the wrong address), because neither the chip nor the firmware has any way to know the address was tampered with upstream.

The only defense against the clipboard attack is behavioral. Verify the recipient address before confirming. If the address displayed on the device screen matches the intended recipient, the transaction is safe regardless of any malware running on the PC.

Economic verdict

Malware attacks make economic sense at any scale because their cost basis is near zero relative to the total addressable victim pool. The secure element eliminates the most valuable attack vector, key theft, but the clipboard substitution attack remains viable and is the far more common real-world threat. The economics favor the attacker at the mass level because they are not attacking the chip at all.

 

Attack Type 2: Firmware compromise

A firmware supply chain attack inserts malicious code into the software that runs on hardware wallets before or during distribution. The compromised firmware looks and behaves identically to the legitimate version from the user's perspective. Underneath, it may generate weak or predictable private keys, silently exfiltrate key material through covert channels, display different addresses on-screen than those actually being signed, or create a backdoor that activates on a specific trigger.
 

The attack surface includes the manufacturer's development infrastructure (build servers, code signing systems), the update delivery pipeline, resellers who install custom firmware before shipping, and, in some cases, community-maintained or third-party firmware distributions.

Cost structure

Compromising a software build pipeline at a manufacturer requires either a sophisticated intrusion operation (breaking into and persisting inside a well-defended corporate network) or an insider who has been recruited, bribed, or coerced. Intrusion costs depend heavily on the target's defenses but typically run into six figures for a genuine persistent compromise of a security-focused organization. Insider recruitment varies widely: a developer at a small hardware wallet company might be approachable for a relatively modest sum; a senior engineer with signing-key access at a major manufacturer is a far more expensive and higher-risk proposition.
 

Once inside, the attacker modifies firmware and signs it with the manufacturer's legitimate key, bypassing secure boot entirely, since the modified firmware carries a genuine signature. The attack then propagates automatically to all users who install the update.

Variable cost per additional affected device: near zero. Like malware, this attack scales after the fixed cost of the initial compromise.

Who can execute this?

Nation-state intelligence agencies are the most capable and motivated actors at this tier. They have the resources for sustained corporate intrusion operations, the ability to legally or extra-legally pressure manufacturers in their jurisdiction, and strategic motivations beyond pure financial return: intelligence collection, sanctions enforcement, or the ability to monitor specific targets at scale.

Sophisticated, organized criminal groups capable of persistent corporate intrusions do exist, but they are rare. The risk profile of compromising a well-known hardware wallet manufacturer, the significant law enforcement attention, and the limited ability to launder returns make pure criminal motivation less compelling than nation-state motivation.

What the secure element does

This is where the relationship between the secure element and the firmware layer becomes critical. A firmware compromise does not break the secure element directly. What it does is control what the secure element is asked to do, and what results get displayed to the user.
 

Some secure element implementations are architected to make firmware-based attacks harder: the chip verifies certain operations independently of firmware instructions, and its attestation functions allow external verification of its state. But in most standard hardware wallet designs, the firmware is the trusted intermediary between the user and the chip. Compromised firmware is in a very powerful position.


The practical defense is firmware verification: independently checking that the installed firmware's hash matches the officially published value before use, and restricting when and how firmware updates are installed.
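A minimal sketch of the hash-check half of that defense, assuming the manufacturer publishes a SHA-256 for each release (the function name and workflow are illustrative; follow your vendor's documented verification procedure):

```python
import hashlib

def verify_firmware(firmware_path: str, published_sha256: str) -> bool:
    """Compare a downloaded firmware image against the hash the
    manufacturer publishes out of band (website, signed release notes).
    Illustrative sketch, not any vendor's actual tooling."""
    h = hashlib.sha256()
    with open(firmware_path, "rb") as f:
        # Read in chunks so large images don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == published_sha256.lower()
```

Note the limitation: a hash check catches a tampered download, but not a compromise of the build pipeline itself, where the published hash would match the malicious image.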

Economic verdict

Firmware supply chain attacks require significant fixed cost investment but scale well across a large victim pool; every user of a compromised update becomes a target.

The minimum viable holding per victim is relatively modest because of the scale: if ten thousand users install a compromised update and the average holding is $5,000, the total addressable pool is $50 million. Fixed costs of $1,000,000–$5,000,000 against that pool represent a reasonable return on investment for a well-organized attacker.
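Worked through with the article's illustrative figures (an upper bound, since no real campaign captures the entire pool):

```python
# Gross return on the firmware-compromise example above (illustrative).
victims = 10_000
avg_holding = 5_000
pool = victims * avg_holding      # total addressable pool
fixed_cost_high = 5_000_000       # high end of the fixed-cost range

assert pool == 50_000_000
assert pool / fixed_cost_high == 10.0   # 10x gross return at the high end
```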

This is why this attack category is most associated with nation-states rather than conventional criminals: the economics work, but the execution requires capabilities and risk tolerance beyond those of typical criminal organizations.

 

Attack Type 3: Supply chain hardware substitution

Hardware supply chain attacks physically modify or replace components in the device before it reaches the end user. The most sophisticated variant replaces the genuine certified secure element with a counterfeit chip that appears identical but is designed to leak key material. 

Less sophisticated variants add components, such as a secondary chip that monitors communications between the main processor and the secure element, or a covert radio transmitter.

Documented real-world examples exist: intelligence agency programs to intercept and implant hardware in network equipment during shipping have been reported in multiple countries. The technical capability is established. The open question is whether it is applied to consumer hardware wallets, which requires a specific economic motivation.

Cost structure

Designing and fabricating a counterfeit secure element that is functionally identical to a certified chip in appearance and basic behavior, while containing a backdoor, requires semiconductor design capabilities, access to a fabrication facility, and substantial reverse engineering of the original chip. This is a multi-million dollar fixed investment, plausibly in the $1M–$5M range depending on chip complexity.

Intercepting devices in the supply chain (identifying the right shipments, accessing them, performing the swap without leaving physical evidence of tampering, and resealing the packaging) adds high operational cost and risk of detection. Doing this at scale multiplies both cost and detection probability.
 

This is why hardware substitution attacks are fundamentally small- to medium-scale operations. The economics do not support mass deployment. The fixed cost per batch of implanted devices is high, and the attack is most viable when targeting a small number of high-value targets rather than conducting a statistical sweep.

Who can execute this?

Nation-state intelligence agencies. This is not an attack category that organized crime can realistically execute; the fabrication and logistics requirements put it firmly in state-sponsored territory. The motivation is typically not financial: it is surveillance, intelligence collection, or the ability to covertly drain assets of a specific adversary (a sanctioned individual, a political opponent, a foreign government official).
 

Rogue insiders at the manufacturing level (a quality control engineer at a chip fabrication facility, a warehouse employee at a distribution center) could introduce hardware modifications at lower cost, but with much narrower scale and much higher personal legal risk.

What the secure element does, and doesn't do

A genuine, certified secure element with strong tamper resistance makes hardware substitution significantly harder. The chip's physical security features (active shields, encrypted memory, tamper detection) mean that modifying a genuine chip in situ is extremely difficult. Replacing it with a lookalike requires fabricating or sourcing a chip that can pass any authenticity attestation the device performs.
 

Many hardware wallet manufacturers implement attestation mechanisms: the device can cryptographically prove that its secure element is the genuine component it claims to be. This directly counters the substitution vector; a replaced chip that cannot pass attestation raises an immediate flag.
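The challenge-response shape of attestation can be sketched as follows. This is a deliberately simplified stand-in: real secure elements answer the challenge with an asymmetric signature from a per-chip key whose certificate chains back to the manufacturer, not a shared-secret HMAC, and every name here is illustrative:

```python
import hashlib
import hmac
import os

# Stands in for a secret provisioned into the chip at the factory.
# Real designs use an asymmetric per-chip key plus a certificate chain.
DEVICE_KEY = os.urandom(32)

def chip_attest(challenge: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """What the genuine chip computes over a host-supplied challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def host_verify(challenge: bytes, response: bytes,
                key: bytes = DEVICE_KEY) -> bool:
    """Host-side check: a counterfeit chip without the key cannot
    answer a fresh random challenge correctly."""
    return hmac.compare_digest(response, chip_attest(challenge, key))

challenge = os.urandom(16)   # fresh per session, so replays fail too
assert host_verify(challenge, chip_attest(challenge))
assert not host_verify(challenge, os.urandom(32))   # counterfeit fails
```

The fresh random challenge is the important design point: a counterfeit chip cannot replay a previously observed answer, so it must actually possess the key.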

Attestation mechanisms are only as strong as their implementation, and a sophisticated adversary with access to the manufacturer's attestation keys or who has compromised the attestation process can forge credentials. This is a residual risk, not a solved problem.

Economic verdict

Hardware substitution makes economic sense only for highly targeted operations against high-value individuals or for strategic national intelligence purposes. The cost floor (millions of dollars before a single wallet is compromised) means this attack is economically irrational against anyone holding less than substantial seven-figure crypto holdings. For most hardware wallet users, this attack is purely theoretical from an economic standpoint.

Attack Type 4: Fault injection and voltage glitching


Fault injection attacks attempt to cause a secure element to malfunction in a controlled way, one that causes the chip to bypass security checks, output sensitive data it would not normally reveal, or enter an exploitable error state. The most accessible variant is voltage glitching: briefly dropping or spiking the chip's power supply at a precisely timed moment during a cryptographic operation, attempting to corrupt an instruction in a way that skips a PIN verification check or causes the chip to output intermediate key material.

Clock glitching works similarly, manipulating the timing signal that synchronizes the chip's operations. Laser fault injection is the most sophisticated semi-invasive variant: a precisely aimed laser pulse at a known location on the chip die can flip specific bits in memory or disrupt specific operations. It requires decapping the chip to expose the die (itself a technically demanding step) and a laser setup capable of micron-level precision.

These attacks are heavily documented in academic security research. Published papers describe the successful extraction of key material from older or less hardened microcontrollers using these techniques. Against certified secure elements, the task is substantially harder: modern chips implement voltage monitors, clock monitors, light sensors, and randomized timing that detect or defeat most glitching attempts.

Cost structure

A basic voltage-glitching setup (an FPGA development board, an oscilloscope, and power analysis tools) can be assembled for $5,000–$15,000, and open-source glitching frameworks such as ChipWhisperer exist. At this level, the attack has a meaningful success probability against uncertified microcontrollers but a much lower probability against a hardened secure element that actively monitors for exactly these conditions.

A more capable setup (laser fault injection with a decapping station, a precision positioning system, and a dedicated laser source) runs $50,000–$200,000 in equipment alone, plus the expertise to operate it effectively. This is the approximate capability level of a well-funded academic security lab or a professional hardware security consultancy.

Critically, these attacks destroy or damage a significant fraction of target devices during the attempt. A 30–50% device loss rate is not unusual. Each destroyed device is a failed attempt with no return. The cost per successful extraction is therefore substantially higher than the equipment cost alone.
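The destruction rate compounds the cost in a simple way: if each attempt succeeds with probability p, the attacker needs 1/p attempts on average. A quick sketch with illustrative figures:

```python
def cost_per_success(attempt_cost: float, success_prob: float) -> float:
    """Expected cost per successful extraction: geometric expectation,
    on average 1/success_prob attempts are needed."""
    return attempt_cost / success_prob

# Illustrative: $20,000 of lab time per attempt, 50% success rate
# (the other half of attempts destroys the device) gives an expected
# $40,000 per extracted key, before equipment amortization.
assert cost_per_success(20_000, 0.5) == 40_000
```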

Who can execute this?

At the lower equipment tier: skilled individual security researchers and small, technically sophisticated criminal groups. The barrier is not just financial; it requires genuine expertise in embedded hardware security, signal analysis, and chip-level debugging. This is a specialized skill set.

At the laser fault injection tier: professional security research firms, academic institutions, and well-resourced criminal organizations. Nation-states have these capabilities routinely.

The criminal use case requires a clear economic justification: the attacker must have reason to believe the target device contains holdings that justify the equipment cost, the time investment (typically days to weeks per target), and the failure rate. This means the attack is only rational when directed at a specific individual known to hold significant cryptocurrency, not a random stolen device.
 

What the secure element does, and doesn't do

Certified secure elements implement extensive countermeasures against fault injection. Voltage and clock monitors trigger automatic memory erasure when anomalous conditions are detected. Randomized timing and dummy operations obscure the precise moment at which sensitive operations occur, making timed attacks dramatically harder. The execution environment inside the chip is specifically hardened against the exact conditions that fault injection attempts to create.
 

At the CC EAL5+ and EAL6+ certification level, the evaluation process includes active attempts to extract keys using these techniques. A successful certification means that trained evaluators with appropriate equipment tried and failed within the scope of the evaluation. This is meaningful assurance, not a guarantee, but a strong data point.
 

The residual risk is real but bounded: a sufficiently resourced attacker with significant time investment may be able to find a device-specific implementation weakness. Academic papers have demonstrated successful attacks against certified chips under specific conditions. The key variable is the effort-to-return ratio for the specific target.

Economic verdict

Fault injection attacks are individually expensive, time-consuming, have a meaningful failure rate, and require specialized skills. They are economically rational only against targets known to hold $100,000 or more, and more realistically, $500,000 or more, to justify the cost and risk. For the vast majority of hardware wallet users, this attack is not economically viable against them specifically. The secure element's countermeasures are a major cost multiplier that pushes the minimum viable target value upward.

 

Attack Type 5: Scanning electron microscopy and invasive chip analysis

Invasive chip analysis is the most technically demanding and expensive attack category. It begins with decapping: chemically or mechanically removing the chip's protective packaging to expose the silicon die. From there, the analysis may proceed via scanning electron microscopy (SEM) to visually map the chip's circuit structure, focused ion beam (FIB) analysis to modify specific circuit paths or probe internal signals, or direct probing of exposed memory cells using nanoscale electrical contacts.

The goal is typically one of two things: reverse-engineer the chip's architecture well enough to identify a previously unknown implementation vulnerability that can then be exploited without full physical invasiveness; or directly read protected memory cells that contain key material, assuming the chip's encryption and scrambling mechanisms can be bypassed or defeated at the physical level.

Against a modern certified secure element, both goals are extremely difficult. The chip's memory is encrypted with keys derived from physical unclonable functions (PUFs), values unique to each chip that cannot be externally predicted or replicated. Even physically reading the memory cells yields ciphertext that is useless without the PUF-derived decryption key, which is itself not stored anywhere readable.

Cost structure

An SEM suitable for semiconductor analysis costs $300,000–$1,000,000. An FIB system costs $500,000–$3,000,000. Operating these systems requires specialized training and typically the support of a laboratory team. Sample preparation (decapping the chip without damaging the die, preparing cross-sections, and coating for electron imaging) requires additional equipment and expertise.
 

Total laboratory setup cost for a capability sufficient to meaningfully attack a modern certified secure element: conservatively $1M–$5M in capital equipment, plus ongoing operational costs and highly skilled personnel.

Unlike fault injection, there is essentially no commercial-off-the-shelf ecosystem for this attack. The knowledge and equipment are concentrated in semiconductor companies, academic research institutions, national laboratories, and defense contractors.

Who can execute this?

Nation-state intelligence agencies are the primary realistic actors at this capability level. The NSA, GCHQ, their counterparts in China, Russia, Israel, and a handful of other states have documented capabilities in this area. A small number of elite private security research firms have partial capabilities, enough to do meaningful analysis, but not necessarily enough to defeat the most hardened implementations.

There is no credible scenario in which a purely financial criminal organization executes this attack. The capital requirements, the expertise requirements, and the time required (weeks to months per target) make the economics nonviable for anything other than the most extraordinary individual targets (think: seized exchange wallets containing hundreds of millions of dollars under active government investigation).

What the secure element does, and doesn't do

At this capability level, the secure element is the last line of defense, and it is designed specifically for this threat. Physical unclonable functions, active shields that destroy memory on intrusion detection, encrypted internal buses, scrambled memory addressing: these countermeasures make invasive analysis of a modern certified secure element a genuine research problem, not a solved one.

Published academic research has demonstrated partial successes against specific chip implementations under controlled conditions. These results are important; they demonstrate that the protection is not absolute, but they also typically require months of work by expert teams against chips that are not the latest generation. Chip manufacturers read the same academic papers and update their implementations.
 

The honest assessment is that no public evidence exists that the private keys in a current-generation certified secure element have been extracted via invasive analysis under operational (non-laboratory) conditions. The theoretical possibility is real; the operational demonstration is not.

Economic verdict

This attack makes economic sense for nation-states pursuing strategic objectives where financial return is not the primary motivation, and potentially for extraordinary financial targets where the holdings justify multi-million dollar extraction costs. For any individual holding less than eight figures in cryptocurrency, the economics are simply irrational; no actor capable of this attack has a financial incentive to apply it to that target.

What the economics tell us

Mapping these five attack categories against their cost structures produces a picture that is counterintuitive in one important way: the secure element's protection is most complete against the most expensive attacks and least relevant against the cheapest ones.

Malware and clipboard hijacking, which cost almost nothing to deploy and target millions of people simultaneously, do not attack the secure element at all. They attack user behavior at transaction time. The secure element is irrelevant to this threat, not because it fails, but because it is not in the attack path. The clipboard attack happens upstream of the chip.
 

Firmware supply chain attacks (medium cost, medium scale) threaten the layer that controls the chip rather than the chip itself. The secure element's value here depends on how well the firmware layer is hardened and verified. It provides partial but not complete protection.
 

Hardware substitution, fault injection, and invasive analysis (each progressively more expensive and more individually targeted) are where the secure element's engineering genuinely shines. Each dollar of additional chip security directly raises the cost floor for these attacks, and that floor is already high enough that the economics are irrational for all but the most extraordinary targets.

The secure element is economically most powerful against the attacks that cost the most to execute. Against attacks that cost almost nothing, because they bypass the chip, it provides no economic friction at all. This is not a contradiction; it is a precise description of what the chip is and isn't designed to do.

The practical consequence for a hardware wallet user is this: your most likely threat is not a laboratory attempting to extract your key. It is malware on your PC manipulating a transaction while you are not paying attention. The secure element is excellent insurance against a scenario that is unlikely. The on-device address verification habit is essential protection against a scenario that is common.

Understanding the economics of attacks is not an argument for complacency; it is an argument for allocating attention correctly. The chip handles the expensive attacks. The cheap attacks require your attention.

 

Attack economics: quick reference

| Attack type | Scale | Attacker cost | Min. viable target | Who executes this |
| --- | --- | --- | --- | --- |
| Malware / clipboard hijack | Mass (millions) | $500–$5,000 to deploy | Any amount (statistical game) | Criminal orgs, darknet devs |
| Supply chain: firmware | Medium (thousands) | $50K–$500K+ (org compromise) | $10K+ per wallet on average | Nation-states, organized crime |
| Supply chain: hardware swap | Small–medium (hundreds) | $100K–$1M+ (logistics + fab) | $50K+ per wallet on average | Nation-states, rogue insiders |
| Fault injection / glitching | Individual | $5K–$50K (equipment + time) | $100K+ individual holding | Skilled criminals, researchers |
| SEM / invasive chip analysis | Individual | $500K–$5M+ (lab + expertise) | $1M+ individual holding | Nation-states, elite labs |

Mass → individual attack spectrum

MASS ← ─ ─ ─ ─ ─ SPECTRUM ─ ─ ─ ─ ─ → INDIVIDUAL

| Attack | Cost | Targets |
| --- | --- | --- |
| Malware | hundreds $ | anyone |
| Firmware supply chain | tens of thousands $ | a product line |
| Hardware supply chain | hundreds of thousands $ | a batch |
| Fault injection / glitch | tens of thousands $ | one device |
| SEM / chip analysis | millions $ | one key |

Patrick Dike-Ndulue is the Tangem Blog's editor.