Classical computers make errors. Cosmic rays flip bits in RAM. Power surges corrupt data. But classical error correction is solved. Server memory uses ECC (error-correcting codes) that detects and fixes single-bit errors without anyone ever knowing.
Quantum computers make errors too. Far more often, and in more ways. A qubit can flip (like a classical bit flip). A qubit’s phase can flip (something with no classical analog). Both can happen at the same time. And measuring a qubit to check for errors destroys the quantum state you are trying to protect.
This is the central challenge of quantum computing. Not building more qubits. Building qubits that are reliable enough to run deep circuits without drowning in noise.
Why Qubits Are Noisy
Quantum states are fragile. A qubit in superposition is like a spinning coin. Any interaction with the environment causes it to wobble and eventually settle. This process is called decoherence.
There are three main sources of quantum errors.
Decoherence is the loss of quantum information to the environment. Qubits interact with heat, electromagnetic radiation, vibration, and even neighboring qubits. Each interaction nudges the quantum state toward a classical state. Superconducting qubits decohere in microseconds to milliseconds. Trapped ions last longer (seconds), but they are slower.
Gate errors happen when quantum operations are not perfectly precise. A rotation that should be exactly 90 degrees might come out as 89.7 degrees. Over hundreds of gates, these small errors accumulate and poison the computation. Current two-qubit gate error rates are around 0.1% to 1%, depending on the platform.
Measurement errors occur when reading out qubits. The detector might report the wrong value. Current measurement error rates are around 1% to 5%.
These errors compound. A circuit with 1000 gates at 0.1% error per gate finishes error-free only about 37% of the time (0.999^1000 ≈ 0.37). Deeper circuits disintegrate into randomness long before they finish.
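A quick back-of-the-envelope check, assuming independent errors at a fixed rate per gate (a simplification of real noise):

```python
# Chance a circuit finishes with zero gate errors, assuming independent
# errors at a fixed rate per gate.
def success_probability(error_per_gate, num_gates):
    return (1 - error_per_gate) ** num_gates

print(success_probability(0.001, 1_000))   # ~0.37
print(success_probability(0.001, 10_000))  # ~0.000045
```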
For context: a classical logic gate has an error rate around 10⁻¹⁷ per operation, roughly 14 to 15 orders of magnitude better than current quantum hardware. Classical computers do not need error correction for most operations. Quantum computers need it for every operation.
The Error Correction Paradox
Classical error correction works by making copies. Store three copies of a bit. If one flips, the other two outvote it. This is the three-repetition code.
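In code, the classical scheme is just a majority vote (a minimal illustration):

```python
# Classical three-repetition code: store three copies, majority vote on readout.
def decode(bits):
    return max(set(bits), key=bits.count)

print(decode([0, 1, 0]))  # one copy flipped; the vote still recovers 0
```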
Quantum error correction faces two obstacles that make copying impossible.
First, the no-cloning theorem. You cannot make an identical copy of an unknown quantum state. It is a theorem, not an engineering limitation. Quantum mechanics forbids it. So you cannot simply replicate a qubit three times and vote.
Second, measurement destroys information. If you measure a qubit to check if it has an error, you collapse its superposition. The quantum information is gone.
The solution is both elegant and expensive: encode one logical qubit across multiple physical qubits in a way that lets you detect errors without measuring (and thus destroying) the encoded information.
How Quantum Error Correction Works
The key insight is indirect measurement. Instead of measuring the logical qubit directly, you measure parity checks called stabilizers. These are collective properties of the physical qubits that reveal whether an error has occurred without revealing the logical state itself.
Here is a simplified example using the three-qubit repetition code for bit-flip errors.
You encode one logical qubit across three physical qubits:
|0⟩_L = |000⟩
|1⟩_L = |111⟩
If a bit flip happens on one qubit, the state becomes something like |010⟩. Two qubits agree (both 0), one disagrees (the 1). The majority wins. Crucially, you never measure the data qubits themselves: you measure the parities of neighboring pairs (using ancilla qubits), which reveal which qubit flipped without revealing the logical state, as the sketch below shows.
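Here is a minimal sketch of that syndrome measurement in Qiskit (Qiskit is assumed to be installed; the circuit injects a deliberate bit flip on the middle qubit so the syndrome has something to find):

```python
from qiskit import QuantumCircuit

# Qubits 0-2: data qubits holding the logical state. Qubits 3-4: syndrome ancillas.
qc = QuantumCircuit(5, 2)

# Encode |0>_L = |000> by spreading qubit 0 onto qubits 1 and 2 with CNOTs
# (this is encoding, not cloning).
qc.cx(0, 1)
qc.cx(0, 2)

# Inject a deliberate bit-flip error on qubit 1.
qc.x(1)

# Syndrome extraction: ancilla 3 picks up the parity of qubits 0 and 1,
# ancilla 4 picks up the parity of qubits 1 and 2.
qc.cx(0, 3)
qc.cx(1, 3)
qc.cx(1, 4)
qc.cx(2, 4)
qc.measure(3, 0)
qc.measure(4, 1)

# Syndrome (1, 1): both parities disagree, so the flip is on qubit 1.
# Applying X to qubit 1 corrects it, without ever measuring the data qubits.
```

The two parity bits tell you which qubit flipped, but nothing about whether the encoded state is |0⟩_L, |1⟩_L, or a superposition of the two.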
But this code only protects against bit flips (X errors). It does nothing for phase flips (Z errors). For that, you need a different encoding.
The Steane code uses seven physical qubits to protect one logical qubit against any single-qubit error (bit flip, phase flip, or both). The Shor code uses nine physical qubits for the same protection.
Modern quantum error correction converges on the surface code. It uses a 2D grid of physical qubits where data qubits and measurement qubits (ancillas) alternate. The ancilla qubits measure stabilizers (parity checks) without disturbing the data. Errors are detected as changes in these parity measurements over time, called the syndrome.
The surface code has several advantages. It requires only nearest-neighbor interactions on a 2D grid, which matches the physical layout of superconducting qubit processors. It has a high error threshold around 1%, meaning the physical error rate only needs to be below 1% for the code to work. And it scales well. More physical qubits means better protection.
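How much better does protection get as the grid grows? A commonly used heuristic is p_L ≈ A · (p/p_th)^((d+1)/2), where p is the physical error rate, p_th the threshold, d the code distance, and a distance-d patch uses roughly 2d² physical qubits. The prefactor and exact exponent depend on the decoder and noise model, so the numbers below are illustrative, not measured:

```python
# Heuristic surface-code scaling, with illustrative (assumed) constants:
# physical error p = 0.1%, threshold p_th = 1%, prefactor A = 0.1.
p, p_th, A = 1e-3, 1e-2, 0.1
for d in (3, 5, 7, 9):
    p_logical = A * (p / p_th) ** ((d + 1) / 2)
    print(f"d={d}: ~{2 * d * d} physical qubits, logical error ~{p_logical:.0e}")
```

Under these assumptions, each step up in code distance buys roughly another order of magnitude in logical error rate.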
Physical Qubits vs Logical Qubits
This distinction is critical and widely misunderstood.
A physical qubit is a hardware qubit. It is noisy, error-prone, and short-lived. Current quantum computers have hundreds to thousands of physical qubits.
A logical qubit is an error-corrected qubit made from multiple physical qubits. It is the unit that actually runs quantum algorithms reliably.
How many physical qubits per logical qubit? It depends on the physical error rate and the target logical error rate. For the surface code, with current physical error rates around 0.1% to 1%, you need roughly 1000 physical qubits per logical qubit to achieve error rates low enough for Shor’s algorithm.
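To see where the rough 1000x figure comes from, you can invert the same heuristic: pick a target logical error rate, solve for the smallest code distance that reaches it, and count the physical qubits in a patch of that distance. The 10⁻¹² target and the constants below are assumptions for illustration:

```python
def required_distance(p_phys, p_target, p_th=1e-2, A=0.1):
    """Smallest odd code distance d with A * (p_phys/p_th)**((d+1)/2) <= p_target."""
    d = 3
    while A * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

for p_phys in (1e-3, 1e-4):
    d = required_distance(p_phys, p_target=1e-12)
    print(f"p={p_phys}: distance {d}, ~{2 * d * d} physical qubits per logical qubit")
```

The exact overhead depends heavily on the target logical error rate and on constants the heuristic ignores, but the order of magnitude matches the estimates quoted here.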
Numbers make this concrete. To factor a 2048-bit number with Shor’s algorithm, you need roughly 4000 logical qubits. At 1000 physical qubits per logical qubit, that is 4 million physical qubits.
IBM’s current largest processor has about 1121 physical qubits. Google’s latest is in a similar range. We are off by a factor of thousands from what we need for cryptographically relevant factoring.
But physical error rates are improving. If physical error rates drop from 0.1% to 0.01%, the overhead drops from 1000x to roughly 100x. If they drop to 0.001%, it drops further. Better physical qubits mean fewer are needed for error correction. This is why improving gate fidelity is as important as increasing qubit count.
The Threshold Theorem
The threshold theorem is the theoretical foundation that makes quantum error correction viable. It states that if the physical error rate per gate is below a certain threshold, then arbitrarily long quantum computations can be performed with arbitrarily low error by increasing the size of the error-correcting code.
For the surface code, the threshold is around 1%. If your physical gate error is below 1%, adding more qubits makes the computation more reliable, not less. If it is above the threshold, adding more qubits makes things worse because the additional error channels overwhelm the correction.
The best current superconducting hardware has error rates around or slightly below this threshold, though some operations are still above it. We are in the regime where error correction works in principle but is extremely expensive in practice.
Error Mitigation vs Error Correction
Error mitigation is a set of techniques for reducing noise in quantum computations without using full error correction. It is what people use on current NISQ devices.
Zero-noise extrapolation runs the circuit at different noise levels (by stretching pulse durations or adding identity gates) and extrapolates the results to zero noise. The assumption is that the measured expectation value is a smooth function of the noise level. If you can run the circuit at noise levels 1x, 2x, and 3x, you can fit a curve and extrapolate to 0x.
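The extrapolation step itself is simple curve fitting. In this toy sketch the expectation values are made-up placeholders, not real measurements:

```python
import numpy as np

# Hypothetical expectation values measured at noise scale factors 1x, 2x, 3x.
scale_factors = np.array([1.0, 2.0, 3.0])
expectation_values = np.array([0.81, 0.67, 0.55])  # placeholder data

# Fit a line (higher-degree or exponential fits are also common) and
# read off its value at zero noise.
slope, intercept = np.polyfit(scale_factors, expectation_values, deg=1)
print(intercept)  # the zero-noise estimate, ~0.94 for this fake data
```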
Probabilistic error cancellation characterizes the noise model of the hardware using calibration circuits. It then samples from an ensemble of modified circuits, inserting extra gates and weighting the results with signs so that the noise cancels on average. This improves results at the cost of increased sampling overhead.
Measurement error mitigation builds a confusion matrix for measurement errors by preparing and measuring all computational basis states. Then it inverts the matrix to correct subsequent measurement results.
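For a single qubit, the correction is a 2x2 matrix inversion. The readout error rates and measured distribution below are assumed for illustration:

```python
import numpy as np

# Confusion matrix from calibration: columns are prepared states |0> and |1>,
# rows are measured outcomes 0 and 1 (assumed readout error rates).
confusion = np.array([
    [0.97, 0.05],
    [0.03, 0.95],
])

# Raw outcome distribution from the real experiment (placeholder numbers).
measured = np.array([0.60, 0.40])

# Invert the confusion matrix to estimate the pre-readout-error distribution,
# then clip and renormalize since the inversion can produce small negatives.
corrected = np.linalg.solve(confusion, measured)
corrected = np.clip(corrected, 0, None)
corrected /= corrected.sum()
print(corrected)
```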
These techniques help, but they do not scale. Zero-noise extrapolation requires running circuits at amplified noise, and the number of shots needed to resolve the extrapolated signal grows exponentially with circuit depth. Probabilistic error cancellation has exponential sampling overhead. Measurement mitigation only fixes readout errors, not gate or decoherence errors.
Error mitigation buys time for near-term algorithms. Error correction is the long-term solution.
The Road to Fault Tolerance
Fault-tolerant quantum computing means running arbitrary quantum circuits with arbitrarily low error rates by using error correction as a fundamental part of every operation. This is the goal. Anything short of this is limited to short, noisy circuits.
The roadmap from here to fault tolerance looks roughly like this.
NISQ era (current). 50 to 1000 physical qubits. Gate errors around 0.1% to 1%. No error correction. Circuits are limited to depth 10-1000 depending on hardware. Useful for small-scale VQE, QAOA, and demonstrations.
Early fault tolerance (near future). 10,000 to 100,000 physical qubits. Gate errors around 0.01% to 0.001%. A few hundred logical qubits. Small error-corrected circuits. Could run Shor on small numbers at modest depth (factoring 15 and 21 has already been demonstrated, but factoring 2048-bit numbers needs millions of physical qubits).
Full fault tolerance (future). Millions of physical qubits. Thousands of logical qubits. Gate errors below 10⁻⁶. Can run deep circuits reliably. Factoring, chemistry simulation at scale, and other exponential speedups become practical.
IBM has published a roadmap targeting 100,000 physical qubits by 2033. Google has published a roadmap targeting a useful, error-corrected quantum computer by 2029. These timelines are ambitious and subject to the usual uncertainties of hardware development.
Hardware Approaches
Each hardware platform has different error characteristics and scaling tradeoffs.
Superconducting qubits (IBM, Google) have fast gates (nanoseconds) but short coherence (microseconds). They are manufacturable using semiconductor fabrication techniques. The main challenge is wiring density and cooling at 15 millikelvin. Current state of the art is around 1000 physical qubits with 99.5% two-qubit gate fidelity.
Trapped ions (IonQ, Quantinuum) have long coherence (seconds) and high gate fidelity (99.9%). The challenge is speed (microsecond gate times) and scaling. Current systems shuttle ions through trap zones to create connectivity, but adding more qubits adds more complexity.
Neutral atoms (QuEra, Pasqal) use optical tweezers to arrange atoms in 2D and 3D arrays. They scale naturally to hundreds of atoms and have demonstrated entangling operations while shuttling atoms between zones. Gate fidelities are improving rapidly.
Photonic qubits (Xanadu, PsiQuantum) operate at room temperature and are naturally suited for networking. The challenge is two-qubit gate fidelities and the need for many photons per logical qubit. PsiQuantum claims they are building a fault-tolerant photonic computer using fusion-based gates.
The winning platform is not yet determined. Each has fundamental advantages and engineering challenges. It is possible that different platforms serve different applications, similar to how CPUs, GPUs, and TPUs coexist in classical computing.
What Matters for Developers
If you are a software engineer, the practical takeaway is this. Quantum computing is real but not ready for production workloads. The algorithms exist, the theory is solid, and the hardware is improving. But you cannot run Shor’s algorithm on meaningful inputs today, and VQE results on current hardware are still being validated against classical methods.
What you can do now. Learn the concepts. Understand superposition, entanglement, and interference. Write circuits in Qiskit, Cirq, or PennyLane. Run them on simulators and cloud quantum hardware. Build intuition for what quantum algorithms can and cannot do.
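A first circuit can be as small as this (the Qiskit and qiskit-aer packages are assumed to be installed):

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

# Bell state: superposition plus entanglement in two gates.
qc = QuantumCircuit(2, 2)
qc.h(0)        # put qubit 0 in superposition
qc.cx(0, 1)    # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])

counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)  # roughly half '00' and half '11'
```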
The field needs people who understand both the classical and quantum sides. Not everyone needs to be a physicist. But understanding the basics of error correction, gate fidelities, and algorithmic constraints will matter more as the hardware matures.
The timeline is uncertain. The direction is clear. Quantum computers will solve specific problems that are intractable classically. The question is when, not if. And the answer depends on progress in error correction, qubit quality, and hardware scaling.
Happy designing