Article Authors: Arnaud Carignan-Dugas and Stefanie Beale, Keysight Technologies
One of the major barriers to realizing the disruptive potential of quantum computing is its susceptibility to noise and calibration errors.
Many in the quantum computing industry boast about the disruptive potential of quantum computers to drive rapid growth and innovation in fields such as medicine, materials science, and finance. However, one of the major barriers to reaching this potential is the susceptibility of quantum computers to noise and calibration errors.
Our ability to manage or reduce quantum computer error rates will determine how quickly we can begin leveraging quantum computers for these revolutionary leaps. Understanding the impact of errors, and the extent to which current technology can compensate for them, provides insight into what stage of development the quantum computing industry has reached.
How are quantum computing errors different from classical computing errors?
Computing devices process information. Classical computers store information and perform operations on bits. A bit is a hardware memory element with two distinct states, labeled 0 and 1. These computers perform operations by manipulating the information stored in bits according to program specifications.
Quantum computers have “quantum bits,” or qubits, which are hardware components analogous to classical bits. Qubits can store the same binary states allowed in classical computers, but they can also store and manipulate additional states through quantum mechanical capabilities, namely superposition and entanglement. Researchers believe that this additional capacity introduced by quantum mechanics will allow quantum computers to achieve performance that conventional computers cannot. Quantum algorithms, in particular, aim to solve dense combinatorial problems that would take too much time on a classical computer.
A computational error, quantum or otherwise, is an undesirable operation that replaces one state in memory with another. In conventional computers, single-bit errors are limited to accidental flips from 0 to 1 or from 1 to 0. In quantum computing, errors can take many more forms: qubits admit more states than classical bit sequences, leaving room for more kinds of unwanted state changes.
Because qubits must exploit the effects of quantum mechanics, they are inherently small and highly sensitive to interactions with their environment, which can lead to errors or the complete destruction of the stored state. Below are some examples of noise sources that degrade the computational power of quantum computers.
Causes of quantum computation errors:
• External disturbances: Even small vibrations or fluctuations in magnetic fields, currents, or ambient temperature can cause quantum computations to return incorrect results or, in some types of quantum computers, cause the memory state to be lost entirely.
• Internal control: Because qubits are very sensitive to small fluctuations, the signals used to act on the stored state during computation must be extremely precise. Any deviation from a perfect signal introduces errors.
What is error correction?
Classical computational errors are usually caused by one or more bits flipping unexpectedly. Error correction strategies were developed to correct these bit flips and return the system to its expected state. Error correction was common in early computing systems, before the technology advanced enough to make them robust to environmental changes.
Error correction in classical computing is usually unnecessary today; it is used when failures would be catastrophic or when computers operate in environments where errors are more likely, such as on space missions.
The simplest example of a classical code is the repetition code, where every bit is copied to introduce redundancy:
0 → 000
1 → 111
This mapping from a state stored in one bit to the same state stored across multiple bits is called “encoding,” which is why the word “code” is used to designate an error correction strategy.
In the 3-bit repetition code above, if we had a 0 state encoded as 000 and a bit flip error occurred on the second bit, we would find the state 010. Looking at this state, we see that 0s are in the majority. Assuming errors are unlikely, it is safe to assume the correct state is an encoded 0, so we set it back to 000.
In general, error correction consists of three parts:
- Encode the state into more bits
- Check the encoded state at regular time intervals
- Correct the state based on the observations from step 2
If the error rate is low, these error correction strategies can identify and fix the changes that occurred. A high error rate, however, starts to cause problems. For example, suppose we have not checked the encoded state 111 for a while, and two errors occur in the meantime, leaving us with, say, 001. Since only the last bit differs from the majority, we would “correct” it to 000, and the final state would be wrong.
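To make the three steps concrete, here is a minimal sketch in Python of majority-vote correction for the 3-bit repetition code, including the failure mode just described: a single bit flip is corrected, but two flips fool the vote. The function names are illustrative, not from any particular library.

```python
# Sketch: majority-vote error correction for the 3-bit repetition code.

def encode(bit):
    """Encode a single bit into three copies."""
    return [bit, bit, bit]

def correct(bits):
    """Restore the encoded state by majority vote."""
    majority = 1 if sum(bits) >= 2 else 0
    return [majority, majority, majority]

encoded_zero = encode(0)     # [0, 0, 0]

# One bit flip: the majority vote recovers the intended state.
one_error = [0, 1, 0]
print(correct(one_error))    # [0, 0, 0] -- corrected

# Two bit flips on an encoded 1: the vote picks the wrong value.
encoded_one = encode(1)      # [1, 1, 1]
two_errors = [0, 0, 1]
print(correct(two_errors))   # [0, 0, 0] -- "corrected" to the wrong state
```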
There are strategies to handle higher error rates, such as increasing the redundancy of the encoding. For example, we can use 5-bit or 7-bit repetition codes and employ the same strategy described for the 3-bit repetition code. In these cases, we can recover from up to 2 or 3 bit-flip errors, respectively.
These error correction strategies only work if errors occur at a rate the particular code can handle. Longer intervals between corrections increase the chance of accumulating bit-flip errors, so if a system is error-prone, latency becomes an issue. As a result, the biggest challenge for error correction has been speed: finding more effective and efficient ways to detect and fix errors before they cause serious problems.
Why is quantum error correction so difficult?
Once we start scaling up quantum computers, we will need error correction strategies similar to those developed for classical computers. Quantum error correction procedures follow the same encode, measure, and recover steps used in classical error correction. However, applying these procedures to quantum computers presents new challenges.
In classical computing, we can look at the encoded state to see what went wrong and then apply a fix. This is not possible with quantum computers. One of the basic tenets of quantum mechanics is that observing a quantum state changes it. This means we cannot directly measure the encoded state without destroying the information we are trying to store. For this reason, quantum researchers have developed methods to obtain information about errors without directly measuring the state. These methods involve indirect measurements that reveal nothing about which logical state is stored and, ideally, do not disturb the state.
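Here is a minimal classical simulation, a sketch rather than hardware code, of the idea behind such indirect measurements for a 3-qubit bit-flip code: two parity checks reveal which qubit was flipped without revealing whether the encoded logical state is 0 or 1. On real hardware these parities are measured indirectly via ancilla qubits; the helper names below are hypothetical.

```python
# Sketch: syndrome extraction for a 3-qubit bit-flip (repetition) code.
# Classical stand-in for illustration; real devices measure these parities
# indirectly with ancilla qubits rather than reading out the data qubits.

def parity_checks(bits):
    """Return the two parity checks (q0 XOR q1, q1 XOR q2)."""
    q0, q1, q2 = bits
    return (q0 ^ q1, q1 ^ q2)

# The same error (a flip of the middle qubit) applied to encoded 0 and 1.
err_on_zero = [0, 1, 0]
err_on_one = [1, 0, 1]

# Both give the syndrome (1, 1): it points to the flipped qubit but says
# nothing about whether a logical 0 or a logical 1 is stored.
print(parity_checks(err_on_zero))  # (1, 1)
print(parity_checks(err_on_one))   # (1, 1)
```

In both cases the syndrome identifies the middle qubit as the likely culprit, so a correction can be applied without ever learning the logical value.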
Quantum computers need bigger encodings
Given how fragile quantum states are to their environment, large-scale encodings may be required. This means that hundreds, if not thousands, of physical qubits may be needed to encode a single qubit state. As pointed out by Science.org, Google researchers believe that scaling error correction efforts across 1,000 qubits could potentially keep the encoded qubit alive indefinitely.
Much like classical computing, where measuring the encoded state leaves some ambiguity about which error actually occurred, a measurement in quantum error correction only tells us that one error occurred out of a particular set of possible errors; it does not tell us which one. Since qubit states are more complex than bit states, there are more types of errors and more uncertainty about which correction will restore the right state. Finding the best way to select corrections is a difficult problem and still a work in progress.
If we know the noise acting on the system, we can compute the best correction strategy for a small code, but this becomes prohibitively expensive for larger codes. Consider, for example, the surface code, the most common large-scale quantum error-correcting code. Instead of pre-selecting the correction for each measurement result and storing it in a lookup table, a classical algorithm is run to select the recovery operation at every error correction step, and this algorithm introduces significant latency.
However, even a small code that does use a lookup table requires a classical computer to route the measurement results, select a recovery operation, and send that information back to the quantum computer. This round trip introduces considerable latency and makes the code less effective. It is one of the major bottlenecks for effective quantum error correction, and many researchers in the field are actively working to overcome it.
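As a rough illustration of what such a lookup table contains, the sketch below maps each syndrome of the 3-qubit bit-flip code to the qubit that should be flipped back. This is a toy example assuming the parity checks shown earlier; decoders for large codes such as the surface code replace the table with an algorithm run on classical hardware between correction steps.

```python
# Sketch: a syndrome lookup table for the 3-qubit bit-flip code.
# Each syndrome (q0 XOR q1, q1 XOR q2) maps to the index of the qubit
# assumed to have flipped (None means "apply no correction").
LOOKUP_TABLE = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # first qubit flipped
    (1, 1): 1,     # middle qubit flipped
    (0, 1): 2,     # last qubit flipped
}

def measure_and_correct(bits):
    """Compute the syndrome, look up the recovery, and apply it."""
    syndrome = (bits[0] ^ bits[1], bits[1] ^ bits[2])
    target = LOOKUP_TABLE[syndrome]
    if target is not None:
        bits[target] ^= 1  # flip the suspected qubit back
    return bits

print(measure_and_correct([0, 1, 0]))  # -> [0, 0, 0]
```

For large codes, the number of possible syndromes grows far too quickly for a table like this, which is why an algorithmic decoder is used instead, at the cost of the latency described above.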
Keysight is collaborating with researchers to accelerate progress in the pursuit of viable quantum computers. To learn more about quantum computing and Keysight’s involvement, visit our Quantum Solutions page.
About the authors
Arnaud Carignan-Dugas is a research scientist at Keysight specializing in open quantum systems, quantum error characterization, and quantum error suppression. He is interested in probing quantum computers, quantifying performance-limiting factors, and finding solutions that extend their computational reach.
Stefanie Beale is an R&D Engineer on the Quantum Engineering Solutions team at Keysight Technologies Canada. Her research specializes in quantum error correction, and she has worked on the development of the TrueQ software, which is used worldwide for error characterization, suppression, and compilation.

