
Quantum Error Mitigation

Zero Noise Extrapolation and Its Improvement Based on the Pauli-Lindblad Noise Model

Motivation

IBM recently published a paper that carried out the Trotterized time evolution of a transverse-field Ising model Hamiltonian \(H = -J \sum_{\langle i,j \rangle}Z_{i}Z_{j} + h \sum_{i}X_{i}\) on 127 qubits. The quantum circuit used for time evolution consisted of "up to 60 layers of two-qubit gates, a total of 2880 CNOT gates" [1]. This is a significant experiment because CNOT gates remain relatively low-quality across all currently available quantum computing platforms; a circuit with this many CNOT gates and layers would ordinarily produce meaningless, noise-dominated results. At the same time, a brute-force classical simulation of such a quantum circuit is well out of reach for even the most powerful supercomputers.

The wording of their title, suggesting that the experiment demonstrates quantum utility (a weaker version of quantum advantage), received mixed reviews within the quantum computing community on Twitter. However, some positive results emerged from the discussions between the paper's proponents and its critics. For instance, Sels and colleagues [2] demonstrated a method that exploits the heavy-hexagon topology of IBM's superconducting chip to speed up classical simulation of the circuit in IBM's paper using belief-propagation tensor networks. Although this contradicts IBM's claim of demonstrating quantum utility, it extends our understanding of the classical simulation of quantum circuits. An in-depth analysis of this topic is beyond the scope of this blog; interested readers are encouraged to consult the reference and investigate further on their own.

Conversely, IBM clarifies that the essence of this paper is not quantum utility but two other crucial aspects. First, it demonstrates "advances in the coherence and calibration of a superconducting processor at this scale", with 127 qubits [1]. More importantly, it showcases "the ability to characterize and controllably manipulate noise across such a large device" [1], which will be the primary focus of this post.

This post will proceed as follows: First, I will discuss the importance of Quantum Error Mitigation by examining the effect of noise on quantum computers and why Quantum Error Correction is not a viable solution at present. Then, I will delve into one of the Quantum Error Mitigation techniques used in IBM's paper - Zero Noise Extrapolation. Finally, I will conclude with how IBM improved upon the basic Zero Noise Extrapolation with the Pauli-Lindblad noise model to achieve promising experimental results on 127 qubits.

Achilles' Heel of Quantum Computers: Noise

One of the most formidable challenges faced by quantum computers is noise. It is widely recognized that quantum algorithms can outperform their classical counterparts on a few critical problems. For instance, our daily communication, web browsing, and bank transactions are safeguarded by public-key encryption schemes such as RSA and Diffie-Hellman, whose security hinges on the difficulty of period finding [3]. Shor's algorithm solves the period-finding problem exponentially faster than any known classical method. However, noise renders the results of Shor's algorithm useless on real quantum computers. Since Shor's algorithm was used to factor 15 on an NMR quantum computer in 2001 [4], only one other experiment has successfully pushed this limit, to 21 [5]. Noise has made it impossible to scale Shor's algorithm further.

Of course, solutions have been proposed to tackle the noise problem on quantum computers. The most prominent is Quantum Error Correction, a scheme that shields information from noise and decoherence. It accomplishes this using Quantum Error Correction codes, which encode one qubit's worth of information across multiple, physically separated qubits. Because noise and decoherence arise from undesirable physical interactions, and because physical interactions in our world are predominantly local, information spread across physically separated qubits is protected: corrupting it would require a correlated, non-local interaction, which is unlikely.

However, a significant challenge in implementing Quantum Error Correction codes is posed by the threshold theorem. This theorem states that "a quantum computer with a physical error rate below a certain threshold can reduce the logical error rate to arbitrarily low levels" through the application of Quantum Error Correction schemes [6]. Today's most advanced physical qubits can barely reach the point where a full Quantum Error Correction code matches a basic repetition code in logical error rate [7]. Consequently, with current engineering techniques, Quantum Error Correction cannot yet suppress the noise in quantum computers to arbitrarily low levels.

Given the noise inherent in all quantum computer operations and the limited number of qubits on physical devices, we find ourselves in the Noisy Intermediate-Scale Quantum (NISQ) era, a term coined by John Preskill [8]. Various methods have been proposed to make quantum computers usable in the face of noise. These methods, collectively known as Quantum Error Mitigation, each target a specific type of noise: with a particular noise model in mind, quantum circuits are modified to amplify or cancel the noise's effect, and the circuit outputs are post-processed to remove its impact on the results. Notable Quantum Error Mitigation techniques include Dynamical Decoupling, Measurement Mitigation, Pauli Twirling, and Zero Noise Extrapolation. The combined application of these techniques made it possible to obtain sensible quantum dynamics simulation results in IBM's paper [1]. The remainder of this blog is dedicated to Zero Noise Extrapolation, given its straightforward nature and its minimal assumptions about the underlying noise model of a quantum computer.

Zero Noise Extrapolation

Zero Noise Extrapolation refers to "algorithmic scheme(s) that reduce the noise-induced bias in the expectation value by post-processing outputs from an ensemble of circuit runs, using circuits at the same noise level as the original unmitigated circuit or above" [9]. The technique was introduced by Li et al. [11] and Temme et al. [12] and later popularized by Kandala et al. [10], who used it in the variational optimization of the molecular Hamiltonians of hydrogen and lithium hydride. One notable feature of Zero Noise Extrapolation is that it meets the requirements for a "reliable error mitigation method" proposed by Cai et al. [9]. Specifically, it "requires few assumptions (or no assumptions) about the final state prepared by the computation" [9], and it places only a simple constraint on the noise model whose effect it can mitigate. Hence, Zero Noise Extrapolation stands as a broadly applicable and straightforward-to-implement Quantum Error Mitigation technique.

To understand the simplified noise model and how it influences the expectation value of an operator for the state prepared by a quantum circuit, we need a few notations. First, mark the locations in the quantum circuit where independent Pauli errors could occur with indices \(i\) ranging from \(0\) to \(n\). At each location \(i\), a Pauli error occurs with probability \(p_{i}\). The sum of these probabilities, \(\lambda \equiv \sum_{i=0}^{n}p_i\), can be understood as a noise parameter characterizing how noisy the circuit is. When \(n\) is large and \(\lambda\) is moderately sized, Le Cam's inequality suggests that the probability of exactly \(k\) errors occurring in the noisy circuit is approximately Poisson distributed, \(P_k \approx e^{-\lambda}\lambda^{k}/k!\) [11]. We then model the erroneous expectation value \(\left\langle O_\lambda\right\rangle\) as the weighted sum of the expectation values \(\left\langle O_{|\mathbb{L}|=k}\right\rangle\) conditioned on exactly \(k\) errors occurring during the circuit: \( \left\langle O_\lambda\right\rangle=\sum_{k=0}^{\infty} P_k\left\langle O_{|\mathbb{L}|=k}\right\rangle=e^{-\lambda} \sum_{k=0}^{\infty} \frac{\lambda^k}{k!}\left\langle O_{|\mathbb{L}|=k}\right\rangle \).
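
To make this concrete, here is a minimal numerical check of the Poisson approximation, assuming hypothetical per-location error probabilities \(p_i\): the exact distribution of the total error count (a Poisson binomial distribution, built by convolution) is compared against \(P_k = e^{-\lambda}\lambda^k/k!\).

#+begin_src python
import numpy as np
from math import exp, factorial

# Hypothetical per-location Pauli error probabilities p_i.
rng = np.random.default_rng(1)
p = rng.uniform(0.0005, 0.002, size=400)
lam = p.sum()  # the noise parameter lambda

# Exact distribution of the total error count: convolve the
# Bernoulli distributions [1 - p_i, p_i] one location at a time.
dist = np.array([1.0])
for pi in p:
    dist = np.convolve(dist, [1 - pi, pi])

for k in range(4):
    poisson = exp(-lam) * lam**k / factorial(k)
    print(f"k={k}: exact={dist[k]:.5f}  Poisson={poisson:.5f}")
#+end_src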

Focusing on the noise parameter \( \lambda \): since even a single Pauli error typically scrambles the coherent signal, the terms \(\left\langle O_{|\mathbb{L}|=k}\right\rangle\) with \(k \geq 1\) are small, so \(\left\langle O_\lambda\right\rangle \approx e^{-\lambda}\left\langle O_{|\mathbb{L}|=0}\right\rangle\) and the expectation value of the operator follows an exponential decay in \(\lambda\). Methods like pulse stretching or unitary folding [10] can be employed to deliberately increase the noise level \(\lambda\) and obtain noisier expectation values of the operator \( \hat{O} \). After collecting a set of \( \left\langle O_{\lambda} \right\rangle \) at different noise levels, we can use an exponential fit to extrapolate back to \(\lambda = 0\) and estimate the noiseless expectation value of \( \hat{O} \).
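
The sketch below illustrates the extrapolation step under a toy assumption that the measured expectation value decays as \(A e^{-B c}\) in the noise scale factor \(c\); the scales \(c = 1, 3, 5\) are what unitary folding (replacing \(U\) with \(U U^{\dagger} U\), and so on) makes accessible. All data, parameters, and shot counts here are hypothetical.

#+begin_src python
import numpy as np
from scipy.optimize import curve_fit

def decay(c, a, b):
    """Assumed model: <O> at noise scale c decays as a * exp(-b * c)."""
    return a * np.exp(-b * c)

rng = np.random.default_rng(0)
true_a, true_b = 0.92, 0.35          # hypothetical circuit parameters
scales = np.array([1.0, 3.0, 5.0])   # odd noise scales from unitary folding
shots = 10_000

# Simulated noisy measurements, with shot noise ~ 1/sqrt(shots).
measured = decay(scales, true_a, true_b) + rng.normal(0, 1/np.sqrt(shots), 3)

# Fit and extrapolate to c = 0, i.e., the zero-noise limit.
(a_fit, b_fit), _ = curve_fit(decay, scales, measured, p0=(1.0, 0.1))
print(f"zero-noise estimate: {a_fit:.4f} (ideal value: {true_a})")
#+end_src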

It is remarkable that such a straightforward technique can remove the effect of noise, under this simple noise model, from quantum circuit results. However, it is crucial not to overlook the costs and constraints. First, because we are performing an exponential extrapolation, each estimate of \(\left \langle O_{\lambda} \right \rangle\) needs to be precise, i.e., have small variance. The estimate of \(\left \langle O_{\lambda} \right \rangle\) is an average over \(N_{sample}\) quantum circuit evaluations, so its standard error shrinks only as \(1/\sqrt{N_{sample}}\); achieving the required precision demands a large number of samples, which naturally increases the experimental time. Second, Zero Noise Extrapolation requires that the noise model affecting the device remain constant, with the experimenter being the only one who adjusts the noise level. This requirement is believed to be violated by a physical phenomenon known as the Two-Level Defect.
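
As a rough back-of-the-envelope estimate (my own, not from IBM's paper): for a Pauli observable, whose single-shot outcomes are \(\pm 1\),

\[ \operatorname{Var}\left[\widehat{\langle O_\lambda\rangle}\right] = \frac{1-\langle O_\lambda\rangle^{2}}{N_{sample}}, \qquad \langle O_\lambda\rangle \approx e^{-\lambda}\left\langle O_{0}\right\rangle \implies N_{sample} \gtrsim \frac{e^{2\lambda}}{\epsilon^{2}}, \]

so resolving the exponentially decayed signal to relative precision \(\epsilon\) requires a sample count that itself grows exponentially with the noise level \(\lambda\).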

Two-Level Defects

The Josephson junction in a superconducting transmon requires an amorphous \(\mathrm{AlO}_x\) layer between its aluminium electrodes. At low temperatures, Two-Level Defects form in this amorphous layer; they can include tunneling atoms, dangling bonds, and trapped charges. The coupling between Two-Level Defects and qubits is proposed to induce fluctuations in the noise parameters on an hourly timescale [1].

In their paper, IBM demonstrated this correlation between the coupling of Two-Level Defects to qubits and device fluctuation [1]. In Figure S4 (b) and (c) of the paper's supplementary information [1], a clear drop in the expectation value of a single-qubit \(X\) operator is observed around the \(99\) to \(103\) hour mark of the experiment. Around the same time, the \(T_{1}\) time of the same qubit is observed to fluctuate, indicating a change in the underlying noise model.

This inference is further supported by extrapolating the expectation value of a single-qubit \(X\) operator on a classically simulable quantum circuit. When the data points observed during the fluctuation period are discarded, the extrapolation agrees well with ideal circuit simulations performed on a classical computer, as reported in Figure S4 (d), (e), and (f) of IBM's paper [1].

Pauli-Lindblad Noise Model

While the previous method successfully mitigates the effect of fluctuating noise parameters, it has a significant drawback: it requires discarding experimental data, which amounts to a sampling overhead. On larger or less stable devices, this overhead could erase the quantum algorithm's speed advantage.

To address this problem, recall that the essence of Zero Noise Extrapolation is to amplify the noise to a desired level and then extrapolate. Fluctuations in device quality cause the underlying noise parameter to drift from \(\lambda\) to \(\alpha \lambda\) with \(\alpha > 1\). If we naively amplify the noise with a target of \(2 \lambda\), we actually reach \(2 \alpha \lambda\), misplacing the data point on the x-axis of the extrapolation.

To fix this without discarding experimental results, one can efficiently characterize the device's noise parameters and then amplify accordingly. For this, IBM's paper employs the Pauli-Lindblad noise model [1]: a "noise channel \(\Lambda\) that arises from a sparse set of local interactions," generated by the Lindbladian \(\mathcal{L}(\rho)=\sum_{k \in \mathcal{K}} \lambda_k\left(P_k \rho P_k^{\dagger}-\rho\right)\) [15]. Here \(\mathcal{K}\) is a \(poly(n)\)-sized subset of the \(4^n\) Pauli operators on \(n\) qubits. Only a polynomial subset of all Pauli operators needs to be considered because of the topology of IBM's quantum computer: weight-two Pauli operators on non-neighboring qubits, as well as higher-weight Pauli operators, have no support, so their noise parameters are zero. The noise channel is then \(\Lambda=\exp \mathcal{L}\), which can be computed as the composition \(\Lambda(\rho)=\underset{k \in \mathcal{K}}{\bigcirc}\left(w_k \cdot+\left(1-w_k\right) P_k \cdot P_k^{\dagger}\right) \rho\) with \(w_k=2^{-1}\left(1+\mathrm{e}^{-2 \lambda_k}\right)\) [15].
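
As a minimal sketch of this composition formula, the snippet below applies \(\Lambda\) to a two-qubit density matrix, using a hypothetical sparse generator set (one \(X\) term per qubit plus a \(ZZ\) term on the neighboring pair) with made-up rates \(\lambda_k\):

#+begin_src python
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

# Hypothetical sparse generator set K with rates lambda_k.
generators = [
    (np.kron(X, I), 0.010),   # X on qubit 0
    (np.kron(I, X), 0.012),   # X on qubit 1
    (np.kron(Z, Z), 0.004),   # ZZ on the neighboring pair
]

def pauli_lindblad_channel(rho):
    """Lambda(rho): compose, over k, w_k*rho + (1-w_k)*P_k rho P_k^dag,
    with w_k = (1 + exp(-2*lambda_k)) / 2."""
    for P, lam in generators:
        w = 0.5 * (1 + np.exp(-2 * lam))
        rho = w * rho + (1 - w) * P @ rho @ P.conj().T
    return rho

rho_00 = np.zeros((4, 4)); rho_00[0, 0] = 1.0   # |00><00|
print(np.round(pauli_lindblad_channel(rho_00), 5))
#+end_src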

To characterize the noise parameters \(\lambda_{b}\), we measure \(f_b\), the fidelity of a Pauli operator \(P_b\), as a function of the number of two-qubit gate layers. Since \(f_{b} = \frac{1}{2^{n}}\mathrm{Tr}\left(P_{b}^{\dagger} \Lambda(P_{b})\right)\), we expect \(\log(f_{b}) \propto -\lambda_{b} d\), with \(d\) the number of two-qubit gate layers. Since only a \(poly(n)\) number of Pauli operators are supported, this characterization is efficient.
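
A toy version of this fit, with hypothetical fidelity data generated from an assumed effective decay rate per layer (constant factors are absorbed into the rate):

#+begin_src python
import numpy as np

rng = np.random.default_rng(2)
depths = np.array([2, 4, 8, 16])     # number of two-qubit gate layers d
true_rate = 0.03                     # hypothetical effective decay rate per layer

# Simulated fidelities f_b(d) with a little measurement noise.
f_b = np.exp(-true_rate * depths) * (1 + rng.normal(0, 0.002, depths.size))

# log f_b is linear in d, so a least-squares line recovers the rate.
slope, _ = np.polyfit(depths, np.log(f_b), 1)
print(f"fitted decay rate: {-slope:.4f} (true: {true_rate})")
#+end_src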

To amplify the noise, experimentalists insert Pauli gates around the two-qubit gate layers of the quantum circuit, sampled so as to mimic the noise described by the Pauli-Lindblad model. Fig. 1(d) of IBM's paper [1] gives a schematic picture of this amplification. IBM's team compared results extrapolated with this scheme against classical simulation results and confirmed the correctness of the method.
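
One way such sampling could work in software (my sketch under the Pauli-Lindblad model above, not IBM's exact implementation): to boost a layer's noise channel from \(\Lambda\) to \(\Lambda^{G}\), apply the excess channel \(\Lambda^{G-1}\) stochastically, inserting each generator Pauli \(P_k\) with probability \(1-w_k^{(G-1)}\), where \(w_k^{(G-1)} = \left(1+e^{-2(G-1)\lambda_k}\right)/2\):

#+begin_src python
import numpy as np

rng = np.random.default_rng(3)
lambdas = {"XI": 0.010, "IX": 0.012, "ZZ": 0.004}  # characterized rates (hypothetical)

def sample_insertions(lambdas, gain, rng):
    """Sample Paulis to insert after one noisy layer so that, on average
    over circuit instances, the layer's channel becomes Lambda**gain."""
    inserted = []
    for pauli, lam in lambdas.items():
        w = 0.5 * (1 + np.exp(-2 * (gain - 1) * lam))
        if rng.random() > w:          # insert P_k with probability 1 - w
            inserted.append(pauli)
    return inserted

# One random circuit instance at noise gain G = 3; averaging over many
# such instances realizes the amplified channel.
print(sample_insertions(lambdas, gain=3.0, rng=rng))
#+end_src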

Conclusion

Quantum error mitigation is fascinating in that, by positing a physically motivated and reasonable noise model and probing the experimental platform to parameterize it, we can obtain meaningful results from non-error-corrected circuits.

However, it remains unclear to me whether the overhead of noise-model probing is scalable. In other words, will such a technique remain effective when applied to industrial applications involving thousands of qubits?

Regardless, a successful quantum error mitigation experiment does indicate that the target noise model can be characterized efficiently and accurately. In this regard, quantum error mitigation experiments serve as a touchstone for understanding the noise in current NISQ devices.

Disclaimer

I have also done some work in quantum error mitigation, and I find the idea of using quantum error mitigation results to verify the noise model realized in the actual device interesting.

This blog is written with love using Emacs and Org Mode.