
11 April 2025

The tyranny of calibration in quantum computing

Maciej Malinowski, Quantum Architecture Team Lead

Tom Harty

5 minute read



A common tool for assessing the feasibility of using specific hardware for quantum computing is the set of DiVincenzo criteria. Like several other platforms, trapped ions are widely considered to satisfy all the DiVincenzo criteria [1], meaning that they can perform all the operations required for universal quantum computing, such as state preparation, quantum gates, and measurement. 


At Oxford Ionics, we have already proven that we can perform those operations better than anyone else [2,3]. So for us, as discussed in our first blog post, the quest to build powerful, commercially valuable quantum computers is no longer about proving the basic building blocks – it’s about putting them together. 


But this can be more challenging than it sounds – once you start putting those building blocks together, new bottlenecks and challenges arise. So let’s dive into some of these under-appreciated challenges of quantum computing integration and how Oxford Ionics has tackled them. 


The control perspective


Before considering any specific technical challenge, let’s take a step back and look at quantum computing from a control perspective. Even in today’s NISQ era, a very small-scale quantum computer needs a lot of stuff to control the qubits. A typical quantum computer may contain dozens of signal sources (RF, MW, laser modulators) per qubit – and in turn, each signal source may take in dozens of parameters to specify its waveform settings. Complicating this even further, most of these hundreds of parameters are not known a priori, but instead are dynamically updated during the quantum computer’s bring-up or regular calibration. 


Typically, even a small quantum computer will be accompanied by hundreds of wires and signal sources.


In other words, a large-scale quantum computer represents a highly nonlinear multi-parameter control system. The presence of tens or hundreds of knobs per qubit, as well as complicated cross-talk mechanisms, presents a very daunting system calibration challenge. 
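To put rough numbers on this, here is a back-of-the-envelope sketch of how the knob count grows with qubit count. The per-qubit figures are illustrative assumptions, not Oxford Ionics specifics:

```python
# Back-of-the-envelope count of calibration parameters, using
# illustrative numbers: a dozen signal sources per qubit, each with a
# couple of dozen waveform parameters.

def calibration_knobs(qubits, sources_per_qubit=12, params_per_source=24):
    """Total calibration parameters for a naive architecture in which
    control knobs grow linearly with qubit count."""
    return qubits * sources_per_qubit * params_per_source

for n in (2, 50, 1000):
    print(f"{n:>5} qubits -> {calibration_knobs(n):>9,} knobs")
```

Even with linear scaling, a thousand-qubit machine ends up with hundreds of thousands of parameters to find and maintain – and that is before any of them interact.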


One analogy could be an inverted pendulum. A relatively straightforward control system may be able to keep an inverted pendulum upright for a few minutes. However, it would be borderline impossible to scale that up to a system of thousands of inverted pendula, each connected by springs to a few others –  especially if they are to stay upright with high reliability. The point of this analogy is to emphasise that sometimes “scalability” can be bottlenecked not by our ability to build large-scale systems (we can build a lot of pendula!), but by our ability to control them.




Scaling thousands of inverted pendula isn’t a simple scale-up problem.


To give a more relevant example, let’s consider a simple building block of trapped-ion quantum computers: an ion swap. Here is an image of what this “swap” operation looks like at Oxford Ionics:

While it may look simple, swapping ions with high speed and low excitation can actually be a complicated business, with multiple academic papers devoted to the issue. For example, in this study, researchers required a combination of time-consuming hand-crafted and machine-learning steps to generate six electrode waveforms implementing a single two-ion swap operation – and even that sequence only works if the other electrodes in the device are set with extreme precision! 


Now imagine performing this calibration in a device with thousands of qubits and tens of thousands of electrodes. It is easy to see how the difficulty of the initial calibration, combined with new challenges such as electrode cross-talk and additional ion-ion electrostatic forces, can turn this problem from “slightly tedious” at a small scale to “intractable” at a large scale.
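A toy model makes the cross-talk point concrete. Suppose each knob has an ideal setting, but coupling to its neighbours shifts its optimum whenever a nearby knob moves – then tuning one knob at a time forces repeated sweeps over the whole device. The quadratic cost and coupling strength below are hypothetical, chosen only to show the effect:

```python
# Toy model of why cross-talk makes calibration expensive: tune one
# "knob" at a time on a quadratic cost with nearest-neighbour coupling,
# and count full sweeps until every knob stops moving. All numbers are
# illustrative, not a model of any real device.

def sweeps_to_calibrate(n_knobs, coupling, tol=1e-6, max_sweeps=10_000):
    targets = [1.0] * n_knobs          # ideal setting for each knob
    x = [0.0] * n_knobs                # start uncalibrated
    for sweep in range(1, max_sweeps + 1):
        worst = 0.0
        for i in range(n_knobs):
            # Cost: sum_i (x_i - t_i)^2 + coupling * sum_<i,j> x_i * x_j
            # Setting dC/dx_i = 0 gives the per-knob optimum below.
            nbrs = sum(x[j] for j in (i - 1, i + 1) if 0 <= j < n_knobs)
            new = targets[i] - 0.5 * coupling * nbrs
            worst = max(worst, abs(new - x[i]))
            x[i] = new
        if worst < tol:
            return sweep
    return max_sweeps

print(sweeps_to_calibrate(100, coupling=0.0))  # independent: 2 sweeps (set + check)
print(sweeps_to_calibrate(100, coupling=0.8))  # cross-talk: dozens of sweeps
```

With no coupling, one pass suffices; with coupling, every adjustment perturbs its neighbours and the procedure must iterate many times – and this toy model assumes each “measurement” is free, which a real quantum computer’s calibration is not.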


So the takeaway is that even if a certain operation can be calibrated and demonstrated at a small scale, it can be difficult to integrate into a large quantum computer simply due to the calibration overheads – even before considering other technical challenges. And if left unaddressed, this leads to large-scale quantum computers with poor overall performance and low overall uptime compared to small-scale systems. 


Over recent years, the quantum computing industry has gradually become more aware of just how difficult and time-consuming device bring-up and calibration can be. For example, a number of recent papers demonstrated the use of black-box and white-box machine learning techniques to optimise quantum operations better or faster than human operators [4,5,6]. However, most demonstrations in the existing literature have been limited to small-scale systems with only a handful of control parameters, falling significantly short of what’s required for large-scale, high-fidelity quantum computers.


Building inherently calibratable quantum computers


The rest of the industry focuses solely on building ever more complex quantum computers, leaving calibration as an afterthought – right up until the point where incredibly complex calibration requirements limit the scale and performance of their systems. At Oxford Ionics, we turned this challenge on its head, ensuring that calibration was designed into our systems from day one. We approached building our quantum computers with the vision that a 10,000-qubit quantum computer should have the same calibration overheads as a 10-qubit quantum computer – that is, we set out to build quantum computers that are inherently easy to calibrate. 


There is no single trick that makes a quantum computer easy to calibrate. Instead, it comes as a result of a disciplined approach to system architecture, design, and manufacture based on the following principles: 


  • High passive stability: By leveraging Electronic Qubit Control, we can replace the immature and finicky laser systems typically used for trapped-ion quantum computing with ultra-robust RF and microwave sources and amplifiers. This gives us a level of passive stability previously unheard of in trapped-ion quantum computers.


  • Precise electromagnetic simulation: By developing and validating precision electromagnetic field simulations of our QPUs, which are produced on classical chips in a standard semiconductor fab, our quantum computers have their calibration knobs “set by design” – with only a small amount of fine-tuning needed to compensate for device-to-device and environmental variability.


  • Few degrees of freedom: Thanks to precision simulations, we don’t have to add knobs (or degrees of freedom) when we add qubits. This significantly decreases the wiring challenge, and allows us to use IO-efficient control techniques such as electrode co-wiring [7].


  • Highly localised control: We developed several techniques to make the control fields highly localised to a single qubit and its immediate environment, with negligible long-range couplings. This allows us to “decouple” the calibration procedures of different qubit zones, dramatically reducing complexity.


  • Accurate physics models: An under-appreciated feature of trapped-ion quantum computers is that the equations describing how the qubits interact with the control fields can be written down exactly and solved with extreme precision. This is owing to the combination of perfectly identical ions and well-modelled, well-toleranced classical chips. It means we can construct very accurate “digital twins” of our quantum computers, and subsequently use these models to reduce calibration overheads.


  • Low-variability manufacturing: Because trapped-ion qubits are naturally identical, all we have to do is ensure the control environments they experience are the same. We achieve this by leveraging commercial CMOS nanofabrication capabilities, resulting in chips with high yield and low site-to-site and wafer-to-wafer variability.
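Several of these principles share one economic consequence: if simulation predicts a knob’s value up front, calibration only has to fine-tune a small residual rather than search blindly. A minimal sketch, with hypothetical numbers for the search ranges and measurement resolution:

```python
# Sketch of how a "set by design" knob changes calibration cost: each
# knob must be located to within `resolution` by scanning, i.e. one
# probe measurement per scan point. All numbers are hypothetical.

def probes_needed(search_range, resolution):
    """Probe measurements for a uniform scan over `search_range`."""
    return round(search_range / resolution)

full_range = 10.0      # no prior knowledge: scan the knob's whole range
model_window = 0.1     # simulation predicts the value to within 1%

blind = probes_needed(full_range, resolution=0.001)
seeded = probes_needed(model_window, resolution=0.001)
print(blind, seeded)   # 10000 vs 100 probes per knob
```

A 100x saving per knob, multiplied over every knob in the machine, is the difference between a bring-up that takes days and one that takes months – which is why designing the prediction in from the start matters.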


Through combining these principles with novel physics and state-of-the-art engineering, Oxford Ionics has managed to overcome the challenge of all the extra stuff required to build and calibrate quantum computers with thousands of qubits. And since we’ve made our quantum computers inherently easy to control and calibrate, whilst manufacturing our QPUs in standard semiconductor fabs, all we need to do to scale is copy and paste unit cells thousands of times over. 


In the coming blog posts, we will be shedding more light on these design principles, and the validation we’ve had to date. In the meantime, for more information on our architecture for a 1,000 qubit quantum computer, check out this paper here.












