Many researchers believe that the best way to make a quantum computer is to interlink many small modules. But what does “small” mean?

In particular, will the quantum machine be more powerful (or perhaps more resistant to errors) if each module is itself a powerful device with hundreds of qubits? Or can we focus on making very simple and small modules that are individually weak, yet collectively just as powerful? In other words: if we're thinking about a network-based quantum computer, do we need to care about the granularity?

In a recent paper, Ying Li and Simon Benjamin suggest that, provided a module has at least a handful of qubits (six or so), it doesn't really help to add more, at least until we're talking about a thousand or more. Here the authors consider the total number of qubits needed to perform a task, and observe that the total stays almost the same whether you assume the qubits are bunched together into modules containing hundreds, or spread thinly over a larger number of smaller units.

This is true even if the error rates inside the modules are very low, so that almost all the errors occur on the interconnects. Intuitively one might think that in this case it's better to have a 'chunkier' machine with big modules and therefore fewer links, but surprisingly this is not so (or, more accurately, big modules are only weakly superior). The result is good news for experimentalists, since it means they can design their modules to be as complex, or as simple, as they like. For the Oxford ion-trappers, this means they'll target very simple units.
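To make the counting concrete, here is a minimal toy sketch of the granularity trade-off. It is not the model from the Li and Benjamin paper: the qubit total, the module sizes, and the simple chain-of-modules linking are all illustrative assumptions. It just shows how, for a fixed total number of qubits, shrinking the modules multiplies the number of noisy interconnects.

```python
import math

def module_stats(total_qubits: int, module_size: int) -> tuple[int, int]:
    """Return (number of modules, number of inter-module links),
    assuming modules are linked in a simple chain (one link per
    neighbouring pair). Real architectures may use richer topologies."""
    modules = math.ceil(total_qubits / module_size)
    links = max(modules - 1, 0)
    return modules, links

# Hypothetical total physical qubit budget for some fixed task.
N = 10_000

for m in (6, 100, 1000):
    modules, links = module_stats(N, m)
    print(f"module size {m:>4}: {modules:>5} modules, {links:>5} links")
```

Under these toy assumptions, six-qubit modules need roughly 1,666 links where hundred-qubit modules need only 99, even though the qubit total is identical. That is exactly why one would expect big modules to win when the links are the noisy part, and why the paper's conclusion (that the advantage is only weak) is surprising.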


Simon Benjamin

Leader of the Oxford quantum technology theory group