In the ever-evolving landscape of computing technology, quantum computers represent the frontier of what might be possible. However, this future remains on the horizon due to the significant challenge of error rates in quantum systems.
These machines promise to revolutionize fields such as computer science, medicine, business, chemistry, and physics by solving problems that are currently beyond the reach of classical computers. For now, however, ever-present errors mean their results cannot be fully trusted.
Researchers from Caltech have taken a significant step towards addressing this obstacle. Their recent paper outlines a unique method that allows classical computers to assess the error rates in quantum machines without the need for full-scale simulations.
This innovation circumvents the impracticality of simulating complex quantum systems on traditional computers — a task that could otherwise take years.
Adam Shaw, the study’s lead author and a graduate student in Manuel Endres’s lab, highlights the importance of understanding and mitigating errors in quantum computing.
“In a perfect world, we want to reduce these errors. That’s the dream of our field,” Shaw explains. “But in the meantime, we need to better understand the errors facing our system, so we can work to mitigate them. That motivated us to come up with a new approach for estimating the success of our system.”
This drive to understand and counteract errors led the team to develop their method, which estimates the success of quantum systems far more efficiently than direct simulation.
At the heart of the study is the use of a quantum simulator, a simpler form of quantum computer designed for specific tasks, involving individually controlled Rydberg atoms manipulated by lasers.
A key aspect of this simulator, and indeed all quantum computers, is the phenomenon of entanglement, where atoms become interconnected in a way that amplifies the machine’s computing power.
The complexity and potential of quantum computing come into sharper focus when considering entanglement. Previous work by Endres, Shaw, and colleagues demonstrated how the growth of entanglement can lead to unpredictable outcomes, similar to the butterfly effect in chaos theory.
This characteristic is what enables quantum computers to potentially solve complex problems, such as cryptography challenges, far quicker than classical computers.
However, scaling up quantum computers presents its own set of challenges. Beyond a certain number of qubits (the quantum bits that represent information), these systems become too complex for classical computers to simulate exactly.
Shaw remarks on the difficulty of simulating systems beyond 30 qubits, noting the exponential increase in complexity with more qubits and entanglement.
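To appreciate the scale of the problem, consider only the memory a classical computer needs to store an n-qubit state exactly: each added qubit doubles the number of complex amplitudes. A quick back-of-the-envelope Python calculation (assuming 16 bytes per double-precision complex amplitude) makes the wall at around 30 qubits concrete:

```python
# Memory required to store the full state vector of an n-qubit system.
# Each of the 2**n complex amplitudes takes 16 bytes at double precision.
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 40, 60):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n} qubits: {gib:,.0f} GiB")

# 30 qubits: 16 GiB              (feasible on a workstation)
# 40 qubits: 16,384 GiB          (a large supercomputer)
# 60 qubits: 17,179,869,184 GiB  (far beyond any classical machine)
```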
The Caltech team’s quantum simulator, with its 60 qubits, exemplifies a system that defies exact simulation by classical means. Addressing this, the researchers devised a method akin to using varying brush sizes in painting.
“Let’s say our quantum computer is painting the Mona Lisa as an analogy. The quantum computer can paint very efficiently and, in theory, perfectly, but it makes errors that smear out the paint in parts of the painting. It’s like the quantum computer has shaky hands. To quantify these errors, we want our classical computer to simulate what the quantum computer has done, but our Mona Lisa would be too complex for it. It’s as if the classical computers only have giant brushes or rollers and can’t capture the finer details,” Shaw explained.
“Instead, we have many classical computers paint the same thing with progressively finer and finer brushes, and then we squint our eyes and estimate what it would have looked like if they were perfect. Then we use that to compare against the quantum computer and estimate its errors. With many cross-checks, we were able to show this ‘squinting’ is mathematically sound and gives the answer quite accurately,” Shaw concluded.
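In computational terms, the progressively finer "brushes" are classical approximation methods whose accuracy is set by a tunable parameter; in the team's case, matrix product state simulations run at increasing bond dimension. Below is a minimal Python sketch of the "squinting" step only, with made-up fidelity numbers and a simple linear-in-1/chi extrapolation chosen for illustration; the paper's actual fitting and cross-checking procedure is more involved.

```python
import numpy as np

# Hypothetical fidelity estimates from classical simulations of increasing
# accuracy (e.g., matrix-product-state runs at bond dimension chi).
# Larger chi = a "finer brush"; these values are invented for illustration.
chi = np.array([64, 128, 256, 512, 1024])
fidelity_estimate = np.array([0.062, 0.075, 0.082, 0.086, 0.088])

# "Squinting": fit the estimates against 1/chi and extrapolate to 1/chi -> 0,
# i.e., to a hypothetical perfectly accurate classical simulation.
slope, intercept = np.polyfit(1.0 / chi, fidelity_estimate, deg=1)
print(f"Extrapolated fidelity at infinite accuracy: {intercept:.3f}")
```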
Using this approach, the team estimated that their 60-qubit simulator operates with a 91 percent error rate, or 9 percent accuracy. While that figure might seem low at first glance, it represents a significant achievement in the field of quantum computing.
For perspective, a 2019 experiment by Google claimed their quantum computer outperformed classical computers with an accuracy of just 0.3 percent, though it involved a different system.
“We now have a benchmark for analyzing the errors in quantum computing systems. That means that as we make improvements to the hardware, we can measure how well the improvements worked,” Shaw says.
This benchmark also allows for the measurement of entanglement in quantum simulations, providing another metric for assessing the success of these advanced computing systems.
In summary, the Caltech team has made a pivotal advance in quantum computing by developing a method to estimate the error rates of quantum machines without the need for exhaustive classical simulations.
Their innovation highlights the complex nature of quantum errors while setting a new benchmark for evaluating and improving quantum computing systems.
As we stand on the cusp of a new era in computing, the work of Adam Shaw and his colleagues marks a significant step forward, paving the way for quantum computers that can tackle some of the most intractable problems across many fields. With this progress, the dream of harnessing quantum computing’s full potential, and the revolutionary changes it promises for data processing and analysis, moves closer to reality.
The full study was published in the journal Nature.