Photons can now be entangled not only across space but also across time. Using a tiny quantum dot as a light source, scientists at the University of Copenhagen have linked photons emitted at different moments – like threading light through time – creating a new form of time-separated entanglement. The method could drastically reduce the hardware needed for quantum computers – and even keep quantum information alive after the original particle has disappeared.
Quantum computers promise breakthroughs in chemistry, communication and materials. In principle, such a computer could solve in a few hours what today’s fastest supercomputers would need millions of years for.
The crucial ingredient is entanglement – a quantum connection that makes particles behave as a single whole, no matter how far apart they are. Quantum computers work only because they exploit this entanglement and make their particles cooperate as one system.
“There are many ways to build such a machine. Most approaches use trapped atoms or ions, which can entangle particles more directly – but they are slow and difficult to scale,” says Peter Lodahl, Professor and Group Leader of the Quantum Photonics Group at the Niels Bohr Institute, University of Copenhagen, and founder and Chief Quantum Officer at Sparrow Quantum ApS.
A new study in Nature Communications shows how a fusion method for photonic quantum computers can overcome this barrier.
When light learns to remember
Photons are fast, easy to guide in optical circuits and ideal for sending information over long distances – but they do not interact in a stable or predictable way, which makes it difficult to entangle them. At the University of Copenhagen, it took a cross-disciplinary team – from nanofabrication to quantum optics – to make it work in practice.
“We use a single electron spin in the quantum dot to control the emitted photons. The first photon is sent through a fibre loop so that it can meet the next one when both reach a beamsplitter,” Lodahl explains. “In this way, we fuse the entanglement between the electron’s quantum states – its magnetic spin – across time.”
Even more striking, the team showed that this process keeps information alive longer than the spin’s natural memory. As Lodahl puts it: “You can think of it as a kind of quantum memory – information is stored in the light for a moment and then transferred back into the spin. The information is essentially teleported into the new spin state, so it continues beyond the original particle.”
Large quantum networks can be built using just one light source. In today’s all-optical approaches, thousands of separate photon sources may be needed to scale up. “Here,” Lodahl emphasises, “we can achieve the same with a single quantum dot that we simply reuse again and again.”
Challenge: when light refuses to cooperate
Quantum computers only work if their particles act as a team. But in photonics, where the players are photons, that is a nightmare. Photons are hard to control: when they meet, they sometimes become entangled and sometimes not – a probabilistic, random process governed by the laws of quantum mechanics.
“That makes it extremely difficult to get two of them to do anything together reliably – and these two-particle gates are the basic building blocks of quantum computers,” says Peter Lodahl.
Because of this, the researchers had to rethink the entire architecture of photonic quantum computing. A promising alternative is fusion-based photonic quantum computing. Instead of trying to realise deterministic two-qubit photonic gates, small entangled clusters of photons are created and then fused into larger networks.
The problem? In most laboratories, creating these initial entangled states is still probabilistic – like rolling dice each time. Sometimes it works, often it does not. “It is a bit like building a house of cards where each new card only sticks one time out of ten,” Lodahl explains.
That makes scaling painfully slow. As the Copenhagen team points out, “the real challenge is to generate these first resource states reliably. Relying entirely on probabilistic sources greatly increases the amount of hardware required, and it quickly becomes unmanageable.” Researchers around the world have spent years trying to overcome this bottleneck – and this breakthrough shows a new way forward.
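To put the "one card in ten" analogy into numbers, here is an illustrative back-of-envelope sketch (the 10% success probability comes from the analogy, not from any measured figure): without multiplexing, chaining several probabilistic steps requires all of them to succeed in the same run, so the expected number of attempts grows exponentially with the number of steps.

```python
# Illustrative estimate of the "house of cards" overhead (assumed numbers,
# not results from the paper).
p = 0.1   # assumed success probability per probabilistic entangling step

for n_steps in [1, 2, 5, 10]:
    # Without multiplexing, all n steps must succeed in a single run,
    # so the expected number of full attempts is 1 / p**n.
    expected_attempts = 1 / p**n_steps
    print(f"{n_steps} chained steps: ~{expected_attempts:.0f} attempts on average")
```

A deterministic source collapses every row of this table to a single attempt per step – which is precisely the advantage of the on-demand quantum-dot approach described below.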
Breakthrough: first entanglement across time
The breakthrough comes from using quantum light sources – emitters – tiny devices such as semiconductor quantum dots that can emit entangled photons on demand. No dice rolls, no maybes: every time you press the button, you get the pair you need.
Until now, researchers have only been able to entangle photons created at the same time. But scaling that approach means juggling many light sources at once – a task that quickly turns into a technical circus.

This study shows a new path: temporal fusion. Instead of using many emitters, the same one is reused again and again, fusing photons created at different moments. The result is an unusual form of time-separated entanglement – a way of stitching light together across moments – that makes scaling far simpler.
As Peter Lodahl puts it: “This brings theory into practice. Reliable light sources and temporal fusion together may be the crucial ingredients for building truly large-scale photonic quantum technologies.”
He also notes that the result builds on many years of teamwork – from constructing the experimental set-up to refining the protocols.
A miniature factory of light and matter
To make temporal fusion work, the Copenhagen team used a single semiconductor quantum dot – designed, fabricated and tested through close collaboration between physicists, engineers and quantum optics specialists.
Think of it as a miniature factory in which matter and light strike a deal. Each flip of the electron’s spin stamps out a photon whose timing is locked to that spin – turning the quantum dot into a translator between the world of matter and the world of light.
“This spin–photon interface is the essential building block,” Lodahl explains. “It enables us to create the small entangled states that can be fused into larger quantum networks.”
To keep the process efficient, the quantum dot is embedded in a photonic crystal waveguide – a microscopic light channel that guides each photon into a clean optical highway with almost no loss. The trapped electron can occupy two spin states, the quantum equivalents of 0 and 1. Magnetic fields and laser pulses flip or reset the spin, while special techniques suppress noise from surrounding atoms so that the spin remains stable long enough to perform its task.
“As a result,” Lodahl says, “we can generate spin–photon entanglement on demand. It is like tying an electron directly to a photon. The photon’s quantum bit is simply whether it comes out early or late – a binary tick-tock locked to the electron’s spin.”
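The time-bin picture in this quote can be sketched in a few lines of linear algebra. This is a toy model, not the device physics: the assumption here is that photon emission acts like a CNOT gate copying the spin's state onto the photon's early/late qubit, so a spin in superposition produces a spin–photon Bell state.

```python
import numpy as np

# Toy model (an illustrative assumption, not the actual light-matter
# interaction): emission acts as a CNOT with the spin as control and the
# photon's early/late time-bin qubit as target.
ket0 = np.array([1.0, 0.0])            # spin up  / photon "early"

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: spin superposition
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

spin = H @ ket0                        # (|up> + |down>) / sqrt(2)
state = CNOT @ np.kron(spin, ket0)     # emit one photon: spin entangles with it

# The result is the Bell state (|up, early> + |down, late>) / sqrt(2):
# amplitude 1/sqrt(2) on the first and last basis states, zero elsewhere.
print(np.round(state, 3))
```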
When photons born at different times meet
Once the source is in place, the challenge is to make photons created at different times meet as if they were twins. Each emission cycle lasts only a few hundred nanoseconds, so the photons naturally emerge one after another.
“To make them meet, we send the first photon into a fibre delay – essentially a waiting loop where it pauses until its partner is ready,” Lodahl explains. “Then both photons arrive together at the beamsplitter.”
At that instant, the two photons interfere and undergo a so-called probabilistic fusion measurement – a measurement process that binds two quantum states together. If the detectors click in the right way, the fusion succeeds, and a new entangled spin state is created.
When particles shake hands with their own past
With the platform in place, the team set out to prove something never shown before: that photons created at different times can be used to entangle a single spin with its own earlier state – not just across space, but across time itself.
“With the spin–photon interface we just described, we can generate two rounds of spin–photon entanglement in a row – each photon tagged as either early or late, depending on the orientation of the spin,” Lodahl explains.
The two photons are born just 300 nanoseconds apart – a tiny fraction of a millisecond. The first is sent into a fibre delay line until the second photon is ready. Both then arrive together at the beamsplitter. If the detectors fire in the right way, the two states fuse.
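A quick sanity check on that delay line (the fibre index here is a typical assumed value, not a figure from the paper): light in silica fibre travels at roughly c/1.47, so holding the first photon for 300 nanoseconds takes on the order of 60 metres of fibre.

```python
# Rough estimate of the fibre delay length (assumed numbers for illustration)
c = 3.0e8                 # speed of light in vacuum, m/s
n_fibre = 1.47            # typical refractive index of silica fibre (assumption)
delay = 300e-9            # 300 ns separation between the two photons

v = c / n_fibre           # group velocity in the fibre, roughly 2e8 m/s
length_m = v * delay      # fibre length needed to store the first photon
print(f"~{length_m:.0f} m of fibre")   # on the order of 61 m
```

The short emission cycle is what keeps this practical: a slower atomic emitter would need kilometres of fibre for the same trick.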
The key twist? A single spin can reach back and become entangled with itself at an earlier moment in time. This reduces the required hardware and improves the efficiency of quantum architectures.
Teleportation in time: quantum information that outlives itself
To prove that the fusion had succeeded, the researchers investigated how the two spins behaved when measured in different directions – similar to flipping two coins in different ways and checking whether they still land the same.
“The patterns matched far more often than chance would allow,” Lodahl explains.
In more technical terms, the team checked the particles’ behaviour along three directions of quantum measurement. In every case, the results were far stronger than random chance – well above the 50% line that separates genuine entanglement from ordinary noise.
And then came the real surprise: it was not the spin itself that stored the memory of the entangled state – it was the photons. This meant that the quantum information actually outlived the spin’s own natural memory span – its coherence time, the period during which a quantum system remembers its state.
“It is almost like teleportation in time,” Lodahl says. “The quantum information does not stay in the same particle, because the spin is reset, but it moves smoothly from spin to photon and back again. In this way, the information can survive the memory and reappear later.”
This finding was only possible thanks to the team’s painstaking measurements, carried out across countless experimental runs to reveal the fragile signal hidden in the noise.
How precise is temporal fusion?
In practice, the photons make the entanglement last longer than the spin could manage on its own. But how well does the method really work? The team measured the error rates – basically how often the system gave the wrong answer.
“Some of the noise comes from the spin’s initialisation and readout, so it is not fundamental,” Lodahl explains. “In the ZZ basis, the error rate was quite low – about 17% – whereas in the XX and YY bases it was higher, roughly one third of the time.”
For comparison, a fault-tolerant quantum computer – one that can automatically correct its own mistakes – needs error rates below roughly 1%. Today’s results are therefore still more than an order of magnitude away.
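To see how these error rates relate to the 50% entanglement threshold mentioned earlier, one can plug them into a textbook Bell-state fidelity witness (a standard formula – the paper's own analysis may differ): an error rate ε in a measurement basis corresponds to a correlator of 1 − 2ε, and the fidelity combines the three correlators.

```python
# Plugging the quoted error rates into a standard Bell-state fidelity
# witness (a textbook formula, not necessarily the paper's own analysis).
eps = {"ZZ": 0.17, "XX": 1 / 3, "YY": 1 / 3}   # error rates quoted above

# An error rate eps in a basis corresponds to a correlator of 1 - 2*eps.
corr = {basis: 1 - 2 * e for basis, e in eps.items()}

# For a Bell state, F = (1 + <XX> +/- <YY> + <ZZ>) / 4; magnitudes are used
# here because the signs depend on which Bell state is targeted.
F = (1 + sum(abs(c) for c in corr.values())) / 4
print(f"fidelity ~ {F:.2f}")   # about 0.58, above the 0.5 entanglement bound
```

Even with today's noise, the estimate lands clearly above 0.5 – genuine entanglement – while also showing how far there is to go to reach fault tolerance.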
In everyday terms, imagine a phone conversation in which every third word drops out (not an unusual phenomenon!) – you can follow the gist, but not with the reliability needed to trust it fully.
The good news is that the noise does not come from the photons or the fusion method itself but from the way the spins are prepared and read out. Future techniques can therefore realistically solve this problem – and the team’s younger researchers are already working on new spin-control methods to do just that.
Light is not the problem
Careful analysis pinpointed exactly where the noise arises – and it is not in the light itself.
“The noise primarily stems from how we prepare and read the spin – not from the fusion protocol,” Lodahl explains. “This matters because it means that with better equipment and more precise spin control, we can realistically bring the error rate down to a fault-tolerant level.”
Overall, the experiment shows that temporal fusion with quantum dots is not only possible but offers clear advantages: less hardware, more stable entanglement and genuine quantum links across time.
“This proof of principle marks a milestone,” Lodahl says. “It shows that solid-state emitters really can carry out the key operations needed for fusion-based photonic quantum computing.”
One dot that can replace thousands of light sources
Achieving temporal fusion with just one quantum dot is not only a technical milestone – it changes how we think scalable photonic quantum computers can be built.
“For full-scale quantum computing, we will ultimately need millions of physical qubits – photons, in our case,” Lodahl says. “But being able to reuse a single quantum dot to produce many photons is a huge resource. The same quantum dot can generate and fuse states again and again across time, replacing thousands of separate sources. This gives us a realistic plan for scaling quantum computers far more efficiently than previously thought possible.”
The group has worked toward this vision for years, and this demonstration shows how one carefully engineered quantum dot can perform the task of many separate sources.
For the first time, the two essential ingredients of fusion-based quantum computing have been brought together in one system: deterministically generating entangled resource states and fusing them into spin networks.
The implications are immediate. The solid-state quantum dot platform developed at the University of Copenhagen has already spun out into Sparrow Quantum ApS, a company commercialising these high-performance single-photon sources.
As Lodahl emphasises:
“The beauty of the photonic approach is its simplicity. You only need a few components, and by repeating just two steps – generating resource states and fusing them – you can already weave vast entangled networks. That makes this approach so powerful.”
From the laboratory to the quantum industry
Turning a physics experiment into working technology takes more than clever ideas – it takes a coordinated team and, ultimately, industrial strength. That is precisely the mission of Sparrow Quantum ApS.
Built on years of work by the Quantum Photonics Group – from students and postdoctoral researchers in the cleanroom to senior scientists fine-tuning the optics – Sparrow Quantum now aims to turn quantum-dot photon sources from a laboratory curiosity into a reliable product.
The goal is to turn the fragile building blocks first demonstrated in the laboratory into the backbone of scalable photonic networks.
“Solid-state systems such as quantum dots also have clear advantages over atomic platforms,” Lodahl explains. “They emit photons faster and with shorter lifetimes, allowing us to use very compact, low-loss delay lines. And with active spin control, we can reconfigure the system as needed to generate many different types of entangled states.”
Right now, the team’s platform operates with only four qubits – the quantum version of the computer’s 0/1 bit – like the first flicker of a light bulb. But millions will be needed to power a full quantum computer, equivalent to lighting up an entire city. The design, however, is already compatible with much larger and more complex entangled networks.
“Physicists call these Greenberger-Horne-Zeilinger (GHZ) or cluster states,” Lodahl says. “You can think of them as different weaving patterns for tying many photons together at once. With better quantum dots and chip integration, we can really start boosting efficiency and pave the way for scalability.”
From theory to technology
The hurdles are real. Today’s noise levels are still too high for fault-tolerant quantum computing, and scaling benchmarks will require larger entangled resource states – at least seven qubits as a first milestone. But for the first time, the way forward is clear.
“With better devices and improved spin control, we can realistically drive the error rate down into the fault-tolerant regime,” Lodahl says. “With strain-free quantum dots that suppress spin noise, and with our sources integrated onto silicon or lithium niobate photonic circuits operating at telecommunication wavelengths, the roadmap to scaling is clearly within reach.”
In essence, the experiment has shown that a solid-state platform can realise the two central ingredients of fusion-based photonic quantum computing – reliably generating entangled resource states and fusing them into networks. This change is crucial because it moves quantum photonics out of the realm of thought experiments and into technology development.
As Lodahl concludes:
“With Sparrow Quantum ApS pushing this forward, it is no longer just physics on paper. This is the first real step from a laboratory experiment to practical quantum hardware – a shift from idea to industry – turning the once-speculative idea of stitching light across time into a working technology.”
