Commit 984b40c ("udpate", parent 003e408)
2 files changed: 255 additions & 13 deletions

README.md (61 additions & 2 deletions)
@@ -7,8 +7,10 @@ A winter school project on circuit-level decoding of surface codes using belief
| Level | Task | Description |
|-------|------|-------------|
| **Basic** | MLE Decoder | Reproduce the integer programming (MLE) decoder as the baseline |
-| **Challenge** | BP Decoder | Develop belief propagation based decoders and compare performance |
-| **Extension** | Atom Loss | Handle atom loss errors in neutral atom systems |
+| **Challenge** | Atom Loss | Handle atom loss errors in neutral atom systems |
+| **Extension** | QEC Visualization | https://github.com/nzy1997/qec-thrust |
+
+Note: we also want to explore the boundary of vibe coding, which may lead to a SciPost paper.

## Learning Objectives

@@ -24,6 +26,63 @@ After completing this project, students will:
- **Mathematics**: Linear algebra, probability theory
- **QEC Background**: Stabilizer formalism, surface codes (helpful but not required)

## Key Concepts

### Detection Events

In circuit-level quantum error correction, we don't use raw syndrome measurements directly. Instead, we use **detection events**: the XOR (difference) between consecutive syndrome measurements.

**Why detection events instead of raw syndromes?**

In code-capacity noise (a simplified model), syndromes directly indicate errors. But in circuit-level noise:
- Measurement errors exist and can randomly flip syndrome values
- A syndrome value of 1 could mean "real data error" or "measurement error"
- Detection events localize changes in space-time

```
Round 1 syndrome: [0, 0, 1, 0]
Round 2 syndrome: [0, 1, 1, 0]
                   ───────────
Detection event:  [0, 1, 0, 0]  ← Only the CHANGE matters
```

A detection event = 1 means "something happened in this space-time region" (a data qubit error or a measurement error). The decoder's job is to figure out which.

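In NumPy the XOR above is one line; the syndrome rows below are the toy values from the example, not real measurement data:

```python
import numpy as np

# Hypothetical syndrome history: rows are measurement rounds,
# columns are stabilizers (values match the example above).
syndromes = np.array([
    [0, 0, 1, 0],  # round 1
    [0, 1, 1, 0],  # round 2
], dtype=np.uint8)

# Detection events are the XOR of consecutive rounds.
detection_events = syndromes[1:] ^ syndromes[:-1]
print(detection_events)  # [[0 1 0 0]]
```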
### Observable Flip

An **observable flip** indicates whether the logical qubit's value changed from initialization to final measurement.

For a surface code doing Z-memory:
- The logical observable Z̄ is a product of Z operators along a path
- Initialize in |0⟩_L (an eigenstate of Z̄ with eigenvalue +1)
- If the final measurement gives Z̄ = -1, that's an observable flip → logical error

**The decoding problem:**

```
Physical errors occur during circuit execution
        ↓
Input: Detection events (what we observe)
        ↓
     Decoder
        ↓
Output: Predicted observable flip (0 or 1)
        ↓
Compare with actual observable flip

Match    → Success
Mismatch → Logical error
```

In the Detector Error Model (DEM), errors are annotated with which detectors and observables they affect:

```
error(0.001) D0 D1     # Triggers detectors 0,1 but NOT the observable
error(0.001) D2 D3 L0  # Triggers detectors 2,3 AND flips logical observable L0
```

Errors that include `L0` form logical error chains; these are what the decoder must identify.
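Lines in this format can be parsed without any QEC library; the helper below is a minimal sketch that only understands the simple `error(p) D... L...` form shown above (real DEM files, e.g. Stim's, contain more constructs):

```python
def parse_dem_line(line):
    """Parse a simple DEM error line into (probability, detectors, observables)."""
    line = line.split("#")[0].strip()  # drop trailing comments
    assert line.startswith("error(")
    prob_str, _, targets = line.partition(")")
    prob = float(prob_str[len("error("):])
    detectors = [int(t[1:]) for t in targets.split() if t.startswith("D")]
    observables = [int(t[1:]) for t in targets.split() if t.startswith("L")]
    return prob, detectors, observables

print(parse_dem_line("error(0.001) D2 D3 L0"))  # (0.001, [2, 3], [0])
```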
## Must-Read Papers

Before starting, please read these foundational papers:

benchmark/circuit_data/README.md (194 additions & 11 deletions)
@@ -328,6 +328,81 @@ num_detectors ≈ (d² - 1) × r

---

## Data Interpretation

This section provides actual results from the generated datasets, comparing standard circuit-level noise with atom loss scenarios.

### Standard Datasets (Stim-generated)

These datasets model circuit-level noise without atom loss:

| Distance | Rounds | p_error | Detectors | Logical Error Rate |
|----------|--------|---------|-----------|--------------------|
| 3 | 3 | 0.001 | 24 | **2.06%** |
| 3 | 3 | 0.005 | 24 | **10.31%** |
| 5 | 5 | 0.001 | 120 | **5.58%** |
| 5 | 5 | 0.005 | 120 | **17.97%** |

**Observations:**
- A higher physical error rate (p_error) leads to a higher logical error rate
- Larger-distance codes have more detectors but don't always have a lower LER (this depends on whether the physical error rate is below threshold)
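The Detectors column matches the estimate given earlier in this file, num_detectors ≈ (d² - 1) × r; a quick check:

```python
def num_detectors(d, r):
    """Approximate detector count for a distance-d, r-round memory experiment."""
    return (d * d - 1) * r

print(num_detectors(3, 3))  # 24
print(num_detectors(5, 5))  # 120
```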
350+
### Atom Loss Datasets (TensorQEC-generated)
351+
352+
These datasets add atom loss on top of depolarizing noise:
353+
354+
| Distance | p_error | p_loss_1q | p_loss_2q | LER | Avg Loss Rate |
355+
|----------|---------|-----------|-----------|-----|---------------|
356+
| 3 | 0.001 | 0% | 0% | **1.14%** | 0% |
357+
| 3 | 0.001 | 1% | 2% | **44.34%** | 21.5% |
358+
| 3 | 0.001 | 2% | 4% | **61.78%** | 38.8% |
359+
| 5 | 0.001 | 0% | 0% | **2.9%** | 0% |
360+
| 5 | 0.001 | 1% | 2% | **68.51%** | 33.3% |
361+
| 5 | 0.001 | 2% | 4% | **74.66%** | 55.9% |
362+
363+
### Key Insights

```
Impact of Atom Loss on Logical Error Rate
==========================================

Without Loss (p_loss=0):
  d=3: LER ≈ 1%   ████
  d=5: LER ≈ 3%   ████████████

With 1% Loss (p_loss=0.01):
  d=3: LER ≈ 44%  ████████████████████████████████████████████████████████████████████████████████████████
  d=5: LER ≈ 69%  ████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████

⚠️ 1% per-gate loss → ~40x increase in logical error rate!
```

**Why is atom loss so damaging?**

1. **Loss accumulates over rounds**: With a 2% loss probability per 2-qubit gate (p_loss_2q) and ~4 such gates per qubit per round:
   - Per-round loss probability ≈ 1 - (1 - 0.02)^4 ≈ 7.7%
   - After 3 rounds: ~21% of qubits lost
   - After 5 rounds: ~33% of qubits lost

2. **Lost qubits inject random errors**: When an atom is lost, its state becomes completely unknown, which is equivalent to a random Pauli error (a depolarizing channel).

3. **Error correction breaks down**: Surface codes rely on local stabilizer measurements. When qubits are lost, stabilizers become incomplete, reducing the code's error correction capability.

4. **Larger codes suffer more**: More qubits means more opportunities for loss. A d=5 code has 25 data qubits vs 9 for d=3.

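The accumulation figures in point 1 follow from independent per-gate loss; a quick check (the 4-gates-per-round count is the README's own estimate):

```python
p_gate = 0.02        # per-2-qubit-gate loss probability (p_loss_2q)
gates_per_round = 4  # approximate 2-qubit gates touching each qubit per round

p_round = 1 - (1 - p_gate) ** gates_per_round  # ≈ 0.0776 (~7.7%)
after_3 = 1 - (1 - p_round) ** 3               # ≈ 0.215  (~21.5%)
after_5 = 1 - (1 - p_round) ** 5               # ≈ 0.332  (~33.2%)
print(p_round, after_3, after_5)
```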
### Why Loss-Aware Decoding Matters

| Decoding Strategy | Description | Expected LER with 1% Loss |
|-------------------|-------------|---------------------------|
| Naive (ignore loss) | Treat lost qubits as normal | ~45-70% (baseline) |
| Erasure decoding | Mark lost qubits as erasures | ~20-40% (2x better) |
| Supercheck construction | Combine stabilizers sharing lost qubits | ~10-20% (3-4x better) |
| Adaptive decoding | Modify decoder graph per-shot | ~5-15% (5-10x better) |

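The supercheck idea can be sketched on a parity-check matrix: XORing two stabilizer rows that both touch a lost qubit yields a combined check with no support on that qubit. The matrix below is a toy example, not a real surface-code layout:

```python
import numpy as np

# Toy parity-check matrix: rows = stabilizers, columns = qubits.
H = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
], dtype=np.uint8)

lost_qubit = 2  # suppose qubit 2 is lost

# Stabilizers 1 and 2 both involve the lost qubit; their sum mod 2
# is a "supercheck" with no support on qubit 2.
rows = np.where(H[:, lost_qubit] == 1)[0]
supercheck = H[rows].sum(axis=0) % 2
print(supercheck.tolist())  # [0, 1, 0, 1] -> supports qubits 1 and 3 only
```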
**Reference**: [arXiv:2412.07841](https://arxiv.org/abs/2412.07841)

---

## Usage Examples
### 1. Load Data in Python
@@ -504,32 +579,140 @@ Each atom loss dataset includes:

### Using Atom Loss Data

#### Basic Loading

```python
import numpy as np
import json

# Load all components for an atom loss dataset
prefix = "atomloss_d5_r5_p0.0010_loss0.0100"

# Load detection events
with open(f"{prefix}_events.01") as f:
    events = np.array([[int(c) for c in line.strip()] for line in f])

# Load loss mask
with open(f"{prefix}_loss.01") as f:
    loss_mask = np.array([[int(c) for c in line.strip()] for line in f])

# Load labels (ground truth)
with open(f"{prefix}_obs.01") as f:
    labels = np.array([int(line.strip()) for line in f])

# Load metadata
with open(f"{prefix}_metadata.json") as f:
    meta = json.load(f)

print(f"Dataset: d={meta['distance']}, p={meta['p_error']}, p_loss={meta['p_loss_1q']}")
print(f"Shots: {meta['num_shots']}, LER: {meta['logical_error_rate']:.2%}")
print(f"Avg qubits lost per shot: {meta['avg_loss_rate']:.1%}")
print(f"Events shape: {events.shape}")        # (10000, 120) for d=5
print(f"Loss mask shape: {loss_mask.shape}")  # (10000, 25) for d=5
```

#### Complete Decoding Example

```python
import numpy as np
import json

def load_atom_loss_dataset(prefix):
    """Load all components of an atom loss dataset."""
    with open(f"{prefix}_events.01") as f:
        events = np.array([[int(c) for c in line.strip()] for line in f], dtype=np.uint8)
    with open(f"{prefix}_loss.01") as f:
        loss_mask = np.array([[int(c) for c in line.strip()] for line in f], dtype=np.uint8)
    with open(f"{prefix}_obs.01") as f:
        labels = np.array([int(line.strip()) for line in f], dtype=np.uint8)
    with open(f"{prefix}_metadata.json") as f:
        meta = json.load(f)
    return events, loss_mask, labels, meta

def naive_decode(events, loss_mask):
    """Naive decoder: predict from the parity of the detection events (baseline).

    loss_mask is ignored. This is a very simple decoder - real decoders use MWPM or BP.
    """
    return (np.sum(events, axis=1) % 2).astype(np.uint8)

def loss_aware_decode(events, loss_mask, meta):
    """
    Loss-aware decoder: modify decoding based on which qubits were lost.

    This is a simplified example. A real implementation would:
    1. Identify which stabilizers involve lost qubits
    2. Construct superchecks by combining affected stabilizers
    3. Modify the decoder graph weights accordingly
    """
    predictions = np.zeros(len(events), dtype=np.uint8)

    for i in range(len(events)):
        lost = loss_mask[i]
        ev = events[i]

        # Count lost qubits
        n_lost = np.sum(lost)
        n_qubits = len(lost)

        # If too many qubits are lost, predict at random
        if n_lost > n_qubits // 2:
            predictions[i] = np.random.randint(0, 2)
        else:
            # Simple heuristic: fall back to detection-event parity
            predictions[i] = np.sum(ev) % 2

    return predictions

# Example usage
prefix = "atomloss_d5_r5_p0.0010_loss0.0100"
events, loss_mask, labels, meta = load_atom_loss_dataset(prefix)

# Compare naive vs loss-aware
naive_preds = naive_decode(events, loss_mask)
loss_aware_preds = loss_aware_decode(events, loss_mask, meta)

naive_ler = np.mean(naive_preds != labels)
loss_aware_ler = np.mean(loss_aware_preds != labels)

print(f"Naive LER: {naive_ler:.2%}")
print(f"Loss-aware LER: {loss_aware_ler:.2%}")
print(f"Ground truth: {meta['logical_error_rate']:.2%}")
```

#### Analyzing Loss Patterns

```python
import numpy as np

# Load data
prefix = "atomloss_d5_r5_p0.0010_loss0.0100"
with open(f"{prefix}_loss.01") as f:
    loss_mask = np.array([[int(c) for c in line.strip()] for line in f])
with open(f"{prefix}_obs.01") as f:
    labels = np.array([int(line.strip()) for line in f])

# Analyze loss statistics
loss_per_shot = np.sum(loss_mask, axis=1)
print(f"Average qubits lost: {np.mean(loss_per_shot):.2f}")
print(f"Std dev: {np.std(loss_per_shot):.2f}")
print(f"Max lost in single shot: {np.max(loss_per_shot)}")
print(f"Shots with no loss: {np.sum(loss_per_shot == 0)}")

# Correlation between loss count and logical errors
high_loss_mask = loss_per_shot > np.median(loss_per_shot)
ler_high_loss = np.mean(labels[high_loss_mask])
ler_low_loss = np.mean(labels[~high_loss_mask])
print(f"LER with high loss: {ler_high_loss:.2%}")
print(f"LER with low loss: {ler_low_loss:.2%}")
```

### Loss-Aware Decoding Strategies

| Strategy | Description | Implementation |
|----------|-------------|----------------|
| **Naive** | Ignore loss information | Standard MWPM/BP decoder |
| **Erasure** | Mark lost qubits as erasures | Set a high prior error probability for lost qubits |
| **Supercheck** | Combine stabilizers sharing lost qubits | Modify the Tanner graph on the fly |
| **Adaptive** | Fully modify the decoder per shot | Recompute the DEM for each loss pattern |
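As a sketch of the erasure strategy: a BP decoder assigns each error mechanism a prior probability, and mechanisms on lost qubits can have their prior raised toward 1/2 (maximal uncertainty for a bit flip). The function and numbers below are illustrative, not the API of any particular decoder:

```python
import numpy as np

def erasure_adjusted_priors(base_priors, on_lost_qubit, erasure_prior=0.5):
    """Raise the prior error probability for mechanisms on lost qubits."""
    priors = np.array(base_priors, dtype=float)
    priors[np.asarray(on_lost_qubit, dtype=bool)] = erasure_prior
    return priors

base = [0.001, 0.001, 0.001, 0.001]  # DEM error probabilities
lost = [False, True, True, False]    # mechanisms touching a lost qubit
print(erasure_adjusted_priors(base, lost).tolist())  # [0.001, 0.5, 0.5, 0.001]
```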

Reference: [arXiv:2412.07841](https://arxiv.org/abs/2412.07841) - Quantum Error Correction resilient against Atom Loss