1. Create Two EPR Pairs
Before the circuit runs, current backend calibration data is used to select the best-performing qubits based on their T_1 and T_2 coherence times and single-qubit gate error rates. This ensures that the qubits chosen for the experiment minimize noise and maximize fidelity.
Start by generating two entangled qubit pairs, represented as qubit pairs (q_0, q_1) and (q_2, q_3). Each pair is prepared in a maximally entangled state known as a Bell state:
∣Φ+⟩ = 1/sqrt(2) * (∣00⟩ + ∣11⟩)
Apply a Hadamard gate to q_0:
H∣0⟩ = 1/sqrt(2) * (∣0⟩ + ∣1⟩)
Apply a controlled-X (CNOT) gate to entangle q_0 and q_1:
CNOT(H∣0⟩ ⊗ ∣0⟩) = 1/sqrt(2) * (∣00⟩ + ∣11⟩)
Repeat the above for q_2 and q_3 to create the second Bell pair.
At this stage, the state of the system is:
∣Φ+⟩_(01) ⊗ ∣Φ+⟩_(23) = (1/2) * (∣00⟩ + ∣11⟩)_(01) ⊗ (∣00⟩ + ∣11⟩)_(23)
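For reference, this step needs only four gates. The snippet below is a minimal sketch using Qiskit's QuantumCircuit API, with the same 6-qubit register layout as the full listing later in this document:
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

qr = QuantumRegister(6, "q")
cr = ClassicalRegister(6, "c")
qc = QuantumCircuit(qr, cr)

# First EPR pair on (q0, q1)
qc.h(qr[0])
qc.cx(qr[0], qr[1])

# Second EPR pair on (q2, q3)
qc.h(qr[2])
qc.cx(qr[2], qr[3])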
2. Prepare for Delayed-Choice Bell-State Measurement
The central feature of the experiment is the delayed-choice mechanism, where the decision to perform a Bell-state measurement is controlled by an additional qubit (q_4). This qubit will determine whether or not the entanglement swapping occurs.
Introduce a control qubit (q_4) initialized to ∣0⟩.
Add an auxiliary qubit (q_5) to assist in implementing the Bell-state measurement.
Apply a Hadamard gate to q_4 to create a superposition of decision paths, representing both the scenario where the Bell-state measurement is performed and where it is not:
H ∣0⟩ = 1/sqrt(2) * (∣0⟩ + ∣1⟩)
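A minimal sketch of this step, continuing the circuit built in the previous sketch (as in the full listing, a Hadamard is also applied to the auxiliary qubit q5):
# Put the delayed-choice decision into superposition
qc.h(qr[4])  # delayed-choice control qubit q4
qc.h(qr[5])  # auxiliary qubit q5 for the Bell-state measurement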
3. Perform Bell-State Measurement on Qubits 2 and 3
To perform the entanglement swapping, measure qubits q_2 and q_3 in the Bell basis, which consists of the following four states:
∣Φ+⟩ = 1/sqrt(2) * (∣00⟩ + ∣11⟩)
∣Φ−⟩ = 1/sqrt(2) * (∣00⟩ − ∣11⟩)
∣Ψ+⟩ = 1/sqrt(2) * (∣01⟩ + ∣10⟩)
∣Ψ−⟩ = 1/sqrt(2) * (∣01⟩ - ∣10⟩)
Apply a CNOT gate with q_2 as the control and q_3 as the target.
Apply a Hadamard gate to q_2 to transform the computational basis into the Bell basis.
Apply a conditional CCX (Toffoli) gate with q_4 (the delayed-choice qubit) and q_2 as controls and q_5 as the target, so that q_4 controls whether the Bell-state measurement (BSM) is finalized.
At this point, depending on the state of q_4, the measurement either projects qubits q_0 and q_3 onto an entangled state or leaves them unentangled.
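The steps above translate into the following sketch, again continuing the same circuit (the full listing later in this document realizes the delayed choice with a closely related gate sequence):
# Bell-basis rotation on (q2, q3): CNOT followed by a Hadamard on the control
qc.cx(qr[2], qr[3])
qc.h(qr[2])

# Delayed-choice Toffoli: q4 and q2 as controls, q5 as target,
# so the Bell-state measurement is only committed when q4 is |1>
qc.ccx(qr[4], qr[2], qr[5])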
4. Delayed-Choice Mechanism
The decision to entangle qubits q_0 and q_3 is retroactively determined by the measurement on q_4 and q_5. If q_4 is in state ∣1⟩, the Bell-state measurement is finalized, and qubits q_0 and q_3 become entangled. If q_4 is in ∣0⟩, no entanglement swapping occurs.
Mathematically, this process updates the state of the system as:
Final State:
Entangled (q_4 = ∣1⟩): ∣Φ+⟩_(03) = 1/sqrt(2) * (∣00⟩ + ∣11⟩)_(03)
Not entangled (q_4 = ∣0⟩): ∣Φ+⟩_(01) ⊗ ∣Φ+⟩_(23)
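In practice, this conditional structure is checked by post-selecting the measured counts on q_4. The helper below is a hypothetical sketch (the function name is introduced here for illustration); it assumes counts is the dictionary of 6-bit result strings produced by the run below, ordered little-endian in Qiskit's convention (leftmost character = q5, rightmost = q0):
def q0_q3_correlation(counts, q4_value):
    # Probability that q0 and q3 agree, restricted to shots where q4 == q4_value
    same = total = 0
    for bitstring, n in counts.items():
        if bitstring[1] != q4_value:      # index 1 holds q4's result
            continue
        total += n
        if bitstring[5] == bitstring[2]:  # q0 (index 5) equals q3 (index 2)
            same += n
    return same / total if total else float("nan")

# Expect q0_q3_correlation(counts, "1") > q0_q3_correlation(counts, "0") if swapping occurred.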
5. Measurement and Verification
Apply measurement gates to all qubits (q_0, q_1, q_2, q_3, q_4, q_5), storing results in the classical register. The results reveal the state of qubits q_0 and q_3, verifying whether or not they are entangled.
6. Results and Analysis
The results are extracted from the backend as bitstrings and saved to a JSON file. If qubits q_0 and q_3 are found in the same state (∣00⟩ or ∣11⟩) with high probability, this indicates that entanglement swapping was successful. A histogram of the measurement results is then generated.
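As a quick post-run check, the saved JSON can be reloaded to estimate how often q_0 and q_3 agree and to regenerate the histogram. This is a sketch under the same bit-ordering assumption as above; the file path is the one used in the listing below:
import json
import matplotlib.pyplot as plt
from qiskit.visualization import plot_histogram

with open('/Users/Documents/Delayed_Choice_Entanglement_Swapping_0.json') as f:
    raw_counts = json.load(f)["raw_counts"]

shots = sum(raw_counts.values())
p_agree = sum(n for b, n in raw_counts.items() if b[5] == b[2]) / shots
print(f"P(q0 == q3) = {p_agree:.3f}")

plot_histogram(raw_counts)
plt.title("Delayed-Choice Entanglement Swapping")
plt.show()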
Code:
# Imports
import numpy as np
import json
import pandas as pd
import logging
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, transpile
from qiskit_ibm_runtime import QiskitRuntimeService, Session, SamplerV2
from qiskit.visualization import plot_histogram
import matplotlib.pyplot as plt
# Logging Setup
logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s")
logger = logging.getLogger(__name__)
# Load Backend Calibration Data
def load_calibration_data(file_path):
    logger.info("Loading calibration data from %s", file_path)
    calibration_data = pd.read_csv(file_path)
    calibration_data.columns = calibration_data.columns.str.strip()
    logger.info("Calibration data loaded successfully")
    return calibration_data
# Parse Calibration Data
def select_best_qubits(calibration_data, n_qubits):
    logger.info("Selecting the best qubits based on T1, T2, and error rates")
    qubits_sorted = calibration_data.sort_values(by=["√x (sx) error", "T1 (us)", "T2 (us)"], ascending=[True, False, False])
    best_qubits = qubits_sorted["Qubit"].head(n_qubits).tolist()
    logger.info("Selected qubits: %s", best_qubits)
    return best_qubits
# Load Calibration Data
calibration_file = '/Users/Downloads/ibm_kyiv_calibrations_2025-01-15T19_06_40Z.csv'
calibration_data = load_calibration_data(calibration_file)
# Select Best Qubits for Experiment
num_qubits = 6 # 4 qubits + 2 control qubits for delayed choice
best_qubits = select_best_qubits(calibration_data, num_qubits)
# IBMQ
logger. info("Setting up IBM Q service")
service = QiskitRuntimeService(
channel="ibm_quantum",
instance="ibm-q/open/main",
token='YOUR_IBMQ_API_KEY_O-`'
)
backend_name = "ibm_kyiv"
backend = service.backend(backend_name)
logger. info("Backend selected: %s", backend_name)
# Quantum and Classical Registers
qr = QuantumRegister(num_qubits, "q")
cr = ClassicalRegister(num_qubits, "c")
qc = QuantumCircuit(qr, cr)
# Create Two EPR Pairs (Qubit Pairs: (0, 1) and (2, 3))
qc.h(qr[0])
qc.cx(qr[0], qr[1])
qc.h(qr[2])
qc.cx(qr[2], qr[3])
# Prepare a Delayed-Choice Bell-State Measurement (Victor's Measurement)
qc.h(qr[4])  # Delayed-choice control qubit
qc.h(qr[5])  # Auxiliary qubit for the measurement
qc.cx(qr[2], qr[3])  # First half of the Bell-basis rotation on qubits 2 and 3
qc.ccx(qr[4], qr[2], qr[5])  # Conditional entanglement step controlled by the delayed-choice qubit
# Perform Entanglement Swapping
qc.h(qr[4])  # Finalize the delayed-choice decision
qc.cx(qr[4], qr[2])  # CNOT from the delayed-choice qubit q4 onto q2
# Measurement for Verification
qc.measure(qr, cr)
# Transpile
logger. info("Transpiling the quantum circuit for the backend")
qc_transpiled = transpile(qc, backend=backend, optimization_level=3)
logger. info("Circuit transpilation complete")
# Execute
shots = 16384
with Session(service=service, backend=backend) as session:
    sampler = SamplerV2(session=session)
    logger.info("Executing the circuit on the backend")
    job = sampler.run([qc_transpiled], shots=shots)
    job_result = job.result()
# Extract counts from the classical register (SamplerV2 result format)
data_bin = job_result[0].data
if hasattr(data_bin, 'c'):
    counts = data_bin.c.get_counts()
else:
    raise KeyError("No classical register 'c' found in the result data.")
# Save JSON
results_data = {
    "experiment_name": "Delayed-Choice Entanglement Swapping",
    "raw_counts": counts
}
output_file = '/Users/Documents/Delayed_Choice_Entanglement_Swapping_0.json'
with open(output_file, "w") as f:
    json.dump(results_data, f, indent=4)
logger.info(f"Results saved to {output_file}")
# Visual
plot_histogram(counts)
plt.title("Delayed-Choice Entanglement Swapping")
plt.show()
////////////////////////
Code for All Visuals from Run Data
# Imports
import matplotlib.pyplot as plt
import json
import numpy as np
import seaborn as sns
from mpl_toolkits.mplot3d import Axes3D
# Load Run Results
file_path = '/Users/Documents/Delayed_Choice_Entanglement_Swapping_0.json'
with open(file_path, 'r') as f:
    data = json.load(f)
raw_counts = data["raw_counts"]
total_counts_v = sum(raw_counts.values())
# Convert counts to arrays
states = list(raw_counts.keys())
counts = np.array(list(raw_counts.values()))
total_counts = np.sum(counts)
probabilities = counts / total_counts_v
# Parse qubit states for analysis.
# Qiskit bitstrings are little-endian: the leftmost character is c5 (q5) and the
# rightmost is c0 (q0), so q0 sits at index 5, q3 at index 2, and q4 at index 1.
q0_states = [int(state[5]) for state in states]
q3_states = [int(state[2]) for state in states]
q4_states = [int(state[1]) for state in states]
# Define high-count states threshold
high_count_threshold = 1000
high_count_indices = np.where(counts > high_count_threshold)[0]
# State Distribution Histogram
plt.figure(figsize=(12, 6))
plt.bar(states, counts, color='blue')
plt.title("State Distribution Histogram")
plt.xlabel("States")
plt.ylabel("Counts")
plt.xticks(rotation=90, fontsize=8)
plt.tight_layout()
plt.show()
# Correlations Between Qubits q0 and q3
q0_q3_counts = {}
for state, count in zip(states, counts):
    pair = state[5] + state[2]  # q0 and q3 results (bitstring indices 5 and 2)
    q0_q3_counts[pair] = q0_q3_counts.get(pair, 0) + count
pairs = list(q0_q3_counts.keys())
pair_counts = np.array(list(q0_q3_counts.values()))
plt.figure(figsize=(6, 6))
plt.bar(pairs, pair_counts / total_counts, color='green')
plt.title("Correlations Between Qubits q0 and q3")
plt.xlabel("q0-q3 States")
plt.ylabel("Probability")
plt.show()
# Impact of Delayed-Choice Qubit q4
q4_counts = {"0": 0, "1": 0}
for state, count in zip(states, counts):
    q4 = state[1]  # q4 result (bitstring index 1)
    q4_counts[q4] += count
plt.figure(figsize=(6, 6))
plt.bar(q4_counts.keys(), np.array(list(q4_counts.values())) / total_counts, color='purple')
plt.title("Impact of Delayed-Choice Qubit q4")
plt.xlabel("q4 State")
plt.ylabel("Probability")
plt.show()
# Noise Analysis: High vs. Low Frequency States
high_count_threshold = 100
high_states = [(state, count) for state, count in zip(states, counts) if count > high_count_threshold]
low_states = [(state, count) for state, count in zip(states, counts) if count <= high_count_threshold]
high_states_labels = [state for state, _ in high_states]
high_states_values = np.array([count for _, count in high_states])
low_states_labels = [state for state, _ in low_states]
low_states_values = np.array([count for _, count in low_states])
plt.figure(figsize=(12, 6))
plt.bar(high_states_labels, high_states_values, color='red', label='High Frequency States')
plt.bar(low_states_labels, low_states_values, color='blue', label='Low Frequency States', alpha=0.7)
plt.title("Noise Analysis: High vs. Low Frequency States")
plt.xlabel("States")
plt.ylabel("Counts")
plt.legend()
plt.xticks(rotation=90, fontsize=8)
plt.tight_layout()
plt.show()
# Cumulative Probability Distribution
sorted_probs = np.sort(probabilities)[::-1]
cumulative_probs = np.cumsum(sorted_probs)
plt.figure(figsize=(10, 6))
plt.plot(range(len(cumulative_probs)), cumulative_probs, marker='o', linestyle='-')
plt.title("Cumulative Probability Distribution")
plt.xlabel("State Index (Sorted by Probability)")
plt.ylabel("Cumulative Probability")
plt.grid()
plt.show()
# Heatmap of State Probabilities
plt.figure(figsize=(12, 8))
heatmap_data = np.array(probabilities).reshape(-1, 1) # Convert to 2D for heatmap
sns.heatmap(heatmap_data, annot=False, cmap="viridis", cbar=True, xticklabels=["Probability"], yticklabels=states)
plt.title("Heatmap of State Probabilities")
plt.ylabel("States")
plt.xlabel("Probability")
plt.show()
# Histogram of High-Count States
high_count_threshold = 1000
high_count_states = [(state, count) for state, count in zip(states, counts) if count > high_count_threshold]
high_labels = [state for state, count in high_count_states]
high_values = [count for state, count in high_count_states]
plt.figure(figsize=(12, 6))
plt.bar(high_labels, high_values, color='orange')
plt.title("Histogram of High-Count States")
plt.xlabel("States")
plt.ylabel("Counts")
plt.xticks(rotation=90, fontsize=8)
plt.tight_layout()
plt.show()
# Probability of Delayed-Choice Correlations
q4_correlations = {"0": {"00": 0, "01": 0, "10": 0, "11": 0}, "1": {"00": 0, "01": 0, "10": 0, "11": 0}}
for state, count in zip(states, counts):
    q4 = state[1]  # q4 result (bitstring index 1)
    q0_q3 = state[5] + state[2]  # q0 and q3 results (bitstring indices 5 and 2)
    q4_correlations[q4][q0_q3] += count
# Normalize correlations
for q4 in q4_correlations:
    total = sum(q4_correlations[q4].values())
    if total == 0:
        continue
    for q0_q3 in q4_correlations[q4]:
        q4_correlations[q4][q0_q3] /= total
fig, axes = plt.subplots(1, 2, figsize=(12, 6), sharey=True)
for idx, (q4, correlations) in enumerate(q4_correlations.items()):
    axes[idx].bar(correlations.keys(), correlations.values(), color=['blue', 'green', 'red', 'purple'])
    axes[idx].set_title(f"Correlations for q4 = {q4}")
    axes[idx].set_xlabel("q0-q3 States")
    axes[idx].set_ylabel("Probability")
plt.tight_layout()
plt.show()
# 3D Bar Plot of State Probabilities
fig = plt.figure(figsize=(10, 7))
ax = fig.add_subplot(111, projection='3d')
x = np.arange(len(states))
y = np.zeros_like(x)
z = np.zeros_like(x)
dx = np.ones_like(x)
dy = np.ones_like(x)
dz = probabilities
ax.bar3d(x, y, z, dx, dy, dz, shade=True)
ax.set_title("3D Bar Plot of State Probabilities")
ax.set_xlabel("State Index")
ax.set_ylabel("Y Index")
ax.set_zlabel("Probability")
plt.show()
# 3D Cumulative Probability Surface
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111, projection='3d')
# Prepare data for the surface
sorted_probs = np.sort(probabilities)[::-1]
cumulative_probs = np.cumsum(sorted_probs)
x = np.arange(len(sorted_probs))
y = np.arange(2)
X, Y = np.meshgrid(x, y) # Create a 2D grid
Z = np.tile(cumulative_probs, (2, 1)) # Make Z 2D by repeating the cumulative_probs
ax.plot_surface(X, Y, Z, cmap='viridis', edgecolor='none')
ax.set_title("3D Cumulative Probability Surface")
ax.set_xlabel("State Index (Sorted by Probability)")
ax.set_ylabel("Y Index")
ax.set_zlabel("Cumulative Probability")
plt.show()
# 3D Delayed-Choice Qubit Correlations
fig = plt.figure(figsize=(10, 7))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(q0_states, q3_states, q4_states, c=probabilities, cmap='plasma', s=100)
ax.set_title("3D Delayed-Choice Qubit Correlations")
ax.set_xlabel("q0 State")
ax.set_ylabel("q3 State")
ax.set_zlabel("q4 State")
plt.show()
# 3D State Transition Path
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111, projection='3d')
# Extract high-count states and their neighbors
dominant_indices = np.where(probabilities > 0.05)[0] # Threshold for significant probabilities
x_transitions = []
y_transitions = []
z_transitions = []
weights = []
for idx in dominant_indices:
    state = states[idx]
    prob = probabilities[idx]
    # Generate neighboring transition states by flipping bits
    for flip in range(len(state)):
        neighbor = list(state)
        neighbor[flip] = '1' if neighbor[flip] == '0' else '0'  # Flip the bit
        neighbor_str = ''.join(neighbor)
        if neighbor_str in states:
            neighbor_idx = states.index(neighbor_str)
            x_transitions.append(idx)
            y_transitions.append(neighbor_idx)
            z_transitions.append(0)
            weights.append(prob * probabilities[neighbor_idx])  # Weighted influence
# Normalize weights for better visualization
weights = np.array(weights)
weights = weights / max(weights)
# Plot state transitions
ax.scatter(x_transitions, y_transitions, z_transitions, c=weights, cmap='coolwarm', s=100 * weights)
for i, (x, y, z, w) in enumerate(zip(x_transitions, y_transitions, z_transitions, weights)):
    ax.plot([x, x], [y, y], [z, w], 'k-', alpha=0.3)
ax.set_title("3D State Transition Path")
ax.set_xlabel("Dominant State Index")
ax.set_ylabel("Neighbor State Index")
ax.set_zlabel("Transition Weight")
plt.show()
# End