Why can't there be an error correcting code with fewer than 5 qubits?
I read about 9-qubit, 7-qubit and 5-qubit error correcting codes lately. But why can there not be a quantum error correcting code with fewer than 5 qubits?
quantum-error-correction
asked Nov 23 at 12:53 by Adex; edited Nov 23 at 14:04 by DaftWullie

Removed the comment with a false claim. Refer to Niel's accepted answer. – Jalex Stark, Nov 26 at 18:29
4 Answers
Accepted answer (score 13)
A proof that you need at least 5 qubits (or qudits)
Here is a proof that any single-error correcting (i.e., distance 3) quantum error correcting code has at least 5 qubits. In fact, this generalises to qudits of any dimension $d$, and any quantum error correcting code protecting one or more qudits of dimension $d$.
(As Felix Huber notes, the original proof that you require at least 5 qubits is due to the Knill--Laflamme article [arXiv:quant-ph/9604034] which set out the Knill--Laflamme conditions: the following is the proof technique which is more commonly used nowadays.)
Any quantum error correcting code which can correct $t$ unknown errors, can also correct up to $2t$ erasure errors (where we simply lose some qubit, or it becomes completely depolarised, or similar) if the locations of the erased qubits are known. [1, Sec. III A]*.
Slightly more generally, a quantum error correcting code of distance $d$ can tolerate $d-1$ erasure errors. For example, while the $[\![4,2,2]\!]$ code can't correct any errors at all, in essence because it can tell an error has happened (and even which type of error) but not which qubit it has happened to, that same code can protect against a single erasure error (because by hypothesis we know precisely where the error occurs in this case).
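To make the $[\![4,2,2]\!]$ example concrete: taking its stabilizer generators to be $XXXX$ and $ZZZZ$ (a standard presentation, stated here as an assumption for illustration), a short sketch shows that a single $X$ error on any of the four qubits produces the same syndrome, so the error is detected but cannot be located:

```python
# Represent an n-qubit Pauli as (x, z) bit-vectors: X -> x=1, Z -> z=1, Y -> both.
def pauli(s):
    x = [1 if c in "XY" else 0 for c in s]
    z = [1 if c in "ZY" else 0 for c in s]
    return (x, z)

def commutes(p, q):
    """Two Paulis commute iff their symplectic inner product is 0 mod 2."""
    (px, pz), (qx, qz) = p, q
    return sum(a * d + b * c for a, b, c, d in zip(px, pz, qx, qz)) % 2 == 0

# Stabilizer generators of the [[4,2,2]] code (assumed presentation).
stabilizers = [pauli("XXXX"), pauli("ZZZZ")]

def syndrome(err):
    return tuple(0 if commutes(err, s) else 1 for s in stabilizers)

# A single X error on any of the four qubits yields the same syndrome:
errors = ["XIII", "IXII", "IIXI", "IIIX"]
syndromes = {e: syndrome(pauli(e)) for e in errors}
print(syndromes)  # every X_i gives (0, 1): detected, but the location is ambiguous
```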
It follows that any quantum error correcting code which can tolerate one Pauli error, can recover from the loss of two qubits.
Now: suppose you have a quantum error correcting code on $n \geqslant 2$ qubits, encoding one qubit against single-qubit errors. Suppose that you give $n-2$ qubits to Alice, and $2$ qubits to Bob: then Alice should be able to recover the original encoded state. If $n<5$, then $2 \geqslant n-2$, so that Bob should also be able to recover the original encoded state — thereby obtaining a clone of Alice's state. As this is ruled out by the No Cloning Theorem, it follows that we must have $n \geqslant 5$ instead.
On correcting erasure errors
* The earliest reference I found for this is
[1]
Grassl, Beth, and Pellizzari.
Codes for the Quantum Erasure Channel.
Phys. Rev. A 56 (pp. 33–38), 1997.
[arXiv:quant-ph/9610042]
— which is not long after the Knill–Laflamme conditions were described in [arXiv:quant-ph/9604034], and so plausibly the original proof of the connection between code distance and erasure errors. The outline is as follows, and applies to error correcting codes of distance $d$ (and applies equally well to qudits of any dimension in place of qubits, using generalised Pauli operators).
The loss of $d-1$ qubits can be modelled by those qubits being subject to the completely depolarising channel, which in turn can be modeled by those qubits being subject to uniformly random Pauli errors.
If the locations of those $d-1$ qubits were unknown, this would be fatal.
However, as their locations are known, any pair of Pauli errors on those $d-1$ qubits can be distinguished from one another, by appeal to the Knill–Laflamme conditions. Therefore, by substituting the erased qubits with qubits in the maximally mixed state and testing for Pauli errors on those $d-1$ qubits specifically (requiring a different correction procedure than you would use for correcting arbitrary Pauli errors, mind you), you can recover the original state.
answered Nov 23 at 22:04 by Niel de Beaudrap; edited 12 hours ago

N.B. If you've upvoted my answer, you should consider upvoting Felix Huber's answer as well, for having identified the original proof. – Niel de Beaudrap, 12 hours ago
Answer (score 13)
What we can easily prove is that there's no smaller non-degenerate code.
In a non-degenerate code, you have to have the 2 logical states of the qubit, and you have to have a distinct state for each possible error to map each logical state into. So, let's say you had a 5 qubit code, with the two logical states $|0_L\rangle$ and $|1_L\rangle$. The set of possible single-qubit errors is $X_1,X_2,\ldots,X_5,Y_1,Y_2,\ldots,Y_5,Z_1,Z_2,\ldots,Z_5$, which means that all the states
$$
|0_L\rangle,|1_L\rangle,X_1|0_L\rangle,X_1|1_L\rangle,X_2|0_L\rangle,\ldots
$$
must all be mutually orthogonal.
If we apply this argument in general, it shows us that we need
$$
2+2\times(3n)
$$
distinct states. But, for $n$ qubits, the maximum number of distinct states is $2^n$. So, for a non-degenerate error correcting code of distance 3 (i.e. correcting at least one error) or greater, we need
$$
2^n \geq 2(3n+1).
$$
This is called the Quantum Hamming Bound. You can easily check that this is true for all $n \geq 5$, but not if $n<5$. Indeed, for $n=5$, the inequality is an equality, and we call the corresponding 5-qubit code the perfect code as a result.
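As a quick numerical sanity check (a short sketch, not part of the original answer, assuming nothing beyond the inequality above): the bound $2^n \geq 2(3n+1)$ fails for every $n < 5$ and holds with equality exactly at $n = 5$:

```python
# Check the quantum Hamming bound 2^n >= 2(3n + 1) for one logical qubit
# and distance 3 (one correctable error).
for n in range(1, 9):
    lhs, rhs = 2**n, 2 * (3 * n + 1)
    status = "equality" if lhs == rhs else ("holds" if lhs > rhs else "fails")
    print(f"n={n}: 2^n={lhs:4d} vs 2(3n+1)={rhs:3d} -> {status}")
# n=1..4 fail; n=5 gives 32 = 32 (the perfect code); n>=6 holds strictly.
```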
answered Nov 23 at 13:17 by DaftWullie; edited Nov 26 at 7:49

Can't you prove this by no-cloning for any code, without invoking the Hamming bound? – Norbert Schuch, Nov 23 at 23:12

@NorbertSchuch the only proof I know involving cloning just shows that an n qubit code cannot correct for n/2 or more errors. If you know another construction, I'd be very happy to learn it! – DaftWullie, Nov 24 at 6:17

Ah, I see that's the point of @NieldeBeaudrap's answer. Cool :) – DaftWullie, Nov 24 at 6:56

Thought that was a standard argument :-o – Norbert Schuch, Nov 24 at 12:11
Answer (score 7)
As a complement to the other answers, I am going to add the general quantum Hamming bound for non-degenerate quantum error correction codes. The mathematical formulation of this bound is
$$
2^{n-k} \geq \sum_{j=0}^{t} \binom{n}{j} 3^j,
$$
where $n$ refers to the number of qubits that form the codewords, $k$ is the number of information qubits that are encoded (so that they are protected from decoherence), and $t$ is the number of errors corrected by the code. As $t$ is related to the distance by $t = \lfloor\frac{d-1}{2}\rfloor$, such a non-degenerate quantum code will be an $[\![n,k,d]\!]$ quantum error correction code. This bound is obtained by a sphere-packing-like argument: the $2^n$-dimensional Hilbert space is partitioned into $2^{n-k}$ subspaces, each distinguished by the measured syndrome; one error is assigned to each of the syndromes, and the recovery operation is done by inverting the error associated with the measured syndrome. That is why the total number of errors corrected by a non-degenerate quantum code should be less than or equal to the number of partitions given by the syndrome measurement.
However, degeneracy is a property of quantum error correction codes by which there are equivalence classes among the errors that can affect the transmitted codewords. This means that there are errors whose effect on the transmitted codewords is the same, and which share the same syndrome. Those classes of degenerate errors are corrected via the same recovery operation, and so more errors than expected can be corrected. That is why it is not known whether the quantum Hamming bound holds for degenerate error correction codes, as more errors than the number of partitions can be corrected this way. Please refer to this question for some information about the violation of the quantum Hamming bound.
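For concreteness, the general bound above is easy to evaluate directly. A short sketch (my own illustration, not from the original answer): the $[\![5,1,3]\!]$ code saturates the bound, while a hypothetical $[\![4,1,3]\!]$ code would violate it.

```python
from math import comb

def hamming_bound_holds(n, k, d):
    """Quantum Hamming bound for non-degenerate [[n, k, d]] codes:
    2^(n-k) >= sum_{j=0}^{t} C(n, j) * 3^j, with t = floor((d-1)/2)."""
    t = (d - 1) // 2
    return 2 ** (n - k) >= sum(comb(n, j) * 3**j for j in range(t + 1))

print(hamming_bound_holds(5, 1, 3))  # True: 16 >= 1 + 15, saturated (the perfect code)
print(hamming_bound_holds(4, 1, 3))  # False: 8 < 1 + 12
print(hamming_bound_holds(9, 1, 3))  # True: the Shor code satisfies it
```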
Answer (score 3)
I wanted to add a short comment to the earliest reference. I believe this was shown already a bit earlier in Section 5.2 of
A Theory of Quantum Error-Correcting Codes
Emanuel Knill, Raymond Laflamme
https://arxiv.org/abs/quant-ph/9604034
where the specific result is:
Theorem 5.1. A $(2^r,k)$ $e$-error-correcting quantum code must satisfy $r \geqslant 4e + \lceil \log k \rceil$.
Here, an $(N,K)$ code is an embedding of a $K$-dimensional subspace into an $N$-dimensional system; it is an $e$-error-correcting code if the system decomposes as a tensor product of qubits, and the code is capable of correcting errors of weight $e$.
In particular, a $(2^n, 2^k)$ $e$-error-correcting code is what we would now describe as an $[\![n,k,2e+1]\!]$ code. Theorem 5.1 then allows us to prove that for $k \geqslant 1$ and $d \geqslant 3$, an $[\![n,k,d]\!]$ code must satisfy
$$
\begin{aligned}
n \;&\geqslant\; 4\bigl\lceil\tfrac{d-1}{2}\bigr\rceil + \lceil \log 2^k \rceil
\\[1ex]
&\geqslant\; \bigl\lceil 4 \cdot \tfrac{d-1}{2} \bigr\rceil + \lceil k \rceil
\\[1ex]
&=\; 2d - 2 + k
\;\geqslant\;
6 - 2 + 1
\;=\;
5.
\end{aligned}
$$
(N.B. There is a peculiarity with the dates here: the arXiv submission of the above paper is April 1996, a couple of months earlier than the Grassl, Beth, and Pellizzari paper submitted in October 1996. However, the date below the title in the PDF states a year earlier, April 1995.)
As an alternative proof, I could imagine (but haven't tested yet) that simply solving for a weight distribution that satisfies the MacWilliams identities should also suffice. Such a strategy is indeed used in
Quantum MacWilliams Identities
Peter Shor, Raymond Laflamme
https://arxiv.org/abs/quant-ph/9610040
to show that no degenerate code on five qubits exists that can correct all single-qubit errors.
Excellent reference, thanks! I didn't know the Knill–Laflamme paper well enough to know that the lower bound of 5 was there as well. – Niel de Beaudrap, 12 hours ago

Thanks for editing! About the lower bound, it seems they don't address that five qubits are needed, but only that such a code must necessarily be non-degenerate. – Felix Huber, 12 hours ago

As a side note, $n=5$ for the smallest code able to correct any single error also follows from the quantum Singleton bound. In this case, no-cloning is not required (as $d\leq n/2+1$ already), and the bound follows from subadditivity of the von Neumann entropy. Cf. Section 7.8.3 in Preskill's lecture notes, theory.caltech.edu/people/preskill/ph229/notes/chap7.pdf – Felix Huber, 12 hours ago

Unless I badly misread that Section, it seems to me that they show that no error correcting code exists for $r \leqslant 4$; it seems clear that this also follows from Theorem 5.1 as well. None of their terminology suggests that their result is special to non-degenerate codes. – Niel de Beaudrap, 12 hours ago

Sorry for the confusion. My side-comment was referring to the Quantum MacWilliams Identities paper: there it was only shown that a single-error correcting five qubit code must be pure/non-degenerate. Section 5.2 in the Knill–Laflamme paper ("A Theory of Quantum Error-Correcting Codes") is, as they point out, general. – Felix Huber, 12 hours ago
4 Answers
4
active
oldest
votes
4 Answers
4
active
oldest
votes
active
oldest
votes
active
oldest
votes
up vote
13
down vote
accepted
A proof that you need at least 5 qubits (or qudits)
Here is a proof that any single-error correcting (i.e., distance 3) quantum error correcting code has at least 5 qubits. In fact, this generalises to qudits of any dimension $d$, and any quantum error correcting code protecting one or more qudits of dimension $d$.
(As Felix Huber notes, the original proof that you require at least 5 qubits is due to the Knill--Laflamme article [arXiv:quant-ph/9604034] which set out the Knill--Laflamme conditions: the following is the proof technique which is more commonly used nowadays.)
Any quantum error correcting code which can correct $t$ unknown errors, can also correct up to $2t$ erasure errors (where we simply lose some qubit, or it becomes completely depolarised, or similar) if the locations of the erased qubits are known. [1, Sec. III A]*.
Slightly more generally, a quantum error correcting code of distance $d$ can tolerate $d-1$ erasure errors. For example, while the $[![4,2,2]!]$ code can't correct any errors at all, in essence because it can tell an error has happened (and even which type of error) but not which qubit it has happened to, that same code can protect against a single erasure error (because by hypothesis we know precisely where the error occurs in this case).
It follows that any quantum error correcting code which can tolerate one Pauli error, can recover from the loss of two qubits.
Now: suppose you have a quantum error correcting code on $n geqslant 2$ qubits, encoding one qubit against single-qubit errors. Suppose that you give $n-2$ qubits to Alice, and $2$ qubits to Bob: then Alice should be able to recover the original encoded state. If $n<5$, then $2 geqslant n-2$, so that Bob should also be able to recover the original encoded state — thereby obtaining a clone of Alice's state. As this is ruled out by the No Cloning Theorem, it follows that we must have $n geqslant 5$ instead.
On correcting erasure errors
* The earliest reference I found for this is
[1]
Grassl, Beth, and Pellizzari.
Codes for the Quantum Erasure Channel.
Phys. Rev. A 56 (pp. 33–38), 1997.
[arXiv:quant-ph/9610042]
— which is not much long after the Knill–Laflamme conditions were described in [arXiv:quant-ph/9604034] and so plausibly the original proof of the connection between code distance and erasure errors. The outline is as follows, and applies to error correcting codes of distance $d$ (and applies equally well to qudits of any dimension in place of qubits, using generalised Pauli operators).
The loss of $d-1$ qubits can be modelled by those qubits being subject to the completely depolarising channel, which in turn can be modeled by those qubits being subject to uniformly random Pauli errors.
If the locations of those $d-1$ qubits were unknown, this would be fatal.
However, as their locations are known, any pair Pauli errors on $d-1$ qubits can be distinguished from one another, by appeal to the
Knill-Laflamme conditions.Therefore, by substituting the erased qubits with qubits in the maximally mixed state and testing for Pauli errors on those $d-1$ qubits specificaly (requiring a different correction procedure than you would use for correcting arbitrary Pauli errors, mind you), you can recover the original state.
N.B. If you've upvoted my answer, you should consider upvoting Felix Huber's answer as well, for having identified the original proof.
– Niel de Beaudrap
12 hours ago
add a comment |
up vote
13
down vote
accepted
A proof that you need at least 5 qubits (or qudits)
Here is a proof that any single-error correcting (i.e., distance 3) quantum error correcting code has at least 5 qubits. In fact, this generalises to qudits of any dimension $d$, and any quantum error correcting code protecting one or more qudits of dimension $d$.
(As Felix Huber notes, the original proof that you require at least 5 qubits is due to the Knill--Laflamme article [arXiv:quant-ph/9604034] which set out the Knill--Laflamme conditions: the following is the proof technique which is more commonly used nowadays.)
Any quantum error correcting code which can correct $t$ unknown errors, can also correct up to $2t$ erasure errors (where we simply lose some qubit, or it becomes completely depolarised, or similar) if the locations of the erased qubits are known. [1, Sec. III A]*.
Slightly more generally, a quantum error correcting code of distance $d$ can tolerate $d-1$ erasure errors. For example, while the $[![4,2,2]!]$ code can't correct any errors at all, in essence because it can tell an error has happened (and even which type of error) but not which qubit it has happened to, that same code can protect against a single erasure error (because by hypothesis we know precisely where the error occurs in this case).
It follows that any quantum error correcting code which can tolerate one Pauli error, can recover from the loss of two qubits.
Now: suppose you have a quantum error correcting code on $n geqslant 2$ qubits, encoding one qubit against single-qubit errors. Suppose that you give $n-2$ qubits to Alice, and $2$ qubits to Bob: then Alice should be able to recover the original encoded state. If $n<5$, then $2 geqslant n-2$, so that Bob should also be able to recover the original encoded state — thereby obtaining a clone of Alice's state. As this is ruled out by the No Cloning Theorem, it follows that we must have $n geqslant 5$ instead.
On correcting erasure errors
* The earliest reference I found for this is
[1]
Grassl, Beth, and Pellizzari.
Codes for the Quantum Erasure Channel.
Phys. Rev. A 56 (pp. 33–38), 1997.
[arXiv:quant-ph/9610042]
— which is not much long after the Knill–Laflamme conditions were described in [arXiv:quant-ph/9604034] and so plausibly the original proof of the connection between code distance and erasure errors. The outline is as follows, and applies to error correcting codes of distance $d$ (and applies equally well to qudits of any dimension in place of qubits, using generalised Pauli operators).
The loss of $d-1$ qubits can be modelled by those qubits being subject to the completely depolarising channel, which in turn can be modeled by those qubits being subject to uniformly random Pauli errors.
If the locations of those $d-1$ qubits were unknown, this would be fatal.
However, as their locations are known, any pair Pauli errors on $d-1$ qubits can be distinguished from one another, by appeal to the
Knill-Laflamme conditions.Therefore, by substituting the erased qubits with qubits in the maximally mixed state and testing for Pauli errors on those $d-1$ qubits specificaly (requiring a different correction procedure than you would use for correcting arbitrary Pauli errors, mind you), you can recover the original state.
N.B. If you've upvoted my answer, you should consider upvoting Felix Huber's answer as well, for having identified the original proof.
– Niel de Beaudrap
12 hours ago
add a comment |
up vote
13
down vote
accepted
up vote
13
down vote
accepted
A proof that you need at least 5 qubits (or qudits)
Here is a proof that any single-error correcting (i.e., distance 3) quantum error correcting code has at least 5 qubits. In fact, this generalises to qudits of any dimension $d$, and any quantum error correcting code protecting one or more qudits of dimension $d$.
(As Felix Huber notes, the original proof that you require at least 5 qubits is due to the Knill--Laflamme article [arXiv:quant-ph/9604034] which set out the Knill--Laflamme conditions: the following is the proof technique which is more commonly used nowadays.)
Any quantum error correcting code which can correct $t$ unknown errors, can also correct up to $2t$ erasure errors (where we simply lose some qubit, or it becomes completely depolarised, or similar) if the locations of the erased qubits are known. [1, Sec. III A]*.
Slightly more generally, a quantum error correcting code of distance $d$ can tolerate $d-1$ erasure errors. For example, while the $[![4,2,2]!]$ code can't correct any errors at all, in essence because it can tell an error has happened (and even which type of error) but not which qubit it has happened to, that same code can protect against a single erasure error (because by hypothesis we know precisely where the error occurs in this case).
It follows that any quantum error correcting code which can tolerate one Pauli error, can recover from the loss of two qubits.
Now: suppose you have a quantum error correcting code on $n geqslant 2$ qubits, encoding one qubit against single-qubit errors. Suppose that you give $n-2$ qubits to Alice, and $2$ qubits to Bob: then Alice should be able to recover the original encoded state. If $n<5$, then $2 geqslant n-2$, so that Bob should also be able to recover the original encoded state — thereby obtaining a clone of Alice's state. As this is ruled out by the No Cloning Theorem, it follows that we must have $n geqslant 5$ instead.
On correcting erasure errors
* The earliest reference I found for this is
[1]
Grassl, Beth, and Pellizzari.
Codes for the Quantum Erasure Channel.
Phys. Rev. A 56 (pp. 33–38), 1997.
[arXiv:quant-ph/9610042]
— which is not much long after the Knill–Laflamme conditions were described in [arXiv:quant-ph/9604034] and so plausibly the original proof of the connection between code distance and erasure errors. The outline is as follows, and applies to error correcting codes of distance $d$ (and applies equally well to qudits of any dimension in place of qubits, using generalised Pauli operators).
The loss of $d-1$ qubits can be modelled by those qubits being subject to the completely depolarising channel, which in turn can be modeled by those qubits being subject to uniformly random Pauli errors.
If the locations of those $d-1$ qubits were unknown, this would be fatal.
However, as their locations are known, any pair Pauli errors on $d-1$ qubits can be distinguished from one another, by appeal to the
Knill-Laflamme conditions.Therefore, by substituting the erased qubits with qubits in the maximally mixed state and testing for Pauli errors on those $d-1$ qubits specificaly (requiring a different correction procedure than you would use for correcting arbitrary Pauli errors, mind you), you can recover the original state.
A proof that you need at least 5 qubits (or qudits)
Here is a proof that any single-error correcting (i.e., distance 3) quantum error correcting code has at least 5 qubits. In fact, this generalises to qudits of any dimension $d$, and any quantum error correcting code protecting one or more qudits of dimension $d$.
(As Felix Huber notes, the original proof that you require at least 5 qubits is due to the Knill--Laflamme article [arXiv:quant-ph/9604034] which set out the Knill--Laflamme conditions: the following is the proof technique which is more commonly used nowadays.)
Any quantum error correcting code which can correct $t$ unknown errors, can also correct up to $2t$ erasure errors (where we simply lose some qubit, or it becomes completely depolarised, or similar) if the locations of the erased qubits are known. [1, Sec. III A]*.
Slightly more generally, a quantum error correcting code of distance $d$ can tolerate $d-1$ erasure errors. For example, while the $[![4,2,2]!]$ code can't correct any errors at all, in essence because it can tell an error has happened (and even which type of error) but not which qubit it has happened to, that same code can protect against a single erasure error (because by hypothesis we know precisely where the error occurs in this case).
It follows that any quantum error correcting code which can tolerate one Pauli error, can recover from the loss of two qubits.
Now: suppose you have a quantum error correcting code on $n geqslant 2$ qubits, encoding one qubit against single-qubit errors. Suppose that you give $n-2$ qubits to Alice, and $2$ qubits to Bob: then Alice should be able to recover the original encoded state. If $n<5$, then $2 geqslant n-2$, so that Bob should also be able to recover the original encoded state — thereby obtaining a clone of Alice's state. As this is ruled out by the No Cloning Theorem, it follows that we must have $n geqslant 5$ instead.
On correcting erasure errors
* The earliest reference I found for this is
[1]
Grassl, Beth, and Pellizzari.
Codes for the Quantum Erasure Channel.
Phys. Rev. A 56 (pp. 33–38), 1997.
[arXiv:quant-ph/9610042]
— which is not much long after the Knill–Laflamme conditions were described in [arXiv:quant-ph/9604034] and so plausibly the original proof of the connection between code distance and erasure errors. The outline is as follows, and applies to error correcting codes of distance $d$ (and applies equally well to qudits of any dimension in place of qubits, using generalised Pauli operators).
The loss of $d-1$ qubits can be modelled by those qubits being subject to the completely depolarising channel, which in turn can be modeled by those qubits being subject to uniformly random Pauli errors.
If the locations of those $d-1$ qubits were unknown, this would be fatal.
However, as their locations are known, any pair Pauli errors on $d-1$ qubits can be distinguished from one another, by appeal to the
Knill-Laflamme conditions.Therefore, by substituting the erased qubits with qubits in the maximally mixed state and testing for Pauli errors on those $d-1$ qubits specificaly (requiring a different correction procedure than you would use for correcting arbitrary Pauli errors, mind you), you can recover the original state.
edited 12 hours ago
answered Nov 23 at 22:04
Niel de Beaudrap
5,3061932
5,3061932
N.B. If you've upvoted my answer, you should consider upvoting Felix Huber's answer as well, for having identified the original proof.
– Niel de Beaudrap
12 hours ago
add a comment |
N.B. If you've upvoted my answer, you should consider upvoting Felix Huber's answer as well, for having identified the original proof.
– Niel de Beaudrap
12 hours ago
N.B. If you've upvoted my answer, you should consider upvoting Felix Huber's answer as well, for having identified the original proof.
– Niel de Beaudrap
12 hours ago
N.B. If you've upvoted my answer, you should consider upvoting Felix Huber's answer as well, for having identified the original proof.
– Niel de Beaudrap
12 hours ago
add a comment |
up vote
13
down vote
What we can easily prove is that there's no smaller non-degenerate code.
In a non-degenerate code, you have to have the 2 logical states of the qubit, and you have to have a distinct state for each possible error to map each logical state into. So, let's say you had a 5 qubit code, with the two logical states $|0_Lrangle$ and $|1_Lrangle$. The set of possible single-qubit errors are $X_1,X_2,ldots X_5,Y_1,Y_2,ldots,Y_5,Z_1,Z_2,ldots,Z_5$, and it means that all the states
$$
|0_Lrangle,|1_Lrangle,X_1|0_Lrangle,X_1|1_Lrangle,X_2|0_Lrangle,ldots
$$
must map to orthogonal states.
If we apply this argument in general, it shows us that we need
$$
2+2times(3n)
$$
distinct states. But, for $n$ qubits, the maximum number of distinct states is $2^n$. So, for a non-degenerate error correct code of distance 3 (i.e. correcting for at least one error) or greater, we need
$$
2^ngeq 2(3n+1).
$$
This is called the Quantum Hamming Bound. You can easily check that this is true for all $ngeq 5$, but not if $n<5$. Indeed, for $n=5$, the inequality is an equality, and we call the corresponding 5-qubit code the perfect code as a result.
1
Can't you prove this by no-cloning for any code, without invoking the Hamming bound?
– Norbert Schuch
Nov 23 at 23:12
@NorbertSchuch the only proof I know involving cloning just shows that an n qubit code cannot correct for n/2 or more errors. If you know another construction, I’d be very happy to learn it!
– DaftWullie
Nov 24 at 6:17
Ah, I see that’s the point of @NieldeBeaudrap’s answer. Cool :)
– DaftWullie
Nov 24 at 6:56
1
Thought that was a standard argument :-o
– Norbert Schuch
Nov 24 at 12:11
add a comment |
up vote
13
down vote
What we can easily prove is that there's no smaller non-degenerate code.
In a non-degenerate code, you have to have the 2 logical states of the qubit, and you have to have a distinct state for each possible error to map each logical state into. So, let's say you had a 5 qubit code, with the two logical states $|0_Lrangle$ and $|1_Lrangle$. The set of possible single-qubit errors are $X_1,X_2,ldots X_5,Y_1,Y_2,ldots,Y_5,Z_1,Z_2,ldots,Z_5$, and it means that all the states
$$
|0_Lrangle,|1_Lrangle,X_1|0_Lrangle,X_1|1_Lrangle,X_2|0_Lrangle,ldots
$$
must map to orthogonal states.
If we apply this argument in general, it shows us that we need
$$
2+2times(3n)
$$
distinct states. But, for $n$ qubits, the maximum number of distinct states is $2^n$. So, for a non-degenerate error correct code of distance 3 (i.e. correcting for at least one error) or greater, we need
$$
2^ngeq 2(3n+1).
$$
This is called the Quantum Hamming Bound. You can easily check that this is true for all $ngeq 5$, but not if $n<5$. Indeed, for $n=5$, the inequality is an equality, and we call the corresponding 5-qubit code the perfect code as a result.
1
Can't you prove this by no-cloning for any code, without invoking the Hamming bound?
– Norbert Schuch
Nov 23 at 23:12
@NorbertSchuch the only proof I know involving cloning just shows that an n qubit code cannot correct for n/2 or more errors. If you know another construction, I’d be very happy to learn it!
– DaftWullie
Nov 24 at 6:17
Ah, I see that’s the point of @NieldeBeaudrap’s answer. Cool :)
– DaftWullie
Nov 24 at 6:56
1
Thought that was a standard argument :-o
– Norbert Schuch
Nov 24 at 12:11
add a comment |
up vote
13
down vote
up vote
13
down vote
What we can easily prove is that there's no smaller non-degenerate code.
In a non-degenerate code, you have to have the 2 logical states of the qubit, and you have to have a distinct state for each possible error to map each logical state into. So, let's say you had a 5 qubit code, with the two logical states $|0_Lrangle$ and $|1_Lrangle$. The set of possible single-qubit errors are $X_1,X_2,ldots X_5,Y_1,Y_2,ldots,Y_5,Z_1,Z_2,ldots,Z_5$, and it means that all the states
$$
|0_Lrangle,|1_Lrangle,X_1|0_Lrangle,X_1|1_Lrangle,X_2|0_Lrangle,ldots
$$
must map to orthogonal states.
If we apply this argument in general, it shows us that we need
$$
2+2times(3n)
$$
distinct states. But, for $n$ qubits, the maximum number of distinct states is $2^n$. So, for a non-degenerate error correct code of distance 3 (i.e. correcting for at least one error) or greater, we need
$$
2^ngeq 2(3n+1).
$$
This is called the Quantum Hamming Bound. You can easily check that this is true for all $ngeq 5$, but not if $n<5$. Indeed, for $n=5$, the inequality is an equality, and we call the corresponding 5-qubit code the perfect code as a result.
What we can easily prove is that there's no smaller non-degenerate code.
In a non-degenerate code, you have to have the 2 logical states of the qubit, and you have to have a distinct state for each possible error to map each logical state into. So, let's say you had a 5 qubit code, with the two logical states $|0_Lrangle$ and $|1_Lrangle$. The set of possible single-qubit errors are $X_1,X_2,ldots X_5,Y_1,Y_2,ldots,Y_5,Z_1,Z_2,ldots,Z_5$, and it means that all the states
$$
|0_Lrangle,|1_Lrangle,X_1|0_Lrangle,X_1|1_Lrangle,X_2|0_Lrangle,ldots
$$
must map to orthogonal states.
If we apply this argument in general, it shows us that we need
$$
2+2times(3n)
$$
distinct states. But, for $n$ qubits, the maximum number of distinct states is $2^n$. So, for a non-degenerate error correct code of distance 3 (i.e. correcting for at least one error) or greater, we need
$$
2^ngeq 2(3n+1).
$$
This is called the Quantum Hamming Bound. You can easily check that this is true for all $ngeq 5$, but not if $n<5$. Indeed, for $n=5$, the inequality is an equality, and we call the corresponding 5-qubit code the perfect code as a result.
edited Nov 26 at 7:49
answered Nov 23 at 13:17
DaftWullie
11.1k1536
11.1k1536
Can't you prove this by no-cloning for any code, without invoking the Hamming bound?
– Norbert Schuch
Nov 23 at 23:12
@NorbertSchuch the only proof I know involving cloning just shows that an n qubit code cannot correct for n/2 or more errors. If you know another construction, I’d be very happy to learn it!
– DaftWullie
Nov 24 at 6:17
Ah, I see that’s the point of @NieldeBeaudrap’s answer. Cool :)
– DaftWullie
Nov 24 at 6:56
Thought that was a standard argument :-o
– Norbert Schuch
Nov 24 at 12:11
up vote
7
down vote
As a complement to the other answer, I am going to add the general quantum Hamming bound for non-degenerate quantum error correction codes. The mathematical formulation of this bound is
\begin{equation}
2^{n-k}\geq\sum_{j=0}^{t}\binom{n}{j}3^j,
\end{equation}
where $n$ refers to the number of qubits that form the codewords, $k$ is the number of information qubits that are encoded (so they are protected from decoherence), and $t$ is the number of errors corrected by the code. As $t$ is related to the distance by $t = \lfloor\frac{d-1}{2}\rfloor$, such a non-degenerate quantum code will be an $[[n,k,d]]$ quantum error correction code. This bound is obtained by a sphere-packing-like argument: the $2^n$-dimensional Hilbert space is partitioned into $2^{n-k}$ subspaces, each distinguished by the syndrome measured, so one error is assigned to each syndrome, and the recovery operation is done by inverting the error associated with the measured syndrome. That's why the total number of errors corrected by a non-degenerate quantum code must be less than or equal to the number of partitions given by the syndrome measurement.
However, degeneracy is a property of quantum error correction codes which implies that there are equivalence classes among the errors that can affect the transmitted codewords. This means that there are errors whose effect on the codewords is the same, and which share the same syndrome. Those classes of degenerate errors are corrected via the same recovery operation, and so more errors than expected can be corrected. That is why it is not known whether the quantum Hamming bound holds for these degenerate error correction codes, as more errors than partitions could be corrected this way. Please refer to this question for some information about the violation of the quantum Hamming bound.
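The general bound is also easy to evaluate. A small sketch (the function name is my own, not from the answer) that checks the bound for an $[[n,k,2t+1]]$ code; the $[[5,1,3]]$ code saturates it, while a hypothetical $[[4,1,3]]$ code would violate it:

```python
from math import comb

def quantum_hamming_bound(n, k, t):
    """True iff 2^(n-k) >= sum_{j=0}^{t} C(n,j) * 3^j, i.e. a non-degenerate
    [[n, k, 2t+1]] code is not ruled out by the quantum Hamming bound."""
    return 2 ** (n - k) >= sum(comb(n, j) * 3 ** j for j in range(t + 1))

print(quantum_hamming_bound(5, 1, 1))  # True  (16 >= 1 + 5*3 = 16, saturated)
print(quantum_hamming_bound(4, 1, 1))  # False (8 < 1 + 4*3 = 13)
```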
answered Nov 23 at 14:45
Josu Etxezarreta Martinez
1,333217
up vote
3
down vote
I wanted to add a short comment to the earliest reference. I believe this was shown already a bit earlier in Section 5.2 of
A Theory of Quantum Error-Correcting Codes
Emanuel Knill, Raymond Laflamme
https://arxiv.org/abs/quant-ph/9604034
where the specific result is:
Theorem 5.1. A $(2^r,k)$ $e$-error-correcting quantum code must satisfy $r \geqslant 4e + \lceil \log k \rceil$.
Here, an $(N,K)$ code is an embedding of a $K$-dimensional subspace into an $N$-dimensional system; it is an $e$-error-correcting code if the system decomposes as a tensor product of qubits, and the code is capable of correcting errors of weight $e$.
In particular, a $(2^n, 2^k)$ $e$-error-correcting code is what we would now describe as an $[\![n,k,2e+1]\!]$ code. Theorem 5.1 then allows us to prove that for $k \geqslant 1$ and $d \geqslant 3$, an $[\![n,k,d]\!]$ code must satisfy
$$
\begin{aligned}
n \;&\geqslant\; 4\bigl\lceil\tfrac{d-1}{2}\bigr\rceil + \lceil \log 2^k \rceil
\\[1ex] &\geqslant\; \bigl\lceil 4 \cdot \tfrac{d-1}{2} \bigr\rceil + \lceil k \rceil
\\[1ex] &=\; 2d - 2 + k \;\geqslant\; 6 - 2 + 1 \;=\; 5.
\end{aligned}
$$
(N.B. There is a peculiarity with the dates here: the arXiv submission of the above paper is April 1996, a couple of months earlier than the Grassl, Beth, and Pellizzari paper submitted in Oct 1996. However, the date below the title in the pdf states a year earlier, April 1995.)
As an alternative proof, I could imagine (but haven't tested yet) that simply solving for a weight distribution that satisfies the MacWilliams identities should also suffice. Such a strategy is indeed used in
Quantum MacWilliams Identities
Peter Shor, Raymond Laflamme
https://arxiv.org/abs/quant-ph/9610040
to show that no degenerate code on five qubits exists that can correct arbitrary single errors.
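The chain of inequalities above can be spot-checked numerically. A sketch (the function name is mine, and it assumes $d$ odd so that $e = (d-1)/2$, as in the derivation) of the resulting lower bound on $n$:

```python
def knill_laflamme_lower_bound(k, d):
    """Lower bound on n for an [[n, k, d]] code from Theorem 5.1:
    n >= 4e + k, where e = floor((d-1)/2) is the number of errors corrected."""
    e = (d - 1) // 2
    return 4 * e + k

# For a single-error-correcting code encoding one qubit (k=1, d=3):
print(knill_laflamme_lower_bound(1, 3))  # 5
```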
Excellent reference, thanks! I didn't know the Knill--Laflamme paper well enough to know that the lower bound of 5 was there as well.
– Niel de Beaudrap
12 hours ago
Thanks for editing! About the lower bound, it seems they don't address that five qubits are needed, but only that such a code must necessarily be non-degenerate.
– Felix Huber
12 hours ago
As a side note, from the quantum Singleton bound also $n=5$ follows for the smallest code able to correct arbitrary single errors. In this case, no-cloning is not required (as $d\leq n/2+1$ already), and the bound follows from subadditivity of the von Neumann entropy. Cf. Section 7.8.3 in Preskill's lecture notes, theory.caltech.edu/people/preskill/ph229/notes/chap7.pdf
– Felix Huber
12 hours ago
Unless I badly misread that Section, it seems to me that they show that no error correcting code exists for $r \leqslant 4$; it seems clear that this also follows from Theorem 5.1 as well. None of their terminology suggests that their result is special to non-degenerate codes.
– Niel de Beaudrap
12 hours ago
Sorry for the confusion. My side-comment was referring to the Quantum MacWilliams identity paper: there it was only shown that a single-error-correcting five-qubit code must be pure/non-degenerate. Section 5.2 in the Knill-Laflamme paper ("A Theory of Quantum Error-Correcting Codes") is, as they point out, general.
– Felix Huber
12 hours ago
edited 12 hours ago
answered 13 hours ago
Felix Huber
413