So, I've been trying to teach myself about quantum computing, and I found a great YouTube series called Quantum Computing for the Determined. However. Why do we use ket/bra notation? Normal vector notation is much clearer (okay, clearer because I've spent a couple of weeks versus two days with it, but still). Is there any significance to this notation? I'm assuming it's used for a reason, but what is that reason? I guess I just don't understand why you'd use ket notation when you have perfectly good notation already.
-
I don't see how this question is about quantum computing -- the ket notation is much older. As you probably know, it originates from cutting a scalar product $\langle\cdot,\cdot\rangle$ in two halves. Whether it is better is probably mostly a matter of taste. Honestly, this seems more a history of science question. – Norbert Schuch Sep 25 '16 at 14:22
-
@NorbertSchuch, well, actually, I didn't know it originates from cutting a scalar product into two halves, though I do know it is older than quantum computing. I'm asking about the significance of this notation and what it is used for, especially in the context of quantum computing. I don't see how this is a "history of science" question. It seems more practical. – auden Sep 25 '16 at 14:24
-
There is no special use of this notation in quantum computing. In fact, it is probably more restrictive than in "normal" quantum mechanics (though mostly due to the fact that one works with finite-dimensional Hilbert spaces). – Norbert Schuch Sep 25 '16 at 15:11
-
@NorbertSchuch, is there an easy way to translate between bra/ket and normal vector notation? I ask because I see in the videos, for example, $|0\rangle$, which is supposed to be the basis vector $\begin{bmatrix}1\\0\end{bmatrix}$, but that isn't obviously clear unless you know that it is. – auden Sep 25 '16 at 15:14
-
You should translate $|0\rangle=\vec e_0$, $|1\rangle=\vec e_1$, etc., where $\vec e_i$ are the (canonical) basis vectors. It's really just a different way of writing vectors, with some small advantages/disadvantages in specific situations. (For one thing, it can be more compact as you omit the $\vec e$ and don't need to use subscripts.) – Norbert Schuch Sep 25 '16 at 15:45
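A minimal sketch of that translation in NumPy (the states and variable names here are my own, purely for illustration): a ket becomes a column vector, the corresponding bra is its conjugate transpose, and a bracket is an ordinary matrix product.

```python
import numpy as np

# Kets |0> and |1> as the canonical basis column vectors e_0 and e_1
ket0 = np.array([[1], [0]], dtype=complex)   # |0>
ket1 = np.array([[0], [1]], dtype=complex)   # |1>

# A general ket a|0> + b|1>; here the state (|0> + |1>)/sqrt(2)
plus = (ket0 + ket1) / np.sqrt(2)

# The bra <psi| is the conjugate transpose of the ket |psi>
bra_plus = plus.conj().T

# The bracket <psi|phi> is then an ordinary matrix product (a 1x1 matrix)
print(bra_plus @ ket0)   # <+|0> = 1/sqrt(2), about 0.707
print(bra_plus @ plus)   # <+|+> = 1 (up to rounding)
```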
-
You should bear in mind that part of the reason is historical. The "standard" notation is quite modern and wasn't used around physics when the bra-ket notation was introduced, so it originated exactly out of the lack of anything just as useful. Now they're quite interchangeable (in my opinion the bra-kets carry the meaning of the equivalence of a Hilbert space and its dual quite nicely, bearing the two classes of objects quite impartial) so you can pick your favourite. But when you write a |ψ〉 people will easily know something quantum is going on. – The Vee Sep 25 '16 at 23:55
-
It's important to note that notation for vectors isn't enough -- you need to notate covectors too. In fact, you need to be able to work with vectors, covectors and operators all in the same equation! You really want the algebra to follow the analogy with matrix algebra, with vectors acting like columns and covectors acting like rows. – Sep 26 '16 at 00:36
-
@NorbertSchuch Isn't it a nice way to keep track of co-variance and contra-variance when dealing with good old 3-1 space time? I had always had trouble with Einsteinian notation, but very little trouble with bra/ket... – Aron Sep 26 '16 at 07:48
-
Then you will be able to multiply them as matrices. – v7d8dpo4 Sep 26 '16 at 14:33
-
kets are like column vectors while bras are like row vectors. One belongs to the vector space and the other to the dual space. It takes time, and a few different people's interpretations, to make sense. – QuantumFool Sep 27 '16 at 00:54
-
@QuantumFool, thank you. I understand what they are, I just got annoyed at the notation. =) I'll keep working on it. – auden Sep 27 '16 at 00:55
-
@heather Here's an example. Let's say you're working with the free particle in introductory quantum mechanics, where the "vector" $ \psi $ has infinitely many components. With traditional notation, you can't keep track of both (1) whether $ \psi $ belongs to the dual or regular space (in which case you have to write out the components to demonstrate whether they are in a column or row) and (2) all the elements (because there are infinitely many of them). Bra-ket is nicer there. – QuantumFool Sep 27 '16 at 01:05
-
Can't stop wondering if the OP phrased the notation as "ket and bra" in the question as a click-bait for us OCD people. – Zano Sep 27 '16 at 18:44
-
@Zano, haha, no. Is that bothering you? I can edit it to change it to the proper form (bra ket, right?) if you want. =) – auden Sep 27 '16 at 20:30
-
Nah, it's fine. After all, it made me pause and think "wait what?" followed by clicking the link, just to see what the question was about. A key to getting views is having an eye-catching question title, isn't it? :-) – Zano Sep 27 '16 at 23:01
-
This YouTube video may be helpful: https://www.youtube.com/watch?v=pBh7Xqbh5JQ in understanding how Bra Ket notation can be converted to standard vector notation. According to Dr. Physics, Bra Ket notation is used because it works. – Ernie Jan 03 '17 at 00:40
9 Answers
Indeed, I agree with you: standard notation is, in my personal view, already sufficiently clear, and bra-ket notation should be used when it is really useful. A typical case in QM is when a state vector is determined by a set of quantum numbers like this $$\left|l m s \right\rangle$$ Another case concerns the use of the so-called occupation numbers $$\left|n_{k_1} n_{k_2}\right\rangle$$ in QFT. Also the qubit notation for states $\left|0\right\rangle$, $\left|1\right\rangle$ in quantum information theory is meaningful... Finally, the use of bra-ket notation permits one to denote orthogonal projectors onto subspaces in a very effective manner $$\sum_{|m|\leq l}\left|l m \right\rangle \left\langle l m\right|\:.$$
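As a concrete aside on that last projector formula, here is a small NumPy sketch (a generic orthonormal basis of $\mathbb{C}^4$ stands in for the $|lm\rangle$ states, an assumption made purely for brevity): summing the outer products $|m\rangle\langle m|$ over part of a basis gives a self-adjoint, idempotent operator, i.e. an orthogonal projector.

```python
import numpy as np

# Toy stand-in for sum_m |lm><lm|: sum the outer products |m><m| over the
# first two vectors of an orthonormal basis of C^4.
dim = 4
basis = np.eye(dim, dtype=complex)          # columns are |0>, |1>, |2>, |3>

P = sum(np.outer(basis[:, m], basis[:, m].conj()) for m in range(2))

print(np.allclose(P @ P, P))        # True: P is idempotent
print(np.allclose(P, P.conj().T))   # True: P is self-adjoint, hence a projector

v = np.array([1, 2, 3, 4], dtype=complex)
print(P @ v)                         # [1, 2, 0, 0]: projection onto span{|0>, |1>}
```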
A reason for its, in my view, nowadays not completely justified use is historical and due to P.A.M. Dirac's famous textbook. In the 1930s, mathematical objects like Hilbert spaces, dual spaces, and self-adjoint operators were not very familiar tools to physicists. (The modern notion of Hilbert space was invented in 1932 by J. von Neumann in his less famous textbook on the mathematical foundations of QM.) Dirac proposed a very nice notation which embodied a fundamental part of the formalism. However, it also has some drawbacks. In particular, manipulating non-self-adjoint operators, e.g., symmetries, turns out to be very cumbersome within the bra-ket formalism. If $A$ is self-adjoint, in $\left\langle \psi\right| A\left| \phi\right\rangle$ the operator can be viewed, indifferently, as acting on the left or on the right, preserving the final result. If the operator is not self-adjoint this is false.
I think bra-ket notation is a very useful tool, but it should be used "cum grano salis" in QM. In my view $\left|\psi\right\rangle$, where $\psi$ is a quantum-mechanical wavefunction, may be a dangerous notation, especially for students, as it generates misleading questions like this: is $A\left|\psi\right\rangle = \left|A\psi \right\rangle$?
ADDENDUM. I realize that I interpreted the question in a broader sense, regarding the use of bra-ket notation in QM rather than in the restricted field of quantum information theory.
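A quick numerical check of the self-adjointness remark above, as a hedged sketch in NumPy (random vectors and a random matrix, chosen only for illustration): reading $\langle\psi|A|\phi\rangle$ with $A$ acting to the left really means $\langle A\psi|\phi\rangle = \langle\psi|A^\dagger\phi\rangle$, which agrees with $\langle\psi|A\phi\rangle$ only when $A = A^\dagger$.

```python
import numpy as np

# Random test vectors and matrices, purely for illustration.
rng = np.random.default_rng(1)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
phi = rng.normal(size=3) + 1j * rng.normal(size=3)

M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A_sa = M + M.conj().T                          # self-adjoint
A_ns = M                                       # generically not self-adjoint

def acting_left(A):                            # <A psi | phi> = <psi| A^dagger |phi>
    return np.vdot(A @ psi, phi)               # np.vdot conjugates its first argument

def acting_right(A):                           # <psi | A phi> = <psi| A |phi>
    return np.vdot(psi, A @ phi)

print(np.isclose(acting_left(A_sa), acting_right(A_sa)))   # True
print(np.isclose(acting_left(A_ns), acting_right(A_ns)))   # False (generically)
```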
-
So there isn't really any significance to it, it is just like vector notation only more confusing? Or is there some uses for it that make it worth it? – auden Sep 25 '16 at 14:47
-
I started dealing with the mathematics of Quantum Theories 20 years ago, and I never found a cogent reason to always use bra-ket notation. You can check my pse answers and you will see that I have rarely used that notation... though I think it is very valuable in some cases. – Valter Moretti Sep 25 '16 at 14:53
-
If $A $ is self-adjoint, in $\langle \psi|A|\phi\rangle $ the operator can be viewed, indifferently, as acting on the left or on the right. If the operator is not self-adjoint this is false. In my view $|\psi\rangle $ where $\psi $ is a wavefunction, is a dangerous notation, especially for students, as it generates misleading questions like this, $A|\psi\rangle = |A\psi \rangle $? – Valter Moretti Sep 25 '16 at 15:07
-
Sorry, you were referring to cases where the bra-ket notation is useful, e.g., when quantum numbers are relevant. I pointed out a case in my answer. Another case concerns occupation numbers in QFT. Also qubit states in quantum information theory, obviously... – Valter Moretti Sep 25 '16 at 15:12
-
IMO, the problem with $A$ not self-adjoint is not in the idea of $A$ acting on the left, but on the strange idea of writing $\langle A \psi | $ for the result of $\langle \psi | A$. (I also don't like $|A \psi \rangle$, but that's less weird since it at least gets the ordering of things right) – Sep 26 '16 at 00:41
-
I've always understood $\langle\varphi|A|\psi\rangle$ to mean unambiguously $\langle\varphi|(A|\psi\rangle)$. Not sure why though... I can't remember explicitly being taught this. But it does just make sense. Effectively, $|\psi\rangle$ is nothing but $\psi$, whereas $\langle\varphi|$ is the linear functional defined by $\langle\varphi|(v) = \langle\varphi, v\rangle$. – leftaroundabout Sep 26 '16 at 16:28
-
So, @Hurkyl, I don't think what you're saying makes sense. $\langle A\psi|$ expresses exactly what it says: apply the operator $A$ to the vector $\psi$, then apply the inverse Riesz isomorphism to the result to get a linear functional. – leftaroundabout Sep 26 '16 at 16:35
-
" If the operator is not self-adjoint this is false" What about $\eta=\langle\psi|U|\phi\rangle$ where $U$ is unitary? The complex conjugate would be $\eta^*=\langle\phi|U^{\dagger}|\psi\rangle$ and as far as I know it is completely valid to consider the operator as acting on $|\psi\rangle$. – flippiefanus Sep 27 '16 at 08:49
-
"view $|\psi\rangle$ where $\psi$ is a wavefunction, is a dangerous notation" I think the point is that the ket by itself does not represent a wavefunction. To get a wavefunction, one should do something like $\langle x|\psi\rangle = \psi(x)$. In this way everything is nice and consistent as far as I understand it. – flippiefanus Sep 27 '16 at 08:52
-
What is false is that $\langle \psi| U |\phi\rangle$ can be read both with $U$ acting on the right and with $U$ acting on the left, obtaining the same result. – Valter Moretti Sep 27 '16 at 08:53
-
Regarding your second observation, for me $\psi$ does represent a wavefunction. It is a matter of personal taste. I do not think it is relevant to insist on these things as if they were fundamental; everybody may use the notation he/she prefers... – Valter Moretti Sep 27 '16 at 08:54
What is "normal vector notation"? I've seen angle brackets with commas, parentheses, square brackets, $\hat{x}$, $\hat{i}$, column matrices, row matrices ... which of those is "normal", $(x|y)$, ...?
Bras and kets are just another notation, with the particular benefit that it distinguishes the vector space from its dual space.
edit after comment
Note that some of these are component notations, which do not work for quantum mechanics as the number of dimensions can be large or infinite.
-
Well, I've always seen vector notation as either $\begin{bmatrix}x\\y\end{bmatrix}$ or $(x, y)$. I personally think the first of these two is the best, as it is the most clear for calculations. Ket/bra notation seems very different from the notations seen in linear algebra books, such as these two. – auden Sep 25 '16 at 16:29
-
The two that you have seen are component notations, while bra and ket (and others) are a symbolic notation. Components do not work for quantum mechanics as the number of dimensions can be large or infinite. – garyp Sep 25 '16 at 16:54
-
In linear algebra (the math of vectors) and more importantly in multi-linear algebra (the math of tensors) I agree with @garyp in that there is no "normal" notation. The notation used should fit the problem domain. Sometimes, vectors in column form (or its dual in row form) are useful. But, in QM, Bra/Ket can be very useful. In General Relativity, a vector is most useful considered as a rank 1 tensor with distinctions for its dual (aka covector or one-form). The study of orthogonal functions brings about new ideas for vectors that don't fit the $(x,y)$ syntax. – K7PEH Sep 25 '16 at 17:35
I think there is a practical reason for ket notation in quantum computing, which is just that it minimises the use of subscripts, which can make things more readable sometimes.
If I have a single qubit, I can write its canonical basis vectors as $\mid 0 \rangle$ and $\mid 1 \rangle$ or as $\mathbf{e}_0$ and $\mathbf{e}_1$, it doesn't really make much difference. However, now suppose I have a system with four qubits. Now in "normal" vector notation the basis vectors would have to be something like $\mathbf{e}_{0000}$, $\mathbf{e}_{1011}$, etc. Having those long strings of digits typeset as tiny subscripts makes them kind of hard to read and doesn't look so great. With ket notation they're $\mid 0000\rangle$ and $\mid 1011\rangle$ etc., which improves this situation a bit. You could compare also $\mid\uparrow\rangle$, $\mid\to\rangle$, $\mid\uparrow\uparrow\downarrow\downarrow\rangle$, etc. with $\mathbf{e}_{\uparrow}$, $\mathbf{e}_{\to}$, $\mathbf{e}_{\uparrow\uparrow\downarrow\downarrow}\,\,$ for a similar issue.
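To make the multi-qubit case concrete, here is a small sketch (assuming the usual convention that the leftmost qubit is the most significant bit; the helper `ket` is my own, not a standard API): $|1011\rangle$ is just the standard basis vector $\mathbf{e}_{11}$ of $\mathbb{C}^{16}$, built as a Kronecker product of single-qubit kets.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)   # |0>
ket1 = np.array([0, 1], dtype=complex)   # |1>

def ket(bits):
    """Build |b1 b2 ... bn> as a Kronecker product of single-qubit kets."""
    v = np.array([1], dtype=complex)
    for b in bits:
        v = np.kron(v, ket1 if b == '1' else ket0)
    return v

v = ket('1011')
print(v.shape)        # (16,): |1011> lives in C^16
print(np.argmax(v))   # 11, since 1011 in binary is 11: |1011> = e_11
```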
All the answers so far provide valid reasons for Dirac notation (bra's and ket's). However, the central reason why Dirac felt the need to introduce this notation seems to be missing from these answers.
When I specify a quantity as a vector, say $$ \mathbf{v}=[a, b, c, d, ...]^T $$ then in effect I have already decided what the basis is in terms of which the quantity is expressed. In other words, each entry represents the value (or coefficient) for that basis element.
When Dirac developed his notation, he realized that a quantum mechanical state contains the same information regardless of the basis in terms of which the state is expressed. So the notation is designed to represent this abstractness. The object $|\psi\rangle$ does not make any statement about the basis in terms of which it is expressed. If I want to consider it in terms of a particular basis (say the position basis) I would compute the contraction $$ \langle x|\psi\rangle = \psi(x) $$ and then I end up with the wavefunction in the position basis. I can equally well express it in the Fourier basis $$ \langle k|\psi\rangle = \psi(k) $$ and obtain the wavefunction in the Fourier domain. Both $\psi(x)$ and $\psi(k)$ contain the same information, since they are related by a Fourier transform. However, each represents a certain bias in the sense that they are cast in terms of a particular basis. The power of the Dirac notation is that it allows one to do calculations without having to introduce this particular bias. I think this is a capability that Dirac notation provides that is not available in ordinary vector notation.
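A finite-dimensional sketch of this representation-independence (the discrete Fourier transform stands in here for the continuous $x \leftrightarrow k$ change of basis, a simplification rather than the actual infinite-dimensional construction): the same abstract state yields two different component lists, related by a unitary, while basis-independent quantities agree.

```python
import numpy as np

# The same abstract state in a "position" basis and in the Fourier basis,
# related by the unitary DFT matrix (its rows play the role of the bras <k|).
N = 8
rng = np.random.default_rng(0)
psi_x = rng.normal(size=N) + 1j * rng.normal(size=N)
psi_x /= np.linalg.norm(psi_x)                 # components <x|psi>

phi_x = rng.normal(size=N) + 1j * rng.normal(size=N)
phi_x /= np.linalg.norm(phi_x)                 # components <x|phi>

F = np.fft.fft(np.eye(N)) / np.sqrt(N)         # unitary discrete Fourier transform
psi_k = F @ psi_x                               # components <k|psi>
phi_k = F @ phi_x                               # components <k|phi>

# Basis-independent quantities agree in either representation
print(np.isclose(np.linalg.norm(psi_x), np.linalg.norm(psi_k)))   # True
print(np.isclose(np.vdot(phi_x, psi_x), np.vdot(phi_k, psi_k)))   # True: <phi|psi>
```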
-
This is not really something that's special about Dirac notation. Sure, component notation is inferior, but vectors as abstract quantities without a basis reference could also be written $\vec{\psi}$ or in fact simply $\psi$. – leftaroundabout Sep 27 '16 at 16:19
-
That would be confusing, because one can write the wavefunction also like that. – flippiefanus Sep 28 '16 at 04:15
-
That in itself is a bit of a questionable overloading of the $\psi$ symbol, but again it has little to do with bra-ket. Instead of $\psi(x) := \langle x|\psi\rangle$ you'd then write $\psi(x) := \langle e_x, \psi\rangle$. – leftaroundabout Sep 28 '16 at 07:54
-
Though I agree that the overloading becomes less problematic when using a ket symbol for the plain quantum state, because it's clear this does not mean the function. – leftaroundabout Sep 28 '16 at 08:04
First, this notation makes it very clear which objects are interpreted as elements of the primal space (kets) or elements of the dual space (bras).
The names "bra" and "ket" recall how the notation was formed: as the left and right halves of an inner product, the projection of the state $a$ along the measurement $\phi$ is the inner product(indicated by angle brackets) $\langle \phi , a \rangle$, which can be typographically broken into $\langle \phi \mid\,\mid a \rangle$, two objects which typographically hint vector-ness.
There is also a typewriter limitation that contributes to this notation (and rather too many notations that are just variants of two or more elements in a comma-separated list bounded by parentheses or square brackets: GCD, LCM, object generated by, meet, join, intervals with various endpoint conventions, sequences, object tuples, et al...). It is very time consuming to type a column vector on a typewriter. Typewriters do not have oversized parentheses for column vectors. This leads to strictly malformed constructions like "Let $A$ be a linear operator between $\Bbb{R}^2$ and $\Bbb{R}^2$, then $A \cdot (1,0)$ has ..." where a row vector is typed in a place that a column vector is required. In particular, this means that the most common form of vector in beginning linear algebra is hard to typeset and so was frequently typed incorrectly transposed.
Further, elements of the primal and dual spaces should be readily distinguished (to prevent unintentionally writing, e.g., $\sum_i \mid \mathrm{e}_i \rangle \mid \mathrm{e}_i \rangle$). However, the "obvious" solution is even harder to type: "$\sum_i \langle \mathrm{e}_i \mid \underline{\widehat{\mathrm{e}_i}}$" (and even with the full power of MathJax, as much time as I'm willing to spend on this necessarily has the primal vector pointing up instead of down).
Finally, the stuff one puts in a bra or ket is seldom a set of vector components. By the definitions that a mathematician uses, the components of a vector all come from the same field. This isn't going to work for states described by some continuous and some discrete variables, or by states with some variables in the primal space and some variables in the tangent space. (If we force this to work, we actually get direct sums of modules, not vector spaces.) So while we might like to put lists of state-describing numbers in a bra or a ket, the thing we get is not and cannot be a (formal) vector.
-
IMO the whole column vector vs row vector issue is just an artifact of the over-reliance on matrices. I find nothing wrong with writing vectors $v \in \mathbb{R}^2 \equiv \mathbb{R}\times\mathbb{R}$ as tuples, i.e. $v = (v_0, v_1)$. If this leads to inconsistencies when it comes to matrix multiplication then that's an issue of the matrix notation, not the notation you use to represent vectors in some given space. And of course the component writings only work in the finite-dimensional case. Bra-ket notation avoids all these issues. – leftaroundabout Sep 26 '16 at 22:07
-
(Nevertheless: I rather prefer the maths convention of not using any special markup for vectors or dual vector at all; it should simply be declared in what space the quantity lives that some symbol refers to.) – leftaroundabout Sep 26 '16 at 22:09
-
@leftaroundabout : The idea of having no specific notation for primal versus dual vectors is ineffective in the quantum mechanical setting where the same symbols are used for an eigenstate and its dual. Consider $\langle n+1 \mid H \mid n+1 \rangle$. – Eric Towers Sep 27 '16 at 15:59
-
Well, that is mainly inefficient to write because you can't use $n+1$ as an identifier without wrapping it in a bra/ket. However, if you wrote it $\langle \psi_{n+1} | H | \psi_{n+1}\rangle$ then this could perfectly well be translated to an ordinary inner-product expression $\langle \psi_{n+1}, H \psi_{n+1}\rangle$. No bras or kets here. What does get a bit annoying is when you actually want to talk about dual vectors as such, without executing an inner product – e.g. $\sum_i |v_i\rangle\langle v_i|$ becomes $\sum_i v_i \langle v_i,\cdot\rangle$, which may not be very clear. – leftaroundabout Sep 27 '16 at 16:06
-
@leftaroundabout : ... but that notation requires an adjoint notation ... $\langle \psi_{n+1}^*, H \psi_{n+1} \rangle$, which, contrary to your prior comment, marks up the dual vectors. – Eric Towers Sep 28 '16 at 12:51
-
No, exactly that is not required! The inner product has complex conjugation built in. – leftaroundabout Sep 28 '16 at 12:53
-
@leftaroundabout : Yes and no. The inner product requires that you feed it a primal and a dual. You still have to feed it an actual dual, which means you have to cast a primal to a dual when you use a primal in a dual slot. – Eric Towers Sep 29 '16 at 13:43
-
No you don't. The inner product takes two primal vectors and gives you a number. In fact, that's the main thing that's interesting about an inner product, and the only reason you can simply switch between space and dual space: the mapping $\ast : \mathcal{H}\to \mathcal{H}^\ast$ is defined by $\psi \mapsto \psi^\ast := \langle \psi,\cdot\rangle$. The inverse of that mapping is even less trivial: it only exists in Hilbert spaces, not in general inner-product spaces; this is given by the Riesz representation theorem. – leftaroundabout Sep 29 '16 at 17:14
-
@leftaroundabout : The Question is in the context of quantum mechanics, not in the context of general inner product spaces. Since you seem to want to talk about some other Question, perhaps you should find one that fits and go talk there. – Eric Towers Sep 29 '16 at 18:05
The bra-ket notation is a generalization of the dot product of "normal" vectors. $$ \vec{a} \cdot \vec{b} = \sum_i a_ib_i . $$ This is generalized to the inner product $ \langle a, b \rangle $, which for functions is defined as: $$ \langle a, b \rangle = \int a(x)^* b(x)\, dx $$ in the simple case of one-dimensional functions (the complex conjugation of the first factor matters once the functions are complex-valued, as wavefunctions are).
Well the big advantage of the bra-ket notation is that there is no need to specify the representation, i.e. the coordinate system until one wants to calculate something in a specific space.
Part of the appeal of the notation is the abstract representation-independence it encodes, together with its versatility in producing a specific representation (e.g. x, or p, or eigenfunction base) without much ado, or excessive reliance on the nature of the linear spaces involved.
It is pretty handy when, for example, evaluating equations like
$$ \langle \psi_0 | ( |\psi_0\rangle + |\psi_1\rangle) = \langle \psi_0 |\psi_0\rangle ,$$ where $ |\psi_i\rangle $ are some orthogonal states. It allows fast evaluation without the need to specify a representation for $|\psi\rangle$ - whether $|\psi_0\rangle = (1, 0) $ and $|\psi_1\rangle = (0, 1) $, or $ |\psi_0\rangle = (1, \pi/2) $ and $|\psi_1\rangle = (1, 0) $ in polar coordinates $r, \varphi$.
I do see your point in saying "normal" vector notation is much clearer. That might be the case for simple vectors like the ones above, but it makes things hard to write when it comes to functions in multi- or even infinite-dimensional Hilbert space.
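A discretized sketch of that function inner product and of the orthogonality evaluation above (the grid, the sine/cosine "states", and the normalization are my own illustrative choices, not part of the answer):

```python
import numpy as np

# Discretized version of <a, b> = ∫ a(x)* b(x) dx on a periodic grid.
x = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
dx = x[1] - x[0]

def inner(a, b):
    return np.sum(np.conj(a) * b) * dx

psi0 = np.sin(x) / np.sqrt(np.pi)   # two orthonormal "states" on [0, 2*pi)
psi1 = np.cos(x) / np.sqrt(np.pi)

print(round(inner(psi0, psi0).real, 6))          # ≈ 1.0 : <psi0|psi0>
print(round(inner(psi0, psi1).real, 6))          # ≈ 0.0 : <psi0|psi1>
print(round(inner(psi0, psi0 + psi1).real, 6))   # ≈ 1.0 : <psi0|(|psi0> + |psi1>)
```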
The bra-ket notation comes from Dirac. Feynman gives a good explanation in his Lectures on Physics, vol. 3, p. 3-2. If you are familiar with conditional probability, the probability of seeing $b$ given that we have seen $a$ is written $$P(b|a)$$
In quantum mechanics the corresponding quantity for seeing $b$, if we have already seen $a$, is written in bracket notation: $$\langle b|a \rangle$$ which is the same idea, except that it is not a probability but a complex number called a probability amplitude. In quantum mechanics we don't work with real numbers; the probability calculations give good predictions only when we work with complex numbers. At the end of a calculation, the magnitude of the complex number is squared to obtain the real-number probability we expect to observe: $$| \langle b|a \rangle |^2$$
Now we can talk about posterior conditions and prior conditions as a "bra" $\langle b|$ and a "ket" $|a \rangle$. Then, if in place of a specific outcome $b$ we consider all possible outcomes, we get a vector, a "bra-vector"; likewise, the prior values (or states) give a "ket-vector".
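A toy numerical version of this (the specific states are my own example, not Feynman's): the amplitude $\langle b|a\rangle$ is a complex number, and squaring its magnitude gives the observable probability.

```python
import numpy as np

a = np.array([1, 1j], dtype=complex) / np.sqrt(2)   # the "prior" state |a>
b = np.array([1, 0], dtype=complex)                  # the outcome state |b>

amplitude = np.vdot(b, a)          # np.vdot conjugates its first argument: <b|a>
probability = abs(amplitude) ** 2  # |<b|a>|^2

print(amplitude)     # (0.707...+0j), a complex probability amplitude
print(probability)   # ≈ 0.5, the probability of observing b given a
```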
The preference for bracket notation might be related to how it allows an elegant classical interpretation of quantum measurement.
Consider a system described by the state $|\beta\rangle$; then the average or expected value of an operator $\hat{A}$, which corresponds to the classical quantity, is simply the bracket of the operator, obtained by sandwiching the operator:
$$\langle \beta| \hat{A} |\beta\rangle$$
In the case of the hydrogen atom, for example, the bracket of the position operator, for an electron in an eigenstate $|\epsilon_{n}\rangle$, is zero. Classically, therefore, the electron sits at the nucleus, i.e., the origin:
$$\langle \epsilon_{n}| \hat{X} |\epsilon_{n}\rangle=0$$
Classically, this makes sense of why no radiation is emitted while the system is in an energy eigenstate.
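A small finite-dimensional sketch of that "sandwich" (a 5-point grid position operator and a symmetric toy state, loosely analogous to the eigenstate example above; this is not the actual hydrogen calculation):

```python
import numpy as np

x = np.array([-2, -1, 0, 1, 2], dtype=float)
X = np.diag(x)                                     # toy position operator on a grid

beta = np.array([1, 2, 3, 2, 1], dtype=complex)    # amplitudes symmetric about x = 0
beta /= np.linalg.norm(beta)

expectation = beta.conj() @ X @ beta               # the sandwich <beta| X |beta>
print(np.round(expectation.real, 10))              # 0.0: the "classical" position is the origin
```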
Here's an example. Let's say you're working with the free particle in introductory quantum mechanics, where the "vector" $ \vec{\psi} $ has infinitely many components (I know that sounds crazy if you don't have a lot of experience with quantum mechanics, but it's the case). With traditional notation, you can't keep track of whether $ \vec{\psi} $ belongs to the dual or regular space - whether $ \vec{\psi} $ is a row vector or a column vector, respectively. In standard notation you'd have to write out the components (infinitely many of them!) to demonstrate a row or a column.
Bra-ket notation is nicer there. The "bras" $ \left \langle \psi \right | $ are dual vectors to the "kets" $ \left | \psi \right \rangle $.
A crazier and more useful interpretation is that bras are linear functionals and kets are their arguments.