This post is a little different from the last one, only because it's more about mathematics than physics. It's based on thoughts I have been having about complex numbers and how they relate to 2-dimensional vectors. Follow the jump to see more.
Just to refresh, the imaginary unit is defined by $i^2 = -1$, and by convention the sign is taken as $i = \sqrt{-1}$. This imaginary unit lies outside the space of real numbers, so an imaginary number line can be constructed perpendicular to the real number line, intersecting it at zero. Any point in this new two-dimensional space is a complex number $z = x + iy$. This can also be written in polar form as $z = re^{i\theta}$, where $r = \sqrt{x^2 + y^2}$ and $\theta = \arctan(y/x)$. The conjugate of a complex number is defined as $z^\star \equiv x - iy = re^{-i\theta}$, which means that $z^\star z \in \mathbb{R}$. Other than that, complex numbers feature commutativity, associativity, and distributivity in addition, subtraction, multiplication, and division. Using the facts that $i^2 = -1$ and $e^{i\theta} \equiv \cos(\theta) + i\sin(\theta)$, exponentiation and other mathematical operations can be recovered too.
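These basic facts are easy to check numerically with Python's built-in complex type and the `cmath` module; the sample value $z = 3 + 4i$ below is arbitrary:

```python
# Numerical check of the polar form, the conjugate, and the fact that
# z* z is real. The sample value z = 3 + 4i is arbitrary.
import cmath

z = 3 + 4j                          # z = x + iy with x = 3, y = 4
r, theta = abs(z), cmath.phase(z)   # r = sqrt(x^2 + y^2), theta = arctan(y/x)

# Reassembling the polar form r e^{i theta} recovers z
assert abs(r * cmath.exp(1j * theta) - z) < 1e-12

# The conjugate flips the sign of the imaginary part (and of theta)
zc = z.conjugate()
assert zc == 3 - 4j
assert abs(zc - r * cmath.exp(-1j * theta)) < 1e-12

# z* z is real and equals r^2
assert abs((zc * z).imag) < 1e-12
assert abs((zc * z).real - r**2) < 1e-12
```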
What about 2-dimensional vectors? The traditional Cartesian basis allows the decomposition of a vector $\mathbf{u} = u_x \mathbf{e}_x + u_y \mathbf{e}_y$. These also have a polar form $\mathbf{u} = u_r \mathbf{e}_r + u_\theta \mathbf{e}_\theta$, where the unit vectors $\mathbf{e}_r$ and $\mathbf{e}_\theta$ depend on the angle measured with respect to some fixed basis vector (usually $\mathbf{e}_x$, counterclockwise). These vectors can be added, subtracted, and multiplied by scalars, and they satisfy the other properties that make them elements of a vector space.
You may be wondering what the connection is between 2-dimensional vectors and complex numbers. The initial connection is just that the imaginary axis lies perpendicular to the real axis by convention, so complex numbers lie in a plane just like points in the $xy$-plane. That's basically all that the book Vibrations and Waves by A. P. French (required for 8.03 — Physics III) had to say about it, and it didn't make a whole lot of sense to me that this alone should connect 2-dimensional vectors with complex numbers. There are, after all, a few differences. The inner product for 2-dimensional vectors $\mathbf{u}$ and $\mathbf{v}$ is the dot product $\mathbf{u} \cdot \mathbf{v} \in \mathbb{R}$, which is independent of basis. In the Cartesian or polar bases, it can be expanded as $\mathbf{u} \cdot \mathbf{v} = u_x v_x + u_y v_y = u_r v_r\, \mathbf{e}_{r_u} \!\cdot \mathbf{e}_{r_v} + u_r v_\theta\, \mathbf{e}_{r_u} \!\cdot \mathbf{e}_{\theta_v} + u_\theta v_r\, \mathbf{e}_{\theta_u} \!\cdot \mathbf{e}_{r_v} + u_\theta v_\theta\, \mathbf{e}_{\theta_u} \!\cdot \mathbf{e}_{\theta_v}$, where the polar representations are $\mathbf{u} = u_r \mathbf{e}_{r_u} + u_\theta \mathbf{e}_{\theta_u}$ and $\mathbf{v} = v_r \mathbf{e}_{r_v} + v_\theta \mathbf{e}_{\theta_v}$, and where the dot products of the radial and tangential unit vectors cannot be simplified further without expanding in the Cartesian basis, because the two vectors carry different radial and tangential unit vectors. Anyway, I haven't yet defined a similar inner product for complex numbers that would allow them to represent 2-dimensional vectors in the same way, but it is possible, and that shall be done. Also, complex numbers can be multiplied and divided, whereas 2-dimensional vectors cannot; this is on account of complex numbers being a field (related to $i^2 = -1$), and 2-dimensional vectors cannot be given direct multiplication and division except by essentially becoming complex numbers.
Before getting into inner products, though, I'd like to digress a little and show how the polar basis is actually more tractable for complex numbers than for 2-dimensional vectors, especially when working with problems in Newtonian mechanics such as in 8.012 — Physics I. Because 8.012 goes deeper into the Newtonian analysis of problems like rigid-body rotation, gyroscopes, Euler equations, and central forces, and because all central force problems can be confined to a plane, knowledge of polar coordinates is crucial for success in the class. A general vector is decomposed into polar coordinates as $\mathbf{u} = u_r \mathbf{e}_r + u_\theta \mathbf{e}_\theta$, in which the Cartesian decomposition of the polar unit vectors yields $\mathbf{e}_r = \cos(\theta)\mathbf{e}_x + \sin(\theta)\mathbf{e}_y$ and $\mathbf{e}_\theta = -\sin(\theta)\mathbf{e}_x + \cos(\theta)\mathbf{e}_y$. The position vector is $\mathbf{r} = r\mathbf{e}_r$, with the direction of the vector implicit if decomposition into the Cartesian basis is not desired. Because the direction of the radial unit vector is implicit and because it changes at each point in the plane, its time derivative is not zero: $\dot{\mathbf{e}}_r = \dot{\theta}\mathbf{e}_\theta$, and likewise for the tangential unit vector, $\dot{\mathbf{e}}_\theta = -\dot{\theta}\mathbf{e}_r$. These can be proved through decomposition into the Cartesian basis, or through geometric arguments if one wants to avoid the Cartesian basis altogether. This means that the velocity vector is $\dot{\mathbf{r}} = \dot{r}\mathbf{e}_r + r\dot{\theta}\mathbf{e}_\theta$ and the acceleration vector is $\ddot{\mathbf{r}} = (\ddot{r} - r\dot{\theta}^2)\mathbf{e}_r + (2\dot{r}\dot{\theta} + r\ddot{\theta})\mathbf{e}_\theta$. Without knowing the geometrical argument for the time derivatives of the polar unit vectors, it is not immediately obvious why a purely radial position vector should have time derivatives with both radial and tangential components. But what if it were possible to do this using complex numbers instead?
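The unit-vector derivatives quoted above can be verified by finite differences for any smooth $\theta(t)$; the particular angle function and step size below are just illustrative choices:

```python
# Finite-difference check that d(e_r)/dt = theta_dot * e_theta and
# d(e_theta)/dt = -theta_dot * e_r, for an arbitrary smooth theta(t).
import math

def theta(t):                 # any smooth angle function works here
    return 0.7 * t + 0.2 * math.sin(t)

def theta_dot(t):
    return 0.7 + 0.2 * math.cos(t)

def e_r(t):
    th = theta(t)
    return (math.cos(th), math.sin(th))

def e_theta(t):
    th = theta(t)
    return (-math.sin(th), math.cos(th))

def ddt(f, t, h=1e-6):        # central-difference derivative of a 2-vector
    fp, fm = f(t + h), f(t - h)
    return ((fp[0] - fm[0]) / (2 * h), (fp[1] - fm[1]) / (2 * h))

t = 1.3
er_dot = ddt(e_r, t)
et_dot = ddt(e_theta, t)
td = theta_dot(t)

# d(e_r)/dt = theta_dot * e_theta
assert all(abs(a - td * b) < 1e-6 for a, b in zip(er_dot, e_theta(t)))
# d(e_theta)/dt = -theta_dot * e_r
assert all(abs(a + td * b) < 1e-6 for a, b in zip(et_dot, e_r(t)))
```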
A radial position vector would be represented by a complex number as (and I will be abusing vector and complex notation here by mixing them a bit) $r = re^{i\theta}$. Note here that the radial unit vector is $\mathbf{e}_r \equiv e^{i\theta}$, which makes the directionality of the vector explicit; this makes sense given that in the Cartesian basis, $\mathbf{e}_r = \cos(\theta)\mathbf{e}_x + \sin(\theta)\mathbf{e}_y$, while $e^{i\theta} = \cos(\theta) + i\sin(\theta)$. Taking one time derivative of the position vector yields $\dot{r}e^{i\theta} + ir\dot{\theta}e^{i\theta}$. It is now easy to read off that the tangential unit vector is $\mathbf{e}_\theta = ie^{i\theta} = e^{i(\theta + \pi/2)}$, which makes manifestly obvious both the fact that the tangential unit vector is the radial unit vector rotated by $\pi/2$ and the facts about the time derivatives of those unit vectors. Taking another time derivative yields $(\ddot{r} - r\dot{\theta}^2)e^{i\theta} + (2\dot{r}\dot{\theta} + r\ddot{\theta})ie^{i\theta}$, which is also consistent with the expression for the acceleration vector. Hence, it is possible to do Newtonian mechanics in 2 dimensions with complex numbers.
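This can be checked end to end: differentiate $z(t) = r(t)e^{i\theta(t)}$ by finite differences and compare against the polar formulas (with the minus sign in the radial part of the acceleration). The particular $r(t)$ and $\theta(t)$ below are arbitrary smooth choices:

```python
# Numerically verify the complex-number forms of the polar velocity and
# acceleration for z(t) = r(t) e^{i theta(t)}, with arbitrary r(t), theta(t).
import cmath
import math

r    = lambda t: 2.0 + 0.5 * math.sin(t)
rd   = lambda t: 0.5 * math.cos(t)        # r'
rdd  = lambda t: -0.5 * math.sin(t)       # r''
th   = lambda t: 1.3 * t
thd  = lambda t: 1.3                      # theta'
thdd = lambda t: 0.0                      # theta''

z = lambda t: r(t) * cmath.exp(1j * th(t))

t, h = 0.8, 1e-4
v_fd = (z(t + h) - z(t - h)) / (2 * h)          # velocity, central difference
a_fd = (z(t + h) - 2 * z(t) + z(t - h)) / h**2  # acceleration, second difference

e_rad = cmath.exp(1j * th(t))                   # radial "unit vector" e^{i theta}
e_tan = 1j * e_rad                              # tangential direction i e^{i theta}

# Polar formulas: v = r' e_r + r theta' e_theta,
#                 a = (r'' - r theta'^2) e_r + (2 r' theta' + r theta'') e_theta
v = rd(t) * e_rad + r(t) * thd(t) * e_tan
a = (rdd(t) - r(t) * thd(t)**2) * e_rad + (2 * rd(t) * thd(t) + r(t) * thdd(t)) * e_tan

assert abs(v - v_fd) < 1e-6
assert abs(a - a_fd) < 1e-4
```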
Now, what about inner products? We know that for 2-dimensional vectors, in general, $\mathbf{u} \cdot \mathbf{v} = |\mathbf{u}||\mathbf{v}|\cos(\theta_{uv})$, and in the Cartesian basis, $\mathbf{u} \cdot \mathbf{v} = u_x v_x + u_y v_y$. How is it possible to reproduce this with complex numbers? A quick check for $u = u_x + iu_y$ and $v = v_x + iv_y$ shows that plain multiplication won't work. However, we can use a simpler version of the inner product for complex vector spaces, which says (in Dirac bra-ket notation) that for complex vectors expanded in a basis as $|u\rangle = \sum_j u_j |j\rangle$ and likewise $|v\rangle = \sum_j v_j |j\rangle$ with $u_j, v_j \in \mathbb{C}$, the inner product is $\langle u|v\rangle = \sum_j u_j^\star v_j = \langle v|u\rangle^\star$. This means that we could possibly take the inner product to be $u^\star v$, where we remember now that $u$ and $v$ are single complex numbers and can therefore be thought of as 1-dimensional complex vectors. The problem is that this notion of an inner product yields a complex number for its result, and we want the result to be real so that it can mimic the result for 2-dimensional vectors. The answer turns out to lie in taking the real part of that expression: $u \cdot v = \frac{1}{2}(u^\star v + uv^\star)$. This works in the Cartesian basis: if $u = u_x + iu_y$ and $v = v_x + iv_y$, then $u \cdot v = u_x v_x + u_y v_y$. It works in the polar basis as well: if $u = |u|e^{i\theta_u}$ and $v = |v|e^{i\theta_v}$, then $u \cdot v = |u||v|\cos(\theta_v - \theta_u)$, which is in fact the more general definition anyway. (As a side note, what if we took the imaginary part? The answer would in fact be the magnitude of the cross product: $|\mathbf{u} \times \mathbf{v}| = |u||v||\sin(\theta_v - \theta_u)| = |u_x v_y - u_y v_x|$. The vector cross product itself would not lie in the plane and therefore could not be expressed as a complex number. If we wanted to represent 3-dimensional vectors as some generalization of the complex numbers, we would use quaternions, which are extremely well-documented and which I still don't understand that much.)
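Both claims are quick to confirm numerically: the real part of $u^\star v$ gives the dot product and the imaginary part gives the (signed) cross product, in Cartesian and polar forms alike. The sample components are arbitrary:

```python
# Check that Re(u* v) is the 2-D dot product and Im(u* v) is the signed
# cross product u_x v_y - u_y v_x, with arbitrary sample values.
import math

ux, uy = 1.2, -0.7
vx, vy = 0.4, 2.5
u = complex(ux, uy)
v = complex(vx, vy)

inner = u.conjugate() * v

assert abs(inner.real - (ux * vx + uy * vy)) < 1e-12   # dot product
assert abs(inner.imag - (ux * vy - uy * vx)) < 1e-12   # cross product (z-component)

# Equivalently, in polar form: |u||v|cos(dtheta) and |u||v|sin(dtheta)
dtheta = math.atan2(vy, vx) - math.atan2(uy, ux)
assert abs(inner.real - abs(u) * abs(v) * math.cos(dtheta)) < 1e-12
assert abs(inner.imag - abs(u) * abs(v) * math.sin(dtheta)) < 1e-12
```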
Note that, in principle, any 2-dimensional vector or complex number can be expressed as the product of its magnitude and its direction in their respective polar forms. Because of this, the radial and tangential unit vectors differ from position to position. It is therefore customary to fix the position as a purely radial vector $\mathbf{r} = r\mathbf{e}_r$ and to express other vectors as superpositions of both the radial and tangential unit vectors. In complex notation, this means that for a given angle $\theta$, the position is $re^{i\theta}$, and any other vector at that same angle can be expressed as $u = u_r e^{i\theta} + iu_\theta e^{i\theta}$. And there you have it: you have been able to explore with me some of the other properties of complex numbers when representing 2-dimensional vectors!
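As a final sketch, the radial and tangential components of such a complex "vector" can be recovered by rotating by $e^{-i\theta}$, which aligns the radial direction with the real axis; the sample values here are arbitrary:

```python
# Recover the polar components of u = u_r e^{i theta} + i u_theta e^{i theta}
# by rotating u back by e^{-i theta}. Sample values are arbitrary.
import cmath

theta = 0.6
u_r, u_t = 1.5, -0.8
u = u_r * cmath.exp(1j * theta) + 1j * u_t * cmath.exp(1j * theta)

rotated = u * cmath.exp(-1j * theta)
assert abs(rotated.real - u_r) < 1e-12   # radial component
assert abs(rotated.imag - u_t) < 1e-12   # tangential component
```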