This post is a little different from the last one, only because it's more about mathematics than physics. It's based on thoughts I have been having about complex numbers and how they relate to 2-dimensional vectors. Follow the jump to see more.

Just to refresh: the imaginary unit is defined by $i^2 = -1$, and by convention the principal root is taken as $i = \sqrt{-1}$. This imaginary unit lies outside the space of real numbers, so an imaginary number line can be constructed perpendicular to the real number line, intersecting it at zero. Any point in this two-dimensional space is a complex number $z = x + iy$. This can also be written in polar form as $z = re^{i\theta}$, where $r = \sqrt{x^2 + y^2}$ and $\theta = \arctan\left(\frac{y}{x}\right)$ (with the quadrant chosen according to the signs of $x$ and $y$). The conjugate of a complex number is defined as $z^{\star} \equiv x - iy = re^{-i\theta}$, which means that the product $z^{\star} z = r^2 \in \mathbb{R}$. Beyond that, complex numbers support addition, subtraction, multiplication, and division, with the usual commutative, associative, and distributive properties. Using the facts that $i^2 = -1$ and $e^{i\theta} \equiv \cos(\theta) + i\sin(\theta)$, exponentiation and other mathematical operations can be recovered too.
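To make these definitions concrete, here is a quick numerical sketch using Python's built-in `cmath` module (the specific number $z = 3 + 4i$ is of course just an example):

```python
import cmath
import math

# A complex number in Cartesian form: z = x + iy
z = 3 + 4j

# Polar form: r = sqrt(x^2 + y^2), theta measured from the positive real axis
r, theta = cmath.polar(z)
assert math.isclose(r, math.hypot(z.real, z.imag))

# Reconstruct z from its polar form via Euler's formula e^{i theta}
assert cmath.isclose(cmath.rect(r, theta), z)
assert cmath.isclose(r * cmath.exp(1j * theta), z)

# The conjugate flips the sign of the imaginary part (and of theta)
assert z.conjugate() == 3 - 4j

# z* z is real: it equals r^2
assert math.isclose((z.conjugate() * z).real, r**2)
assert abs((z.conjugate() * z).imag) < 1e-12
```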

What about 2-dimensional vectors? The traditional Cartesian basis choice allows the decomposition of a vector $\mathbf{u} = u_x \mathbf{e}_x + u_y \mathbf{e}_y$. These also have a polar form $\mathbf{u} = u_r \mathbf{e}_r + u_{\theta} \mathbf{e}_{\theta}$, where the unit vectors $\mathbf{e}_r$ and $\mathbf{e}_{\theta}$ depend on the angle measured counterclockwise from some fixed basis vector (usually $\mathbf{e}_x$). In the Cartesian basis, these vectors can be added, subtracted, and multiplied by scalars, and they satisfy the other properties that make them part of a vector space.

You may be wondering what the connection is between 2-dimensional vectors and complex numbers. The initial connection is just that the imaginary axis lies perpendicular to the real axis by convention, so complex numbers lie in a plane just like points in the $xy$-plane. That's basically all that the book *Vibrations and Waves* by A. P. French (required for 8.03 — Physics III) had to say about it, and it didn't make a whole lot of sense to me that this alone should connect 2-dimensional vectors with complex numbers. There are, after all, a few differences. The inner product for 2-dimensional vectors $\mathbf{u}$ and $\mathbf{v}$ is the dot product $\mathbf{u} \cdot \mathbf{v} \in \mathbb{R}$, which is independent of basis. In the Cartesian or polar bases, it can be expanded as $\mathbf{u} \cdot \mathbf{v} = u_x v_x + u_y v_y = u_r v_r \mathbf{e}_{r_u} \cdot \mathbf{e}_{r_v} + u_{\theta} v_{\theta} \mathbf{e}_{\theta_u} \cdot \mathbf{e}_{\theta_v}$, where the polar representations are $\mathbf{u} = u_r \mathbf{e}_{r_u} + u_{\theta} \mathbf{e}_{\theta_u}$ and $\mathbf{v} = v_r \mathbf{e}_{r_v} + v_{\theta} \mathbf{e}_{\theta_v}$, and where the dot products of the different radial and tangential unit vectors cannot be simplified further without expanding in the Cartesian basis, due to the angular dependence of the radial and tangential unit vectors. Anyway, I haven't yet defined a similar inner product for complex numbers that would allow them to represent 2-dimensional vectors in the same way, but it is possible, and that shall be done shortly. Also, complex numbers can be multiplied and divided, whereas 2-dimensional vectors cannot; this is on account of the complex numbers being a *field* (related to $i^2 = -1$), and 2-dimensional vectors cannot be made to have direct multiplication and division except by essentially becoming complex numbers.

Before getting into inner products, though, I'd like to digress a little and show how the polar basis is actually more tractable for complex numbers than for 2-dimensional vectors, especially when working with problems in Newtonian mechanics such as in 8.012 — Physics I. Because 8.012 goes deeper into the Newtonian analysis of problems like rigid-body rotation, gyroscopes, Euler equations, and central forces, and because all central force problems can be confined to a plane, knowledge of polar coordinates is crucial for success in the class. A general vector is decomposed into polar coordinates as $\mathbf{u} = u_r \mathbf{e}_r + u_{\theta} \mathbf{e}_{\theta}$, in which the Cartesian decomposition of the polar unit vectors yields $\mathbf{e}_r = \cos(\theta) \mathbf{e}_x + \sin(\theta) \mathbf{e}_y$ and $\mathbf{e}_{\theta} = -\sin(\theta) \mathbf{e}_x + \cos(\theta) \mathbf{e}_y$. The position vector is $\mathbf{r} = r\mathbf{e}_r$, with the direction of the vector left implicit if decomposition into the Cartesian basis is not desired. Because the direction of the radial unit vector changes from point to point in the plane, its time derivative along a trajectory is not zero: $\dot{\mathbf{e}}_r = \dot{\theta} \mathbf{e}_{\theta}$, and likewise for the tangential unit vector, $\dot{\mathbf{e}}_{\theta} = -\dot{\theta} \mathbf{e}_r$. These can be proved through decomposition into the Cartesian basis, or through geometric arguments if one wants to avoid the Cartesian basis altogether. This means that the velocity vector is $\dot{\mathbf{r}} = \dot{r} \mathbf{e}_r + r\dot{\theta} \mathbf{e}_{\theta}$ and the acceleration vector is $\ddot{\mathbf{r}} = (\ddot{r} - r\dot{\theta}^2) \mathbf{e}_r + (2\dot{r} \dot{\theta} + r\ddot{\theta}) \mathbf{e}_{\theta}$. Without knowing the geometrical argument for the time derivatives of the polar unit vectors, it is not immediately obvious why a purely radial position vector should have time derivatives with both radial and tangential components. But what if it were possible to do all of this using complex numbers instead?
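The unit-vector derivatives quoted above are easy to spot-check numerically by expanding in the Cartesian basis; here is a minimal sketch (the trajectory $\theta(t)$ is made up purely for illustration):

```python
import math

def e_r(theta):
    # e_r = cos(theta) e_x + sin(theta) e_y, as an (x, y) tuple
    return (math.cos(theta), math.sin(theta))

def e_theta(theta):
    # e_theta = -sin(theta) e_x + cos(theta) e_y
    return (-math.sin(theta), math.cos(theta))

# Sample angular motion theta(t) = 0.7 t + 0.1 t^2, so theta_dot(t) = 0.7 + 0.2 t
theta = lambda t: 0.7 * t + 0.1 * t**2
theta_dot = lambda t: 0.7 + 0.2 * t

t, h = 1.3, 1e-6
# Central finite difference of e_r along the trajectory
de_r = tuple((a - b) / (2 * h)
             for a, b in zip(e_r(theta(t + h)), e_r(theta(t - h))))
# Expected from the formula: e_r_dot = theta_dot * e_theta
expected = tuple(theta_dot(t) * c for c in e_theta(theta(t)))
assert all(math.isclose(a, b, abs_tol=1e-6) for a, b in zip(de_r, expected))
```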

A radial position vector would be represented by a complex number as (and I will be abusing vector and complex notation here by mixing them a bit) $\mathbf{r} = re^{i\theta}$. Note here that the radial unit vector is $\mathbf{e}_r \equiv e^{i\theta}$, which makes the directionality of the vector explicit; this makes sense given that in the Cartesian basis, $\mathbf{e}_r = \cos(\theta) \mathbf{e}_x + \sin(\theta) \mathbf{e}_y$, while $e^{i\theta} = \cos(\theta) + i\sin(\theta)$. Taking one time derivative of the position vector yields $\dot{\mathbf{r}} = \dot{r} e^{i\theta} + ir\dot{\theta} e^{i\theta}$. It is now easy to read off that the tangential unit vector is $\mathbf{e}_{\theta} = ie^{i\theta} = e^{i(\theta + \frac{\pi}{2})}$, which makes it manifestly obvious that the tangential unit vector is the radial unit vector rotated by $\frac{\pi}{2}$, and makes the time derivatives of the unit vectors immediate: $\dot{\mathbf{e}}_r = i\dot{\theta} e^{i\theta} = \dot{\theta} \mathbf{e}_{\theta}$ and $\dot{\mathbf{e}}_{\theta} = i^2 \dot{\theta} e^{i\theta} = -\dot{\theta} \mathbf{e}_r$. Taking another time derivative yields $\ddot{\mathbf{r}} = (\ddot{r} - r\dot{\theta}^2) e^{i\theta} + (2\dot{r} \dot{\theta} + r\ddot{\theta}) ie^{i\theta}$, which is consistent with the expression for the acceleration vector. Hence, it is possible to do Newtonian mechanics in 2 dimensions with complex numbers.
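The same bookkeeping can be verified numerically: differentiate $z(t) = r(t)e^{i\theta(t)}$ by finite differences and compare against the closed-form velocity and acceleration. A small sketch (the trajectory is invented for illustration):

```python
import cmath

# Invented sample trajectory r(t), theta(t), with derivatives worked out by hand
r  = lambda t: 2.0 + 0.5 * t       # so r_dot = 0.5 and r_ddot = 0
th = lambda t: 0.3 * t**2          # so th_dot = 0.6 t and th_ddot = 0.6
r_dot, r_ddot = (lambda t: 0.5), 0.0
th_dot, th_ddot = (lambda t: 0.6 * t), 0.6

z = lambda t: r(t) * cmath.exp(1j * th(t))   # position as a complex number

t, h = 0.8, 1e-6
# Velocity: central finite difference vs. z_dot = (r_dot + i r th_dot) e^{i th}
v_numeric = (z(t + h) - z(t - h)) / (2 * h)
v_exact = (r_dot(t) + 1j * r(t) * th_dot(t)) * cmath.exp(1j * th(t))
assert abs(v_numeric - v_exact) < 1e-5

# Acceleration: second central difference vs.
# z_ddot = (r_ddot - r th_dot^2) e^{i th} + (2 r_dot th_dot + r th_ddot) i e^{i th}
h2 = 1e-4
a_numeric = (z(t + h2) - 2 * z(t) + z(t - h2)) / h2**2
a_exact = ((r_ddot - r(t) * th_dot(t)**2)
           + 1j * (2 * r_dot(t) * th_dot(t) + r(t) * th_ddot)) * cmath.exp(1j * th(t))
assert abs(a_numeric - a_exact) < 1e-4
```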

Now, what about inner products? We know that for 2-dimensional vectors, in general, $\mathbf{u} \cdot \mathbf{v} = |\mathbf{u}||\mathbf{v}|\cos(\theta(\mathbf{u}, \mathbf{v}))$, and in the Cartesian basis, $\mathbf{u} \cdot \mathbf{v} = u_x v_x + u_y v_y$. How is it possible to reproduce this with complex numbers? A quick check with $\mathbf{u} = u_x + iu_y$ and $\mathbf{v} = v_x + iv_y$ shows that plain multiplication won't work. However, we can use a simpler version of the inner product for complex vector spaces, which says (in Dirac bra-ket notation) that for complex vectors expanded in a basis as $|u\rangle = \sum_j u_j |j\rangle$ and $|v\rangle = \sum_j v_j |j\rangle$ with $u_j, v_j \in \mathbb{C}$, the inner product is $\langle u|v\rangle = \sum_j u_j^{\star} v_j = \langle v|u \rangle^{\star}$. This suggests taking the inner product to be $\mathbf{u} \cdot \mathbf{v} = \mathbf{u}^{\star} \mathbf{v}$, where we remember that $\mathbf{u}$ and $\mathbf{v}$ are single complex numbers and can therefore be thought of as 1-dimensional complex vectors. The problem is that this notion of an inner product yields a complex number as its result, and we want the result to be real so that it can mimic the result for 2-dimensional vectors. The answer turns out to be taking the real part of that expression: $\mathbf{u} \cdot \mathbf{v} = \frac{1}{2} \left(\mathbf{u}^{\star} \mathbf{v} + \mathbf{u} \mathbf{v}^{\star} \right)$. This works in the Cartesian basis: if $\mathbf{u} = u_x + iu_y$ and $\mathbf{v} = v_x + iv_y$, then $\mathbf{u} \cdot \mathbf{v} = u_x v_x + u_y v_y$. It works in the polar basis as well: if $\mathbf{u} = ue^{i\theta_u}$ and $\mathbf{v} = ve^{i\theta_v}$, then $\mathbf{u} \cdot \mathbf{v} = uv\cos(\theta_v - \theta_u)$, which is in fact the more general definition anyway. (As a side note, what if we took the imaginary part instead? The answer would be the signed $z$-component of the cross product: $\mathrm{Im}(\mathbf{u}^{\star} \mathbf{v}) = uv\sin(\theta_v - \theta_u) = u_x v_y - u_y v_x$, whose absolute value is the magnitude $|\mathbf{u} \times \mathbf{v}|$. The vector cross product itself would not lie in the plane and therefore could not be expressed as a complex number. If we wanted to represent 3-dimensional vectors as some generalization of the complex numbers, we would use quaternions, which are extremely well-documented and which I still don't understand that much.)
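Both observations collapse into the single statement $\mathbf{u}^{\star} \mathbf{v} = (\mathbf{u} \cdot \mathbf{v}) + i(u_x v_y - u_y v_x)$, which is easy to spot-check (the sample vectors here are arbitrary):

```python
import cmath
import math

# Arbitrary sample vectors, encoded as complex numbers
u = 2 + 1j   # (u_x, u_y) = (2, 1)
v = -1 + 3j  # (v_x, v_y) = (-1, 3)

w = u.conjugate() * v
# Real part reproduces the dot product u_x v_x + u_y v_y
assert math.isclose(w.real, 2 * (-1) + 1 * 3)
# Imaginary part reproduces the z-component of the cross product, u_x v_y - u_y v_x
assert math.isclose(w.imag, 2 * 3 - 1 * (-1))

# Polar check: Re(u* v) = |u||v| cos(theta_v - theta_u)
ru, tu = cmath.polar(u)
rv, tv = cmath.polar(v)
assert math.isclose(w.real, ru * rv * math.cos(tv - tu))
```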

Note that, in principle, any 2-dimensional vector or complex number can be expressed as the product of its magnitude and its direction in the respective polar forms. Because the radial and tangential unit vectors depend on the angle $\theta$, they differ from point to point in the plane. For this reason, it is customary to fix the position as a purely radial vector $\mathbf{r} = r\mathbf{e}_r$ and express other vectors as superpositions of the radial and tangential unit vectors at that position. In complex notation, this means that for a given angle $\theta$, the position is $\mathbf{r} = re^{i\theta}$, and any other vector can be expressed at that same angle as $\mathbf{u} = u_r e^{i\theta} + iu_{\theta} e^{i\theta}$. And there you have it: you have been able to explore with me some of the other properties of complex numbers when representing 2-dimensional vectors!
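Reading off the radial and tangential components of an arbitrary vector then amounts to rotating it by $-\theta$, i.e., multiplying by $e^{-i\theta}$. A small sketch with made-up numbers:

```python
import cmath
import math

theta = 0.9            # direction of the radial unit vector e_r = e^{i theta}
u = 1.5 - 2.0j         # an arbitrary vector encoded as a complex number

# Rotate u by -theta: the real part is u_r, the imaginary part is u_theta
w = cmath.exp(-1j * theta) * u
u_r, u_theta = w.real, w.imag

# Reassemble: u = u_r e^{i theta} + i u_theta e^{i theta}
reassembled = (u_r + 1j * u_theta) * cmath.exp(1j * theta)
assert cmath.isclose(reassembled, u)

# Consistency with the inner-product definition: u_r = Re(e_r* u)
assert math.isclose(u_r, (cmath.exp(1j * theta).conjugate() * u).real)
```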