This was a post that I had been thinking of doing for a while, but I couldn't get around to it until now. A lot of introductory electricity & magnetism problems constrain charges to move in only 1 or 2 dimensions, but in reality the constraint exists within a 3-dimensional space. I thought those constrained problems would cover the bases for electrodynamics in 1 or 2 dimensions, but then I saw that in cylindrical coordinates, the order-0 multipole term of the potential outside a line charge is $\phi \propto \ln(r)$ as opposed to $\phi \propto \frac{1}{r}$. That made me realize that there is in fact a distinction among 1, 2, and 3 dimensions. In all of the following, I will make use of the conventions and relations \[ x^{\mu} = (ct, x, y, z) \\ \partial_{\mu} = \left(\frac{1}{c} \frac{\partial}{\partial t}, \frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z}\right) \\ \eta_{\mu \nu} = \begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \\ F^{\mu \nu} = \begin{bmatrix} 0 & E_x & E_y & E_z \\ -E_x & 0 & B_z & -B_y \\ -E_y & -B_z & 0 & B_x \\ -E_z & B_y & -B_x & 0 \end{bmatrix} \\ \partial_{\nu} F^{\mu \nu} = \frac{4\pi}{c} J^{\mu} \\ \epsilon_{\mu \nu \zeta \xi} \partial^{\nu} F^{\zeta \xi} = 0 \\ \mathbf{F} = q\left(\mathbf{E} + \frac{\mathbf{v}}{c} \times \mathbf{B}\right) \] in 3 dimensions, with Einstein summation and CGS implied (with more on that last point nearer to the end), with Latin indices representing only spatial components, and with Greek indices representing spacetime components. Also note that the fully antisymmetric tensor $\epsilon$ has $n$ Latin indices in $n$ spatial dimensions and $n+1$ Greek indices in $n+1$ spacetime dimensions; for example, in 2 spatial dimensions, the antisymmetric tensor over only space looks like $\epsilon_{ij}$, while over spacetime it looks like $\epsilon_{\mu \nu \xi}$, and I will frequently switch between the two as needed. Follow the jump to see what happens.
2
A 2-dimensional system is essentially a 3-dimensional system with the constraint that nothing can depend on $z$, so everything is uniform along that direction. This reduces the spatial coordinates to Cartesian $(x, y)$ or plane polar $(r, \theta)$ (which would be $(\rho, \phi, z)$ in 3-dimensional cylindrical coordinates). A "point" charge in 2 dimensions is a charge distribution that is independent of $z$, corresponding to a thin line of charge in 3 dimensions. This means \[ \mathbf{E} = \frac{2q}{r} \mathbf{e}_r \] is the basic electric field for a point charge $q$ in 2 dimensions. It follows that \[ \mathbf{F} = q_1 \mathbf{E}_2 = \frac{2q_1 q_2}{r_{(1,2)}} \mathbf{e}_{r_{(1,2)}} \] is the Coulomb force between 2 point charges in 2 dimensions, which in 3 dimensions corresponds to the force between 2 lines of charge separated by a displacement $\mathbf{r}_{(1,2)}$. Note that the dimensions of $q$ and $\mathbf{E}$ are now different: $q$ has dimensions $(Fl)^{\frac{1}{2}} = (ml^2 t^{-2})^{\frac{1}{2}} = m^{\frac{1}{2}} lt^{-1}$, and $\mathbf{E}$ has dimensions $\frac{q}{l} = m^{\frac{1}{2}} t^{-1}$. Furthermore, note that the Gauss law still works, because now the "surface" is a circle with the point charge at its center and with outward orientation (so that the electric flux does not vanish).

What else happens? Well, integrating the electric field of a point charge gives \[ \phi = -2q \ln\left(\frac{r}{r_0}\right) \] as its potential, where $r_0$ is a characteristic length scale at which $\phi = 0$; such a scale is needed because $\phi$ diverges as $r \rightarrow \infty$. This seems a little weird to me, and I'm not as comfortable continuing, but I would like to see where this ends up. Naïvely, I might say that the potential diverging at infinity implies that the work it takes to bring a test charge in from infinity is itself infinite.
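As a quick symbolic check (a sketch using sympy, with the sign convention $E_r = -\frac{\partial \phi}{\partial r}$), the logarithmic potential does reproduce the $\frac{2q}{r}$ field:

```python
import sympy as sp

# Check that E_r = -dphi/dr recovers the 2-D point-charge field E = 2q/r,
# with phi = -2 q ln(r/r0) and r0 the radius at which phi vanishes.
q, r, r0 = sp.symbols('q r r_0', positive=True)
phi = -2*q*sp.log(r/r0)     # potential of a point charge in 2 dimensions
E_r = -sp.diff(phi, r)      # radial electric field
print(E_r)                  # -> 2*q/r
```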
In reality, the electric potential cannot be interpreted as a work per unit charge if it does not drop to zero infinitely far away; the reason the work appears to be infinitely large is that in 3 dimensions, these point charges are actually line charges, so it takes an infinite amount of work to assemble charge distributions that are infinitely long along $z$. This also means that \[ W = \frac{1}{8\pi} \int \mathbf{E}^2 \, d^3 r \] becomes \[ W = \frac{1}{8\pi} \int \mathbf{E}^2 \, d^2 r \] which diverges because $d^2 r \propto r$ while $\mathbf{E} \propto \frac{1}{r}$, so $\mathbf{E}^2 \, d^2 r \propto \frac{1}{r}$, whose integral diverges as well. This is borne out by considering line charges in 3 dimensions to be point charges in 2 dimensions.
But that doesn't mean useful things can't still be done. A general charge distribution can still have a potential expanded in terms of its multipole moments. This would now be a cylindrical multipole expansion \[ \phi = \phi_0 - 2q\ln\left(\frac{r}{r_0}\right) \\ + \sum_{k = 1}^{\infty} \left(\left(A_k r^k + \frac{B_k}{r^k}\right) \cos\left(k\theta\right) + \left(C_k r^k + \frac{D_k}{r^k}\right) \sin\left(k\theta\right)\right) \] where $A_k$ and $C_k$ come into play to ensure that $\phi$ is finite at $r = 0$, and likewise $B_k$ and $D_k$ come into play to ensure that $\phi$ does not diverge faster than $\ln(r)$ (unless that is desired) as $r \rightarrow \infty$; the only thing to note is that the terms $\phi_0$, $A_k$, and $C_k$ are relevant for the interior of a desired region, while $-2q\ln\left(\frac{r}{r_0}\right)$, $B_k$, and $D_k$ are relevant for the exterior of said region, with continuity enforced at an appropriate boundary radius. What makes me feel iffy is exactly the fact that $\phi$ for a point charge diverges as $r \rightarrow \infty$, so the notions of continuity and convergence from 3-dimensional multipole expansions feel less solid here.
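As a numerical illustration of the expansion (a sketch; the charge magnitude, spacing, and evaluation point are arbitrary choices, and the single-charge potential $\phi = -2q\ln(r)$ convention is used), the exterior potential of a 2-D dipole is dominated by the $k = 1$ term $\frac{B_1 \cos\theta}{r}$ with $B_1 = 4qa$:

```python
import numpy as np

# 2-D dipole: +q at (a, 0) and -q at (-a, 0); the single-charge potential
# phi = -2 q ln(r) gives phi_exact = 2 q ln(r_minus / r_plus) for the pair.
q, a = 1.0, 0.1

def phi_exact(r, theta):
    x, y = r*np.cos(theta), r*np.sin(theta)
    r_plus = np.hypot(x - a, y)    # distance to +q
    r_minus = np.hypot(x + a, y)   # distance to -q
    return 2*q*np.log(r_minus/r_plus)

def phi_dipole(r, theta):
    return 4*q*a*np.cos(theta)/r   # leading k = 1 multipole term

r, theta = 50.0, 0.7
print(phi_exact(r, theta), phi_dipole(r, theta))  # nearly identical far away
```

The agreement improves as $\left(\frac{a}{r}\right)^2$, as expected for the neglected higher moments.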
What about magnetic fields? In 2 spatial dimensions, the fully antisymmetric tensor has 2 indices, so given that the Lorentz force is a vector and the velocity is a vector, the magnetic field must be a scalar. This can be more easily seen through \[ F^{\mu \nu} = \begin{bmatrix} 0 & E_x & E_y \\ -E_x & 0 & B \\ -E_y & -B & 0 \end{bmatrix} \] which is the electromagnetic field tensor in 2+1 spacetime dimensions (i.e. the $z$ components of the tensor have been removed). Notice that in 2+1 spacetime dimensions, the fully antisymmetric tensor $\epsilon_{\mu \nu \xi}$ has 3 indices, which means that an antisymmetric 2-index tensor is equivalent to a 3-component pseudovector. In this case, I will denote $X_{\mu} = \frac{1}{2} \epsilon_{\mu \nu \xi} F^{\nu \xi}$ as that field pseudovector, and this can be reversed as $F^{\mu \nu} = \epsilon^{\mu \nu \xi} X_{\xi}$, so $X_{\mu} = (B, -E_y, E_x)$. This means $\partial_{\nu} F^{\mu \nu} = \epsilon^{\mu \nu \xi} \partial_{\nu} X_{\xi} = \frac{4\pi}{c} J^{\mu}$ gives the source-dependent Maxwell equations, while $\epsilon^{\mu \nu \xi} \partial_{\mu} F_{\nu \xi} = \partial_{\mu} X^{\mu} = 0$ gives the source-independent Maxwell equations. It is convenient in two dimensions to define the scalar cross product $\mathbf{a} \times \mathbf{b} = \epsilon_{ij} a_i b_j = a_x b_y - a_y b_x$, which can also be written as \[ \mathbf{a} \times \mathbf{b} = \begin{vmatrix} a_x & a_y \\ b_x & b_y \end{vmatrix} \] in analogy to the vector cross product in 3 dimensions (though note again that this is a scalar determinant). It is also convenient to define the cross product operations $\nabla \times \mathbf{a} = \epsilon_{ij} \partial_i a_j = \frac{\partial a_y}{\partial x} - \frac{\partial a_x}{\partial y}$, which is a scalar operation on a vector, and $\nabla \times c = \epsilon_{ij} \partial_j c \mathbf{e}_i = \frac{\partial c}{\partial y} \mathbf{e}_x - \frac{\partial c}{\partial x} \mathbf{e}_y$, which is a vector operation on a scalar.
This means the two Maxwell equations, which have been written in covariant form thus far, give \[ \nabla \cdot \mathbf{E} = 4\pi \rho \\ \nabla \times \mathbf{E} = -\frac{1}{c} \frac{\partial B}{\partial t} \\ \nabla \times B = \frac{4\pi}{c} \mathbf{J} + \frac{1}{c} \frac{\partial \mathbf{E}}{\partial t} \] when expanded in terms of the 2-dimensional vectors $\mathbf{E}$ & $\mathbf{J}$ and scalars $B$ & $\rho$; note that there is no analogue of $\nabla \cdot \mathbf{B} = 0$, because $B$ is now a scalar and has no divergence to take. Finally, \[ \mathbf{F} = q\left(\mathbf{E} + \frac{\mathbf{v}}{c} \times B \right) \] is the new Lorentz force law.
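As a consistency check of these equations (a sketch with sympy; the plane-wave profile is an assumed example), a wave travelling along $x$ with $\mathbf{E}$ along $y$ and an equal-amplitude scalar $B$ satisfies the source-free system when $\omega = ck$:

```python
import sympy as sp

# Source-free 2-D Maxwell equations (rho = 0, J = 0) for a plane wave:
# scalar curl of E is dEy/dx - dEx/dy; vector curl of B is (dB/dy, -dB/dx).
x, y, t, c, k, E0 = sp.symbols('x y t c k E_0', positive=True)
omega = c*k
Ex, Ey = sp.S(0), E0*sp.cos(k*x - omega*t)   # 2-D electric field vector
B = E0*sp.cos(k*x - omega*t)                 # scalar magnetic field

faraday = sp.diff(Ey, x) - sp.diff(Ex, y) + sp.diff(B, t)/c
ampere_x = sp.diff(B, y) - sp.diff(Ex, t)/c
ampere_y = -sp.diff(B, x) - sp.diff(Ey, t)/c
print("all residuals zero:",
      all(sp.simplify(e) == 0 for e in (faraday, ampere_x, ampere_y)))
```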
How do magnetic fields come about anyway? A possible starting point is to consider the magnetic field due to a moving point charge in 2 dimensions. This corresponds to a line charge in 3 dimensions whose velocity is always parallel to the $xy$-plane. This can be solved by essentially calculating the Biot-Savart magnetic field for each charge element in that line charge with the same velocity and adding the field contributions together. However, this seems a lot more painful than the alternative. In 3 dimensions, the equations \[ \mathbf{E}'_{\parallel} = \mathbf{E}_{\parallel} \\ \mathbf{B}'_{\parallel} = \mathbf{B}_{\parallel} \\ \mathbf{E}'_{\perp} = \gamma \cdot \left(\mathbf{E}_{\perp} + \frac{\mathbf{v}}{c} \times \mathbf{B}_{\perp}\right) \\ \mathbf{B}'_{\perp} = \gamma \cdot \left(\mathbf{B}_{\perp} - \frac{\mathbf{v}}{c} \times \mathbf{E}_{\perp}\right) \] describe the transformations of electromagnetic fields to a primed frame moving with velocity $\mathbf{v}$ with respect to an unprimed frame, where $\gamma = \left(1 - \frac{\mathbf{v}^2}{c^2}\right)^{-\frac{1}{2}}$. In 2 dimensions, given the definitions laid out above, these become \[ \mathbf{E}'_{\parallel} = \mathbf{E}_{\parallel} \\ B'_{\parallel} = B_{\parallel} \\ \mathbf{E}'_{\perp} = \gamma \cdot \left(\mathbf{E}_{\perp} + \frac{\mathbf{v}}{c} \times B_{\perp}\right) \\ B'_{\perp} = \gamma \cdot \left(B_{\perp} - \frac{\mathbf{v}}{c} \times \mathbf{E}_{\perp}\right) \] instead. The Biot-Savart law can then be recovered by taking $\mathbf{E} = \frac{2q}{r} \mathbf{e}_r$ and $B = 0$ in the unprimed frame while considering a primed frame moving with velocity $-\mathbf{v}$, slow enough that $\gamma \approx 1$. Dropping the primes, this gives $B = \frac{2q\mathbf{v} \times \mathbf{e}_r}{cr}$ as the magnetic field of a nonrelativistic moving point charge in 2 dimensions.
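The last step can be checked symbolically (a sketch; at low velocity the boost reduces to $B = \frac{\mathbf{v}}{c} \times \mathbf{E}$ with the scalar cross product defined earlier):

```python
import sympy as sp

# Boosting the static 2-D Coulomb field E = 2q/r e_r into a frame moving at
# -v (with gamma ~ 1) gives B = (v/c) x E; compare with 2 q (v x e_r)/(c r).
x, y, q, c, vx, vy = sp.symbols('x y q c v_x v_y', real=True)
r2 = x**2 + y**2
Ex, Ey = 2*q*x/r2, 2*q*y/r2              # (2q/r) * (x/r, y/r)
B = (vx*Ey - vy*Ex)/c                    # scalar cross product (v/c) x E
expected = 2*q*(vx*y - vy*x)/(c*r2)      # 2 q (v x e_r)/(c r)
print(sp.simplify(B - expected))         # -> 0
```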
That doesn't cover all the cases of a steady current though. A steady current would be more like a stream of point charges along a line. Actually, let us look at that example more carefully. A steady stream of point charges along a line, say along $\mathbf{e}_x$, in 2 dimensions would correspond in 3 dimensions to a steady stream of line charges oriented along $\mathbf{e}_z$ moving along $\mathbf{e}_x$, forming an infinite sheet of current $\mathbf{K} = K\mathbf{e}_x$. But what magnetic field comes from an infinite sheet of current? That magnetic field is uniform in magnitude and simply flips direction across the sheet itself. That must be true here too. If there is a steady current $I$ flowing along $\mathbf{e}_x$, then $B = \frac{2\pi I}{c} \cdot \{-1 \, \mathrm{ for } \, y < 0, 1 \, \mathrm{ for } \, y > 0 \}$. This is mathematically supported by the fact that in magnetostatics, $\nabla \times B = \frac{4\pi}{c} \mathbf{J}$, but in 2 dimensions $I = \int \mathbf{J} \cdot d\mathbf{l}$ as $\mathbf{J}$ is now essentially a surface current density so $d\mathbf{l}$ is a line element whose path is taken perpendicular to current flow but whose vector direction is parallel to current flow. (It's essentially the 1-dimensional equivalent of a normal area element.) The line integral over $\mathbf{B}$ in 3 dimensions along a path becomes a difference in $B$ in 2 dimensions between two endpoints such that a path connecting the endpoints passes through a region of current, and because this line of current essentially exhibits a delta function-like behavior in that spatial direction, then indeed $B(y > 0) - B(y < 0) = \frac{4\pi I}{c}$.
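This result can be spot-checked numerically (a sketch; $c = 1$ and the line charge density and speed are arbitrary choices) by superposing the moving-point-charge fields from the previous paragraph along a long but finite line of charges:

```python
import numpy as np

# Field of a steady 2-D current along e_x: superpose B = 2 q (v x e_r)/(c r)
# over a dense row of charges and compare with 2*pi*I/c * sign(y).
c, lam, v = 1.0, 1.0, 0.1          # lam = charge per unit length, I = lam*v
I = lam*v
xs = np.linspace(-2000.0, 2000.0, 400001)   # charge positions along x
dq = lam*(xs[1] - xs[0])                    # charge per element

def B_at(y):
    # scalar cross product v x e_r with v = v*e_x and e_r = (-x', y)/r
    r2 = xs**2 + y**2
    return np.sum(2*dq*v*y/(c*r2))

print(B_at(1.0), 2*np.pi*I/c)      # approaches +2*pi*I/c for y > 0
print(B_at(-1.0), -2*np.pi*I/c)    # approaches -2*pi*I/c for y < 0
```

The small residual comes from truncating the line at a finite length; it shrinks as the row of charges is made longer.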
What about a uniform circular ring of counterclockwise current centered at the origin? In 3 dimensions, this is a classic problem showing why Ampère's law may not be practically useful in the absence of obvious symmetries, thereby demonstrating the utility of the Biot-Savart law. In 2 dimensions, though, we have the opportunity to be a lot more clever. Remember that a point charge in 2 dimensions is a line charge in 3 dimensions. This means that a rotating ring of point charges in 2 dimensions is equivalent to a rotating ring of line charges in 3 dimensions, which is exactly the same as an infinite stack of thin current rings parallel to the $xy$-plane, also known as an infinite solenoid. Hence, the magnetic field of a ring of current at radius $r_0$ in 2 dimensions is $B = \frac{4\pi I}{c} \cdot \{1 \, \mathrm{ for } \, r < r_0, 0 \, \mathrm{otherwise}\}$. Note that there is no other length scale as there might be in 3 dimensions because the units of the magnetic field require one less power of length in 2 dimensions.
To wrap up a loose end, what about potentials? The definition $F^{\mu \nu} = \partial^{\mu} A^{\nu} - \partial^{\nu} A^{\mu}$ should always hold. This implies that $A^{\mu} = (\phi, A_x, A_y)$, that $\mathbf{E} = -\nabla \phi - \frac{1}{c} \frac{\partial \mathbf{A}}{\partial t}$ as before, and of course that $B = \nabla \times \mathbf{A}$ is now a scalar.
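For instance (a sketch; the symmetric-gauge potential below is a standard textbook example, not taken from this post), the scalar curl of $\mathbf{A} = \left(-\frac{B_0 y}{2}, \frac{B_0 x}{2}\right)$ returns a uniform scalar field $B_0$:

```python
import sympy as sp

# In 2 dimensions B = curl A is a scalar: dAy/dx - dAx/dy.
x, y, B0 = sp.symbols('x y B_0')
Ax, Ay = -B0*y/2, B0*x/2     # symmetric-gauge vector potential
B = sp.diff(Ay, x) - sp.diff(Ax, y)
print(B)                     # -> B_0
```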
Finally, what about radiation? In 2 dimensions, $\mathbf{E} \propto \frac{1}{r}$ for a static field. In 3 dimensions, a static $\mathbf{E} \propto \frac{1}{r^2}$ while a radiating $\mathbf{E} \propto \frac{1}{r}$, so radiation gains one power of $r$; this leads to the conclusion that in 2 dimensions, the radiating $\mathbf{E}$ is uniform in space. This is correct for two reasons. The first is that it cannot be proportional to $\ln(r)$, because that depends on a choice of zero, which is OK for the potential, but the field should be observable everywhere, and there is no choice on where to make it zero except for where it is experimentally so. The second is that the Purcell geometric argument works here too, except that the static field is proportional to $\frac{1}{r}$, so the factor of $r$ coming from $t = \frac{r}{c}$ cancels that to give a uniform radiated electric field. This also means that the radiated magnetic field is uniform, the Poynting vector $\mathbf{S} = \frac{c}{4\pi} \mathbf{E} \times B$ is uniform, and the total power $P = \int \mathbf{S} \cdot d\mathbf{l}$, where again the line element is assigned to have a vector direction normal to the orientation of the line itself in analogy to a surface integral in 3 dimensions, is infinite. Why does that happen? Remember that in 3 dimensions, these are infinite lines of charge being accelerated and made to radiate, so of course weird divergences are going to occur. I'm sure there's a way to take care of those infinities in just the right way, but my math background is not strong enough to make that happen.
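The Purcell-style scaling can be made slightly more explicit (a sketch; the kink-field ratio is carried over from the 3-dimensional argument for a charge given a small acceleration $a$, so treat the angular factor as schematic): \[ \frac{E_{\theta}}{E_r} \sim \frac{a t \sin\theta}{c}, \qquad t = \frac{r}{c} \quad \Longrightarrow \quad E_{\theta} \sim \frac{2q}{r} \cdot \frac{a r \sin\theta}{c^2} = \frac{2qa\sin\theta}{c^2} \] which has no remaining dependence on $r$, matching the claim that the radiated field in 2 dimensions is uniform in space.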
1
What does electromagnetism in 1 spatial dimension look like? Well, let us first get some things out of the way. In 1 dimension, there are no vectors; there are only scalars. Furthermore, the fully antisymmetric tensor $\epsilon$ has only 1 index and 1 component, so that has to be identically zero. This means that anything involving cross products, including magnetic fields, will be zero. Hence, "electromagnetism" in 1 dimension is really just electrostatics.

So it actually looks quite boring. A point charge in 1 dimension is an infinite uniform sheet of charge extended along the other 2 dimensions. This has an electric field \[ E = 2\pi q \cdot \{-1 \, \mathrm{ for } \, x < 0, 1 \, \mathrm{ for } \, x > 0\} \] for a point charge $q$ at $x = 0$; note that this is uniform throughout space. Given no magnetism, the Lorentz force law is now just $F = qE$, and for such an electric field, the force on a test charge is $F = 2\pi q_1 q_2 \cdot \mathrm{sign}(x_1 - x_2)$. The dimensions of charge are now $F^{\frac{1}{2}} = m^{\frac{1}{2}} l^{\frac{1}{2}} t^{-1}$. Hence, the force between two charges in 1 dimension is independent of their separation and only dependent on their relative signs. Note that the Gauss law, which in 3 dimensions was a surface integral and in 2 dimensions was a closed line integral, is now in 1 dimension a difference between endpoints that enclose nonzero charge; really, the electric field at each endpoint is added, weighted by the outward direction at that endpoint, giving the proper sign difference. This charge density is essentially a delta function, so integrating over that segment gives the total charge, which is identical to the difference in the electric field value between those two points.
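A trivial numerical illustration (a sketch; the charge value and interval endpoints are arbitrary) of the endpoint-difference form of the Gauss law:

```python
import numpy as np

# For a point charge q at the origin, E = 2*pi*q*sign(x), so the 1-D "flux"
# E(b) - E(-a) equals 4*pi*q for any interval enclosing the charge, else 0.
q = 1.5

def E(x):
    return 2*np.pi*q*np.sign(x)

print(E(3.0) - E(-1.0))   # encloses the charge: 4*pi*q
print(E(5.0) - E(2.0))    # does not enclose it: 0
```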
There is no magnetic field at all in 1 dimension, so steadily moving a point charge with velocity $v$ cannot produce one. Furthermore, no steady currents can be made because there is only one direction to go in. Because all cross products are zero, the only transformation law to consider for an object with velocity $v$ is the now-trivial $E' = E$, because $E$, being in exactly 1 dimension, must by construction lie parallel to $v$, which is also in exactly 1 dimension.
What do the Maxwell equations look like? The field tensor \[ F^{\mu \nu} = \begin{bmatrix} 0 & E \\ -E & 0 \end{bmatrix} \] looks a lot simpler now without other extraneous components. The Maxwell equations should still hold in covariant form. This means the source-dependent equation $\partial_{\nu} F^{\mu \nu} = \frac{4\pi}{c} J^{\mu}$ gives the equations \[ \frac{\partial E}{\partial x} = 4\pi \rho \\ -\frac{\partial E}{\partial t} = 4\pi J \] while there is no independent source-free equation at all: the Bianchi identity would need 3 antisymmetrized spacetime indices, which do not exist in 1+1 dimensions, so it is trivially satisfied. Instead, cross-differentiating the two source-dependent equations reproduces the charge conservation equation $\partial_{\mu} J^{\mu} = 0$.
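That last statement can be verified symbolically (a sketch with sympy; the sources are defined from derivatives of $E$ so that $\frac{\partial E}{\partial x} = 4\pi\rho$ and $\frac{\partial E}{\partial t} = -4\pi J$ hold by construction):

```python
import sympy as sp

# Charge conservation in 1+1 dimensions: for any field E(x, t), defining
# rho and J through the field equations makes the continuity equation
# d(rho)/dt + dJ/dx = 0 hold identically (equality of mixed partials).
x, t = sp.symbols('x t')
E = sp.Function('E')(x, t)
rho = sp.diff(E, x)/(4*sp.pi)
J = -sp.diff(E, t)/(4*sp.pi)
print(sp.simplify(sp.diff(rho, t) + sp.diff(J, x)))  # -> 0
```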
But we know that $F^{\mu \nu} = \partial^{\mu} A^{\nu} - \partial^{\nu} A^{\mu}$ in any case. This means that $A^{\mu} = (\phi, A)$ in 1+1 spacetime dimensions, with $E = -\frac{\partial \phi}{\partial x} - \frac{1}{c} \frac{\partial A}{\partial t}$. Also, because the electric field is uniform in space, $\phi$ diverges linearly with $x$. This is only to be expected, now especially because the "point" charge in 1 dimension is really infinite along 2 directions.
Finally, there cannot be electromagnetic radiation in 1 dimension. There is no magnetic field in 1 dimension; moreover, even if dipole radiation in 3 dimensions is considered, remember that the radiated fields vanish along the axis of the dipole. An accelerated charge is in a sense a time-varying dipole, so accelerated charges cannot radiate in 1 dimension; this is also consistent with the fact that electric fields in 1 dimension never deviate from uniformity.
Units and $n$ Dimensions
One thing that I realized is that Gaussian CGS units are actually only usable by construction in 3 dimensions as intended. The reason is that $4\pi$ is the total solid angle of the surface of a sphere in 3 dimensions. In general, \[ \Omega_{n - 1} = \frac{\pi^{\frac{n}{2}} n}{\Gamma\left(\frac{n}{2} + 1\right)} \] is the total solid angle of a sphere in $n$ dimensions (I'm being a little sloppy with terminology here: I'm considering a typical sphere to be in 3 dimensions, a circle to be in 2, a line to be in 1, and a point to be in 0). It would make more sense if these solid angle factors appeared in the field definitions rather than in the Maxwell equations, so that the Maxwell equations can be written for any number of dimensions. This means that Gaussian CGS units are not ideal for dimensions other than 3. To keep the electric and magnetic fields as having the same units (avoiding the pitfalls of SI), what would be preferable is the Lorentz-Heaviside system. There, \[ \partial_{\nu} F^{\mu \nu} = \frac{1}{c} J^{\mu} \\ \epsilon^{\mu \nu \zeta \xi} \partial_{\nu} F_{\zeta \xi} = 0 \] are the Maxwell equations in 3 dimensions, but now the solid angle factors are absent from the Maxwell equations, allowing for easy generalization to $n$ dimensions. This is nice because in $n$ spatial dimensions, \[ \mathbf{E} = \frac{q}{\Omega_{n - 1} r^{n - 1}} \mathbf{e}_r \] is the static electric field of a point charge at rest, with the magnetic field of a nonrelativistic moving point charge defined with regard to the appropriate fully antisymmetric tensor in $n$ spatial dimensions. Furthermore, the Gauss law \[ \oint \mathbf{E} \cdot \mathbf{e}_{\mathbf{n}} \, d\mathcal{S} = q \] must still hold true in $n$ spatial dimensions, where $\mathbf{e}_{\mathbf{n}}$ is the surface normal vector (its subscript denotes the normal direction, not the number of dimensions).

I found the pages given here (McDavid et al., arXiv; Zwiebach (and I had 8.05 — Quantum Physics II with him too!); Physics StackExchange) to be quite helpful.
I would also recommend that you read those further if you want more details.