I had the idea of writing this post a couple of weeks ago, but I didn't feel like I had enough stuff to write here at that time. Now I do, so here goes. (Also, here's hoping that inputting LaTeX into this post works once more.)

When I took 18.03 — Differential Equations in the fall of 2010, one of the topics covered was linear time-invariant systems. The general system of interest was $Lu(t) = f(t)$, where $L$ is a linear time-invariant operator. The technique, of course, is to find a weight function $w(t)$ satisfying $Lw(t) = \delta(t)$; once that is done, the solution is $u(t) = \int_{-\infty}^{\infty} f(t') w(t - t') dt'$, which is a convolution of the input $f$ with the weight $w$. The professor mentioned that this is essentially akin to inverting the operator $L$, but while I could see the general utility of the method, I never quite understood why it might be considered inversion on any deeper level.
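As a quick numerical sanity check of that convolution formula (my own sketch, not from 18.03: I pick the made-up first-order operator $L = \frac{d}{dt} + a$, whose weight function is $w(t) = e^{-at}$ for $t \geq 0$, and a unit-step input):

```python
import numpy as np

# Sketch with assumed example operator L = d/dt + a, so L w = delta(t)
# is solved by the weight function w(t) = e^{-a t} for t >= 0.
a = 1.0
dt = 1e-3
t = np.arange(0.0, 10.0, dt)

w = np.exp(-a * t)       # weight (impulse-response) function
f = np.ones_like(t)      # input: unit step f(t) = 1 for t >= 0

# u(t) = integral of f(t') w(t - t') dt', approximated as a discrete convolution
u = np.convolve(f, w)[: len(t)] * dt

# Exact solution of u' + a u = 1 with u(0) = 0 is (1 - e^{-a t}) / a
u_exact = (1.0 - np.exp(-a * t)) / a
print(np.max(np.abs(u - u_exact)))  # small discretization error
```

The convolution reproduces the exact solution up to the discretization error of the Riemann sum.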

Last semester, I took 8.07 — Electromagnetism II, and there we discussed Green's functions a little more in the context of electromagnetism & electrodynamics. In a static situation, the Green's function comes up in solving the Poisson equation $\nabla^2 \phi = -\rho$. In this case, $\nabla^2 G(\vec{x}, \vec{x}') = -\delta(\vec{x} - \vec{x}')$ is solved by the familiar potential of a unit point charge $G(\vec{x}, \vec{x}') = \frac{1}{4\pi |\vec{x} - \vec{x}'|}$. I started to see a little more clearly why this worked: if a general charge distribution is some superposition of point charges, then the corresponding potential should be the same superposition of point-charge potentials. However, it still wasn't entirely clear to me how this was "inversion" per se. Follow the jump to see what changed.
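That superposition statement can be made concrete in a few lines (the charges and positions below are arbitrary values I made up for illustration): the potential of a discrete charge distribution is just one copy of the point-charge Green's function per source.

```python
import numpy as np

# phi(x) = sum_i q_i G(x, x_i) with G(x, x') = 1 / (4 pi |x - x'|):
# superposing point-charge Green's functions gives the full potential.
def G(x, xp):
    return 1.0 / (4.0 * np.pi * np.linalg.norm(x - xp))

charges = [1.0, -2.0]                    # q_i (made-up example values)
positions = [np.array([0.0, 0.0, 0.0]),
             np.array([1.0, 0.0, 0.0])]  # x_i

x = np.array([0.5, 0.5, 0.0])            # field point
phi = sum(q * G(x, xp) for q, xp in zip(charges, positions))
print(phi)
```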

A few weeks ago, it hit me why this was considered inversion. The issue is that the notation isn't exactly analogous between discrete and continuous systems, but it can be made so. Let us consider a discrete system \[ Ax = b \] where $b$ is a source vector stemming from the action of $A$, a linear operator, on a field vector $x$. What does this look like in index notation? I will keep the sums explicit to make the analogies clearer, so \[ \sum_{j = 1}^{n} A_{ij} x_j = b_i \] is the rewriting of this inhomogeneous linear equation in index form. How is this solved? In principle, through matrix inversion (leaving aside the issue of vanishing eigenvalues), given by \[ x = A^{-1} b \] in operator form or \[ x_i = \sum_{j = 1}^{n} (A^{-1})_{ij} b_j \] in index form. And although this may sound stupid, it is convenient for the purposes of analogy to say \[ x = Gb \] so \[ GA = AG = 1 \] meaning \[ G = A^{-1} \] defines the "Green's operator" $G$. More explicitly, this means \[ \sum_{k = 1}^{n} A_{ik} G_{kj} = \delta_{ij} \] in index notation.
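In the discrete case this is all of three lines of linear algebra. A minimal sketch (with an arbitrary well-conditioned $A$ and source $b$ of my own choosing):

```python
import numpy as np

# For invertible A, the "Green's operator" is G = A^{-1}, satisfying
# A G = G A = 1, and the solution of A x = b is x = G b.
rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n)) + n * np.eye(n)  # example matrix, safely invertible
b = rng.normal(size=n)

G = np.linalg.inv(A)
print(np.allclose(A @ G, np.eye(n)), np.allclose(G @ A, np.eye(n)))

x = G @ b
print(np.allclose(A @ x, b))  # x solves the system
```

(In practice one calls `np.linalg.solve(A, b)` rather than forming the inverse, but forming $G$ explicitly is the point of the analogy here.)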

How does this relate to a continuous linear equation like the Poisson equation? Defining our operator \[ L = -\nabla^2 \] then \[ L\phi = \rho \] is the Poisson equation with an appropriate sign choice. More explicitly, this looks like $-\nabla^2 \phi(\vec{x}) = \rho(\vec{x})$. But wait, the operator and functions all depend on the same continuum label $\vec{x}$, and there's no summation! That can be fixed by redefining \[ L(\vec{x}, \vec{x}') = -\delta(\vec{x} - \vec{x}') \nabla'^2 \] where $\nabla'$ is the gradient with respect to the primed coordinate, so that \[ \int L(\vec{x}, \vec{x}') \phi(\vec{x}') d^3 x' = \rho(\vec{x}) \] is the new Poisson equation. This now looks exactly analogous to the matrix equation \[ \sum_{j = 1}^{n} A_{ij} x_j = b_i \] where now the discrete indices $i$ and $j$ are replaced by continuum analogues $\vec{x}$ & $\vec{x}'$.

How do we invert this then? Well, we have to find a new function of two spatial variables that, when acted upon by the linear operator, gives the identity. In the continuum case, this means \[ \int L(\vec{x}, \vec{x}'') G(\vec{x}'', \vec{x}') d^3 x'' = \delta(\vec{x} - \vec{x}') \] defines the Green's function $G(\vec{x}, \vec{x}')$ of the operator $L(\vec{x}, \vec{x}')$. The delta function $\delta(\vec{x} - \vec{x}')$ is the continuum analogue of $\delta_{ij}$ and can be considered like the identity operator in the continuum. Once again, summation over a dummy index $k$ has been replaced by integration over a dummy spatial coordinate $\vec{x}''$, so it should be clearer now exactly why $G$ is essentially the inverse of $L$. To complete the analogy, the solution \[ \phi(\vec{x}) = \int G(\vec{x}, \vec{x}') \rho(\vec{x}') d^3 x' \] should look analogous to \[ x_i = \sum_{j = 1}^{n} G_{ij} b_j \] in the discrete case where $G = A^{-1}$.
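The correspondence becomes literal once the continuum is discretized. Here is a finite-difference sketch of my own (a 1D analogue, $L = -d^2/dx^2$ on $(0, 1)$ with Dirichlet boundaries, rather than the 3D Poisson equation): the matrix inverse of the discretized $L$ plays the role of the Green's function, and its $j$-th column is the response to a unit point source at grid point $j$.

```python
import numpy as np

# Discretize L = -d^2/dx^2 on (0, 1) with Dirichlet boundary conditions
# as the standard tridiagonal second-difference matrix.
n = 99
dx = 1.0 / (n + 1)
L = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / dx**2

G = np.linalg.inv(L)  # discrete Green's "function" of two indices

# A discrete delta function at node j carries weight 1/dx, so the
# response to a unit point source there is column j of G, scaled by 1/dx.
j = 40
rho = np.zeros(n)
rho[j] = 1.0 / dx
phi = np.linalg.solve(L, rho)
print(np.allclose(phi, G[:, j] / dx))  # True
```

Plotting the columns of $G$ would show the familiar tent-shaped 1D Green's functions, one per source location.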

So that takes care of the intuition for why the Green's function is like the inverse of a linear operator. What does this have to do with correlations? Well, let's take a look at a few examples. The first is the Poisson equation itself, as applied to electrostatics. The Green's function for the Poisson equation is the electrostatic potential of a point charge. Multiplying that by a test charge gives the potential energy of the system, so effectively, the Green's function also describes how correlated charges are to one another (in terms of energy); this almost sounds like action at a distance once more to me, but then I remember that charges couple to electrostatic fields generated by each other, and those interactions between charges and fields are local.

The second looks completely different, because it comes from another class I'm taking this semester, and that is 14.15 — Networks. This example is discrete rather than continuous, but it still lends itself nicely to a correlation interpretation. If $A$ is the adjacency matrix defining links between nodes in a directed graph (so $A_{ij}$ denotes the presence or absence of a link from $j$ to $i$), then the Katz centrality vector $\bar{x}$ is given by the equilibrium equation $\bar{x} = \gamma A\bar{x} + \beta$, where $\bar{x}$ should have all nonnegative components in the node basis. Here, $\beta$ is a vector that is a small source of nonzero centrality for each node (so $\beta_i$ is the initial small centrality of node $i$), and the scalar $\gamma$ is chosen such that the largest eigenvalue of $\gamma A$ is strictly less than 1. This can be rewritten as $(1 - \gamma A)\bar{x} = \beta$, which is inverted as $\bar{x} = (1 - \gamma A)^{-1} \beta$, or in index form $\bar{x}_i = \sum_{j = 1}^{n} ((1 - \gamma A)^{-1})_{ij} \beta_j$. Since $\beta$ is the initial source centrality, $1 - \gamma A$ is the operator that causes changes in centrality, and $\bar{x}_i$ is the equilibrium centrality of node $i$, this means $(1 - \gamma A)^{-1}$ is the Green's operator for this equation. Moreover, the $(i, j)$ element of that Green's operator correlates the initial centrality of node $j$ to the final centrality of node $i$, and a sum is taken over all $j$ to account for all the contributions. In the context of graph theory, this makes sense too: Katz centrality, being an extension of eigenvector centrality, gives the importance of a node not just based on its adjacencies but particularly based on its adjacencies to other important nodes, so correlations between nodes play the biggest role in determining equilibrium Katz centrality in a directed graph. The columns of $(1 - \gamma A)^{-1}$, which satisfies $(1 - \gamma A)(1 - \gamma A)^{-1} = 1$, give the response vectors to the unit input vectors (i.e. the input vectors where the input centrality is 1 for a particular node and 0 for all the others), showing how centrality spreads and, in equilibrium, the correlations among all the nodes for any one particular node.
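A small worked example (the three-node graph and the uniform $\beta$ below are arbitrary choices of mine, just to make the equations concrete):

```python
import numpy as np

# Katz centrality from the equilibrium equation x = gamma A x + beta,
# solved by inverting (1 - gamma A). A_ij = 1 iff there is a link j -> i.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)

lam_max = max(abs(np.linalg.eigvals(A)))
gamma = 0.5 / lam_max          # makes the largest eigenvalue of gamma*A equal 0.5 < 1
beta = np.ones(3)              # uniform small source centrality

x = np.linalg.solve(np.eye(3) - gamma * A, beta)
print(x)                                      # all components nonnegative
print(np.allclose(x, gamma * A @ x + beta))   # equilibrium equation holds
```

Because $\gamma A$ is nonnegative with spectral radius below 1, the Neumann series $(1 - \gamma A)^{-1} = \sum_k (\gamma A)^k$ converges and is entrywise nonnegative, which is why $\bar{x}$ comes out with all nonnegative components.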

The third is a bit more physics-related again. I won't go into too much detail because I'm still wrapping my head around it as it is being taught in 8.334 — Statistical Mechanics II, but basically the idea is that if a Hamiltonian is the integral over space of terms quadratic in a statistical field and its first derivatives, then the Green's function $K^{-1}$ of the equation $(-\nabla^2 + \xi^{-2})K^{-1}(\vec{x}) = \delta(\vec{x})$ (where the second variable $\vec{x}'$ drops out because of translational symmetry) is also the correlation between the field at two points, so $K^{-1}(\vec{x}) = \langle \phi(\vec{x}) \phi(0) \rangle$. Here $K^{-1}$ is the Green's function of the screened Poisson equation, whose solution is the solution to the Poisson equation multiplied by the damped exponential $e^{-\frac{|\vec{x}|}{\xi}}$, where $\xi$ is the correlation length of this statistical field. Interestingly, I think this also looks strikingly like the propagator/correlation function between two points in a massive scalar quantum field $\phi$, in that $G(x, x') = \langle 0|\phi(x)\phi(x')|0 \rangle$ or something like that (with time ordering too, I think) is also the Green's function for the Klein-Gordon equation for $\phi$. I won't say more about that because I haven't really learned quantum field theory beyond what I've posted here, but I will say that from the snippets I've been seeing in 8.334 — Statistical Mechanics II, the equivalences and analogies between quantum field theory and statistical field theory are plentiful, deep, and profound.
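The screened-Poisson structure can also be checked numerically in one dimension, where (this is my own finite-difference sketch, not from 8.334) the Green's function of $(-d^2/dx^2 + \xi^{-2})$ is the damped exponential $G(x) = \frac{\xi}{2} e^{-|x|/\xi}$:

```python
import numpy as np

# Verify that G(x) = (xi/2) exp(-|x|/xi) solves
# (-d^2/dx^2 + xi^{-2}) G(x) = delta(x) in 1D, using a second-difference stencil.
xi = 1.0
dx = 1e-3
x = np.arange(-10.0, 10.0 + dx, dx)
G = (xi / 2.0) * np.exp(-np.abs(x) / xi)

# Apply the operator; LG[k] corresponds to the interior point x[k+1].
LG = -(G[:-2] - 2.0 * G[1:-1] + G[2:]) / dx**2 + G[1:-1] / xi**2

mask = np.abs(x[1:-1]) > 2 * dx  # stay away from the kink at the origin
print(np.max(np.abs(LG[mask])))  # ~ 0 away from the source
print(np.sum(LG) * dx)           # ~ 1: the delta function's total weight
```

Away from the origin the operator annihilates $G$, while the kink at $x = 0$ supplies the unit-weight delta function, exactly as in the 3D electrostatic case.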

Anyway, I hope that was useful for anyone seeking a bit more intuition on the need for and meaning of the Green's function. I will end by saying something that I said to my 8.07 — Electromagnetism II professor: on a historical note, it seems that George Green was the Green's function of a point source of education (look it up).

**UPDATE**: I meant to justify the change to $L(\vec{x}, \vec{x}') = -\delta(\vec{x} - \vec{x}') \nabla'^2$ by analogizing to quantum mechanics. What do we really mean when we say that the momentum operator in position space is $p = -i\hbar \partial_x$? What we mean is that $\langle x|p|\psi \rangle = -i\hbar \partial_x \langle x|\psi \rangle$. A more consistent definition of the momentum operator in position space would be $\langle x|p|x' \rangle = -i\hbar \delta(x - x') \partial_{x'}$, which is now a function of $x$ and $x'$, just as a matrix has 2 indices. This then means $\langle x|p|\psi \rangle = \int_{-\infty}^{\infty} \langle x|p|x' \rangle \langle x'|\psi \rangle dx' = \int_{-\infty}^{\infty} -i\hbar \delta(x - x') \partial_{x'} \langle x'|\psi \rangle dx' = -i\hbar \partial_x \langle x|\psi \rangle$ as desired. Putting the delta function into the definition of the momentum operator in position space, and integrating against it, makes the momentum operator look more like a continuous-space matrix.
