2022-12-02

Fundamental Theorem of Calculus for Functionals

I happened to think more about the idea of recovering a functional by somehow integrating its functional derivative. In the process, I realized that some of the ideas involved make this post a natural follow-up to a recent post [LINK] about mapping scalars to functions; this will become clear later in this post.

For a single variable, a function \( f(x) \) has an antiderivative \( F(x) \) such that \( f(x) = \frac{\mathrm{d}F}{\mathrm{d}x} \). One statement of the fundamental theorem of calculus is that, consequently, \[ \int_{a}^{b} f(x)~\mathrm{d}x = F(b) - F(a) \] for any such pair of functions. In turn, this means \( F(x) \) can be extracted directly from \( f(x) \) through \[ F(x) = \int_{x_{0}}^{x} f(x')~\mathrm{d}x' \] in which \( x_{0} \) is chosen such that \( F(x_{0}) = 0 \).
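This recipe is easy to check numerically; the following is a minimal sketch, using a toy example of my own choosing (\( f(x) = \cos(x) \), whose antiderivative \( F(x) = \sin(x) \) satisfies \( F(x_{0}) = 0 \) for \( x_{0} = 0 \)), showing that quadrature of \( f \) reproduces \( F \).

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule, written out explicitly to avoid NumPy version differences."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Toy example: f(x) = cos(x) has antiderivative F(x) = sin(x), and x0 = 0
# satisfies F(x0) = 0.
f, F = np.cos, np.sin
x0, x = 0.0, 2.0

grid = np.linspace(x0, x, 10001)
print(trapz(f(grid), grid), F(x))   # both ~ sin(2) = 0.9093
```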

For multiple variables, a conservative vector field \( \mathbf{f}(\mathbf{x}) \), in which \( \mathbf{f} \) must have the same number of components as \( \mathbf{x} \), can be said to have a scalar antiderivative \( F(\mathbf{x}) \) in the sense that \( \mathbf{f} \) is the gradient of \( F \), meaning \( \mathbf{f}(\mathbf{x}) = \nabla F(\mathbf{x}) \); more precisely, \( f_{i}(x_{1}, x_{2}, \ldots, x_{N}) = \frac{\partial F}{\partial x_{i}} \) for all \( i \in \{1, 2, \ldots, N \} \). (Note that if \( \mathbf{f} \) is not conservative, then by definition it cannot be written as the gradient of a scalar function! This is an important point to which I will return later in this post.) In such a case, a line integral (which, as I will emphasize again later in this post, is distinct from a functional path integral) of \( \mathbf{f} \) from the point \( \mathbf{a} \) to the point \( \mathbf{b} \) can be computed as \( \int_{\mathbf{a}}^{\mathbf{b}} \mathbf{f}(\mathbf{x}) \cdot \mathrm{d}\mathbf{x} = F(\mathbf{b}) - F(\mathbf{a}) \); more precisely, this equality holds along any contour, so if a contour is defined as \( \mathbf{x}(s) \) for \( s \in [0, 1] \), then no matter what \( \mathbf{x}(s) \) actually is, as long as \( \mathbf{x}(0) = \mathbf{a} \) and \( \mathbf{x}(1) = \mathbf{b} \) hold, \[ \sum_{i = 1}^{N} \int_{0}^{1} f_{i}(x_{1}(s), x_{2}(s), \ldots, x_{N}(s)) \frac{\mathrm{d}x_{i}}{\mathrm{d}s}~\mathrm{d}s = F(\mathbf{b}) - F(\mathbf{a}) \] must also hold. This therefore suggests that \( F(\mathbf{x}) \) can be extracted from \( \mathbf{f}(\mathbf{x}) \) by relabeling \( \mathbf{x}(s) \to \mathbf{x}'(s) \) and \( \mathbf{b} \to \mathbf{x} \) and choosing \( \mathbf{a} \) such that \( F(\mathbf{a}) = 0 \). Once again, if \( \mathbf{f}(\mathbf{x}) \) is not conservative, then it cannot be written as the gradient of a scalar field \( F \), and the integral \( \sum_{i = 1}^{N} \int_{0}^{1} f_{i}(x_{1}(s), x_{2}(s), \ldots, x_{N}(s)) \frac{\mathrm{d}x_{i}}{\mathrm{d}s}~\mathrm{d}s \) will depend on the specific choice of \( \mathbf{x}(s) \), not just the endpoints \( \mathbf{a} \) and \( \mathbf{b} \).
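Path independence for conservative fields is also easy to verify numerically. The following sketch uses a made-up conservative field of my own (\( F(x_{1}, x_{2}) = x_{1}^{2} x_{2} \), so \( \mathbf{f} = (2 x_{1} x_{2}, x_{1}^{2}) \)) and compares the line integral along a straight contour and a wiggly contour with the same endpoints; both should match \( F(\mathbf{b}) - F(\mathbf{a}) \).

```python
import numpy as np

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Made-up conservative field: F(x1, x2) = x1**2 * x2, so f = grad F = (2 x1 x2, x1**2).
F = lambda x: x[0]**2 * x[1]
f = lambda x: np.array([2.0 * x[0] * x[1], x[0]**2])

a, b = np.array([0.0, 0.0]), np.array([1.0, 2.0])   # note that F(a) = 0

def line_integral(contour, n=4001):
    s = np.linspace(0.0, 1.0, n)
    xs = np.array([contour(si) for si in s])        # contour points, shape (n, 2)
    dxds = np.gradient(xs, s, axis=0)               # dx_i/ds by finite differences
    integrand = np.sum(np.array([f(x) for x in xs]) * dxds, axis=1)
    return trapz(integrand, s)

straight = lambda s: a + s * (b - a)
wiggly = lambda s: a + s * (b - a) + np.array([np.sin(np.pi * s), 0.0]) * s * (1.0 - s)

# both contours give the same answer, equal to F(b) - F(a) = 2
print(line_integral(straight), line_integral(wiggly), F(b) - F(a))
```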

For continuous functions, the generalization of a vector \( \mathbf{x} \), or more precisely of the components \( x_{i} \) for \( i \in \{1, 2, \ldots, N\} \), is a function \( x(t) \), where \( t \) is a continuous dummy index or parameter analogous to the discrete index \( i \). This means the generalization of a scalar field \( F(\mathbf{x}) \) is the scalar functional \( F[x] \). What is the generalization of a vector field \( \mathbf{f}(\mathbf{x}) \)? To be precise, a vector field is a collection of functions \( f_{i}(x_{1}, x_{2}, \ldots, x_{N}) \) for all \( i \in \{1, 2, \ldots, N \} \). This suggests that its generalization should be a function of \( t \) and must somehow depend on \( x(t) \) as well. It is therefore tempting to write this as \( f(t, x(t)) \) for all \( t \). However, although this is a valid subset of the generalization, it is not the whole generalization: vector fields of the form \( f_{i}(x_{i}) \) are collections of single-variable functions that do not fully capture all vector fields of the form \( f_{i}(x_{1}, x_{2}, \ldots, x_{N}) \) for all \( i \in \{1, 2, \ldots, N \} \). As a specific example, for \( N = 2 \), the vector field with components \( f_{1}(x_{1}, x_{2}) = (x_{1} - x_{2})^{2} \) and \( f_{2}(x_{1}, x_{2}) = (x_{1} + x_{2})^{3} \) cannot be written as just \( f_{1}(x_{1}) \) and \( f_{2}(x_{2}) \), as \( f_{1} \) depends on \( x_{2} \) and \( f_{2} \) depends on \( x_{1} \) as well. Similarly, in the generalization, one could imagine a function of the form \( f = \frac{x(t)}{x(t - t_{0})} \mathrm{exp}(-(t - t_{0})^{2}) \); in this case, it is not correct to write it as \( f(t, x(t)) \) because the dependence of \( f \) on \( x \) at a given dummy index value \( t \) comes through not only \( x(t) \) but also \( x(t - t_{0}) \) for some fixed parameter \( t_{0} \). Additionally, the function may depend not only on \( x \) per se but also on derivatives \( \frac{\mathrm{d}^{n} x}{\mathrm{d}t^{n}} \); the case of the first derivative \( \frac{\mathrm{d}x}{\mathrm{d}t} = \lim_{t_{0} \to 0} \frac{x(t) - x(t - t_{0})}{t_{0}} \) illustrates the connection to the aforementioned example. Therefore, the most generic way to write such a function is effectively as a functional \( f[x; t] \) with a free dummy index \( t \). The example \( f = \frac{x(t)}{x(t - t_{0})} \mathrm{exp}(-(t - t_{0})^{2}) \) can be formalized as \( f[x; t] = \int_{-\infty}^{\infty} \frac{x(t')}{x(t' - t_{0})} \mathrm{exp}(-(t' - t_{0})^{2}) \delta(t - t')~\mathrm{d}t' \), where the dummy index \( t' \) is the integration variable while the dummy index \( t \) is free. (For \( N = 3 \), the condition for a vector field to be conservative is often written as \( \nabla \times \mathbf{f}(\mathbf{x}) = 0 \). I have not used that condition in this post because the curl operator does not easily generalize to \( N \neq 3 \).)

If a functional \( f[x; t] \) is conservative, then there exists a functional \( F[x] \) (with no free dummy index) such that \( f \) is the functional derivative: \( f[x; t] = \frac{\delta F}{\delta x(t)} \). Comparing the notation between scalar fields and functionals, \( \sum_{i} A_{i} \to \int A(t)~\mathrm{d}t \) and \( \mathrm{d}x_{i} \to \delta x(t) \), in which \( \delta x(t) \) is a small variation in a function \( x \) specifically at the index value \( t \) and nowhere else. This suggests the following generalization of the fundamental theorem of calculus to functionals. If \( a(t) \) and \( b(t) \) are fixed functions, then \( \int_{-\infty}^{\infty} \int_{a}^{b} f[x; t]~\delta x(t)~\mathrm{d}t = F[b] - F[a] \), where the inner integral runs along a path in function space from \( a \) to \( b \). More precisely, a path from the function \( a(t) \) to the function \( b(t) \) at every index value \( t \) can be parameterized by \( s \in [0, 1] \) through the map \( s \to x(t, s) \), which is a function of \( t \) for each \( s \), such that \( x(t, 0) = a(t) \) and \( x(t, 1) = b(t) \); this is why I linked this post to the most recent post on this blog. With this in mind, the fundamental theorem of calculus becomes \[ \int_{-\infty}^{\infty} \int_{0}^{1} f[x(s); t] \frac{\partial x}{\partial s}~\mathrm{d}s~\mathrm{d}t = F[b] - F[a] \] where, in the integrand, the argument \( x \) of \( f \) has the parameter \( s \) explicit but the dummy index \( t \) implicit; the point is that this equality holds regardless of the specific parameterization \( x(t, s) \) as long as \( x \) at the endpoints of \( s \) satisfies \( x(t, 0) = a(t) \) and \( x(t, 1) = b(t) \). This also means that \( F[x] \) can be recovered if \( b(t) = x(t) \) and \( a(t) \) is chosen such that \( F[a] = 0 \), in which case \[ F[x] = \int_{-\infty}^{\infty} \int_{0}^{1} f[x'(s); t]~\frac{\partial x'}{\partial s}~\mathrm{d}s~\mathrm{d}t \] (where \( x(t, s) \) has been renamed to \( x'(t, s) \) to avoid confusion with \( x(t) \)). If \( f[x; t] \) is not conservative, then there is no functional \( F[x] \) whose functional derivative with respect to \( x(t) \) would yield \( f[x; t] \); in that case, with \( x(t, 0) = a(t) \) and \( x(t, 1) = b(t) \), the integral \( \int_{-\infty}^{\infty} \int_{0}^{1} f[x(s); t] \frac{\partial x}{\partial s}~\mathrm{d}s~\mathrm{d}t \) does depend on the specific choice of parameterization \( x(t, s) \) with respect to \( s \) and not just on the functions \( a(t) \) and \( b(t) \) at the endpoints of \( s \).
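To make this concrete, here is a minimal discretized sketch with a toy functional of my own choosing, \( F[x] = \int x(t)^{2}~\mathrm{d}t \), whose functional derivative is \( f[x; t] = 2x(t) \); it recovers \( F[x] \) from \( f[x; t] \) using the straight-line parameterization \( x'(t, s) = sx(t) \).

```python
import numpy as np

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Toy functional: F[x] = int x(t)^2 dt, so f[x; t] = 2 x(t).
t = np.linspace(-10.0, 10.0, 2001)
f = lambda x: 2.0 * x                       # functional derivative evaluated on the grid

def recover_F(x, n_s=201):
    # straight-line path x'(t, s) = s x(t) from a(t) = 0 (where F = 0) to x(t)
    s = np.linspace(0.0, 1.0, n_s)
    # integrate f[x'(s); t] * (dx'/ds) over t for each s, then over s
    inner = np.array([trapz(f(si * x) * x, t) for si in s])
    return trapz(inner, s)

x = np.exp(-t**2)                           # a sample function decaying as |t| -> infinity
print(recover_F(x))                         # ~ sqrt(pi/2) = 1.2533
print(trapz(x**2, t))                       # direct evaluation of F[x] for comparison
```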

As an example, consider from a previous post [LINK] the nonrelativistic Newtonian action \[ S[x] = \int_{-\infty}^{\infty} \left(\frac{m}{2} \left(\frac{\mathrm{d}x}{\mathrm{d}t}\right)^{2} + F_{0} x(t) \right)~\mathrm{d}t \] for a particle under the influence of a uniform force \( F_{0} \) (which may vanish). The first functional derivative is \[ f[x; t] = \frac{\delta S}{\delta x(t)} = F_{0} - m\frac{\mathrm{d}^{2} x}{\mathrm{d}t^{2}} \] and its vanishing would yield the usual equation of motion. The action itself vanishes for \( x(t) = 0 \), which will be helpful when using the fundamental theorem of calculus to recover the action from the equation of motion. In particular, one can parameterize \( x'(t, s) = sx(t) \) such that \( x'(t, 0) = 0 \) and \( x'(t, 1) = x(t) \). This gives the integral \( \int_{0}^{1} \left(F_{0} - ms\frac{\mathrm{d}^{2} x}{\mathrm{d}t^{2}}\right)x(t)~\mathrm{d}s = F_{0} x(t) - \frac{m}{2} x(t) \frac{\mathrm{d}^{2} x}{\mathrm{d}t^{2}} \). This is then integrated over all \( t \), so the first term is identical to the corresponding term in the definition of \( S[x] \), and the second term becomes the same as the corresponding term in the definition of \( S[x] \) after integrating over \( t \) by parts and setting the boundary conditions that \( x(t) \to 0 \) for \( |t| \to \infty \). (Other boundary conditions may require more care.) In any case, the parameterization \( x'(t, s) = sx(t) \) is not the only choice that could fulfill the boundary conditions; the salient point is that any parameterization fulfilling the boundary conditions would yield the correct action \( S[x] \).
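The same kind of numerical check works for this action. Below is a sketch with assumed toy values \( m = 1 \) and \( F_{0} = 0.5 \) and a sample trajectory \( x(t) = \mathrm{exp}(-t^{2}) \) satisfying the decaying boundary conditions; the action evaluated directly from its definition agrees with the action recovered from the functional derivative.

```python
import numpy as np

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

m, F0 = 1.0, 0.5                            # assumed toy values
t = np.linspace(-10.0, 10.0, 4001)
x = np.exp(-t**2)                           # satisfies x -> 0 as |t| -> infinity

xdot = np.gradient(x, t)                    # dx/dt
xddot = np.gradient(xdot, t)                # d^2 x/dt^2

# the action evaluated directly from its definition
S_direct = trapz(0.5 * m * xdot**2 + F0 * x, t)

# the action recovered from f[x; t] = F0 - m x'' with x'(t, s) = s x(t):
# the s integral gives F0 x - (m/2) x x'', which is then integrated over t
S_recovered = trapz(F0 * x - 0.5 * m * x * xddot, t)

print(S_direct, S_recovered)                # agree up to discretization error
```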

I chose that example because I wondered whether any special formulas would be needed if \( f[x; t] \) depends explicitly on first or second derivatives of \( x(t) \), as might be the case in nonrelativistic Newtonian mechanics. That example shows that none are needed: even if the Lagrangian explicitly depends on the velocity \( \frac{\mathrm{d}x}{\mathrm{d}t} \), the action \( S \) depends as a functional only on \( x(t) \), so proper application of functional differentiation and ordinary integration by parts will ensure proper accounting of each piece.

This post has been about the fundamental theorem of calculus saying that the 1-dimensional integral of a conservative function in \( N \) dimensions along a contour is equal to the difference of its scalar antiderivative between the two endpoints, and that this generalizes easily to infinite dimensions, with continuous functions in place of finite-dimensional vectors. There is another fundamental theorem of calculus saying that the \( N \)-dimensional integral over a finite volume of the scalar divergence of an \( N \)-dimensional vector function, if that volume has a closed orientable surface, is equal to the \( (N - 1) \)-dimensional integral over that whole surface of the inner product of that function with the normal vector (of unit 2-norm) at every point on the surface, meaning \[ \int_{V} \sum_{i = 1}^{N} \frac{\partial f_{i}}{\partial x_{i}}~\mathrm{d}V = \oint_{\partial V} \sum_{i = 1}^{N} f_{i}(x_{1}, x_{2}, \ldots, x_{N}) n_{i}(x_{1}, x_{2}, \ldots, x_{N})~\mathrm{d}S \] where \( \sum_{i = 1}^{N} |n_{i}(x_{1}, x_{2}, \ldots, x_{N})|^{2} = 1 \) for every \( \mathbf{x} \). From a purely formal perspective, this could generalize to something like \( \int_{V} \int_{-\infty}^{\infty} \frac{\delta f[x; t]}{\delta x(t)}~\mathrm{d}t~\mathcal{D}x = \oint_{\partial V} \int_{-\infty}^{\infty} f[x; t]n[x; t]~\mathrm{d}t~\mathcal{D}x \), having generalized \( \frac{\partial}{\partial x_{i}} \to \frac{\delta}{\delta x(t)} \), \( \prod_{i} \mathrm{d}x_{i} \to \mathcal{D}x \), and \( n_{i}(\mathbf{x}) \to n[x; t] \), where \( n[x; t] \) is normalized such that \( \int_{-\infty}^{\infty} |n[x; t]|^{2}~\mathrm{d}t = 1 \) for all \( x(t) \) on the surface. However, this formalism may be hard to develop further because the space has infinite dimensions. Even when working in a countable basis, it might not be possible to characterize an orientable surface enclosing a volume in an infinite-dimensional space; the surface is itself infinite-dimensional. While the choice of basis is arbitrary, things become even less intuitive when choosing to work in an uncountable basis.
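As a final aside, the finite-\( N \) divergence theorem above is itself easy to check numerically. The following sketch (for \( N = 2 \) on the unit disk, with a made-up field \( \mathbf{f} = (x_{1}^{3}, x_{1} x_{2}) \) of my own choosing) confirms that the volume integral of the divergence matches the boundary integral of \( \mathbf{f} \cdot \mathbf{n} \).

```python
import numpy as np

def trapz(y, x):
    # trapezoidal rule along the last axis, for 1D or 2D arrays
    return np.sum((y[..., 1:] + y[..., :-1]) * np.diff(x) / 2.0, axis=-1)

# Made-up field f = (x1**3, x1 * x2), with div f = 3 x1**2 + x1.
theta = np.linspace(0.0, 2.0 * np.pi, 4001)

# boundary integral of f . n over the unit circle, where n = (cos t, sin t)
n1, n2 = np.cos(theta), np.sin(theta)
f1, f2 = n1**3, n1 * n2                     # f evaluated on the circle
surface = trapz(f1 * n1 + f2 * n2, theta)

# volume integral of div f over the disk, in polar coordinates (dV = r dr dt)
r = np.linspace(0.0, 1.0, 1001)
R, T = np.meshgrid(r, theta, indexing='ij')
X1 = R * np.cos(T)
volume = trapz(trapz((3.0 * X1**2 + X1) * R, theta), r)

print(surface, volume)                      # both ~ 3*pi/4 = 2.356
```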

2022-11-01

Mapping Scalars to Functions

In just over a year, I've written three posts for this blog about functionals, specifically about their application to probability theory [LINK], finding their stationary points [LINK], and the use of their stationary points in classical mechanics [LINK]. As a reminder, a functional is an object that maps a space of functions to a space of numbers. This got me thinking about what the reverse, namely an object that maps a space of numbers to a space of functions, looks like. To be clear, this is not the same as an ordinary function which, as an element in a space of functions, maps a space of numbers to a space of numbers.

As I thought about it more, I realized that this is a bit easier to understand and therefore more commonly encountered than a functional. An extremely glib way to describe such an object is a function of multiple variables. However, it may be more enlightening to describe this in further detail to avoid the potentially misleading impressions that may arise from that glib description.

In the discrete case, the matrix elements \( A_{ij} \) can be described as a map from integers to vectors, in which an integer \( j \) is associated with a vector whose elements indexed by an integer \( i \) are \( A_{ij} \). This is the essential idea behind seeing the columns of the matrix with elements \( A_{ij} \) as a collection of vectors. Formally, this maps \( j \to (i \to A_{ij}) \), where the inner map \( i \to A_{ij} \) defines the vector associated with the integer \( j \).

Similarly, in the continuous case, the function elements \( f(x, y) \) can be described as a map from numbers to functions, in which a number \( y \) is associated with a function whose elements indexed by a number \( x \) are \( f(x, y) \). Formally, this maps \( y \to (x \to f(x, y)) \), where the inner map \( x \to f(x, y) \) defines the function associated with the number \( y \). These ideas are foundational to the development of more abstract notions of functions, like lambda calculus.
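In programming terms, this is just currying. Here is a small sketch (with a made-up function \( f(x, y) = x^{2} + 3y \) of my own choosing) of mapping a number to a function:

```python
# Made-up example: f(x, y) = x**2 + 3 y.
def f(x, y):
    return x**2 + 3.0 * y

def to_function(y):
    """Map the number y to the single-variable function x -> f(x, y)."""
    return lambda x: f(x, y)

g = to_function(2.0)    # g is now an ordinary function of x alone
print(g(1.0))           # f(1, 2) = 7.0
```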

2022-10-02

Technological Restrictions on E-Books and Culture Wars on Books in 2022

Despite the long title, this post will be fairly short. This blog used to publish much more often (I had a lot more free time in high school & college) and focus much more on issues related to free software, free culture, and the like. Yet even after looking through posts on this blog from its early years (which, in keeping with the stereotype of an adult looking through essays written in high school, made me cringe at the quality of the writing even when I agreed with some of the basic opinions), I couldn't find any posts specifically about the effects of so-called digital rights management (DRM) on E-books.

In any case, I was motivated to write this because I recently listened to an episode [LINK] of a podcast associated with The Daily Show in which the guests discussed recent instances of conservative politicians in the US preventing public schools & libraries from teaching or carrying books that offend those politicians' cultural sensibilities. I distinctly remember reading in high school & college about warnings of the consequences of putting DRM on E-books, including making it easier to ban such books. At that time, I and many others felt it would be ridiculous for politically motivated book bans to take effect in the US, especially given respect for the First Amendment to the US Constitution. Leaving aside whether such bans from public schools & libraries technically violate that amendment if they don't go beyond those domains, it is disheartening to see a direct example of censorship so closely connected to technological restrictions that are politically motivated (not due to fundamental technical limitations). It will be interesting to see whether authors of banned books encourage or tolerate people scanning & sharing unauthorized PDF files of those books for free; this wouldn't be unprecedented, given that the huge markup on textbooks in the US compared to other countries has led many textbook authors to encourage students to buy cheaper editions from other countries.

2022-09-17

Book Review: "From the War on Poverty to the War on Crime" by Elizabeth Hinton

I've recently read the book From the War on Poverty to the War on Crime by Elizabeth Hinton. This book is a history of the progression through the titular subjects in the US, starting with the Kennedy presidency and ending with the Reagan presidency. It shows that while some problems associated with the war on poverty did come from good but conflicting intentions in the implementation of social welfare programs, many more problems came from the halfhearted implementation of those programs with the intent, & through the lens, of fighting crime, which ultimately led to the replacement of those programs with more explicit expansions of policing to fight crime, especially in response to high-profile riots in large cities in the 1960s & 1970s.

The introduction makes clear that the war on crime started with the presidency of Lyndon Johnson, and arguably even earlier with the Kennedy administration's efforts to combat juvenile delinquency. The association of the war on crime primarily with the presidencies of Nixon, Ford, and Reagan (Carter is often left out of popular discussions about this despite being equally responsible in these ways) comes about because those presidents (including Carter) cut funding for social welfare programs that Johnson initially saw as integral to the success of programs combating crime but had begun to back away from by the end of his presidency, increased funding for policing, and shifted focus to arrests & imprisonments as ways to prevent future crimes. The author discusses how law enforcement agencies started to develop biased metrics for crime & collect data in biased ways to justify racist theories about the supposedly inherent pathologies of black Americans in cities, even as some politicians at that time wondered whether law enforcement agencies should be collecting crime statistics at all given the conflict of interest. The author emphasizes the bipartisan white American political consensus about crime after 1960 to show that it wasn't just Reagan or other Republicans in the 1980s who focused on punishment through veiled racism. The author also discusses how black American community leaders in & after the 1960s wanted to partner with law enforcement agencies to develop effective strategies together to deal with local problems at the root of local crimes, but conservative politicians (from both parties) deliberately moved away from such partnerships toward the federal block grant funding model, which incentivized states to conduct law enforcement in the most heavy-handed & punitive way possible, especially in urban black neighborhoods, and led many black Americans to stop trusting law enforcement. My only criticism specific to the introduction (as opposed to the rest of the book) is that the author's language about conditional probabilities is quite sloppy, which is problematic in the context of discussions about biases in the data collection & statistical analysis that law enforcement agencies perform regarding crime.

The rest of the book simply goes through the history in detail. In the introduction, I wasn't sure who the target audience of the book was supposed to be, given frequent references in the main text to gaps in the academic literature, but the intended narrative became clearer through the rest of the book.

There are a few points that I credit the book for. These are as follows.

First, the author repeats points effectively to reinforce the narrative. This makes the narrative easy to follow, and the narrative is clearly well-sourced.

Second, I didn't know that the war on crime dated back to the Kennedy administration. I can claim to have learned that from this book.

Third, I didn't know that close federal cooperation directly with local governments also dated back to the Kennedy administration at the latest. I can claim to have learned that from this book. This is an issue that has been on my mind for a while in the context of empowering cities whose political views oppose those of their state governments which want to disempower them (because while there is a clear federal relationship among the federal, state, and tribal governments in the US Constitution, the US Constitution doesn't govern states' internal affairs, and many states treat their constituent cities as fully subordinate to state governments in all matters).

Fourth, in my view, the author correctly recognizes that policies to combat crime should be evaluated for effectiveness several years afterwards, as there are no quick fixes. The author therefore evaluates whether different parts of the war on crime had effects on crime 10-20 years later instead of just a year later, as the latter would have been a political cheap shot.

Fifth, I appreciate the author's recognition that the US made the same mistakes in both military & social welfare strategies domestically as in invasions of other countries. The author makes clear that many politicians at that time recognized this in the context of wars in Southeast Asia. This adds another dimension to arguments from the book How Everything Became War and the Military Became Everything by Rosa Brooks (which I have reviewed on this blog [LINK]) which, from what I remember, focused a little more on more recent invasions of Afghanistan & Iraq.

However, there are many more points in the book which I find problematic. These are as follows.

First, at several points, the author claims that news media & politicians often overstated the prevalence of violent crime, but the author's claims that federal law enforcement statistics about crime were less biased before the 1960s are undercut by the author's own acknowledgment that such statistics were collected much more sparsely and in a more ad hoc way before the 1960s. Even leaving this aside, the author doesn't (in the main text, leaving out the endnotes) name or cite enough specific sources to effectively argue against the dominant narrative of that time, which the book ends up restating while only feebly arguing against it, nor does the author intuitively explain why certain crime statistics may look alarming at first but may actually imply that such incidents are rare in practice for a given individual. For example, a certain homicide rate given in homicides per million people per year could look big at first glance yet imply that an individual has a very low chance of being killed by another person on a given day. (I don't know enough about crime statistics to know what specific number could plausibly fit this description.)

Second, at several points, the author basically wags a finger at what the federal government did, but in cases where solutions are longer-term and therefore more obvious, like investments in unleaded plumbing, better street lighting, or better sidewalks, the author barely identifies them, and in cases where issues like imminent or continuing riots were spiraling out of control, the author doesn't convincingly argue in favor of specific alternatives except in perhaps 1 or 2 isolated cases. Part of the problem, as becomes clear in the epilogue, may be that the federal government prematurely dismissed such alternatives before seriously trying them, but then the author should have spent more space arguing for such programs on their own potential specific merits instead of giving most of the space to the arguments of contemporary politicians which, by dominating the narrative, may unintentionally seem more convincing than the author may have wanted.

Third, the author argues at some points that poor black Americans should have been empowered from the bottom up but at other points that they should have gotten the same top-down federal assistance as poor white Americans (which did happen in the war on poverty, albeit at much lower monetary amounts per person). This seems incoherent, and the author makes no attempt to explain how these views are compatible with each other.

Fourth, the author ignores the issues of continued slum clearance, urban highway construction, and the dynamics of white flight from urban cores in much of the narrative. I've read in many other places how critical these concerns were in the context of urban crime, so it is surprising to see no mention of these concerns in the book.

Fifth, the author seems to unduly dismiss the challenges that the Carter administration faced in rebuilding damaged urban neighborhoods in the face of high interest rates & high inflation in the 1970s. Perhaps the argument would have been stronger if the author could have found examples of the Carter administration spending scarce resources on less dire issues.

Sixth, in the introduction, the author claims that the war on crime specifically wasn't a reincarnation of the Jim Crow era, but later in the book, the author at many points implies & comes close to explicitly saying exactly that. This seems inconsistent, though to be fair, it was obvious to me that the author would ultimately argue that the war on crime was related to the Jim Crow era, so this inconsistency only threw me off in the introduction.

Seventh, I found it interesting that the author used scare quotes around the term "evil empire" and called the Cold War "Reagan's Cold War". It could be argued that the latter is a more specific reference to how the Reagan administration waged the Cold War in the 1980s as a smaller part of the broader conflict over decades. However, based on the author's other stated & implied views through the book, I see it more likely as evidence of the author having far left-wing sympathies: in the broader context of the author's views, the latter term sounds like the author believes the Reagan administration was too bellicose toward the USSR and too taken by American propaganda over decades to admit that some parts of Soviet propaganda, especially about race relations, could be true (which is debatable in the context of internal affairs in the USSR).

Eighth, the author seems to argue that the implementation of a ban on handguns but not shotguns was racist because it failed to separate bans on weapons (which should mostly be about destroying those weapons and removing sources of weapons production, though perhaps some further consequences could be appropriate for repeat offenders) from the harsh punishment of offenders. I agree with this in the context of excessive punishments, and in the sense that, even now, white Americans in the US are much more often given the benefit of the doubt than black Americans with respect to the use of firearms in self-defense or the possession of firearms per the Second Amendment to the US Constitution. Moreover, it is worth remembering that while courts of law in the US didn't start to systematically recognize an individual right to bear arms until the 2000s (many decades after the 1960s), that doesn't mean that everyone was prosecuted equally for bearing firearms before the 2000s; it is much more plausible that there were racial disparities in the enforcement of such laws. However, I think the author doesn't do a good job of acknowledging that committing violent crimes in urban settings is much harder with a shotgun than with a handgun, which makes a handgun ban more sensible without such a ban (separate from the consequences to humans who violate it) necessarily being racist per se.

Ninth, at several points, the author seems to go beyond merely describing large-scale riots led by black Americans as a predictable consequence of oppression to being an apologist for such riots. I do at least acknowledge the merit in describing such predictable consequences from a sociological perspective. Also, I admit that it took me a while to understand the author's point that the problem with claiming in the 1960s & later that black Americans are culturally pathological is that such claims were coming from white American politicians who wanted to claim that such cultural pathology was the root of poverty & crime (and not the other way around). Furthermore, while I still believe that any human culture can have pathological elements, some of which come from a history of being oppressed & therefore traumatized while others may be evidence of being a privileged or oppressive group (so there is no such thing as a purely "good" or "oppressed" group or a purely "bad" or "oppressive" group), I can understand the author's desire, within the context of US history, to call white American racism, as that of the dominant group in the US, something closer to "cultural pathology" and to call the cultural pathologies that may result directly from centuries of oppression "trauma", as qualitatively distinct things. Having said all of that, I don't think the author did a good enough job of acknowledging that even if some ideal form of reparations for these harms could be formulated & implemented overnight, the lasting effects of these traumas could continue to have negative consequences for different localities & the US as a whole for many decades. Especially now, with the US having so many residents & citizens who come from other places & don't identify as white or black, I'm not sure how many Americans would have the patience to wait decades for those things to resolve, even if they are much more understanding & supportive of the need for reparations than most white Americans would have been in the 1960s (as the author shows how white Americans in the 1960s wanted quick fixes to the eruptions of violence in response to oppression, which led to further oppression through brutal crackdowns by law enforcement).

Overall, I still recommend this book because the basic historical narrative is well-written & well-sourced; I do feel like I learned a little bit and I had a lot to think about. I just think that readers should be aware of the author's biases as I've discussed above.

2022-08-07

Review: KDE neon 5.25

It has been a long time since I've reviewed a Linux distribution on this blog; the last such review was of Linux Mint 19 "Tara" 4 years ago [LINK]. In a more recent post about problems with a scanner that required me to install Linux Mint 20 "Ulyana" MATE because the existing operating system was damaged beyond repair [LINK], I explained that I had come to trust the consistency & stability of Linux Mint enough, and liked it enough, that, in conjunction with the lack of novelty in Linux distributions compared to 10 years prior, I no longer felt motivated to do such reviews. Thus, it may seem strange that I'm doing a review like this now. In truth, the motivation wasn't hugely compelling, but I thought it might make for a nice post on this blog as I didn't have much else in mind. I chose to check out a showcase of KDE, namely KDE neon, because it had been a long time since I tried KDE and I was getting a little concerned that the odd artifacts I was starting to see in Linux Mint 20 "Ulyana" MATE when hovering over right-click menus might be the tip of an iceberg of problems. While the latter concern has thankfully not come to pass even after several months of seeing these more minor issues, I figured it might be nice to see what KDE is like now.

[Image: Default desktop, before changes]
This review will be a bit different from past reviews. In particular, in past reviews, I took the perspective of a newbie to Linux trying to do ordinary tasks, whereas the purpose of this review is to see whether I can replicate the look & feel of my desktop in Linux Mint 20 "Ulyana" MATE. Thus, I will focus mostly on changing the desktop and on using the default KDE applications; I will not focus on the presence or absence of other applications or on other parts of the live USB environment. Follow the jump to see what it is like.

2022-07-10

FOLLOW-UP: Some Recent Troubles with pCloud and Google Chat

Almost exactly 11 months ago, I wrote a post [LINK] about problems I was experiencing with pCloud and Google Chat. I don't have any updates about Google Chat (or Google Meet), but I do have an update regarding pCloud. In the previous post, I noted that for files & folders protected by standard encryption (as opposed to stronger zero-knowledge encryption, for which this problem doesn't exist in pCloud), some files & folders require multiple attempts to transfer; I also speculated that pCloud may have been secretly deleting files & folders. After having spent more time using pCloud, I've been able to verify that my files & folders protected by standard encryption have transferred properly, and I think I've figured out why they initially seemed to require multiple attempts to transfer.

As I understand it, pCloud creates a temporary folder on the local hard drive, essentially a cache, to stage files before they are uploaded to pCloud. The process of transferring from the local cache to pCloud is limited by upload speeds, which are quite slow (as I mentioned in the previous post). Additionally, once the pCloud program is closed and the remote drive is unmounted, file transfer stops, so many folders & files that the user might think were transferred might not have been. A good way to verify this is to leave the pCloud desktop application open, monitor how many files as well as what total amount of data remain to be transferred, and only close the application when all folders & files have been transferred; I like to think of it as a practice similar to leaving a torrent open for uploading (seeding) after it has finished downloading.

2022-06-01

Nonlocality and Infinite LDOS in Lossy Media

While I have written many posts on this blog about various topics in physics or math unrelated to my graduate work, as well as posts promoting papers from my graduate work, it is rare that I've written direct technical posts about my graduate work. It is even more unusual that I should be doing so 2 years after leaving physics as a career. However, I felt compelled to do so after meeting again with my PhD advisor (a day before the Princeton University 2020 Commencement, which was held in person after a delay of 2 years due to the pandemic), as we had a conversation about the problem of infinite local density of states (LDOS) in a lossy medium.

Essentially, the idea is the following. Working in the frequency domain, the electric field produced by a polarization density in any EM environment is \( E_{i}(\omega, \vec{x}) = \sum_{j} \int G_{ij}(\omega, \vec{x}, \vec{x}')P_{j}(\omega, \vec{x}')~\mathrm{d}^{3} x' \), which can be written in bra-ket notation (dispensing with the explicit dependence on frequency) as \( |\vec{E}\rangle = \hat{G}|\vec{P}\rangle \). The LDOS is proportional to the power radiated by a point dipole and can be written as \( \mathrm{LDOS}(\omega, \vec{x}) \propto \sum_{i} \mathrm{Im}(G_{ii}(\omega, \vec{x}, \vec{x})) \). This power should be finite as long as the power put into the dipole to keep it oscillating forever at a given frequency \( \omega \) is finite. However, there appears to be a paradox: if the position \( \vec{x} \) corresponds to a point embedded in a local lossy medium, the LDOS diverges there.
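For readers who prefer discrete notation, these relations are just linear algebra once everything is put on a grid; here is a trivial sketch (with made-up random numbers on a 3-point grid standing in for the Green's function and polarization density):

```python
import numpy as np

# Made-up 3-point grid; random complex numbers stand in for G_ij(w, x, x')
# and a random real vector stands in for the polarization density P_j(w, x').
rng = np.random.default_rng(0)
G = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
P = rng.standard_normal(3)

E = G @ P                          # |E> = G |P>, i.e. E_i = sum_j G_ij P_j
ldos = np.imag(np.diag(G))         # LDOS at each grid point ~ Im G_ii there
print(E, ldos)
```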

I wondered if an intuitive explanation could be that loss should properly imply the existence of energy leaving the system by traveling out of its boundaries, so the idea of a medium that is local everywhere (in the sense that the susceptibility operator takes the form \( \chi_{ij}(\omega, \vec{x}, \vec{x}') = \chi_{ij}(\omega, \vec{x})\delta^{3} (\vec{x} - \vec{x}') \) at all positions) and is lossy at every point in its domain may not be well-posed as energy is somehow disappearing "into" the system instead of leaving it. Then, I wondered if the problem may actually be with locality and whether a nonlocal description of the susceptibility could help. This is where my graduate work could come in. Follow the jump to see a very technical sketch of how this might work (as I won't work out all of the details myself).

2022-05-01

Book Review: "Algorithms to Live By" by Brian Christian & Tom Griffiths

I've recently read the book Algorithms to Live By by Brian Christian & Tom Griffiths. This book shows how many problems & heuristics in computer science can be applied to explain or improve human decision-making. Each chapter focuses on a certain class of problems or issues. Such classes include the optimal stopping problem, the multi-armed bandit problem, searching & sorting, task scheduling, Bayesian inference, overfitting data, constraint relaxation, random stimulus, communication protocols, and social interaction. Additionally, most chapters try to show how results from computer science can either improve or justify certain human behaviors.

This book was frustrating for me to read. If it had fully met my expectation that it would show, in a unified & consistent way, how these computer science problems apply to human behavior and connect to each other, I would be singing its praises. If it had completely failed, I'd be happy to rhetorically trash it. Instead, I found that each chapter would make a great vignette on its own, and each chapter showed the great potential of what the book could have been, but the book failed to live up to that potential. First, there was very little connection among the chapters, and any acknowledgment by the authors of such connections was almost always superficial instead of deeply insightful. For example, the respective chapters about the optimal stopping problem, caches, and overfitting each could have been so much better with greater discussion of the connections to social pressure & game theory, yet those topics were discussed only in the last chapter, which I think was a mistake. Second, only in the concluding section did the authors make clear that they wanted to either improve or justify human behavior with each class of problems or issues. This became clear over the course of reading the book, yet there was very little guidance in each chapter about whether improvement or justification was the goal. Perhaps the worst offender was the chapter about constraint relaxation, as there was little connection to human behavior in a way that would be obvious to lay readers. These problems meant that reading the last numbered chapter (about game theory) and the conclusion felt simultaneously wonderful, for finally seeing these concepts discussed clearly, and maddening, for knowing that the book could have been so much better if these ideas had been executed more consistently through the book.

There are two other minor criticisms I have of the book. First, the chapter about overfitting seems to use the word "overfitting" to mean too many different things, which is ironic and undermines any clarity that the discussion could have provided. Second, the chapter about randomized algorithms attempts to make a tenuous connection between randomized algorithms used in computer science and the way that random mental stimuli can produce very creative responses in people, but it never makes clear whether the latter result is true at an individual level or only holds statistically across large populations.

Overall, I think the authors' goals were laudable and each chapter is interesting to read in isolation. However, other readers may be disappointed, as I was, by the way the authors fail to synthesize many of the ideas across chapters in a smooth & unified manner. Thus, I would advise readers who may be interested in these topics to go into this book with lower expectations.

2022-04-04

FOLLOW-UP: How to Tell Whether a Functional is Extremized

This post is a follow-up to an earlier post (link here) about how to tell whether a stationary point of a functional is a maximum, minimum, or saddle point. In particular, as I thought about it more, I realized that using the analogy to discrete vectors could help when formulating a more general expression for the second derivative of the nonrelativistic classical action for a single degree of freedom (i.e. the corresponding Hessian operator). Additionally, I thought of a few other examples of actions whose Hessian operators are positive-definite. Finally, I've thought more about how to express these equations for systems with multiple degrees of freedom (DOFs) as well as for fields, and about how these ideas connect to the path integral formulation of quantum mechanics. Follow the jump to see more.

2022-03-05

How to Tell Whether a Functional is Extremized

I happened to be thinking recently about how to tell when a functional is extremized. Examples in physics include minimizing the ground state energy of an electronic system expressed as an approximate density functional \( E[\rho] \) with respect to the electron density \( \rho \) or maximizing the relativistic proper time \( \tau \) of a classical particle with respect to a path through spacetime. Additionally, finding the points of stationary action that lead to the Euler-Lagrange equations of motion is often called "minimization of the action", but I can't recall ever having seen a proof that the action is truly minimized (as opposed to reaching a saddle point). This got me to think more about the conditions under which a functional is truly maximized or minimized as opposed to reaching a saddle point. Follow the jump to see more. I will frequently refer to concepts presented in a recent post (link here), including the relationships between functionals of vectors & functionals of functions. Additionally, for simplicity, all variables and functions will be real-valued.

2022-02-20

Google Chrome OS Flex and Broader Adoption of Linux

I recently read [Chin, The Verge (2022); Raphael, Computerworld (2022)] that Google is releasing a version of Chrome OS called Chrome OS Flex which can be copied to a USB storage drive and installed on computers that didn't come with Chrome OS. This seems very similar to how many popular Linux distributions work, so I initially wondered if Chrome OS Flex will succeed with the muscle of Google behind it where similar efforts by Linux distributions backed by smaller not-for-profit organizations have failed. At the same time, it seems clear to me that Google will not hesitate to use this as an opportunity to collect more valuable data from people who use Chrome OS Flex. This got me to think more broadly about how much ordinary people who might consider using Chrome OS Flex really care about their privacy (especially considering that such people would typically use Microsoft Windows 10/11, which are known to collect significant amounts of data from users) even after revelations about Facebook's practices, earlier revelations about government surveillance, and so on.

However, upon closer reading, I noticed that the first article makes clear that the target audience is schools & businesses which have many old computers whose Windows versions may no longer be supported. This makes more sense to me than targeting ordinary individuals, because I get the sense that the learning curve even to copy an ISO file onto a USB storage drive and install it onto a computer is steep for most ordinary individuals (despite the significant progress that distributions like Ubuntu & Linux Mint have made in making the installation process easy). By contrast, it seems more reasonable to expect specialists in schools & businesses to learn these things once and then do them for many different computers. Meanwhile, the second article makes clear that while traditional Chrome OS is capable of running many programs built for Windows, Chrome OS Flex will not have such capabilities. This suggests to me that while many schools may take up this opportunity given that few user-facing applications need to be installed on the computer and most user-facing applications can be accessed through web equivalents, this might not be the case for many businesses, so it is unclear to me which businesses will actually take this up.

Ultimately, I don't expect to see much adoption among ordinary individuals even though they aren't forbidden from installing & using Chrome OS Flex. That said, I would be interested to see how adoption evolves in schools & businesses over time.

2022-01-17

Turning off Comments for this Blog Going Forward

This is a quick post to say that I have unfortunately had to turn off comments for this blog. Recently, I had trouble using the comment feature on various posts in this blog (despite being its sole author & maintainer). Furthermore, for the last few years, the vast majority of comments have been unwanted commercial solicitations, while I haven't really gotten much useful feedback about posts themselves. I recognize that there are very few humans who regularly read this blog and that most, if not all, of those readers probably know me personally, so I trust that they can reach out to me if they want to share their thoughts about posts. In any case, going forward, comments will no longer be available on this blog. This also necessarily means that the "Featured Comments" blog post series cannot continue.