Transitioning from microscopic to macroscopic and quantum to classical regimes

I recently read two things that interested me, having previously worked in physics. The first was an article in The New Yorker magazine [LINK], in which the author does a good job of surveying the successes of mathematical modeling in the physical sciences and contrasting them with its limitations elsewhere: in public health, where many models of the spread of contagions fail when governments & societies take fast & drastic collective actions to limit that spread; in the social sciences, where the outputs of models can create feedback loops with public sentiment (for example in political polling); and in machine learning, where many practitioners across different domains expect the fancy curve-fitting of their models to represent fundamental understanding when that might not really be so. The second was a journal article published in Physical Review Letters [LINK] about how a massive (as opposed to massless) object, exhibiting the dynamics of a simple harmonic oscillator and prepared in a quantum coherent state, can be tested for deviations from classical behavior using a protocol that does not depend on the mass of the object. (I question this claim, because the protocol depends on measurements timed relative to the frequency of oscillation, and in many physical contexts the frequency does depend on the mass as \( \omega = \sqrt{k/m} \); this is somewhat of a quibble, though.) These two things got me thinking about something that I realized I never got out of many years of formal undergraduate & graduate education in physics, which can be illustrated with the following example.

In introductory physics classes that focus on Newtonian mechanics, a prototypical problem involves a block, modeled as a point mass, sliding (with or without friction) down a fixed triangular incline in the constant gravitational field of the Earth. In the context of those classes, instructors will be careful to note that this is merely a model: corrections could come from the variation of the Earth's gravitational field & the curvature of its surface, the possibility that the incline itself recoils (the model suppresses this by taking the incline to be fixed, which requires it to be much more massive than the block), the shape of the block, variations in the touching surfaces, air resistance, et cetera. In later classes, instructors may point out corrections due to special relativity (i.e. the finite speed of light) and general relativity (as it relates to the Earth's gravitational field).
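For concreteness in what follows, the Newtonian model of this system can be written down in one line. Taking the \( x \)-axis to point down the slope and the \( y \)-axis to point along the outward normal, decomposing the gravitational force gives

\[ m\ddot{x} = mg\sin(\theta) - \mu_{\mathrm{k}} N, \qquad m\ddot{y} = N - mg\cos(\theta) = 0 \]

where \( N \) is the normal force and \( \mu_{\mathrm{k}} \) is the coefficient of kinetic friction (set \( \mu_{\mathrm{k}} = 0 \) for the frictionless case), so that along the slope \( m\ddot{x} = mg\left(\sin(\theta) - \mu_{\mathrm{k}}\cos(\theta)\right) \).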

However, in later classes about quantum mechanics & statistical mechanics, instructors explain how different those models are from the models of Newtonian mechanics at human scales, and they often promise that appropriate treatments of aggregates of microscopic constituents can consistently recover results from Newtonian mechanics; yet this promise is almost never fulfilled. In particular, wavefunctions that describe pure states of single microscopic particles are quite far removed from the simple dynamical variables describing blocks on inclined planes. Similarly, although statistical mechanics can probabilistically describe the solid states of the block & the inclined plane as well as the gaseous state of the surrounding air, it is not usually extended to describe the dynamics of the block sliding down the inclined plane. For example, if a block sliding down a fixed inclined plane at angle \( \theta \) from the horizontal in a uniform gravitational field is described as having the equation of motion \( m\ddot{x} = mg\sin(\theta) \), where the \( x \)-axis is defined as pointing downward parallel to the slope of the inclined plane for increasing \( x \) and the \( y \)-axis points outward in the normal direction from the inclined plane, then I wish to see corrections of the form \( m\ddot{\vec{x}} = \sum_{\mu = 0}^{\infty} \sum_{\nu = 0}^{\infty} \hbar^{\mu} k_{\mathrm{B}}^{\nu} \vec{f}^{(\mu, \nu)} \) where the lowest-order term is \( \vec{f}^{(0, 0)} = mg\sin(\theta)\vec{e}_{x} \). I have never seen these sorts of quantum or statistical corrections to Newtonian equations of motion in simple (in the context of Newtonian mechanics) systems.
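One standard result that gestures toward such a series, although coursework rarely pushes it further, is the Ehrenfest theorem. For a single particle in a one-dimensional potential \( V(x) \), Taylor-expanding \( \langle V'(\hat{x}) \rangle \) about \( \langle \hat{x} \rangle \) gives

\[ m\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\langle \hat{x} \rangle = -\langle V'(\hat{x}) \rangle = -V'(\langle \hat{x} \rangle) - \frac{1}{2}V'''(\langle \hat{x} \rangle)\,\sigma_{x}^{2} - \ldots \]

where \( \sigma_{x}^{2} = \langle \hat{x}^{2} \rangle - \langle \hat{x} \rangle^{2} \); for a state like the harmonic oscillator ground state, \( \sigma_{x}^{2} = \hbar/(2m\omega) \), so this is one place where an \( \hbar \)-dependent correction of the desired form could enter. Amusingly, for the block on the incline the effective potential \( V(x) = -mg\sin(\theta)\,x \) is linear, so every correction term vanishes and the Newtonian equation of motion holds exactly for \( \langle \hat{x} \rangle \); the corrections I wish to see would therefore presumably have to come from the interactions among the block's many microscopic constituents & with its environment rather than from single-particle dynamics alone.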
Similarly, it is rare to see how quantum or statistical mechanical systems can, in appropriate limits, reproduce classical systems; the only examples I can think of are the quantum coherent state of the simple harmonic oscillator and the way the Moyal bracket in the phase space formulation of quantum mechanics reduces, to lowest order in \( \hbar \), to the Poisson bracket. Even in the latter case, intuitive construction of the quantum phase space quasiprobability function is harder than construction of a classical phase space probability density function (as I did in a post [LINK] from a few years ago): unlike the classical phase space probability density function, the quantum phase space quasiprobability function cannot be arbitrarily localized in phase space, can take on negative values for certain wavefunctions, is compressible in phase space with respect to its own evolution over time, and has no obvious form for a system of many particles constituting a macroscopic object like a block (in contrast to a classical phase space probability density function, which for such a system could simply be a product of Dirac delta functions localizing each microscopic constituent to a point in phase space).
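To make those two examples explicit: for a coherent state of a simple harmonic oscillator of frequency \( \omega \), the position expectation value follows the classical trajectory exactly,

\[ \langle \hat{x}(t) \rangle = x_{0}\cos(\omega t) + \frac{p_{0}}{m\omega}\sin(\omega t) \]

with a wavepacket width \( \sigma_{x} = \sqrt{\hbar/(2m\omega)} \) that stays constant in time. Meanwhile, the Moyal bracket expands in powers of \( \hbar \) as

\[ \{f, g\}_{\mathrm{M}} = \frac{2}{\hbar}\, f \sin\!\left(\frac{\hbar}{2}\left(\overleftarrow{\partial}_{x}\overrightarrow{\partial}_{p} - \overleftarrow{\partial}_{p}\overrightarrow{\partial}_{x}\right)\right) g = \{f, g\}_{\mathrm{PB}} + O(\hbar^{2}) \]

so the quasiprobability function of any state obeys the classical Liouville equation up to terms of order \( \hbar^{2} \) (and exactly so for Hamiltonians at most quadratic in position & momentum, which is consistent with the coherent state example).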

These considerations reminded me of a discussion I had last year with friends from college who also did course 8 (physics) with me. We came to a consensus that while people who do not become physics majors should, as usual, get exposure to Newtonian physics and the basics of electricity & magnetism, people who become physics majors should have a curriculum over 3-4 years that exhibits a sensible conceptual progression. In particular, after seeing Newtonian mechanics, such students should be exposed to the Lagrangian & Hamiltonian formulations of classical mechanics. The Lagrangian formulation should then be used to develop intuitions about mechanical waves, which in turn can lead to introductions to classical field theory, with classical electromagnetic theory developed as a rich example of a classical field theory. (I would also personally recommend using the introduction of mechanical waves to introduce the linear algebraic treatment of waves, and then carrying that linear algebraic treatment into the treatment of linear classical field theories in general & linear classical electromagnetic theory in particular.) The Hamiltonian formulation should then be used to develop intuitions about probability distributions in classical mechanics, which in turn can be used to develop intuitions about statistical mechanics. Optionally, at this point, the Hamiltonian formulation can also be used to develop intuitions about nonlinear dynamics & chaos theory; while this is good for the broader education of physics students, it is less immediately relevant to the introduction of quantum theory soon after (because quantum mechanics is linear).
Finally, only after all of this should quantum theory be introduced, so that there are clear connections of the wavefunction formulation of quantum mechanics to mechanical waves, of the phase space formulation of quantum mechanics to classical phase space probability distributions, and of the linear algebraic framework of quantum mechanics to the linear algebraic treatments of classical field theories (including linear classical electromagnetic theory). This would ensure that students understand that ideas like superposition, interference, rotation through a Hilbert space, statistical uncertainty, and related ideas are not unique to quantum mechanics; too many students unfortunately come away believing otherwise, which is a consequence of the way quantum mechanics is typically introduced in undergraduate curricula, at least in the US. We also came to a consensus that in each course, there should be clear explanations of which prototypical systems are analytically solvable, which are not, and why (in each case).