As a follow-up to this post, I'm going to briefly discuss what two statistical mechanics professors (who shall remain nameless) had to say when I talked to them about this. For those who don't remember or are too lazy to read through, the issue is that a new paper publicized by the MIT news office claims that by adopting the Gibbs definition of entropy rather than the Boltzmann definition, negative temperature can be removed from statistical mechanics. I pointed out many issues I had with the arguments for that claim, thereby casting doubt on the paper and its premise as a whole. Follow the jump to see what I was able to learn after talking to those professors. (It appears that rendering LaTeX on this blog no longer works right after the takedown, so I'm enclosing any useful LaTeX formulas in dollar signs for you to copy and paste into a LaTeX renderer, if you so choose. The rendering of LaTeX in past posts is inconsistent, just as a heads-up.)

The first professor agreed for the most part with what I had to say. He agreed that all of statistics, and therefore statistical mechanics, works only when $N$ is sufficiently large that relative fluctuations and corrections vanish, as $\lim_{N \to \infty} \frac{\sqrt{N}}{N} = 0$ and $\lim_{N \to \infty} \frac{\ln{N}}{N} = 0$. He also pointed out a more specific issue: the derivation of the temperature of a quantum harmonic oscillator under the paper's (Gibbs) definition would apparently yield different ground state temperatures for different oscillator frequencies, when in fact if every oscillator (regardless of frequency) is in its ground state, the temperature should be exactly absolute zero. Moreover, a system at negative Boltzmann temperature, when brought into contact with a positive-temperature system, spontaneously gives off heat to the "higher"-temperature system, appearing to violate the second law of thermodynamics; he showed that using the Gibbs temperature instead does not in fact alleviate the problem, because the Gibbs temperature would be arbitrarily close to zero and the system would still give off heat to a hotter body. Finally, echoing a point from my earlier post, he said that the experimental consequences of the Gibbs temperature are not as readily apparent as the paper's authors would seem to suggest.
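To make the professor's oscillator point concrete, here is a minimal numerical sketch of my own (not from the paper or the conversation): for a single oscillator with levels $E_n = \hbar\omega(n + \frac{1}{2})$, the Gibbs entropy counts all states at or below $E_n$, so $S_G(n) = k_B \ln(n+1)$, and a finite-difference temperature $T = \Delta E / \Delta S$ evaluated at the ground state comes out proportional to $\omega$ rather than equal to zero for every oscillator.

```python
import math

HBAR = 1.0545718e-34  # reduced Planck constant, J*s
KB = 1.380649e-23     # Boltzmann constant, J/K

def gibbs_ground_state_temperature(omega):
    """Finite-difference Gibbs temperature of a single quantum harmonic
    oscillator at its ground state: T = dE/dS, with dE = hbar*omega
    (the level spacing) and dS = k_B*(ln 2 - ln 1) (the Gibbs entropy
    change from counting states up to E_1 instead of E_0)."""
    dE = HBAR * omega
    dS = KB * math.log(2)
    return dE / dS

# The "ground state temperature" scales linearly with frequency,
# instead of being zero for every oscillator in its ground state:
for omega in (1e12, 1e13, 1e14):  # rad/s
    print(omega, gibbs_ground_state_temperature(omega))
```

This is of course a crude discrete-difference caricature, but it shows the frequency dependence he objected to: doubling $\omega$ doubles the purported ground-state temperature.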

The second professor had a more nuanced outlook on both the general picture and the specific details. The biggest issue is that any mathematical model must ultimately reproduce the physics somehow. Negative temperature is only an approximation for systems whose particles have almost-isolated degrees of freedom: a set of degrees of freedom with a bounded energy spectrum cannot easily exchange energy with, and is not strongly coupled to, other degrees of freedom that allow for unbounded energies. For example, a two-level system is a large simplification of a spin-1/2 system: electron spins do have bounded energy levels, but electrons also have kinetic degrees of freedom (position/momentum) with no energy bound, so negative temperature just says that the spins can quickly exchange energy among themselves but will only exchange energy with the kinetic degrees of freedom after very long times. That said, equilibrium is defined to occur after an arbitrarily long time, by which point some exchange of energy between spin and kinetic degrees of freedom must occur, so physically saying that the spins are isolated from the kinetic degrees of freedom so as to allow for negative temperature is merely an approximation over an appropriate intermediate time scale. Relatedly, he agreed with my assertion that because negative temperature is an approximation for isolated degrees of freedom, no one seriously expects to stick a thermometer into a spin system or laser in a pumped (negative temperature) state and measure a negative temperature, but this also doesn't necessarily mean that the temperature definition proposed by the paper is correct.
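The two-level picture above can be sketched numerically (again, a toy illustration of my own): for $N$ spins with $n$ excited at energy $\epsilon$ each, the number of microstates is $\Omega(n) = \binom{N}{n}$, and a finite-difference Boltzmann temperature $T_B = \Delta E / \Delta S_B$ flips sign as soon as more than half the spins are excited, which is exactly the pumped, population-inverted regime.

```python
import math

def boltzmann_temperature(N, n, eps=1.0, kB=1.0):
    """Finite-difference Boltzmann temperature of N two-level spins
    (each excitation costing energy eps) with n spins excited:
    T = dE/dS, with dE = eps and dS = kB*[ln C(N, n+1) - ln C(N, n)]."""
    dS = kB * (math.log(math.comb(N, n + 1)) - math.log(math.comb(N, n)))
    return eps / dS

N = 1000
print(boltzmann_temperature(N, 100))  # normal population: T > 0
print(boltzmann_temperature(N, 900))  # inverted population: T < 0
```

The sign flip happens because $\binom{N}{n}$ peaks at $n = N/2$: past that point, adding energy *decreases* the Boltzmann entropy, which is precisely what a negative Boltzmann temperature encodes.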

There were a few other details that we discussed in some depth, but I don't remember them as well now. The one other big thing that stuck out was our discussion of entropy and information. He taught me how a canonical heat bath is actually a destroyer of information, so the information view of entropy works there. I reminded him of how the assumption of molecular chaos can extract the Boltzmann equation from the BBGKY hierarchy, and how this is indicative of the equivalence of entropy and information even in classical microcanonical statistical mechanics; he agreed with me after some thought. He then told me, though, that strictly speaking, traditional statistical mechanics is concerned only with equilibrium analysis, and while other ensembles may often have thermodynamic entropies that happen to also be representative of information, there are plenty of counterexamples as well; moreover, there is a certain degree of arbitrariness in some definitions of information entropy. His own work is on non-equilibrium statistical mechanics, for which there has been much progress in the last 15 years or so; because equilibrium requires an arbitrarily long waiting time, any real physical system is out of equilibrium. He said that, in fact, non-equilibrium statistical mechanical analysis derives (rather than assumes) a Shannon-esque entropy as the correct thermodynamic entropy describing irreversible processes, so the paper isn't quite up-to-date despite having been published just last year.
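As a small sanity check on the entropy-information link discussed above (my own illustration, not something either professor showed me): the Shannon-style entropy $S = -k_B \sum_i p_i \ln p_i$ reduces exactly to the microcanonical Boltzmann form $S = k_B \ln \Omega$ when all $\Omega$ accessible microstates are equally likely, which is the simplest sense in which the two notions coincide at equilibrium.

```python
import math

def shannon_entropy(probs, kB=1.0):
    """S = -kB * sum(p * ln p), skipping zero-probability states."""
    return -kB * sum(p * math.log(p) for p in probs if p > 0)

Omega = 64
uniform = [1.0 / Omega] * Omega
# For a uniform distribution over Omega microstates, the Shannon
# entropy equals the Boltzmann entropy kB * ln(Omega):
print(shannon_entropy(uniform), math.log(Omega))
```

The professor's point was that away from this uniform equilibrium case the identification is subtler, and that it is the non-equilibrium analysis that justifies the Shannon-esque form rather than postulating it.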
Finally, he and I agreed that if the "microcanonical" system is real, then it would have to be in contact with another system at a different temperature and thus be out of equilibrium, in which case his (this professor's) argument about non-equilibrium analysis wins out. Meanwhile, if the "microcanonical" system is ideal and is truly isolated in a way that keeps energy rather than temperature constant, it doesn't make sense to define entropy as a measure of the cumulative number of states at and below that energy (and it makes much more thermodynamic sense to keep the Boltzmann definition of entropy as the number of states at or in a small neighborhood of a particular energy), because for the states below the system's energy to be accessed, the energy by construction has to drop, which by definition is impossible in an isolated system.
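The contrast between the two entropy definitions in that last point can be seen directly in the bounded spin spectrum (one more toy sketch of my own): the Gibbs entropy $S_G = k_B \ln \sum_{m \le n} \binom{N}{m}$ is monotonically increasing in energy by construction, so the Gibbs temperature can never go negative, while the Boltzmann entropy $S_B = k_B \ln \binom{N}{n}$ turns over at half filling.

```python
import math

def S_boltzmann(N, n):
    """Boltzmann entropy (kB = 1): ln(number of states AT level n)."""
    return math.log(math.comb(N, n))

def S_gibbs(N, n):
    """Gibbs entropy (kB = 1): ln(number of states at or BELOW level n)."""
    return math.log(sum(math.comb(N, m) for m in range(n + 1)))

N = 100
# Past half filling, the Boltzmann entropy has turned over (hence
# negative temperature), while the cumulative Gibbs entropy keeps
# rising (hence a small but positive temperature):
print(S_boltzmann(N, 80) < S_boltzmann(N, 50))  # True
print(S_gibbs(N, 80) > S_gibbs(N, 50))          # True
```

This is exactly the trade the paper makes: the cumulative count smooths away the turnover, but at the cost of counting states the truly isolated system can never reach.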
