Statistical Theory of Heat

Tian Ma & Shouhong Wang, Statistical Theory of Heat, Indiana University ISCAM Preprint #1711, 37 pages, August 26, 2017

The above paper presents a new statistical theory of heat, based on statistical thermodynamics and on recent developments in quantum physics.

Main Motivations

First, by classical thermodynamics, for a thermodynamic system, the thermal energy {Q_0} is given by

\displaystyle Q_0=ST. \ \ \ \ \ (1)

Here {T} is the temperature of the system and {S} is the entropy given by the Boltzmann formula:

\displaystyle S=k\ln W, \ \ \ \ \ (2)

where {k} is the Boltzmann constant, and {W} is the number of microscopic configurations of the system. It is then clear that in modern thermodynamics, neither the temperature {T} nor the entropy {S} is associated with a physical heat carrier, and hence there is no physical carrier for the thermal energy {Q_0=ST}. Due to this lack of a physical carrier in the temperature {T}, in the entropy {S=k\ln W}, and consequently in the thermal energy {Q_0=ST}, the nature of heat is still not fully understood.

The second motivation is the recent development on the photon cloud structure of electrons: a naked electron is surrounded by a shell-layer cloud of photons. Electrons and photons therefore form a natural conjugate pair of physical carriers for emission and absorption, reminiscent of the conjugation between {T} and {S}.

Third, in view of {Q_0=ST} and the above conjugation between photons and electrons, a theory of heat has to make connections between 1) the conjugation between electrons and photons, and 2) the conjugation between temperature and entropy.

The new theory in this paper provides precisely such a connection: at the equilibrium of absorption and radiation, the average energy level of the system remains unchanged and represents the temperature of the system; at the same time, the number (density) of photons in the sea of photons represents the entropy (entropy density) of the system.

Main Results

1. Energy level formula of temperature

In view of the above connection between the two conjugations, temperature must be associated with the energy levels of electrons, since it is an intensive physical quantity measuring a certain strength of heat, reminiscent of the basic characteristic of electron energy levels. Notice also that atoms and molecules contain abundant orbiting electrons with many energy levels. Hence the energy levels of the orbiting electrons, together with the kinetic energy of the system particles, provide a faithful representation of the state of the system.

We derive the following energy level formula of temperature using the well-known Maxwell-Boltzmann, the Fermi-Dirac, and the Bose-Einstein distributions:

  • for classical systems,

    \displaystyle kT= \sum_n\bigg(1-\frac{a_n}{N}\bigg)\frac{a_n\varepsilon_n} {N(1+\beta_n\ln\varepsilon_n)}, \ \ \ \ \ (3)

  • for systems of Bose particles,

    \displaystyle kT= \sum_n \bigg(1 + \frac{a_n}{g_n}\bigg) \frac{a_n\varepsilon_n}{N(1+\beta_n\ln\varepsilon_n)}; \ \ \ \ \ (4)

  • for systems of Fermi particles,

    \displaystyle kT= \sum_n \bigg(1 - \frac{a_n}{g_n}\bigg) \frac{a_n\varepsilon_n}{N(1+\beta_n\ln\varepsilon_n)}. \ \ \ \ \ (5)

Here {\varepsilon_n} are the energy levels of the system particles, {N} is the total number of particles, {g_n} are the degeneracy factors (allowed quantum states) of the energy level {\varepsilon_n}, and {a_n} are the distributions, representing the number of particles on the energy level {\varepsilon_n}.

The above formulas amount to saying that temperature is simply the (weighted) average energy level of the system, and they give us a better understanding of the nature of temperature.

In summary, the nature of temperature {T} is the (weighted) average energy level. Also, the temperature {T} is a function of the distributions {\{a_n\}} and the energy levels {\{\varepsilon_n\}}, with the parameters {\{\beta_n\}} reflecting the properties of the material.
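As a quick numerical illustration, the classical formula (3) can be evaluated directly; the following sketch uses made-up energy levels, occupation numbers and material parameters {\beta_n}, in units with {k=1}:

```python
# Toy evaluation of the classical temperature formula (3),
# kT = sum_n (1 - a_n/N) * a_n * eps_n / (N * (1 + beta_n * ln(eps_n))),
# with made-up energy levels, occupations and material parameters (k = 1).
import math

eps = [1.0, 2.0, 3.0]        # energy levels eps_n (hypothetical units)
a = [70.0, 20.0, 10.0]       # occupation numbers a_n (hypothetical)
beta = [0.1, 0.1, 0.1]       # material parameters beta_n (hypothetical)
N = sum(a)                   # total number of particles

kT = sum((1.0 - an / N) * an * en / (N * (1.0 + bn * math.log(en)))
         for an, en, bn in zip(a, eps, beta))
print(kT)                    # a positive weighted average energy level
```

The result is a weighted average of the {\varepsilon_n}, which is the point of the formula: temperature is read off from the energy-level occupation, not postulated separately.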

2. Photon number formula of entropy

In view of the conjugate relations, since entropy {S} is an extensive variable, we characterize entropy as the number of photons in the photon gas between the system particles, or as the photon density of the photon gas in the system. Also, photons are bosons and obey the Bose-Einstein distribution. We can then make a connection between entropy and the number of photons and derive

\displaystyle S = kN_0 \left[ 1+ \frac{1}{kT} \sum_n \frac{\varepsilon_n a_n}{N_0}\right], \ \ \ \ \ (6)

where {\varepsilon_n} are the energy levels of photons, {a_n} is the number of photons at the energy level {\varepsilon_n}, {N_0=\sum_n a_n} is the total number of photons between the particles in the system, and {\sum_n \frac{\varepsilon_n}{kT}a_n} represents the number of photons in the sense of average energy level.

It is worth mentioning that this new entropy formula is equivalent to the Boltzmann entropy formula {S=k\ln W}. However, their physical meanings have changed: the new formula (6) provides explicitly that

the physical carrier of heat is the photons.

3. Temperature theorem

By the temperature formulas (3)–(5) and the entropy formula (6), we arrive immediately at the following results on temperature:

  • There are minimum and maximum values of temperature, {T_{\min}=0} and {T_{\max}};
  • When the number of photons in the system is zero, the temperature is at absolute zero; namely, the absence of photons in the system is the physical cause of absolute zero temperature;
  • (Nernst Theorem) With temperature at absolute zero, the entropy of the system is zero;
  • With temperature at absolute zero, all particles fill the lowest energy levels.

4. Thermal energy formula

Thanks to the entropy formula (6), we derive immediately the following thermal energy formula:

\displaystyle Q_0 = ST = E_0 + k N_0 T, \ \ \ \ \ (7)

where {E_0=\sum_n a_n \varepsilon_n} is the total energy of the photons in the system, {\varepsilon_n} are the energy levels of photons, {a_n} is the number of photons at the energy level {\varepsilon_n}, and {N_0} is the total number of photons in the system.
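Since (6) gives {ST = kN_0T + \sum_n \varepsilon_n a_n}, the identity (7) is algebraic; a small sketch with made-up photon data (units with {k=1}) confirms it numerically:

```python
# Check Q0 = S*T = E0 + k*N0*T, with S given by the photon-number
# entropy formula (6); toy photon distribution, units with k = 1.
eps = [0.5, 1.0, 1.5]        # photon energy levels (hypothetical units)
a = [40.0, 35.0, 25.0]       # photon numbers a_n at each level (hypothetical)
T = 1.3                      # temperature (hypothetical, k = 1)

N0 = sum(a)                                   # total photon number
E0 = sum(en * an for en, an in zip(eps, a))   # total photon energy
S = N0 * (1.0 + E0 / (N0 * T))                # entropy formula (6) with k = 1

print(S * T, E0 + N0 * T)    # the two sides of (7) agree
```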

The theory of heat presented in this paper is built on physical theories of the fundamental interactions, the photon cloud model of electrons, the first law of thermodynamics, the statistical theory of thermodynamics, the radiation mechanism of photons, and the energy level theory of micro-particles. The theory uses rigorous mathematics to reveal the physical essence of temperature, entropy and heat.

Tian Ma & Shouhong Wang


Dynamical Theory of Thermodynamical Phase Transitions

Tian Ma & Shouhong Wang, Dynamical Theory of Thermodynamical Phase Transitions, Preprint, July 14, 2017

This blog post is on the above paper. The goals of this paper, as of any phase transition theory, are

  1. to determine the definition and types of phase transitions,
  2. to derive the critical parameters,
  3. to obtain the critical exponents, and
  4. to understand the mechanism and properties of phase transitions, such as supercooled and superheated states and the Andrews critical points, etc.

There are two routes for studying phase transitions: the first is Landau’s approach using thermodynamic potentials, and the second is a microscopic approach using statistical theory. These two routes complement each other. Our dynamical transition theory follows Landau’s route and is based on the thermodynamic potential, but it is not a mean-field theory approach.

1. For a thermodynamic system, there are three different levels of physical quantities: the control parameters {\lambda}, the order parameters (state functions) {u=(u_1, \cdots, u_N)}, and the thermodynamic potential {F}. These are well-defined quantities that fully describe the system. The potential is a functional of the order parameters and is used to represent the thermodynamic state of the system.

In a recent paper [Tian Ma & Shouhong Wang, Dynamic Law of Physical Motion and Potential-Descending Principle, Indiana University, The Institute for Scientific Computing and Applied Mathematics Preprint #1701] and a related previous blog post, we postulated a potential-descending principle (PDP) for statistical physics. Based on PDP, the dynamic equation of a thermodynamic system in a non-equilibrium state takes the form

\displaystyle \frac{\text{d} u }{\text{d} t } = - A \delta F(u, \lambda), \ \ \ \ \ (1)

which leads to an equation for the deviation {u} of the order parameter from the basic equilibrium state {\bar u}:

\displaystyle \frac{du}{dt} = L_\lambda u +G(u,\lambda), \ \ \ \ \ (2)

where {L_\lambda} is a linear operator and {G(u,\lambda)} is a nonlinear operator.

We obtain in this paper three theorems, providing a full theoretical characterization of thermodynamic phase transitions.

2. The first theorem states that as soon as linear instability occurs, the system always undergoes a dynamical transition of one of three types: continuous, catastrophic, or random. This theorem offers detailed information on the phase diagram and on the critical threshold {\lambda_c} in the control parameter {\lambda=(\lambda_1, \cdots, \lambda_N) \in \mathbb R^N}, which is precisely determined by the following equation:

\displaystyle \beta_1(\lambda_1, \cdots, \lambda_N)=0. \ \ \ \ \ (3)

Here {\beta_1} is the first eigenvalue of the linear operator {L_\lambda} given by

\displaystyle L_\lambda w \equiv -\left(\frac{\partial^2F(\bar u; \lambda)}{\partial u_i \partial u_j}\right) w = \beta_1(\lambda) w, \ \ \ \ \ (4)

where {F} is the potential functional.
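As an illustration of the criterion (3), the following sketch computes {\beta_1(\lambda)} for a hypothetical quartic potential (our toy example, not from the paper), using a finite-difference Hessian at the equilibrium {\bar u=0}; the threshold {\lambda_c=0} is where {\beta_1} changes sign:

```python
# First eigenvalue beta_1(lambda) of L_lambda = -Hessian(F) at u = 0,
# for a hypothetical potential F(u; lambda) = (lambda/2)|u|^2 + (1/4)|u|^4.
# The transition threshold is where beta_1 = 0, i.e. lambda_c = 0.
import numpy as np

def F(u, lam):
    r2 = float(np.dot(u, u))
    return 0.5 * lam * r2 + 0.25 * r2**2

def beta1(lam, h=1e-4):
    # finite-difference Hessian of F at the equilibrium u = 0
    n = 2
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i], np.eye(n)[j]
            H[i, j] = (F(h*e_i + h*e_j, lam) - F(h*e_i - h*e_j, lam)
                       - F(-h*e_i + h*e_j, lam) + F(-h*e_i - h*e_j, lam)) / (4*h*h)
    return np.max(np.linalg.eigvalsh(-H))   # first (largest) eigenvalue of -H

print(beta1(1.0), beta1(-1.0))   # negative before, positive after the transition
```

For this potential {\beta_1(\lambda)=-\lambda}, so linear instability, and with it the dynamical transition, sets in exactly as {\lambda} decreases through {\lambda_c=0}.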

3. The second theorem states that there are only first-, second- and third-order thermodynamic transitions, and it establishes a correspondence between the Ehrenfest classification and the dynamical classification. The Ehrenfest classification offers clear experimental quantities to determine the type of a phase transition, while the dynamical classification provides a full theoretical characterization of the transition.

4. The last theorem states that both catastrophic and random transitions lead to saddle-node bifurcations, and that latent heat, superheated states and supercooled states always accompany the saddle-node bifurcations associated with first-order transitions.

5. We emphasize that the three theorems lead to three important diagrams: the phase diagram, the transition diagram and the dynamical diagram. These diagrams appear to be derivable only by the dynamical transition theory presented in this paper. In addition, our theory achieves the fourth goal of phase transition theories stated at the beginning of this blog post, which is hardly achievable by other existing theories.

Tian Ma & Shouhong Wang



Dynamical Law of Physical Motion Systems

Tian Ma & Shouhong Wang, Dynamical Law of Physical Motion and Potential-Descending Principle, The Institute for Scientific Computing and Applied Mathematics Preprint #1701, July 6, 2017

This is the last of the three blog posts for the above paper. The first was on postulating the potential-descending principle (PDP) as the first principle of statistical physics, and the second was on irreversibility and the problems in the Boltzmann equation.

This blog focuses on the general dynamic law for all physical motion systems, and on the new variational principle with constraint-infinitesimals.

Dynamical Law for isolated physical motion systems

1. For each isolated physical motion system, there are a set of state functions {u=(u_1, \cdots, u_N)}, describing the states of the system, and a potential functional {F(u)}, representing a certain form of energy intrinsic to the underlying physical system. An important basic ingredient of modeling the underlying physical system is to determine the state functions and the potential functional.

Then it is physically clear that the rate of change of the state functions {du/dt} should equal the driving force derived from the potential functional {F}. More precisely, we postulate the following dynamical law of physical motion:

\displaystyle \frac{du}{dt } =- A \delta_{\mathcal L} F(u), \ \ \ \ \ (1)


where {A} is the coefficient matrix, {- \delta_{\mathcal L} F(u)} is the variation with constraint infinitesimals, representing the driving force of the physical motion system, and {\mathcal L} is a differential operator representing the infinitesimal constraint.

2. We show that proper constraints should be imposed on the infinitesimals (variation elements) for the variation of the potential functional {F}. These constraints can be considered as generalized energy and/or momentum conservation laws for the infinitesimal variation elements. The variation under constraint infinitesimals is motivated in part by the recent work of the authors on the principle of interaction dynamics (PID) for the four fundamental interactions, which was required by the dark energy and dark matter phenomena, the Higgs fields, and the quark confinement. Basically, PID takes the variation of the Lagrangian actions for the four interactions, under energy-momentum conservation constraints. We refer interested readers to an earlier blog on PID, and the following book for details:

[Ma-Wang, MPTP] Tian Ma & Shouhong Wang, Mathematical Principles of Theoretical Physics, 524pp., Science Press, 2015

3. The linear operator {\mathcal L} in the dynamic law (1) takes the form of a differential operator {L} or its dual {L^\ast}. The constraints can be imposed either on the kernel {\mathcal N^\ast} of the dual operator {L^\ast} or on the range of the operator {L}, given as follows:

\displaystyle \langle \delta_{L^\ast}F(u),v\rangle_{H}=\frac{\mbox{d}}{\mbox{d}t}\bigg|_{t=0} F(u+tv), \qquad \forall\ L^\ast v=0,

\displaystyle \langle \delta_{L}F(u),\varphi\rangle_{H_1}=\frac{\mbox{d}}{\mbox{d}t}\bigg|_{t=0} F(u+tL\varphi), \qquad \forall\ \varphi\in H_1.

Using the orthogonal decomposition theorem below, we show that the above variations with constraint infinitesimals take the following form:

\displaystyle \delta_{L^\ast}F(u)=\delta F(u)+Lp,

\displaystyle \delta_{L}F(u)=L^\ast\delta F(u),

for some function {p}, which plays a role similar to the pressure in incompressible fluid flows. Here {\delta F(u)} is the usual variational derivative.

4. As an example, we consider the compressible Navier-Stokes equations:

\displaystyle \frac{\partial u}{\partial t} + (u \cdot \nabla) u = \frac{1}{\rho} \left[ \mu \Delta u - \nabla p + f\right],

\displaystyle \frac{\partial \rho}{\partial t} = -\text{div} \left(\rho u \right).

Let the constraint operator {L=- \nabla} be the gradient operator, with dual operator {L^\ast =\text{div}}.
Also, let the potential functional be

\displaystyle \Phi(u, \rho) = \int_{\Omega}\left[\frac{\mu}{2}|\nabla u|^2 - fu+ \frac12 \rho^2 \text{div}u\right]\text{d} x.

Then the above compressible Navier-Stokes equations are written as

\displaystyle \frac{\text{d} u}{\text{d} t} = - \frac{1}{\rho} \frac{\delta_{L^\ast} }{\delta u} \Phi (u, \rho),

\displaystyle \frac{d \rho}{d t} = - \frac{\delta}{\delta \rho} \Phi(u, \rho),

which is in the form of (1) with coefficient matrix {A=\text{diag}(1/\rho, 1)}. Also, the pressure is given by

\displaystyle p=p_0 - \lambda \text{div} u - \frac12 \rho^2.


Here {\text{d}/\text{d}t} denotes the material derivative:

\displaystyle \frac{\text{d} u}{\text{d}t} = \frac{\partial u }{\partial t} + \frac{\partial u}{\partial x_i} \frac{\text{d} x_i}{\text{d} t } = \frac{\partial u }{\partial t} + (u \cdot \nabla ) u, \qquad \frac{\text{d}\rho}{\text{d}t}= \frac{\partial \rho}{\partial t} + u\cdot \nabla \rho.

5. There are two types of physical motion systems: dissipative systems and conservative systems. The coefficient matrix {A} is symmetric and positive definite if and only if the system is dissipative, and {A} is anti-symmetric if and only if the system is conservative.

Dynamical Law for Coupled Physical Motion Systems

Symmetry plays a fundamental role in understanding Nature. In [Ma-Wang, MPTP], we have demonstrated that for the four fundamental interactions, the Lagrangian actions are dictated by the general covariance (principle of general relativity), the gauge symmetry and the Lorentz symmetry; the field equations are then derived using PID as mentioned earlier.

For isolated motion systems, all energy functionals {F} obey certain symmetries such as {SO(n)} ({n=2, 3}) symmetry. In searching for laws of Nature, one inevitably encounters a system consisting of a number of subsystems, each of which enjoys its own symmetry principle with its own symmetry group. To derive the basic law of the coupled system, we postulated in [Ma-Wang, MPTP] the principle of symmetry-breaking (PSB), which is of fundamental importance for deriving physical laws for both fundamental interactions and motion dynamics: Physical systems in different levels obey different laws, which are dictated by their corresponding symmetries. For a system coupling different levels of physical laws, part of these symmetries must be broken.

In view of this principle, for a system coupling different subsystems, the motion equations become

\displaystyle \frac{\text{d} u}{\text{d}t} = -A\delta_{\mathcal L} F(u) + B(u), \ \ \ \ \ (2)


where {B(u)} represents the symmetry-breaking.

Orthogonal-Decomposition Theorem

To establish the needed mathematical foundation for the dynamical law of physical motion systems, we need to prove an orthogonal decomposition theorem, Theorem 6.1 in the paper. Basically, for a linear operator {L:H_1 \rightarrow H} between two Hilbert spaces {H} and {H_1}, with dual operator {L^\ast}, any {u \in H} can then be decomposed as

\displaystyle u=L\varphi+v,\quad L^\ast v=0,

where {L\varphi} and {v} are orthogonal in {H}.


The dynamical law given by (1) and (2) is essentially known for motion systems in classical mechanics, quantum mechanics and astrophysics; see among others [Ma-Wang, MPTP] and [Landau and Lifshitz, Course of theoretical physics, Vol. 2, The classical theory of fields].

Thanks to the variation with infinitesimal constraints, the law of fluid motion is now in the form of (1) and (2).

The potential-descending principle (PDP) addressed in the previous blog shows that non-equilibrium thermodynamical systems are governed by the dynamical law (1) and (2), as well.

In a nutshell,

the dynamical laws (1) and (2) are the law for all physical motion systems.

We end this blog by emphasizing that in deriving dynamical law and the basic laws for the four fundamental interactions [Ma-Wang, MPTP], the following guiding principle of physics played a crucial role:

  • The heart of physics is to seek experimentally verifiable, fundamental laws and principles of Nature. In this process, physical concepts and theories are transformed into mathematical models:

    \displaystyle \text{ physical laws } = \text{ mathematical equations} \ \ \ \ \ (3)

  • The predictions derived from these models can be verified experimentally and conform to reality.

The true understanding of (3) is a subtle process, and is utterly important.

Tian Ma & Shouhong Wang


Irreversibility and Problems in the Boltzmann Equation

Tian Ma & Shouhong Wang, Dynamical Law of Physical Motion and Potential-Descending Principle, The Institute for Scientific Computing and Applied Mathematics Preprint #1701, July 6, 2017

The previous blog post was on postulating the potential-descending principle (PDP) as the first principle of statistical physics. This blog post is on the next two components of the above paper:

  1. to demonstrate that the PDP is the first principle to describe irreversibility of all thermodynamic systems;

  2. to examine the problems faced by the Boltzmann equation.


First, irreversibility is a macroscopic property of thermodynamics, and must be described by the first-level physical quantities, the thermodynamic potentials, rather than by the second-level quantities (state functions) or the third-level quantities (control parameters).

Second, entropy {S} is a state function, which is a solution of the basic thermodynamic equations. The thermodynamic potential is a higher-level physical quantity than entropy, and consequently is the correct physical quantity for describing irreversibility of all thermodynamic systems.

Problems in Boltzmann Equation

Historically, great effort has been put into establishing a mathematical model of the entropy-increasing principle, and the Boltzmann equation was introduced mainly for this purpose. Since there is no first principle from which to achieve it, the Boltzmann equation is a phenomenological model with two specific goals:


  • to derive the entropy-ascending principle, and

  • to make the Maxwell-Boltzmann distribution (realistic equilibrium state of a dilute gaseous system) a steady-state solution.


However, the Boltzmann equation faces many problems:

First, laws of physics (equations) should not use state functions, which are themselves governed by physical laws, as independent variables. The Boltzmann equation violates this simple physical rule by using the velocity field as an independent variable. Consequently, the Boltzmann equation is not a physical law.

Second, the Boltzmann equation uses the velocity field {v} as an independent variable. This leads to a new unknown function (force field) in the Boltzmann equation:

\displaystyle F=F(t, x, v), \ \ \ \ \ (1)

which is the sum of the external force and the force generated by the total interaction potential of all particles in the system, including the force due to collision.

Third, the main source of this force field {F} is the interaction between particles, and {F} is realistically nonzero. Otherwise, all particles in the system would move in uniform rectilinear motion, and in particular there would be no particle collisions (collisions are short-range interactions).

Fourth, if

\displaystyle F\not=0, \ \ \ \ \ (2)

then the Maxwell distribution fails to be a steady-state solution of the Boltzmann equation. Since the Maxwell distribution is the realistic equilibrium state of dilute gaseous systems, the Boltzmann equation fails to describe the underlying physics in this important regard.

Fifth, in deriving the H-Theorem (i.e. the entropy-ascending principle), the following must be assumed:

\displaystyle F=F(t, x) \text{ is independent of } v. \ \ \ \ \ (3)


This is a non-physical, rather arbitrary assumption, since {F} includes the force generated by all interactions (including collision) and must be velocity dependent. Consequently the H-Theorem is not a natural consequence of the Boltzmann equation.

Sixth, ignoring the non-physical nature of the assumption {F=0} (contrary to (2)), the space of all steady-state solutions of the Boltzmann equation is five-dimensional. Namely, the general form of the steady-state solutions is

\displaystyle \bar \rho = e^{\alpha_0 + \alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3 + \alpha_4 v^2},

where {\alpha_i} ({i=0, \cdots, 4}) are constants, and {v=(v_1, v_2, v_3)}. This shows that each steady state is not stable, which does not fit reality.

Seventh, the entropy-increasing principle says that a gaseous system in equilibrium attains its maximum entropy; i.e., the Maxwell distribution should be the maximizer of

\displaystyle S=-\int \rho\ln \rho\, dx\, dv + S_0. \ \ \ \ \ (4)

However, the maximizer of {S} is given by

\displaystyle \rho_0=e^{-1}.

Again, this is non-physical.
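The constant maximizer quoted above comes from the unconstrained Euler-Lagrange equation of (4); the one-line computation is:

```latex
% Unconstrained stationarity of the functional (4):
% S[\rho] = -\int \rho \ln\rho \, dx \, dv + S_0.
\frac{\delta S}{\delta \rho} = -\ln\rho - 1 = 0
\qquad\Longrightarrow\qquad
\rho_0 = e^{-1},
```

a constant density rather than the Maxwell distribution, which is the non-physical conclusion noted above.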

In summary,

the Boltzmann equation is not a physical law, and it also fails to achieve its two original goals.

Tian Ma & Shouhong Wang


Potential-Descending Principle as the First Principle of Statistical Physics

Tian Ma & Shouhong Wang, Dynamical Law of Physical Motion and Potential-Descending Principle, The Institute for Scientific Computing and Applied Mathematics Preprint #1701, July 6, 2017

One main component of this paper is to postulate the following potential-descending principle (PDP) for statistical physics:

Potential-Descending Principle: For each thermodynamic system, there are order parameters {u=(u_1, \cdots, u_N)}, control parameters {\lambda}, and the thermodynamic potential functional {F(u; \lambda)}. For a non-equilibrium state {u(t; u_0)} of the system with initial state {u(0; u_0)=u_0}, we have the following properties:

1)  the potential {F(u(t; u_0); \lambda)} is decreasing:

\displaystyle \frac{\text{d} }{\text{d} t} F(u(t; u_0); \lambda) < 0 \qquad \forall t > 0;

2) the order parameters {u(t; u_0)} have a limit

\displaystyle \lim\limits_{t \rightarrow \infty}u(t; u_0) = \bar u;

3)  there is an open and dense set {\mathcal O} of initial data in the space of state functions, such that for any {u_0 \in \mathcal O}, the corresponding {\bar u} is a minimum of {F}, which is called an equilibrium of the thermodynamic system:

\displaystyle \delta F(\bar u;\lambda)= 0.

1. In classical thermodynamics, the order parameters (state functions) {u=(u_1, \cdots, u_N)}, the control parameters {\lambda}, and the thermodynamic potential (or potential for short) {F} are all treated as state variables. This mixing of different levels of physical quantities leads to difficulties in the understanding and development of statistical physics.

One important feature of PDP above is the distinction between different levels of thermodynamical quantities: thermodynamical potentials are functionals of the order parameters (state functions), and order parameters are functions of the control parameters. Potentials are first-level physical quantities, order parameters are second-level, and control parameters are third-level.

2. In classical thermodynamics, the first and second laws are treated as the first principles, from which one derives the other statistical properties of thermodynamical systems. One perception is that the potential-decreasing property can be derived from the first and second laws. However, in the derivations there is a hidden assumption: at equilibrium, there is a free variable in each pair (entropy {S}, temperature {T}) and (generalized force {f}, displacement {X}). Here the free variables correspond to order parameters. We discovered that this assumption is mathematically equivalent to the potential-descending principle. As an example, consider the internal energy of a thermodynamic system; classical theory asserts that the first and second laws are given by

\displaystyle dU \le \frac{\partial U}{\partial S} dS + \frac{\partial U}{\partial X}dX, \ \ \ \ \ (1)

where the equality represents the first law, describing the equilibrium state, and the inequality represents the second law for non-equilibrium states. However, there is a hidden assumption in (1) that {S} and {X} are free variables, and

\displaystyle \frac{\partial U}{\partial T} \le 0, \qquad \frac{\partial U}{\partial f}\le 0,

where, again, the equality is for the equilibrium state and the strict inequality is for non-equilibrium states. It is then clear that this assumption is mathematically equivalent to PDP. In other words, the potential-decreasing property cannot be derived if we treat the first and second laws as the only fundamental principles of thermodynamics. We also demonstrate that the potential-descending principle leads to both the first and second laws of thermodynamics. Therefore we reach the following conclusion:

the potential-descending principle is a more fundamental principle than the first and second laws.

3. For a thermodynamical system, PDP provides a dynamical law for the transformation of non-equilibrium states into equilibrium states: the dynamic equation of a thermodynamic system in a non-equilibrium state takes the form

\displaystyle \frac{\text{d} u }{\text{d} t } = - A \delta F(u), \ \ \ \ \ (2)

where {A} is a positive, symmetric coefficient matrix.
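A minimal numerical sketch of (2), with a hypothetical one-dimensional double-well potential and {A=1} (our toy example, not from the paper), exhibits both properties postulated by PDP: the potential decreases along the flow, and the state converges to a minimizer:

```python
# Explicit-Euler simulation of du/dt = -A * dF/du for the hypothetical
# double-well potential F(u) = (u^2 - 1)^2 / 4, with A = 1.
# PDP predicts F decreases monotonically and u(t) -> a minimum of F.
def F(u):
    return 0.25 * (u * u - 1.0) ** 2

def dF(u):
    return u * (u * u - 1.0)

u, dt = 0.3, 0.01
values = [F(u)]
for _ in range(5000):
    u -= dt * dF(u)              # dynamic equation (2) with A = 1
    values.append(F(u))

print(u)                          # close to the minimizer u = 1
print(all(b <= a + 1e-15 for a, b in zip(values, values[1:])))  # True: F descends
```

Starting from {u_0=0.3}, the flow descends the potential and settles at the equilibrium {\bar u=1}, where {\delta F(\bar u)=0}, exactly as properties 1)–3) of PDP describe.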

4. According to the entropy formula

\displaystyle S=k\ln W,

and by the minimum potential principle as part of PDP,

\displaystyle \delta F=0,

the equilibrium state of the system is determined by taking the thermodynamic potential

\displaystyle F=U_0 -ST -\mu_1 N - \mu_2 E, \ \ \ \ \ (3)

where {U_0} is the internal energy, which is a constant, {N} is the number of particles, {\mu_1} and {\mu_2} are Lagrangian multipliers, and {E} is the total energy. For this system, the entropy {S} is an order parameter, and temperature {T} is a control parameter.

Then, by procedures similar to those in Section 6.1 of [R. K. Pathria & Paul D. Beale, Statistical Mechanics, 3rd Edition, Elsevier, 2011], we can derive all three distributions: the Maxwell-Boltzmann distribution, the Fermi-Dirac distribution, and the Bose-Einstein distribution. This shows that

the potential-descending principle is also the first principle of statistical mechanics.
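For the Maxwell-Boltzmann case, the stationarity computation is the standard one; a sketch (assuming Boltzmann counting {W=\prod_n g_n^{a_n}/a_n!} and Stirling's formula, following the textbook procedure rather than reproducing the paper's) reads:

```latex
% delta F / delta a_n = 0 for F = U_0 - ST - \mu_1 N - \mu_2 E, with
% S = k\ln W,\  W = \prod_n g_n^{a_n}/a_n!,\  N = \sum_n a_n,\  E = \sum_n a_n\varepsilon_n.
% Stirling gives \partial(\ln W)/\partial a_n = \ln g_n - \ln a_n, hence
-kT\left(\ln g_n - \ln a_n\right) - \mu_1 - \mu_2\varepsilon_n = 0
\quad\Longrightarrow\quad
a_n = g_n\, e^{(\mu_1+\mu_2\varepsilon_n)/kT},
```

which has the Maxwell-Boltzmann form {a_n\propto g_n e^{-\varepsilon_n/kT}} once the multipliers are fixed by the particle-number and energy constraints (in particular {\mu_2=-1} recovers the standard Boltzmann factor).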

Tian Ma & Shouhong Wang


What does Einstein’s General Relativity Tell Us about Black Holes?

Presentation Slides

Today we gave a presentation at the IU Science Slam with the same title as this post. The main themes are to present the black hole theory derived from Einstein’s General Relativity, and to point out the essential differences between this theory and the Newtonian viewpoint on black holes. The final message is

nothing gets out of the black hole, and nothing gets inside a black hole either.

This is based on a recent paper [Tian Ma & Shouhong Wang,  Astrophysical dynamics and cosmology, J. Math. Study, 47:4 (2014), 305-378]; see also the previous blog post: Singularity at the Black-Hole Horizon is Physical.


Singularity at the Black-Hole Horizon is Physical

1. Newtonian Viewpoint

Consider a massive body with mass {M} inside a ball {B_R} of radius {R}. The Schwarzschild radius is defined by {R_s={2GM}/{c^2}.}

Based on the Newtonian theory, a particle of mass {m} will be trapped inside the ball {B_R} and cannot escape from it if its kinetic energy {mv^2/2} is smaller than the gravitational potential energy. Since no particle moves faster than the speed of light {c}, this holds whenever

\displaystyle \frac{mv^2}{2} \le \frac{mc^2}{2} \le \frac{mMG}{r},

which implies that

\displaystyle r \le R_s =\frac{2GM}{c^2}.

In other words, if the radius {R} of the ball is less than or equal to {R_s}, then all particles inside the ball are permanently trapped inside the ball {B_{R_s}}.
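As a sense of scale (our numerical aside, using standard values of {G}, {c} and the solar mass):

```python
# Schwarzschild radius R_s = 2GM/c^2 for a solar-mass body.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

R_s = 2 * G * M_sun / c**2
print(R_s)           # about 2.95e3 m, i.e. roughly 3 km
```

So a solar-mass Newtonian black hole would have to pack the entire Sun inside a ball of radius about 3 km.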

It is clear that the main results of the Newtonian theory of black holes are as follows:

  • the radius {R} of the black hole may be smaller than the Schwarzschild radius {R_s},
  • all particles inside the ball are permanently trapped inside the ball {B_{R_s}}, and
  • particles outside of a black hole {B_{R_s}} can be sucked into the black hole {B_{R_s}}.

2. Einstein-Schwarzschild Theory

Black-Holes are closed

Now consider the case where {R=R_s}. Based on the Einstein field equations, in the exterior of the body, the Schwarzschild solution is given by

\displaystyle ds^2= -\left[1-\frac{R_s}{r}\right]c^2dt^2+\left[1-\frac{R_s}{r}\right]^{-1}dr^2 +r^2d\theta^2+r^2\sin^2\theta d\varphi^2 \qquad \text{ for } r > R_s, \ \ \ \ \ (1)

and in the interior the Tolman-Oppenheimer-Volkoff (TOV) metric is

\displaystyle ds^2= - \frac14 \left[1- \frac{r^2}{R^2_s}\right] c^2dt^2 +\left[1-\frac{r^2}{R^2_s}\right]^{-1}dr^2 +r^2d\theta^2+r^2\sin^2\theta d\varphi^2 \qquad \text{ for } r < R_s. \ \ \ \ \ (2)

Both metrics have a singularity at {r=R_s}, which is called the event horizon:

\displaystyle d\tau = \left[1-\frac{R_s}{r}\right]^{1/2} dt \rightarrow 0, \quad d\tilde r= \left[1-\frac{R_s}{r}\right]^{-1/2} dr \rightarrow \infty \text{ for } r \rightarrow R_s^+, \ \ \ \ \ (3)

\displaystyle d\tau = \frac12 \left[1-\frac{r^2}{R_s^2}\right]^{1/2} dt \rightarrow 0, \quad d\tilde r=\left[1-\frac{r^2}{R_s^2}\right]^{-1/2} dr \rightarrow \infty \text{ for } r \rightarrow R_s^-. \ \ \ \ \ (4)

Both (3) and (4) show that time freezes at {r=R_s}, and there is no motion crossing the event horizon:

\displaystyle \tau_1-\tau_2 = d\tau =0\quad \text{ implies } \quad \tilde r (\tau_1) -\tilde r(\tau_2) = d \tilde r =0.

Consequently the black hole enclosed by the event horizon {r=R_s} is closed: Nothing gets inside a black hole, and nothing gets out of the black hole either.

Black holes are filled

We now demonstrate that black holes are filled. Suppose there were a body of matter field with mass {M} trapped inside a ball of radius {R < R_s}. Then in the vacuum region {R< r < R_s}, the Schwarzschild solution would be valid, which leads to non-physical imaginary time and non-physical imaginary distance:

\displaystyle d\tau = i \left|1-\frac{R_s}{r}\right|^{1/2} dt, \qquad d\tilde r= i \left|1-\frac{R_s}{r}\right|^{-1/2} dr \quad \text{ for } \quad R<r<R_s.

Also, when {R< R_s}, the TOV metric is given by

\displaystyle ds^2= -\left[ \frac32 \left(1-\frac{R_s}{R}\right)^{1/2} - \frac12 \left( 1- \frac{r^2 R_s}{R^3}\right)^{1/2} \right]^2 c^2dt^2 +\left(1-\frac{r^2R_s}{R^3}\right)^{-1}dr^2 +r^2d\theta^2+r^2\sin^2\theta d\varphi^2 \qquad \text{ for } r < R. \ \ \ \ \ (5)

Then both time and radial distance would become imaginary near {r=R}, and this is clearly non-physical.
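The sign argument is elementary and can be verified directly. The sketch below (our illustration, with {R_s=1} and a hypothetical radius {r=1/2}) shows that in the would-be vacuum region {r<R_s} the factor {1-R_s/r} is negative, so its square root, and hence {d\tau} and {d\tilde r}, is purely imaginary.

```python
# Illustrative check: for R < r < R_s the Schwarzschild factor 1 - R_s/r
# is negative, so the proper-time element picks up a factor of i.
import cmath

Rs = 1.0
r = 0.5                           # a radius inside the would-be vacuum region
factor = 1 - Rs / r               # = -1.0, negative for r < Rs
dtau_per_dt = cmath.sqrt(factor)  # purely imaginary square root
print(dtau_per_dt)                # 1j
print(dtau_per_dt.real == 0.0)    # True: no real part
```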

This observation clearly demonstrates that the black hole is filled. In fact, we have proved the following black hole theorem:

Blackhole Theorem (Ma-Wang, 2014) Assume the validity of the Einstein theory of general relativity. Then the following assertions hold true:

  1. black holes are closed: matter can neither enter nor leave their interiors,
  2. black holes are innate: they are born neither from the explosion of cosmic objects nor from gravitational collapse, and
  3. black holes are filled and incompressible; if the matter field is non-homogeneously distributed in a black hole, then there must be sub-blackholes in its interior.

This theorem leads to a view of the structure and geometry of black holes drastically different from that of the classical theory.

3. Singularity at {R_s} is physical

A basic mathematical requirement for a partial differential equation system on a Riemannian manifold to generate correct mathematical results is that the local coordinate system that is used to express the system must have no singularity.

The Schwarzschild solution is derived from the Einstein equations under the spherical coordinate system, which has no singularity for {r>0}. Consequently, the singularity of the Schwarzschild solution at {r=R_s} must be intrinsic to the Einstein equations, and is not caused by the particular choice of the coordinate system. In other words, the singularity at {r=R_s} is real and physical.

4. Mistakes of the classical view

Many writings on the modern theory of black holes take the wrong viewpoint that the singularity at {r=R_s} is a coordinate singularity and hence non-physical. This mistake can be explained from the following two aspects:

A. Mathematically forbidden coordinate transformations are used. Classical transformations such as those of Eddington and Kruskal are singular, and are therefore not valid for removing the singularity at the Schwarzschild radius. Consider, for example, the Kruskal coordinates involving

\displaystyle u= t-r_\ast, \quad v=t + r_\ast, \qquad r_\ast = r +R_s \ln \left(\frac{r}{R_s}-1\right).

This coordinate transformation is singular at {r=R_s}, since {r_\ast} diverges to {-\infty} as {r\rightarrow R_s^+}.
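The divergence of the tortoise coordinate is immediate to verify symbolically. The sketch below (our illustration, in units where {R_s=1}) confirms that {r_\ast = r + R_s\ln(r/R_s-1)} blows up as {r\rightarrow R_s^+}, so the transformation is indeed singular at the event horizon.

```python
# Symbolic check (illustrative) that the tortoise coordinate
# r_* = r + R_s ln(r/R_s - 1) diverges as r -> R_s^+.  Units: R_s = 1.
import sympy as sp

r = sp.symbols('r', positive=True)
r_star = r + sp.log(r - 1)               # tortoise coordinate with R_s = 1
print(sp.limit(r_star, r, 1, dir='+'))   # -oo
```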

It is mathematically clear that by using singular coordinate transformations, any singularity can be either removed or created at will.

In fact, what is hidden in these invalid transformations, and often goes unnoticed, is that the resulting coordinate systems, such as the Kruskal coordinates, are themselves singular at {r=R_s}:

all the coordinate systems, such as the Kruskal and Eddington-Finkelstein coordinates, that are derived by singular coordinate transformations, are singular and are mathematically forbidden.

B. Confirmation bias. Another likely reason for the perception that a black hole attracts everything nearby is a fixed attachment (confirmation bias) to the Newtonian black hole picture. Deep down, people wanted the attraction produced by the Newtonian theory, and sought the “proofs” needed for what they already believed.

In summary, the classical theory of black holes is essentially the Newton theory of black holes. The correct theory, following the Einstein theory of relativity, is given in the black hole theorem above.

Tian Ma & Shouhong Wang
