Law of Gravity and Gravitational Radiation

Tian Ma & Shouhong Wang, Radiations and Potentials of Four Fundamental Interactions, Hal-preprint, hal-01616874, October 10, 2017

Talk at the 2017 Midwest Relativity Meeting, October 13, 2017, University of Michigan, Ann Arbor

The above paper addresses radiation and field particles of the four fundamental interactions, demonstrating that each individual interaction possesses two basic attributes:

  • field particles and their radiation, and
  • the interaction potential and force formulas.

In this blog post, we examine the law of gravity and the gravitational field particle and its radiation; see also the talk given at the 2017 Midwest Relativity Meeting.

1. Gravitational field equations

The Einstein theory of general relativity is the most profound scientific theory in recorded human history. The theory is built on two first principles: the principle of equivalence (PE) and the principle of general relativity (PGR). In essence, PE amounts to saying that spacetime is a four-dimensional Riemannian manifold {(\mathcal M, \{g_{\mu\nu}\})} with the metric being the gravitational potential.

The PGR is a symmetry principle, and says that the law of gravity is the same (covariant) under all coordinate systems. In other words, the Lagrangian action of gravity, called the Einstein-Hilbert action, is invariant under all coordinate transformations.

The Einstein-Hilbert functional (1) is uniquely dictated by this profound and simple-looking symmetry principle, together with the simplicity of the laws of Nature, and is given as follows:

\displaystyle L_{\text{EH}}=\int_{M}\bigg[R+\frac{8\pi G}{c^4}S\bigg]\sqrt{-g}\mbox{d}x. \ \ \ \ \ (1)

Indeed, in Riemannian geometry, the only invariant quantity satisfying the principle of general relativity and containing second-order derivatives of {\{g_{\mu\nu}\}} is the scalar curvature {R}.

The dark matter and dark energy phenomena make it inevitable to modify the Einstein general theory of relativity. Such a modification needs to preserve the following basic physical requirements:

  • conservation of energy-momentum,
  • inclusion of dark matter and dark energy effect, and
  • preservation of Einstein’s two principles: PE and PGR.

We have shown that, under these basic requirements, the unique route for altering the Einstein general theory of relativity is through the principle of interaction dynamics (PID), which takes the variation of the Einstein-Hilbert action subject to the energy-momentum conservation constraint. This leads to the following new gravitational field equations; see [Tian Ma & Shouhong Wang, Gravitational field equations and theory of dark matter and dark energy, Discrete and Continuous Dynamical Systems, Ser. A, 34:2 (2014), pp. 335-366; see also arXiv:1206.5078]:

\displaystyle R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=-\frac{8\pi G}{c^4}T_{\mu\nu}-\nabla_\mu\Phi_\nu. \ \ \ \ \ (2)

Also, we have shown that PID is a direct consequence of the presence of dark energy and dark matter, is required by the presence of the Higgs field for the weak interaction, and is a consequence of the quark confinement phenomenon for the strong interaction; see [Tian Ma & Shouhong Wang, Mathematical Principles of Theoretical Physics, Science Press, 524pp., 2015].

2. Gravitational field particle

The gravitational field particle is described by the dual field {\{\Phi_\mu\}} in (2), and the governing radiation equations are

\displaystyle \nabla^\mu\nabla_\mu\Phi_\nu=-\frac{8\pi G}{c^4}\nabla^\mu T_{\mu\nu}, \ \ \ \ \ (3)

where {T_{\mu\nu}} stands for the energy-momentum tensor of the visible matter, and {\{\Phi_\mu\}} is a massless, spin-1, electrically neutral boson.
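
For the reader's convenience, here is a short check (under the usual smoothness assumptions) of how (3) follows from (2): applying the contravariant divergence {\nabla^\mu} to (2) and using the contracted Bianchi identity {\nabla^\mu(R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R)=0} gives

\displaystyle 0=-\frac{8\pi G}{c^4}\nabla^\mu T_{\mu\nu}-\nabla^\mu\nabla_\mu\Phi_\nu,

which is exactly the radiation equation (3).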

In fact, the gravitational field particle {\{\Phi_\mu\}} represents the dark matter that we have been searching for, and the energy that {\{\Phi_\mu\}} carries is the dark energy. Equation (3) gives the field equations for dark matter and dark energy. The gravitational effect of the field particle {\{\Phi_\mu\}} is manifested through its mutual coupling and interaction with the gravitational potential {\{g_{\mu\nu}\}} through the field equations (2), leading to both attractive and repulsive behavior of gravity, which is exactly the dark matter and dark energy phenomena.

3. Gravitational force formula

By the gravitational field equations (2), we derive the following approximate gravitational force formula:

\displaystyle F(r)=mMG\bigg[-\frac{1}{r^2}-\frac{k_0}{r}+k_1r\bigg],\quad k_0=4\times 10^{-18}\mbox{km}^{-1},\quad k_1=10^{-57}\mbox{km}^{-3}.
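
The following is a minimal numerical sketch, not taken from the paper, that simply evaluates this formula over a range of length scales to show where the force changes sign from attractive to repulsive; the constants {k_0} and {k_1} are those given above, and the prefactor {mMG} is set to 1 purely for illustration.

```python
# Evaluate F(r) = m M G [ -1/r^2 - k0/r + k1*r ] to locate the scale at which
# gravity switches from attractive (F < 0) to repulsive (F > 0).
# Illustrative only: the prefactor m*M*G is set to 1; r is in km.
import numpy as np

K0 = 4e-18   # km^-1
K1 = 1e-57   # km^-3

def force(r_km, mMG=1.0):
    """Approximate gravitational force; only its sign matters here."""
    return mMG * (-1.0 / r_km**2 - K0 / r_km + K1 * r_km)

# Scan from solar-system scales (~1e8 km) up to cluster scales (~1e20 km).
for r in np.logspace(8, 20, 13):
    sign = "attractive" if force(r) < 0 else "repulsive"
    print(f"r = {r:.1e} km  ->  {sign}")
```

With these constants the sign change occurs at roughly {r\sim\sqrt{k_0/k_1}\approx 6\times 10^{19}} km, about 2 Mpc, far beyond galactic scales.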

 

Tian Ma & Shouhong Wang, October 13, 2017


New Interpretations of Wave Functions in Quantum Mechanics

Tian Ma & Shouhong Wang, Quantum Mechanism of Condensation and High Tc Superconductivity, 38 pages, October 8, 2017, hal-01613117

In this post, we describe the new interpretation of quantum mechanical wave functions introduced in the above paper.

In classical quantum mechanics, a micro-particle is described by a complex-valued wave function {\Psi: \Omega \rightarrow \mathbb C}, satisfying a wave equation such as the Schrödinger equation with an external interaction potential {V(x)}:

\displaystyle i\hbar\frac{\partial \Psi}{\partial t}=-\frac{\hbar^2}{2m}\Delta\Psi+V(x)\Psi,\quad x\in\Omega, \ \ \ \ \ (1)

where {\Omega \subset \mathbb R^3} is the region that the particle occupies, and {m} is the mass of the particle. The Schrödinger equation conserves energy, and the wave function {\Psi} can be expressed as

\displaystyle \Psi=e^{-iEt/\hbar}\psi(x), \qquad \psi=|\psi| e^{i \varphi}, \ \ \ \ \ (2)

where {E} is the energy, and {\psi} is the time-independent wave function, satisfying

\displaystyle -\frac{\hbar^2}{2m}\Delta\psi+V(x)\psi=E\psi. \ \ \ \ \ (3)

The classical Born statistical interpretation of quantum mechanics amounts to saying that without constraints, the motion of a micro-particle is random and there is no definite trajectory of motion. Also, {|\psi(x)|^2} stands for the probability density of the particle appearing at the point {x}. The Born interpretation of the wave function is treated as a fundamental postulate of quantum mechanics. It leads to the classical Einstein-Bohr debates, and is the origin of the absurdities associated with the interpretation of quantum mechanics.

The key observation for the new interpretation is that

\displaystyle \frac{\hbar}{m}\nabla\varphi(x) \ \ \ \ \ (4)

can be regarded as the velocity field of the particles, and the wave function {\psi} is the field function for the motion of all particles with the same mass in the same class determined by the external potential {V(x)}. More precisely, we have the following new interpretation of quantum mechanical wave functions:

New Interpretation of Wave Functions

  1. Under the external potential field {V(x)}, the wave function {\psi} is the field function for the motion of all particles with the same mass in the same class determined by the external potential {V(x)}. In other words, it is not the wave function of a particular particle in the classical sense;
  2. When a particle is observed at a particular point {x_0\in\Omega}, then the motion of the particle is fully determined by the solution of the following motion equation with initial position at {x_0}:

    \displaystyle \begin{aligned} &\frac{\mbox{d}x}{\mbox{d}t}=\frac{\hbar}{m}\nabla\varphi(x), \\ & x(0)=x_0, \end{aligned} \ \ \ \ \ (5)

    where {\varphi} is the phase of the wave function {\psi} in (2);

  3. With {\psi} being the field function,

    \displaystyle |\psi(x)|^2=\mbox{distribution density of particles at } x; \ \ \ \ \ (6)

  4. The energy {E} in (3) represents the average energy level of the particles and can be written as

    \displaystyle E=\int_{\Omega}\bigg[\frac{\hbar^2}{2m}|\nabla|\psi||^2+\frac{\hbar^2}{2m}|\psi|^2|\nabla\varphi(x)|^2 +V(x)|\psi|^2\bigg] \mbox{d}x, \ \ \ \ \ (7)

    where, in the integrand on the right-hand side, the first term represents the non-uniform distribution potential of the particles, the second term is the average kinetic energy, and the third term is the potential energy of the external field. The term involving {\nabla|\psi|} is characteristic of quantum mechanics; there is no such term in classical mechanics.
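
A short consistency check, using only the polar decomposition in (2): writing {\psi=|\psi|e^{i\varphi}} gives {|\nabla\psi|^2=|\nabla|\psi||^2+|\psi|^2|\nabla\varphi|^2}; multiplying (3) by {\bar\psi}, integrating over {\Omega}, and integrating by parts (assuming the normalization {\int_\Omega|\psi|^2\mbox{d}x=1} and vanishing boundary terms) then yields

\displaystyle E=\int_{\Omega}\bigg[\frac{\hbar^2}{2m}|\nabla\psi|^2+V(x)|\psi|^2\bigg]\mbox{d}x =\int_{\Omega}\bigg[\frac{\hbar^2}{2m}|\nabla|\psi||^2+\frac{\hbar^2}{2m}|\psi|^2|\nabla\varphi|^2+V(x)|\psi|^2\bigg]\mbox{d}x,

which is (7).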

In summary, our new interpretation says that { \psi=|\psi| e^{i \varphi} } is the common wave function for all particles in the same class determined by the external potential {V(x)}, {|\psi(x)|^2} represents the distribution density of the particles, and { \frac{\hbar}{m} \nabla \varphi } is the velocity field of the particles. The trajectories of the motion of the particles are then dictated by this velocity field. The observed particles are the particles in the same class described by the same wave function, rather than a specific particle in the sense of classical quantum mechanics.
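
As a toy illustration, not from the paper, the sketch below integrates the trajectory equation (5) for a hypothetical plane-wave phase {\varphi(x)=k\cdot x}, for which the velocity field {\frac{\hbar}{m}\nabla\varphi} is simply the constant {\hbar k/m}; the wave vector, mass, and step sizes are chosen only for illustration.

```python
# Toy integration of dx/dt = (hbar/m) * grad(phi) for the assumed phase
# phi(x) = k . x, so the velocity field is the constant hbar*k/m.
# All parameter values are illustrative only.
import numpy as np

HBAR = 1.054571817e-34            # J*s
M = 9.1093837015e-31              # kg (electron mass, for illustration)
K = np.array([1.0e10, 0.0, 0.0])  # hypothetical wave vector, 1/m

def grad_phi(x):
    """Gradient of the assumed phase phi(x) = k . x (constant here)."""
    return K

def trajectory(x0, dt=1e-18, steps=1000):
    """Forward-Euler integration of dx/dt = (hbar/m) * grad(phi)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (HBAR / M) * grad_phi(x)
    return x

print("velocity field (m/s):", HBAR * K / M)        # ~1.16e6 m/s along x
print("position after 1 fs (m):", trajectory([0.0, 0.0, 0.0]))
```

For a non-trivial phase, only `grad_phi` changes; the trajectory is still completely determined by the initial observation point {x_0}, which is the content of item 2 above.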

This is an entirely different interpretation from the classical Bohr interpretation. This new interpretation of wave functions does not alter the basic theory of quantum mechanics; instead, it offers a new understanding of quantum mechanics and plays a fundamental role in the quantum theory of condensed matter physics and quantum physics.

It is worth mentioning that the Landau school was the first to notice that the relation between the superfluid velocity {v_s} and the wave function {\psi=|\psi| e^{i \varphi}} of the condensate is given by (4); see (26.12) on page 106 of [E. Lifshitz and L. Pitaevskii, Statistical Physics Part 2, Landau and Lifshitz Course of Theoretical Physics vol. 9, 1980]. However, they failed to make the important connection between (4) and the basic interpretation of quantum mechanics.

Tian Ma & Shouhong Wang


Statistical Theory of Heat

Tian Ma & Shouhong Wang, Statistical Theory of Heat, Indiana University ISCAM Preprint #1711, 37 pages, August 26, 2017

The above paper presents a new statistical theory of heat based on the statistical theory of thermodynamics and on the recent developments of quantum physics.

Main Motivations

First, by classical thermodynamics, for a thermodynamic system, the thermal energy {Q_0} is given by

\displaystyle Q_0=ST. \ \ \ \ \ (1)

Here {T} is the temperature of the system and {S} is the entropy given by the Boltzmann formula:

\displaystyle S=k\ln W, \ \ \ \ \ (2)

where {k} is the Boltzmann constant, and {W} is the number of microscopic configurations of the system. It is then clear that in modern thermodynamics there is no physical heat carrier in either the temperature {T} or the entropy {S=k\ln W}, and hence no physical carrier for the thermal energy {Q_0=ST}. Due to this lack of a physical carrier, the nature of heat is still not fully understood.

The second motivation is the recent development of the photon cloud structure of electrons: the naked electron is surrounded by a shell-layer cloud of photons. Therefore, electrons and photons form a natural conjugate pair of physical carriers for emission and absorption, reminiscent of the conjugation between {T} and {S}.

Third, in view of {Q_0=ST} and the above conjugation between photons and electrons, a theory of heat has to make

connections between 1) the conjugation between electrons and photons, and 2) the conjugation between temperature and entropy.

The new theory in this paper provides precisely such a connection: at the equilibrium of absorption and radiation, the average energy level of the system remains unchanged and represents the temperature of the system; at the same time, the number (density) of photons in the sea of photons represents the entropy (entropy density) of the system.

Main Results

1. Energy level formula of temperature

In view of the above connection between the two conjugations, temperature must be associated with the energy levels of electrons, since it is an intensive physical quantity measuring a certain strength of heat, reminiscent of the basic characteristic of the energy levels of electrons. Notice also that there are abundant orbiting electrons and electron energy levels in atoms and molecules. Hence the energy levels of the orbiting electrons, together with the kinetic energy of the system particles, provide a faithful representation of the system particles.

We derive the following energy-level formula of temperature using the well-known Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein distributions:

  • for classical systems,

    \displaystyle kT= \sum_n\bigg(1-\frac{a_n}{N}\bigg)\frac{a_n\varepsilon_n} {N(1+\beta_n\ln\varepsilon_n)}, \ \ \ \ \ (3)

  • for systems of Bose particles,

    \displaystyle kT= \sum_n \bigg(1 + \frac{a_n}{g_n}\bigg) \frac{a_n\varepsilon_n}{N(1+\beta_n\ln\varepsilon_n)}; \ \ \ \ \ (4)

  • for systems of Fermi particles,

    \displaystyle kT= \sum_n \bigg(1 - \frac{a_n}{g_n}\bigg) \frac{a_n\varepsilon_n}{N(1+\beta_n\ln\varepsilon_n)}. \ \ \ \ \ (5)

Here {\varepsilon_n} are the energy levels of the system particles, {N} is the total number of particles, {g_n} are the degeneracy factors (allowed quantum states) of the energy level {\varepsilon_n}, and {a_n} are the distributions, representing the number of particles on the energy level {\varepsilon_n}.
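
As a purely illustrative sketch, not taken from the paper, the snippet below evaluates the classical formula (3) for a small set of hypothetical energy levels {\varepsilon_n}, occupations {a_n}, and parameters {\beta_n}; all of the numbers are made up solely to show how the weighted average over energy levels is formed.

```python
# Evaluate the classical energy-level formula of temperature,
#   k T = sum_n (1 - a_n/N) * a_n * eps_n / ( N * (1 + beta_n * ln(eps_n)) ),
# for hypothetical level data. Energies are given in units of k, so T is in kelvin.
import math

eps  = [100.0, 200.0, 400.0]      # energy levels eps_n / k (hypothetical, in K)
a    = [6.0e22, 3.0e22, 1.0e22]   # occupation numbers a_n (hypothetical)
beta = [0.0, 0.0, 0.0]            # material parameters beta_n (set to zero here)

N = sum(a)   # total number of particles

T = sum((1.0 - a_n / N) * a_n * e_n / (N * (1.0 + b_n * math.log(e_n)))
        for e_n, a_n, b_n in zip(eps, a, beta))

print(f"weighted average energy level, i.e. temperature T = {T:.1f} K")   # ~102 K
```

The Bose and Fermi formulas (4) and (5) differ only in the weight factor {(1\pm a_n/g_n)} and would require the degeneracies {g_n} as additional inputs.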

The above formulas amount to saying that temperature is simply the (weighted) average energy level of the system. These formulas also give us a better understanding of the nature of temperature.

In summary, the nature of temperature {T} is the (weighted) average energy level. Moreover, the temperature {T} is a function of the distributions {\{a_n\}} and the energy levels {\{\varepsilon_n\}}, with the parameters {\{\beta_n\}} reflecting the properties of the material.

2. Photon number formula of entropy

In view of the conjugate relations, since entropy {S} is an extensive variable, we need to characterize entropy by the number of photons in the photon gas between the system particles, or by the photon density of the photon gas in the system. Also, photons are bosons and obey the Bose-Einstein distribution. We can then make a connection between entropy and the number of photons and derive

\displaystyle S = kN_0 \left[ 1+ \frac{1}{kT} \sum_n \frac{\varepsilon_n a_n}{N_0}\right], \ \ \ \ \ (6)

where {\varepsilon_n} are the energy levels of the photons, {a_n} is the number of photons at the energy level {\varepsilon_n}, {N_0=\sum_n a_n} is the total number of photons between the particles in the system, and {\sum_n \frac{\varepsilon_n}{kT}a_n} represents the number of photons in the sense of average energy level.
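
The snippet below is a purely illustrative evaluation of formula (6), not from the paper; the temperature, photon energy levels, and photon numbers are hypothetical values chosen only to show how the formula is used.

```python
# Evaluate the photon-number formula of entropy,
#   S = k * N0 * [ 1 + (1/(k*T)) * sum_n eps_n * a_n / N0 ],
# for hypothetical photon data. All numbers are illustrative only.
K_B = 1.380649e-23   # Boltzmann constant, J/K

T   = 300.0                          # system temperature, K (hypothetical)
eps = [2.0e-21, 4.0e-21, 8.0e-21]    # photon energy levels eps_n, J (hypothetical)
a   = [5.0e20, 3.0e20, 1.0e20]       # photon numbers a_n at each level (hypothetical)

N0 = sum(a)                                        # total number of photons
E0 = sum(e_n * a_n for e_n, a_n in zip(eps, a))    # total photon energy

S = K_B * N0 * (1.0 + E0 / (K_B * T * N0))
print(f"N0 = {N0:.2e} photons,  E0 = {E0:.2f} J,  S = {S:.3e} J/K")
```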

It is worth mentioning that this new entropy formula is equivalent to the Boltzmann entropy formula {S=k\ln W}. However, the physical meaning has changed: the new formula (6) states explicitly that

the physical carrier of heat is the photons.

3. Temperature theorem

By the temperature formulas (3)-(5) and the entropy formula (6), we arrive immediately at the following results on temperature:

  • There are minimum and maximum values of temperature with {T_{\min}=0} and {T_{\max}};
  • When the number of photons in the system is zero, the temperature is at absolute zero; namely, the absence of photons in the system is the physical reason causing absolute zero temperature;
  • (Nernst Theorem) With temperature at absolute zero, the entropy of the system is zero;
  • With temperature at absolute zero, all particles fill the lowest energy levels.

4. Thermal energy formula

Thanks to the entropy formula (6), we derive immediately the following thermal energy formula:

\displaystyle Q_0 = ST = E_0 + k N_0 T, \ \ \ \ \ (7)

where {E_0=\sum_n a_n \varepsilon_n} is the total energy of the photons in the system, {\varepsilon_n} are the energy levels of the photons, {a_n} is the number of photons at the energy level {\varepsilon_n}, and {N_0} is the total number of photons in the system.
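
For completeness, here is the one-line algebraic check that (7) follows from (6) and the definition {E_0=\sum_n a_n\varepsilon_n}:

\displaystyle Q_0=ST=kN_0T\left[1+\frac{1}{kT}\sum_n\frac{\varepsilon_n a_n}{N_0}\right]=kN_0T+\sum_n \varepsilon_n a_n=E_0+kN_0T.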

The theory of heat presented in this paper is based on physical theories of the fundamental interactions, the photon cloud model of electrons, the first law of thermodynamics, the statistical theory of thermodynamics, the radiation mechanism of photons, and the energy level theory of micro-particles. The theory uses rigorous mathematics to reveal the physical essence of temperature, entropy and heat.

Tian Ma & Shouhong Wang


Dynamical Theory of Thermodynamical Phase Transitions

Tian Ma & Shouhong Wang, Dynamical Theory of Thermodynamical Phase Transitions, Preprint, July 14, 2017

This blog post is on the above paper. The goals of this paper, as well as of all other phase transition theories, are

  1. to determine the definition and types of phase transitions,
  2. to derive the critical parameters,
  3. to obtain the critical exponents, and
  4. to understand the mechanism and properties of phase transitions, such as supercooled and superheated states and the Andrews critical points, etc.

There are two routes for studying phase transitions: the first is Landau's approach using thermodynamic potentials, and the second is a microscopic approach using statistical theory. These routes complement each other. Our dynamical transition theory follows Landau's route, but is not a mean-field theory approach; it is, however, based on the thermodynamic potential.

1. For a thermodynamic system, there are three different levels of physical quantities: control parameters {\lambda}, the order parameters (state functions) {u=(u_1, \cdots, u_N)}, and the thermodynamic potential {F}. These are well-defined quantities, fully describing the system. The potential is a functional of the order parameters, and is used to represent the thermodynamic state of the system.

In a recent paper [Tian Ma & Shouhong Wang, Dynamic Law of Physical Motion and Potential-Descending Principle, Indiana University, The Institute for Scientific Computing and Applied Mathematics Preprint #1701] and a related previous blog post, we postulated a potential-descending principle (PDP) for statistical physics. Based on PDP, the dynamic equations of a thermodynamic system in a non-equilibrium state take the form

\displaystyle \frac{\text{d} u }{\text{d} t } = - A \delta F(u, \lambda), \ \ \ \ \ (1)

which leads to an equation for the deviation of the order parameter {u} from the basic equilibrium state {\bar u}:

\displaystyle \frac{du}{dt} = L_\lambda u +G(u,\lambda), \ \ \ \ \ (2)

where {L_\lambda} is a linear operator, and {G(u,\lambda) } is the nonlinear operator.

We obtain in this paper three theorems, providing a full theoretical characterization of thermodynamic phase transitions.

2. The first theorem states that as soon as linear instability occurs, the system always undergoes a dynamical transition to one of three types: continuous, catastrophic and random. This theorem offers detailed information on the phase diagram and the critical threshold {\lambda_c} in the control parameter {\lambda=(\lambda_1, \cdots, \lambda_N) \in \mathbb R^N}. The threshold is precisely determined by the following equation:

\displaystyle \beta_1(\lambda_1, \cdots, \lambda_N)=0. \ \ \ \ \ (3)

Here {\beta_1} is the first eigenvalue of the linear operator {L_\lambda} given by

\displaystyle L_\lambda w \equiv -\left(\frac{\partial^2F(\bar u; \lambda)}{\partial u_i \partial u_j}\right) w = \beta_1(\lambda) w, \ \ \ \ \ (4)

where {F} is the potential functional.
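
As a toy illustration, not from the paper, consider the hypothetical two-component potential {F(u;\lambda)=\frac{\lambda}{2}(u_1^2+u_2^2)+\frac14(u_1^2+u_2^2)^2} with basic equilibrium {\bar u=(0,0)}; its Hessian at {\bar u} is {\lambda I}, so {\beta_1(\lambda)=-\lambda} and equation (3) gives the critical threshold {\lambda_c=0}. The sketch below computes {\beta_1(\lambda)} exactly as in (4).

```python
# Toy illustration of the critical-threshold equation beta_1(lambda) = 0 for the
# hypothetical potential F(u; lam) = (lam/2)*(u1^2 + u2^2) + (1/4)*(u1^2 + u2^2)^2,
# whose basic equilibrium is u_bar = (0, 0) and whose Hessian there is lam * I.
import numpy as np

def hessian_F_at_equilibrium(lam):
    """Hessian of the toy potential at u_bar = (0, 0); quartic terms drop out."""
    return lam * np.eye(2)

def beta1(lam):
    """First (largest) eigenvalue of L_lam = -Hessian F(u_bar; lam), as in (4)."""
    return float(np.max(np.linalg.eigvalsh(-hessian_F_at_equilibrium(lam))))

for lam in [1.0, 0.5, 0.1, 0.0, -0.1, -0.5]:
    status = "stable, no transition" if beta1(lam) < 0 else "transition"
    print(f"lambda = {lam:+.2f}   beta_1 = {beta1(lam):+.2f}   {status}")
```

In this toy model the transition at {\lambda_c=0} is continuous (a supercritical pitchfork), the first of the three types listed above.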

3. The second theorem states that there are only first, second and third-order thermodynamic transitions, and it establishes a correspondence between the Ehrenfest classification and the dynamical classification. The Ehrenfest classification offers clear experimental quantities to determine the types of phase transition, while the dynamical classification provides a full theoretical characterization of the transition.

4. The last theorem states that both catastrophic and random transitions lead to saddle-node bifurcations, and that latent heat, superheated states and supercooled states always accompany the saddle-node bifurcations associated with first-order transitions.

5. We emphasize that the three theorems lead to three important diagrams: the phase diagram, the transition diagram and the dynamical diagram. These diagrams appear to be derivable only by the dynamical transition theory presented in this paper. In addition, our theory achieves the fourth goal of phase transition theories stated at the beginning of this blog post, which is hardly achievable by other existing theories.

Tian Ma & Shouhong Wang

 


Dynamical Law of Physical Motion Systems

Tian Ma & Shouhong Wang, Dynamical Law of Physical Motion and Potential-Descending Principle, The Institute for Scientific Computing and Applied Mathematics Preprint #1701, July 6, 2017

This is the last of three blog posts on the above paper. The first was on postulating the potential-descending principle (PDP) as the first principle of statistical physics, and the second was on irreversibility and the problems in the Boltzmann equation.

This blog focuses on the general dynamic law for all physical motion systems, and on the new variational principle with constraint-infinitesimals.

Dynamical Law for isolated physical motion systems

1. For each isolated physical motion system, there are a set of state functions {u=(u_1, \cdots, u_N)}, describing the states of the system, and a potential functional {F(u)}, representing a certain form of energy intrinsic to the underlying physical system. An important basic ingredient of modeling the underlying physical system is to determine the state functions and the potential functional.

Then it is physically clear that the rate of change of the state functions {{du}/ dt} should equal the driving force derived from the potential functional {F}.
More precisely, we postulate the following dynamical law of physical motion:

\displaystyle \frac{du}{dt } =- A \delta_{\mathcal L} F(u), \ \ \ \ \ (1)

 

where {A} is the coefficient matrix, and {- \delta_{\mathcal L} F(u)} is the variation with constraint infinitesimals, representing the driving force of the physical motion system, and {\mathcal L} is a differential operator representing the infinitesimal constraint.

2. We show that proper constraints should be imposed on the infinitesimals (variation elements) for the variation of the potential functional {F}. These constraints can be considered as generalized energy and/or momentum conservation laws for the infinitesimal variation elements. The variation under constraint infinitesimals is motivated in part by the recent work of the authors on the principle of interaction dynamics (PID) for the four fundamental interactions, which was required by the dark energy and dark matter phenomena, the Higgs fields, and the quark confinement. Basically, PID takes the variation of the Lagrangian actions for the four interactions, under energy-momentum conservation constraints. We refer interested readers to an earlier blog on PID, and the following book for details:

[Ma-Wang, MPTP] Tian Ma & Shouhong Wang, Mathematical Principles of Theoretical Physics, 524pp., Science Press, 2015

3. The linear operator {\mathcal L} in the dynamic law (1) takes the form of a differential operator {L} or its dual {L^\ast}. The constraints can be imposed either on the kernel {\mathcal N^\ast} of the dual operator {L^\ast} or on the range of the operator {L}, given as follows:

\displaystyle \langle \delta_{L^\ast}F(u),v\rangle_{H}=\frac{\mbox{d}}{\mbox{d}t}\bigg|_{t=0} F(u+tv), \qquad \forall\ L^\ast v=0,

\displaystyle \langle \delta_{L}F(u),\varphi\rangle_{H_1}=\frac{\mbox{d}}{\mbox{d}t}\bigg|_{t=0} F(u+tL\varphi), \qquad \forall\ \varphi\in H_1.

Using the orthogonal decomposition theorem below, we show that the above variations with constraint infinitesimals take the following form:

\displaystyle \delta_{L^\ast}F(u)=\delta F(u)+Lp,

\displaystyle \delta_{L}F(u)=L^\ast\delta F(u),

for some function {p}, which plays a similar role as the pressure in incompressible fluid flows. Here {\delta F(u)} is the usual derivative operator.

4. As an example, we consider the compressible Navier-Stokes equations:

\displaystyle \frac{\partial u}{\partial t} + (u \cdot \nabla) u = \frac{1}{\rho} \left[ \mu \Delta u - \nabla p + f\right],

\displaystyle \frac{\partial \rho}{\partial t} = -\text{div} \left(\rho u \right).

Let the constraint operator {L=- \nabla} be the gradient operator, with dual operator {L^\ast =\text{div}}.
Also, let the potential functional be

\displaystyle \Phi(u, \rho) = \int_{\Omega}\left[\frac{\mu}{2}|\nabla u|^2 - fu+ \frac12 \rho^2 \text{div}u\right]\text{d} x.

Then the above compressible Navier-Stokes equations are written as

\displaystyle \frac{\text{d} u}{\text{d} t} = - \frac{1}{\rho} \frac{\delta_{L^\ast} }{\delta u} \Phi (u, \rho),

\displaystyle \frac{d \rho}{d t} = - \frac{\delta}{\delta \rho} \Phi(u, \rho),

which is in the form of (1) with coefficient matrix {A=\text{diag}(1/\rho, 1)}. Also, the pressure is given by

\displaystyle p=p_0 - \lambda \text{div} u - \frac12 \rho^2.

Here

\displaystyle \frac{\text{d} u}{\text{d}t} = \frac{\partial u }{\partial t} + \frac{\partial u}{\partial x_i} \frac{\text{d} x_i}{\text{d} t } = \frac{\partial u }{\partial t} + (u \cdot \nabla ) u, \qquad \frac{d\rho}{dt}= \frac{\partial \rho}{\partial t} + u\cdot \nabla \rho.

5. There are two types of physical motion systems: dissipative systems and conservation systems. The coefficient matrix {A} is symmetric and positive definite if and only if the system is a dissipative system, and {A} is anti-symmetric if and only if the system is a conservation system.
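
A toy numerical illustration, not from the paper, of this dichotomy: take the hypothetical quadratic potential {F(u)=\frac12|u|^2} (so {\delta F(u)=u}) with the ordinary, unconstrained variation. With a symmetric positive definite {A}, the potential decreases along trajectories of (1); with an anti-symmetric {A}, it is conserved.

```python
# Dissipative vs. conservative coefficient matrices in du/dt = -A * delta F(u),
# for the toy potential F(u) = 0.5*|u|^2, so delta F(u) = u. Illustration only.
import numpy as np

def evolve(A, u0, dt=1e-3, steps=5000):
    """Forward-Euler integration of du/dt = -A u; returns F along the trajectory."""
    u = np.array(u0, dtype=float)
    F = [0.5 * u @ u]
    for _ in range(steps):
        u = u - dt * (A @ u)
        F.append(0.5 * u @ u)
    return F

A_dissipative  = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric positive definite
A_conservative = np.array([[0.0, 1.0], [-1.0, 0.0]])  # anti-symmetric

F_d = evolve(A_dissipative,  [1.0, -1.0])
F_c = evolve(A_conservative, [1.0, -1.0])

print(f"dissipative:  F(0) = {F_d[0]:.4f}  ->  F(T) = {F_d[-1]:.4f}  (decreases)")
print(f"conservative: F(0) = {F_c[0]:.4f}  ->  F(T) = {F_c[-1]:.4f}  (conserved up to the O(dt) error of the scheme)")
```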

Dynamical Law for Coupled Physical Motion Systems

Symmetry plays a fundamental role in understanding Nature. In [Ma-Wang, MPTP], we have demonstrated that for the four fundamental interactions, the Lagrangian actions are dictated by the general covariance (principle of general relativity), the gauge symmetry and the Lorentz symmetry; the field equations are then derived using PID as mentioned earlier.

For isolated motion systems, all energy functionals {F} obey certain symmetries such as {SO(n)} ({n=2, 3}) symmetry. In searching for laws of Nature, one inevitably encounters a system consisting of a number of subsystems, each of which enjoys its own symmetry principle with its own symmetry group. To derive the basic law of the coupled system, we postulated in [Ma-Wang, MPTP] the principle of symmetry-breaking (PSB), which is of fundamental importance for deriving physical laws for both fundamental interactions and motion dynamics: Physical systems at different levels obey different laws, which are dictated by their corresponding symmetries. For a system coupling different levels of physical laws, some of these symmetries must be broken.

In view of this principle, for a system coupling different subsystems, the motion equations become

\displaystyle \frac{\text{d} u}{\text{d}t} = -A\delta_{\mathcal L} F(u) + B(u), \ \ \ \ \ (2)

 

where {B(u)} represents the symmetry-breaking.

Orthogonal-Decomposition Theorem

To establish the needed mathematical foundation for the dynamical law of physical motion systems, we need to prove an orthogonal decomposition theorem, Theorem 6.1 in the paper. Basically, for a linear operator {L:H_1 \rightarrow H} between two Hilbert spaces {H_1} and {H}, with dual operator {L^\ast}, any {u \in H} can then be decomposed as

\displaystyle u=L\varphi+v,\quad L^\ast v=0,

where {L\varphi} and {v} are orthogonal in {H}.
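
A finite-dimensional sketch of this statement, purely for illustration: take {L} to be a matrix mapping {H_1=\mathbb R^3} into {H=\mathbb R^5}, with {L^\ast} its transpose; then {u=L\varphi+v} with {L^\ast v=0} is the decomposition of {u} into the range of {L} and its orthogonal complement, computable by least squares.

```python
# Finite-dimensional analogue of u = L*phi + v with L^T v = 0: project u onto
# range(L) by least squares; the residual v is orthogonal to range(L).
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((5, 3))   # L : H1 = R^3 -> H = R^5
u = rng.standard_normal(5)        # arbitrary element of H

phi, *_ = np.linalg.lstsq(L, u, rcond=None)   # phi minimizes |u - L phi|
v = u - L @ phi                               # residual component of u

print("||L^T v||         =", np.linalg.norm(L.T @ v))           # ~0: v in ker(L^T)
print("<L phi, v>        =", float((L @ phi) @ v))              # ~0: orthogonality
print("||u - L phi - v|| =", np.linalg.norm(u - L @ phi - v))   # exactly 0
```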

Summary

The dynamical law given by (1) and (2) is essentially known for motion systems in classical mechanics, quantum mechanics and astrophysics; see among others [Ma-Wang, MPTP] and [Landau and Lifshitz, Course of Theoretical Physics, Vol. 2, The Classical Theory of Fields].

Thanks to the variation with infinitesimal constraints, the law of fluid motion is now in the form of (1) and (2).

The potential-descending principle (PDP) addressed in the previous blog shows that non-equilibrium thermodynamical systems are governed by the dynamical law (1) and (2), as well.

In a nutshell,

the dynamical laws (1) and (2) are the laws for all physical motion systems.

We end this blog post by emphasizing that in deriving the dynamical law and the basic laws for the four fundamental interactions [Ma-Wang, MPTP], the following guiding principle of physics played a crucial role:

  • The heart of physics is to seek experimentally verifiable, fundamental laws and principles of Nature. In this process, physical concepts and theories are transformed into mathematical models:

    \displaystyle \text{ physical laws } = \text{ mathematical equations} \ \ \ \ \ (3)

  • the predictions derived from these models can be verified experimentally and conform to reality.

The true understanding of (3) is a subtle process, and is utterly important.

Tian Ma & Shouhong Wang


Irreversibility and Problems in the Boltzmann Equation

Tian Ma & Shouhong Wang, Dynamical Law of Physical Motion and Potential-Descending Principle, The Institute for Scientific Computing and Applied Mathematics Preprint #1701, July 6, 2017

The previous blog post postulated the potential-descending principle (PDP) as the first principle of statistical physics. The purpose of this blog post is to address the next two components of the above paper:

  1. to demonstrate that the PDP is the first principle to describe irreversibility of all thermodynamic systems;

  2. to examine the problems faced by the Boltzmann equation.

Irreversibility

First, irreversibility is a macroscopic property of thermodynamics, and must be described by the first-level physical quantities, the thermodynamic potentials, rather than by the second-level quantities (state functions) or the third-level quantities (control parameters).

Second, entropy {S} is a state function, which is a solution of the basic thermodynamic equations. The thermodynamic potential is a higher-level physical quantity than entropy, and consequently is the correct physical quantity for describing irreversibility for all thermodynamic systems.

Problems in Boltzmann Equation

Historically, great effort has been put into establishing a mathematical model of the entropy-increasing principle. The Boltzmann equation was introduced mainly for this purpose. Since there is no first principle from which to achieve this purpose, the Boltzmann equation was introduced as a phenomenological model with two specific goals:

 

  • to derive the entropy-ascending principle, and

  • to make the Maxwell-Boltzmann distribution (realistic equilibrium state of a dilute gaseous system) a steady-state solution.

 

However, the Boltzmann equation faces many problems:

First,  laws of physics (equations) should not use state functions, which are themselves governed by physical laws, as independent variables. The Boltzmann equation violates this simple physical rule by using the velocity field as an independent variable. Consequently, the Boltzmann equation is not a physical law.

Second, the Boltzmann equation uses the velocity field {v} as an independent variable. This leads to a new unknown function (force field) in the Boltzmann equation:

\displaystyle F=F(t, x, v), \ \ \ \ \ (1)

which is the sum of the external force and the force generated by the total interaction potential of all particles in the system, including the force due to collision.

Third, the main source of this force field {F} is the interaction between particles, and {F} is realistically not zero. Otherwise, all particles in the system would undergo uniform rectilinear motion, and in particular there would be no particle collisions (collisions are close-distance interactions).

Fourth, if

\displaystyle F\not=0, \ \ \ \ \ (2)

then the Maxwell distribution fails to be a steady-state solution of the Boltzmann equation. Since the Maxwell distribution is the realistic equilibrium state of dilute gaseous systems, the Boltzmann equation fails to describe the underlying physical phenomena in this important regard.

Fifth, in deriving the H-Theorem (i.e. the entropy-ascending principle), the following must be assumed:

\displaystyle F=F(t, x) \text{ \it is independent of } v. \ \ \ \ \ (3)

 

This is a non-physical, rather arbitrary assumption, since {F} includes the force generated by all interactions (including collision) and must be velocity dependent. Consequently the H-Theorem is not a natural consequence of the Boltzmann equation.

Sixth, ignoring the non-physical nature of assumption (2), the space of all steady-state solutions of the Boltzmann equation is five-dimensional. Namely, the general form of the steady-state solutions is

\displaystyle \bar \rho = e^{\alpha_0 + \alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3 + \alpha_4 v^2},

where {\alpha_i} ({i=0, \cdots, 4}) are constants, and {v=(v_1, v_2, v_3)}. This shows that each steady state is not stable, which does not fit reality.

Seventh, the entropy-increasing principle shows that a gaseous system in equilibrium has maximum entropy; i.e., the Maxwell distribution should be the maximizer of

\displaystyle S=-\int \rho\ln \rho dx dv + S_0, \ \ \ \ \ (4)

However, the maximizer of {S} is given by

\displaystyle \rho_0=e^{-1}.

Again, this is non-physical.
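
For the reader's convenience, here is the short computation behind this claim, as we read it (an unconstrained variation of (4)):

\displaystyle \frac{\delta S}{\delta\rho}=-\ln\rho-1=0 \quad\Longrightarrow\quad \rho_0=e^{-1},

a constant in both {x} and {v}, which bears no resemblance to the Maxwell distribution.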

In summary,

the Boltzmann equation is not a physical law, and it also fails to achieve its two original goals.

Tian Ma & Shouhong Wang


Potential-Descending Principle as the First Principle of Statistical Physics

Tian Ma & Shouhong Wang, Dynamical Law of Physical Motion and Potential-Descending Principle, The Institute for Scientific Computing and Applied Mathematics Preprint #1701, July 6, 2017

One main component of this paper is to postulate the following potential-descending principle (PDP) for statistical physics:

Potential-Descending Principle: For each thermodynamic system, there are order parameters {u=(u_1, \cdots, u_N)}, control parameters {\lambda}, and the thermodynamic potential functional {F(u; \lambda)}. For a non-equilibrium state {u(t; u_0)} of the system with initial state {u(0; u_0)=u_0}, we have the following properties:

1)  the potential {F(u(t; u_0); \lambda)} is decreasing:

\displaystyle \frac{\text{d} }{\text{d} t} F(u(t; u_0); \lambda) < 0 \qquad \forall t > 0;

2) the order parameters {u(t; u_0)} have a limit

\displaystyle \lim\limits_{t \rightarrow \infty}u(t; u_0) = \bar u;

3)  there is an open and dense set {\mathcal O} of initial data in the space of state functions, such that for any {u_0 \in \mathcal O}, the corresponding {\bar u} is a minimum of {F}, which is called an equilibrium of the thermodynamic system:

\displaystyle \delta F(\bar u;\lambda)= 0.

1. In classical thermodynamics, the order parameters (state functions) {u=(u_1, \cdots, u_N)}, the control parameters {\lambda}, and the thermodynamic potential (or potential, in short) {F} are all treated as state variables. This mixing of different levels of physical quantities leads to difficulties in the understanding and development of statistical physics.

One important feature of the PDP above is the distinction between different levels of thermodynamic quantities: thermodynamic potentials are functionals of the order parameters (state functions), and order parameters are functions of the control parameters. Potentials are first-level physical quantities, order parameters are second-level, and control parameters are third-level.

2. In classical thermodynamics, the first and second laws are treated as the first principles, from which one derives other statistical properties of thermodynamic systems. One perception is that the potential-decreasing property can be derived from the first and second laws. However, in such derivations there is a hidden assumption that at equilibrium there is a free variable in each of the pairs (entropy {S}, temperature {T}) and (generalized force {f}, displacement {X}). Here the free variables correspond to the order parameters. We discovered that this assumption is mathematically equivalent to the potential-descending principle. As an example, consider the internal energy of a thermodynamic system; classical theory asserts that the first and second laws are given by

\displaystyle dU \le \frac{\partial U}{\partial S} dS + \frac{\partial U}{\partial X}dX, \ \ \ \ \ (1)

where the equality represents the first law, describing the equilibrium state, and the inequality represents the second law for non-equilibrium states. However, there is a hidden assumption in (1) that {S} and {X} are free variables, and

\displaystyle \frac{\partial U}{\partial T} \le 0, \qquad \frac{\partial U}{\partial f}\le 0,

where, again, the equality is for the equilibrium state and the strict inequality is for non-equilibrium states. It is then clear that this assumption is mathematically equivalent to PDP. In other words, the potential-decreasing property cannot be derived if we treat the first and second laws as the only fundamental principles of thermodynamics. Moreover, we demonstrate that the potential-descending principle leads to both the first and second laws of thermodynamics. Therefore we reach the following conclusion:

the potential-descending principle is a more fundamental principle than the first and second laws.

3. For thermodynamic systems, PDP provides a dynamical law for the transformation of non-equilibrium states into equilibrium states: the dynamic equations of a thermodynamic system in a non-equilibrium state take the form

\displaystyle \frac{\text{d} u }{\text{d} t } = - A \delta F(u), \ \ \ \ \ (2)

where {A} is a symmetric and positive definite coefficient matrix.
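
As a toy numerical check, not from the paper, of properties 1)-3) above: for the hypothetical double-well potential {F(u;\lambda)=\frac14 u^4-\frac{\lambda}{2}u^2} with {\lambda=1} and {A=1}, integrating (2) shows that {F} decreases monotonically along the trajectory and that {u(t;u_0)} converges to a minimizer of {F}.

```python
# Toy check of the potential-descending principle for the hypothetical
# double-well potential F(u) = u^4/4 - u^2/2 (lambda = 1), with A = 1:
# along du/dt = -dF/du, F decreases and u(t) converges to a minimizer (+1 or -1).
LAM = 1.0

def F(u):
    return 0.25 * u**4 - 0.5 * LAM * u**2

def dF(u):
    return u**3 - LAM * u

def descend(u0, dt=1e-2, steps=2000):
    """Forward-Euler integration of du/dt = -A * delta F(u) with A = 1."""
    u, history = u0, [F(u0)]
    for _ in range(steps):
        u -= dt * dF(u)
        history.append(F(u))
    return u, history

u_bar, Fs = descend(u0=0.3)
print("equilibrium u_bar ~", round(u_bar, 6))        # -> ~ +1.0, a minimum of F
print("F decreasing along the flow:", all(b <= a + 1e-12 for a, b in zip(Fs, Fs[1:])))
```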

4. We use the entropy formula

\displaystyle S=k\ln W,

together with the minimum potential principle contained in PDP:

\displaystyle \delta F=0.

Here the potential is

\displaystyle F=U_0 -ST -\mu_1 N - \mu_2 E, \ \ \ \ \ (3)

where {U_0} is the internal energy, which is a constant, {N} is the number of particles, {\mu_1} and {\mu_2} are Lagrangian multipliers, and {E} is the total energy. For this system, the entropy {S} is an order parameter, and temperature {T} is a control parameter.

Then, following a procedure similar to that in Section 6.1 of [R. K. Pathria & Paul D. Beale, Statistical Mechanics, 3rd Edition, Elsevier, 2011], we can derive all three distributions: the Maxwell-Boltzmann distribution, the Fermi-Dirac distribution and the Bose-Einstein distribution. This shows that

the potential-descending principle is also the first principle of statistical mechanics.

Tian Ma & Shouhong Wang
