## Dynamical Theory of Thermodynamical Phase Transitions

Tian Ma & Shouhong Wang, Dynamical Theory of Thermodynamical Phase Transitions, Preprint, July 14, 2017

This blog post is about the above paper. The goals of this paper, as of any phase transition theory, are

1. to determine the definition and types of phase transitions,
2. to derive the critical parameters,
3. to obtain the critical exponents, and
4. to understand the mechanism and properties of phase transitions, such as supercooled and superheated states and the Andrews critical points, etc.

There are two routes for studying phase transitions: the first is Landau’s approach using thermodynamic potentials, and the second is a microscopic approach using statistical theory. These routes are complementary. Our dynamical transition theory follows Landau’s route, but is not a mean-field approach; it is based directly on the thermodynamic potential.

1. For a thermodynamic system, there are three different levels of physical quantities: control parameters ${\lambda}$, the order parameters (state functions) ${u=(u_1, \cdots, u_N)}$, and the thermodynamic potential ${F}$. These are well-defined quantities, fully describing the system. The potential is a functional of the order parameters, and is used to represent the thermodynamic state of the system.

In a recent paper [Tian Ma & Shouhong Wang, Dynamic Law of Physical Motion and Potential-Descending Principle, Indiana University, The Institute for Scientific Computing and Applied Mathematics Preprint \#1701] and a related previous blog, we postulated a potential-descending principle (PDP) for statistical physics. Importantly, based on PDP, the dynamic equation of a thermodynamic system in a non-equilibrium state takes the form

$\displaystyle \frac{\text{d} u }{\text{d} t } = - A \delta F(u, \lambda), \ \ \ \ \ (1)$

which leads to an equation for the deviation ${u}$ of the order parameter from the basic equilibrium state ${\bar u}$:

$\displaystyle \frac{du}{dt} = L_\lambda u +G(u,\lambda), \ \ \ \ \ (2)$

where ${L_\lambda}$ is a linear operator, and ${G(u,\lambda)}$ is a nonlinear operator.
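As a minimal numerical sketch of the gradient-flow form (1), assume a hypothetical scalar potential ${F(u;\lambda)=\frac{\lambda}{2}u^2+\frac14 u^4}$ (an illustration only, not the potential of any specific thermodynamic system) and integrate the descent equation directly:

```python
# Minimal sketch of the gradient flow (1) for the hypothetical potential
# F(u; lam) = (lam/2) u^2 + (1/4) u^4, so that delta F = lam*u + u^3
# and (1) reads du/dt = -A (lam*u + u^3).
def relax(lam, u0=0.5, A=1.0, dt=1e-3, steps=20000):
    u = u0
    for _ in range(steps):
        u -= dt * A * (lam * u + u**3)  # forward-Euler potential-descent step
    return u

u_stable = relax(lam=1.0)    # lam > 0: u descends to the minimum u = 0
u_broken = relax(lam=-1.0)   # lam < 0: u descends to the nonzero minimum u = 1
```

The sign of ${\lambda}$ decides which equilibrium the descent selects, which is the structure the linearized equation (2) is designed to analyze.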

We obtain in this paper three theorems, providing a full theoretical characterization of thermodynamic phase transitions.

2. The first theorem states that as soon as linear instability occurs, the system always undergoes a dynamical transition, of one of three types: continuous, catastrophic and random. This theorem offers detailed information on the phase diagram and the critical threshold ${\lambda_c}$ in the control parameter ${\lambda=(\lambda_1, \cdots, \lambda_N) \in \mathbb R^N}$, which is precisely determined by the following equation:

$\displaystyle \beta_1(\lambda_1, \cdots, \lambda_N)=0. \ \ \ \ \ (3)$

Here ${\beta_1}$ is the first eigenvalue of the linear operator ${L_\lambda}$ given by

$\displaystyle L_\lambda w \equiv -\left(\frac{\partial^2F(\bar u; \lambda)}{\partial u_i \partial u_j}\right) w = \beta_1(\lambda) w, \ \ \ \ \ (4)$

where ${F}$ is the potential functional.
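A toy illustration of the criterion (3)-(4), assuming a hypothetical two-component potential whose Hessian at the equilibrium ${\bar u = 0}$ is ${\lambda I}$ (e.g. ${F = \frac{\lambda}{2}(u_1^2+u_2^2)}$ plus higher-order terms):

```python
import numpy as np

# Sketch of criterion (3)-(4): for a hypothetical potential with Hessian
# lam * I at bar{u} = 0, the operator in (4) is L_lam = -Hessian, so its
# first eigenvalue is beta_1(lam) = -lam and beta_1 = 0 gives lam_c = 0.
def beta_1(lam):
    hessian = lam * np.eye(2)               # (d^2 F / du_i du_j) at bar{u}
    L_lam = -hessian                        # the linear operator in (4)
    return float(np.max(np.linalg.eigvalsh(L_lam)))  # first (largest) eigenvalue

assert beta_1(1.0) < 0    # lam > lam_c: equilibrium linearly stable
assert beta_1(-1.0) > 0   # lam < lam_c: linear instability, transition occurs
```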

3. The second theorem states that there are only first, second and third-order thermodynamic transitions; and establishes a corresponding relationship between the Ehrenfest classification and the dynamical classification. The Ehrenfest classification offers clear experimental quantities to determine the types of phase transition, while the dynamical classification provides a full theoretical characterization of the transition.

4. The last theorem states that both catastrophic and random transitions lead to saddle-node bifurcations, and that latent heat, superheated states and supercooled states always accompany the saddle-node bifurcation associated with first-order transitions.

5. We emphasize that the three theorems lead to three important diagrams: the phase diagram, the transition diagram and the dynamical diagram. These diagrams appear to be derivable only by the dynamical transition theory presented in this paper. In addition, our theory achieves the fourth goal of phase transition theories stated at the beginning of this blog post, which is hardly achievable by other existing theories.

Tian Ma & Shouhong Wang

## Dynamical Law of Physical Motion Systems

Tian Ma & Shouhong Wang, Dynamical Law of Physical Motion and Potential-Descending Principle, The Institute for Scientific Computing and Applied Mathematics Preprint #1701, July 6, 2017

This is the last of the three blog posts for the above paper. The first  was on postulating the potential-descending principle (PDP) as the first principle of statistical physics, and the second was on irreversibility and the problems in the Boltzmann equation.

This blog focuses on the general dynamic law for all physical motion systems, and on the new variational principle with constraint-infinitesimals.

Dynamical Law for Isolated Physical Motion Systems

1. For each isolated physical motion system, there are a set of state functions ${u=(u_1, \cdots, u_N)}$, describing the states of the system, and a potential functional ${F(u)}$, representing a certain form of energy intrinsic to the underlying physical system. An important basic ingredient of modeling the underlying physical system is to determine the state functions and the potential functional.

Then it is physically clear that the rate of change of the state functions ${{du}/dt}$ should equal the driving force derived from the potential functional ${F}$.
More precisely, we postulate the following dynamical law of physical motion:

$\displaystyle \frac{du}{dt } =- A \delta_{\mathcal L} F(u), \ \ \ \ \ (1)$

where ${A}$ is the coefficient matrix, and ${- \delta_{\mathcal L} F(u)}$ is the variation with constraint infinitesimals, representing the driving force of the physical motion system, and ${\mathcal L}$ is a differential operator representing the infinitesimal constraint.

2. We show that proper constraints should be imposed on the infinitesimals (variation elements) for the variation of the potential functional ${F}$. These constraints can be considered as generalized energy and/or momentum conservation laws for the infinitesimal variation elements. The variation under constraint infinitesimals is motivated in part by the recent work of the authors on the principle of interaction dynamics (PID) for the four fundamental interactions, which was required by the dark energy and dark matter phenomena, the Higgs fields, and the quark confinement. Basically, PID takes the variation of the Lagrangian actions for the four interactions, under energy-momentum conservation constraints. We refer interested readers to an earlier blog on PID, and the following book for details:

[Ma-Wang, MPTP] Tian Ma & Shouhong Wang, Mathematical Principles of Theoretical Physics, 524pp., Science Press, 2015

3. The linear operator ${\mathcal L}$ in the dynamic law (1) takes the form of a differential operator ${L}$ or its dual ${L^\ast}$. The constraints can be imposed either on the kernel ${\mathcal N^\ast}$ of the dual operator ${L^\ast}$ or on the range of the operator ${L}$, given as follows:

$\displaystyle \langle \delta_{L^\ast}F(u),v\rangle_{H}=\frac{\mbox{d}}{\mbox{d}t}\bigg|_{t=0} F(u+tv), \qquad \forall\ L^\ast v=0,$

$\displaystyle \langle \delta_{L}F(u),\varphi\rangle_{H_1}=\frac{\mbox{d}}{\mbox{d}t}\bigg|_{t=0} F(u+tL\varphi), \qquad \forall\ \varphi\in H_1.$

Using the orthogonal decomposition theorem below, we show that the above variations with constraint infinitesimals take the following form:

$\displaystyle \delta_{L^\ast}F(u)=\delta F(u)+Lp,$

$\displaystyle \delta_{L}F(u)=L^\ast\delta F(u),$

for some function ${p}$, which plays a similar role as the pressure in incompressible fluid flows. Here ${\delta F(u)}$ is the usual derivative operator.

4. As an example, we consider the compressible Navier-Stokes equations:

$\displaystyle \frac{\partial u}{\partial t} + (u \cdot \nabla) u = \frac{1}{\rho} \left[ \mu \Delta u - \nabla p + f\right],$

$\displaystyle \frac{\partial \rho}{\partial t} = -\text{div} \left(\rho u \right).$

Let the constraint operator ${L=- \nabla}$ be the gradient operator, with dual operator ${L^\ast =\text{div}}$.
Also, let the potential functional be

$\displaystyle \Phi(u, \rho) = \int_{\Omega}\left[\frac{\mu}{2}|\nabla u|^2 - fu+ \frac12 \rho^2 \text{div}u\right]\text{d} x.$

Then the above compressible Navier-Stokes equations are written as

$\displaystyle \frac{\text{d} u}{\text{d} t} = - \frac{1}{\rho} \frac{\delta_{L^\ast} }{\delta u} \Phi (u, \rho),$

$\displaystyle \frac{d \rho}{d t} = - \frac{\delta}{\delta \rho} \Phi(u, \rho),$

which is in the form of (1) with coefficient matrix ${A=\text{diag}(1/\rho, 1)}$. Also, the pressure is given by

$\displaystyle p=p_0 - \lambda \text{div} u - \frac12 \rho^2.$

Here

$\displaystyle \frac{\text{d} u}{\text{d}t} = \frac{\partial u }{\partial t} + \frac{\partial u}{\partial x_i} \frac{\text{d} x_i}{\text{d} t } = \frac{\partial u }{\partial t} + (u \cdot \nabla ) u, \qquad \frac{d\rho}{dt}= \frac{\partial \rho}{\partial t} + u\cdot \nabla \rho.$
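As a consistency check (assuming boundary terms vanish in the variation), the ${\rho}$-equation above reproduces mass conservation. From the potential ${\Phi}$,

$\displaystyle \frac{\delta \Phi}{\delta \rho} = \rho\, \text{div} u, \qquad \text{so} \qquad \frac{d\rho}{dt} = -\frac{\delta \Phi}{\delta \rho} = -\rho\, \text{div} u.$

Combined with ${d\rho/dt = \partial\rho/\partial t + u\cdot\nabla\rho}$, this gives ${\partial \rho/\partial t = -u\cdot\nabla\rho - \rho\,\text{div} u = -\text{div}(\rho u)}$, which is exactly the second of the compressible Navier-Stokes equations above.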

5. There are two types of physical motion systems: dissipative systems and conservation systems. The coefficient matrix ${A}$ is symmetric and positive definite if and only if the system is dissipative, and ${A}$ is anti-symmetric if and only if the system is a conservation system.

Dynamical Law for Coupled Physical Motion Systems

Symmetry plays a fundamental role in understanding Nature. In [Ma-Wang, MPTP], we have demonstrated that for the four fundamental interactions, the Lagrangian actions are dictated by the general covariance (principle of general relativity), the gauge symmetry and the Lorentz symmetry; the field equations are then derived using PID as mentioned earlier.

For isolated motion systems, all energy functionals ${F}$ obey certain symmetries such as ${SO(n)}$ (${n=2, 3}$) symmetry. In searching for laws of Nature, one inevitably encounters a system consisting of a number of subsystems, each of which enjoys its own symmetry principle with its own symmetry group. To derive the basic law of the coupled system, we postulated in [Ma-Wang, MPTP] the principle of symmetry-breaking (PSB), which is of fundamental importance for deriving physical laws for both fundamental interactions and motion dynamics: Physical systems in different levels obey different laws, which are dictated by their corresponding symmetries. For a system coupling different levels of physical laws, part of these symmetries must be broken.

In view of this principle, for a system coupling different subsystems, the motion equations become

$\displaystyle \frac{\text{d} u}{\text{d}t} = -A\delta_{\mathcal L} F(u) + B(u), \ \ \ \ \ (2)$

where ${B(u)}$ represents the symmetry-breaking.

Orthogonal-Decomposition Theorem

To establish the needed mathematical foundation for the dynamical law of physical motion systems, we need to prove an orthogonal decomposition theorem, Theorem 6.1 in the paper. Basically, for a linear operator ${L:H_1 \rightarrow H}$ between two Hilbert spaces ${H_1}$ and ${H}$, with dual operator ${L^\ast}$, any ${u \in H}$ can then be decomposed as

$\displaystyle u=L\varphi+v,\quad L^\ast v=0,$

where ${L\varphi}$ and ${v}$ are orthogonal in ${H}$.
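A finite-dimensional sketch of this decomposition, with ${L}$ a plain matrix standing in for the differential operator (this illustrates the statement, not the proof of Theorem 6.1):

```python
import numpy as np

# Finite-dimensional sketch of u = L*phi + v with L^T v = 0: the classical
# splitting H = range(L) (+) ker(L^T), a toy stand-in for Theorem 6.1.
rng = np.random.default_rng(0)
L = rng.standard_normal((5, 3))   # L : H1 = R^3 -> H = R^5
u = rng.standard_normal(5)

# Least squares makes L @ phi the orthogonal projection of u onto range(L)
phi, *_ = np.linalg.lstsq(L, u, rcond=None)
v = u - L @ phi

assert np.allclose(L.T @ v, 0.0)           # v lies in ker(L^T)
assert abs(float((L @ phi) @ v)) < 1e-10   # the two pieces are orthogonal in H
```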

Summary

The dynamical law given by (1) and (2) is essentially known for motion systems in classical mechanics, quantum mechanics and astrophysics; see among others [Ma-Wang, MPTP] and [Landau and Lifshitz, Course of Theoretical Physics, Vol. 2, The Classical Theory of Fields].

Thanks to the variation with infinitesimal constraints, the law of fluid motion is now in the form of (1) and (2).

The potential-descending principle (PDP) addressed in the previous blog shows that non-equilibrium thermodynamical systems are governed by the dynamical law (1) and (2), as well.

In a nutshell,

the dynamical laws (1) and (2) are the laws for all physical motion systems.

We end this blog by emphasizing that in deriving dynamical law and the basic laws for the four fundamental interactions [Ma-Wang, MPTP], the following guiding principle of physics played a crucial role:

• The heart of physics is to seek experimentally verifiable, fundamental laws and principles of Nature. In this process, physical concepts and theories are transformed into mathematical models:

$\displaystyle \text{ physical laws } = \text{ mathematical equations} \ \ \ \ \ (3)$

• the predictions derived from these models can be verified experimentally and conform to reality.

The true understanding of (3) is a subtle process, and is utterly important.

Tian Ma & Shouhong Wang

## Irreversibility and Problems in the Boltzmann Equation

Tian Ma & Shouhong Wang, Dynamical Law of Physical Motion and Potential-Descending Principle, The Institute for Scientific Computing and Applied Mathematics Preprint #1701, July 6, 2017

The previous blog is on postulating the potential-descending principle (PDP) as the first principle of statistical physics. The purpose of this blog is on the next two components of the above paper:

1. to demonstrate that the PDP is the first principle to describe irreversibility of all thermodynamic systems;

2. to examine the problems faced by the Boltzmann equation.

Irreversibility

First, irreversibility is a macroscopic property of thermodynamics, and must be described by the first-level physical quantities, the thermodynamic potentials, rather than by the second-level quantities (state functions) or the third-level quantities (control parameters).

Second, entropy ${S}$ is a state function, which is the solution of basic thermodynamic equations. Thermodynamic potential is a higher level physical quantity than entropy, and consequently, is the correct physical quantity for describing irreversibility for all thermodynamic systems.

Problems in Boltzmann Equation

Historically, great effort has been put on establishing a mathematical model of entropy-increasing principle. The Boltzmann equation is introduced mainly for this purpose. Since there is no first principle to achieve this purpose, the Boltzmann equation is introduced as a phenomenological model with two specific goals:

• to derive the entropy-ascending principle, and

• to make the Maxwell-Boltzmann distribution (realistic equilibrium state of a dilute gaseous system) a steady-state solution.

However, the Boltzmann equation faces many problems:

First,  laws of physics (equations) should not use state functions, which are themselves governed by physical laws, as independent variables. The Boltzmann equation violates this simple physical rule by using the velocity field as an independent variable. Consequently, the Boltzmann equation is not a physical law.

Second, the Boltzmann equation uses the velocity field ${v}$ as an independent variable. This leads to a new unknown function (force field) in the Boltzmann equation:

$\displaystyle F=F(t, x, v), \ \ \ \ \ (1)$

which is the sum of the external force and the force generated by the total interaction potential of all particles in the system, including the force due to collision.

Third, the main source of this force field ${F}$ is the interaction between particles, and ${F}$ is realistically non-zero. Otherwise, all particles in the system would move in uniform rectilinear motion, and in particular there would be no particle collisions (collisions are close-distance interactions).

Fourth, if

$\displaystyle F\not=0, \ \ \ \ \ (2)$

then the Maxwell distribution fails to be a steady-state solution of the Boltzmann equation. Since the Maxwell distribution is the realistic equilibrium state of dilute gaseous systems, the Boltzmann equation fails to describe the underlying physics in this important regard.

Fifth, in deriving the H-Theorem (i.e. the entropy-ascending principle), the following must be assumed:

$\displaystyle F=F(t, x) \text{ \it is independent of } v. \ \ \ \ \ (3)$

This is a non-physical, rather arbitrary assumption, since ${F}$ includes the force generated by all interactions (including collisions) and must be velocity-dependent. Consequently, the H-Theorem is not a natural consequence of the Boltzmann equation.

Sixth, even ignoring the non-physical assumption (3), the space of all steady-state solutions of the Boltzmann equation is five-dimensional. Namely, the general form of the steady-state solutions is

$\displaystyle \bar \rho = e^{\alpha_0 + \alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3 + \alpha_4 v^2},$

where ${\alpha_i}$ (${i=0, \cdots, 4}$) are constants, and ${v=(v_1, v_2, v_3)}$. This shows that no steady state is isolated, hence none is stable, which does not fit reality.

Seventh, the entropy-increasing principle shows that a gaseous system in equilibrium has the maximum entropy, i.e. the Maxwell distribution should be the maximizer of

$\displaystyle S=-\int \rho\ln \rho \, dx \, dv + S_0. \ \ \ \ \ (4)$

However, the maximum of ${S}$ is given by

$\displaystyle \rho_0=e^{-1}.$

Again, this is non-physical.
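The non-physical maximizer quoted above follows from an unconstrained variation of (4): pointwise,

$\displaystyle \frac{\delta S}{\delta \rho} = -\ln\rho - 1 = 0 \qquad\Longrightarrow\qquad \rho_0 = e^{-1},$

a constant independent of position, velocity, temperature and energy, which cannot be the Maxwell distribution.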

In summary,

the Boltzmann equation is not a physical law, and it also fails to achieve its two original goals.

Tian Ma & Shouhong Wang

## Potential-Descending Principle as the First Principle of Statistical Physics

Tian Ma & Shouhong Wang, Dynamical Law of Physical Motion and Potential-Descending Principle, The Institute for Scientific Computing and Applied Mathematics Preprint #1701, July 6, 2017

One main component of this paper is to postulate the following potential-descending principle (PDP) for statistical physics:

Potential-Descending Principle: For each thermodynamic system, there are order parameters ${u=(u_1, \cdots, u_N)}$, control parameters ${\lambda}$, and the thermodynamic potential functional ${F(u; \lambda)}$. For a non-equilibrium state ${u(t; u_0)}$ of the system with initial state ${u(0; u_0)=u_0}$, we have the following properties:

1)  the potential ${F(u(t; u_0); \lambda)}$ is decreasing:

$\displaystyle \frac{\text{d} }{\text{d} t} F(u(t; u_0); \lambda) < 0 \qquad \forall t > 0;$

2) the order parameters ${u(t; u_0)}$ have a limit

$\displaystyle \lim\limits_{t \rightarrow \infty}u(t; u_0) = \bar u;$

3)  there is an open and dense set ${\mathcal O}$ of initial data in the space of state functions, such that for any ${u_0 \in \mathcal O}$, the corresponding ${\bar u}$ is a minimum of ${F}$, which is called an equilibrium of the thermodynamic system:

$\displaystyle \delta F(\bar u;\lambda)= 0.$

1. In classical thermodynamics, the order parameters (state functions) ${u=(u_1, \cdots, u_N)}$, the control parameters ${\lambda}$, and the thermodynamic potential (or potential in short) ${F}$ are all treated as state variables. This way of mixing different levels of physical quantities leads to difficulties in the understanding and development of statistical physics.

One important feature of PDP above is the distinction between different levels of thermodynamic quantities: thermodynamic potentials are functionals of the order parameters (state functions), and order parameters are functions of the control parameters. Potentials are first-level physical quantities, order parameters are second-level, and control parameters are third-level.

2. In classical thermodynamics, the first and second laws are treated as the first principles, from which one derives the other statistical properties of thermodynamic systems. One perception is that the potential-decreasing property can be derived from the first and second laws. However, the derivation contains a hidden assumption: at equilibrium, there is a free variable in each pair (entropy ${S}$, temperature ${T}$) and (generalized force ${f}$, displacement ${X}$). Here the free variables correspond to order parameters. We discovered that this assumption is mathematically equivalent to the potential-descending principle. As an example, consider the internal energy of a thermodynamic system; classical theory asserts that the first and second laws are given by

$\displaystyle dU \le \frac{\partial U}{\partial S} dS + \frac{\partial U}{\partial X}dX, \ \ \ \ \ (1)$

where the equality represents the first law, describing the equilibrium state, and the inequality represents the second law for non-equilibrium states. However, there is a hidden assumption in (1) that ${S}$ and ${X}$ are free variables, and

$\displaystyle \frac{\partial U}{\partial T} \le 0, \qquad \frac{\partial U}{\partial f}\le 0,$

where, again, the equality holds for equilibrium states and the strict inequality for non-equilibrium states. It is then clear that this assumption is mathematically equivalent to PDP. In other words, the potential-decreasing property cannot be derived if we treat the first and second laws as the only fundamental principles of thermodynamics. Conversely, we demonstrate that the potential-descending principle leads to both the first and second laws of thermodynamics. Therefore we reach the following conclusion:

the potential-descending principle is a more fundamental principle than the first and second laws.

3. For thermodynamic systems, PDP provides a dynamical law for the transformation of non-equilibrium states into equilibrium states: the dynamic equation of a thermodynamic system in a non-equilibrium state takes the form

$\displaystyle \frac{\text{d} u }{\text{d} t } = - A \delta F(u), \ \ \ \ \ (2)$

where ${A}$ is a positive-definite, symmetric coefficient matrix.
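The descent property 1) of PDP then follows directly from (2): along any solution,

$\displaystyle \frac{d}{dt} F(u(t);\lambda) = \left\langle \delta F(u), \frac{du}{dt} \right\rangle = -\left\langle \delta F(u), A\, \delta F(u) \right\rangle < 0$

whenever ${\delta F(u)\neq 0}$, since ${A}$ is symmetric and positive definite.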

4. Using the entropy formula

$\displaystyle S=k\ln W,$

together with the minimum potential principle as part of PDP:

$\displaystyle \delta F=0.$

Here

$\displaystyle F=U_0 -ST -\mu_1 N - \mu_2 E, \ \ \ \ \ (3)$

where ${U_0}$ is the internal energy, which is constant, ${N}$ is the number of particles, ${\mu_1}$ and ${\mu_2}$ are Lagrange multipliers, and ${E}$ is the total energy. For this system, the entropy ${S}$ is an order parameter, and the temperature ${T}$ is a control parameter.

Then we can derive, with similar procedures as in Section 6.1 of [R. K. Pathria & Paul D. Beale, Statistical Mechanics, 3rd Edition, Elsevier, 2011], all three distributions: the Maxwell-Boltzmann distribution, the Fermi-Dirac distribution and the Bose-Einstein distribution. This shows that

the potential-descending principle is also the first principle of statistical mechanics.

Tian Ma & Shouhong Wang

## What does Einstein’s General Relativity Tell Us about Black Holes?

Presentation Slides

We have today given a presentation at IU Science Slam with the same title as this post. The main themes are to present the black hole theory derived from Einstein’s General Relativity, and to point out the essential differences between the black hole theory and the viewpoint on black holes from Newton’s Law. The final message is

nothing gets out of the black hole, and nothing gets inside a black hole either.

This is based on a recent paper [Tian Ma & Shouhong Wang,  Astrophysical dynamics and cosmology, J. Math. Study, 47:4 (2014), 305-378]; see also the previous blog post: Singularity at the Black-Hole Horizon is Physical.

## 1. Newtonian Viewpoint

Consider a massive body with mass ${M}$ inside a ball ${B_R}$ of radius ${R}$. The Schwarzschild radius is defined by ${R_s={2GM}/{c^2}.}$

Based on the Newtonian theory, a particle of mass ${m}$ will be trapped inside the ball ${B_R}$ and cannot escape from the ball, if its kinetic energy, ${mv^2/2}$, is smaller than gravitational energy:

$\displaystyle \frac{mv^2}{2} \le \frac{mc^2}{2} \le \frac{mMG}{r},$

which implies that

$\displaystyle r \le R_s =\frac{2GM}{c^2}.$

In other words, if the radius ${R}$ of the ball is less than or equal to ${R_s}$, then all particles inside the ball are permanently trapped inside the ball ${B_{R_s}}$.
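For scale, a quick numerical evaluation of ${R_s = 2GM/c^2}$ for the Sun (the numbers below are standard textbook constants, used only to illustrate the formula):

```python
# Schwarzschild radius R_s = 2GM/c^2 evaluated for the Sun
# (standard values of the constants; illustration of the formula only).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m s^-1
M_sun = 1.989e30     # solar mass, kg

R_s = 2 * G * M_sun / c**2
# R_s is about 2.95 km: the Sun would have to be compressed to that
# radius before the trapping condition r <= R_s could apply to it.
```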

It is clear that the main results of the Newton theory of black holes are as follows:

• the radius ${R}$ of the black hole may be smaller than the Schwarzschild radius ${R_s}$,
• all particles inside the ball are permanently trapped inside the ball ${B_{R_s}}$, and
• particles outside of a black hole ${B_{R_s}}$ can be sucked into the black hole ${B_{R_s}}$.

## 2. Einstein-Schwarzschild Theory

#### Black-Holes are closed

Now consider the case where ${R=R_s}$. Based on the Einstein field equations, in the exterior of the body, the Schwarzschild solution is given by

$\displaystyle ds^2= -\left[1-\frac{R_s}{r}\right]c^2dt^2+\left[1-\frac{R_s}{r}\right]^{-1}dr^2 +r^2d\theta^2+r^2\sin^2\theta d\varphi^2 \qquad \text{ for } r > R_s, \ \ \ \ \ (1)$

and in the interior the Tolman-Oppenheimer-Volkoff (TOV) metric is

$\displaystyle ds^2= - \frac14 \left[1- \frac{r^2}{R^2_s}\right] c^2dt^2 +\left[1-\frac{r^2}{R^2_s}\right]^{-1}dr^2 +r^2d\theta^2+r^2\sin^2\theta d\varphi^2 \qquad \text{ for } r < R_s. \ \ \ \ \ (2)$

Both metrics have a singularity at ${r=R_s}$, which is called the event horizon:

$\displaystyle d\tau = \left[1-\frac{R_s}{r}\right]^{1/2} dt \rightarrow 0, \quad d\tilde r= \left[1-\frac{R_s}{r}\right]^{-1/2} dr \rightarrow \infty \text{ for } r \rightarrow R_s^+, \ \ \ \ \ (3)$

$\displaystyle d\tau = \frac12 \left[1-\frac{r^2}{R_s^2}\right]^{1/2} dt \rightarrow 0, \quad d\tilde r=\left[1-\frac{r^2}{R_s^2}\right]^{-1/2} dr \rightarrow \infty \text{ for } r \rightarrow R_s^-. \ \ \ \ \ (4)$

Both (3) and (4) show that time freezes at ${r=R_s}$, and there is no motion crossing the event horizon:

$\displaystyle \tau_1-\tau_2 =d\tau =0\quad \text{ implies } \quad d \tilde r = \tilde r (\tau_1) -\tilde r(\tau_2) =0.$

Consequently the black hole enclosed by the event horizon ${r=R_s}$ is closed: Nothing gets inside a black hole, and nothing gets out of the black hole either.

#### Black holes are filled

We now demonstrate that black holes are filled. Suppose a body of matter with mass ${M}$ were trapped inside a ball of radius ${R < R_s}$. Then in the vacuum region ${R< r < R_s}$, the Schwarzschild solution would be valid, which leads to non-physical imaginary time and imaginary distance:

$\displaystyle d\tau = i \left|1-\frac{R_s}{r}\right|^{1/2} dt, \qquad d\tilde r= i \left|1-\frac{R_s}{r}\right|^{-1/2} dr \quad \text{ for } \quad R < r < R_s.$

Also, when ${R< R_s}$, the TOV metric is given by

$\displaystyle ds^2= -\left[ \frac32 \left(1-\frac{R_s}{R}\right)^{1/2} - \frac12 \left( 1- \frac{r^2 R_s}{R^3}\right)^{1/2} \right]^2 c^2dt^2$

$\displaystyle +\left(1-\frac{r^2R_s}{R^3}\right)^{-1}dr^2 +r^2d\theta^2+r^2\sin^2\theta d\varphi^2 \qquad \text{ for } r < R. \ \ \ \ \ (5)$

Then both time and radial distance would become imaginary near ${r=R}$, and this is clearly non-physical.

This observation clearly demonstrates that the black hole is filled. In fact, we have proved the following black hole theorem:

Black Hole Theorem (Ma-Wang, 2014) Assume the validity of the Einstein theory of general relativity; then the following assertions hold true:

1. black holes are closed: matter can neither enter nor leave their interiors,
2. black holes are innate: they are neither born in explosions of cosmic objects, nor through gravitational collapse, and
3. black holes are filled and incompressible; if the matter field is non-homogeneously distributed in a black hole, then there must be sub-black holes in its interior.

This theorem leads to a drastically different view of the structure and geometry of black holes from that of the classical theory.

## 3. Singularity at ${R_s}$ is physical

A basic mathematical requirement for a partial differential equation system on a Riemannian manifold to generate correct mathematical results is that the local coordinate system that is used to express the system must have no singularity.

The Schwarzschild solution is derived from the Einstein equations under the spherical coordinate system, which has no singularity for ${r>0}$. Consequently, the singularity of the Schwarzschild solution at ${r=R_s}$ must be intrinsic to the Einstein equations, and is not caused by the particular choice of the coordinate system. In other words, the singularity at ${r=R_s}$ is real and physical.

## 4. Mistakes of the classical view

Many writings on the modern theory of black holes take the wrong viewpoint that the singularity at ${r=R_s}$ is a coordinate singularity and is non-physical. This mistake can be viewed from the following two aspects:

A. Mathematically forbidden coordinate transformations are used. Classical transformations such as those of Eddington and Kruskal are singular, and therefore not valid for removing the singularity at the Schwarzschild radius. Consider, for example, the Kruskal coordinates involving

$\displaystyle u= t-r_\ast, \quad v=t + r_\ast, \qquad r_\ast = r +R_s \ln \left(\frac{r}{R_s}-1\right).$

This coordinate transformation is singular at ${r=R_s}$, since ${r_\ast}$ diverges (to ${-\infty}$) as ${r\rightarrow R_s^+}$.
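A quick numerical check of this divergence (a simple sketch in units with ${R_s = 1}$):

```python
import math

# The Kruskal construction uses the tortoise coordinate
#   r_* = r + R_s * ln(r / R_s - 1),
# which diverges as r -> R_s^+: the transformation itself is
# singular exactly at the horizon, as argued in the text.
def r_star(r, R_s=1.0):
    return r + R_s * math.log(r / R_s - 1.0)

values = [r_star(1.0 + eps) for eps in (1e-2, 1e-6, 1e-12)]
# values decrease without bound as r approaches R_s from above
assert values[0] > values[1] > values[2]
assert values[2] < -20
```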

It is mathematically clear that by using singular coordinate transformations, any singularity can be either removed or created at will.

In fact, many people have not realized that what is hidden in these wrong transformations is that all the deduced new coordinate systems, such as the Kruskal coordinates, are themselves singular at ${r=R_s}$:

all the coordinate systems, such as the Kruskal and Eddington-Finkelstein coordinates, that are derived by singular coordinate transformations, are singular and are mathematically forbidden.

B. Confirmation bias. Another likely reason for the perception that a black hole attracts everything nearby is the fixed thinking (confirmation bias) of the Newtonian black-hole picture. Deep down, people wanted the attraction produced by the Newtonian theory, and were trying to find the needed “proofs” for what they believed.

In summary, the classical theory of black holes is essentially the Newtonian theory of black holes. The correct theory, following Einstein’s theory of relativity, is given by the black hole theorem above.

## PID Weak Interaction Theory

This post presents a brief introduction to the field theory of weak interactions, developed recently by the authors, based only on a few fundamental principles:

• the action is the classical Yang-Mills action dictated uniquely by the ${SU(2)}$ gauge invariance and the Lorentz invariance; and
• the field equations and the Higgs fields are then derived using the principle of interaction dynamics (PID) and the principle of representation invariance (PRI).

The essence of the new field theory is that the Higgs fields are the natural outcome of the PID, which takes variation under energy-momentum conservation constraints. We call this new field theory the PID weak interaction theory.

This new theory leads to layered weak interaction potential formulas, provides a duality between the intermediate vector bosons ${Z}$, ${W^\pm}$ and their dual neutral Higgs ${H^0}$ and two charged Higgs ${H^\pm}$, and offers a first-principle approach to the Higgs mechanism.

## 1. PID field equations of the weak interaction

The new weak interaction theory based on PID and PRI was first discovered by the authors in [1]; see also the new book by the authors.

First, the weak interaction obeys the ${SU(2)}$ gauge symmetry, which, together with the Lorentz invariance and PRI, dictates the standard ${SU(2)}$ Yang-Mills action, as we explained in our previous post.

Then the field equations of the weak interaction and the Higgs fields are determined by PID and PRI, and are given by:

$\displaystyle \partial^{\nu}W^a_{\nu\mu}-\frac{g_w}{\hbar c}\varepsilon^a_{bc}g^{\alpha\beta}W^b_{\alpha\mu}W^c_{\beta}-g_wJ^a_{\mu} =\left[\partial_{\mu}-\frac{1}{4}\left(\frac{m_Hc}{\hbar}\right)^2x_{\mu}+\frac{g_w}{\hbar c}\gamma_bW^b_{\mu}\right] \phi^a, \ \ \ \ \ (1)$

$\displaystyle i\gamma^{\mu}\left[ \partial_{\mu}+i\frac{g_w}{\hbar c}W^a_{\mu}\sigma_a\right] \psi -\frac{mc}{\hbar}\psi =0, \ \ \ \ \ (2)$

where ${m_H}$ represents the mass of the Higgs particle, ${\sigma_a=\sigma^a\ (1\leq a\leq 3)}$ are the Pauli matrices, ${W^a_\mu}$ (${a=1, 2, 3}$) are the three ${SU(2)}$ gauge potentials, ${\phi^a}$ are the three Higgs fields, and

$\displaystyle W^a_{\mu\nu}=\partial_{\mu}W^a_{\nu}-\partial_{\nu}W^a_{\mu}+\frac{g_w}{\hbar c}\varepsilon^a_{bc}W^b_{\mu}W^c_{\nu},\qquad J^a_{\mu}=\bar{\psi}\gamma_{\mu}\sigma^a\psi.$
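As a quick consistency check of the notation above, the Pauli matrices ${\sigma_a}$ generate the ${SU(2)}$ algebra with the Levi-Civita symbol ${\varepsilon_{abc}}$ as structure constants. A minimal numerical verification (our own sketch, using numpy; not part of the paper):

```python
import numpy as np

# Pauli matrices sigma_1, sigma_2, sigma_3
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]

def eps(a, b, c):
    """Levi-Civita symbol epsilon_{abc} for indices 0, 1, 2."""
    return (a - b) * (b - c) * (c - a) / 2

def check_su2_algebra():
    """Verify [sigma_a, sigma_b] = 2i eps_{abc} sigma_c for all a, b."""
    for a in range(3):
        for b in range(3):
            comm = sigma[a] @ sigma[b] - sigma[b] @ sigma[a]
            rhs = 2j * sum(eps(a, b, c) * sigma[c] for c in range(3))
            if not np.allclose(comm, rhs):
                return False
    return True
```

The same ${\varepsilon^a_{bc}}$ appears in the nonlinear term of the field strength ${W^a_{\mu\nu}}$ above.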

## 2. Prediction of Charged Higgs

The right-hand side of (1) is due to PID, leading naturally to the introduction of three scalar dual fields. The left-hand side of (1) represents the intermediate vector bosons ${W^\pm}$ and ${Z}$, and the dual fields represent two charged Higgs ${H^\pm}$ (yet to be discovered) and the neutral Higgs ${H^0}$, the latter discovered at the LHC in 2012.

It is worth mentioning that the right-hand side of (1), involving the Higgs fields, is non-variational and cannot be generated by directly adding terms to the Lagrangian action. This may be the very reason why one has long had to rely on the logically inconsistent electroweak theory, as we explained in the previous post here.

## 3. First principle approach to spontaneous gauge symmetry-breaking and mass generation

PID naturally induces a spontaneous symmetry-breaking mechanism. By construction, the action obeys the ${SU(2)}$ gauge symmetry, PRI, and Lorentz invariance. Both Lorentz invariance and PRI are universal principles; consequently, the field equations (1) and (2) are covariant under these symmetries.

The gauge symmetry is spontaneously broken in the field equations (1), due to the presence of the term ${\frac{g_w}{\hbar c}\gamma_bW^b_{\mu} \phi^a}$ on the right-hand side, derived by PID. This term generates the mass for the vector bosons.
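Schematically (our own sketch in the notation of (1), not a derivation quoted from the paper), if the Higgs fields acquire a nonzero background value, ${\phi^a = \phi^a_0 + \tilde{\phi}^a}$ with ${\phi^a_0 \neq 0}$, this PID term splits as

$\displaystyle \frac{g_w}{\hbar c}\gamma_b W^b_{\mu}\,\phi^a = \frac{g_w}{\hbar c}\,\phi^a_0\,\gamma_b W^b_{\mu} + \frac{g_w}{\hbar c}\,\gamma_b W^b_{\mu}\,\tilde{\phi}^a,$

and the first term, linear in the gauge potentials, plays the same role in the field equations as a Proca mass term ${(mc/\hbar)^2 W_{\mu}}$, while the action itself remains gauge invariant.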

## 4. Weak charge and weak potential

As we mentioned in the previous posts here, elements of ${SU(N)}$ are expressed as ${ \Omega =e^{i\theta^a\tau_a}}$, where ${\{\tau_1, \cdots ,\tau_{N^2-1}\}}$ is a basis of the set of traceless Hermitian matrices and plays the role of a coordinate system in this representation. Consequently, an ${SU(N)}$ gauge theory should be independent of the choice of representation basis. This leads to the principle of representation invariance (PRI), which is simply a logical requirement for any ${SU(N)}$ gauge theory, and was first discovered by the authors in 2012.
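Concretely (our notation; a sketch of the requirement rather than a quotation from the paper), a change of representation basis is a linear transformation ${\tilde{\tau}_a = x_a^b \tau_b}$, and for the combination ${W^a_{\mu}\tau_a}$ to be basis-independent the gauge potentials must transform contravariantly:

$\displaystyle \tilde{\tau}_a = x_a^b\,\tau_b, \qquad \tilde{W}^a_{\mu} = \tilde{x}^a_b\, W^b_{\mu}, \qquad \tilde{W}^a_{\mu}\,\tilde{\tau}_a = W^a_{\mu}\,\tau_a,$

where ${(\tilde{x}^a_b)}$ is the inverse of ${(x_a^b)}$. PRI demands that the field equations be covariant under all such transformations.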

With PRI applied to the ${SU(2)}$ gauge theory of the weak interaction, two important physical consequences follow.

First, by PRI, the ${SU(2)}$ gauge coupling constant ${g_w}$ plays the role of weak charge, responsible for the weak interaction.

In fact, the weak charge concept can only be properly introduced by using PRI, and it is clear now that the weak charge is the source of the weak interaction.

Second, PRI induces an important ${SU(2)}$ constant vector ${\{\gamma_b\}}$. The components of this vector represent the portions distributed to the gauge potentials ${W_\mu^a}$ by the weak charge ${g_w}$. Hence the (total) weak interaction potential is given by the following PRI representation invariant

$\displaystyle W_{\mu}=\gamma_a W^a_{\mu}=(W_0,W_1,W_2,W_3), \ \ \ \ \ (3)$

and the weak charge potential is the temporal component ${W_0}$ of this total weak interaction potential, and the weak force is

$\displaystyle F_w=-g_w(\rho )\nabla W_0, \ \ \ \ \ (4)$

where ${g_w(\rho )}$ is the weak charge of a reference particle with radius ${\rho}$.
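In our notation (a small expansion of (4), not a formula quoted from the paper), for a spherically symmetric potential ${W_0 = W_0(r)}$ the gradient reduces to a radial derivative,

$\displaystyle F_w = -g_w(\rho )\nabla W_0 = -g_w(\rho )\,\frac{\text{d} W_0}{\text{d} r}\,\hat{r},$

so the sign of ${\text{d} W_0/\text{d} r}$ determines whether the force between two like weak charges is attractive or repulsive.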

## 5. Layered formulas for the weak interaction potential

The weak interaction is also layered, and from the field equations (1) and (2) we derive the following layered formulas:

$\displaystyle W_0 =g_w(\rho)e^{-kr}\left[\frac{1}{r}-\frac{B}{\rho}(1+2kr)e^{-kr}\right], \ \ \ \ \ (5)$

$\displaystyle g_w(\rho )=N\left(\frac{\rho_w}{\rho}\right)^3g_w, \ \ \ \ \ (6)$

where ${W_0}$ is the weak force potential of a particle with radius ${\rho}$ carrying ${N}$ weak charges ${g_w}$, one for each weakton, with ${g_w}$ taken as the unit of weak charge, ${\rho_w}$ is the weakton radius, ${B}$ is a parameter depending on the particle, and ${{1}/{k}=10^{-16}\text{ cm}}$ represents the force-range of the weak interaction.

The layered weak interaction potential formula (5) shows clearly that the weak interaction is short-ranged. It also shows that the weak interaction is repulsive and asymptotically free at short distances, and becomes attractive as the distance between the two particles increases.
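To make this repulsive-core, attractive-tail behavior of (5) concrete, here is a small numerical sketch in dimensionless units with ${k=1}$ (so ${r}$ is measured in units of the range ${1/k}$); the values ${g_w(\rho)=1}$ and ${B/\rho = 2}$ are illustrative assumptions, not values from the paper:

```python
import numpy as np

def W0(r, B_over_rho=2.0, k=1.0, g=1.0):
    """Layered weak potential (5), with illustrative parameter values."""
    return g * np.exp(-k * r) * (1.0 / r
                                 - B_over_rho * (1 + 2 * k * r) * np.exp(-k * r))

def weak_force(r, h=1e-6, **kw):
    """Radial weak force F_w = -dW0/dr (unit charge), by central difference."""
    return -(W0(r + h, **kw) - W0(r - h, **kw)) / (2 * h)

def sign_change_radius(a=0.1, b=1.5, tol=1e-10):
    """Bisect for the radius where W0 changes sign (repulsive core
    giving way to the attractive tail)."""
    fa = W0(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        if W0(m) * fa > 0:
            a = m
        else:
            b = m
        if b - a < tol:
            break
    return 0.5 * (a + b)
```

With these assumed parameters, `W0` is positive (repulsive) at small `r`, crosses zero at an intermediate radius, and is negative (attractive) beyond it, decaying exponentially at distances of a few ${1/k}$.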