Saturday, March 9, 2024

Derivations, CAS, Lean, and Assumptions in Physics

Initially the Physics Derivation Graph documented expressions as LaTeX. Then SymPy was added to validate steps (is each step self-consistent?) and dimensionality (is each expression dimensionally consistent?). 

Recently I learned that Lean could be used to prove each step in a derivation. One difference between a Computer Algebra System (e.g., SymPy) and Lean is whether "a = b  --> a/b = 1" is accepted as a valid step -- it isn't when b is zero. Lean catches that edge case; SymPy does not. 
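A minimal Lean 4 sketch of the point (assuming Mathlib's real numbers and its div_self lemma): the division step cannot even be stated without supplying a proof that b is nonzero.

```lean
import Mathlib.Algebra.Order.Field.Basic

-- SymPy simplifies a/b to 1 given a = b; Lean refuses unless you
-- explicitly supply the hypothesis hb : b ≠ 0.
example (a b : ℝ) (h : a = b) (hb : b ≠ 0) : a / b = 1 := by
  rw [h, div_self hb]
```

Removing `hb` makes the proof fail, which is exactly the guardrail a CAS lacks.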

While Lean proofs sound like the final possible refinement, there are two additional complications that Lean does not address. 

Challenge: Bounded ranges of applicability

In classical mechanics the relation between momentum, mass, and velocity is "p = m v". That holds when "v << c". Near the speed of light we need to switch to relativistic mass, 

m = m_rest / sqrt(1 - (v^2)/(c^2)).

The boundary between "v << c" and "v ~ c" is usually set by the context being considered. 

One response for users of Lean would be to always use the "correct" relativistic equation, even when "v << c."  A more conventional approach among physicists is to write

p = m v, where v << c

then drop the "v << c" clause and rely on context.
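One way to keep the "v << c" clause explicit in Lean is to carry it as a hypothesis on every theorem about the classical relation. This is a hedged sketch (assuming Mathlib; the names and the threshold c/100 are illustrative choices standing in for "context", not standard definitions):

```lean
import Mathlib.Data.Real.Basic

-- Classical momentum, valid only in the non-relativistic regime.
def momentum (m v : ℝ) : ℝ := m * v

-- Claims about classical momentum carry the regime as a hypothesis,
-- so the bound is never silently dropped:
theorem momentum_linear_in_v (m v c : ℝ) (h_nonrel : |v| < c / 100) :
    momentum m (2 * v) = 2 * momentum m v := by
  unfold momentum
  ring
```

The hypothesis h_nonrel is not used by the algebra, but it documents (and machine-checks at every call site) the regime in which the statement is claimed to apply.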


Challenge: Real versus Float versus experimental characterization

Lean forces you to characterize numbers as Real, Integer, or Complex. This presents a problem for numerical simulations, which typically use something like a 64-bit float representation.

In thermodynamics we assume the number of particles involved is sufficiently large that we focus on the behavior of the ensemble rather than individual particles. The imprecision of floats is not correct, but neither is the infinite precision assumed by Real numbers. 
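A quick Python illustration of the gap between floats and Reals (this is standard IEEE-754 double-precision behavior, not specific to any particular simulation):

```python
# 64-bit floats violate identities that hold for Real numbers.
print(0.1 + 0.2 == 0.3)                         # False: rounding error
print((1e16 + 1.0) - 1e16)                      # 0.0: the 1.0 is absorbed entirely
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))   # False: addition is not associative
```

A proof over the Reals says nothing about whether these identities survive in the simulation's arithmetic.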


Example applications of Lean proofs needing bounds on values

Math doesn't have convenient ways of indicating "finite precision, as set by the Planck scale."  The differential element used in calculus cannot actually go to zero, but we use that concept because it works at the scales we are used to. 

Physicists make simplifying assumptions that sometimes ignore reality (e.g., assuming continuous media when particles are discrete). Then again the assumption that particles are discrete is also a convenient fiction that ignores the wavefunction of quantum mechanics. 

Lean can be used to prove derivations in classical mechanics, but to be explicit about the bounds of those proofs we'd also need to indicate "v << c" and "assume space is Euclidean." 

For molecular dynamics, another constraint to account for is "temperature << 1E10 Kelvin," or whatever temperature at which the atoms break down into a plasma. 

Distinguishing contexts (classical mechanics versus quantum, classical versus relativistic, conventional gas versus plasma) seems important so that we know when a claim proven in Lean is applicable. 

Saturday, March 2, 2024

dichotomy of assumptions

In Physics there are some assumptions that form a dichotomy:

  • is the speed of light constant or variable?
  • is the measure of energy discrete or continuous?

In the dichotomy of assumptions, one of the two assumptions is reflective of reality, and the other is an oversimplification. The oversimplification is related to reality by assumptions, constraints, and limits. 

(I define "oversimplification" as the extension of useful assumptions to incorrect regions.)

Another case where oversimplification is the link between domains is quantum physics and (classical) statistical physics. Quantum particles are either Fermions (half-odd-integer spin) or Bosons (integer spin), but that is practically irrelevant for large ensembles of particles at room temperature. The aspects that get measured at one scale (e.g., particle velocity) are related to but separate from metrics at another scale (e.g., temperature, entropy). Mathematically this transition manifests as the switch from summation to integration.
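The sum-to-integral switch can be sketched for a single particle in the canonical ensemble (a standard textbook form, added here for illustration):

```latex
% Discrete sum over quantum states -> integral over classical phase
% space, valid when the level spacing is much less than k_B T:
Z = \sum_n e^{-\beta E_n}
\quad\longrightarrow\quad
Z \approx \frac{1}{h^3} \int d^3x \, d^3p \; e^{-\beta H(x,p)}
```

The factor 1/h^3 is the leftover fingerprint of the quantum domain inside the classical expression.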


So what? 
This is a new-to-me category: derivations that span domains. What constitutes a domain is set by the assumptions that form its boundaries, and oversimplification is how to cross those boundaries. 

What boundaries should the Physics Derivation Graph transgress? What oversimplifications are adjacent?

The evidence of dissonance (e.g., the precession of Mercury's perihelion under Newtonian gravitation versus relativity, the deflection of starlight; source) is not relevant for bridging domains. These are illustrations of the oversimplification.

Update 2024-03-10: on the page https://en.wikipedia.org/wiki/Phase_space#Quantum_mechanics

"by expressing quantum mechanics in phase space (the same ambit as for classical mechanics), the Weyl map facilitates recognition of quantum mechanics as a deformation (generalization) of classical mechanics, with deformation parameter ħ/S, where S is the action of the relevant process. (Other familiar deformations in physics involve the deformation of classical Newtonian into relativistic mechanics, with deformation parameter v/c; or the deformation of Newtonian gravity into general relativity, with deformation parameter Schwarzschild radius/characteristic dimension.)
 
Classical expressions, observables, and operations (such as Poisson brackets) are modified by ħ-dependent quantum corrections, as the conventional commutative multiplication applying in classical mechanics is generalized to the noncommutative star-multiplication characterizing quantum mechanics and underlying its uncertainty principle."
See also https://en.wikipedia.org/wiki/Wigner%E2%80%93Weyl_transform#Deformation_quantization