where T is the period of oscillation and f is the frequency of oscillation.
A transformation would be to multiply both sides by f to get
f T = 1
Verification of a step using a Computer Algebra System like SymPy
The single step above could be verified using a Computer Algebra System like SymPy. The generic form of the inference rule is "multiply both sides of (LHS=RHS) by feed to get (LHS*feed=RHS*feed)". To show the inference rule was correctly applied, we want to show that
LHS_in*feed == LHS_out
and
RHS_in*feed == RHS_out
Another way to describe the equivalence is that the difference should be zero:
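That check can be sketched in SymPy as follows. The function name multiply_both_sides() comes from this post; the argument names and the concrete example values are illustrative assumptions.

```python
import sympy

def multiply_both_sides(lhs_in, rhs_in, feed, lhs_out, rhs_out) -> bool:
    """Verify the inference rule 'multiply both sides of (LHS=RHS) by feed'.

    Returns True when the difference between the expected and claimed
    output expressions simplifies to zero on both sides.
    """
    lhs_diff = sympy.simplify(lhs_in * feed - lhs_out)
    rhs_diff = sympy.simplify(rhs_in * feed - rhs_out)
    return lhs_diff == 0 and rhs_diff == 0

T, f = sympy.symbols("T f", positive=True)

# step: multiply both sides of T = 1/f by the feed f to get f T = 1
print(multiply_both_sides(T, 1/f, f, f*T, sympy.Integer(1)))  # True
```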
Wahoo! The step has been validated: SymPy shows the inference rule was applied correctly.
The cool part is that the "multiply_both_sides()" Python function is generic to any input expression. The same check can be used for many different steps that use the inference rule. Using SymPy we can gain confidence that the expressions associated with a step were correctly transformed.
Consistency of dimensionality using SymPy
In addition to evaluating the transformation of symbols in a step, we can verify the consistency of dimensions for each expression. That requires more than just the symbols -- the user will have to specify the dimensions of each symbol.
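For example, here is a minimal sketch using sympy.physics.units; the numeric values and the choice of units for T and f are assumptions, since in practice the user would supply them.

```python
# A minimal sketch of a dimensional-consistency check for T = 1/f
# using sympy.physics.units; the values below are placeholders.
from sympy.physics.units import second, hertz, convert_to

T_with_units = 4 * second     # user-specified: T is a time
f_with_units = 0.25 * hertz   # user-specified: f is a frequency

lhs = T_with_units
rhs = 1 / f_with_units

# Convert both sides to the same unit; if the dimensions disagreed,
# leftover units would remain after the conversion.
print(convert_to(lhs, second))  # 4*second
print(convert_to(rhs, second))  # 4.0*second
```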
Additional constraints are needed for reasoning about division by zero.
If you expect to avoid negative or zero frequency or period, you could define f and T to be "positive real numbers" (which exclude zero; non-negative reals include zero). This does define the context more precisely, but there is a price: we won't have nearly as many proofs for positive reals as we have for reals. The alternative is to add additional hypotheses as constraints. The latter approach (additional hypotheses) is favored.
import Mathlib.Data.Real.Basic
import Mathlib.Tactic -- import standard tactics, including Mathlib.Tactic.FieldSimp

theorem inversion_amultbeq1
    (a b : Real)
    (hb_ne_zero : b ≠ 0) :
    a = 1 / b ↔ a * b = 1 := by
  -- field_simp clears the denominator 'b' on the LHS,
  -- turning (a = 1 / b) into (a * b = 1) automatically.
  field_simp [hb_ne_zero]
I had been using Neo4j's property graph for the Physics Derivation Graph for a few years and recently stopped because the constantly running server consumes 1GB of memory. (That 1GB of RAM is unrelated to the size of the underlying database, which in my case was a few hundred nodes.)
I realized that what I want is a SQLite-like "offline" database that only uses noticeable CPU and RAM when there's a query. I'm OK with the database's initialization cost resulting in slower queries.
Search for Alternatives
https://jgrapht.org/ and NetworkX are graph analysis libraries rather than persistent databases, so they are irrelevant to the needs of the Physics Derivation Graph.
However, the most reliable approach may be to switch to SQLite and use LLMs to craft SQL equivalents of the Cypher-like queries. This is inelegant (I had been exposing read-only Cypher capabilities on the web) and requires coming up with a schema (something I have no experience with).
Convert Property Graph to Relational Database for SQL
Here's the prompt I provided Gemini 2.5 Pro:
Response from Gemini:
Converting a property graph to a relational model involves translating nodes into tables and edges into relationships (either through foreign keys or dedicated join tables). Here is a comprehensive SQLite schema designed to represent your Neo4j graph structure, balancing relational purity with the flexibility of the original graph.
Design Principles
Nodes as Tables: Each distinct node type (`derivation`, `step`, `expression`, etc.) will become its own table.
Properties as Columns: The properties of each node will become columns in the corresponding table. The node's `id` will serve as the `PRIMARY KEY`.
Edges as Relationships:
One-to-Many / One-to-One Edges: These are modeled using a foreign key column in the "many" or "child" table. For example, since a `step` belongs to one `derivation`, the `steps` table will have a `derivation_id` column.
Many-to-Many Edges: These are modeled using a separate "join table" (or "link table") that contains foreign keys to the two connected tables. For example, a `step` can have many `expression` inputs, and an `expression` can be an input to many `steps`. This requires a `step_inputs` join table.
Polymorphic Edges: The `HAS_SYMBOL` edge is polymorphic because it can connect to multiple different types of symbol nodes (`scalar`, `vector`, etc.). To handle this cleanly without losing foreign key integrity, we will create a dedicated join table for each specific relationship (e.g., `expression_scalars`, `expression_vectors`). A more "graph-like" but less relationally pure alternative is discussed in the rationale section.
SQLite Schema Definition
Here are the `CREATE TABLE` statements for the SQLite database.
Node Tables. These tables store the data for each type of node.
-- The top-level container for a mathematical derivation.
CREATE TABLE derivations (
id TEXT PRIMARY KEY,
author_name_latex TEXT,
name_latex TEXT,
abstract_latex TEXT,
reference_latex TEXT,
created_datetime TEXT NOT NULL
);
-- Represents a single logical step in a derivation.
-- It connects to a derivation (one-to-many) and an inference_rule (many-to-one).
CREATE TABLE steps (
id TEXT PRIMARY KEY,
derivation_id TEXT NOT NULL,
inference_rule_id TEXT NOT NULL,
sequence_index INTEGER NOT NULL, -- From the 'HAS_STEP' edge property
author_name_latex TEXT,
note_before_step_latex TEXT,
note_after_step_latex TEXT,
created_datetime TEXT NOT NULL,
FOREIGN KEY (derivation_id) REFERENCES derivations(id) ON DELETE CASCADE,
FOREIGN KEY (inference_rule_id) REFERENCES inference_rules(id) ON DELETE RESTRICT,
UNIQUE (derivation_id, sequence_index) -- A derivation cannot have two steps at the same index
);
-- Represents the mathematical content of an expression.
CREATE TABLE expressions (
id TEXT PRIMARY KEY,
author_name_latex TEXT,
name_latex TEXT,
description_latex TEXT,
reference_latex TEXT,
latex TEXT,
lean TEXT,
sympy TEXT
);
-- Represents feed-in values or assumptions for a step.
CREATE TABLE feeds (
id TEXT PRIMARY KEY,
author_name_latex TEXT,
sympy TEXT,
lean TEXT,
latex TEXT
);
-- Represents a rule of logic or mathematics used in a step.
CREATE TABLE inference_rules (
id TEXT PRIMARY KEY,
author_name_latex TEXT,
name_latex TEXT,
latex TEXT,
number_of_inputs INTEGER,
number_of_outputs INTEGER,
number_of_feeds INTEGER
);
-- Represents a mathematical operation like addition or differentiation.
CREATE TABLE operations (
id TEXT PRIMARY KEY,
author_name_latex TEXT,
name_latex TEXT,
description_latex TEXT,
reference_latex TEXT,
latex TEXT,
argument_count INTEGER
);
-- Represents a scalar value (a single number).
CREATE TABLE scalars (
id TEXT PRIMARY KEY,
author_name_latex TEXT,
name_latex TEXT,
description_latex TEXT,
reference_latex TEXT,
latex TEXT,
scope TEXT,
variable_or_constant TEXT,
domain TEXT,
dimension_length INTEGER,
dimension_time INTEGER,
dimension_mass INTEGER,
dimension_temperature INTEGER,
dimension_electric_charge INTEGER,
dimension_amount_of_substance INTEGER,
dimension_luminous_intensity INTEGER
);
-- Represents a vector.
CREATE TABLE vectors (
id TEXT PRIMARY KEY,
author_name_latex TEXT,
name_latex TEXT,
description_latex TEXT,
reference_latex TEXT,
latex TEXT
);
-- Represents a matrix.
CREATE TABLE matrices (
id TEXT PRIMARY KEY,
author_name_latex TEXT,
name_latex TEXT,
description_latex TEXT,
reference_latex TEXT,
latex TEXT
);
-- Represents a quantum mechanical operator.
CREATE TABLE quantum_operators (
id TEXT PRIMARY KEY,
author_name_latex TEXT,
name_latex TEXT,
description_latex TEXT,
reference_latex TEXT,
latex TEXT,
requires_arguments INTEGER -- Using INTEGER for BOOLEAN (0=false, 1=true)
);
-- Represents the numerical value and units for a scalar.
-- This has a one-to-one relationship with a scalar.
CREATE TABLE values_and_units (
id TEXT PRIMARY KEY,
scalar_id TEXT NOT NULL UNIQUE, -- The UNIQUE constraint enforces the 1-to-1 relationship
author_name_latex TEXT,
dimension_length_unit INTEGER,
dimension_time_unit INTEGER,
dimension_mass_unit INTEGER,
dimension_temperature_unit INTEGER,
dimension_electric_charge_unit INTEGER,
dimension_amount_of_substance_unit INTEGER,
dimension_luminous_intensity_unit INTEGER,
FOREIGN KEY (scalar_id) REFERENCES scalars(id) ON DELETE CASCADE
);
Relationship (Join) Tables. These tables model the many-to-many relationships defined by the edges.
-- Links steps to their input expressions.
CREATE TABLE step_inputs (
step_id TEXT NOT NULL,
expression_id TEXT NOT NULL,
sequence_index INTEGER NOT NULL,
PRIMARY KEY (step_id, sequence_index),
FOREIGN KEY (step_id) REFERENCES steps(id) ON DELETE CASCADE,
FOREIGN KEY (expression_id) REFERENCES expressions(id) ON DELETE CASCADE
);
-- Links steps to their output expressions.
CREATE TABLE step_outputs (
step_id TEXT NOT NULL,
expression_id TEXT NOT NULL,
sequence_index INTEGER NOT NULL,
PRIMARY KEY (step_id, sequence_index),
FOREIGN KEY (step_id) REFERENCES steps(id) ON DELETE CASCADE,
FOREIGN KEY (expression_id) REFERENCES expressions(id) ON DELETE CASCADE
);
-- Links steps to their feed-in values.
CREATE TABLE step_feeds (
step_id TEXT NOT NULL,
feed_id TEXT NOT NULL,
sequence_index INTEGER NOT NULL,
PRIMARY KEY (step_id, sequence_index),
FOREIGN KEY (step_id) REFERENCES steps(id) ON DELETE CASCADE,
FOREIGN KEY (feed_id) REFERENCES feeds(id) ON DELETE CASCADE
);
-- Join tables for the polymorphic 'HAS_SYMBOL' relationships
-- These tables link expressions and feeds to the various symbol types they contain.
-- Expression-to-Symbol Links
CREATE TABLE expression_symbols (
expression_id TEXT NOT NULL,
symbol_id TEXT NOT NULL,
symbol_type TEXT NOT NULL, -- e.g., 'scalar', 'vector', 'matrix', 'operation', 'quantum_operator'
PRIMARY KEY (expression_id, symbol_id),
FOREIGN KEY (expression_id) REFERENCES expressions(id) ON DELETE CASCADE
);
-- Feed-to-Symbol Links
CREATE TABLE feed_symbols (
feed_id TEXT NOT NULL,
symbol_id TEXT NOT NULL,
symbol_type TEXT NOT NULL, -- e.g., 'scalar', 'vector', 'matrix', 'operation', 'quantum_operator'
PRIMARY KEY (feed_id, symbol_id),
FOREIGN KEY (feed_id) REFERENCES feeds(id) ON DELETE CASCADE
);
Rationale and Explanation
Direct Relationships (`derivation` -> `step`, `step` -> `inference_rule`): The `HAS_STEP` edge connects one derivation to many steps. This is a classic one-to-many relationship, best modeled by adding a `derivation_id` foreign key to the `steps` table. The `sequence_index` from the edge property is also placed here. The same logic applies to the `HAS_INFERENCE_RULE` edge.
Step Inputs/Outputs/Feeds: A `step` can have multiple ordered inputs, outputs, and feeds. These are many-to-many relationships modeled with the join tables `step_inputs`, `step_outputs`, and `step_feeds`. The `sequence_index` property from the edge is a crucial part of the primary key to maintain order and uniqueness.
One-to-One Relationship (`scalar` -> `value_and_units`): The `HAS_VALUE` edge is modeled by adding a `scalar_id` foreign key to the `values_and_units` table. Adding a `UNIQUE` constraint on `scalar_id` ensures that each scalar can have at most one `value_and_units` entry, perfectly modeling the one-to-one relationship.
Handling Polymorphism (`HAS_SYMBOL`): This is the most complex relationship to model. An `expression` or a `feed` can contain many symbols of different types.
Chosen Approach: The schema uses two tables, `expression_symbols` and `feed_symbols`. Each table stores the ID of the source (`expression_id` or `feed_id`), the ID of the target symbol (`symbol_id`), and a `symbol_type` column to tell you which table to join with to get the symbol's details (e.g., `scalars`, `vectors`).
Advantage: This approach is pragmatic. It keeps the number of tables manageable.
Trade-off: It sacrifices full relational integrity because you cannot create a native `FOREIGN KEY` on `symbol_id` that points to five different tables. You would enforce this relationship at the application layer (e.g., when you retrieve an `expression`'s symbols, you read the `symbol_type` and then query the appropriate table).
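Below is a minimal sketch of that application-layer lookup using Python's built-in sqlite3 module. The table and column names come from the schema above; the function name and the type-to-table mapping dict are illustrative assumptions.

```python
# A minimal sketch of the application-layer join described above.
import sqlite3

SYMBOL_TABLES = {
    "scalar": "scalars",
    "vector": "vectors",
    "matrix": "matrices",
    "operation": "operations",
    "quantum_operator": "quantum_operators",
}

def symbols_for_expression(conn: sqlite3.Connection, expression_id: str) -> list:
    """Return the symbol rows referenced by an expression."""
    rows = conn.execute(
        "SELECT symbol_id, symbol_type FROM expression_symbols WHERE expression_id = ?",
        (expression_id,),
    ).fetchall()
    symbols = []
    for symbol_id, symbol_type in rows:
        # symbol_type decides which table to query; this is the integrity
        # check the database itself cannot enforce across five tables.
        table = SYMBOL_TABLES[symbol_type]
        symbols.append(
            conn.execute(f"SELECT * FROM {table} WHERE id = ?", (symbol_id,)).fetchone()
        )
    return symbols
```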
In my assessment, the SQL schema is inelegant compared to the property graph representation.
The use of `ON DELETE CASCADE` is a guess by the LLM and is not always the desired behavior. (`ON DELETE CASCADE` means that deleting a parent row automatically deletes the linked child rows.) The alternative is `ON DELETE RESTRICT` (or `NO ACTION`), which prevents deletion of the parent row as long as any child rows exist; the database returns an error instead. That behavior is used when you want to protect the child data.
The above tailored-to-PDG schema is complicated. A simpler schema like the one below incurs a higher query cost.
From Gemini I learned that the EAV (Entity-Attribute-Value) model is a normalized and traditional approach that is portable across almost any relational database.
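A minimal sketch of what such an EAV-style schema might look like is below (shown via Python's built-in sqlite3 module); the table and column names are illustrative assumptions, not part of Gemini's response. Every node property becomes its own row, so reassembling a node requires joins or multiple lookups -- that's the higher query cost mentioned above.

```python
# A minimal sketch of an EAV-style property-graph schema in SQLite.
import sqlite3

conn = sqlite3.connect("pdg_eav.sqlite")
conn.executescript("""
-- every node, regardless of type
CREATE TABLE IF NOT EXISTS nodes (
    id        TEXT PRIMARY KEY,
    node_type TEXT NOT NULL            -- e.g. 'derivation', 'step', 'expression'
);

-- one row per (node, attribute) pair, e.g. 'name_latex'
CREATE TABLE IF NOT EXISTS node_properties (
    node_id   TEXT NOT NULL,
    attribute TEXT NOT NULL,
    value     TEXT,
    PRIMARY KEY (node_id, attribute),
    FOREIGN KEY (node_id) REFERENCES nodes(id) ON DELETE CASCADE
);

-- every edge, regardless of type
CREATE TABLE IF NOT EXISTS edges (
    id        TEXT PRIMARY KEY,
    edge_type TEXT NOT NULL,            -- e.g. 'HAS_STEP', 'HAS_SYMBOL'
    source_id TEXT NOT NULL,
    target_id TEXT NOT NULL,
    FOREIGN KEY (source_id) REFERENCES nodes(id) ON DELETE CASCADE,
    FOREIGN KEY (target_id) REFERENCES nodes(id) ON DELETE CASCADE
);

-- one row per (edge, attribute) pair, e.g. 'sequence_index'
CREATE TABLE IF NOT EXISTS edge_properties (
    edge_id   TEXT NOT NULL,
    attribute TEXT NOT NULL,
    value     TEXT,
    PRIMARY KEY (edge_id, attribute),
    FOREIGN KEY (edge_id) REFERENCES edges(id) ON DELETE CASCADE
);
""")
conn.commit()
conn.close()
```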
"Long time derivation of the Boltzmann equation from hard sphere dynamics" by Yu Deng, Zaher Hani, Xiao Ma
https://arxiv.org/abs/2408.07818
"Hilbert's sixth problem: derivation of fluid equations via Boltzmann's kinetic theory" by Yu Deng, Zaher Hani, Xiao Ma
https://arxiv.org/abs/2503.01800
I'm currently paying $12/month for my VPS (virtual private server) that has 2GB of RAM and 25GB of storage. I had been paying $6/month previously, but including Neo4j bumped the memory usage to nearly 100%. (CPU usage is averaging around 10% and spikes to 30%.)
I have a few iterations of the source code for the webserver so I'll need to figure out which is actually in use. After logging into the VPS, I see I have two folders:
allofphysics.com
ui_v7_website_flask_json
Using `docker images` I see the latest image (from 7 months ago) is allofphysicscom-flask. That doesn't help me figure out which repo is in use.
allofphysics.com/docker-compose.yaml has neo4j, whereas ui_v7_website_flask_json/docker-compose.yaml does not. Therefore I'm currently operating out of allofphysics.com/docker-compose.yaml.
I have two options: either revert to the "v7" or disable neo4j in "allofphysics.com".
Going with option 1,
~/allofphysics.com$ docker compose down
~/allofphysics.com$ cd ~/ui_v7_website_flask_json
~/ui_v7_website_flask_json$ git pull
~/ui_v7_website_flask_json$ make
Turns out the packages are pretty old. The opencv package wasn't compatible. After I reverted all pip packages to known good version numbers I found the crypto libraries weren't happy. Eventually I was able to get a docker image built.
Next the certs weren't present so I copied those from ~/allofphysics.com/certs/ and that worked.
The LLM is not a database of facts. Historical events, dates, and places are not stored as exact references. LLMs generate their responses based on statistical probabilities derived from patterns in the training data.
The more widely documented something is, the better the LLM knows it
The LLM's knowledge of a topic is roughly proportional to that topic's representation on the Internet. An LLM is more reliable and detailed when discussing common knowledge.
Precise questions using relevant jargon with context yield useful output
Poorly worded questions that do not use domain-specific terminology are less likely to produce clear answers.
Do not trust citations
The LLM does not have citations hard-coded into the network. Citations are most likely hallucinations.
Decompose complex tasks and questions into a sequence of iterative prompts
There is a limited amount of "thinking" by the LLM per prompt, so simpler tasks are more likely to produce relevant answers.
Structure your question to produce a page or less of output
Producing a 200-page book from a single prompt devolves into hallucination after a few pages. Shorter answers are more likely to remain lucid, so phrase your question in a way that can be answered with a small amount of text.
LLMs default to the average
While LLM output can be creative (in unexpected ways), asking for exceptional insight usually yields the mundane.
Simplify your question to a one-shot prompt
Iterative questions are more likely to yield hallucinations
Delegation to an intern who doesn't learn
This can be confusing, as the LLM occasionally knows more than you do.
BLUF/tl;dr: this post describes several methods of requirements generation: "one-shot think hard and brainstorm", learning from others, iterative adversarial feedback, and formal methods.
As a solo hobby developer I get to do what feels right with regard to my project. That autonomy and freedom applies to both prioritization and scope. Part of the joy of working is based on being able to follow my curiosity, and to do so at a time and rate of my choosing.
When I work with another person on a shared goal, then there is value in identifying how to divide tasks. For example, by skill, by interests, by availability.
A vision distills into use cases, and these use cases determine requirements which determine tasks. Then sequencing and parallelization of tasks can happen. Let's refer to this as the "think hard and brainstorm" waterfall method. The success of waterfall relies on the ability of planners to identify all contingencies before taking action. Use of an LLM for generating requirements fits in this category as an alternative to thinking hard.
If someone else has a similar situation, learning from their requirements is a valid way of making progress. Plagiarism is acceptable; there is no need to be original.
The optimistic waterfall method described above assumes the incentives of the participants doing the tasks are aligned with the vision. If the participants doing tasks are looking for the easiest solution to the requirement, they may provide results that don't satisfy the vision.
If the folks satisfying a requirement may be adversarial, that can be accounted for in an iterative manner.
think hard and brainstorm to come up with an initial draft of requirements
provide the draft requirements to adversarial workers with the instruction, "provide a solution in a day." Their creativity will find the easiest result that technically meets the requirements but is insufficient.
Determine why the adversarial solutions (which do meet the requirements) don't satisfy the vision. Use that insight to develop better requirements.
Repeat the iterations until the requirements are "foolproof" for the sample pool of fools.
A third method of coming up with requirements is to use formal methods. For example,
"Program Derivation is the practice of beginning with a specification of a function, and by a series of mechanical steps, deriving an efficient implementation." (source: https://www.youtube.com/watch?v=JDqD6RZpnZA)
And here's an implementation in Python that I think provides that capability:
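The original script isn't reproduced here, so the following is a minimal sketch consistent with the output shown below; the structure, function names, and message formatting are assumptions.

```python
#!/usr/bin/env python3
"""add_one.py: add one to a user-provided value."""
import sys

def add_one(value: float) -> float:
    """Return the input value plus one."""
    return value + 1

def main() -> None:
    if len(sys.argv) != 2:
        print("Usage: python add_one.py <number>")
        sys.exit(1)
    user_input = sys.argv[1]
    try:
        value = float(user_input)
    except ValueError:
        print(f"Error: '{user_input}' is not a valid float.")
        print("Usage: python add_one.py <number>")
        sys.exit(1)
    result = add_one(value)
    # ":g" trims trailing zeros so "4" prints as 4 rather than 4.0
    print(f"{value:g} plus one is {result:g}")

if __name__ == "__main__":
    main()
```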
When I run the above script I get
$ python3 add_one.py 4
4 plus one is 5
$ python3 add_one.py 4.2
4.2 plus one is 5.2
$ python3 add_one.py cat
Error: 'cat' is not a valid float.
Usage: python add_one.py <number>
Next I'm going to intentionally add a few bugs and then ask how to prove the implementation has no bugs:
I've added three bugs in v2: a deterministic bug, a random bug, and a bug that depends on the user's environment. A brute-force test would be expensive but could identify the first two bugs.
There are a couple of problems with v1 of the program requirements to "add one to a user-provided value."
The input range (unstated) is negative infinity to positive infinity.
Python does not have a built-in limit for the size of integers. The maximum integer value is restricted only by the available memory of the system.
Time-out conditions are unspecified. So if the program doesn't respond for 5 minutes, the requirements have nothing to say about that.
Rewrite the program requirements to be more specific:
If the "computer" doing this calculation has a 1Hz CPU clock frequency with 1byte of RAM, that might result in the Python program being "right" but the hardware being inadequate.
Also, let's make explicit the assumption that we are operating in base 10 numbers.
To be safe with the input string, let's bound that to be less than 1000 characters.
The revised implementation is
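What follows is a minimal sketch of the revised implementation, incorporating the length bound and the explicit base-10 assumption from the revised requirements; the original revised script isn't shown here, so the details are assumptions.

```python
#!/usr/bin/env python3
"""add_one.py, revised: add one to a user-provided base-10 value."""
import sys

MAX_INPUT_LENGTH = 1000  # bound from the revised requirements

def add_one(value: float) -> float:
    """Return the input value plus one."""
    return value + 1

def parse_base10(text: str) -> float:
    """Parse a base-10 number, rejecting oversized or non-numeric input."""
    if len(text) >= MAX_INPUT_LENGTH:
        raise ValueError(f"input longer than {MAX_INPUT_LENGTH} characters")
    # float() would otherwise accept 'inf' and 'nan'; reject those explicitly.
    if text.strip().lower().lstrip("+-") in ("inf", "infinity", "nan"):
        raise ValueError("not a finite base-10 number")
    return float(text)

def main() -> None:
    if len(sys.argv) != 2:
        print("Usage: python add_one.py <number>")
        sys.exit(1)
    try:
        value = parse_base10(sys.argv[1])
    except ValueError as err:
        print(f"Error: {err}")
        print("Usage: python add_one.py <number>")
        sys.exit(1)
    print(f"{value:g} plus one is {add_one(value):g}")

if __name__ == "__main__":
    main()
```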
Normal testing involves evaluating pre-determined cases, like "input 5, get 6" and "input 5.4, get 6.4" and "input 'cat', get error" and "input (nothing), get error."
Property-based testing (e.g., https://hypothesis.readthedocs.io/en/latest/) is where you "write tests which should pass for all inputs in whatever range you describe, and let Hypothesis randomly choose which of those inputs to check - including edge cases you might not have thought about."
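For example, a minimal Hypothesis sketch for the add_one() function above might look like this (the import path is an assumption):

```python
# A minimal sketch of a property-based test for add_one().
from hypothesis import given, strategies as st

from add_one import add_one  # hypothetical module/function names

@given(st.floats(allow_nan=False, allow_infinity=False))
def test_add_one_adds_exactly_one(x):
    # The property: for any finite float, the result equals x + 1.
    assert add_one(x) == x + 1
```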
https://github.com/pschanely/CrossHair: a static verification tool for Python using symbolic execution. "Repeatedly calls your functions with symbolic inputs. It uses an SMT solver (a kind of theorem prover) to explore viable execution paths and find counterexamples for you."
https://github.com/formal-land/coq-of-python - Translate Python code to Coq code for formal verification. "formal-land" is a commercial company selling verification-as-a-service:
Design by Contract (https://en.wikipedia.org/wiki/Design_by_contract) approaches for Python include Dafny, Deal, and icontract. For Dafny you write the program in Dafny and compile to Python. For Deal you write Python and provide decorators.
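As an illustration, a minimal sketch using Deal's decorators might look like the following; the specific precondition and postcondition are assumptions, and the decorator names should be checked against the Deal documentation.

```python
# A minimal sketch of a contract-annotated add_one() using the deal library.
import deal

@deal.pre(lambda value: abs(value) < 1e308)       # precondition on the input (assumed bound)
@deal.ensure(lambda _: _.result == _.value + 1)   # postcondition relating input and output
def add_one(value: float) -> float:
    return value + 1
```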
"Dafny lifts the burden of writing bug-free code into that of writing bug-free annotations." Dafny was created by Rustan Leino at Microsoft Research. Dafny uses the Z3 automated theorem prover and Boogie.
Boogie is a simple programming language that is meant to be
an easy compile target (think "like JVM bytecode, but for proving code correct")
easy to analyze soundly
not actually intended to be executable
Instead of running Boogie programs, the Boogie compiler looks through the Boogie code to find assertions. For each assertion, the compiler generates a "verification condition", which is a formula based on a (symbolic) analysis of the program; the verification condition formula is constructed so that if the verification condition is true, the assertion holds.
It then hands those verification conditions, along with annotations in the program like assumptions, preconditions, postconditions, and loop invariants, to an SMT solver (Boogie uses Z3 by default). The SMT solver determines whether or not the assumptions definitely ensure the verification condition holds; Boogie complains about the assertions whose verification-conditions haven't been shown to hold.
Prototypes are impactful and efficient when they feature only the essential features. The consequence of that claim is that the prototype is janky (not easy to use), fragile (not robust), and shared prematurely (not "professional" looking). For software, a prototype might act on fake data and produce incorrect results.
After stakeholders provide feedback, the RoI has been confirmed and the direction for where to invest more effort is clarified -- what else is essential? Correctness is typically of interest, but that competes with ease-of-use, speed, and robustness.
In this post I add a third approach: contrasting variables.
The "contrasting variables" approach compares extremes like
fast (HEP) and slow (classical)
large (relativity) and small (quantum)
The question in this approach is whether there is a derivation that spans each pair of the two extremes.
A starting point for this approach would be to identify a specific variable (mass, velocity, temperature) and find derivations that involve the variable at each extreme.
Previously I've considered the Physics Derivation Graph to be an unlikely dream, in that a complete representation of Physics is infeasible, and even a breadth-first approach is infeasible. My expectation of infeasibility was based on the constraint that data entry would not just require a significant number of person-hours, but would also need to be performed by highly qualified people with training tailored to the Physics Derivation Graph.
If the workflow can be segmented into small tasks, LLMs might provide relevant automation of complex tasks that require semantic context.
Overview of workflow
For example,
Given a corpus of papers on arxiv,
identify whether a paper has a derivation.
Given a paper with a derivation, can the steps be identified? (Associate specific inference rule)
Can the steps be verified (e.g., SymPy, Lean)?
For derivations with steps, what variables and constants are used in the paper?
Do these variables and constants appear in other derivations?
That's a combination of text search and semantic awareness of the text content. Not every step has to be "just a human" or "just an LLM" -- an augmented capability is a reasonable workflow.
Another enabling aspect is the availability of the LaTeX source for arXiv papers. That makes parsing (by regex or LLM) easier.
$ tar xzvf arXiv-2004.04818v2.tar.gz
x elemental.pdf
x fig3.pdf
x sound1.tex
x sound-speeds.pdf
The file sound1.tex is 37110 bytes (36kB), 302 lines, 37108 characters, and 5134 words.
Loading the .tex into Gemini 2.0's prompt uses 10,518 tokens (with the max for Gemini 2.0 being 1,048,574 tokens).
Gemini 2.0 (on 2025-01-07) says
Yes, the article contains a derivation. It presents two approaches to derive the speed of sound, starting from system elasticity and vibrational properties, respectively. Both approaches lead to the same result, which is then used to find the upper bound for the speed of sound.
I ask a follow-up question,
Gemini's response:
I follow with another prompt,
Gemini's response:
Let's see if Gemini can extract symbol definitions:
Gemini's response:
The use of Unicode isn't desirable. Let's see if we can force the use of LaTeX using the prompt
Gemini's response:
Now let's see if Gemini can identify the steps of the derivation.
Gemini's response
Next step: extract the variables used in the above derivation to see if they are present in the Physics Derivation Graph by using the PDG API.
Gemini's response (2025-01-07):
TODO: the PDG API needs to be able to handle tasks associated with a workflow that uses an LLM.
tl;dr: I'm unsuccessful at getting Gemini 2.0 (on 2025-01-07) to relate Brewster's angle to the Law of Refraction. However, Gemini is able to provide the steps for simple (non-trivial) derivations.
Reformatting HTML table
The site https://aistudio.google.com/ provides Gemini 2.0 LLM for free.
To experiment with using it I wanted to provide the inference rules. There are 94 inference rules listed on https://allofphysics.com/list_all_inference_rules in an 8-column table.
After removing JavaScript, headers, and footers, the HTML file has 3869 lines. That takes 32,164 tokens for the prompt.
Gemini provided the correct 5-column HTML file. Huzzah!
Brewster's angle and Law of Refraction
Next step is to see if Gemini can use the inference rules for a derivation.
To which Gemini replied (on 2025-01-07),
"Unsure about answer"
That's reasonable, so I removed the inference rules and tried again
Again Gemini replied (on 2025-01-07),
"Unsure about answer"
I then relaxed the constraint about providing only right answers and encouraged step-by-step reasoning.
Gemini's response (on 2025-01-07):
It's not possible to directly derive Brewster's angle from Snell's Law, or vice versa, using standard mathematical manipulations. Brewster's angle is a specific condition related to polarization, while Snell's Law describes refraction in general. Therefore, no sequence of inference rules can relate them directly. They are independent physical principles governing light behavior under different conditions.
Simple derivation
Gemini 2.0 on 2025-01-07 responds with
That's "good enough" in my view to be useful -- associating the relevant inference rule with specific steps in a multi-step derivation!
Running as root doesn't help
As per https://stackoverflow.com/a/71576242 I tried
As a consequence, while I can run the container,
I am unable to mount the local directory
separate the property graph backend from an API and have the web interface call the API
find a lighter weight alternative to neo4j backend
get a minimum viable product working using the existing property graph backend
separate static documentation pages from the dynamic paths
get docker working on my new Mac
That last one is the blocker for everything else, so getting Docker working on my new Mac is the priority. The good news is that the code does run on my old MacBook Air.
Next I'll work on getting a minimum viable product working using the existing property graph backend.