Archive for the 'Physics' Category

“The Universe” in a Nut’s Hell (Briefing the Physical Theory)

Dec 06 2020 | Published under Essays, Physics

“What is Dan going on about, in terms of physics, in The Universe in a Nut’s Hell and in life, generally?” To immediately overstep my academic bounds, I’ve taken to calling the theory “local relativity,” referring to the way I take boundary conditions of the action by which I try to describe the evolution of patches of space-time, down to the infinitesimal limit when possible. (It probably won’t be called that, but you can attach “local relativity” as a name to the whole ball of wax, if that helps.)

“Local relativity” attempts to resolve the purported (mathematical and philosophical) “incompatibility” between the modern theories of relativity and quantum mechanics by removing a single core assumption from the set of axioms between them, which happens to be the “symmetry of the metric tensor” of general relativity. (Yes, with no problems found with the canonical quantum theory.) I’ve gone on (or maybe rather, started thereby) to argue in favor of an antisymmetric graviton mode, since allowing more generality than symmetry opens this new state up to the spectrum. It’s quantitatively plausible that the background temperature of this “quasi-monopole, scalar, composite” graviton cosmological remnant could be maintained high enough to be “dark energy,” exciting proton, neutron, electron, and astronomical black hole rest masses sufficiently to keep an equilibrium with it as “dark matter,” yet remaining “weakly interacting,” with a coupling constant of Newton’s gravitational constant times about the mass of a proton. (So maintained, the composite quasi-scalar, monopole graviton interaction also points itself out, by simple nomenclature, as a scalar “inflaton” candidate.)

All this follows if the graviton spectrum admits paired (spin-2) gravitons with identical quantum numbers except for exactly opposite spin, owing to the physical existence of an (additively separable) antisymmetric component to the “rank-2 tensor” potential of the gravitational field, i.e. if the assumption of metric tensor symmetry is dropped. (The photon cannot exhibit this degree of freedom, because its potential is a “rank-1 tensor,” though this is closely related to what makes a true “laser.”)
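
For concreteness, the split being invoked is a textbook identity, nothing unique to my proposal: any rank-2 tensor decomposes uniquely and additively into a symmetric and an antisymmetric part,

    h_{\mu\nu} = h_{(\mu\nu)} + h_{[\mu\nu]}, \qquad
    h_{(\mu\nu)} = \frac{1}{2}(h_{\mu\nu} + h_{\nu\mu}), \qquad
    h_{[\mu\nu]} = \frac{1}{2}(h_{\mu\nu} - h_{\nu\mu}).

Standard general relativity keeps only h_{(\mu\nu)}; dropping the symmetry assumption leaves h_{[\mu\nu]} in play as a physical degree of freedom. A rank-1 potential like the photon’s A_\mu has no such split to offer.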

“Why change that one assumption?” It can be argued that this particular assumption was originally added to the theory as an otherwise harmless calculational convenience, upon initial scrutiny of general relativity with and without it. Einstein himself, later in his career, considered whether metric tensor symmetry was required to be anything more than a computational convenience, as in the “Einstein-Cartan theory.” Indeed, metric tensor symmetry can seem like an arbitrary axiom to take, compared to “The speed of light is a constant to all inertial observers” (special relativity) and “A gravitational field is locally indistinguishable from a uniform acceleration” (the equivalence principle), which are basically the only other assumptions of general relativity, besides the application of classical “mechanics” (the principles of simple “billiard ball collision” interactions). The assumption has more intuitive consequences than my opaque definition of it suggests, though, like the symmetry of distances measured backwards and forwards between any two points “A” and “B” separated in space and time. Yet I think it is intuitive that one cannot travel as easily back in time as one can travel forward, if this is the same asymmetry (literally, the asymmetry underlying the thermodynamic tendency of “the arrow of time”).

The assumption of metric tensor symmetry in general relativity uniquely fixes the “Levi-Civita connection” as the exactly required metric connection. Without Levi-Civita, we need a principle to describe the mechanical evolution of the metric connection. Allowing monopole radiation from event horizons (as in “black holes”), the metric connection can vary as in a “Palatini geometry,” but, I propose, it evolves over space-time specifically by the action, Lagrangian, or Hamiltonian of the Higgs field, as describing “baryonic rest mass excitation” (“baryon” as in “proton” or “neutron”) and the selection of a locally preferred frame of accelerated rest: an action principle of (as usually rendered) Higgs scalar and vector terms, which happen to describe the metric connection, added to the Ricci scalar of general relativity. The conceptually “real” driver of the changes in the Higgs terms, though, is primarily the evaporation of event horizons into this composite monopole graviton interaction, including, most importantly, the event horizon of the “Rindler metric” that appears in the description of any accelerated first-person point of view in flat space.
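
To spell out the uniqueness claim: with a symmetric metric, demanding a torsion-free, metric-compatible connection forces the Levi-Civita (Christoffel) form, a standard result,

    \Gamma^{\lambda}{}_{\mu\nu} = \frac{1}{2} g^{\lambda\sigma} \left( \partial_\mu g_{\sigma\nu} + \partial_\nu g_{\sigma\mu} - \partial_\sigma g_{\mu\nu} \right).

Relax the symmetry (or admit torsion), and this formula no longer pins the connection down; the connection becomes an independent dynamical variable, as in the Palatini variation.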

This could be tested as an alternative hypothesis to explain the “Pioneer anomaly,” with hypothetical decay of a “Rindler horizon” tending to shift the locally preferred frame of accelerated rest. (On a long shot, if a quasi-monopole composite mode of the graviton can couple with, brace yourself, electromagnetic charge as it does with “mass monopole interaction,” in charge-neutrally confined packets of fundamental units of charge, the expected interaction even seems consistent with astronomical observations of stellar black hole microwave spectra, at least on a back-of-the-envelope analysis.)
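
For a sense of scale on the Rindler side of that test, here is my own back-of-the-envelope arithmetic, using only the textbook Unruh temperature formula and the commonly reported Pioneer anomalous acceleration:

    # Unruh temperature T = hbar * a / (2 * pi * c * k_B) of the Rindler
    # horizon for the Pioneer anomalous acceleration (~8.74e-10 m/s^2).
    from math import pi

    hbar = 1.054571817e-34  # reduced Planck constant, J s
    c = 2.99792458e8        # speed of light, m/s
    k_B = 1.380649e-23      # Boltzmann constant, J/K
    a = 8.74e-10            # Pioneer anomalous acceleration, m/s^2

    T = hbar * a / (2 * pi * c * k_B)
    print(T)  # ~3.5e-30 K: a fantastically cold horizon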

Without context, without citation, without proof, without a simple happy ending, in a page and a half, that’s “The Universe in a Nut’s Hell.” You could call the theory that, I guess.


Dark matter and the Second Law

Feb 08 2020 | Published under Essays, Physics

You might or might not know, I should probably rightly be regarded as a “fringe,” hobbyist quantum gravity researcher. I’ve attempted to publish a couple of papers on the topic of composite monopole quasi-gravitons, which haven’t gone over well in peer review. The latest attempt to publish, which I think at least manages to distill my hypothesis down to its most fundamental exemplar mathematical form, was ultimately rejected largely on the basis (in my understanding) that my waves in the metric, while an independently confirmed solution to the Einstein field equations according to the reviewer, have vacuum Ricci scalar curvature (R = 0) and therefore must be regarded as a “coordinate artifact” rather than true gravity waves.

In great haste, I made a small but critical mathematical error in coordinate choice in reaching the above, which I must thank the reviewer for correcting and looking past; however, I continue to assert that, with this error corrected, the argument of the paper is otherwise wholly unaffected, including my original preemptive response to the R = 0 point of criticism, which might have been largely ignored. Tidal forces have since been reported to carry the signature of gravity waves, and I am confident my waves qualify as such a case, with R = 0, on the basis of my own substantial work.

In the course of the work referenced above, the (supremely dissatisfying) reliance of my work on the second law of thermodynamics quickly became apparent: the second law is needed to explain the ostensible existence of black holes for any period longer than a moment of (“ultraviolet…”) catastrophic breakdown, entirely, into monopole gravity wave radiation. Certainly, it follows, quantum monopole waves/particles with enough energy to fit into their own Schwarzschild radii, to be black holes themselves, violate the black hole thermodynamics of Bekenstein and Hawking, so we “happily” assume the second law. From this, the (asymmetric) “arrow of time” seems to arise thermodynamically, “thankfully.”
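
The “fit into their own Schwarzschild radii” condition lands at a familiar scale: equating a quantum’s reduced Compton wavelength to its Schwarzschild radius (the standard heuristic, factors of two waved away) gives

    \frac{\hbar}{m c} \sim \frac{2 G m}{c^2}
    \quad \Longrightarrow \quad
    m \sim \sqrt{\frac{\hbar c}{2 G}} \approx 10^{-8}\ \mathrm{kg},

which is about the Planck mass.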

In case my rhetorical parentheticals and quotations don’t make it clear, this assumption of thermodynamic law has been my biggest self-circumspect reservation about my own work. Thermodynamics is “emergent,” isn’t it? Unifying gravity with quantum mechanics, fully describing all known fundamental forces quantum mechanically, the laws of thermodynamics and statistical mechanics should follow in a manner “quod erat demonstrandum,” shouldn’t they?

One of my informal teachers (and favorite persons in the world) repeated a point of confusion to me, once, on the application of the black hole thermodynamics work of Bekenstein and Hawking, at its interface with quantum information theory and computer science: “A hard drive filled with ‘information’ should have more mass than a zeroed hard drive.” Really? I could believe you, almost. I could believe you, if your hard drive was a thermodynamically “closed” system that approached or satisfied the Bekenstein bound before you manipulated it further, but it’s not, and it doesn’t.

The Bekenstein bound represents both a thermodynamic limit and a quantum information theoretic limit: for a given mass and geometric volume, there is a maximum “entropy.” We cannot represent more “information” than this entropy limit, with a fixed mass and geometric volume. The “densest” possible representation of “information,” of “entropy,” happens to be exactly maximized by the mass, volume, and thermodynamic entropy of a literal relativistic black hole.
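
For reference, the bound in its standard form (nothing here is specific to my proposal): a system of total energy E fitting inside a sphere of radius R satisfies

    S \le \frac{2 \pi k_B R E}{\hbar c},

and a Schwarzschild black hole of the same mass and radius saturates it exactly, matching the Bekenstein-Hawking entropy S = k_B A c^3 / (4 G \hbar), with A the horizon area.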

“A hard drive filled with ‘information’ should have more mass than a zeroed hard drive.” …Really, there’s no reason to suspect this at all, besides an overzealous over-extension of the interpretation of the work of Bekenstein and Hawking. Following that over-extension “faithfully,” we might expect a difference of one Planck mass (in a radius of one Planck length) per computational “bit,” or really, “qubit”! Canonically, for magnetic effects in a hard drive, we expect a small mass/energy difference, but nowhere close to this. Further, we can carry the “hard drive” analogy to a hypothetical model where even these tiny magnetic potential energy differences vanish virtually or exactly, like a “Turing machine” “memory tape” where the two binary states of a “bit” of the tape exhibit no electromagnetic or even gravitational difference in potential energy, at all.

Is there any good, plausible theoretical reason to question whether my mentor’s offhand remark might actually be right? Can we raise any physically reasonable doubt against his wrongness?

Suppose I have two identical terabyte, “TB,” hard drives. I start by setting all bits on both drives to the “0” state, exactly. I isolate them both entirely from interaction with their environment, except for the bare minimum interaction necessary to complete this next step, on one of them: I prepare a TB of (quantum computational) “qubits” in identical, exact “50/50” superposition (such as by Hadamard gates on the |0> state). I measure this TB of qubits, with equal, independent probability of measuring “0” or “1” in every case, and I write every one of these results to a “bit” on the TB hard drive. (Say I do this completely faithfully, without regard to any “coincidence” I might notice in the results measured, and with a bare minimum of interaction with my two hard drives.)
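
A minimal sketch of that preparation step, classically simulated, with numpy standing in for the quantum hardware (the gate algebra itself is textbook):

    import numpy as np

    # Hadamard gate: maps |0> to the equal superposition (|0> + |1>)/sqrt(2).
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    ket0 = np.array([1.0, 0.0])

    state = H @ ket0
    probs = np.abs(state) ** 2  # [0.5, 0.5]: a fair coin per qubit

    # Measuring one such qubit per bit of the drive gives 8 * 1024**4
    # independent fair coin flips; a million-flip slice, for illustration:
    rng = np.random.default_rng()
    bits = rng.random(10**6) < probs[1]
    print(bits.mean())          # ~0.5: maximal-entropy drive contents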

Now, I squeeze both the identically zeroed (“no information”) hard drive and the other hard drive, filled to capacity with “information,” down to black holes. Here’s where things get “wonky.” What you expect to happen here might depend on which side you fall on in the black hole information paradox “war,” and, in case you couldn’t already guess, I throw my lot in with Hawking in that conflict.

I assert that we hit the point of minimum volume for the two drives (collapse into black holes) one TB “sooner” for the “full” disk than for the “empty” one. That is, the “full” disk’s Schwarzschild black hole configuration should be 1 TB larger, where we have 1 Planck mass per “bit,” and where this is entirely and solely a function of the mass/energy of the two disks, when we vary the volume, according to general relativity. If Hawking is right, my mentor Peter Schmitt is a careless genius. The disk filled to capacity with computational information would be 8 * 1024 * 1024 * 1024 * 1024 (1 TB) Planck masses more massive or energetic, by argument from black hole thermodynamics and quantum information theory, by no mechanism known or expected from materials science, electromagnetism, or any other canonical physical theory. From where could the mass possibly come?!
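
Here is that arithmetic, under the over-extended “one Planck mass per bit” reading (an absurdity check, not a prediction):

    # Hypothetical mass excess if each of the 1 TB of written bits
    # carried one Planck mass, per the over-extension discussed above.
    m_planck = 2.176434e-8   # Planck mass, kg
    bits = 8 * 1024**4       # bits in 1 TB, ~8.8e12
    print(bits * m_planck)   # ~1.9e5 kg: about 190 metric tons of hard drive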

Consider: our most widely accepted models of cosmology already suppose that most of the mass and energy comprising our universe takes weakly interacting forms of which we have virtually no idea what they are: “dark matter” and “dark energy.”

As I prefaced this post with, and as I have failed to satisfy peer reviewers with, I offer that I might know exactly the form these two substances take: a composite graviton quasi-particle gas (resulting from an antisymmetric mode of the metric tensor of Einstein’s gravity) and its excitation of (primarily) baryonic and leptonic fundamental rest masses above the ground state of the Higgs field vacuum. Drawing a finite hypervolume surface bounding the preparation of our hard disk “saturated with information,” slightly more (composite graviton monopole) “dark energy” goes into the hypersurface than ultimately comes out, and slightly more (baryonic rest mass excitation above the Higgs field vacuum state) comes out than goes in. These substances must follow a continuity equation (i.e., must be “locally conserved”), cannot travel faster than the speed of light, and exhibit a “coupling constant” proportional to particle rest mass times Newton’s gravitational constant, in the ideal.
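
The local-conservation requirement in that last sentence is the usual continuity equation: for each substance, with density \rho and flux \mathbf{j},

    \frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = 0,

so, integrated over the bounding hypersurface above, whatever accumulates inside is exactly accounted for by what flows through the boundary.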

This would be worthless, raving speculation, if it could not be tested; tell me it cannot be tested.


Cosmological constant as “gauge-fixing” parameter

Aug 17 2016 | Published under Physics

This note represents scratch work on an idea that will probably become a research letter, leaving aside the issue of publishing. It might be edited until it is substantive and complete enough to become that letter. The aim here is to explain the ideas as accessibly as possible to the most general possible audience. Expect a “hip shot” to lead and supporting citations to follow.

The cosmological constant is a term that may be added to the Einstein field equations governing general relativity to produce new families of solutions. The cosmological constant gives “empty” space a constant energy density in these new solutions. For a few decades, physicists have suspected that our actual universe has an effectively nonzero cosmological constant. This leads us to question, “Where is that energy coming from?” since it is natural to assume that “empty” space actually carries no energy, momentum, or stress.

There is a “textbook” example of quantum mechanical behavior called the “simple quantum harmonic oscillator.” Decoding the jargon, all the name really means is that quantum particles sometimes act as though they were pendulums or attached to idealized springs. This example is “textbook” because it is both simple and found all throughout nature. One example of it is in quantum fields. We can think of space as being filled with something like a medium that represents every possible state of every particle we know, and this is basically a quantum field. Generally, we can think of these fields as being made of many “simple quantum harmonic oscillators,” potential “pendulums” that fill all of space that real particles can oscillate with. Part of the “textbook” treatment of these oscillators is that, if the point at which the “pendulum” doesn’t “swing” at all is taken to be the point of zero energy, then the quantum treatment requires a small positive energy for any “pendulum,” because a quantum mechanical pendulum can never quite totally stop its motion. We can never take this energy away from the field. (That is, unless we remove potential states, as is seen in the “Casimir effect,” but we will not need to talk about this effect here.)
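
In symbols, the “textbook” spectrum in question, for an oscillator of angular frequency \omega, is

    E_n = \hbar \omega \left( n + \frac{1}{2} \right), \qquad n = 0, 1, 2, \ldots,

so even the ground state, n = 0, sits \hbar\omega/2 above the classical point of zero motion, and that residue is exactly the energy we can never take away from the field.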

So the natural conceptual leap is to identify this tiny minimum required “motion” of the field as exactly the constant energy density that the cosmological constant resembles, where we assume that the point of zero energy would be exactly where all the “motion” of the “pendulums” stops. This seems theoretically reasonable, but from experimental observation we know that it is fantastically wrong, or at least an incomplete picture. The estimate we arrive at this way is about 120 orders of magnitude too large to be the cosmological constant we measure. (120 orders of magnitude would be a multiplicative factor of a number written with a one followed by 120 zeroes.) It has literally been called the worst theoretical prediction in physics ever.
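
The mismatch is easy to reproduce schematically (exact conventions for the cutoff vary; this is the crudest version, taking one Planck energy per Planck volume):

    from math import log10

    hbar = 1.054571817e-34  # reduced Planck constant, J s
    c = 2.99792458e8        # speed of light, m/s
    G = 6.67430e-11         # Newton's constant, m^3 / (kg s^2)

    # Zero-point energy density with a Planck-scale cutoff,
    # E_Planck / l_Planck^3, which simplifies to c^7 / (hbar * G^2):
    rho_planck = c**7 / (hbar * G**2)  # ~4.6e113 J/m^3
    rho_observed = 6e-10               # observed dark energy density, J/m^3

    print(log10(rho_planck / rho_observed))  # ~123 orders of magnitude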

Attempts have been made to rectify this huge difference, such as the supposition of “supersymmetry.” Among its other features, supersymmetry could lead to cancellation of the minimum required energies of different fields against each other. However, with every round of experiments at the newest and most powerful particle colliders, we push the boundaries for finding evidence of supersymmetry further out, with no great experimental success to date.

I have led you through the natural assumptions, here. We assume that space without fields is truly “empty,” carrying no energy, momentum, or stress. We assume that the “true” point of zero energy for quantum field oscillators is the (physically unattainable) point where all oscillation stops. I propose that we throw either or both of these assumptions out the window, and I’m about to explain how abandoning one or both leads to basically the same consequences.

First, I want to state that “textbook” quantum mechanics is basically agnostic to the nature of gravity, and “textbook” general relativity is basically agnostic to the nature of quantum mechanics. One can be applied to the other, presumably, but the two theories were not developed with clear rules for doing so, or even the necessity for one to be applied to the other. There is ambiguity, here. In quantum mechanical consideration of energy, it is basically only differences in energy which are significant. In general relativity, we have to know the absolute quantity of the energy, among other things, to determine how things gravitate, and quantum mechanics doesn’t necessarily tell us what this absolute quantity is.

In classical as well as quantum mechanics, rather than assuming that the point of exactly zero oscillation is the point of zero absolute energy, this point could just as well be a billion units of energy or a negative billion units of energy. We can add this billion or negative billion units to the ground state energy consistently, and it makes absolutely no mechanical difference whatsoever. It only changes the zero-point of our accounting, while the motions of particles remain exactly the same whatever this zero-point might be. We call this “gauge invariance,” because it reminds us of something like the zero-point on a car tire pressure gauge. We can set this zero at an idealized absolute zero pressure or set the zero at atmospheric pressure, but all we really measure are differences between the pressure of the tire and our reference zero-point. For the purposes of gravity, then, it’s actually ambiguous what the absolute energy of the zero-point is, coming from quantum mechanics, because quantum mechanics doesn’t depend on an absolute measure of the energy. However, it’s arguably natural to assume that the gravitational absolute zero-point is the (physically unattainable) point where all field oscillation hypothetically stops.
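
A toy version of that bookkeeping freedom, with sympy making the cancellation explicit (a sketch; the “billion units” becomes an arbitrary constant C):

    import sympy as sp

    x, C = sp.symbols('x C')
    V = x**2 / 2        # a harmonic potential
    V_shifted = V + C   # same physics, zero-point of energy moved by C

    # The force is -dV/dx; any constant offset differentiates away.
    print(sp.diff(-V, x), sp.diff(-V_shifted, x))  # both: -x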

In general relativity, we can add a cosmological constant term to the theory’s governing equations, with no consideration of quantum mechanics whatsoever, and known acceptable solutions carry over to new families of acceptable solutions. We can change this constant energy density in a way that is basically arbitrary. General relativity does not tell us or require of us that “empty” space is truly empty except for the zero-point of quantum fields. Conversely, our freedom in choosing the cosmological constant while still keeping an acceptable physical solution could suggest that “empty” space is not required to carry zero energy at all. However, unlike in the case of gauge invariance, changing the cosmological constant intrinsically changes the mechanics.
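
For reference, the field equations with the constant included read

    R_{\mu\nu} - \frac{1}{2} R \, g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu},

where each value of \Lambda endows “empty” space (T_{\mu\nu} = 0) with a constant vacuum energy density \Lambda c^4 / (8 \pi G); that is the arbitrariness described above.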

As an aside, the cosmological constant has always resembled a gauge parameter to me, in light of the gauge invariance described above. That is, it looks like an arbitrary zero-point for the energy. It is not arbitrary, though. It can’t be, because the strength of the gravitational attraction depends on an absolute zero-point for the energy, momentum, and stress. Rather, I think it might be a parameter that fixes a preferred gauge, a preferred absolute zero-point, for quantum fields which otherwise don’t depend on an absolute zero-point of energy for their mechanics.

Ontologically, we can think of the cosmological constant as either being the minimum energy of the quantum fields when their gauge is fixed by the true gravitational zero-point, or we can think of the constant as an intrinsic energy density for “empty” space before we add quantum fields. In either case, so long as the “dark energy” density we measure in the universe remains fixed, there is no mechanical difference between these two ontologies.

Quantum mechanics does not tell us the zero-point of energy of matter fields for gravitational accounting purposes, and general relativity does not tell us this either. There might then be no 120-order-of-magnitude difference between the true absolute zero-point of the fields and the observed dark energy density, at all. Neither quantum mechanics nor general relativity actually gives us enough information to say what the zero-point of the quantum fields is in the first place, or what the energy density of space is without any quantum fields at all.
