
YANKEE.BRAIN.MAP
The Brain Games Begin
Europe’s billion-Euro neuroscience Human Brain Project, mentioned here in the context of machine morality last week, is basically already funded and well underway. Now the colonies over in the new world are getting hip, and they too have in the works a project to map/simulate/make their very own copy of the universe’s greatest known computational artifact: the gelatinous wad of convoluted electrical pudding in your skull.

The (speculated but not yet public) Brain Activity Map of America
About 300 different news sources are reporting that a Brain Activity Map project is outlined in the current administration’s to-be-presented budget, and will be detailed sometime in March. Hordes of journalists are calling it “Obama’s Brain Project,” which is stoopid, and probably only because some guy at the New Yorker did and they all decided that’s what they had to do, too. Or somesuch lameness. Or laziness? Deference? SEO?

For reasons both economic and nationalistic, America could definitely use an inspirational, large-scale scientific project right about now. Because seriously, aside from going full-Pavlov over the next iPhone, what do we really have to look forward to these days? Now, if some technotards or bible pounders monkeywrench the deal, the U.S. is going to continue that slide toward scientific… lesserness. So, hippies, religious nuts, and all you little sociopathic babies in politics: zip it. Perhaps, however, we should gently poke and prod the hard of thinking toward a marginally heightened Europhobia — that way they’ll support the project. And it’s worth it. Just, you know, for science.

Going Big. Not Huge, But Big. But Could be Massive.
Both the Euro and American flavors are no Manhattan Project-scale undertaking in terms of urgency and motivation, but more like the Human Genome Project. Still, with clear directives and similar funding levels (€1 billion and $1–3 billion US bucks, respectively), they’re quite ambitious and potentially far more world-changing than a big bomb. Like, seriously, man. Because brains build bombs. But hopefully an artificial brain would not. Spaceships would be nice, though.

Practically, these projects are expected to expand our understanding of the actual physical loci of human behavioral patterns, get to the bottom of various brain pathologies, stimulate the creation of more advanced AI/non-biological intelligence — and, of course, the big enchilada: help us understand more about our own species’ consciousness.

On Consciousness: My Simulated Brain has an Attitude?
Yes, of course it’s wild speculation to guess at the feelings and worries and conundrums of a simulated brain — but dude, what if, what if one or both of these brain simulation map thingys is done well enough that it shows signs of spontaneous, autonomous reaction? What if it tries to like, you know, do something awesome like self-reorganize, or evolve or something?

Maybe it’s too early to talk personality, but you kinda have to wonder… would the Euro-Brain be smug, never stop claiming superior education yet voraciously consume American culture, and perhaps cultivate a mild racism? Would the ‘Merica-Brain have a nation-scale authority complex, unjustifiable confidence & optimism, still believe in childish romantic love, and overuse the words “dude” and “awesome?”

We shall see. We shall see.

Oh yeah, have to ask:
Anyone going to follow Ray Kurzweil’s recipe?

Project info:
[HUMAN BRAIN PROJECT - - MAIN SITE]
[THE BRAIN ACTIVITY MAP - $ - HUFF-PO]

Kinda Pretty Much Related:
[BLUE BRAIN PROJECT]

This piece originally appeared at Anthrobotic.com on February 28, 2013.

I continue to survey the available technology applicable to spaceflight and there is little change.

The remarkable near impact and NEO flyby on the same day seems to fly in the face of the experts quoting the probability of such a coincidence as low on the scale of millennia. A recent exchange on a blog has given me the idea that perhaps crude is better. A much faster approach to a nuclear propelled spaceship might be more appropriate.

Unknown to the public there is such a thing as unobtanium. It carries the country name of my birth: Americium.

A certain form of Americium is ideal for a type of nuclear solid fuel rocket. Called a Fission Fragment Rocket, it is straight out of a 1950’s movie with massive thrust at the limit of human G-tolerance. Such a rocket produces large amounts of irradiated material and cannot be fired inside, near, or at the Earth’s magnetic field. The Moon is the place to assemble, test, and launch any nuclear mission.

Such Fission Fragment propelled spacecraft would resemble the original Tsiolkovsky space train, with a several-hundred-foot-long slender skeleton mounting these one-shot Americium boosters. The turn-of-the-century deaf schoolmaster continues to predict.

Each lamp-shade-spherical thruster has a programmed design balancing the length and thrust of the burn. After being expended, the boosters use a small secondary system to send themselves off in an appropriate direction, probably equipped with small sensor packages using the hot irradiated shell as an RTG. The frame that served as a car of the space train transforms into a pair of satellite panels. Being more an artist than an *engineer, I find the monoplane configuration pleasing to the eye as well as functional. These dozens and eventually thousands of dual-purpose boosters would help form a space warning net.

The front of the space train is a large plastic sphere partially filled with water sent up from the surface of a Robotic Lunar Polar Base. The spaceship would split apart on a tether to generate artificial gravity, with the lessening booster mass balanced by varying lengths of tether with an intermediate reactor mass.
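As a rough sanity check on the tether idea: spin gravity felt at radius r from the center of rotation is a = ω²r, so the tether arm needed for a given gravity level follows directly. The sketch below is illustrative only; the 2 rpm spin rate (a commonly cited comfort limit in spin-gravity studies) is my assumption, not a figure from the post.

```python
import math

def tether_radius_m(spin_rpm: float, gravity_g: float = 1.0) -> float:
    """Radius (m) from the spin axis needed to feel `gravity_g` Earth
    gravities at `spin_rpm` revolutions per minute, from a = omega^2 * r."""
    omega = 2.0 * math.pi * spin_rpm / 60.0  # angular rate in rad/s
    return gravity_g * 9.80665 / omega ** 2

# At an assumed 2 rpm, a full g calls for a tether arm of a couple hundred meters,
# in the same general range as the "several hundred foot" skeleton described above.
print(round(tether_radius_m(2.0)), "m")
```

Spinning faster shortens the required arm quadratically, which is why the balance point between spin comfort and tether length matters for a design like this.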

These piloted impact threat interceptors would be manned by the United Nations Space Defense Force. All the Nuclear Powers would be represented… well, most of them. They would be capable of “fast missions” lasting only a month or at the most two. They would be launched from underground silos on the Moon to deliver a nuclear weapon package towards an impact threat at the highest possible velocity, and so the fastest intercept time. These ships would come back on a ballistic course with all their boosters expended, to be rescued by recovery craft from the Moon upon return to the vicinity of Earth.

The key to this scenario is Americium 242. It is extremely expensive stuff. The only alternative is Nuclear Pulse Propulsion (NPP). The problem with bomb propulsion is the need to have a humungous mass for the most efficient size of bomb to react with.

The logic tree then splits again with two designs of bomb propelled ship: the “Orion” and the “Medusa.” The Orion is the original design, using a metal plate and shock-absorbing system. The Medusa is essentially a giant woven alloy parachute and tether system that replaces the plate with a much lighter “mega-sail.” In one of the few cases where compromise might bear fruit, the huge spinning UFO-type disc, thousands of feet across, would serve quite well to explore, colonize, and intercept impact threats. Such a ship would require a couple of decades to begin manufacture on the Moon.

Americium boosters could be built on Earth and inserted into lunar orbit with Human Rated Heavy Lift Vehicles (SLS), and a mission launched well within a ten-year Apollo-type plan. But the Americium infrastructure has to be available as a first step.

Would any of my hundreds of faithful followers be willing to assist me in circulating a petition?

*Actually I am neither an artist nor an engineer, just a wannabe pulp writer in the mold of Edgar Rice Burroughs.

It is a riddle and almost a scandal: If you let a particle travel fast through a landscape of randomly moving round troughs – like a frictionless ball sent through a set of circling, softly rounded “teacups” inserted into the floor (to be seated in for a ride at a country fair) – you will find that it loses speed on average.

This is perplexing because if you invert time before throwing in the ball, the same thing is bound to happen again – since we did not specify the direction of time beforehand in our frictionless fair’s universe. So the effect depends only on the “hypothesis of molecular chaos” being fulfilled – a lack of initial correlations, in Boltzmann’s 19th-century parlance. Boltzmann was the first to wonder about this amazing fact – although he looked only at the opposite case of upwards-inverted cups, that is, repulsive particles.

The simplest example does away with fully 2-dimensional interaction. All you need is a light horizontal particle travelling back and forth in a frictionless 1-dimensional closed transparent tube, plus a single attractive, much heavier particle moving slowly up and down in a frictionless transversal 1-dimensional closed transparent tube of its own – towards and away from the middle of the horizontal tube while exerting a Newtonian attractive force on the light fast particle across the common plane. Then the energy-poor fast particle still gets statistically deprived of energy by the energy-rich heavy slow particle in a sort of “energetic capitalism.”

If now the mass of the heavy particle is allowed to go to infinity while its speed and the force exerted by it remain unchanged, we arrive at a periodically forced single-degree-of-freedom Hamiltonian oscillator in the horizontal tube. What could be simpler? But you again get “antidissipation” – a statistical taking-away of kinetic energy from the light fast particle by the heavy slow one.
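The periodically forced limit just described is easy to put on a computer. The sketch below is a minimal, illustrative simulation of my own – not a reproduction of the published ones: a light particle bounces between reflecting walls in a 1-D tube while an infinitely heavy attractor oscillates transversally, pulling on it with a softened Newtonian force. The force strength, forcing waveform, softening length, and all other parameter values are assumptions chosen only to keep the integration tame.

```python
import math
import random

def simulate(x0, v0, steps=100_000, dt=1e-3):
    """Light particle in a frictionless 1-D tube at y = 0, attracted by a
    heavy particle prescribed to oscillate at (0, y(t)).  Returns the
    kinetic-energy history.  All parameters are illustrative assumptions."""
    GM = 1.0             # strength of the attraction (assumed)
    soft2 = 0.05 ** 2    # softening length squared, keeps the force bounded
    x, v = x0, v0
    kin = []
    for n in range(steps):
        t = n * dt
        y = 0.6 + 0.3 * math.sin(0.7 * t)   # transverse forcing (assumed)
        d2 = x * x + y * y + soft2
        a = -GM * x / d2 ** 1.5             # x-component of attraction toward (0, y)
        v += a * dt
        x += v * dt
        if abs(x) > 1.0:                    # reflecting tube walls at +/- 1
            x = math.copysign(2.0, x) - x
            v = -v
        kin.append(0.5 * v * v)
    return kin

# Compare mean kinetic energy early vs. late over a handful of trajectories.
random.seed(0)
runs = [simulate(random.uniform(-0.5, 0.5), 2.0) for _ in range(5)]
early = sum(sum(k[:1000]) for k in runs) / (5 * 1000)
late = sum(sum(k[-1000:]) for k in runs) / (5 * 1000)
print(f"mean kinetic energy, early: {early:.3f}  late: {late:.3f}")
```

Whether `late` comes out below `early` for any given parameter set is exactly the statistical question at stake; a serious check would average over many more trajectories and parameter choices, as the cited simulations did.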

A first successful numerical simulation was obtained by Klaus Sonnleitner in 2010 – still with a finite mass ratio and hence with explicit energy conservation. Ramis Movassagh obtained a similar result independently and proved it analytically. Neither publication yet looked at the simpler – purely periodically forced – limiting case just described: a single-degree-of-freedom, periodically forced conservative system. The simplest and oldest paradigm in Poincaréan chaos theory as the source of big news?

If we invert the potential (Newtonian-repulsive rather than Newtonian-attractive), the light particle now gains energy statistically from the heavy guy – in this simplest example of statistical thermodynamics (which the system now turns out to be). Thus, chaos theory becomes the fundament of many-particle physics: both on earth with its almost everywhere repulsive potentials (thermodynamics) and in the cosmos with its almost everywhere attractive potentials (cryodynamics). The essence of two fundamental disciplines – statistical thermodynamics and statistical cryodynamics – is implicit in our periodically forced single-tube horizontal particle. That tube represents the simplest nontrivial example in Hamiltonian dynamics including celestial mechanics, anyhow. But it now reveals two miraculous new properties: “deterministic entropy” generation under repulsive conditions, and “deterministic ectropy” generation under attractive conditions.

I would love to elicit the enthusiasm of young and old chaos aficionados across the planet because this new two-tiered fundamental discipline in physics based on chaos theory is bound to generate many novel implications – from revolutionizing cosmology to taming the fire of the sun down here on earth. There perhaps never existed a more economically and theoretically promising unified discipline. Simple computers suffice for deriving its most important features, almost all still un-harvested.

Another exciting fact: the present proposal will be taken lightly by most everyone in academic physics because Lifeboat is not an anonymously refereed outlet. But many young people on the planet do own computers and will appreciate the liberating truth that “non-anonymous peer review” carries the day – with them at the helm. So, please, join in. I for one have so far been unable to extract the really simplest underlying principle: why is it possible to have time-directed behavior in a non-time-directed reversible dynamics, if that time-directedness does not come from statistics, as everyone has believed for the better part of two centuries? What is the real secret? And why does it come in two mutually opposed ways? We have only scratched the surface of chaos so far. Boltzmann used that term in a clairvoyant fashion, did he not? (For J.O.R.)

JUSTIN.SPACE.ROBOT.GUY
A Point too Far to Astronaut

It’s cold out there beyond the blue. Full of radiation. Low on breathable air. Vacuous.
Machines and organic creatures, keeping them functioning and/or alive — it’s hard.
Space to-do lists are full of dangerous, fantastically boring, and super-precise stuff.

We technological mammals assess thusly:
Robots. Robots should be doing this.

Enter Team Space Torso
As covered by IEEE a few days ago, the DLR (das German Aerospace Center) released a new video detailing the ins & outs of their tele-operational haptic feedback-capable Justin space robot. It’s a smooth system, and eventually ground-based or orbiting operators will just strap on what look like two extra arms, maybe some VR goggles, and go to work. Justin’s target missions are the risky, tedious, and very precise tasks best undertaken by something human-shaped, but preferably remote-controlled. He’s not a new robot, but Justin’s skillset is growing (video is down at the bottom there).

Now, Meet the Rest of the Gang:
SPACE.TORSO.LINEUPS
NASA’s Robonaut2 (full coverage), the first and only humanoid robot in space, has of late been focusing on the ferociously mundane tasks of button pushing and knob turning, but hey, WHO’S IN SPACE, HUH? Then you’ve got Russia’s elusive SAR-400, which probably exists, but seems to hide behind… an iron curtain? Rounding out the team is another German, AILA. The nobody-knows-why-it’s-feminized AILA is another DLR-funded project from a university robotics and A.I. lab with a 53-syllable name that takes too long to type but there’s a link down below.

Why Humanoid Torso-Bots?
Robotic tools have been up in space for decades, but they’ve basically been iterative improvements on the same multi-joint single-arm grabber/manipulator. NASA’s recent successful Robotic Refueling Mission is an expansion of mission-capable space robots, but as more and more vital satellites age, collect damage, and/or run out of juice, and more and more humans and their stuff blast into orbit, simple arms and auto-refuelers aren’t going to cut it.

Eventually, tele-operable & semi-autonomous humanoids will become indispensable crew members, and the why of it breaks down like this: 1. space stations, spacecraft, internal and extravehicular maintenance terminals, these are all designed for human use and manipulation; 2. what’s the alternative, a creepy human-to-spider telepresence interface? and 3. humanoid space robots are cool and make fantastic marketing platforms.

A space humanoid, whether torso-only or legged (see: Robonaut’s new legs), will keep astronauts safe, focused on tasks machines can’t do, and spared the space craziness of trying to hold a tiny pinwheel perfectly still next to an air vent for 2 hours — which, in fact, is slated to become one of Robonaut’s ISS jobs.

Make Sciencey Space Torsos not MurderDeathKillBots
As one is often wont to point out, rather than finding ways to creatively dismember and vaporize each other, it would be nice if we humans could focus on the lovely technologies of space travel, habitation, and exploration. Nations competing over who can make the most useful and sexy space humanoid is an admirable step, so let the Global Robot Space Torso Arms Race begin!

“Torso Arms Race!”
Keepin’ it real, yo.

• • •

DLR’s Justin Tele-Operation Interface:

• • •

[JUSTIN TELE-OPERATION SITUATION — IEEE]

Robot Space Torso Projects:
[JUSTIN — GERMANY/DLR — FACEBOOK — TWITTER]
[ROBONAUT — U.S.A./NASA — FACEBOOK — TWITTER]
[SAR-400 — RUSSIA/ROSCOSMOS — PLASTIC PALS — ROSCOSMOS FACEBOOK]
[AILA — GERMANY/DAS DFKI]

This piece originally appeared at Anthrobotic.com on February 21, 2013.

With the recent meteor explosion over Russia coincident with the safe-passing of asteroid 2012 DA14, and an expectant spectacular approach by comet ISON due towards the end of 2013, one could suggest that the Year of the Snake is one where we should look to the skies and consider our long term safeguard against rocks from space.

Indeed, following the near ‘double whammy’ last week, where a 15 meter meteor caught us by surprise and caused extensive damage and injury in central Russia while the larger, anticipated 50 meter asteroid swept to within just 27,000 km of Earth, media reported an immediate response from astronomers, with plans to create state-of-the-art detection systems to give warning of incoming asteroids and meteoroids. Concerns can thus be somewhat abated.
ATLAS, the Asteroid Terrestrial-impact Last Alert System, is due to begin operations in 2015, and expects to give a one-week warning for a small asteroid – called “a city killer” – and three weeks for a larger “county killer” — providing time for evacuation of risk areas.

Deep Space Industries (a US Company), which is preparing to launch a series of small spacecraft later this decade aimed at surveying nearby asteroids for mining opportunities, could also be used to monitor smaller difficult-to-detect objects that threaten to strike Earth.

However — despite ISON doom-merchants — we are already in relatively safe hands. The SENTRY MONITORING SYSTEM maintains a Sentry Risk Table of possible future Earth impact events, typically tracking objects 50 meters or larger — none of which are currently expected to hit Earth. Other sources will tell you that comet ISON is not expected to pass any closer than 0.42 AU (63,000,000 km) from Earth — though it should still provide spectacular viewing in our night skies come December 2013. A recently trending threat, the 140-meter-wide asteroid AG5, was given just a 1-in-625 chance of hitting Earth in February 2040, though more recent measurements have reduced this risk to almost nil. The Torino Scale is currently used to rate the risk category of asteroid and comet impacts on a scale of 0 (no hazard) to 10 (a certain global-scale collision). At present, almost all known asteroids and comets are categorized as level 0 on this scale (AG5 was temporarily categorized at level 1 until those recent measurements, and 2007 VK184, a 130 meter asteroid due for approach circa 2048–2057, is the only object currently categorized at level 1 or higher).

An asteroid striking land will cause a crater far larger than the asteroid itself. A rough scaling puts the crater diameter in kilometers at D ≈ E^(1/3.4) / 10^6.77, where E is the impact energy. As such, if an asteroid the size of AG5 (140 meters wide) were to strike Earth, it would create a crater over twice the diameter of Barringer Meteor Crater in northern Arizona and affect an area far larger — or, on striking water, it would create a tsunami of global reach. Fortunately, the frequency of such an object striking Earth is quite low — perhaps once every 100,000 years. It is the smaller ones, such as the one which exploded over Russia last week, which are the greater concern. These occur perhaps once every 100 years and are not easily detectable by our current methods — justifying the $5m funding NASA contributed to the new ATLAS development in Hawaii.
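For a feel of the energies involved, the kinetic energy of an impactor is just ½mv², with the mass taken from its diameter and an assumed density. The density (2,600 kg/m³, stony) and entry speed (17 km/s) below are my assumptions for illustration, not figures from the article.

```python
import math

MT_TNT_J = 4.184e15  # joules per megaton of TNT

def impact_energy_mt(diameter_m: float,
                     density_kg_m3: float = 2600.0,
                     speed_m_s: float = 17000.0) -> float:
    """Kinetic energy of a spherical impactor, in megatons of TNT."""
    radius = diameter_m / 2.0
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius ** 3  # sphere mass
    return 0.5 * mass * speed_m_s ** 2 / MT_TNT_J

# A ~15 m Chelyabinsk-class object vs. a 140 m AG5-class object:
for d in (15, 140):
    print(f"{d:>4} m impactor: ~{impact_energy_mt(d):,.1f} Mt TNT")
```

Since mass grows with the cube of the diameter, the 140 m object carries several hundred times the energy of the 15 m one at the same speed, which is why the "city killer" vs. "county killer" distinction above tracks size so closely.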

We are a long way from deploying a response system to deflect/destroy incoming meteors, though at least with ATLAS we will be more confident of getting out of the way when the sky falls in. More information on ATLAS: http://www.fallingstar.com/index.php

Humanity’s wake-up call has been ignored, and we are probably doomed.

The Chelyabinsk event is a warning. Unfortunately, it seems to be a non-event in the great scheme of things and that means the human race is probably also a non-starter. For years I have been hoping for such an event- and saw it as the start of a new space age. Just as Sputnik indirectly resulted in a man on the Moon I predicted an event that would launch humankind into deep space.

Now I wait for ISON. Thirteen may be the year of the comet, and if that does not impress upon us the vulnerability of Earth to impacts then only an impact will. If the impact throws enough particles into the atmosphere then no food will grow and World War C will begin. The C stands for cannibalism. If the impact hits the ring of fire it may trigger volcanic effects with the same result. If whatever hits Earth is big enough it will render all life above the size of microbes extinct. We have spent trillions of dollars on defense, yet we are defenseless.

Our instinctive optimism bias continues to delude us with the idea that we will survive no matter what happens. Besides the impact threat there is the threat of an engineered pathogen. While naturally evolved epidemics always leave a percentage of survivors, a bug designed to be 100 percent lethal will leave none alive. And then there is the unknown: Earth changes, including volcanic activity, can also wreck our civilization. We go on as a species the same way we go on with our own lives, ignoring death for the most part. And that is our critical error.

The universe does not care if we thrive or go extinct. If we do not care then a quick end is inevitable.

I have given the world my best answer to the question. That is all I can do:

http://voices.yahoo.com/water-bombs-8121778.html?cat=15

KILL.THE.ROBOTS
The Golden Rule is Not for Toasters

Simplistically nutshelled, talking about machine morality is picking apart whether or not we’ll someday have to be nice to machines or demand that they be nice to us.

Well, it’s always a good time to address human & machine morality vis-à-vis both the engineering and philosophical issues intrinsic to the qualification and validation of non-biological intelligence and/or consciousness that, if manifested, would wholly justify consideration thereof.

Uhh… yep!

But, whether at run-on sentence dorkville or any other tech forum, right from the jump one should know that a single voice rapping about machine morality is bound to get hung up in and blinded by its own perspective, e.g., splitting hairs to decide who or what deserves moral treatment (if a definition of that can even be nailed down), or perhaps yet another justification for the standard intellectual cul de sac:
“Why bother, it’s never going to happen.“
That’s tired and lame.

One voice, one study, or one robot fetishist with a digital bullhorn — one ain’t enough. So, presented and recommended here is a broad-based overview, a selection of the past year’s standout pieces on machine morality. The first, only a few days old, is actually an announcement of intent that could pave the way to forcing the actual question.
Let’s then have perspective:

Building a Brain — Being Humane — Feeling our Pain — Dude from the NYT
February 3, 2013 — Human Brain Project: Simulate One
Serious Euro-Science to simulate a human brain. Will it behave? Will we?

January 28, 2013 — NPR: No Mercy for Robots
A study of reciprocity and punitive reaction to non-human actors. Bad robot.

April 25, 2012 — IEEE Spectrum: Attributing Moral Accountability to Robots
On the human expectation of machine morality. They should be nice to me.

December 25, 2011 — NYT: The Future of Moral Machines
Engineering (at least functional) machine morality. Broad strokes NYT-style.

Expectations More Human than Human?
Now, of course you’re going to check out those pieces you just skimmed over, after you finish trudging through this anti-brevity technosnark©®™ hybrid, of course. When you do, you might notice the troubling rub of expectation dichotomy. Simply put, these studies and reports point to a potential showdown between how we treat our machines, how we might expect others to treat them, and how we might one day expect to be treated by them. For now, morality is irrelevant; it is of no consideration or consequence in our thoughts or intentions toward machines. But at the same time we hold dear the expectation of reasonable, if not moral, treatment by any intelligent agent — even an only vaguely human robot.

Well what if, for example: 1. AI matures, and 2. machines really start to look like us?
(see: Leaping Across Mori’s Uncanny Valley: Androids Probably Won’t Creep Us Out)

Even now should someone attempt to smash your smartphone or laptop (or just touch it), you of course protect the machine. Extending beyond concerns over the mere destruction of property or loss of labor, could one morally abide harm done to one’s marginally convincing humanlike companion? Even if fully accepting of its artificiality, where would one draw the line between economic and emotional damage? Or, potentially, could the machine itself abide harm done to it? Even if imbued with a perfectly coded algorithmic moral code mandating “do no harm,” could a machine calculate its passive non-response to intentional damage as an immoral act against itself, and then react?

Yeah, these hypotheticals can go on forever, but it’s clear that blithely ignoring machine morality or overzealously attempting to engineer it might result in… immorality.

Probably Only a Temporary Non-Issue. Or Maybe. Maybe Not.
There’s an argument that actually needing to practically implement or codify machine morality is so remote that debate is, now and forever, only that — and oh wow, that opinion is superbly dumb. This author has addressed this staggeringly arrogant species-level macro-narcissism before (and it was awesome). See, outright dismissal isn’t a dumb argument because a self-aware machine, or something close enough for us to regard as such, is without doubt going to happen; it’s dumb because 1. absolutism is fascist, and 2. to the best of our knowledge, excluding the magic touch of Jesus & friends or aliens spiking our genetic punch or whatever, conscious and/or self-aware intelligence (which would require moral consideration) appears to be an emergent trait of massively powerful computation. And we’re getting really good at making machines do that.

Whatever the challenge, humans rarely avoid stabbing toward the supposedly impossible — and a lot of the time, we do land on the moon. The above-mentioned Euro-project says it’ll need 10 years to crank out a human brain simulation. Okay, respectable. But a working draft of the human genome, initially a 15-year international project, was completed 5 years ahead of schedule due largely to advances in brute-force computational capability (in the not-so-digital 1990s). All that computery stuff, like, you know, gets better a lot faster these days. Just sayin’.

So, you know, might be a good idea to keep hashing out ideas on machine morality.
Because who knows what we might end up with…

Oh sure, I understand, turn me off, erase me — time for a better model, I totally get it.
- or -
Hey, meatsack, don’t touch me or I’ll reformat your squishy face!

Choose your own adventure!

[HUMAN BRAIN PROJECT]
[NO MERCY FOR ROBOTS — NPR]
[ATTRIBUTING MORAL ACCOUNTABILITY TO ROBOTS — IEEE]
[THE FUTURE OF MORAL MACHINES — NYT]

This piece originally appeared at Anthrobotic.com on February 7, 2013.

It appears now that human intelligence is being largely superseded by robots and artificial singularity agents. Education and technology have no chance of making us far more intelligent. The question now is what our place is in this new world, where we are not the most intelligent kind of species.

Even if we develop new scientific and technological approaches, it is likely that machines will be far more efficient than us if these approaches are based on rationality.

IMO, in the near future we will only be able to compete in irrational domains, but I am not so sure that irrational domains cannot also be handled by machines.

“Olemach-Theorem”: Angular-momentum Conservation implies a gravitational-redshift proportional Change of Length, Mass and Charge

Otto E. Rossler

Faculty of Natural Sciences, University of Tübingen, Auf der Morgenstelle 8, 72076 Tübingen, Germany

Abstract

There is a minor revolution going on in general relativity: a “return to the mothers” – that is, to the “equivalence principle” of Einstein of 1907. Recently the Telemach theorem was described, which says that Einstein’s time change T does not stand alone (since T, L, M, Ch all change by the same factor or its reciprocal, respectively). Here now, the convergent but trivial-to-derive Olemach theorem is presented. It connects omega (rotation rate), length, mass and charge in a static gravitational field. Angular-momentum conservation alone suffices (plus E = mc²). The list of implications shows that the “hard core” of general relativity acquires new importance. Five surprise implications – starting with the global constancy of c in general relativity – are pointed out. Young and old physicists are called upon to join in the hunt for the “inevitable fault” in Olemach. (January 31, 2013)

Introduction

“Think simple” is a modern motto (to quote HP). Much as in “ham” radio initiation the “80 meter band playground” is the optimal entry door, even if greeted with derision by old hands, so in physics the trivial domain of special relativity’s equivalence principle provides the royal entry portal.

A New Question

The local slowdown of time “downstairs” in gravity is Einstein’s most astounding discovery. It follows from special relativity in the presence of constant acceleration – provided the acceleration covers a vertically extended domain. Einstein’s famous long rocketship with its continually thrusting boosters presents a perennially fertile playground for the mind. This “equivalence principle” [1] was “the happiest thought of my life” as he always claimed.

To date no one doubts any more [2,3] the surprising finding that time is slowed down downstairs compared to upstairs. The original reason given by Einstein [1] was that all signal sequences sent upwards arrive there with enlarged temporal intervals, since the rocketship’s nose has picked up a constant relative departing speed during the finite travel time of the signal from the bottom up. Famous measurements, starting in 1959 and culminating in the daily operation of the Global Positioning System, abundantly confirm Einstein’s seemingly absurd, purely mentally deduced prediction. From this hard-won 1907 insight he would later derive his “general theory of relativity.” The latter remains an intricate edifice to this day, not all corners of which are yet understood. For example, many mathematically allowed but unphysical transformations got appended over the years. And a well-paved road running to the right and left of the canonical winding thread is still wanting. For example, the attempt begun by Einstein’s assistant Cornelius Lanczos in 1929 to build a bridge toward Clifford’s older differential-geometric approach [4] remains unconsummated.

In an “impasse-type” situation like this it is sometimes a good strategy to go “back to the mothers” in Goethe’s words, that is, to the early days when everything was still simple and fresh in its unfamiliarity. Do there perhaps exist one or two “direct corollaries” to Einstein’s happiest thought that are likewise bound to remain valid in any later more advanced theory?

A starting point for the hunt is angular-momentum conservation. Angular momentum enjoys an undeservedly low status in general relativity, Emmy Noether’s genius notwithstanding. It therefore is a legitimate challenge to check what happens when angular momentum is explicitly assumed to be conserved in Einstein’s long rocketship, where all clocks are known to be “tired” in their ticking rate at more downstairs positions in a locally imperceptible fashion. This question appears to be new. In the following, an attempt is made to check how the conservation of angular momentum, a well-known fact in special relativity, manifests itself in the special case of Einstein’s equivalence principle.

Olemach Theorem

To find the answer, a simple thought experiment suggests itself. A frictionless, strictly horizontally rotating bicycle wheel (with its mass ideally concentrated in the rim) is assumed to be suspended at its hub from a rope, so it can be lowered reversibly from the tip to the bottom in our constantly accelerating long rocketship (or else in gravity). Imagine the famous experimentalist Walter Lewin making this wheel the subject of one of his enlightening M.I.T. lectures distributed on the Internet. The precision of the measurements performed would have to be ideal. What is it that can be predicted?

The law of angular momentum conservation under planar rotation reads (if a sufficiently slow, “nonrelativistic” rotation speed is assumed), according to any textbook like Tipler’s: “angular momentum = rotation rate times mass times radius-squared = constant,” or, written in symbols,

J = ω m r² = const. (1)

From the above-quoted paper by Einstein we learn that ω differs across height levels, in a locally imperceptible fashion, being lower downstairs [1]. This is so because a frictionless wheel in planar rotation represents an admissible realization of a “ticking” clock (one can record ticks from a pointer attached to the rim). The height-dependent factor which reduces the ticking rate downstairs (explicitly written down by Einstein [1]) can be called K. At the tip, K = 1, but K > 1 and increasing as one slowly (“adiabatically”) lowers the constantly rotating wheel to a deeper level [1]. Note that K can approach infinity in principle (as when the famous “Rindler rocketship,” with its many independently boosting hollow “rocket rings” that stay together without links, approaches a length of about one light year – if this technical aside is allowed).

The present example is quite refined in its maximal simplicity. What will the watching students learn? If angular momentum J indeed stays constant while the rotation rate ω is reduced downstairs by the Einstein clock-slowdown factor K, then necessarily either m or r or both must be altered downstairs as well, in accordance with Eq.(1).
While infinitely many nonlinear change laws for r and m are envisionable in compensation for the change in ω , the simplest “linear” law keeping angular momentum J unchanged in Eq.(1) reads:

ω’ = ω/K
r’ = r K
m’ = m/K
q’ = q/K . (2)

Here the fourth line was added “for completeness” due to the fact that the local ratio m/q – rest mass-over-charge – is a universal constant in nature in every inertial frame, with a characteristic universal value for every kind of particle. (Note that any particle on the rim can be freshly released into free fall and then retrieved with impunity, so that the universal ratio remains valid.) The unprimed variables on the right refer to the upper-level situation (K = 1) while the primed variables on the left pertain to a given lower floor, with K monotonically increasing toward the bottom as quantitatively indicated by Einstein [1].
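The internal consistency of the linear scaling law Eq.(2) can be checked arithmetically: it leaves both J of Eq.(1) and the ratio m/q unchanged at every height. A minimal numerical sketch (the function and variable names are illustrative, not from the original):

```python
# Check that the Eq.(2) scaling leaves the angular momentum
# J = omega * m * r**2 of Eq.(1) and the ratio m/q invariant.

def olemach_scale(omega, r, m, q, K):
    """Apply the four lines of Eq.(2) for a clock-slowdown factor K >= 1."""
    return omega / K, r * K, m / K, q / K

omega, r, m, q = 2.0, 0.5, 3.0, 1.5   # arbitrary upper-level (K = 1) values
J_up = omega * m * r**2

for K in (1.0, 2.0, 10.0, 1e6):
    w2, r2, m2, q2 = olemach_scale(omega, r, m, q, K)
    J_down = w2 * m2 * r2**2
    assert abs(J_down - J_up) < 1e-9 * abs(J_up)   # J is height-independent
    assert abs(m2 / q2 - m / q) < 1e-12            # m/q stays universal
```

The factors of K cancel exactly: (ω/K)(m/K)(rK)² = ωmr², so the conservation law Eq.(1) is satisfied at every floor.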

How can we understand Eq.(2)? The first line, with ω replaced by the proportional ticking rate t of an ordinary local clock (Einstein’s original result), yields an equivalent law that reads

t’ = t/K . (2a)

with the other three lines of Eq.(2) remaining unchanged. The corresponding four-liner was described recently under the name “Telemach” (acronym for Time, Length, Mass and Charge). Telemach possessed a fairly complicated derivation [5]. The new law, Eq.(2), has the asset that its validity can be derived directly from Eq.(1).

The prediction made by the conservation law of Eq.(1) is that any change in ω automatically entails a change in r and/or m. There obviously exist infinitely many quantitative ways to ensure the constancy of J in Eq.(1) for our two-dimensionally rotating frictionless wheel. For example, when for the fun of it we keep m constant while letting only r change, the second line of Eq.(2) is bound to read r’ = r K^½ (followed by m’ = m and q’ = q). Infinitely many other guessed schemes are possible. Eq.(2) has the asset of being “simpler,” since all change ratios are linear in K; so the change law does not depend on height, and only in this linear way can grotesque consequences like divergent behavior of a single variable be avoided.
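The nonuniqueness claimed above can also be verified numerically: the hypothetical alternative scheme in which only r compensates (r’ = r K^½, with m and q fixed) conserves J just as well, but its change ratio for r is no longer linear in K. A sketch with illustrative values:

```python
import math

# Alternative J-preserving scheme: only r compensates, r' = r * sqrt(K).
def alt_scale(omega, r, m, q, K):
    return omega / K, r * math.sqrt(K), m, q

omega, r, m, q = 2.0, 0.5, 3.0, 1.5
J_up = omega * m * r**2

for K in (1.0, 4.0, 100.0):
    w2, r2, m2, q2 = alt_scale(omega, r, m, q, K)
    assert abs(w2 * m2 * r2**2 - J_up) < 1e-9 * J_up  # J still conserved
    assert abs(r2 / r - math.sqrt(K)) < 1e-12         # ratio is sqrt(K), not K
```

Both schemes satisfy Eq.(1); only the linear scheme of Eq.(2) keeps every change ratio proportional to K itself.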

Now the serious part. We start out with the third line of Eq.(2). We already know from Einstein’s paper [1] that the local photon frequency (and hence the photon mass-energy) scales linearly with 1/K. Photon mass-energy therefore necessarily obeys the third line of Eq.(2). If this is true, we can recall that according to quantum electrodynamics, photons and particles are locally inter-transformable. Einstein would not have disagreed even in 1907. A famous everyday example known from PET scans is positronium creation and annihilation. In this special case, two 511 kilo-electron-volt photons turn into – prove equivalent to – one positron plus one electron, in every local frame. Therefore we can be sure that the third line of Eq.(2) indeed represents an indubitable fact in modern physics, a fact which Einstein would have eagerly embraced.

The remaining second line of Eq.(2) could be explained by quantum mechanics as well (as done in ref. [5]). However, this is redundant now since, once the third line of Eq.(2) is accepted, the second line is fixed via Eq.(1). The fourth line follows from the third as already stated. Hence we are finished proving the correctness of the new law of Eq.(2).

What to call it? “Olemach” is a variant of “Oremaq” (which at first sight is a more natural acronym for the law of Eq.(2) in view of its four left-hand sides). But the closeness in content of Eq.(2) to Telemach [5], in which length was termed L and charge termed Ch, makes the matching abbreviation “Olemach” appear more natural.

Discussion

A new fundamental equation in physics was proposed: Eq.(2). The new equation teaches us a new fact about nature: In the accelerating rocket-ship of the young Einstein as well as in general relativity proper under “ordinary conditions” (yet to be specified in detail), angular momentum conservation plays a previously underestimated – new – role.

The most important implication of the law of Eq.(2) no doubt is the fact that the speed of light, c , has become a “global constant” in the equivalence principle. Note that the first two lines of Eq.(2) can be written

T’ = TK
r’ = rK , (2b)

with T = 1/ω and T’ = 1/ω’. One sees that r’/T’ = r/T. Therefore c-upstairs = c-downstairs = c at all heights (up to the uppermost level of an infinitely long Rindler rocket, with c = c-universal at its tip). Thus

c = globally constant. (3)

This result follows from the “linear” structure of Eq.(2). The global constancy of c had been given up explicitly by Einstein in the quoted 1907 paper [1]. (This maximally painful fact was presumably the reason why Einstein could not touch the topic of gravitation again for four years, until his visiting close friend Ehrenfest helped him re-enter the pond by engulfing him in an irresistible discussion of his rotating-disk problem.) In recompense for the new global constancy of c, it is now m and q that inherit the formerly underprivileged role of c by being “only locally but not globally constant.” It goes without saying that there are far-reaching tertiary implications (cf. [5]).
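The cancellation behind Eq.(3) can be made explicit in a few lines: with T = 1/ω, the first two lines of Eq.(2) become T’ = TK and r’ = rK (Eq. 2b), so the velocity-like ratio r/T is the same at every height. A minimal sketch (names and numbers are illustrative):

```python
# Eq.(2b): T' = T*K and r' = r*K, so the ratio r/T -- which plays the
# role of a locally measured speed of light -- is height-independent.

def scale_2b(T, r, K):
    """Apply the first two lines of Eq.(2) rewritten with T = 1/omega."""
    return T * K, r * K

T, r = 1.0, 3.0e8        # illustrative upper-level period and radius
c_up = r / T

for K in (1.0, 2.0, 1e3):
    T2, r2 = scale_2b(T, r, K)
    assert abs(r2 / T2 - c_up) < 1e-6 * c_up   # Eq.(3): c globally constant
```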

The second-most-important point is the already mentioned fact that charge q is no longer conserved in physics, in the wake of the fourth line of Eq.(2), after an uninterrupted reign of almost two centuries. This is the most unbelievable new fact. A first direct physical implication is that the charge of neutron stars needs to be recalculated in view of the “order-of-unity” gravitational redshift z = K − 1 valid on their surface. Since K is thus almost equal to 2 on this surface, the charge of neutron stars is reduced by a factor of almost 2. Even more strikingly, the electrical properties of quasars (including mini-quasars) are radically altered, so that a renewed modeling attempt is mandatory.
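Under the fourth line of Eq.(2), the charge reduction claimed here follows directly from the surface redshift via K = 1 + z. A sketch with an illustrative (not measured) redshift value:

```python
# Fourth line of Eq.(2): the distantly attributed charge is q' = q/K,
# with K = 1 + z from the gravitational redshift z of the surface.
# The numbers below are illustrative placeholders, not measured values.

def observed_charge(q_local, z):
    """Charge seen from far away, per Eq.(2), given surface redshift z."""
    K = 1.0 + z
    return q_local / K

q_local = 1.0    # local surface charge in arbitrary units
z = 1.0          # an "order-of-unity" redshift as assumed in the text
q_seen = observed_charge(q_local, z)
assert q_seen == q_local / 2.0   # reduced by a factor K = 2
```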

Thirdly, a new topological consequence of Eq.(2): “stretching” is now found added to “curvature” as an equally fundamental differential-geometric feature of nature, valid in the equivalence principle and, by implication, in general relativity. Recall that r goes to infinity in parallel with K in the second line of Eq.(2). This new qualitative finding is in accordance with Clifford’s early intuition. While an arbitrarily strong curvature remains valid near the horizon of a black hole where K diverges, the singular curvature is now accompanied by an equally singular (infinite) stretching of r. Thus a novel type of “volume conservation” (more precisely: “conservation of the curvature-over-stretching ratio”) becomes definable in general relativity, in the wake of Eq.(2).

A fourth major consequence is that some traditional historical additions to general relativity cease to hold true if Olemach (or Telemach) is valid. This “tree-trimming” affects previously accepted combinations of general relativity with electrodynamics. In particular, the famous Reissner-Nordström solution loses its physical validity in the wake of Eq.(2), for the simple reason that charge is no longer a global invariant. Further surprising implications (like a mandatory unchargedness of black holes) follow. The beautiful mass-ejecting, charge-spitting, electricity-and-magnetism-generating features of active quasars acquire a radically new interpretation worth working out.

As a fifth point, the mathematically beautiful “Kerr metric,” when used as a description of a rotating black hole, loses its physical validity by virtue of the second line of Eq.(2). The new infinite distance to the horizon, valid from the outside, is one reason. More importantly, the effective zero rotation rate at the horizon of a black hole that appears fast-rotating from the outside necessitates the formation of a topological “Reeb foliation in space-time” encircling every rotating black hole, as well as (in unfinished form) any of its never quite finished precursors [6].

There appear to be further first-magnitude consequences of the law of angular-momentum conservation (Eq. 1) when applied in the equivalence principle and its general-relativistic extensions. Thus the second line of Eq.(2) implies, via the new global constancy of c, that gravitational waves no longer exist [5]. On the other hand, temporal changes of a gravitational potential, for example through the passing-by of a celestial body, do of course remain valid and must somehow be propagated with the speed of light. (This problem is mathematically unsolved in the context of Sudarshan’s “no-interaction theorem.”) These two cases can now no longer be confused.

At this point cosmology deserves to be mentioned. The new equal rights of curving and stretching (“Yin and Yang”) suggest that only asymptotically flat solutions remain available in cosmology in the very large – a suggestion already due to Clifford, as mentioned [4]. If Olemach implies that a “big bang” (based on a non-volume-preserving version of general relativity) is ruled out mathematically, this new fact has tangible consequences. Recently, 24 “ad-hoc assumptions” implicit in the standard model of cosmology were collected [7]. Further new developments in the wake of an improved understanding of the role played by angular-momentum conservation in the equivalence principle, general relativity and cosmology are to be expected.

To conclude, a big new vista opens up if the law of angular momentum conservation is indeed valid in the equivalence principle of special relativity of 1907. An inconspicuous “linear law” (Eq. 2), re-affirming the role of Einstein’s happiest thought, imposes itself as the natural “80-meter band” of physics – or does it not?

Credit Due

The above result goes back to an inconspicuous abstract published in 2003 [8] and a maximally unassuming dissertation written in its wake [9].

Acknowledgment

I thank Ali Sanayei, Frank Kuske and Roland Wais for discussions. For J.O.R.

References

[1] A. Einstein, On the relativity principle and the conclusions drawn from it (in German). Jahrbuch der Radioaktivität 4, 411–462 (1907), p. 458; English translation: http://www.pitt.edu/~jdnorton/teaching/GR&Grav_2007/pdf/Einstein_1907.pdf , p. 306.

[2] M.A. Hohensee, S. Chu, A. Peters and H. Müller, Equivalence principle and gravitational redshift. Phys. Rev. Lett. 106, 151102 (2011). http://prl.aps.org/abstract/PRL/v106/i15/e151102

[3] C. Lämmerzahl, The equivalence principle. MICROSCOPE Colloquium, Paris, September 19, 2011. http://gram.oca.eu/Ressources_doc/EP_Colloquium_2011/2%20C%20Lammerzahl.pdf

[4] C. Lanczos, Space through the Ages: The Evolution of geometric Ideas from Pythagoras to Hilbert and Einstein. New York: Academic Press 1970, p. 222. (Abstract on p. 4 of: http://imamat.oxfordjournals.org/content/6/1/local/back-matter.pdf )

[5] O.E. Rossler, Einstein’s equivalence principle has three further implications besides affecting time: T-L-M-Ch theorem (“Telemach”). African Journal of Mathematics and Computer Science Research 5, 44-47 (2012), http://www.academicjournals.org/ajmcsr/PDF/pdf2012/Feb/9%20Feb/Rossler.pdf

[6] O.E. Rossler, Does the Kerr solution support the new “anchored rotating Reeb foliation” of Fröhlich? (25 January 2012). https://lifeboat.com/blog/2012/01/does-the-kerr-solution-support-the-new-anchored-rotating-reeb-foliation-of-frohlich
[7] O.E. Rossler, Cosmos-21: Twenty-four violations of Occam’s razor healed by statistical mechanics. (Submitted.)

[8] H. Kuypers, O.E. Rossler and P. Bosetti, Matterwave-Doppler effect, a new implication of Planck’s formula (in German). Wechselwirkung 25 (No. 120), 26–27 (2003).

[9] H. Kuypers, Atoms in the gravitational field according to the de-Broglie-Schrödinger theory: Heuristic hints at a mass and size change (in German). PhD thesis, submitted to the Chemical and Pharmaceutical Faculty of the University of Tübingen, 2005.
