Toggle light / dark theme

In 2014, I submitted my paper “A Universal Approach to Forces” to the journal Foundations of Physics. The 1999 Nobel Laureate, Prof. Gerardus ‘t Hooft, editor of this journal, had suggested that I submit this paper to the journal Physics Essays.

My previous 2009 submission to Physics Essays, “Gravitational acceleration without mass and noninertia fields,” had taken 1.5 years to review and be accepted. Therefore, I decided against Prof. Gerardus ‘t Hooft’s recommendation, as I estimated that publishing all six papers (now published as Super Physics for Super Technologies) in peer-reviewed journals would take up to 10 years and/or $20,000.

Prof. Gerardus ‘t Hooft had brought up something interesting in his 2008 paper “A locally finite model for gravity” that “… absence of matter now no longer guarantees local flatness…” meaning that accelerations can be present in spacetime without the presence of mass. Wow! Isn’t this a precursor to propulsion physics, or the ability to modify spacetime without the use of mass?

As far as I could determine, he didn’t pursue this from the perspective of propulsion physics. A year earlier in 2007, I had just discovered the massless formula for gravitational acceleration g=τc^2, published in the Physics Essays paper referred above. In effect, g=τc^2 was the mathematical solution to Prof. Gerardus ‘t Hooft’s “… absence of matter now no longer guarantees local flatness…”

Prof. Gerardus ‘t Hooft used string theory to arrive at his inference. Could he empirically prove it? No, not with strings. It took a different approach, numerical modeling within the context of Einstein’s Special Theory of Relativity (STR), to derive a mathematical solution to Prof. Gerardus ‘t Hooft’s inference.

In 2013, I attended Dr. Brian Greene’s Gamow Memorial Lecture, held at the University of Colorado Boulder. If I heard him correctly, the number of strings or string states being discovered has been increasing, and was now in the 10^500 range.

I find these two encounters telling. While not rigorously proved, I infer that (i) string theories are unable to take us down a path that can be empirically proven, and (ii) they are open-ended, i.e., they can be used to propose any specific set of outcomes from any specific set of inputs. The problem with this is that you then have to find a theory for why that specific set of inputs. I would have thought that this would be heartbreaking for theoretical physicists.

In 2013, I presented the paper “Empirical Evidence Suggest A Need For A Different Gravitational Theory” at the American Physical Society’s April conference held in Denver, CO. There I met some young physicists and asked them about working on gravity modification. One of them summarized it very well: “Do you want me to commit career suicide?” This explains why many of our young physicists continue to seek employment in the field of string theories where, unfortunately, the hope of empirically testable findings, i.e. winning the Nobel Prize, is next to nothing.

I think string theories are wrong.

Two transformations or contractions are present with motion, Lorentz-FitzGerald Transformation (LFT) in linear motion and Newtonian Gravitational Transformations (NGT) in gravitational fields.

The fundamental assumption or axiom of strings is that they expand when their energy (velocity) increases. This axiom (let’s name it the Tidal Axiom) appears to have its origins in tidal gravity, attributed to Prof. Roger Penrose: macro bodies elongate as they fall into a gravitational field. To be consistent with NGT, the atoms and elementary particles would contract in the direction of this fall. However, to be consistent with tidal gravity’s elongation, the distances between atoms in this macro body would increase at a rate consistent with the accelerations and velocities experienced by the various parts of the body. That is, as the atoms get flatter, the distances apart get longer. Therefore, for a string to be consistent with LFT and NGT, it would have to contract, not expand. One suspects that this Tidal Axiom’s inconsistency with LFT and NGT has led to an explosion of string theories, each trying to explain Nature with no joy. See my peer-reviewed 2013 paper “New Evidence, Conditions, Instruments & Experiments for Gravitational Theories,” published in the Journal of Modern Physics, for more.

The vindication of this contraction is the discovery of the massless formula for gravitational acceleration g=τc^2 using Newtonian Gravitational Transformations (NGT) to contract an elementary particle in a gravitational field. Neither quantum nor string theories have been able to achieve this, as quantum theories require point-like inelastic particles, while strings expand.

What worries me is that it takes about 70 to 100 years for a theory to evolve into commercially viable consumer products. Lasers are a good example. So, if we are tying up our brightest scientific minds with theories that cannot lead to empirical validation, can we be the primary technological superpower 100 years from now?

The massless formula for gravitational acceleration, g=τc^2, shows us that new theories on gravity and force fields will be similar to General Relativity, which is only a gravity theory. The mass source in these new theories will be replaced by field and particle motions, not mass or momentum exchange. See my Journal of Modern Physics paper referenced above on how to approach this, and Super Physics for Super Technologies on how to accomplish this.

Therefore, given that the primary axiom of string theories, the Tidal Axiom, is incorrect, it is vital that we recognize that any mathematical work derived from string theories is invalidated. And given that string theories are particle-based theories, this mathematical work is not transferable to the new relativity-type force field theories.

I forecast that both string and quantum gravity theories will be dead by 2017.

When I was seeking funding for my work, I looked at the Broad Agency Announcements (BAAs) for a category that includes gravity modification or interstellar propulsion. To my surprise, I could not find this category in any of our research organizations, including DARPA, NASA, National Science Foundation (NSF), Air Force Research Lab, Naval Research Lab, Sandia National Lab or the Missile Defense Agency.

So what are we going to do when our young graduates do not want to or cannot be employed in string theory disciplines?

(Originally published in the Huffington Post)

Gravity modification, the scientific term for antigravity, is the ability to modify the gravitational field without the use of mass. Thus legacy physics, the RSQ (Relativity, String & Quantum) theories, cannot deliver either the physics or technology as these require mass as their field origin.

Ron Kita, who recently received the first US patent in recent history related to gravity modification (8,901,943), introduced me to Dr. Takaaki Musha some years ago. Dr. Musha has a distinguished history of researching the Biefeld-Brown effect in Japan, going back to the late 1980s, and has worked for the Ministry of Defense and Honda R&D.

Dr. Musha is currently editing New Frontiers in Space Propulsion (Nova Publishers) expected later this year. He is one of the founders of the International Society for Space Science whose aim is to develop new propulsion systems for interstellar travel.

Wait. What? Honda? Yes. For us Americans, it is unthinkable for General Motors to investigate gravity modification, and yet here was Honda, in the 1990s no less, researching this topic.

In recent years Biefeld-Brown has gained some notoriety as an ionic wind effect. I, too, was of this opinion until I read Dr. Musha’s 2008 paper “Explanation of Dynamical Biefeld-Brown Effect from the Standpoint of ZPF field.” Reading this paper I realized how thorough, detailed and meticulous Dr. Musha was. Quoting selected portions from Dr. Musha’s paper:

In 1956, T.T. Brown presented a discovery known as the Biefeld-Brown effect (abbreviated B-B effect): that a sufficiently charged capacitor with dielectrics exhibited unidirectional thrust in the direction of the positive plate.

From the 1st of February until the 1st of March in 1996, the research group of the HONDA R&D Institute conducted experiments to verify the B-B effect with an improved experimental device which rejected the influence of corona discharges and electric wind around the capacitor by setting the capacitor in the insulator oil contained within a metallic vessel … The experimental results measured by the Honda research group are shown …

V. Putz and K. Svozil,

… predicted that the electron experiences an increase in its rest mass under an intense electromagnetic field …

and the equivalent

… formula with respect to the mass shift of the electron under intense electromagnetic field was discovered by P. Milonni …

Dr. Musha concludes his paper with,

… The theoretical analysis result suggests that the impulsive electric field applied to the dielectric material may produce a sufficient artificial gravity to attain velocities comparable to chemical rockets.

Given Honda R&D’s experimental research findings, this is a major step forward for the Biefeld-Brown effect, and Biefeld-Brown is back on the table as a potential propulsion technology.

We learn two lessons.

First, any theoretical analysis of an experimental result is advanced or handicapped by the contemporary physics. While the experimental results remain valid, at the time of publication zero point fluctuation (ZPF) was the appropriate theory. However, per Prof. Robert Nemiroff’s stunning 2012 discovery that quantum foam, and thus ZPF, does not exist, the theoretical explanation for the Biefeld-Brown effect needs to be reinvestigated in light of Putz, Svozil and Milonni’s research findings. This is not an easy task, as that part of the foundational legacy physics is now void.

Second, it took decades of Dr. Musha’s own research to correctly advise Honda R&D on how to conduct this type of experimental research with great care and attention to detail. I would advise anyone seriously considering Biefeld-Brown experiments to talk to Dr. Musha first.

Another example of similar lessons relates to the Finnish/Russian Dr. Podkletnov’s gravity-shielding spinning superconducting ceramic disc, i.e., an object placed above this spinning disc would lose weight.

I spent years reading and rereading Dr. Podkletnov’s two papers (the 1992 “A Possibility of Gravitational Force Shielding by Bulk YBa2Cu3O7-x Superconductor” and the 1997 “Weak gravitational shielding properties of composite bulk YBa2Cu3O7-x superconductor below 70K under e.m. field”) before I fully understood all the salient observations.

Any theory on Dr. Podkletnov’s experiments must explain four observations: the stationary disc weight loss, the spinning disc weight loss, the weight loss increase along a radial distance, and the weight increase. Other than in my own work, I haven’t seen anyone else attempt to explain all four observations within the context of the same theoretical analysis. The most likely inference is that legacy physics does not have the tools to explore Podkletnov’s experiments.

But it gets worse.

Interest in Dr. Podkletnov’s work was destroyed by two papers claiming null results: first, Woods et al. (the 2001 “Gravity Modification by High-Temperature Superconductors”) and second, Hathaway et al. (the 2002 “Gravity Modification Experiments Using a Rotating Superconducting Disk and Radio Frequency Fields”). Reading through these papers, it was very clear to me that neither team was able to faithfully reproduce Dr. Podkletnov’s work.

My analysis of Dr. Podkletnov’s papers shows that the disc is electrified and bi-layered: the top side is superconducting and the bottom non-superconducting. Therefore, to get gravity-modifying effects, the key to experimental success is that the bottom side needs to be much thicker than the top. Without getting into too much detail, this would introduce asymmetrical field structures, and gravity-modifying effects.

The necessary dialog between theoretical explanations and experimental insight is vital to any scientific study. Without this dialog, confounding obstructions arise: theoretically impossible but the experiments work, or theoretically possible but the experiments don’t work. With respect to Biefeld-Brown, Dr. Musha has completed the first iteration of this dialog.

Above all, we cannot be sure what we have discovered is correct until we have tested these discoveries under different circumstances. This is especially true for future propulsion technologies where we cannot depend on legacy physics for guidance, and essentially don’t understand what we are looking for.

In the current RSQ (pronounced risk) theory climate, propulsion physics is not a safe career path to select. I do hope that serious researchers reopen the case for both Biefeld-Brown and Podkletnov experiments, and the National Science Foundation (NSF) leads the way by providing funding to do so.

(Originally published in the Huffington Post)

I first met Dr. Young Bae, NIAC Fellow, at the Defense Advanced Research Projects Agency (DARPA) sponsored 2011, 100 Year Starship Study (100YSS) at Orlando, Fla. Many of us who were there had responded to the NASA/DARPA Tactical Technology Office’s RFP to set up an organization “… to develop a viable and sustainable non-governmental organization for persistent, long-term, private-sector investment into the myriad of disciplines needed to make long-distance space travel viable …”

Yes, both DARPA and NASA are at some level interested in interstellar propulsion. Mine was one of approximately 35 (rumored number) teams from around the world vying for this DARPA grant, and Dr. Bae was with a competing team. I presented the paper “Non-Gaussian Photon Probability Distributions”, and Dr. Bae presented “A Sustainable Developmental Pathway of Photon Propulsion towards Interstellar Flight”. These were early days, the ground zero of interstellar propulsion, if you would.

Dr. Bae has been researching Photon Laser Thrust (PLT) for many years. A video of his latest experiment is available at the NASA website or on YouTube. PLT uses light photons to move an object by colliding with (i.e. transferring momentum to) the object. The expectation is that this technology will eventually be used to propel spacecraft. His most recent experiments demonstrate the horizontal movement of a 1-pound weight. This is impressive. I expect to see much more progress in the coming years.
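For the curious, the momentum transfer behind PLT follows from the standard photon relations, p = h/λ per photon and a force of P/c for an absorbed beam of power P (2P/c on reflection). A minimal sketch of these textbook relations (this is not a model of Dr. Bae’s specific apparatus):

```python
# Radiation-pressure thrust from photon momentum (standard relations,
# not Dr. Bae's apparatus): each photon carries p = E/c = h/lambda,
# so a beam of power P delivers force P/c on absorption, 2P/c on reflection.

h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s

def photon_momentum(wavelength_m):
    """Momentum of a single photon, p = h / lambda (kg*m/s)."""
    return h / wavelength_m

def beam_thrust(power_w, reflective=True):
    """Force exerted by a beam of the given power (newtons)."""
    return (2.0 if reflective else 1.0) * power_w / c

print(photon_momentum(1.064e-6))   # ~6.2e-28 kg*m/s for a 1064 nm photon
print(beam_thrust(1e3))            # ~6.67e-6 N for a reflected 1 kW beam
```

The tiny numbers explain why Dr. Bae recycles photons between mirrors: each bounce transfers another 2P/c of force.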

At one level, Dr. Bae’s experiments are confirmation that Bill Nye’s Light Sail (which very unfortunately lost communications with Earth) will work.

At another level, one wonders why or how the photon, a particle without mass, has momentum that is proportional to the photon’s frequency or energy, a momentum observable in Dr. Bae’s and other experiments. This is not a question that contemporary physics asks. Einstein was that meticulous when he derived the Lorentz-FitzGerald Transformations (LFT) from first principles for his Special Theory of Relativity (STR). Therefore, if you think about it, and if we dare to ask the sacrilegious question: does this mean that momentum is a particle’s elementary property that merely appears to be related to mass? What would we discover if we could answer the question, why does momentum exist in both mass and massless particles? Sure, the shortcut, don’t-bother-me answer is mass-energy equivalence. But why?

At the other end of photon momentum based research is the EmDrive invented by Roger Shawyer. He clearly states that the EmDrive is due to momentum exchange and not due to “quantum vacuum plasma effects”. To vindicate his claims Boeing has received all of his EmDrive designs and test data. This is not something that Boeing does lightly.

In this 2014 video a member of NASA’s Eagleworks explains that the EmDrive (renamed q-thruster) pushes against quantum vacuum, the froth of particle and antiparticle pairs in vacuum. Which raises the question, how can you push against one type and not the other? In 2011, using NASA’s Fermi Gamma-ray Space Telescope photographs, Prof. Robert Nemiroff of Michigan Technological University, made the stunning discovery that this quantum foam of particle and antiparticle pairs in a vacuum, does not exist. Unfortunately, this means that the NASA Eagleworks explanation clearly cannot be correct.

So how does the EmDrive work?

In my 2012 book An Introduction to Gravity Modification, I had explained the importance of asymmetrical fields and designs for creating propellantless engines. For example, given a particle in a gravitational field and with respect to this field’s planetary mass source, this particle will observe an asymmetrical gravitational field. The near side of this particle will experience a stronger field than the far side, and thus the motion towards the planetary mass. Granted that this difference is tiny, it is not zero. This was how I was able to determine the massless formula for gravitational acceleration, g=τc^2, where tau τ is the change in the time dilation transformation (dimensionless LFT) divided by that distance. The error in the modeled gravitational acceleration is less than 6 parts per million. Thus validating the asymmetrical approach.
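A rough numerical cross-check of that 6-parts-per-million claim can be set up with standard inputs. The sketch below uses the conventional Schwarzschild weak-field time-dilation factor sqrt(1 − 2GM/rc²) purely as an illustrative assumption (it is not my full numerical model): the change in that dimensionless factor per unit distance, multiplied by c², lands on the familiar 9.8 m/s² at Earth’s surface.

```python
# Rough check that tau * c^2 reproduces Newtonian g near Earth's surface,
# where tau is the change in the dimensionless time-dilation factor per
# unit distance. The Schwarzschild weak-field factor sqrt(1 - 2GM/(r c^2))
# is used as an illustrative assumption, not the author's own model.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # Earth's mass, kg
R = 6.371e6          # Earth's mean radius, m
c = 2.99792458e8     # speed of light, m/s

def dilation(r):
    """Dimensionless gravitational time-dilation factor at radius r."""
    return math.sqrt(1.0 - 2.0 * G * M / (r * c * c))

dr = 1.0e3                                     # 1 km step (limits round-off)
tau = (dilation(R + dr) - dilation(R)) / dr    # change per unit distance
g_from_tau = tau * c * c                       # ~9.8 m/s^2
g_newton = G * M / R**2                        # ~9.8 m/s^2

print(g_from_tau, g_newton)
```

The step size matters: the dilation factor differs from 1 by less than a part per billion, so too fine a step loses the difference to floating-point round-off.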

In very basic terms, Shawyer’s New Scientist paper suggests that it is the conical shape of the EmDrive that causes microwave photons to exhibit asymmetrical momentum exchange. The side of the conical structure with the larger cross section has more momentum exchange than the side with the smaller cross section. The difference in this momentum exchange is evidenced as a force.

However, as Dr. Bae points out, from the perspective of legacy physics, conservation of momentum is broken. If not broken, then there are no net forces. If broken, then one observes a net force. Dr. Beckwith (Prof., Chongqing University, China) confirms that Dr. Bae is correct, but the question that needs to be addressed is, could there be any additional effects which would lead to momentum conservation being violated? Or apparently violated?

To be meticulous, since energy can be transmuted into many different forms, we can ask another sacrilegious question: can momentum be converted into something else? Into a wave function attribute, for example, in a reversible manner? After all, the massless photon’s momentum is directly proportional to its frequency. We don’t know. We don’t have either the theoretical or experimental basis for answering this question in the positive or the negative. Note, this is not the same as perpetual motion machines, as conservation laws still hold.

Shawyer’s work could be confirmation of these additional effects, asymmetrical properties and momentum-wave-function-attribute interchangeability. If so, the future of propulsion technologies lies in photon based propulsion.

Given that Shawyer’s video demonstrates a moving EmDrive, the really interesting question is, can we apply this model to light photons? Or for that matter, any other type of photons, radio, infrared, light, ultraviolet and X-Rays?

(Originally published in the Huffington Post)

Recent revelations of NASA’s Eagleworks EmDrive caused a sensation on the internet as to why interstellar propulsion can or cannot be possible. The naysayers pointed to shoddy engineering and impossible physics, and the ayes pointed to the physics of the Alcubierre-type warp drives based on General Relativity.

So what is it? Are warp drives feasible? The answer is both yes and no. Allow me to explain.

The empirical evidence of the 1887 Michelson-Morley experiment, explained by the Lorentz-FitzGerald Transformations (LFT) proposed by FitzGerald in 1889 and Lorentz in 1892, shows beyond a shadow of doubt that nothing can have a motion with a velocity greater than the velocity of light. In 1905, Einstein derived LFT from first principles as the basis for the Special Theory of Relativity (STR).
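The quantitative content of LFT is the contraction factor sqrt(1 − v²/c²), which vanishes as v approaches c while the associated energy factor γ diverges. A quick illustrative sketch:

```python
# The Lorentz-FitzGerald contraction factor sqrt(1 - v^2/c^2) and the
# gamma factor 1/sqrt(1 - v^2/c^2): as v -> c, contraction -> 0 and
# gamma diverges, the quantitative sense in which c is a hard limit.
import math

c = 2.99792458e8  # speed of light, m/s

def contraction(v):
    return math.sqrt(1.0 - (v / c) ** 2)

def gamma(v):
    return 1.0 / contraction(v)

for frac in (0.1, 0.5, 0.9, 0.99, 0.999):
    v = frac * c
    print(f"v = {frac:>5}c  contraction = {contraction(v):.4f}  gamma = {gamma(v):8.2f}")
```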

So if nothing can travel faster than light, why does the Alcubierre-type warp drive matter? The late Prof. Morris Kline explained in his book, Mathematics: The Loss of Certainty, that mathematics has become so powerful that it can now be used to prove anything, hence the loss of certainty in the value of these mathematical models. The antidote for this is to stay close to the empirical evidence.

My good friend Dr. Andrew Beckwith (Prof., Chongqing University, China) explains that there are axiomatic problems with the Alcubierre-type warp drive theory. Basically, the implied axioms (or starting assumptions of the mathematics) require a multiverse, i.e. multiple universes, but the mathematics is based on a single universe. Thus, even though the mathematics appears to be sound, its axioms contradict it. As Dr. Beckwith states, “reductio ad absurdum”. For now, this unfortunately means that there is no such thing as a valid warp drive theory. LFT prevents this.

For a discussion of other problems in physical theories please see my peer reviewed 2013 paper “New Evidence, Conditions, Instruments & Experiments for Gravitational Theories” published in the Journal of Modern Physics. In this paper I explain how General Relativity can be used to propose some very strange ideas, and therefore, claiming that something is consistent with General Relativity does not always lead to sensible outcomes.

The question we should be asking is not, can we travel faster than light (FTL) but how do we bypass LFT? Or our focus should not be how to travel but how to effect destination arrival.

Let us take one step back. Since Einstein, physicists have been working on a theory of everything (TOE). Logic dictates that for a true TOE, the TOE must be able to propose from first principles, why conservation of mass-energy and conservation of momentum hold. If these theories cannot, they cannot be TOEs. Unfortunately all existing TOEs have these conservation laws as their starting axioms, and therefore, are not true TOEs. The importance of this requirement is that if we cannot explain why conservation of momentum is true, like Einstein did with LFT, how do we know how to apply this in developing interstellar propulsion engines? Yes, we have to be that picky, else we will be throwing millions if not billions of dollars in funding into something that probably won’t work in practice.

Is a new physics required to achieve interstellar propulsion? Does a new physics exist?

In 2007, after extensive numerical modeling, I discovered the massless formula for gravitational acceleration, g=τc^2, where τ (tau) is the change in the time dilation transformation (dimensionless LFT) divided by that distance. (The error in the modeled gravitational acceleration is less than 6 parts per million.) This proves that mass is not required for gravitational theories and falsifies the RSQ (Relativity, String & Quantum) theories on gravity. There are two important consequences of this finding: (1) we now have a new propulsion equation, and (2) legacy or old physics cannot deliver.

But gravity modification per g=τc^2 is still based on motion, and therefore, constrained by LFT. That is, gravity modification cannot provide for interstellar propulsion. For that we require a different approach, the new physics.

At least from the perspective of propulsion physics, having a theoretical approach for a single formula g=τc^2 would not satisfy the legacy physics community that a new physics is warranted or even exists. Therefore, based on my 16 years of research involving extensive numerical modeling with the known empirical data, in 2014, I wrote six papers laying down the foundations of this new physics:

1. “A Universal Approach to Forces”: There is a 4th approach to forces that is not based on Relativity, String or Quantum (RSQ) theories.
2. “The Variable Isotopic Gravitational Constant”: The Gravitational Constant G is not a constant and is independent of mass; therefore, gravity modification without particle physics is feasible.
3. “A Non Standard Model Nucleon/Nuclei Structure”: Falsifies the Standard Model and proposes Variable Electric Permittivity (VEP) matter.
4. “Replacing Schrödinger”: Proposes that the Schrödinger wave function is a good but not an exact model.
5. “Particle Structure”: Proposes that the Standard Model be replaced with the Component Standard Model.
6. “Spectrum Independence”: Proposes that photons are spectrum independent, and how to accelerate nanowire technology development.

This work, published under the title Super Physics for Super Technologies, is available for all to review, critique and test for validity. (A non-intellectual, emotional gut response is not a valid criticism.) That is, the new physics does exist. And the relevant outcome per interstellar propulsion is that subspace exists, and this is how Nature implements probabilities. Note that neither quantum nor string theories ask the question, how does Nature implement probabilities? And therefore, they are unable to provide an answer. The proof of subspace can be found in how the photon’s electromagnetic energy is conserved inside the photon.

Subspace is probabilistic and therefore does not have the time dimension. In other words, destination arrival is not constrained by LFT and motion-based travel, but is effected by probabilistic localization. We therefore have to figure out navigation in subspace, or vectoring and modulation. Vectoring is the ability to determine direction, and modulation is the ability to determine distance. This approach is new and has an enormous potential of being realized, as it is not constrained by LFT.

Yes, interstellar propulsion is feasible, but not by means of the warp drives we understand today. As of 2012, there are only about 50 of us on this planet who are working, or have worked, toward solving the gravity modification and interstellar propulsion challenge.

So the question is not, whether gravity modification or interstellar propulsion is feasible, but will we be the first nation to invent this future?

(Originally published in the Huffington Post)

“Following these rules, we’ve demonstrated that we can make all the universal logic gates used in electronics, simply by changing the layout of the bars on the chip,” said Katsikis. “The actual design space in our platform is incredibly rich. Give us any Boolean logic circuit in the world, and we can build it with these little magnetic droplets moving around.”

The current paper describes the fundamental operating regime of the system and demonstrates building blocks for synchronous logic gates, feedback and cascadability – hallmarks of scalable computation. A simple state machine including 1-bit memory storage (known as a “flip-flop”) is also demonstrated using the above basic building blocks.
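The universality claim is easy to sketch in software, independent of the droplet hardware: NAND alone generates every Boolean gate, and cascading gates yields 1-bit memory. The model below is a hypothetical illustration of the logic only, not of the magnetic-droplet chip:

```python
# Sketch of the universality claim: every Boolean gate can be built
# from NAND alone, and cascaded gates yield 1-bit memory (an SR latch).
# This models the logic only; it is not the magnetic-droplet hardware.

def NAND(a, b):
    return 1 - (a & b)

def NOT(a):      return NAND(a, a)
def AND(a, b):   return NOT(NAND(a, b))
def OR(a, b):    return NAND(NOT(a), NOT(b))

def sr_latch(s, r, q):
    """One settling step of a NAND SR latch (active-low set/reset inputs)."""
    for _ in range(2):                 # iterate until the feedback settles
        q_bar = NAND(r, q)
        q = NAND(s, q_bar)
    return q

q = 0
q = sr_latch(0, 1, q)   # set   -> q = 1
q = sr_latch(1, 1, q)   # hold  -> q stays 1
print(q)                # 1
q = sr_latch(1, 0, q)   # reset -> q = 0
print(q)                # 0
```

The hold case is the “memory”: with both inputs inactive, the cross-coupled feedback preserves the stored bit, which is the flip-flop behavior the excerpt describes.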


Consider how many natural laws and constants—both physical and chemical—have been discovered since the time of the early Greeks. Hundreds of thousands of natural laws have been unveiled in man’s never ending quest to understand Earth and the universe.

I couldn’t name 1% of the laws of nature and physics. Here are just a few that come to mind from my high school science classes. I shall not offer a bulleted list, because that would suggest that these random references to laws and constants are organized or complete. It doesn’t even scratch the surface…

Newton’s law of force (F=ma), Newton’s law of gravity, the electromagnetic force, the strong force, the weak force, Avogadro’s constant, Boyle’s law, the Lorentz transformation, Maxwell’s equations, the laws of thermodynamics, E=mc², particles behave as waves, superposition of waves, the universe’s inflation rate, for every action… etc., etc.

For some time, physicists, astronomers, chemists, and even theologians have pondered an interesting puzzle: Why is our universe so carefully tuned for our existence? And not just our existence—after all, it makes sense that our stature, our senses and things like muscle mass and speed have evolved to match our environment. But here’s the odd thing—if even one of a great many laws, properties or constants were off by even a smidgen, the whole universe could not exist—at least not in a form that could support life as we imagine it! Even the laws and numbers listed above. All of creation would not be here if any of these were just a bit off…

Well, there might be something out there, but it is unlikely to have resulted in life—not even life very different than ours. Why? Because without the incredibly unique balance of physical and chemical properties that we observe, matter would not coalesce into stars, planets would not crunch into balls that hold an atmosphere, and they would not clear their path to produce a stable orbit for eons. Compounds and tissue would not bind together. In fact, none of the things that we can imagine could exist.

Of course, theologians have a pat answer. In one form or another, religions answer all of cosmology by stating a matter of faith: “The universe adheres to God’s design, and so it makes sense that everything works”. This is a very convenient explanation, because these same individuals forbid the obvious questions: ‘Who created God?’ and ‘What existed before God?’ Just ask Bill Nye or Bill Maher. They have accepted offers to debate those who feel that God created Man instead of the other way around.

Scientists, on the other hand, take pains to distance themselves from theological implications. They deal in facts and observable phenomena. Then they form a hypothesis and begin testing. That’s what we call the scientific method.

If any being could evolve without the perfect balance of laws and constants that we observe, it would be a single intelligence distributed amongst a cold cloud of gas. In fact, a universe that is not based on many of the observed numbers (including the total mass of everything in existence) probably could not be stable for very long.

Does this mean that it’s all about you?! Are you, dear reader, the only thing in existence? A living testament to René Descartes?

Don’t discount that notion. Cosmologists acknowledge that your own existence is the only thing of which you can be absolutely sure. (“I think. Therefore, I am”). If you cannot completely trust your senses as a portal to reality, then no one else provably exists. But most scientists (and the rest of us, too) are willing to assume that we come from a mother and father and that the person in front of us exists as a separate thinking entity. After all, if we can’t start with this assumption, then the rest of physics and reality hardly matters, because we are too far removed from the ‘other’ reality to even contemplate what is outside of our thoughts.

Two questions define the field of cosmology—How did it all begin and why does it work? Really big questions are difficult to test, and so we must rely heavily on tools and observation:

• Is the Big Bang a one-off event, or is it one in a cycle of recurring events?
• Is there anything beyond the observable universe? (something apart from the Big Bang)
• Does natural law observed in our region of the galaxy apply everywhere?
• Is there intelligent life beyond Earth?

Having theories that are difficult to test does not mean that scientists aren’t making progress. Even in the absence of frequent testing, a lot can be learned from observation. Prior to 1992, no planet had ever been observed or detected outside of our solar system. For this reason, we had no idea of the likelihood that planets form and take orbit around stars.

Today, almost 2000 exoplanets have been discovered with 500 of them belonging to multiple planetary systems. All of these were detected by indirect evidence—either the periodic eclipsing of light from a star, which indicates that something is in orbit around it, or subtle wobbling of the star itself, which indicates that it is shifting around a shared center of gravity with a smaller object. But wait! Just this month, a planet close to our solar system (about 30 light years away) was directly observed. This is a major breakthrough, because it gives us an opportunity to perform spectral analysis of the planet and its atmosphere.
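A back-of-the-envelope illustration of why transits are detectable at all: the fractional dimming of the star is roughly (R_planet / R_star)², assuming a full transit across a uniformly bright stellar disc (real analyses also correct for limb darkening):

```python
# Transit-method observable in one line: fractional dimming of a star
# is roughly (R_planet / R_star)^2, assuming a full transit and a
# uniform stellar disc (real pipelines correct for limb darkening).

R_SUN     = 6.957e8    # m
R_JUPITER = 6.9911e7   # m
R_EARTH   = 6.371e6    # m

def transit_depth(r_planet, r_star=R_SUN):
    return (r_planet / r_star) ** 2

print(f"Jupiter-size planet: {transit_depth(R_JUPITER):.4%} dimming")  # ~1%
print(f"Earth-size planet:   {transit_depth(R_EARTH):.4%} dimming")    # ~0.008%
```

The hundredfold gap between those two numbers is why the earliest transit detections were all Jupiter-class worlds.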

Is this important? That depends on your goals and point of view. For example, one cannot begin to speculate on the chances for intelligent life if we have no idea how common or unusual it is for a star to be orbited by planets. It is a critical factor in the Drake Equation. (I am discounting the possibility of a life form living within a sun, not because it is impossible or because I am a human chauvinist, but because it would not likely be a life form that we will communicate with in this millennium).
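To make the role of that factor concrete, here is a minimal sketch of the Drake Equation, N = R* · fp · ne · fl · fi · fc · L. The parameter values below are purely illustrative guesses, not measured quantities; the point is that fp, the fraction of stars with planets, is exactly the term that exoplanet surveys have begun to pin down since 1992.

```python
# A minimal sketch of the Drake Equation, N = R* * fp * ne * fl * fi * fc * L.
# All parameter values below are illustrative guesses, not measured quantities.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimate N, the number of communicative civilizations in our galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake(r_star=1.0,     # new stars formed per year in the galaxy
          f_p=0.5,        # fraction of stars with planetary systems
          n_e=2,          # habitable planets per such system
          f_l=0.5,        # fraction of those on which life arises
          f_i=0.1,        # fraction of those evolving intelligence
          f_c=0.1,        # fraction of those developing detectable technology
          lifetime=1000)  # years such a civilization remains detectable
print(n)  # with these illustrative inputs: 5.0
```

Notice how directly the estimate scales with f_p: halve the fraction of stars with planets and the expected number of civilizations halves with it.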

Of course, progress sometimes raises completely new questions. In the 1970s, Frank Drake and Carl Sagan began exploring the changing rate of expansion between galaxies. This created an entirely new question and field of study related to the search for dark matter.

Concerning the titular question, “Why is the universe fine-tuned for life?”, cosmologist Stephen Hawking offered an explanation last year that might help us to understand. At last, we have a theory, even if it is a difficult one to test. The media did their best to make Professor Hawking’s explanation digestible, explaining it something like this [I am paraphrasing]:

There may be multiple universes. We observe only the one in which we exist. Since our observations are limited to a universe with physical constants and laws that resulted in us, along with stars, planets, gravity and atmospheres, it seems that the conditions for life are all too coincidental. But if we imagine countless other universes outside of our realm (very few with life-supporting properties), then the coincidence can be dismissed. In effect, as observers, we are regionalized into a small corner.

The press picked up on this explanation with an unfortunate headline, blaring that the famous professor had proven that God does not exist. Actually, Hawking said that miracles stemming from religious beliefs are “not compatible with science”. Although he is an atheist, he said nothing about God not existing. He simply offered a theory to explain an improbable coincidence.

I am not a cosmologist. I have only recently come to understand that cosmology is the science of origins, drawing on astronomy, particle physics, chemistry and philosophy. (But not religion—please don’t go there!). If my brief introduction piques your interest, a great place to spread your wings is with Tim Maudlin’s recent article in Aeon Magazine, The Calibrated Cosmos. Tim succinctly articulates the problem of a fine-tuned universe in the very first paragraph:

“Theories now suggest that the most general structural elements of the universe — the stars and planets, and the galaxies that contain them — are the products of finely calibrated laws and conditions that seem too good to be true.”

And: “Had the constants of nature taken slightly different values, we would not be here.”

The article delves into the question thoroughly, while still reading at a level commensurate with Sunday drivers like you and me. If you write to Tim, tell him I sent you. Tell him that his beautifully written article has added a whole new facet to my appreciation for being!

Philip Raymond is Co-Chair of The Cryptocurrency Standards Association and CEO of Vanquish Labs.
This is his fourth article for Lifeboat Foundation and his first as an armchair cosmologist.

Related: Quantum Entanglement: EPR Paradox

When I was a freshman at Cornell University some decades ago, I had a memorable teaching assistant for CS100, the entry level computer programming course taken by nearly every student in Engineering or Arts & Sciences. Gilles Brassard, a French Canadian, is now a chaired math professor at Université de Montréal and a preeminent cryptographer. He has also been inducted into the Royal Order of Canada. I am told that this is a bit like being knighted. In fact, this highest of civilian honors was established by Queen Elizabeth.

The author with Gilles Brassard in 2014

Gilles was a graduate student at Cornell in the mid ’70s. Back then, public key encryption was a radical concept. Named for the three MIT professors who described it, RSA is now at the heart of every secure Internet transaction. Yet the new generation of cryptographers refers to RSA as “classical cryptography”. The radicals have moved on to quantum cryptography. Gilles and his collaborator, Charles Bennett, are the pioneers and leaders in this burgeoning field. No one else is even a pretender to the throne.

In its simplest terms, quantum cryptography achieves a secure communication channel because it relies on a stream of individual particles or “quanta” to convey information. If information is sent without any fat at all—just the minimum physics that can support the entropy—then any eavesdropping or rerouting of a message can be detected by the recipient. Voila! Perfect authentication, fidelity and security. Communication is secure because any attack can be detected.
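Bennett and Brassard’s BB84 protocol is the canonical form of this idea. Below is a toy classical simulation, a sketch only: ordinary random bits and bases stand in for photon polarizations, and the numbers (2000 photons, the seed, the “+x” basis labels) are illustrative assumptions. What it does show faithfully is *why* eavesdropping is detectable: an eavesdropper must measure each photon in a guessed basis, and wrong guesses scramble roughly a quarter of the bits that the sender and receiver later compare.

```python
import random

# Toy sketch of BB84-style quantum key distribution (Bennett & Brassard).
# Classical random numbers stand in for quantum states; the point is only to
# show why an eavesdropper's measurements raise the error rate that the
# legitimate parties can check. All names and parameters are illustrative.

def bb84(n_photons=2000, eavesdrop=False, rng=random.Random(42)):
    alice_bits  = [rng.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [rng.choice("+x") for _ in range(n_photons)]  # rectilinear / diagonal
    photons = list(zip(alice_bits, alice_bases))

    if eavesdrop:  # Eve measures each photon in a random basis, disturbing it
        measured = []
        for bit, basis in photons:
            eve_basis = rng.choice("+x")
            eve_bit = bit if eve_basis == basis else rng.randint(0, 1)
            measured.append((eve_bit, eve_basis))  # photon re-emitted in Eve's basis
        photons = measured

    bob_bases = [rng.choice("+x") for _ in range(n_photons)]
    bob_bits = [bit if basis == b_basis else rng.randint(0, 1)
                for (bit, basis), b_basis in zip(photons, bob_bases)]

    # Keep only positions where Alice's and Bob's bases matched, then compare:
    # any disagreement reveals tampering on the channel.
    kept = [(a, b) for a, ab, b, bb in
            zip(alice_bits, alice_bases, bob_bits, bob_bases) if ab == bb]
    errors = sum(1 for a, b in kept if a != b)
    return errors / len(kept)

print(bb84(eavesdrop=False))  # 0.0 error rate: the channel is clean
print(bb84(eavesdrop=True))   # ~0.25 error rate: eavesdropping is detectable
```

The design choice mirrors the physics: Eve cannot copy a photon she has not measured, and measuring in the wrong basis destroys the original state, which is exactly the “no fat at all” property described above.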

But when you begin to experiment with gating individual quanta of anything, you are typically working within a world of minute, elementary particles—things like photons or electrons with properties that change as they are measured. And the issue of measurement doesn’t just invoke Heisenberg (he demonstrated that measurement changes the property being measured), but also the superpositioning of states that resolve only when they are observed. Say, Whaaht?!

Perhaps we are getting ahead of ourselves. The goal of this article is to share strange, thoroughly unexpected, awe-inspiring, yet repeatable experimental results achieved by quantum physicists. I am no expert, but given a sufficiently lay explanation, marvel with me at a baffling outcome. It will shake your perception of reality. It suggests that science and math are not as black and white as you believed.

The EPR Paradox
Albert Einstein worked for years to develop an understanding of entangled particles that was consistent with his earlier work in special relativity. By the mid 20th century, physicists were reasonably certain that information could never be conveyed faster than light. It’s not just the math that convinced them. It was the crazy things that would ensue if light speed were not a universal speed limit…

If information—mass or energy, particle or wave, substantive or pure thought—if any of these things travels faster than light, then given the time dilation of things moving in relation to each other, very unlikely things would be possible. For example:

  • If information travels faster than light, it would be possible to deliver a reply to a message that had not yet been sent
  • If information travels faster than light, it would be possible to send a message back in time and prevent your parents from meeting each other

So the math that imposes a universal speed limit also preserves our concept of reality. Sure, we can accept that energy and mass are fungible. We can even accept that distance and time are malleable. But time paradoxes defy common sense and beg for a solution that prevents them altogether.

When the most reasonable explanation of quantum entanglement collided with our understanding of special relativity, efforts to reconcile the two theories or arrive at a unifying model became known as the EPR Paradox, named after Einstein and his colleagues, Boris Podolsky and Nathan Rosen. Given assumptions considered axiomatic, the math suggests that information passes between entangled particles faster than light — in fact, instantaneously and at any distance. Near the end of his life, Einstein reluctantly acknowledged that there must be an error in the math, or in the basic assumptions, or that some undiscovered, rational explanation could resolve the paradox. Ultimately, he dismissed the notion of particles synchronously and instantly communicating with each other as “spooky action at a distance”. Along with his other memorable quote, “God doesn’t play dice with the world”, the phrase is indelibly inscribed on the great physicist’s epitaph.

Before humans could travel to the moon (about 1.3 light seconds from earth), researchers tried to test Einstein’s theory. But even with precise instruments to measure time and distance, it was too difficult in the 1930s and 40s to create, transport and measure characteristics of elementary particles and then discriminate their behavior in such close proximity.

Back then, Einstein assumed that we would measure wave collapse positions or particle momentum. But today, scientists are more keen on measuring other quantum phenomena: particle spin, photon polarization, or particle destruction. These properties are more easily changed and measured. In the 1960s and 70s, the EPR paradox returned to popular inquiry when physicist John Stewart Bell, and later Lamehi-Rachti and Mittig, conducted experiments that supported Einstein’s original thesis. That is, faster-than-light communication seemed to take place.

So, given appropriate experimental methodology, could it actually be possible to receive a package before it was sent? This is, after all, the disturbing conclusion of faster-than-light communication.

Probably not. But the experimental result is more shocking than “Yes” and way more interesting than “No”. In fact, the outcomes of recent experiments force us to confront our understanding of causality. They make us wonder if reality is an illusion. They shatter our concept of time and space even more than Einstein’s more famous theory of relativity.

Since measurements made in nanoseconds are difficult to visualize, I shall illustrate the experiment and the surprising results by stretching the distance involved. But this is not a metaphor. The actual results play out as described here.

The Experiment

Suppose that I create a pair of entangled particles. It doesn’t matter what this means or how I accomplish the feat. I wish only to test if a change to one particle affects the other. But more specifically, I want to separate them by a great distance and determine if a change to the local particle influences the remote particle instantly, or at least faster than accounted for by a light-speed signal between the two of them.

If you could construct such an experiment, it seems reasonable to assume that you would observe one of four possible outcomes. The results should demonstrate that the remote particle is either:

  • not affected at all
  • affected – apparently instantly or nearly in synchrony with the first particle
  • affected – but only after a delay in which a light speed signal could reach it
  • uncorrelated or inconsistently correlated with its entangled mate

The actual result is none of these, and it is almost too stunning to contemplate. In fact, the particle is highly correlated, but the correlation is with the observer’s cognition. But again, I am getting ahead of myself. Let’s look at our experimental set up…

I send an astronaut into space with a box that contains an experimental apparatus. The astronaut travels a distance about as far away from Earth as the sun. It takes about 8 minutes for light (or any message) to reach the astronaut. The box contains the “twin” of many paired particles back on earth. Each particle is trapped in a small crystal and numbered. The box also contains an instrument that can measure the polarization of any photon and a noisy inkjet printer that can be heard from outside the box.

Back on the earth, I have the mate to each paired photon. All of my photons exhibit a polarity that can be measured and expressed as a 2-D angle with any value from 0 to 360 degrees. Our test uses polarized filters to measure the angle of polarity and is very accurate. We can record 4 digits of precision. For the purpose of this test, it doesn’t matter if our measurement affects a particle or even if it destroys it, because we can repeat the test many times.

Clocks on the earth and at the spaceship are synchronized, and the ship is not moving relative to the earth. It is effectively stationary. On earth, each numbered photon is disturbed exactly on the hour. At the spaceship, an astronaut measures the polarity of a paired photon one minute before and one minute after each hourly event.

We know that our photons all begin with a polarity of 15.48 degrees as measured relative to some fixed and rigid orientation. The astronaut confirms this with each photon tested before the hourly chime. But at each hour (say 3PM in New York), we disturb a photon on earth (radiate it or pass it through a filter). This changes its polarity.

Suppose that the earth lab determines that a photon was changed at 3PM from a polarity of 15.48° to a polarity of 122.6°. (Any new polarization will do).

Recall that the spaceship is 8 light-minutes away. We wish to determine if photon pairs communicate more quickly than the speed of light. Question: If the astronaut tests the polarity of the paired photon at 3:01 PM (just after its mate on the earth has been altered), do you suppose that he will still detect the original spin of 15.48°? Or will he detect the new spin of 122.6°?

The answer is more startling than either outcome. In fact, it leaves most people in disbelief or outright denial. (Yes…You are being set up for a surprise. But what is it?!)

To make things more interesting, let’s say that you cannot see the results. The box is sealed during the experiment, but you can hear the printer within the box as it prints the polarity after each test. Each time you run the experiment, you unplug the printer right after you hear it print a result. Then, you open the box and read the results.

Spookiness at a Distance

If you open the box less than 8 minutes after the hour (that is, less than the time that it takes light to travel from earth to the astronaut), the printout will always show a polarity of 15.48°. If you open the box after 8 minutes, you will always see a polarity of 122.6°. In both cases, the test was completed and the result was printed in the first minute after the photon on earth was shifted to a new polarization.

Wait! It gets better! If you eventually learn to distinguish the different sounds that the printer makes when it records either result, it will always print 15.48°, even if you wait 8 minutes before actually looking at the print out. The fact that you found a way to ‘cheat’ apparently changes the outcome. Or at least, that is the conclusion that a reasonable person would make when presented with knowledge-induced causality. It’s either that—or we are all crazy.

But quantum physicists (and cryptographers like Gilles) have another explanation. They point out that Einstein’s theory of special relativity doesn’t actually prohibit faster than light phenomena. It only prohibits faster than light communication. If the thing that happens instantaneously cannot be pressed into conveying useful information, then it doesn’t violate special relativity! That is, perturbations applied to one part of a quantum entangled pair are apparently instantaneous, but an observation or experiment on the remote twin will not produce a result that allows you to determine the new state until sufficient time for a light beam to pass from one to the other.

Alternate explanation: This one is known as “Schrödinger’s cat”. In my opinion it was contrived to support both quantum mechanics and the EPR paradox. It states that the paired photon simultaneously existed at both polarities until someone opened the box or otherwise learned its state. That is, the observed result was not a real thing, until the observation forced it to collapse into reality. Common sense says that this explanation makes no sense! And yet, it neatly resolves a lot of mathematics. Go figure!

Here is another explanation. I like this one better… Perhaps time is not an arrow that always moves in one direction and at one speed. In contradiction to our intuition (based on a limited set of human senses), perhaps we are not continuously pushed forward at the tip of that arrow. What if the science fiction about space and time being folded is true? Or perhaps… Oh heck! I’ll go with the first explanation: From our perspective, entangled particles change simultaneously, but mysterious forces of nature don’t allow us to observe the change until the laws of special relativity allow it. Why is that?… Because if we could observe information before it was ‘legal’ to do so, then we could change the past.

The takeaway from this experiment is that, just like the phase velocity of a wave, some things move faster than the speed of light, but useful information cannot do so. For useful information, light is still the speed limit.

Quantum physicists do not typically use my thought experiment, which I call the Hidden Printer Result. Instead, they explain that Bell’s experiments prove that spin measurements of distant, entangled particles demonstrate that the particles are connected in a spooky way (because the detected spins are provably opposite for each measurement), but that Einstein’s theory is preserved, because the individuals measuring the particles cannot know that their measurements are correlated until they communicate or meet. That communication is still restricted to light-speed limits, and therefore no useful information has violated special relativity.
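The strength of those Bell-test correlations can be illustrated numerically. The sketch below is a Monte-Carlo illustration, not a local simulation of entanglement (which is impossible, and is the whole point): it samples pair outcomes using the quantum-mechanical joint probability for polarization-entangled photons, P(same result) = cos²(a − b), and forms the CHSH quantity S. Any local hidden-variable (“classical”) model obeys |S| ≤ 2, while quantum mechanics predicts up to 2√2 ≈ 2.83. The angles are the standard CHSH choices; the trial count is arbitrary.

```python
import math
import random

# Monte-Carlo illustration of the correlations behind Bell-test experiments.
# We sample outcomes with the quantum prediction P(same) = cos^2(a - b) for
# polarization-entangled photons, then form the CHSH quantity S.
# Local hidden-variable models are bound by |S| <= 2.

rng = random.Random(7)

def correlation(a_deg, b_deg, trials=200_000):
    """Monte-Carlo estimate of E(a, b) for an entangled photon pair."""
    theta = math.radians(a_deg - b_deg)
    p_same = math.cos(theta) ** 2
    same = sum(1 for _ in range(trials) if rng.random() < p_same)
    return (same - (trials - same)) / trials   # P(same) - P(different)

a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5           # standard CHSH angle choices
S = (correlation(a, b) - correlation(a, b2)
     + correlation(a2, b) + correlation(a2, b2))
print(S)  # approximately 2.83, comfortably above the classical bound of 2
```

Real experiments measure these same correlations photon by photon; the excess of S over 2 is what rules out the “rational explanation” Einstein hoped for, while the light-speed limit on comparing the two measurement records preserves special relativity.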

The Hidden Printer Result is a way in which we laypeople could observe and marvel at the transmission of unbelievably fast, but ‘useless’ information. It is a valid experimental setup that allows us to better comprehend that which defies common sense.

This YouTube video provides a more conventional, but more complex, explanation of quantum entanglement and the EPR Paradox.

Gilles Brassard is not a physicist, but a computer scientist and cryptographer. Yet he has received awards that are typically given to physicists. His experiments and those by scientists around the world render a layperson like me dumbstruck.

Of course, Gilles didn’t ship an inkjet printer into space with half of an entangled pair (my experimental construct). Instead, he measured and recorded a particle state in a way that is self-encrypted. He then sent the encryption key from the distant particle that had been disturbed. Even though the key is just two bits (too little to contain a measurement of photon spin), the old spin was observed if the key was applied before the time it would have taken to classically transmit and receive the information.

Just as with my experimental setup, results are almost too much to wrap a proverbial brain around. But truths that are hard to believe make great fodder for Lifeboat members. If my non-scientific, jargon free explanation gets across the results of the EPR experiment (actually, it is at the leading edge of my own understanding), then you are now as puzzled and amazed as me.

Philip Raymond is Co-Chair of The Cryptocurrency Standards Association and CEO of Vanquish Labs.
An earlier draft of this article was published in his Blog.

Related:

• Wikipedia explanation of EPR Paradox.
• Search for EPR Paradox, Bell’s theorem or quantum entanglement.

By — SingularityHub

Traditionally, we’ve done science by observing nature in person or setting up experiments in the lab. Now, a relatively new scientific technique is proving a powerful tool—simulating nature on supercomputers.

A few years ago, Caltech astrophysicists released a supercomputer simulation of a supergiant star’s core collapsing just prior to going supernova. Apart from a stunning visual, simulations like this hinted that Type II supernova explosions were asymmetrical—a guess just recently backed by empirical observation.

Read more

Article: Harnessing “Black Holes”: The Large Hadron Collider – Ultimate Weapon of Mass Destruction



Why the LHC must be shut down