Toggle light / dark theme

In 2014, I submitted my paper “A Universal Approach to Forces” to the journal Foundations of Physics. The journal’s editor, Prof. Gerardus ‘t Hooft, the 1999 Nobel Laureate, suggested that I submit this paper to the journal Physics Essays instead.

My previous 2009 submission to Physics Essays, “Gravitational acceleration without mass and noninertia fields,” had taken 1.5 years to be reviewed and accepted. I therefore decided against Prof. Gerardus ‘t Hooft’s recommendation, as I estimated that publishing all six papers (now published as Super Physics for Super Technologies) in peer-reviewed journals would take up to 10 years and/or $20,000.

Prof. Gerardus ‘t Hooft had brought up something interesting in his 2008 paper “A locally finite model for gravity” that “… absence of matter now no longer guarantees local flatness…” meaning that accelerations can be present in spacetime without the presence of mass. Wow! Isn’t this a precursor to propulsion physics, or the ability to modify spacetime without the use of mass?

As far as I could determine, he did not pursue this from the perspective of propulsion physics. A year earlier, in 2007, I had discovered the massless formula for gravitational acceleration, g=τc^2, published in the Physics Essays paper referenced above. In effect, g=τc^2 was the mathematical solution to Prof. Gerardus ‘t Hooft’s “… absence of matter now no longer guarantees local flatness…”

Prof. Gerardus ‘t Hooft used string theory to arrive at his inference. Could he empirically prove it? No, not with strings. It took a different approach, numerical modeling within the context of Einstein’s Special Theory of Relativity (STR), to derive a mathematical solution to Prof. Gerardus ‘t Hooft’s inference.

In 2013, I attended Dr. Brian Greene’s Gamow Memorial Lecture, held at the University of Colorado Boulder. If I heard him correctly, the number of strings or string states being discovered has been increasing, and is now in the 10^500 range.

I find these two encounters telling. While not rigorously proved, I infer that (i) string theories are unable to take us down a path that can be empirically proven, and (ii) they are open-ended, i.e., they can be used to propose any specific set of outcomes from any specific set of inputs. The problem with this is that you then have to find a theory for why that specific set of inputs. I would have thought that this would be heartbreaking for theoretical physicists.

In 2013, I presented the paper “Empirical Evidence Suggest A Need For A Different Gravitational Theory” at the American Physical Society’s April conference held in Denver, CO. There I met some young physicists and asked them about working on gravity modification. One of them summarized it very well: “Do you want me to commit career suicide?” This explains why many of our young physicists continue to seek employment in the field of string theories where, unfortunately, the hope of empirically testable findings (i.e., of winning the Nobel Prize) is next to nothing.

I think string theories are wrong.

Two transformations or contractions are present with motion, Lorentz-FitzGerald Transformation (LFT) in linear motion and Newtonian Gravitational Transformations (NGT) in gravitational fields.
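For concreteness, LFT's length contraction can be sketched numerically. This is a minimal illustration using the standard special-relativity contraction factor; the code and the sample speed are mine, not from the papers discussed here:

```python
import math

def lorentz_contraction(v, c=299_792_458.0):
    """Length-contraction factor sqrt(1 - v^2/c^2) for a body moving at speed v."""
    return math.sqrt(1.0 - (v / c) ** 2)

# A rod moving at 90% of light speed is measured at ~43.6% of its rest length.
print(lorentz_contraction(0.9 * 299_792_458.0))  # ~0.4359
```

NGT plays the analogous role in a gravitational field, which is the distinction the argument below turns on.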

The fundamental assumption or axiom of strings is that they expand when their energy (velocity) increases. This axiom (let’s name it the Tidal Axiom) appears to have its origins in tidal gravity, attributed to Prof. Roger Penrose: macro bodies elongate as they fall into a gravitational field. To be consistent with NGT, the atoms and elementary particles would contract in the direction of this fall. However, to be consistent with tidal gravity’s elongation, the distances between atoms in this macro body would increase at a rate consistent with the accelerations and velocities experienced by the various parts of this macro body. That is, as the atoms get flatter, the distances apart get longer. Therefore, for a string to be consistent with LFT and NGT, it would have to contract, not expand. One suspects that this Tidal Axiom’s inconsistency with LFT and NGT has led to an explosion of string theories, each trying to explain Nature, with no joy. See my peer-reviewed 2013 paper “New Evidence, Conditions, Instruments & Experiments for Gravitational Theories,” published in the Journal of Modern Physics, for more.

The vindication of this contraction is the discovery of the massless formula for gravitational acceleration g=τc^2 using Newtonian Gravitational Transformations (NGT) to contract an elementary particle in a gravitational field. Neither quantum nor string theories have been able to achieve this, as quantum theories require point-like inelastic particles, while strings expand.

What worries me is that it takes about 70 to 100 years for a theory to evolve into commercially viable consumer products; lasers are a good example. So, if we are tying up our brightest scientific minds with theories that cannot lead to empirical validation, can we be the primary technological superpower 100 years from now?

The massless formula for gravitational acceleration, g=τc^2, shows us that new theories on gravity and force fields will be similar to General Relativity, which is only a gravity theory. The mass source in these new theories will be replaced by field and particle motions, not mass or momentum exchange. See my Journal of Modern Physics paper referenced above on how to approach this, and Super Physics for Super Technologies on how to accomplish this.

Therefore, given that the primary axiom of string theories, the Tidal Axiom, is incorrect, it is vital that we recognize that any mathematical work derived from string theories is invalidated. And given that string theories are particle-based theories, this mathematical work is not transferable to the new relativity-type force field theories.

I forecast that both string and quantum gravity theories will be dead by 2017.

When I was seeking funding for my work, I looked at the Broad Agency Announcements (BAAs) for a category that includes gravity modification or interstellar propulsion. To my surprise, I could not find this category in any of our research organizations, including DARPA, NASA, National Science Foundation (NSF), Air Force Research Lab, Naval Research Lab, Sandia National Lab or the Missile Defense Agency.

So what are we going to do when our young graduates do not want to or cannot be employed in string theory disciplines?

(Originally published in the Huffington Post)

I first met Dr. Young Bae, NIAC Fellow, at the Defense Advanced Research Projects Agency (DARPA) sponsored 2011 100 Year Starship Study (100YSS) in Orlando, Fla. Many of us who were there had responded to the NASA/DARPA Tactical Technology Office’s RFP to set up an organization “… to develop a viable and sustainable non-governmental organization for persistent, long-term, private-sector investment into the myriad of disciplines needed to make long-distance space travel viable …”

Yes, both DARPA and NASA are at some level interested in interstellar propulsion. Mine was one of approximately 35 (rumored number) teams from around the world vying for this DARPA grant, and Dr. Bae was with a competing team. I presented the paper “Non-Gaussian Photon Probability Distributions”, and Dr. Bae presented “A Sustainable Developmental Pathway of Photon Propulsion towards Interstellar Flight”. These were early days, the ground zero of interstellar propulsion, if you will.

Dr. Bae has been researching Photon Laser Thrust (PLT) for many years. A video of his latest experiment is available at the NASA website or on YouTube. PLT uses light photons to move an object by colliding with (i.e., transferring momentum to) the object. The expectation is that this technology will eventually be used to propel spacecraft. His most recent experiments demonstrate the horizontal movement of a 1-pound weight. This is impressive. I expect to see much more progress in the coming years.
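The scale of this momentum transfer can be sketched with standard photon-pressure arithmetic. This is a minimal illustration of the physics, not a calculation from Dr. Bae's papers; the 1 kW example figure is mine:

```python
# Photon momentum is p = E/c, so a beam of power P delivers force F = P/c
# on an absorbing target, or 2P/c on a perfectly reflecting mirror.
c = 299_792_458.0  # speed of light, m/s

def photon_thrust(power_watts, reflective=True):
    """Force (newtons) exerted by a light beam of the given power."""
    return (2.0 if reflective else 1.0) * power_watts / c

# A 1 kW laser on a mirror yields only ~6.7 micronewtons of thrust,
# which is why PLT experiments bounce the photons many times.
print(photon_thrust(1000.0))
```

The tiny force per watt is the reason photon recycling between mirrors matters in this line of research.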

At one level, Dr. Bae’s experiments are confirmation that Bill Nye’s Light Sail (which very unfortunately lost communications with Earth) will work.

At another level, one wonders why, or how, the photon, a particle without mass, has momentum proportional to the photon’s frequency or energy; a momentum that is observable in Dr. Bae’s and other experiments. This is not a question that contemporary physics asks. Einstein was that meticulous when he derived the Lorentz-FitzGerald Transformations (LFT) from first principles for his Special Theory of Relativity (STR). Therefore, if you think about it, and if we dare to ask the sacrilegious question: does this mean that momentum is an elementary property of a particle that only appears to be related to mass? What would we discover if we could answer the question of why momentum exists in both massive and massless particles? Sure, the shortcut, don’t-bother-me answer is mass-energy equivalence. But why?

At the other end of photon-momentum-based research is the EmDrive, invented by Roger Shawyer. He clearly states that the EmDrive is due to momentum exchange and not due to “quantum vacuum plasma effects.” To vindicate his claims, Boeing has received all of his EmDrive designs and test data. This is not something that Boeing does lightly.

In this 2014 video, a member of NASA’s Eagleworks explains that the EmDrive (renamed the q-thruster) pushes against the quantum vacuum, the froth of particle-antiparticle pairs in a vacuum. This raises the question: how can you push against one type of pair and not the other? In 2011, using NASA’s Fermi Gamma-ray Space Telescope photographs, Prof. Robert Nemiroff of Michigan Technological University made the stunning discovery that this quantum foam of particle-antiparticle pairs in a vacuum does not exist. Unfortunately, this means that the NASA Eagleworks explanation cannot be correct.

So how does the EmDrive work?

In my 2012 book An Introduction to Gravity Modification, I explained the importance of asymmetrical fields and designs for creating propellantless engines. For example, given a particle in a gravitational field, and with respect to this field’s planetary mass source, this particle will observe an asymmetrical gravitational field: the near side of the particle experiences a stronger field than the far side, hence the motion towards the planetary mass. Granted, this difference is tiny, but it is not zero. This was how I was able to determine the massless formula for gravitational acceleration, g=τc^2, where tau (τ) is the change in the time dilation transformation (dimensionless LFT) divided by that distance. The error in the modeled gravitational acceleration is less than 6 parts per million, thus validating the asymmetrical approach.
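A rough numerical sketch of g=τc^2, under my reading that τ is the change per unit distance in the standard weak-field (Schwarzschild) gravitational time-dilation factor; the Earth values and the centered-difference step are my own illustration, not taken from the book:

```python
import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24          # mass of Earth, kg
R = 6.371e6           # radius of Earth, m
c = 299_792_458.0     # speed of light, m/s

def dilation(r):
    """Dimensionless gravitational time-dilation factor sqrt(1 - 2GM/(r c^2))."""
    return math.sqrt(1.0 - 2.0 * G * M / (r * c * c))

# tau: change in the dilation factor per metre (a wide 10 km centered
# difference keeps the tiny change above floating-point noise).
dr = 1e4
tau = (dilation(R + dr / 2) - dilation(R - dr / 2)) / dr

g = tau * c * c
print(g)  # ~9.82 m/s^2, matching Earth's Newtonian surface gravity GM/R^2
```

The gradient of the dilation factor times c^2 recovers the familiar GM/r^2, which is the sense in which the formula is massless: no mass term appears once τ is known.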

In very basic terms, Shawyer’s New Scientist paper suggests that the conical shape of the EmDrive causes microwave photons to exhibit asymmetrical momentum exchange. The side of the conical structure with the larger cross-section has more momentum exchange than the side with the smaller cross-section. The difference in this momentum exchange is evidenced as a force.

However, as Dr. Bae points out, from the perspective of legacy physics, conservation of momentum is broken. If not broken, then there are no net forces; if broken, then one observes a net force. Dr. Beckwith (Prof., Chongqing University, China) confirms that Dr. Bae is correct, but the question that needs to be addressed is: could there be additional effects that would lead to momentum conservation being violated, or apparently violated?

To be meticulous, since energy can be transmuted into many different forms, we can ask another sacrilegious question: can momentum be converted into something else, a wave-function attribute for example, in a reversible manner? After all, the massless photon’s momentum is directly proportional to its frequency. We don’t know. We have neither the theoretical nor the experimental basis for answering this question, in the positive or the negative. Note, this is not the same as a perpetual motion machine, as conservation laws still hold.

Shawyer’s work could be confirmation of these additional effects, asymmetrical properties and momentum-wave-function-attribute interchangeability. If so, the future of propulsion technologies lies in photon based propulsion.

Given that Shawyer’s video demonstrates a moving EmDrive, the really interesting question is: can we apply this model to light photons? Or, for that matter, to any other type of photon: radio, infrared, light, ultraviolet or X-rays?

(Originally published in the Huffington Post)


“Many—if not most—of the Earth’s aquifers are in trouble. … That’s the finding of a group of NASA scientists, who published their study of global groundwater this week in the journal Water Resources Research. Water levels in 21 of the world’s 37 largest known aquifers, they report, are trending negative.”


Until 2006 our Solar System consisted essentially of a star, planets, moons, and very much smaller bodies known as asteroids and comets. In 2006, the International Astronomical Union’s (IAU) Division III Working Committee addressed scientific issues, and the Planet Definition Committee addressed cultural and social issues, with regard to planet classifications. They introduced the “pluton” for bodies similar to planets but much smaller.

The IAU set down three rules to differentiate between planets and dwarf planets. First, the object must be in orbit around a star, while not being itself a star. Second, the object must be large enough (or, more technically correct, massive enough) for its own gravity to pull it into a nearly spherical shape. The shape of objects with mass above 5×10^20 kg and diameter greater than 800 km would normally be determined by self-gravity, but all borderline cases would have to be established by observation.

Third, plutons or dwarf planets, are distinguished from classical planets in that they reside in orbits around the Sun that take longer than 200 years to complete (i.e. they orbit beyond Neptune). Plutons typically have orbits with a large orbital inclination and a large eccentricity (noncircular orbits). A planet should dominate its zone, either gravitationally, or in its size distribution. That is, the definition of “planet” should also include the requirement that it has cleared its orbital zone. Of course this third requirement automatically implies the second. Thus, one notes that planets and plutons are differentiated by the third requirement.
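The second rule can be sketched as a toy threshold test. This is illustrative only: the cutoffs are the ones quoted above, the example bodies and their rounded figures are my own, and the IAU settles real borderline cases by observation, not by arithmetic:

```python
# Crude sketch of the IAU "nearly spherical" rule of thumb.
MIN_MASS_KG = 5e20
MIN_DIAMETER_KM = 800

def likely_self_rounded(mass_kg, diameter_km):
    """Rough test: is the body massive and large enough to be pulled spherical?"""
    return mass_kg >= MIN_MASS_KG and diameter_km >= MIN_DIAMETER_KM

print(likely_self_rounded(9.4e20, 940))  # Ceres: True (it is nearly spherical)
print(likely_self_rounded(2.2e14, 11))   # Halley's comet: False (irregular)
```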

As we are soon to become a space-faring civilization, we should rethink these cultural and social issues, by subtraction or addition. By subtraction: what if a body breaks the other requirements? Comets and asteroids break the second requirement, that the object must be large enough. Breaking the first requirement, which the IAU chose not to address at the time, would give planet-sized bodies that do not orbit a star. From a socio-cultural perspective, one could suggest that these be named “darktons” (from dark + plutons): “dark” because, without orbiting a star, these objects would not be easily visible; “tons” because in deep space, without much matter, these bodies could not meet the third requirement of dominating their zone.

Taking this socio-cultural exploration a step further, by addition, a fourth requirement is that of life-sustaining planets. The scientific evidence suggests that life-sustaining bodies would be planet-sized, to facilitate a stable atmosphere. Thus, a life-sustaining planet would be named a “zoeton,” from the Greek zoe for life. For example, Earth is a zoeton, while Mars may once have been.

Again by addition, one could define, from the Latin aurum for gold, an “auton”: a heavenly body (comet, asteroid, pluton or planet) whose primary value is that of mineral or mining interest. Therefore, Jupiter is not a zoeton, but could be an auton if one extracts hydrogen or helium from this planet. Another auton is 55 Cancri e, a planet 40 light years away, for mining diamonds, with an estimated worth of $26.9×10^30. The Earth is both a zoeton and an auton, as it both sustains life and has substantial mining interests. Not all plutons or planets could be autons. For example, Pluto would be too cold and frozen for mining to be economical; therefore, frozen darktons would most likely not be autons.

At that time the IAU also did not address the upper limit for a planet’s mass or size. Not restricting ourselves to planetary science would widen our socio-cultural exploration. A social consideration would be the maximum gravitational pull that a human civilization could survive, sustain and flourish in. For discussion’s sake, a gravitational pull greater than 2× Earth’s, or 2g, could be considered the upper limit. Therefore, planets with pulls greater than 2g would be named “kytons,” from the Antikythera mechanical computer, as only machines could survive and sustain such harsh conditions over long periods of time. Jupiter would be an example of such a kyton.

Are there any bodies between the gaseous planet Jupiter and brown dwarfs? Yes: they have been named Y-dwarfs. NASA found one with a surface temperature of only 80 degrees Fahrenheit, just below that of a human. It is possible these Y-dwarfs could be kytons and autons, as a relatively safe (compared to stars) source of hydrogen.

Taking a different turn, to complete the space-faring vocabulary, one can redefine transportation by orders of magnitude. Atmospheric transportation, whether for combustion intake or winged flight, can be termed “atmosmax,” from “atmosphere” and the Greek “amaxi” for car or vehicle. Any vehicle that is bound by the distances of the solar system but does not require an atmosphere would be a “solarmax.” Any vehicle capable of interstellar travel would be a “starship,” and one capable of intergalactic travel would be a “galactica.”

We now have socio-cultural handles to be a space faring civilization. A vocabulary that facilitates a common understanding and usage. Exploration implies discovery. Discovery means new ideas to tackle new environments, new situations and new rules. This can only lead to positive outcomes. Positive outcomes means new wealth, new investments and new jobs. Let’s go forth and add to these cultural handles.

Ben Solomon is a Committee Member of the Nuclear and Future Flight Propulsion Technical Committee, American Institute of Aeronautics & Astronautics (AIAA), and author of An Introduction to Gravity Modification and Super Physics for Super Technologies: Replacing Bohr, Heisenberg, Schrödinger & Einstein (Kindle Version)

Based on the Bloomberg TV program “The Next Space Race” and other reliable sources, I determine the realistic payload costs goals for the next generation of private space companies.

I review NASA’s Space Shuttle Program costs and compare these with SpaceX costs, and then extrapolate to Planetary Resources, Inc.‘s cost structure.

Three important conclusions are derived. And for those viewing this video at my blog postings, the link to the Excel Spreadsheet is here (.xlsx file).

Yesterday’s program, The Next Space Race, on Bloomberg TV was an excellent introduction to the commercial aerospace companies, SpaceX, the Sierra Nevada Company (SNC), and Boeing. The following are important points, at the stated times, in the program:

0:33 mins: The cost of space travel has clipped our wings.
5:18 mins: How many people knew Google before they started?
7:40 mins: SpaceX costs, full complement, 4× per year at $20 million per astronaut.
11:59 mins: Noisy rocket launch, notice also the length of the hot exhaust is several times the length of the rocket.
12:31 mins: One small step for man, one giant leap for mankind.
12:37 mins: Noisy shuttle launch, notice also the length of the hot exhaust is several times the length of the rocket.
13:47 mins: OPF-3, at one time the largest building in the world at 129 million cubic feet.
16:04 mins: States are luring private companies to start up in their states.
16:32 mins: NASA should be spending its money on exploration and missions and not maintenance and operations.
17:12 mins: The fair market value of OPF-3 is about $13.5 million.
17:19 mins: Maintenance cost is $100,000 per month.
17:47 mins: Why Florida?
18:55 mins: International Space Station (ISS) cost $60B and if including the Shuttle program, it cost $150B.
19:17 mins: The size of the commercial space launch business.
21:04 mins: Elon Musk has put $100 million of his own money into SpaceX.
21:23 mins: The goals of NASA and private space do not conflict.

1. Cost of ISS is $60B, total cost including the Shuttle program is $150B.

2. SpaceX cost is $20M per astronaut (for 7 astronauts) or a launch cost of $140 million per launch at $560 million per year for 4 launches per year.

3. The next space race is about money.

4. NASA will give a multi billion dollar contract to private space companies to ferry humans & cargo into space and back.

5. Orbiter Processing Facility 3 (OPF-3), valued at $13.5 million with an estimated area of 207,000 sq ft, gives a value of $65.22/sq ft.

6. A maintenance cost of $100,000 per month gives a per-sq-ft maintenance cost of $0.48/sq ft/month, or $5.80/sq ft/year.

7. Another reason for the Cape Canaveral NASA launch site is the mandatory no/low population down range for rocket launches. At Cape Canaveral this down range is the Atlantic Ocean.
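The figures in points 2, 5 and 6 above can be re-derived from the inputs stated in the program; a quick arithmetic check:

```python
# Re-deriving the quoted cost figures from the program's stated inputs.
launch_cost = 20e6 * 7            # $20M per astronaut, 7 astronauts per launch
annual_cost = launch_cost * 4     # 4 launches per year

opf3_value = 13.5e6               # OPF-3 fair market value, $
opf3_area = 207_000               # estimated area, sq ft
value_per_sqft = opf3_value / opf3_area

maint_monthly = 100_000           # maintenance, $/month
maint_per_sqft_month = maint_monthly / opf3_area
maint_per_sqft_year = maint_per_sqft_month * 12

print(launch_cost)                     # 140,000,000 -> $140M per launch
print(annual_cost)                     # 560,000,000 -> $560M per year
print(round(value_per_sqft, 2))        # 65.22 $/sq ft
print(round(maint_per_sqft_month, 2))  # 0.48 $/sq ft/month
print(round(maint_per_sqft_year, 2))   # 5.8 $/sq ft/year
```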

A Point too Far to Astronaut

It’s cold out there beyond the blue. Full of radiation. Low on breathable air. Vacuous.
Machines and organic creatures, keeping them functioning and/or alive — it’s hard.
Space to-do lists are full of dangerous, fantastically boring, and super-precise stuff.

We technological mammals assess thusly:
Robots. Robots should be doing this.

Enter Team Space Torso
As covered by IEEE a few days ago, the DLR (das German Aerospace Center) released a new video detailing the ins & outs of their tele-operational haptic feedback-capable Justin space robot. It’s a smooth system, and eventually ground-based or orbiting operators will just strap on what look like two extra arms, maybe some VR goggles, and go to work. Justin’s target missions are the risky, tedious, and very precise tasks best undertaken by something human-shaped, but preferably remote-controlled. He’s not a new robot, but Justin’s skillset is growing (video is down at the bottom there).

Now, Meet the Rest of the Gang
NASA’s Robonaut2 (full coverage), the first and only humanoid robot in space, has of late been focusing on the ferociously mundane tasks of button pushing and knob turning, but hey, WHO’S IN SPACE, HUH? Then you’ve got Russia’s elusive SAR-400, which probably exists, but seems to hide behind… an iron curtain? Rounding out the team is another German, AILA. The nobody-knows-why-it’s-feminized AILA is another DLR-funded project from a university robotics and A.I. lab with a 53-syllable name that takes too long to type but there’s a link down below.

Why Humanoid Torso-Bots?
Robotic tools have been up in space for decades, but they’ve basically been iterative improvements on the same multi-joint single-arm grabber/manipulator. NASA’s recent successful Robotic Refueling Mission is an expansion of mission-capable space robots, but as more and more vital satellites age, collect damage, and/or run out of juice, and more and more humans and their stuff blast into orbit, simple arms and auto-refuelers aren’t going to cut it.

Eventually, tele-operable & semi-autonomous humanoids will become indispensable crew members, and the why of it breaks down like this: 1. space stations, spacecraft, internal and extravehicular maintenance terminals, these are all designed for human use and manipulation; 2. what’s the alternative, a creepy human-to-spider telepresence interface? and 3. humanoid space robots are cool and make fantastic marketing platforms.

A space humanoid, whether torso-only or legged (see: Robonaut’s new legs), will keep astronauts safe and focused on tasks machines can’t do, and prevent the space craziness that comes from trying to hold a tiny pinwheel perfectly still next to an air vent for 2 hours, which, in fact, is slated to become one of Robonaut’s ISS jobs.

Make Sciencey Space Torsos not MurderDeathKillBots
As one is often wont to point out, rather than finding ways to creatively dismember and vaporize each other, it would be nice if we humans could focus on the lovely technologies of space travel, habitation, and exploration. Nations competing over who can make the most useful and sexy space humanoid is an admirable step, so let the Global Robot Space Torso Arms Race begin!

“Torso Arms Race!“
Keepin’ it real, yo.

• • •

DLR’s Justin Tele-Operation Interface:

• • •


Robot Space Torso Projects:

This piece originally appeared at on February 21, 2013.

Recently, I met Josh Hopkins of Lockheed’s Advanced Programs at the AIAA Rocky Mountain Region’s First Annual Technical Symposium (RMATS), October 26, 2012. Josh was the keynote speaker at RMATS. Here is his presentation. After his presentation we talked outside the conference hall. I told him about my book, and was surprised when he said that two groups had failed to reproduce Podkletnov’s work. I knew one group had, but a second? As we parted we said we’d keep in touch. But you know how life is; it has the habit of getting in the way of exciting research, and we lost touch.

About two weeks ago, I remembered that Josh had said he would provide some information on the second group that had failed to reproduce Podkletnov’s work. I sent him an email, and was very pleased to hear back from him, and to learn that the group’s findings had been published under the title “Gravity Modification by High-Temperature Superconductors”. The authors were C. Woods, S. Cooke, J. Helme & C. Caldwell, and their paper was presented at the 37th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit, 8–11 July 2001, Salt Lake City, Utah. I bought a copy for the AIAA archives, and read it, reread it, and reread it.

Then I found a third team; they published their null results under the title “Gravity Modification Experiments Using a Rotating Superconducting Disk and Radio Frequency Fields”. The authors were G. Hathaway, B. Cleveland and Y. Bao, and the paper was published in Physica C in 2003.

Both papers focused on attempting to build a correct superconducting disc. Woods et al stated that “the tests have not fulfilled the specified conditions for a gravity effect”. The single most difficult task was building a bilayered superconducting disc. Woods et al tried very hard to do so, and reading through Hathaway et al’s paper suggests that they had similar difficulties. The photo shows a sample disc from Woods’ team; observe the crack in the middle.

Further, Woods’ team was able to rotate their disc to 5,000 rpm. Hathaway’s team reports a rotational speed of between 400 and 800 rpm, a far cry from Podkletnov’s 5,000 rpm. This suggests that there were other problems with Hathaway’s disc not reported in their paper. At 400–800 rpm, any weight change Hathaway could have observed would have been less than the repeatable experimental sensitivity of 0.5 mg!

Here are some quotes from Hathaway et al’s original paper “As a result of these tests it was decided that either the coil designs were inefficient at producing …”, “the rapid induction heating at room temperature cracked the non-superconducting disk into two pieces within 3 s”, “Further tests are needed to determine the proper test set-up required to detect the reverse Josephson junction effect in multi-grain bulk YBCO superconductors”.

It is quite obvious from reading both papers that neither team was able to faithfully reproduce Podkletnov’s work, and it is no wonder that Woods et al stated that “the tests have not fulfilled the specified conditions for a gravity effect”. This statement applies equally to Hathaway et al’s research. There is more to critique in both investigations, but this should be enough.

Now, for the final surprise: the first team I mentioned earlier. Ning Li led the first team, comprised of members from NASA and the University of Alabama in Huntsville. It was revealed in conversations with a former team member that Ning Li’s team was disbanded before they could build the superconducting discs required to investigate Podkletnov’s claims. Wow!

If you think about it, all these “investigations” just showed that nobody in the US was capable of faithfully reproducing Podkletnov’s experiments, even to disprove them.

What a big surprise! A null result is not a disproof.


Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.

Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.

The 100,000 Stars Google Chrome Galactic Visualization Experiment Thingy

So, Google has these things called Chrome Experiments, and they like, you know, do that. 100,000 Stars, their latest, simulates our immediate galactic zip code and provides detailed information on many of the massive nuclear fireballs nearby.

Zoom in & out of an interactive galaxy: state, city, neighborhood, so to speak.

It’s humbling, beautiful, and awesome. Now, is 100,000 Stars perfectly accurate and practical for anything other than having something pretty to look at and explore, educate, and remind us of the enormity of our quaint little galaxy among the likely 170 billion others? Well, no, not really. But if you really feel the need to evaluate it that way, you are an unimaginative jerk and your life is without joy and awe and hope and wonder, and you probably have irritable bowel syndrome. Deservedly.

The New Innovation Paradigm Kinda Revisited
Just about exactly one year ago technosnark cudgel was rapping about the changing innovation paradigm in large-scale technological development. There’s chastisement for Neil deGrasse Tyson and others who, paraphrasically (totally a word), have declared that private companies won’t take big risks, won’t do bold stuff, won’t push the boundaries of scientific exploration because of bottom lines and restrictive boards and such. But new business entities like Google, SpaceX, Virgin Galactic, & Planetary Resources are kind of steadily proving this wrong.

Google in particular, a company whose U.S. ad revenue now eclipses all other ad-based businesses combined, does a load of search-unrelated, interesting little and not-so-little research. Their mad scientists have churned out innovative, if sometimes impractical, projects like Wave, Lively, and SketchUp. There’s the mysterious Project X, rumored to be filled with robots and space elevators and probably endless lollipops as well. There’s Project Glass, the self-driving cars, and they have also just launched Ingress, a global augmented-reality game.

In contemporary America, this is what cutting-edge, massively well-funded pure science is beginning to look like, and it’s commendable. So, in lieu of a national flag, would we be okay with a SpaceX visitor center on the moon? Come on, really: a flag is just a logo anyway!

Let’s hope Google keeps not being evil.


(this post originally published at

To achieve interstellar travel, the Kline Directive instructs us to be bold, to explore what others have not, to seek what others will not, to change what others dare not. To extend the boundaries of our knowledge, to advocate new methods, techniques and research, to sponsor change not status quo, on 5 fronts, Legal Standing, Safety Awareness, Economic Viability, Theoretical-Empirical Relationships, and Technological Feasibility.

There is one last mistake in physics that needs to be addressed: the baking bread model. To quote from the NASA page:

“The expanding raisin bread model at left illustrates why this proportion law is important. If every portion of the bread expands by the same amount in a given interval of time, then the raisins would recede from each other with exactly a Hubble type expansion law. In a given time interval, a nearby raisin would move relatively little, but a distant raisin would move relatively farther — and the same behavior would be seen from any raisin in the loaf. In other words, the Hubble law is just what one would expect for a homogeneous expanding universe, as predicted by the Big Bang theory. Moreover no raisin, or galaxy, occupies a special place in this universe — unless you get too close to the edge of the loaf where the analogy breaks down.”
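The uniform-expansion behavior NASA describes can be sketched numerically. This is my own minimal illustration, not anything from the original post or the NASA page: scale every raisin position in a one-dimensional "loaf" by the same factor, and check that every raisin, taken as the observer, sees every other raisin recede at a speed proportional to its distance.

```python
# A minimal sketch (illustration only): if every distance in a 1-D "loaf"
# is scaled by the same factor over a time interval, each raisin sees
# every other raisin recede at a speed proportional to its distance --
# from ANY raisin's point of view.

positions = [0.0, 1.0, 2.0, 5.0]   # raisin positions (arbitrary units)
scale = 1.1                        # loaf expands 10% in one time unit
dt = 1.0

new_positions = [scale * x for x in positions]

for obs_old, obs_new in zip(positions, new_positions):
    speeds = []
    for x_old, x_new in zip(positions, new_positions):
        d_old = abs(x_old - obs_old)          # distance before expansion
        d_new = abs(x_new - obs_new)          # distance after expansion
        if d_old > 0:
            v = (d_new - d_old) / dt          # recession speed
            speeds.append(v / d_old)          # "Hubble constant" this observer measures
    print(speeds)
```

Every ratio v/d comes out as the same constant, (scale − 1)/dt = 0.1, no matter which raisin is chosen as observer — exactly the Hubble-type law the quote describes.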

Notice the two qualifications. The obvious one is “unless you get too close to the edge of the loaf where the analogy breaks down”. The second is that this description is correct only from the perspective of velocity. But there is a problem with this.

Look up in the night sky, and you can see the band of stars called the Milky Way. It helps if you are up in the Rocky Mountains, above 7,000 ft (2,133 m), away from the city lights. Dan Duriscoe produced one of the best pictures of our Milky Way that I have seen, taken from Death Valley, California.

What do you notice?

I saw a very beautiful band of stars rising above the horizon, and one of my friends pointed to it and said “That is the Milky Way”. Wow! We could actually see our own galaxy from within.

Hint: the Earth is about halfway between the center of the Milky Way and the outer edge.

What do you notice?

We are not at the edge of the Milky Way; we are halfway inside it. So “unless you get too close to the edge of the loaf where the analogy breaks down” should not apply. Right?

Wrong. We are only halfway in, and yet we see the Milky Way severely constrained to a narrow band of stars. That is, if the baking bread model were correct, we would have to be far from the center of the Milky Way. This is not the case.

The Universe is on the order of 10^3 to 10^6 times larger. Using our Milky Way as an example, the Universe should look like a large smudge on one side of the sky and a small smudge on the other if we are even halfway out. We should see two equally sized smudges only if we are at the center of the Universe! And, more importantly, from the relative sizes of the smudges we could calculate our position with respect to the center of the Universe! But the Hubble pictures show us that this is not the case. We do not see directional smudges, but a random and even distribution of galaxies across the sky in any direction we look.
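The “smudge” argument can be sketched with a toy simulation. This is my own hypothetical illustration, not the author’s calculation: scatter galaxies uniformly through a finite ball, place an observer halfway from the center to the edge, and count how many galaxies lie on the sky toward the center versus away from it. An off-center observer in a bounded, loaf-like universe would see a heavily lopsided sky.

```python
import random

random.seed(42)

# Toy sketch (illustration only): galaxies uniform in a ball of radius R,
# observer halfway out along the x-axis. Count galaxies on the
# center-facing side of the sky versus the edge-facing side.

R = 1.0
N = 20_000
observer_x = 0.5 * R               # halfway from center to edge

toward_center = away_from_center = 0
for _ in range(N):
    # uniform point in the ball via rejection sampling
    while True:
        x, y, z = (random.uniform(-R, R) for _ in range(3))
        if x * x + y * y + z * z <= R * R:
            break
    # galaxies with x below the observer lie on the center-facing side
    if x < observer_x:
        toward_center += 1
    else:
        away_from_center += 1

print(toward_center, away_from_center)
```

With the observer halfway out, roughly 84% of the galaxies fall on the center-facing side: the lopsidedness that would, in such a universe, reveal our position. The Hubble images show no such asymmetry.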

Therefore the baking bread model is an incorrect model of the Universe, and necessarily any theoretical model that depends on the baking bread structure of the Universe is also incorrect.

We know that we are not at the center of the Universe. The Universe is not geocentric. Neither is it heliocentric. The Universe is such that anywhere we are in the Universe, the distribution of galaxies across the sky must be the same.

Einstein (TV series Cosmic Journey, Episode 11, Is the Universe Infinite?) once described an unbounded Universe as the surface of a finite sphere. If the Universe were the three-dimensional surface of a four-dimensional sphere, then all the galaxies would be expanding away from each other, from any perspective or any position on this surface. And, more importantly, unlike the baking bread model, one could not have a ‘center’ reference point on this surface. That is, the Universe would be ‘isoacentric’, and both the velocity property and the center property would hold simultaneously.
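The sphere-surface picture can also be sketched in code. This is my own lower-dimensional illustration (an ordinary 2-sphere standing in for the higher-dimensional case): fix galaxies as points on the surface of a sphere and let the radius grow. Every pairwise great-circle distance grows by the same factor as the radius, so every observer sees a Hubble-type recession, and no point on the surface is the center of the expansion.

```python
import math
import random

random.seed(1)

# Sketch (illustration only): galaxies as fixed directions on a sphere
# whose radius grows. Surface (great-circle) distances between every
# pair scale with the radius, so expansion looks the same from any
# point and the surface has no center.

def random_unit_vector():
    """Uniform random direction via normalized Gaussian components."""
    while True:
        v = [random.gauss(0.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        if n > 1e-12:
            return [c / n for c in v]

def great_circle(p, q, radius):
    """Surface distance between unit vectors p, q on a sphere of given radius."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
    return radius * math.acos(dot)

points = [random_unit_vector() for _ in range(5)]
r1, r2 = 10.0, 11.0                 # radius grows by 10%

ratios = []
for i in range(len(points)):
    for j in range(i + 1, len(points)):
        d1 = great_circle(points[i], points[j], r1)
        d2 = great_circle(points[i], points[j], r2)
        ratios.append(d2 / d1)

print(ratios)
```

Every pair separation grows by exactly r2/r1 = 1.1, regardless of which galaxy is taken as the observer — the velocity property holds everywhere, with no privileged center, which is the ‘isoacentric’ behavior described above.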



Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.

Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.