
Note: What follows is written with the United States of America in mind, but the theme is international.

Each time an extreme weather event takes place, humanity is reminded that basic preparation for an off-grid experience had not taken place across large swaths of the affected population. Ironically, such preparation does not begin, publicly and en masse, even after the event.

In the near term, saving humanity will have a lot to do with teaching a kid to build a fire. More esoteric “preservers” and “shields” have their place, but “Scout” knowledge can produce immediate quantitative and qualitative improvements in humanity’s survival capabilities.

After weather-induced disasters, our tendency is toward the construction of physical things: better towers, more resilient dams, improved architecture. Seldom do we do anything to improve ourselves. Thousands remained helpless and dependent in the aftermath of Hurricane Sandy.

We have the resources in abundance to mobilize a citizen education program. Many veterans hold expert-level qualifications in survival training, for example, and with the Internet and iPads their knowledge could be disseminated at nominal cost to every public school auditorium and town hall equipped with electricity, while also providing the instructors with competitive compensation.

To practice and train in the practical skills of preparedness, schools, towns, cities and parks could coordinate to deliver on- and off-site programs that all citizens could reasonably take part in over a specified period. The goal should be to ensure that, within a specific time frame, every citizen is aware of and able to employ a holistic set of preparedness actions in an emergency. It’s a simple, clear and achievable objective.

It is what we do or fail to do, as a society and as individuals, to prepare for and learn from risk events that makes them more or less harmful now and in the future. Barring total annihilation, the “existential risk” of extreme weather events, extreme geological events, terror events, financial avalanche, or their compounding is the systemic and chronic damage that mutilates the fabric of society over time, leaving us weaker each time we face a new emergency, and more prone to creating the next emergency or failing to prepare adequately for it.

A citizen-wide preparedness program for unplanned off-grid emergencies could be designed for fun, for building camaraderie, and for orienting adults and children toward a grounded, active, positive engagement with a highly variable world, doing much to offset the negative impacts of an increasingly disaster-prone environment and much to build a more internally cohesive society. The literal return on assets (ROA) of investment in citizen preparedness should be exponential if the programming is organized with care, especially when the investment in human capital is included and evaluated as an asset class. The very act of addressing how we handle disaster will diminish the potential for disaster itself.

A more self-sufficient and stable citizenry would help increase public wealth and decrease public debt by limiting or eliminating the public expenditures made after an emergency (through the Federal Emergency Management Agency or through deployment of the military, for example). Insurance claims of many types would decrease, new markets for sustainable goods and services would emerge, and dependence for survival on non-sustainable resources like fossil fuels and coal-powered electricity would be tempered, all with increasing measure and positive impact over time.

A logical first step for such a program would be for a public official of rank to announce it as a national priority, setting a helpful tone of willingness, support or mandate. Local councils and agencies could also come together in confederation to create the same effect. Organizations devoted to teaching preparedness skill sets could use this moment to create momentum.

Becoming competent at intelligently facing adverse conditions is the most important skill required in society today. Until all or most people are able to act with sustained rationality and functionality in an unfamiliar situation or emergency, systems will continue to decay or collapse faster than our ability to repair the damage or survive the impact peaceably.

It was on a long-haul flight many months ago that I recalled a visit to the National Air and Space Museum [1] for a fellow passenger with whom I had struck up a conversation. Asked if I could recommend somewhere to visit in Washington, DC, I recounted how I had spent an entire day marveling at the collection of historic aircraft and spacecraft on my only visit to that city, fifteen years or so earlier, as a young adult and, as always, a kid at heart.

Seeing the sheer scale of the F-1 engine for the Saturn V rocket first hand, stepping inside an Apollo command module identical to those used during the Apollo program, viewing full life-size replicas of the Lunar Roving Vehicle, an Apollo Lunar Module and, for some reason what seemed most surreal to me, the Viking 1 lander: this was enchantment.

However, for all the amazement such a museum can provide, it is also a saddening reminder that what was once the forefront of human ambition and endeavor has now largely been consigned to history. NASA budgets are cut annually [2] while military expenditure takes ever more precedence. The savings from a planned six percent budget decrease in 2013 are equivalent to three hours of the Iraq and Afghanistan wars. Instead of reaching to explore outer space, we are encouraged to get excited about the equivalent billions [3] invested in science exploring the subatomic inner-space world. Meanwhile, we tend to forget that the ambitions of space exploration are not just to satisfy some wide-eyed childhood yearning to explore; there is the serious and sobering prospect of needing to ensure that we as a species can eventually colonize other worlds, so that we are not counting down the days to our extinction in an ever-more-precarious planetary solitude.

In the face of such indifference, concepts such as lifeboats have become marginalized to what is perceived to be a realm solely for loons and dreamers, or “space cadets” as we used to call them back in our school days. The trillion-dollar question is what it would take to redirect all that military investment into science and exploration instead. It comes down to credibility. Governments shy away from investing public funds where credibility is lacking.

It was an easy sell to persuade the public to invest in the military after the tragic events of 9/11 and terrorist threats that were presented, largely through propaganda and disinformation, as an existential risk to the free world. The purse strings opened, and an unforgivable amount was spent on the military in the subsequent years. Let us hope it does not take unprecedented natural disasters [4] to awaken the world to the fact that it is nature which poses the far greater existential risk to the survival of our society in the long term.

[1] http://airandspace.si.edu/
[2] http://www.care2.com/causes/2013-nasa-budget-gutted.html
[3] http://www.ibtimes.com/forbes-finding-higgs-boson-cost-1325-billion-721503
[4] http://rt.com/news/paint-asteroid-earth-nasa-767/

Humans have questioned death and searched for immortality since they first became conscious of the finiteness of life. Many modern humans are now confident (or at least hopeful) that it may be possible to achieve immortality, perhaps through technological advances. This is a myth. It is against the laws of physics (think of entropy) for anyone to become immortal, so it will not happen.

Let me clarify what I mean. The term ‘immortal’ literally means someone who never dies, i.e. lives forever. But ‘forever’ means really forever, more than 50 trillion years, until the end of time. In the foreseeable future (the future relevant to us alive today) this is just plain nonsense. If the term is nonsense, then it should not be used. Better terms may be ‘longevity’ or ‘extreme lifespan’, which mean living for many years without stipulating a number. Extreme longevity, or extreme life extension, is not immortality. One may be able to live for 1,000 years and then still die. Another suitable term is ‘indefinite lifespan’: the absence of a sustained increase of mortality as a function of age (i.e. the absence of death due to aging). These terms denote something feasible, something that can be achieved with near-term future technology.

Another legitimate term to use is ‘Human Biological Immortality’. This is a strict term used in biology to refer to the decrease of the rate of cellular mortality as a function of age. It is, in other words, similar to the term ‘indefinite lifespan’. Here the emphasis is on indefinite, and not on infinite.

I believe that certain humans will be able to live indefinitely (50 years, 500 years, no a priori limit) and that this will happen through a combination of natural evolutionary events (https://acrobat.com/#d=MAgyT1rkdwono-lQL6thBQ) enhanced and accelerated by science and technology (http://hplusmagazine.com/2011/03/04/indefinite-lifespans-a-natural-consequence-of-the-global-brain/). Death by aging will be abolished, and people will die only from accidents, illnesses, etc. We will still be mortal.

Is it really necessary to stick to the exact meaning of the words? Yes, it is, if we are to be taken seriously. Using terms like ‘eternal life’, ‘immortality’, or ‘living forever’ decreases the scientific credibility of the anti-aging movement, carries undertones of religious beliefs that have no basis in science, and disconnects both the general public and the funding bodies from the subject.

The point here is that any emerging technologies will only emerge if the public supports them, and if the researchers get funding. If supporters of these technologies appear too irrational, illogical or unreasonable, then they will damage the cause, and make it more difficult for others who, still visionary, have more achievable aims.

I was about to discuss the third of three concepts, but thought a look back would be appropriate at this time. In my earlier post I showed that the photon/particle wave function cannot be part of the photon/particle itself, as this would violate the empirical Lorentz-Fitzgerald transformations and therefore Einstein’s Special Theory of Relativity. The wave function is only the photon/particle’s disturbance of the spacetime it is in, which explains why photons/particles appear to have wave properties. They don’t. They disturb spacetime like a pebble dropped into a pond. The pond’s ripples are not the pebble.

In the recent findings, Dr. Alberto Peruzzo of the University of Bristol (UK), the lead author of the paper, is quoted as saying: “The measurement apparatus detected strong nonlocality, which certified that the photon behaved simultaneously as a wave and a particle in our experiment. … This represents a strong refutation of models in which the photon is either a wave or a particle.” This is a very important finding and another step in the progress of science toward a better understanding of our Universe.

Those of you who have been following my blog posts will recognize this as empirical validation, using a single-structure test, that both wave and particle properties occur together. What is required next, to be empirically rigorous, is to either confirm or deny that this wave function is a spacetime disturbance. For that we require a dual-structure test.

If this wave function is a spacetime disturbance, then Einstein’s Special Theory of Relativity is upheld, and we would require a major rethink of quantum physics, the physics of elementary particles. If this wave function is not a spacetime disturbance but part of the particle structure, then there is an empirical exception to the Lorentz-Fitzgerald transformations, and we would require a rethink of Einstein’s Special Theory of Relativity.

Here is a proposal for a dual-structure test (to test the two alternative hypotheses), which probably only an organization like CERN could execute: is it possible to disturb spacetime in a manner that exhibits the properties of a known particle while no underlying elementary particle is present, i.e. with no mass? I suppose other research institutions could attempt this, too. If successful, it would be a bigger discovery than Dr. Alberto Peruzzo and his team’s.

My money is on Lorentz-Fitzgerald and Einstein being correct, and I infer that the community of quantum and string theorists would not be happy at the possibility of this dual-structure test.

So I ask, in the spirit of the Kline Directive, can we as a community of physicists and engineers come together, to explore what others have not, to seek what others will not, to change what others dare not, to make interstellar travel a reality within our lifetimes?


—————————————————————————————————

Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.

Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.

To achieve interstellar travel, the Kline Directive instructs us to be bold, to explore what others have not, to seek what others will not, to change what others dare not. It instructs us to extend the boundaries of our knowledge, to advocate new methods, techniques and research, and to sponsor change, not the status quo, on five fronts: Legal Standing, Safety Awareness, Economic Viability, Theoretical-Empirical Relationships, and Technological Feasibility.

In this post I discuss the second of three concepts that, if implemented, should speed up the rate of innovation and discovery so that we can achieve interstellar travel within a time frame of decades, not centuries. I must remind you that this will probably upset some physicists.

One of the findings of my 12-year study was that gravitational acceleration is independent of the internal structure of a particle, which led to the elegantly simple formula for gravitational acceleration, g = τc². This raised the question: what is the internal structure of a particle? For ‘normal’ matter, the Standard Model suggests that protons and neutrons consist of quarks, or other mass-based particles. Electrons and photons are thought to be elementary.
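A quick dimensional check (my gloss, not part of the study itself): for g = τc² to come out in m/s², with c² contributing m²/s², the quantity τ must carry units of 1/m; that is, τ acts as a per-meter rate of change of a dimensionless, time-dilation-like ratio.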

I had a thought: a test for mass as the gravitational source. If ionized matter showed the same gravitational acceleration effects as non-ionized matter, one could conclude that mass is the source of gravitational acceleration, not quark interaction, because the different ionizations would have different electron mass but the same quark interactions. This would be a difficult test to do correctly, because electric field effects are much greater than gravitational effects.
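To see the scale of the difficulty, here is a rough back-of-envelope check (my own numbers, using standard constants; the iron-56 example is mine, not the author’s):

```python
# Fractional mass change when one electron is removed from a neutral atom
m_e = 9.109e-31    # electron mass, kg
m_fe = 9.273e-26   # mass of a neutral iron-56 atom, kg (approximate)
print(f"mass change per electron removed: {m_e / m_fe:.2e}")  # ~1 part in 100,000

# Meanwhile, electric forces on the resulting ion dwarf gravity.
# For two protons, the Coulomb-to-gravity force ratio is famously ~1e36:
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k_e = 8.988e9      # Coulomb constant, N m^2 C^-2
e = 1.602e-19      # elementary charge, C
m_p = 1.673e-27    # proton mass, kg
print(f"Coulomb / gravity (proton pair): {k_e * e**2 / (G * m_p**2):.2e}")
```

So the test would be hunting for a roughly ten-parts-per-million change in mass while stray electric effects some thirty-plus orders of magnitude stronger act on the ions.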

One could ask, what is the internal structure of a photon? The correct answer is that no one knows. Here is why. In electromagnetism, for radio antennas specifically, the energy inside the hollow antenna is zero. However, in quantum theory, specifically for the nanowire for light photons, the energy inside the nanowire increases toward its center. I’m not going to provide references, as I am not criticizing any specific researcher. So which is it?

One could then ask, at what wavelength does this energy distribution change from zero (for radio waves) to an increase (for light photons)? This is another example of the mathematics of physics providing correct answers while being inconsistent. So we don’t know.

To investigate further, I borrowed a proposal from two German physicists, I. V. Drozdov and A. A. Stahlhofen (How long is a photon?), who had suggested that a photon is about half a wavelength long. I thought, why stop there? What if it were an infinitely thin slice? Wait. What was that? An infinitely thin slice would be consistent with Einstein’s Special Theory of Relativity, since the Lorentz-Fitzgerald transformations dictate that anything traveling at the velocity of light must have zero thickness along its direction of motion. But then, if the photon is indeed an infinitely thin pulse, why do we observe a wave function of finite extent, which is inconsistent with the Special Theory of Relativity?
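Here is a minimal numerical sketch of that zero-thickness argument (my illustration, not the author’s): the Lorentz-Fitzgerald contraction factor along the direction of motion goes exactly to zero at v = c.

```python
import math

c = 299_792_458.0  # speed of light, m/s

def contraction(v: float) -> float:
    """Lorentz-Fitzgerald length-contraction factor along the motion."""
    return math.sqrt(1.0 - (v / c) ** 2)

for frac in (0.9, 0.99, 0.9999, 1.0):
    print(f"v = {frac} c  ->  lengths scale by {contraction(frac * c):.6f}")

# At v = c the factor is exactly 0: zero thickness along the direction of motion.
```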

The only consistent answer I could come up with was that the wave function is the photon’s effect, the photon’s disturbance of spacetime, and not the photon itself.

Here is an analogy. Take a garden rake, turn it upside down, and place it under a carpet. Move it. What do you see? The carpet exhibits an envelope-like wave function that appears to be moving in the direction the garden rake is moving. But the envelope is not moving. It is a bulge that shows up wherever the garden rake is. The rake is moving, but not the envelope.

Similarly, the wave function is not moving; it spreads across the spacetime where the photon is. Now both are consistent with Einstein’s Special Theory of Relativity. Then why is the Standard Model successful? Because, just as the bulge is unique to the shape of the garden rake, the wave-function disturbances of spacetime by the photon and other particles are unique to the properties of the respective particles.

In my book, this proposed consistency with Special Theory of Relativity points to the existence of subspace, and a means to achieve interstellar travel.

There are a lot of inconsistencies in our physical theories, and we need to start addressing these inconsistencies if we are to achieve interstellar travel sooner rather than later.


—————————————————————————————————


Einstein Described the Telemach Theorem in 1913

Otto E. Rossler

Faculty of Science, University of Tübingen, Auf der Morgenstelle 8, 72076 Tübingen, F.R.G.

Abstract

Two years before completing the general theory of relativity, Einstein had already arrived at the complete constant-c Telemach theorem. This Einstein-Nordström-Abraham metric, as it can be called, remains valid in the vertical direction in the full-fledged general theory of relativity. A connection to cryodynamics is drawn.

(November 7, 2012)

In a 1913 paper titled “On the present state of the problem of gravitation” [1], Einstein, on the fourth page, described what can be called the Einstein-Nordström-Abraham formalism. The four (and by implication five) findings remain valid in the full-fledged theory arrived at two years later, specifically in the implied Schwarzschild metric.

The evidence:

1) c is globally constant.

Quote: “… the velocity of light propagation is equal to the constant c.” (Fourth line underneath Eq.1’)

2) T is inversely proportional to the gravitational potential. (Unit intervals go up with increasing gravity)

Quote: “However, in our case it is possible that the natural [local] interval dτ₀ differs from the coordinate interval dτ by a factor [ω] that is a function of φ [the gravitational potential]. We therefore set dτ₀ = ω dτ.” (= Eq. 3)

3) L is inversely proportional to the gravitational potential. (Unit lengths go up with increasing gravity)

Quote: “The lengths l and the volumes V, measured in coordinates, also play a role. One can derive the following relation between the coordinate volume V and the natural [local] volume V₀: Eq. (4)” [In this Eq. (4), the ratio V/V₀ is essentially proportional to 1/ω³, so that L/L₀ is essentially proportional to 1/ω.]

4) M is proportional to the gravitational potential. (Unit mass goes down with increasing gravity)

Quote: “… according to Nordström’s theory, the inertia of a mass point is determined by the product m·φ [φ the gravitational potential]; the smaller φ is, i.e., the larger the masses we gather in the neighborhood of the mass point under consideration, the smaller the inertial resistance with which the mass point opposes a change of its velocity becomes.” (Three lines after Eq. 2a)

5) Ch is proportional to the gravitational potential. (Unit charges go down with increasing gravity)

Remark: This corollary to point 4, referring to charge, is NOT explicitly mentioned by Einstein but follows trivially from the universal rest mass-to-charge ratio valid for each particle class.
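Taken together, writing φ for the gravitational potential as in the quotes above, points 1 through 5 can be restated compactly (my paraphrase of the five points, not Einstein’s notation):

c = const., T ∝ 1/φ, L ∝ 1/φ, M ∝ φ, Ch ∝ φ.

Since φ decreases as masses are gathered nearby (see the quote under point 4), unit intervals and unit lengths go up while unit masses and unit charges go down with increasing gravity, exactly as the parenthetical glosses state.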

Comment

The same 5 points were described almost a century later in the “Telemach theorem” (T, L, M, Ch) [2]. There, Einstein’s equivalence principle of 1907 (which lies behind point 2) was shown to entail all 5 facts. Five years earlier, the same results had been found to be implicit in the vertical direction of the Schwarzschild metric of general relativity [3], a fact which was soon generalized to 3 dimensions by a gifted anonymous author named “Ich” (see [3]). Independently, Richard J. Cook [4] arrived at points 1 through 4 on the basis of general relativity proper and subsequently expressed his full support for point 5 (see [2]).

Historical Conclusion

Historians of science have reworked the period from 1907 (the discovery of the equivalence principle) to 1913, in which the above results were discovered, and beyond [5,6]. Nevertheless the Telemach theorem (if the above results deserve this onomatopoetic name) remained unappreciated for almost a century. The reason deserves to be elucidated by historians.

Outlook

A totally unrelated recent theory, cryodynamics, has revealed that the famous big-bang theory of cosmology, based on general relativity without regard to the implied Telemach theorem (which via L excludes bounded solutions), needs replacement by a stationary cosmology, unbounded in space and time in a fractal manner [7]. This fact may help eliminate the strong professional pressure that existed until recently in favor of sticking to mathematically allowed but physically unrealistic nonlinear transformations in general relativity. In this way, the recent passive revolt staged against constant-c general relativity by part of the establishment in the field, in conjunction with the nuclear-physics establishment, can perhaps be overcome. Everyone hopes that no ill effects on the survival of planet Earth will follow (the last 8 weeks of increasing the risk even further could momentarily still be avoided).

The reason the scientific outlook for Telemach is maximally bright lies in a fortunate coincidence: cryodynamics is maximally important economically [8]. The same military-industrial complex that has so far boycotted Telemach and its precursors will enthusiastically embrace cryodynamics, sister discipline to thermodynamics, because of the unprecedented revenues it promises by making hot fusion on earth possible for the first time [8]. So if money stood in the way of embracing Telemach, the situation has by now totally changed.

References

[1] Einstein, A., On the present state of the problem of gravitation (in German). Physikalische Zeitschrift 14, 1249 – 1262 (1913). See: The Collected Papers of Albert Einstein, Vol. 4, English Translation, pp. 198 – 222, pages 102 – 103. Princeton University Press 1996.

[2] Rossler, O.E., Einstein’s equivalence principle has three further implications besides affecting time: T-L-M-Ch theorem (“Telemach”). African Journal of Mathematics and Computer Science Research 5, 44 – 47 (2012), http://www.academicjournals.org/ajmcsr/PDF/pdf2012/Feb/9%20Feb/Rossler.pdf

[3] Rossler, O.E., Abraham-like return to constant c in general relativity: “R-theorem” demonstrated in Schwarzschild metric. Fractal Spacetime and Noncommutative Geometry in Quantum and High Energy Physics 2, 2012, http://www.nonlinearscience.com/paper.php?pid=0000000148

[4] Cook, R.J., Gravitational space dilation (2009), http://arxiv.org/pdf/0902.2811v1.pdf

[5] Castagnetti, G., H. Goenner, J. Renn, T. Sauer, and B. Scheideler, Foundation in disarray: essays on Einstein’s science and politics in the Berlin years, 1997, http://www.mpiwg-berlin.mpg.de/Preprints/P63.PDF

[6] Weinstein, G., Einstein’s 1912 – 1913 struggles with gravitation theory: importance of static gravitational fields theory, 2012, http://arxiv.org/ftp/arxiv/papers/1202/1202.2791.pdf

[7] Rossler, O.E., The new science of cryodynamics and its connection to cosmology. Complex Systems 20, 105 – 113 (2011). http://www.complex-systems.com/pdf/20-2-3.pdf

[8] Rossler, O.E., A. Sanayei and I. Zelinka, Is Hot fusion made feasible by the discovery of cryodynamics? In: Nostradamus: Modern Methods of Prediction, Modeling and Analysis of Nonlinear Systems, Advances in Intelligent Systems and Computing Volume 192, 2013, pp 1 – 4 (has appeared). http://link.springer.com/chapter/10.1007/978-3-642-3.….ccess=true

—————————————————————————————————


In this post I discuss three concepts that, if implemented, should speed up the rate of innovation and discovery so that we can achieve interstellar travel within a time frame of decades, not centuries.

What I’m going to say will upset some physicists, but I need to say it, because we need to resolve some issues in physics to distinguish between mathematical construction and conjecture. Once we are on the road to mathematical construction, there is hope that this will eventually lead to technological feasibility. This post draws on my paper “Gravitational Acceleration Without Mass And Noninertia Fields” in the peer-reviewed AIP journal Physics Essays, and on my book An Introduction to Gravity Modification.

The Universe is much more consistent than most of us (even physicists) suspect. Therefore, we can use this consistency to weed out mathematical conjecture from our collection of physical hypotheses. There are two sets of transformations that are observable. The first: in a gravitational field, at a point where the acceleration is a, compared with a reference location (denoted 0) an infinite distance from the gravitational source, there exist non-linear transformations Γ(a) which state that time dilation tₐ/t₀, length contraction x₀/xₐ, and mass increase mₐ/m₀ behave in a consistent manner such that

tₐ/t₀ = x₀/xₐ = mₐ/m₀ = Γ(a).   (1)

The second consistency is the Lorentz-Fitzgerald transformation Γ(v), which states that at a velocity v, compared with rest (denoted 0), time dilation tᵥ/t₀, length contraction x₀/xᵥ, and mass increase mᵥ/m₀ behave in a consistent manner such that

tᵥ/t₀ = x₀/xᵥ = mᵥ/m₀ = 1/√(1 − v²/c²) = Γ(v).   (2)

Now here is the surprise. The Universe is so consistent that if we use the non-linear transformation, equation (1), to calculate the free-fall velocity (from infinity) down to a certain height above a planet’s or star’s surface, along with its corresponding time dilation, we find that it is exactly what the Lorentz-Fitzgerald transformation, equation (2), requires. There is a previously undiscovered second level of consistency!
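Here is a minimal numerical sketch of that check (my own illustration, not the paper’s code; it assumes the Schwarzschild form 1/√(1 − 2GM/(rc²)) for the gravitational factor and the Newtonian free-fall-from-infinity velocity √(2GM/r), with the Sun as the example source):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0    # speed of light, m/s
M = 1.989e30         # mass of the Sun, kg (example gravitational source)
r = 6.957e8          # solar radius, m (example height above the center)

# Free-fall velocity from rest at infinity down to radius r (Newtonian escape speed)
v = math.sqrt(2 * G * M / r)

# Time-dilation factor from velocity alone, per equation (2)
gamma_v = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Time-dilation factor from the gravitational field alone (Schwarzschild form)
gamma_a = 1.0 / math.sqrt(1.0 - 2 * G * M / (r * c ** 2))

print(f"free-fall velocity at r: {v:,.0f} m/s")
print(f"factor from velocity (2): {gamma_v:.12f}")
print(f"factor from gravity (1):  {gamma_a:.12f}")
```

The two factors agree because v² = 2GM/r makes 1 − v²/c² identical to 1 − 2GM/(rc²); the script simply makes the coincidence visible.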

You won’t find this discovery in any physics textbook. Not yet, anyway. I published it in my 2011 peer-reviewed Physics Essays paper, “Gravitational Acceleration Without Mass And Noninertia Fields”.

Now let us think about this for a moment. What this says is that the Universe is so consistent that the linear velocity-time dilation relationship must be observable wherever velocity and time dilation are present, even in non-linear spacetime relationships where acceleration is present, altering the velocity and therefore the time dilation.

Or to put it differently, wherever Γ(a) is present, the space, time, velocity and acceleration relationships must allow for Γ(v) to be present in a correct and consistent manner. When I discovered this I said, wow! Why? Because we now have a means of differentiating hypothetical-theoretical gravitational fields, and therefore mathematical conjectures, from natural-theoretical gravitational fields, which are correct mathematical constructions.

That is, we can test the various quantum gravity and string hypotheses, and any of the tensor metrics! Einstein’s tensor metrics should be correct, but from a propulsion perspective there is something more interesting: the Alcubierre tensor metrics. Alcubierre was the first, using General Relativity, to propose the theoretical possibility of warp speed (note: not how to engineer it). Alcubierre’s work is very sophisticated, but the concept is elegantly simple: wrap a spacecraft in gravitational-type deformed spacetime to get it to ‘fall’ in the direction of travel.

The concept suggests that both equations (1) and (2) are no longer valid when the relative velocity between the outer edge of the spacetime wrap and an external observer is at c, the velocity of light, or greater (one needs to do the math to get the correct answer). Even at an acceleration of 1g, and assuming the craft has eventually reached c, equations (1) and (2) are no longer consistent. Therefore, my inference is that the Alcubierre metric allows for zero time dilation within the wrap but not for velocities greater than the velocity of light. It is therefore also doubtful that Dr. Richard Obousy’s hypothesis, that it is possible to achieve velocities of 1E30c with a quantum-string version of the Alcubierre warp drive, is correct.
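To make the breakdown at v = c concrete, here is a small sketch (mine, for illustration) of the time-dilation factor of equation (2) as v approaches c:

```python
import math

c = 299_792_458.0  # speed of light, m/s

def gamma(v: float) -> float:
    """Time-dilation factor from equation (2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

for frac in (0.9, 0.99, 0.999999):
    print(f"v = {frac} c  ->  gamma = {gamma(frac * c):,.1f}")

# gamma diverges as v -> c and becomes imaginary for v > c: equations (1)
# and (2) cannot consistently describe relative velocities at or beyond c.
```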


—————————————————————————————————



In this set of posts I discuss three concepts. If implemented, these concepts have the potential to bring about major changes in our understanding of the physical Universe. But first, a detour.

In my earlier post I suggested that John Archibald Wheeler and Richard Feynman, giants of the physics community, could have asked a different question about certain solutions to Maxwell’s equations (what could we do differently?), instead of asking whether retrocausality could be a solution.

I worked 10 years for Texas Instruments in the 1980s and 1990s. Corporate in Dallas had given us the daunting task of raising our Assembly/Test yields from 83% to 95% within 3 years, across 6,000 SKUs (products), with only about 20 (maybe fewer) engineers and no assistance from Dallas. Assembly/Test skills had moved offshore, so Dallas was not in a position to provide advice. I look back now and wonder how Dallas came up with the 95% number.

It was impossibly daunting because many of our product yields were in the 70 percent range. We had good engineers and managers. The question, therefore, was how do you do something seemingly impossible without changing your mix of people, equipment and technical skill sets?

Let me tell you the end first. We achieved 99% to 100% Assembly/Test yields across the board, for 6,000 SKUs, within 3 years. And this in a third-world nation not known for any remarkable scientific or engineering talent! I don’t have to tell you what other lessons we learned, as they should be obvious. So what I told Dr. David Neyland of DARPA’s TTO, “I’ll drop a zero”, at the first 100YSS conference in 2011 still holds.

How did we do it? For my part, I was responsible for Engineering Yield (IT) Systems, test-operation cost modeling for overhead transfer pricing, and tester capacity models to figure out how to increase test capacity. But the part that is relevant to this discussion was teamwork. We organized the company into teams and brought in consultants to teach what teamwork was and how to arrive at and execute operational and business decisions as teams.

And one of the keys to teamwork was to allow anyone and everyone to speak up, to voice their opinions, to ask questions, no matter how strange or silly those questions appeared to be, and to never put down another person because he or she had different views.

Everyone, from the managing director of the company down to the production operators, was organized into teams. Every team had to meet once a week to ask those questions and seek those answers. That was some experience, working with and in those teams. We found things we did not know or understand about our process, which in turn set off new and old teams to go figure them out. We understood the value of a matrix-type organization.

As a people not known for any remarkable scientific and engineering talent, we did it. We did the impossible. I learned many invaluable lessons during my decade at Texas Instruments that I’ll never forget and will always be grateful for.

My Thanksgiving this year is that I am thankful I had the opportunity to work for Texas Instruments when I did.

So I ask, in the spirit of the Kline Directive, can we as a community of physicists and engineers come together, to explore what others have not, to seek what others will not, to change what others dare not, to make interstellar travel a reality within our lifetimes?


—————————————————————————————————



In this post I will explore Technological Feasibility. At the end of the day, that is the only thing that matters. If a hypothesis cannot vindicate itself with empirical evidence, it will not become technologically feasible. If it is not technologically feasible, it stands no chance of becoming commercially viable.

If we examine historical land, air and space speed records, we can construct an estimate of the velocities that future technologies can achieve, aka technology forecasting. See the table below for some of the speed records.

Year | Record / record holder | Craft | Velocity (km/h) | Velocity (m/s)
2006 | Escape Earth | New Horizons | 57,600 | 16,000
1976 | Capt. Eldon W. Joersz and Maj. George T. Morgan | Lockheed SR-71 Blackbird | 3,530 | 980
1927 | Car land speed record (not jet engine) | Mystry | 328 | 91
1920 | Joseph Sadi-Lecointe | Nieuport-Delage NiD 29 | 275 | 76
1913 | Maurice Prévost | Deperdussin Monocoque | 180 | 50
1903 | Wilbur Wright at Kitty Hawk | Wright Aircraft | 11 | 3

A quick-and-dirty model derived from the data shows that we could achieve the velocity of light, c, by about 2152. See the table below.

Year | Velocity (m/s) | % of c
2200 | 8,419,759,324 | 2808.5%
2152 | 314,296,410 | 104.8%
2150 | 274,057,112 | 91.4%
2125 | 49,443,793 | 16.5%
2118 | 30,610,299 | 10.2%
2111 | 18,950,618 | 6.3%
2100 | 8,920,362 | 3.0%
2075 | 1,609,360 | 0.5%
2050 | 290,351 | 0.1%
2025 | 52,384 | 0.0%

The extrapolation suggests that at our current rate of technological innovation we won’t achieve light speed until the early 2150s. The real problem is that we won’t achieve even 0.1c until 2118. That is more than 100 years from today.
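For reproducibility, here is a minimal sketch (my reconstruction; the author’s actual model may differ) that fits a log-linear trend to the speed-record table above and solves for the year the trend crosses c. It lands at roughly 2152, matching the second table:

```python
import math

# (year, record velocity in m/s) from the speed-record table above
records = [(1903, 3), (1913, 50), (1920, 76), (1927, 91),
           (1976, 980), (2006, 16000)]

xs = [year for year, _ in records]
ys = [math.log10(v) for _, v in records]
n = len(records)
x_bar, y_bar = sum(xs) / n, sum(ys) / n

# Ordinary least-squares slope of log10(velocity) against year
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))

c = 299_792_458.0
year_c = x_bar + (math.log10(c) - y_bar) / slope
print(f"record velocity doubles every {math.log10(2) / slope:.1f} years")
print(f"trend crosses the speed of light around {year_c:.0f}")  # ~2152
```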

In my opinion this rate of innovation is too slow. Dr. David Neyland of DARPA’s TTO was the driving force behind DARPA’s contribution to the 100-Year Starship Study. When I met him during the first 100YSS conference, Sept. 30 to Oct. 2, 2011, I told him “I’ll drop a zero”. That is, I expect interstellar travel to be achievable in decades, not centuries. And to ramp up our rate of technological innovation, we need new theories and new methods of sifting through theories.
