
Solving complex problems is one of the defining features of our age. The ability to harness a wide range of skills and synthesise diverse areas of knowledge is integral to a researcher’s DNA. It is interesting to read how MIT first offered a class in ‘Solving Complex Problems’ back in 2000, in which, over the course of a semester, students attempt to ‘imagineer’ a solution to a highly complex problem. There is a great need for this type of learning in our educational systems. If we are to develop people who can tackle the Grand Challenges of this epoch, we need to create an environment in which our brains are allowed to be wired differently, through exposure to diverse areas of knowledge and to methods of understanding reality across disciplines.

When I look at my niece, who is only 4 years old, I wonder how I can give her the best education and prepare her to meet the challenges of this world, as she grows up in a world which fills my heart with great anxiety. It is fascinating to read about different educational approaches, from Steiner education to Montessori education to basing curricula and school design on cognitive neuroscience and educational theory. However, when I look at these thinkers’ insights and contrast them with educational policy in the developed world, there is quite clearly a huge disconnect between politics and science.

We need to develop a culture of complexity if we are to develop the ability and insight to solve complex problems. Looking at the world from the perspective of complexity builds a very different mindset: in how we think about the world, how we go about trying to understand it, and ultimately how we go about solving problems.

To achieve interstellar travel, the Kline Directive instructs us to be bold, to explore what others have not, to seek what others will not, to change what others dare not. To extend the boundaries of our knowledge, to advocate new methods, techniques and research, to sponsor change, not status quo, on five fronts: Legal Standing, Safety Awareness, Economic Viability, Theoretical-Empirical Relationships, and Technological Feasibility.

My apologies to my readers for the long break since my last post of Nov 19, 2012. I write the quarterly economic report for a Colorado bank’s Board of Directors. Based on my quarterly reports to the Board, I gave a talk, Are We Good Stewards?, on the US economy to about 35 business executives at a TiE Rockies’ Business for Breakfast event. This talk was originally scheduled for Dec 14, but was moved forward to Nov 30 because the original speaker could not make the time commitment for that day. There was a lot to prepare, and I am very glad to say that it was very well received. For my readers who are interested, here is the link to a pdf copy of my slides to Are We Good Stewards?

Now back to interstellar physics and the Kline Directive. Let’s recap.

In my last four posts (2c), (2d), (2e) & (2f) I identified four major errors taught in contemporary physics. First (2c), to be consistent with Lorentz-Fitzgerald contraction and the Special Theory of Relativity, elementary particles must contract as their energy increases. This is antithetical to string theories and explains why string theories are becoming more and more complex without discovering new empirically verifiable fundamental laws of Nature.

Second (2d), again to be consistent with Lorentz-Fitzgerald and the Special Theory of Relativity, a photon’s wave function cannot have length; it must be infinitesimally thin, of zero length. Therefore, this wave function necessarily has to be part of the photon’s disturbance of spacetime that is non-moving, just as a garden rake moving under a rug creates the appearance that the bulge, the wave-function-like envelope, is moving.

Third, exotic matter, negative mass in particular, converts the General Theory of Relativity into perpetual motion physics (sacrilege!) and therefore cannot exist in Nature. Fourth, the baking bread model (2e) of the Universe is incorrect, as our observations of the Milky Way necessarily point to the baking bread model not being ‘isoacentric’.

Einstein (2f) had used the Universe as an expanding 4-dimensional surface of a sphere (E4DSS) in one of his talks to explain how the number of galaxies looks the same in every direction we look. If Einstein is correct then time travel theories are not, as an expanding surface necessarily requires that the 4-dimensional Universe we know does not exist inside the expanding sphere, and therefore we cannot return to a past. And we cannot head to a future, because that surface has not yet happened. Therefore, first, the law of conservation of mass-energy holds, as nothing is mysteriously added by timelines. And second, causality paradoxes cannot occur in Nature. Note there is a distinction between temporal reversibility and time travel.

In this E4DSS model, wormholes would not cause time travel but would connect us to other parts of the Universe, creating tunnels from one part of the surface to another by going inside the sphere and tunneling to a different part of the sphere. So the real problem for theoretical physics is: how does one create wormholes without using exotic matter?

Previous post in the Kline Directive series.

Next post in the Kline Directive series.

—————————————————————————————————

Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.

Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.

A response to McClelland and Plaut’s comments in the Phys.org story:

Do brain cells need to be connected to have meaning?

Asim Roy
Department of Information Systems
Arizona State University
Tempe, Arizona, USA
www.lifeboat.com/ex/bios.asim.roy

Article reference:

Roy A. (2012). “A theory of the brain: localist representation is used widely in the brain.” Front. Psychology 3:551. doi: 10.3389/fpsyg.2012.00551

Original article: http://www.frontiersin.org/Journal/FullText.aspx?s=196&name=cognitive_science&ART_DOI=10.3389/fpsyg.2012.00551

Comments by Plaut and McClelland: http://phys.org/news273783154.html

Note that most of the arguments of Plaut and McClelland are theoretical, whereas the localist theory I presented is very much grounded in four decades of evidence from neurophysiology. Note also that McClelland may have inadvertently subscribed to the localist representation idea with the following statement:

“Even here, the principles of distributed representation apply: the same place cell can represent very different places in different environments, for example, and two place cells that represent overlapping places in one environment can represent completely non-overlapping places in other environments.”

The notion that a place cell can “represent” one or more places in different environments is very much a localist idea. It implies that the place cell has meaning and interpretation. I start with responses to McClelland’s comments first. Please reference the Phys.org story to find these quotes from McClelland and Plaut and see the contexts.

1. McClelland – “what basis do I have for thinking that the representation I have for any concept – even a very familiar one – is associated with a single neuron, or even a set of neurons dedicated only to that concept?”

There are four decades of research in neurophysiology on receptive field cells in the sensory processing systems and on hippocampal place cells that shows that single cells can encode a concept – from motion detection, color coding and line orientation detection to identifying a particular location in an environment. Neurophysiologists have also found category cells in the brains of humans and animals. See the next response, which has more details on category cells. The neurophysiological evidence is substantial that single cells encode concepts, starting as early as the retinal ganglion cells. Hubel and Wiesel won the Nobel Prize in Physiology or Medicine in 1981 for breaking this “secret code” of the brain. Thus there is enough basis to think that a single neuron can be dedicated to a concept, even at a very low level (e.g. for a dot, a line or an edge).

2. McClelland – “Is each such class represented by a localist representation in the brain?”

Cells that represent categories have been found in human and animal brains. Fried et al. (1997) found some MTL (medial temporal lobe) neurons that respond selectively to gender and facial expression and Kreiman et al. (2000) found MTL neurons that respond to pictures of particular categories of objects, such as animals, faces and houses. Recordings of single-neuron activity in the monkey visual temporal cortex led to the discovery of neurons that respond selectively to certain categories of stimuli such as faces or objects (Logothetis and Sheinberg, 1996; Tanaka, 1996; Freedman and Miller, 2008).

I quote Freedman and Miller (2008): “These studies have revealed that the activity of single neurons, particularly those in the prefrontal and posterior parietal cortices (PPCs), can encode the category membership, or meaning, of visual stimuli that the monkeys had learned to group into arbitrary categories.”

Lin et al. (2007) report finding “nest cells” in mouse hippocampus that fire selectively when the mouse observes a nest or a bed, regardless of the location or environment.

Gothard et al. (2007) found single neurons in the amygdala of monkeys that responded selectively to images of monkey faces, human faces and objects as they viewed them on a computer monitor. They found one neuron that responded in particular to threatening monkey faces. Their general observation is (p. 1674): “These examples illustrate the remarkable selectivity of some neurons in the amygdala for broad categories of stimuli.”

Thus the evidence is substantial that category cells exist in the brain.

References:

  1. Fried, I., McDonald, K. & Wilson, C. (1997). Single neuron activity in human hippocampus and amygdala during recognition of faces and objects. Neuron, 18, 753–765.
  2. Kreiman, G., Koch, C. & Fried, I. (2000). Category-specific visual responses of single neurons in the human medial temporal lobe. Nature Neuroscience, 3, 946–953.
  3. Freedman, D. J. & Miller, E. K. (2008). Neural mechanisms of visual categorization: insights from neurophysiology. Neuroscience & Biobehavioral Reviews, 32, 311–329.
  4. Logothetis, N. K. & Sheinberg, D. L. (1996). Visual object recognition. Annual Review of Neuroscience, 19, 577–621.
  5. Tanaka, K. (1996). Inferotemporal cortex and object vision. Annual Review of Neuroscience, 19, 109–139.
  6. Lin, L. N., Chen, G. F., Kuang, H., Wang, D. & Tsien, J. Z. (2007). Neural encoding of the concept of nest in the mouse brain. Proceedings of the National Academy of Sciences of the United States of America, 104, 6066–6071.
  7. Gothard, K. M., Battaglia, F. P., Erickson, C. A., Spitler, K. M. & Amaral, D. G. (2007). Neural responses to facial expression and face identity in the monkey amygdala. Journal of Neurophysiology, 97, 1671–1683.

3. McClelland – “Do I have a localist representation for each phase of every individual that I know?”

Obviously more research is needed to answer these types of questions. But Saddam Hussein- and Jennifer Aniston-type cells may provide the clue someday.

4. McClelland – “Let us discuss one such neuron – the neuron that fires substantially more when an individual sees either the Eiffel Tower or the Leaning Tower of Pisa than when he sees other objects. Does this neuron ‘have meaning and interpretation independent of other neurons’? It can have meaning for an external observer, who knows the results of the experiment – but exactly what meaning should we say it has?”

On one hand, this obviously brings into focus a lot of the work in neurophysiology. It boils down to asking who is to interpret the activity of receptive fields, place and grid cells and so on, and whether such interpretation can be independent of other neurons. In neurophysiology, the interpretation of these cells (e.g. for motion detection, color coding, edge detection, place cells and so on) is being verified independently in various research labs throughout the world, with repeated experiments. So it is not the case that some researcher is arbitrarily assigning meaning to cells, or that such results can’t be replicated and verified. For many such cells, the assignment of meaning is being verified by different labs.

On the other hand, this probably is a question about whether that cell is a category cell and how to assign meaning to it. The interpretation of a cell that responds to pictures of the Eiffel Tower and the Leaning Tower of Pisa, but not to other landmarks, could be somewhat similar to a place cell that responds to a certain location, or it could be similar to a category cell. Similar cells have been found in the MTL region — a neuron firing to two different basketball players, a neuron firing to Luke Skywalker and Yoda, both characters of Star Wars, and another firing to a spider and a snake (but not to other animals) (Quian Quiroga & Kreiman, 2010a). Quian Quiroga and Kreiman (2010b, p. 298) had the following observation on these findings: “…. one could still argue that since the pictures the neurons fired to are related, they could be considered the same concept, in a high level abstract space: ‘the basketball players,’ ‘the landmarks,’ ‘the Jedi of Star Wars,’ and so on.”

If these are category cells, there is obviously the question of what other objects are included in the category. But it is clear that the cells have meaning, even though the category might include other items.

References:

  1. Quian Quiroga, R. & Kreiman, G. (2010a). Measuring sparseness in the brain: Comment on Bowers (2009). Psychological Review, 117(1), 291–297.
  2. Quian Quiroga, R. & Kreiman, G. (2010b). Postscript: About grandmother cells and Jennifer Aniston neurons. Psychological Review, 117(1), 297–299.

5. McClelland – “In the context of these observations, the Cerf experiment considered by Roy may not be as impressive. A neuron can respond to one of four different things without really having a meaning and interpretation equivalent to any one of these items.”

The Cerf experiment is not impressive? What McClelland is really questioning is the existence of highly selective cells in the brains of humans and animals and the meaning and interpretation associated with those cells. This obviously has a broader implication and raises questions about a whole range of neurophysiological studies and their findings. For example, are the “nest cells” of Lin et al. (2007) really category cells sending signals to the mouse brain that there is a nest nearby? Or should one really believe that Freedman and Miller (2008) found category cells in the monkey visual temporal cortex that identify certain categories of stimuli such as faces or objects? Or should one believe that Gothard et al. (2007) found category cells in the amygdala of monkeys that responded selectively to images of monkey faces, human faces and objects as they viewed them on a computer monitor? And how about that one neuron that Gothard et al. (2007) found that responded in particular to threatening monkey faces? And does this question about the meaning and interpretation of highly selective cells also apply to simple and complex receptive fields in the retinal ganglion cells and the primary visual cortex? Note that a Nobel Prize has already been awarded for the discovery of these highly selective cells.

The evidence for the existence of highly selective cells in the brains of humans and animals is substantial and irrefutable, although one can always theoretically ask “what else does it respond to?” Note that McClelland’s question contradicts his own view that there could exist place cells, which are highly selective cells.

6. McClelland – “While we sometimes (Kumeran & McClelland, 2012 as in McClelland & Rumelhart, 1981) use localist units in our simulation models, it is not the neurons, but their interconnections with other neurons, that gives them meaning and interpretation….Again we come back to the patterns of interconnections as the seat of knowledge, the basis on which one or more neurons in the brain can have meaning and interpretation.”

“One or more neurons in the brain can have meaning and interpretation” – that sounds like localist representation, but obviously that’s not what is meant. In any case, there is no denying that there is knowledge embedded in the connections between the neurons, but that knowledge is integrated by the neurons to create additional knowledge. So the neurons have additional knowledge that does not exist in the connections, and single cell studies are focused on discovering the integrated knowledge that exists only in the neurons themselves. For example, the receptive field cells in the sensory processing systems and the hippocampal place cells show that some cells detect direction of motion, some code for color, some detect orientation of a line and some detect a particular location in an environment. And there are cells that code for certain categories of objects. That kind of knowledge is not easily available in the connections. In general, consolidated knowledge exists within the cells, and that is where the general focus of single cell studies has been.

7. Plaut – “Asim’s main argument is that what makes a neural representation localist is that the activation of a single neuron has meaning and interpretation on a stand-alone basis. This is about how scientists interpret neural activity. It differs from the standard argument on neural representation, which is about how the system actually works, not whether we as scientists can make sense of a single neuron. These are two separate questions.”

Doesn’t “how the system actually works” depend on our making “sense of a single neuron?” The representation theory has always been centered around single neurons, whether they have meaning on a stand-alone basis or not. So how does making “sense of a single neuron” become a separate question now? And how are these two separate questions addressed in the literature?

8. Plaut – “My problem is that his claim is a bit vacuous because he’s never very clear about what a coherent ‘meaning and interpretation’ has to be like…. but never lays out the constraints that this is meaning and interpretation, and this isn’t. Since we haven’t figured it out yet, what constitutes evidence against the claim? There’s no way to prove him wrong.”

In the article, I used the standard definition from cognitive science for localist units, which is a simple one: that localist units have meaning and interpretation. There is no need to invent a new definition for localist representation. The standard definition is very acceptable, accepted by the cognitive science community, and I draw attention to that in the article with verbatim quotes from Plate, Thorpe and Elman. Here they are again.

  • Plate (2002): “Another equivalent property is that in a distributed representation one cannot interpret the meaning of activity on a single neuron in isolation: the meaning of activity on any particular neuron is dependent on the activity in other neurons (Thorpe 1995).”
  • Thorpe (1995, p. 550): “With a local representation, activity in individual units can be interpreted directly … with distributed coding individual units cannot be interpreted without knowing the state of other units in the network.”
  • Elman (1995, p. 210): “These representations are distributed, which typically has the consequence that interpretable information cannot be obtained by examining activity of single hidden units.”

The terms “meaning” and “interpretation” are not bounded in any way other than by the alternative representation scheme, where the “meaning” of a unit depends on other units. That is how they are constrained in the standard definition, and that constraint has been there for a long time.
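The contrast in these definitions is easy to make concrete. Here is a minimal toy sketch in Python (the concept names and activation patterns are invented for illustration, not taken from the article) of why a localist, one-hot unit can be read in isolation while a distributed unit cannot:

```python
import numpy as np

# Toy contrast between localist (one-hot) and distributed codes,
# following the Plate/Thorpe/Elman definitions quoted above.

concepts = ["cat", "dog", "car"]

# Localist: one unit per concept. A single unit's activity can be
# read off in isolation: unit i active  =>  concept i present.
localist = np.eye(len(concepts))

# Distributed: each concept is a pattern over all units. No single
# unit identifies a concept; e.g. unit 0 is active for both "cat"
# and "car", so its activity alone is ambiguous.
distributed = np.array([
    [1, 1, 0],   # cat
    [0, 1, 1],   # dog
    [1, 0, 1],   # car
])

def interpret_single_unit(codes, unit):
    """Which concepts turn this unit on? Stand-alone meaning iff exactly one."""
    return [c for c, row in zip(concepts, codes) if row[unit] == 1]

for scheme, codes in [("localist", localist), ("distributed", distributed)]:
    print(scheme, {u: interpret_single_unit(codes, u) for u in range(3)})
# localist: each unit maps to exactly one concept (stand-alone meaning);
# distributed: every unit maps to two concepts, so its meaning depends
# on the state of the other units.
```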

Neither Plaut nor McClelland has questioned the fact that receptive fields in the sensory processing systems have meaning and interpretation. Hubel and Wiesel won the Nobel Prize in Physiology or Medicine in 1981 for breaking this “secret code” of the brain. Here is part of the Nobel Prize citation:

“Thus, they have been able to show how the various components of the retinal image are read out and interpreted by the cortical cells in respect to contrast, linear patterns and movement of the picture over the retina. The cells are arranged in columns, and the analysis takes place in a strictly ordered sequence from one nerve cell to another and every nerve cell is responsible for one particular detail in the picture pattern.”

Neither Plaut nor McClelland has questioned the fact that place cells have meaning and interpretation. McClelland, in fact, accepts that place cells indicate locations in an environment, which means that he accepts that they have meaning and interpretation.

9. Plaut – “If you look at the hippocampal cells (the Jennifer Aniston neuron), the problem is that it’s been demonstrated that the very same cell can respond to something else that’s pretty different. For example, the same Jennifer Aniston cell responds to Lisa Kudrow, another actress on the TV show Friends with Aniston. Are we to believe that Lisa Kudrow and Jennifer Aniston are the same concept? Is this neuron a Friends TV show cell?”

I want to clarify three things here. First, localist cells are not necessarily grandmother cells. Grandmother cells are a special case of localist cells, as made clear in the article. For example, in the primary visual cortex there are simple and complex cells tuned to visual characteristics such as orientation, color, motion and shape. They are localist cells, but not grandmother cells.

Second, the analysis in the article of the interactive activation (IA) model of McClelland and Rumelhart (1981) shows that a localist unit can respond to more than one concept in the next higher level. For example, a letter unit can respond to many word units. And the simple and complex cells in the primary visual cortex will respond to many different objects.

Third, there are indeed category cells in the brain. Response No. 2 above to McClelland’s comments cites findings in neurophysiology on category cells. So the Jennifer Aniston/Lisa Kudrow cell could very well be a category cell, much like the one that fired to spiders and snakes (but not to other animals) and the one that fired for both the Eiffel Tower and the Tower of Pisa (but not to other landmarks). But category cells have meaning and interpretation too. The Jennifer Aniston/Lisa Kudrow cell could be a Friends TV show cell, as Plaut suggested, but it still has meaning and interpretation. However, note that Koch (2011, pp. 18–19) reports finding another Jennifer Aniston MTL cell that did not respond to Lisa Kudrow:

One hippocampal neuron responded only to photos of actress Jennifer Aniston but not to pictures of other blonde women or actresses; moreover, the cell fired in response to seven very different pictures of Jennifer Aniston.

References:

  1. Koch, C. (2011). Being John Malkovich. Scientific American Mind, March/April, 18–19.

10. Plaut – “Only a few experiments show the degree of selectivity and interpretability that he’s talking about…. In some regions of the medial temporal lobe and hippocampus, there seem to be fairly highly selective responses, but the notion that cells respond to one concept that is interpretable doesn’t hold up to the data.”

There are place cells in the hippocampus that identify locations in an environment; locations are concepts, and McClelland admits place cells represent locations. There is also plenty of evidence for the existence of category cells in the brain (see Response No. 2 above to McClelland’s comments), and categories are, of course, concepts. And simple and complex receptive fields also represent concepts such as direction of motion, line orientation, edges, shapes, color and so on. There is thus an abundance of data in neurophysiology showing that “cells respond to one concept that is interpretable”, and that evidence is growing.

The existence of highly tuned and selective cells that have meaning and interpretation is now beyond doubt, given the volume of evidence from neurophysiology over the last four decades.

In a previous post I explored the feasibility of an industrial base on the planet Mercury — an option which at first glance had seemed implausible but which, on getting down to the detail, could be considered quite reasonable. Here I go in the other direction — outward to the first of the gas giants — and the Galilean moons of Jupiter.

From a scientific point of view it makes a lot of sense to set up a base in this region, as it provides the nearest possible base to home from which to start exploring the dynamics and weather systems of gaseous planets — which are quite common in our Universe — and how such planets impact their moons — as potential locations for off-earth colonies and industrial bases. It bears consideration that only two other moons in our outer solar system are of the requisite size to have a gravitational field similar to or greater than that of our Moon — namely Saturn’s Titan and Neptune’s Triton — so the Galilean moons demand attention.

The first difficulty to consider is the intense radiation around Jupiter, which is far stronger than that in the Earth’s Van Allen radiation belts. Although proper shielding normally protects living organisms and electronic instrumentation, Jupiter’s radiation is whipped up by magnetic fields 20,000 times stronger than Earth’s, so shielding becomes difficult. Such radiation has been considered the greatest threat to any craft closing within 300,000 km of the planet. At 420,000 km from Jupiter, Io is the closest of the Galilean satellites. With over 400 active volcanoes, from which plumes of sulphur and sulphur dioxide regularly rise as high as 400 km above its surface, it is considered the most geologically active object in the solar system. This activity could be viewed as a source of heat/energy.

Unlike most satellites, it is composed of silicate rock with a molten iron or iron sulphide core, and despite extensive mountain ranges, the majority of its surface is characterized by extensive plains coated with sulphur and sulphur dioxide frost. One can perhaps disregard its extremely thin sulphur dioxide atmosphere as an inconvenience, but Io is in too close proximity to Jupiter and its extensive magnetosphere even for occasional mining expeditions from the other moons. In this regard one would have to rule out Io and its resources completely from consideration for such a base. On to the other options…

At 670,000 km from Jupiter, the intriguing ice-world of Europa is a much more interesting proposition. Under the ice surface it has a water ocean surrounding the whole moon, thought to be 100 km thick. One of the first dilemmas in setting up a base on Europa would be how not to contaminate any primitive life that may already have a foothold there. Europa is often considered a strong candidate for extra-terrestrial microbial-type life, and if life were found there, it could be rendered off-limits for colonisation on ethical grounds, due to the possible contamination or destruction of a delicate ecosystem. Discounting this concern, with an unlimited supply of water — and by extraction, unlimited oxygen and hydrogen also — we have the most important ingredient needed to support a colony at our disposal here.

The main drawbacks for Europa — other than the high radiation levels from its proximity to Jupiter — could be the inability to mine other materials, though these could be obtained from other nearby moons, and of course the extremely cold surface temperature, at approximately 100 K.

Further out, at just over 1,000,000 km, we have Ganymede, the most massive of the Galilean moons and hence the one with the strongest gravitational field. Composed of silicate rock and water ice in roughly equal proportions, it is also theorised to have a saltwater ocean far below its surface, based on salts (magnesium sulphate and sodium sulphate) shown in results from the Galileo spacecraft, which also detected signs of carbon dioxide and organic compounds.

Ganymede is also thought to have a thin oxygen atmosphere, including ozone, and perhaps an ionosphere — although again all in trace amounts — and a weak magnetosphere. Whilst the atmosphere could be considered negligible in terms of a colony’s needs, Ganymede is still far better suited as an industrial base than Europa: not only does it have an ample supply of water ice, it also has abundant resources in silicates and iron for mining and construction.

And last — but by no means least — we have Callisto, furthest out at almost 2,000,000 km. Also composed of roughly equal amounts of rock and ice, it differs from the other Galilean satellites in that it does not form part of the orbital resonance that affects the three inner Galilean satellites, and therefore does not experience appreciable tidal heating. Despite this it enjoys a mean surface temperature of 135 K, up to a maximum of 165 K — still very cold, but not as cold as the other Galilean satellites. Like Ganymede, it has an extremely thin atmosphere, in this case composed mainly of carbon dioxide and molecular oxygen, and it may have a subsurface ocean of liquid water — the likelihood of which has raised suggestions in the past that it could harbour life. Callisto has long been considered the most suitable place for a human base for future exploration of the Jupiter system, since it is furthest from the intense radiation of Jupiter (http://www.nasa-academy.org/soffen/travelgrant/bethke.pdf). HOPE — Human Outer Planet Exploration — in the above-linked 2003 NASA presentation explores some of the objectives and requirements for such a pilot mission, for which Callisto was selected — not surprisingly — as the most appropriate destination.

The HOPE surface-operation concepts explored vehicle and robot systems for achieving a successful first phase, along with the division of tasks between crew and robotics, including the exploration of all these satellites. It concluded that a round-trip crewed mission of 2–5 years is feasible — given significant advancement in propulsion technologies.


The 100,000 Stars Google Chrome Galactic Visualization Experiment Thingy

So, Google has these things called Chrome Experiments, and they like, you know, do that. 100,000 Stars, their latest, simulates our immediate galactic zip code and provides detailed information on many of the massive nuclear fireballs nearby.

Zoom in & out of an interactive galaxy, state, city, neighborhood, so to speak.

It’s humbling, beautiful, and awesome. Now, is 100,000 Stars perfectly accurate and practical for anything other than having something pretty to look at and explore and educate and remind us of the enormity of our quaint little galaxy among the likely 170 billion others? Well, no — not really. But if you really feel the need to evaluate it that way, you are an unimaginative jerk and your life is without joy and awe and hope and wonder and you probably have irritable bowel syndrome. Deservedly.

The New Innovation Paradigm Kinda Revisited
Just about exactly one year ago technosnark cudgel Anthrobotic.com was rapping about the changing innovation paradigm in large-scale technological development. There’s chastisement for Neil deGrasse Tyson and others who, paraphrasically (totally a word), have declared that private companies won’t take big risks, won’t do bold stuff, won’t push the boundaries of scientific exploration because of bottom lines and restrictive boards and such. But new business entities like Google, SpaceX, Virgin Galactic, & Planetary Resources are kind of steadily proving this wrong.

Google in particular, a company whose U.S. ad revenue now eclipses all other ad-based businesses combined, does a load of search-unrelated, interesting little and not-so-little research. Their mad scientists have churned out innovative, if sometimes impractical, projects like Wave, Lively, and SketchUp. There’s the mysterious Project X, rumored to be filled with robots and space elevators and probably endless lollipops as well. There’s Project Glass, the self-driving cars, and they have also just launched Ingress, a global augmented reality game.

In contemporary America, this is what cutting-edge, massively well-funded pure science is beginning to look like, and it’s commendable. So, in lieu of a national flag, would we be okay with a SpaceX visitor center on the moon? Come on, really — a flag is just a logo anyway!

Let’s hope Google keeps not being evil.

[VIA PC MAG]
[100,000 STARS ANNOUNCEMENT — CHROME BLOG]

(this post originally published at www.anthrobotic.com)

To achieve interstellar travel, the Kline Directive instructs us to be bold, to explore what others have not, to seek what others will not, to change what others dare not. To extend the boundaries of our knowledge, to advocate new methods, techniques and research, to sponsor change, not status quo, on five fronts: Legal Standing, Safety Awareness, Economic Viability, Theoretical-Empirical Relationships, and Technological Feasibility.

There is one last mistake in physics that needs to be addressed. This is the baking bread model. To quote from the NASA page,

“The expanding raisin bread model at left illustrates why this proportion law is important. If every portion of the bread expands by the same amount in a given interval of time, then the raisins would recede from each other with exactly a Hubble type expansion law. In a given time interval, a nearby raisin would move relatively little, but a distant raisin would move relatively farther — and the same behavior would be seen from any raisin in the loaf. In other words, the Hubble law is just what one would expect for a homogeneous expanding universe, as predicted by the Big Bang theory. Moreover no raisin, or galaxy, occupies a special place in this universe — unless you get too close to the edge of the loaf where the analogy breaks down.”
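The “same behavior would be seen from any raisin” claim is purely kinematic and easy to check numerically. Here is a minimal sketch, with toy positions and an arbitrary Hubble constant of my own choosing:

```python
import numpy as np

# A quick numerical check of the raisin-bread (Hubble law) claim quoted
# above: if every position scales uniformly, v = H * d holds as seen
# from ANY raisin, not just a central one. Positions are arbitrary.

H = 0.1                     # toy Hubble constant, 1/time units
raisins = np.array([[0., 0.], [1., 2.], [5., -3.], [-4., 7.]])

velocities = H * raisins    # uniform expansion about the origin

# Re-express positions and velocities relative to raisin k: the same
# law v_rel = H * d_rel falls out, so no raisin is privileged.
for k in range(len(raisins)):
    d_rel = raisins - raisins[k]
    v_rel = velocities - velocities[k]
    assert np.allclose(v_rel, H * d_rel)
print("Hubble-type expansion looks identical from every raisin.")
```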

Notice the two qualifications in the quoted passage. The obvious one is “unless you get too close to the edge of the loaf where the analogy breaks down”. The second is that this description is correct only from the perspective of velocity. But there is a problem with this.

Look up in the night sky, and you can see the band of stars called the Milky Way. It helps if you are up in the Rocky Mountains above 7,000 ft. (2,133 m), away from the city lights. Dan Duriscoe, at Death Valley, California, produced one of the best pictures of our Milky Way that I have seen.

What do you notice?

I saw a very beautiful band of stars rising above the horizon, and one of my friends pointed to it and said “That is the Milky Way”. Wow! We could actually see our own galaxy from within.

Hint: The Earth is halfway between the center of the Milky Way and the outer edge.

What do you notice?

We are not at the edge of the Milky Way; we are halfway inside it. So “unless you get too close to the edge of the loaf where the analogy breaks down” should not happen. Right?

Wrong. We are only halfway in, and yet we see the Milky Way severely constrained to a narrow band of stars. That is, if the baking bread model were correct, we would have to be far from the center of the Milky Way. This is not the case.

The Universe is on the order of 10³ to 10⁶ times larger. Using our Milky Way as an example, the Universe should look like a large smudge on one side and a small smudge on the other if we are even halfway out. We should see two equally sized smudges if we are at the center of the Universe! And, more importantly, from the sizes of the smudges we could calculate our position with respect to the center of the Universe! But the Hubble pictures show us that this is not the case! We do not see directional smudges, but a random and even distribution of galaxies across the sky in any direction we look.

Therefore the baking bread model is an incorrect model of the Universe, and necessarily any theoretical model that depends on the baking bread structure of the Universe is also incorrect.

We know that we are not at the center of the Universe. The Universe is not geocentric. Neither is it heliocentric. The Universe is such that anywhere we are in the Universe, the distribution of galaxies across the sky must be the same.

Einstein (TV series Cosmic Journey, Episode 11, Is the Universe Infinite?) once described an infinite Universe as being the surface of a finite sphere. If the Universe were a 4-dimensional surface of a 4-dimensional sphere, then all the galaxies would be expanding away from each other, from any perspective or position on this surface. And, more importantly, unlike the baking bread model, one could not have a ‘center’ reference point on this surface. That is, the Universe would be ‘isoacentric’, and both the velocity property and the center property would hold simultaneously.
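The geometry is easier to see one dimension down: galaxies as points on the ordinary 2D surface of a sphere. A small sketch (toy points, my own construction) showing that as the radius grows, every surface separation grows by the same factor, and no surface point is privileged:

```python
import numpy as np

# Lower-dimensional analogue of the 'isoacentric' expanding-sphere
# picture: galaxies as points on the 2D surface of an ordinary sphere.
# As the radius R grows, every great-circle distance grows by the same
# factor, from every point's perspective, and the surface has no center.

rng = np.random.default_rng(0)
pts = rng.normal(size=(6, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # project onto unit sphere

def great_circle_dists(points, R):
    cosang = np.clip(points @ points.T, -1.0, 1.0)
    return R * np.arccos(cosang)                    # arc length = R * angle

d1 = great_circle_dists(pts, R=1.0)
d2 = great_circle_dists(pts, R=2.0)
off_diag = ~np.eye(len(pts), dtype=bool)
assert np.allclose(d2[off_diag] / d1[off_diag], 2.0)
print("Doubling R doubles every surface separation; no surface point is special.")
```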

Previous post in the Kline Directive series.

Next post in the Kline Directive series.

—————————————————————————————————

Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.

Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.

To achieve interstellar travel, the Kline Directive instructs us to be bold, to explore what others have not, to seek what others will not, to change what others dare not. To extend the boundaries of our knowledge, to advocate new methods, techniques and research, to sponsor change, not status quo, on five fronts: Legal Standing, Safety Awareness, Economic Viability, Theoretical-Empirical Relationships, and Technological Feasibility.

In this post I explain two more mistakes in physics. The first is 55 years old, and should have been caught long ago.

Bondi, in his 1957 paper “Negative Mass in General Relativity”, suggested that mass could be negative, with surprising consequences. I quote,

“… the positive body will attract the negative one (since all bodies are attracted by it), while the negative body will repel the positive body (since all bodies are repelled by it). If the motion is confined to the line of centers, then one would expect the pair to move off with uniform acceleration …”

As a theoretician Bondi required that the motion be “confined to the line of centers”, that is, confined to a straight line. However, as experimental physicists we would take a quantity of negative mass and another quantity of positive mass and place them in special containers attached to two spokes. These spokes form a small arc at one end and are fixed to the axis of a generator at the other. Let go, and watch Bondi’s uniform straight-line acceleration be translated into circular motion driving a generator. Lo and behold, we have a perpetual motion machine generating free electricity!
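Bondi’s runaway pair is simple to reproduce in a toy Newtonian integration. The following sketch (the G = 1 units and initial conditions are my own choices, not from Bondi’s paper) shows the positive body fleeing and the negative body chasing, with constant separation and ever-growing velocity:

```python
# A minimal Newtonian sketch (G = 1 units, point masses) of the Bondi
# runaway pair quoted above. Acceleration of body i depends only on the
# OTHER body's mass: a_i = G * m_other * (x_other - x_i) / r**3, so the
# sign of the source mass sets attraction or repulsion.

G = 1.0
m_pos, m_neg = 1.0, -1.0        # equal and opposite masses
x_pos, x_neg = 0.0, 1.0         # negative mass placed to the right
v_pos = v_neg = 0.0
dt = 0.001

for step in range(10_000):
    r = x_neg - x_pos
    a_pos = G * m_neg * r / abs(r) ** 3     # positive body repelled (m_neg < 0)
    a_neg = G * m_pos * (-r) / abs(r) ** 3  # negative body attracted: it chases
    v_pos += a_pos * dt
    v_neg += a_neg * dt
    x_pos += v_pos * dt
    x_neg += v_neg * dt

print(f"separation stays {x_neg - x_pos:.3f}, common velocity {v_pos:.3f}")
# Momentum (m_pos*v_pos + m_neg*v_neg) and kinetic energy both stay zero,
# yet the pair self-accelerates without limit: Bondi's 'uniform acceleration'.
```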

Wow! A perpetual motion machine hiding in plain sight in the respectable physics literature, and nobody caught it. What is really bad about this is that Einstein’s General Relativity allows for this type of physics, and therefore in General Relativity this is real. So was Bondi wrong, or does General Relativity permit perpetual motion physics? If Bondi is wrong, then could Alcubierre be wrong too, as his metric requires negative mass?

Perpetual motion is sacrilege in contemporary physics, and therefore negative mass could not exist in Nature; it remains in the realm of mathematical conjecture. What really surprised me was that General Relativity allows for negative mass, at least in Bondi’s treatment of General Relativity.

This raises the question, what other problems in contemporary physics do we have hiding in plain sight?

There are two types of exotic matter that I know of: the first is negative mass per Bondi (above) and the second is imaginary (square root of −1) mass. The recent flurry of activity over the possibility that some European physicists had observed FTL (faster-than-light) neutrinos should also teach us some lessons.

If a particle is traveling faster than light, its mass becomes imaginary. This means that such particles could not be detected by ordinary, plain and simple mass-based instruments. So what were these physicists thinking? That somehow the Lorentz-Fitzgerald transformations were no longer valid? That mass would not convert into imaginary matter at FTL? It turned out that their measurements were incorrect. This just goes to show how difficult experimental physics can get, and these experimental physicists are not given the recognition due to them for the degree of difficulty of their work.
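For reference, the relation at work is the relativistic mass formula m = m0/√(1 − (v/c)²). A small sketch of how it turns imaginary past c:

```python
import cmath

# Sketch of why FTL implies imaginary mass under the Lorentz-Fitzgerald
# transformations: m = m0 / sqrt(1 - (v/c)**2). For v > c the radicand
# is negative and the relativistic mass comes out purely imaginary.

c = 299_792_458.0       # speed of light, m/s
m0 = 1.0                # rest mass, arbitrary units

for v in [0.5 * c, 0.99 * c, 1.01 * c]:
    m = m0 / cmath.sqrt(1 - (v / c) ** 2)
    print(f"v = {v/c:.2f}c  ->  m = {m:.3f}")
# 0.50c and 0.99c give real masses (~1.155 and ~7.089); 1.01c gives a
# purely imaginary mass (~ -7.05j), invisible to mass-based detectors.
```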

So what type of exotic matter was Dr. Harold White of NASA’s In-Space Propulsion program proposing in his presentation at the 2012 100-Year Starship Symposium? Both Alcubierre and White require exotic matter: specifically, Bondi’s negative mass. But I have shown that negative mass cannot exist, as it results in perpetual motion machines. The inference? We know that this is not technologically feasible.

That is, any hypothesis that requires exotic negative mass cannot be correct. This includes time travel.

Previous post in the Kline Directive series.

Next post in the Kline Directive series.

—————————————————————————————————

Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.

Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.

To achieve interstellar travel, the Kline Directive instructs us to be bold, to explore what others have not, to seek what others will not, to change what others dare not. To extend the boundaries of our knowledge, to advocate new methods, techniques and research, to sponsor change, not status quo, on five fronts: Legal Standing, Safety Awareness, Economic Viability, Theoretical-Empirical Relationships, and Technological Feasibility.

In this post on technological feasibility, I point to some more mistakes in physics, so that we are aware of the type of mistakes we are making. This, I hope, will facilitate the changes required in our understanding of the physics of the Universe and thereby speed up the discovery of the new physics required for interstellar travel.

The scientific community recognizes two alternative models for force. Note that I use the term recognizes, because that is how science progresses. This is necessarily different from how Nature actually operates, Nature’s method of operation. Nature has a method of operating that is consistent with all of Nature’s phenomena, known and unknown.

If we are willing to admit that we don’t know all of Nature’s phenomena, that our knowledge is incomplete, then it is only logical that our recognition of Nature’s method of operation is always incomplete. Therefore scientists propose theories on Nature’s methods, and as science progresses we revise our theories. This leads to the inference that our theories can never be the exact representation of Nature’s methods, because our knowledge is incomplete. We can come close, but we can never be sure ‘we got it’.

With this understanding that our knowledge is incomplete, we can now proceed. The scientific community recognizes two alternative models for force: Einstein’s spacetime continuum, and quantum mechanics’ exchange of virtual particles. String theory borrows from quantum mechanics and therefore requires that force be carried by some form of particle.

Einstein’s spacetime continuum requires only 4 dimensions, though other physicists have added more in attempts to unify the forces. String theories have required up to 26 dimensions to solve their equations.

However, the discovery of the empirically validated g = τc² proves once and for all that gravity and gravitational acceleration are a 4-dimensional problem. Therefore, any hypothesis or theory that requires more than 4 dimensions to explain gravitational force is wrong.

Further, I have been able to do a priori what no other theory has been able to do: unify gravity and electromagnetism, again working with only 4 dimensions. Using spacetime continuum-like, empirically verified Non Inertia (Ni) Fields shows that non-nuclear forces are not carried by the exchange of virtual particles. And if non-nuclear forces are not carried by the exchange of virtual particles, why should Nature suddenly change her method of operation and be different for nuclear forces? Virtual particles are mathematical conjectures that were a convenient mathematical approach in the context of the Standard Model.

Sure, there is always that ‘smart’ theoretical physicist who will convert a continuum-like field into a particle-based field, but a particle-continuum duality does not answer the question: what is Nature’s method? So we come back to a previous question: is the particle-continuum duality a mathematical conjecture or a mathematical construction? Also note that, now that we know of g = τc², it does not count as a discovery if other hypotheses or theories claim to be able to show or reconstruct g = τc² a posteriori; this is also known as back fitting.

Our theoretical physicists have to ask themselves many questions. Are they trying to show how smart they are? Or are they trying to figure out Nature’s methods? How much back fitting can they keep doing before they acknowledge that enough is enough? Could there be a different theoretical effort that could be more fruitful?

The other problem with string theories is that they don’t converge to a single description of the Universe; they diverge. The more they are studied, the more variations and versions are discovered. The reason for this is very clear: string theories are based on incorrect axioms. The primary incorrect axiom is that particles expand when their energy is increased.

The empirical Lorentz-Fitzgerald transformations require that length contract as velocity increases. However, the eminent Roger Penrose showed in the 1950s that macro objects elongate as they fall into a gravitational field. The portion of the macro body closer to the gravitational source falls just a little faster than the portion further away, and therefore the macro body elongates. This effect is termed tidal gravity.
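For reference, the contraction invoked here is L = L0·√(1 − (v/c)²); a quick sketch of how the factor falls off with speed:

```python
import math

# The Lorentz-Fitzgerald contraction the argument leans on:
# L = L0 * sqrt(1 - beta**2), with beta = v/c. Length along the motion
# shrinks as velocity (equivalently, kinetic energy) increases.

L0 = 1.0   # proper length, arbitrary units
for beta in [0.1, 0.5, 0.9, 0.99, 0.999]:
    L = L0 * math.sqrt(1 - beta ** 2)
    print(f"v = {beta:.3f}c  ->  L = {L:.4f} L0")
# The factor falls toward zero as v -> c, which is the sense in which
# a particle's extent contracts as its energy rises.
```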

In reality, as particles contract in length per Lorentz-Fitzgerald, the distance between these particles elongates due to tidal gravity. This macro-scale elongation has been carried into theoretical physics at the elementary level of string particles, as the axiom that particles themselves elongate, which is incorrect. That is, even theoretical physicists make mistakes.

Expect string theories to be dead by 2017.

Previous post in the Kline Directive series.

Next post in the Kline Directive series.

—————————————————————————————————

Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.

Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.

I want to start a project of better visualization of the problems we face. We ask children to visualize in school, but we all could use it. In common economic discussions, trillion-dollar budgets and a million dollars are discussed interchangeably, which shows a lack of visualization. The West is heading for currency collapse, yet austerity measures in Greece just add to unemployment, not debt reduction; why is this so hard to visualize?

One clear way to shore up the US economy is to end foreign bases and end the embargo of Cuba. Boycotts hurt both sides; the Cuban economy is smaller, so it hurts them more. The US economy is shaky, so at some point the embargo may be the straw that makes us fall apart.

Bumblebees spread beehive syndrome, and all flee the hive after a bee sips from the genetic insecticide in the corn syrup in a discarded soda can, syrup made from corn that got cross-pollinated by the wind. How would organically labeling food ingredients help the situation? Only corn from the Southern Hemisphere could be truly labeled not genetically modified. In the past, laboratories blew up on occasion when an experiment went wrong. The Earth, not just the mountain section of France and Switzerland, is the laboratory when it comes to Large Hadron Collider research.

I want to start a project of mass visualization, but before I post any depressing thoughts, I think I must enclose a little excerpt on the good news in the last election. The Republicans lost much of their base, as many Orthodox Jews voted Democrat and Cuban-Americans stopped listening to their hysterical leaders, booting two out of office. Suddenly, around the country, most Council for a Liveable World candidates won. Suddenly far fewer Americans believe that pot or gay marriage will destroy our country. It is, for a moment at least, suddenly easier to try to solve our collective problems.

Now that I got that paragraph out of the way, I want to go on with my project of visualizing the world around us.

The following link is about visualizing large sums of money and finance in general,

http://usdebt.kleptocracy.us/

Even many professional economists and physicists envisioned far more as children than in their everyday efforts to skillfully tinker a little with known formulas. Visualizing is considered something for kids to do, something only high-pressure salesmen ask others to do. A very few individuals, such as Albert Einstein, continue visualizations all their life.

We live in a universe with the very small and the very large. Tiny gamma rays can pass through almost any object, so the space between the planets circling the Sun must be comparable with the space between atoms in a molecule. And there must be star-scale emptiness sub-atomically, if gamma rays can pass through without bumping directly into some resistance. Gravity increases fourfold as the distance is halved, like swinging a heavy ball around one’s head: pull it harder toward oneself and it goes faster. Space junk, before it hits the atmosphere, ends up circling at the rate of about three times a day. If the Earth had no atmosphere and were solid with nothing lighter than lead, space junk would circle several times faster before hitting the shrunken Earth. The Echo satellite, which I liked to look for at night, circled much more slowly; the Moon goes once a month around the Earth, and the Earth once every 365 days around the Sun, with less pull. If the Earth shrank to a black hole, space junk would spin around until reaching the speed of light and could go no faster, so instead it would quickly fall toward the black hole that was the Earth.
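To put rough numbers on the two relations this paragraph gestures at, here is a small sketch using standard textbook values (the chosen radii are my own examples): the inverse-square law and the circular-orbit speed v = √(GM/r):

```python
import math

# Inverse-square law and circular-orbit speeds around the Earth.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # mass of the Earth, kg

def pull(r):           # gravitational acceleration at radius r
    return G * M / r ** 2

r = 7.0e6              # ~600 km above the surface, in meters
print(pull(r / 2) / pull(r))        # -> 4.0: halve the distance, 4x the pull

for name, radius in [("low orbit", 7.0e6), ("Moon", 3.844e8)]:
    v = math.sqrt(G * M / radius)               # circular orbital speed, m/s
    period = 2 * math.pi * radius / v / 3600    # hours per revolution
    print(f"{name}: v = {v/1000:.2f} km/s, one orbit every {period:.1f} h")
# Low orbiters circle in under two hours; the Moon, ~55 Earth radii out,
# takes about a month: farther out means weaker pull and slower motion.
```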

We consider this kind of visualizing something for smart children, but if this is true then Einstein never grew up.

Enclosed are some links for visualizing quantity,

Google the following PPT file and click on Quick View:

http://www.hstwohioregions.org/sitefiles/The%20MegaPenny%20Project.ppt

The ancients had a similar illustration concerning a chessboard. A king, impressed by his astrologer’s predictions, offered to give his servant any reward he wanted, and got a seemingly humble request: a grain of wheat (other versions say a grain of rice), doubled for each square on a chessboard. If 16 grains equal a penny, the total, about one quintillion pennies, would form a cubic mound comparable with a cube as high as Mt Everest (scroll the above link for a figure on quadrillion).
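The doubling arithmetic is easy to spell out. In the sketch below, the penny volume used for the cube is my own rough assumption, not part of the original story:

```python
# The chessboard arithmetic: one grain on the first square, doubled on
# each of the 64 squares, then converted at the stated rate of 16 grains
# per penny.

grains = 2 ** 64 - 1                 # 18,446,744,073,709,551,615 grains
pennies = grains / 16                # ~1.15e18: about one quintillion
print(f"{grains:,} grains -> {pennies:.2e} pennies")

penny_volume_m3 = 0.36e-6            # assumed volume of one penny, ~0.36 cm^3
side = (pennies * penny_volume_m3) ** (1 / 3)
print(f"a solid cube of those pennies is ~{side:,.0f} m on a side")
# ~7,500 m: indeed comparable with the height of Mt Everest (8,848 m).
```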

Visualizing time,

http://www.costellospaceart.com/html/time_and_the_speed_of_light.html

Visualizing scale

https://richerramblings.wordpress.com/2012/02/10/visualising-scale/

When it comes to visualizing the four dimensions, the old standby is Flatland,

http://en.wikipedia.org/wiki/Flatland

where, in a two-dimensional world, certain members appear to have magic by using the third dimension, jumping over barriers while appearing to others to pass right through walls. Actually, it is more like us in this corner of the universe living as amoeba-like, oily, intelligent blobs that slither around the surface of a frozen lake with little understanding of height. In other words, the fourth and fifth or more dimensions are all around us but we don’t notice. If this isn’t true, it would mean that ‘dimension’ is the wrong concept to apply to time.

There are no site links that I could find on the dangers and hopes of genetic engineering. However, insecticide genes were implanted in corn for animal feed back in 1991. I see no sense of terror that it might invade the Southern Hemisphere, nor do I hear of anyone manually importing Southern Hemisphere bumblebees to our national parks. Now there is fear that the man-made insect terminator genes might spread to rice, wheat and any other plant not helped by insects,

http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.0040035

There are fantastic genetic experiments cross-breeding goats with spiders to create thread stronger than steel; wild goats might live longer if their hair were resistant to being torn apart by wolves. Creatures with hard-to-digest inbred thread or glass-like bits near their muscles and bones, from whatever man-made source, would crowd out animals that were easier to digest. Labeling foods for genetically modified products is actually of small comfort.

Some dangerous experiments should be done despite the danger. There were suggestions even 45 years ago about lubricating fault lines so as to have only small earthquakes from then on. If things had gone wrong there would have been a huge quake instead, but had it worked there would be no Japan disaster or any other huge earthquake in today’s world. I don’t know what is equally dangerous and necessary when it comes to food production. Of course none of this is safe.

Physicists like Otto Rössler have extreme trouble visualizing Hawking radiation. But those who skillfully push formulas around find it extremely handy and convincing. Hawking radiation is one of the main reasons Large Hadron Collider research is considered safe, despite fears among some of an out-of-control black hole. Another reason is that there are immense forces in the universe, so the idea that puny little humans can make a major change is very, very unlikely. However, what if dark matter were really small black holes, which make up most of the universe?

Puny humans adding one more hole would be a small change in the universe. If the Moon collapsed into a black hole it would disappear from the sky. What was once an eclipse of the Moon would instead appear very weird, as light from the nearby stars bent to a great degree.

But what if physicists made a mistake about there being a minimum size to which a micro black hole could shrink without becoming unstable? If this miscalculation were true, then there would be ever-shrinking holes. They could be in the middle of many celestial objects, including our Earth. When a light wave or something else passes over such a hole, it might result in a little hole like a bullet hole, possibly making the wave shift ever so slightly toward the red. On the other hand, I could be wrong: why wouldn’t it make the wave narrower, shifted more toward violet? If it were at a temperature close to absolute zero, the object bearing down on the mini micro hole might stick to it instead of making a hole when passing through, such as in the helium cooling coil of the Large Hadron Collider. I hope the cooling coils are horizontal, not vertical, preventing an updraft that might keep the hole growing for a short while as gravity pulls it to the center of the Earth. Tremendous cold right next to intense heat may not occur without human help.

Now back to the fourth and higher dimensions: if time is a dimension, time travel is all around us, as with creatures living on the surface of a frozen lake that have only a dim concept of height.

In the collider experiment some particles are synched together like a flock of geese or a chorus line, all appearing, moving and disappearing together,

http://sciencestage.com/r/particles-flock-strange-synchronization-behavior-large-hadron-collider

http://allenlrolandsweblog.blogspot.com/2011/02/hadron-collider-reveals-universal-urge.html

Maybe we don’t actually see a moment but several moment segments at the same time, so quantum physics is like a little time machine. When something reaches the speed of light it moves over in time; if it moves faster we see evidence of more of a wave. A gamma ray, extremely hot and fast, is moving away from us in time; if cooler, we detect infrared heat waves, of which we get to observe a wide section. If you take a one-tenth-second timed picture of a water wave, you would see a fuzzy line where the wave moved during the filming, but a far faster wave in an iron bar would be closer to a picture of an ink line. All the waves could move endlessly in time, but we note only perhaps a billionth of a second of them; this window widens as time speeds up for an object moving away from us, so we see slightly more than that billionth of a second, and thus more of the segment of a wave that, if we could see all the time segments, would extend endlessly in time: not the line segment we see, but an endlessly wide sweep.

So the object behind a light wave is perhaps there or not there depending on which time segment it is actually in.

At one point astronomy consisted of a series of epicycles; as new information was obtained, a new epicycle was added,

http://en.wikipedia.org/wiki/Mysterium_Cosmographicum

With Hawking radiation, and with dark matter instead of just invisible ordinary matter with the same properties as visible matter, we are going through something similar: constantly tinkering with a theory instead of looking for a new one.

The problem is that the entire Earth is a laboratory, ready to come apart if something goes wrong. Safety first, or else sooner or later one mistake will be the last.