
Einstein Described the Telemach Theorem in 1913

Otto E. Rossler

Faculty of Science, University of Tübingen, Auf der Morgenstelle 8, 72076 Tübingen, F.R.G.

Abstract

Two years before finishing the general theory of relativity, Einstein already arrived at the complete constant-c Telemach theorem. This Einstein-Nordström-Abraham metric, as it can be called, remains valid in the vertical direction in the full-fledged general theory of relativity. A connection to cryodynamics is drawn.

(November 7, 2012)

In a 1913 paper titled “On the present state of the problem of gravitation” [1], Einstein, on the fourth page, described what can be called the Einstein-Nordström-Abraham formalism. The four (and by implication five) findings remain valid in the full-fledged theory arrived at two years later, specifically in the implied Schwarzschild metric.

The evidence:

1) c is globally constant.

Quote: “… the velocity of light propagation is equal to the constant c.” (Fourth line underneath Eq.1’)

2) T is inversely proportional to the gravitational potential. (Unit intervals go up with increasing gravity)

Quote: “However, in our case it is possible that the natural [local] interval d-tau-zero differs from the coordinate interval d-tau by a factor [omega] that is a function of phi [the gravitational potential]. We therefore set d-tau-zero = omega d-tau.” (= Eq.3)

3) L is inversely proportional to the gravitational potential. (Unit lengths go up with increasing gravity)

Quote: “The lengths l and the volumes V, measured in coordinates, also play a role. One can derive the following relation between the coordinate volume V and the natural [local] volume V-zero: Eq.(4)” [In this Eq.(4), the ratio V over V-zero is essentially proportional to 1/omega-cubed – so that L over L-zero is essentially proportional to 1/omega]

4) M is proportional to the gravitational potential. (Unit mass goes down with increasing gravity)

Quote: “… according to Nordström’s theory, the inertia of a mass point is determined by the product m times phi [the gravitational potential]; the smaller phi is, i.e., the larger the masses we gather in the neighborhood of the mass point under consideration, the smaller the inertial resistance with which the mass point opposes a change of its velocity becomes.” (Three lines after Eq.2a)

5) Ch is proportional to the gravitational potential. (Unit charges go down with increasing gravity)

Remark: This corollary to point 4 referring to charge is NOT explicitly mentioned by Einstein but follows trivially from the universal rest mass-to-charge ratio valid for each particle class.
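The five points can be collected compactly (a restatement of the quotations above, not a new derivation; ω is the potential-dependent factor of Eq. 3 and φ the gravitational potential):

```latex
c = \mathrm{const.}                                                   % point 1
d\tau_0 = \omega(\varphi)\, d\tau                                     % point 2 (Eq. 3)
V/V_0 \propto \omega^{-3} \;\Rightarrow\; L/L_0 \propto \omega^{-1}   % point 3 (Eq. 4)
m_{\mathrm{inert}} \propto m\,\varphi                                 % point 4 (Nordström)
q/m = \mathrm{const.} \;\Rightarrow\; Ch \propto \varphi              % point 5 (corollary)
```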

Comment

The same 5 points were described almost a century later in the “Telemach theorem” (T, L, M, Ch) [2]. There, Einstein’s equivalence principle of 1907 (which lies behind point 2) was shown to entail all 5 facts. Five years earlier, the same results had been found to be implicit in the vertical direction of the Schwarzschild metric of general relativity [3], a fact which was soon generalized to 3 dimensions by a gifted anonymous author named “Ich” (see [3]). Independently, Richard J. Cook [4] arrived at points 1–4 on the basis of general relativity proper and subsequently expressed his full support for point 5 (see [2]).

Historical Conclusion

Historians of science have reworked the period from 1907 (the discovery of the equivalence principle) to 1913, in which the above results were discovered, and beyond [5,6]. Nevertheless, the Telemach theorem (if the above results deserve this onomatopoetic name) remained unappreciated for almost a century. The reason deserves to be elucidated by historians.

Outlook

A totally unrelated recent theory, cryodynamics, revealed that the famous big-bang theory of cosmology, based on general relativity without regard to the implied Telemach theorem (which via L excludes bounded solutions), needs to be replaced by a stationary cosmology unbounded in space and time in a fractal manner [7]. This fact may help eliminate the strong professional pressure that until recently favored sticking to mathematically allowed but physically unrealistic nonlinear transformations in general relativity. In this way, the recent passive revolt staged against constant-c general relativity by part of the establishment in the field, in conjunction with the nuclear-physics establishment, can perhaps be overcome. Everyone hopes that no ill effects on the survival of planet Earth will follow (the last 8 weeks of increasing the risk even further could momentarily still be avoided).

The reason why the scientific outlook for Telemach is maximally bright lies in a fortunate coincidence: cryodynamics is maximally important economically [8]. The same industrial-military complex which has so far boycotted Telemach and its precursors will enthusiastically embrace cryodynamics, the sister discipline of thermodynamics, because of the unprecedented revenues it promises by making hot fusion on Earth possible for the first time [8]. So if money stood in the way of embracing Telemach, the situation has by now totally changed.

References

[1] Einstein, A., On the present state of the problem of gravitation (in German). Physikalische Zeitschrift 14, 1249 – 1262 (1913). See: The Collected Papers of Albert Einstein, Vol. 4, English Translation, pp. 198 – 222, pages 102 – 103. Princeton University Press 1996.

[2] Rossler, O.E., Einstein’s equivalence principle has three further implications besides affecting time: T-L-M-Ch theorem (“Telemach”). African Journal of Mathematics and Computer Science Research 5, 44 – 47 (2012), http://www.academicjournals.org/ajmcsr/PDF/pdf2012/Feb/9%20Feb/Rossler.pdf

[3] Rossler, O.E., Abraham-like return to constant c in general relativity: “R-theorem” demonstrated in Schwarzschild metric. Fractal Spacetime and Noncommutative Geometry in Quantum and High Energy Physics 2, 2012, http://www.nonlinearscience.com/paper.php?pid=0000000148

[4] Cook, R.J., Gravitational space dilation (2009), http://arxiv.org/pdf/0902.2811v1.pdf

[5] Castagnetti, G., H. Goenner, J. Renn, T. Sauer, and B. Scheideler, Foundation in disarray: essays on Einstein’s science and politics in the Berlin years, 1997, http://www.mpiwg-berlin.mpg.de/Preprints/P63.PDF

[6] Weinstein, G., Einstein’s 1912 – 1913 struggles with gravitation theory: importance of static gravitational fields theory, 2012, http://arxiv.org/ftp/arxiv/papers/1202/1202.2791.pdf

[7] Rossler, O.E., The new science of cryodynamics and its connection to cosmology. Complex Systems 20, 105 – 113 (2011). http://www.complex-systems.com/pdf/20-2-3.pdf

[8] Rossler, O.E., A. Sanayei and I. Zelinka, Is Hot fusion made feasible by the discovery of cryodynamics? In: Nostradamus: Modern Methods of Prediction, Modeling and Analysis of Nonlinear Systems, Advances in Intelligent Systems and Computing Volume 192, 2013, pp 1 – 4 (has appeared). http://link.springer.com/chapter/10.1007/978-3-642-3.….ccess=true

— — -.-

1) Unchargedness (Reissner disproved)
2) Arise more readily (string theory confirmed)
3) Are indestructible (Hawking disproved)
4) Are invisible to CERN’s detectors (CERN publication disconfirmed)
5) Slowest specimens will stay inside earth (conceded by CERN)
6) Enhanced cross section due to slowness (like cold neutrons)
7) Exponential growth inside earth (quasar-scaling principle)

The final weeks of 2012 will again double the danger that the earth is going to be shrunk to 2 cm after a delay of a few years. No one on the planet demands investigation. The African Journal of Mathematics did the most for the planet. I ask President Obama to demand a safety statement from CERN immediately. The planet won’t forget it. Nor will America the beautiful. P.S. I thank Tom Kerwick who deleted all my latest postings on Lifeboat for his demanding a “substantiated” posting. I now look forward to his response.

Appendage: “It may interest the world that I just found T, L, M in Einstein’s 1913 paper on Nordström (‘On the present state of the problem of gravitation’), so that it can no longer be ignored. The result is inherited by the full-fledged theory of general relativity of 1915 but was no longer remembered to be implicit there. I give this information to the planet to show that my black-hole results (easy production, no Hawking evaporation, exponential voraciousness) can no longer be ignored by CERN. They call for an immediate stop of the LHC followed by a safety conference. I renew my appeal to the politicians of the world, and especially President Obama, to support my plea. Everyone has the human right to be informed about a new scientific result that bears on her or his survival. I recommend http://www.pitt.edu/~jdnorton/papers/einstein-nordstroem-HGR3.pdf for background information.” — 2nd Nov.

The historical context in which Brain Computer Interfaces (BCIs) have emerged was addressed in a previous article, “To Interface the Future: Interacting More Intimately with Information” (Kraemer, 2011). This review addresses the methods that have formed current BCI knowledge, the directions in which the field is heading, and its emerging risks and benefits. It also addresses why neural stem cells can help establish better BCI integration, the overall mapping of where various cognitive activities occur, and how a future BCI could potentially provide direct input to the brain instead of only receiving and processing information from it.

EEG Origins of Thought Pattern Recognition
Early BCI work to study cognition and memory involved implanting electrodes into rats’ hippocampi and recording their EEG patterns under very specific circumstances while the animals explored a track, both awake and asleep (Foster & Wilson, 2006; Tran, 2012). Some of these patterns are later replayed by the rat in reverse chronological order, indicating retrieval of the memory both when awake and asleep (Foster & Wilson, 2006). Dr. John Chapin showed that thoughts of movement can be written to a rat, which can then be remotely controlled (Birhard, 1999; Chapin, 2008).

A few human paraplegics have volunteered for somewhat similar electrode implants into their brains, using the enhanced BrainGate2 hardware and software as a primary data input device (UPI, 2012; Hochberg et al., 2012). Clinical trials of this implanted BCI, the BrainGate2 Neural Interface System, are underway (BrainGate, 2012; Tran, 2012). Currently, the integration of the electrodes into the brain or peripheral nervous system can be somewhat slow and incomplete (Grill et al., 2001). Nevertheless, research to optimize the electro-stimulation patterns and voltage levels in the electrodes, to combine cell cultures and neurotrophic factors into the electrode, and to enhance “endogenous pattern generators” through rehabilitative exercises is likely to bring the integration closer to full functional restoration in prostheses (Grill et al., 2001) and to improved functionality in other BCIs as well.

When integrating neuro-chips with the peripheral nervous system for artificial limbs, or even directly with the cerebral sensorimotor cortex as has been done for some military veterans, neural stem cells would likely help heal the damage at the site of the lost limb and speed up the rate at which the neuro-chip is integrated into the innervating tissue (Grill et al., 2001; Park, Teng, & Snyder, 2002). Neural stem cells are better known for their natural regenerative ability, which would also help re-establish the effectiveness of the damaged original neural connections (Grill et al., 2001).

Neurochemistry and Neurotransmitters to be Mapped via Genomics
Cognition is electrochemical, and thus the electrodes tell only part of the story. The chemicals involved are more clearly coded for by specific genes. Jaak Panksepp is breeding one line of rats that is particularly prone to joy and social interaction and another that tends toward sadness and more solitary behavior (Tran, 2012). He asserts that emotions emerged from genetic causes (Panksepp, 1992; Tran, 2012) and plans to sequence the genomes of members of both lines to determine the genomic causes of, or correlations with, these core dispositions (Tran, 2012). Such causes are quite likely to apply to humans, since similar or homologous genes are likely to be present in the human genome. Candidate chemicals like dopamine and serotonin may be confirmed genetically, new neurochemicals may be identified, or both. It is a promising long-term study, and large databases of human genomes accompanied by the medical histories of each individual could yield similar discoveries. A private study of the medical and genomic records of the population of Iceland is underway and in the last 10 years has produced genetic diagnostic tests for increased risk of type 2 diabetes, breast cancer, prostate cancer, glaucoma, high cholesterol/hypertension and atrial fibrillation, as well as a personal genomic testing service for these genetic factors (deCODE, 2012; Weber, 2002). By breeding two lines of rats based on whether or not they display joyful behavior, the lines should likewise develop uniquely different genetic markers in their respective populations (Tran, 2012).

fMRI and fNIRS Studies to Map the Flow of Thoughts into a Connectome
Though EEG-based BCIs have been effective in translating movement intentionality of the cerebral motor cortex into control of neuroprostheses, computer cursors and other directional or navigational devices, they have not advanced the understanding of the underlying processes of other types or modes of cognition or experience (NPG, 2010; Wolpaw, 2010). The use of functional Magnetic Resonance Imaging (fMRI), functional Near-Infrared Spectroscopy (fNIRS) and sometimes Positron Emission Tomography (PET) scans for literally deeper insights into brain metabolism, and thus neural activity, has increased in order to determine the relationships or connections between regions of the brain, now known collectively as the connectome (Wolpaw, 2010).

Dr. Read Montague explained broadly how his team linked several fMRI centers around the world across the Internet so that various economic games could be played while the region-specific brain activity of all the participating players was recorded in real time at each step of the game (Montague, 2012). The publication on this fMRI experiment shows the interaction between baseline suspicion, seated in the amygdala, and the ongoing evaluation of the specific situation that may increase or decrease that suspicion, which occurred in the parahippocampal gyrus (Bhatt et al., 2012). Since fMRI equipment is very large, immobile and expensive, it cannot be used in many situations (Solovey et al., 2012). Essentially as a substitute for fMRI, fNIRS was developed; it can be worn on the head and is far more convenient than the traditional full-body fMRI scanner, which requires a sedentary or prone position to work (Solovey et al., 2012).

In a study of people multitasking on the computer with a head-mounted fNIRS device called Brainput, the device automatically modified the behavior of two remotely controlled robots whenever it detected an information overload in the brain of the human navigating both robots simultaneously over several differently designed terrains (Solovey et al., 2012).

Writing Electromagnetic Information to the Brain?
These two examples from the Human Connectome Project, led by the National Institutes of Health (NIH) in the US and also underway in other countries, show how early the mapping of brain-region interaction is for higher cognitive functions beyond sensorimotor interactions. Nevertheless, one Canadian neuroscientist has, since 1987, taken volunteers for an early example of writing electromagnetic input into the human brain to induce paranormal kinds of subjective experience (Cotton, 1996; Nickell, 2005; Persinger, 2012). Dr. Michael Persinger applies small electrical signals across the temporal lobes in an environment with partial audio-visual isolation to reduce neural distraction (Persinger, 2003). These microtesla magnetic fields, especially when applied to the right hemisphere of the temporal lobes, often induced a sense of an “other” presence, generally described as supernatural in origin by the volunteers (Persinger, 2003). This early example shows how input can be received directly by the brain as well as recorded from it.

Higher Resolution Recording of Neural Data
Electrodes from EEG and electromagnets from fMRI and fNIRS still record or send data at the macro level of entire regions of the brain. Work on intracellular recording, such as the nanotube transistor, allows for better understanding at the level of individual neurons (Gao et al., 2012). Of course, when introducing micro-scale recording or transmitting equipment into the human brain, safety is a major issue. Some progress has been made: an ingestible microchip called the Raisin can transmit information gathered during its voyage through the digestive system (Kessel, 2009). Dr. Robert Freitas has designed many nanoscale devices, such as Respirocytes, Clottocytes and Microbivores, to replace or augment red blood cells, platelets and phagocytes respectively; these can in principle be fabricated and do appear to meet the miniaturization and propulsion requirements necessary to enter the bloodstream and reach the targeted system they are programmed to reach (Freitas, 1998; Freitas, 2000; Freitas, 2005; Freitas, 2006).

The primary obstacle is the tremendous gap between assembling at the microscopic level and assembling at the molecular level. Dr. Richard Feynman described the crux of the struggle to bridge this divide down to the level of atoms in his now-famous talk of December 29, 1959, “There’s Plenty of Room at the Bottom” (Feynman, 1959). To encourage progress toward the ultimate goal of molecular manufacturing by enabling theoretical and experimental work, the Foresight Institute has awarded annual Feynman Prizes every year since 1997 for contributions to this field, called nanotechnology (Foresight, 2012).

The Current State of the Art and Science of Brain Computer Interfaces
Many neuroscientists think that cellular or even atomic-level resolution is probably necessary to understand, and certainly to interface with, the brain at the level of conceptual thought, memory storage and retrieval (Ptolemy, 2009; Koene, 2010), though at this early stage of the Human Connectome Project this evaluation is quite preliminary. The convergence of noninvasive brain-scanning technology with implantable devices among volunteer patients, supplemented with neural stem cells and neurotrophic factors to facilitate the melding of biological and artificial intelligence, will allow for many medical benefits, for paraplegics at first and later for others such as intelligence analysts, soldiers and civilians.

Some scientists and experts in Artificial Intelligence (AI), such as Ben Goertzel, Ray Kurzweil, Kevin Warwick, Stephen Hawking, Nick Bostrom, Peter Diamandis, Dean Kamen and Hugo de Garis, express the concern that AI software is on track to exceed human biological intelligence before the middle of the century (Bostrom, 2009; de Garis, 2009; Ptolemy, 2009). The need for fully functioning BCIs that integrate higher-order conceptual thinking, memory recall and imagination into cybernetic environments gains ever more urgency if we consider the existential risk to the long-term survival of the human species or its eventual natural descendants. Such an intimate and fully integrated BCI would act as a shield against the possible emergence of an AI independent of us as a life form, and thus a possible rival and intellectually superior threat to human heritage and dominance on this planet and its immediate solar-system vicinity.

References

Bhatt MA, Lohrenz TM, Camerer CF, Montague PR. (2012). Distinct contributions of the amygdala and parahippocampal gyrus to suspicion in a repeated bargaining game. Proc. Nat’l Acad. Sci. USA, 109(22):8728–8733. Retrieved October 15, 2012, from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3365181/pdf/pnas.201200738.pdf.

Birhard, K. (1999). The science of haptics gets in touch with prosthetics. The Lancet, 354(9172), 52–52. Retrieved from http://search.proquest.com/docview/199023500

Bostrom, N. (2009). When Will Computers Be Smarter Than Us? Forbes Magazine. Retrieved October 19, 2012, from http://www.forbes.com/2009/06/18/superintelligence-humanity-oxford-opinions-contributors-artificial-intelligence-09-bostrom.html.

BrainGate. (2012). BrainGate — Clinical Trials. Retrieved October 15, 2012, from http://www.braingate2.org/clinicalTrials.asp.

Chapin, J. (2008). Robo Rat — The Brain/Machine Interface [Video]. Retrieved October 19, 2012, from http://www.youtube.com/watch?v=-EvOlJp5KIY.

Cotton, I. (1996). Dr. Persinger’s god machine. Free Inquiry, 17, 47–51. Retrieved from http://search.proquest.com/docview/230100330.

de Garis, H. (2009, June 22). The Coming Artilect War. Forbes Magazine. Retrieved October 19, 2012, from http://www.forbes.com/2009/06/18/cosmist–terran-cyborgist-opinions-contributors-artificial-intelligence-09-hugo-de-garis.html.

deCODE genetics. (2012). deCODE genetics – Products. Retrieved October 26, 2012, from http://www.decode.com/products.

Feynman, R. (1959, December 29). There’s Plenty of Room at the Bottom, An Invitation to Enter a New Field of Physics. Caltech Engineering and Science, 23(5), 22–36. Retrieved October 17, 2012, from http://calteches.library.caltech.edu/47/2/1960Bottom.pdf.

Foresight Institute. (2012). FI sponsored prizes & awards. Retrieved October 17, 2012, from http://www.foresight.org/FI/fi_spons.html.

Foster, D. J., & Wilson, M. A. (2006). Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature, 440(7084), 680–3. doi: 10.1038/nature04587.

Freitas, R. (1998). Exploratory design in medical nanotechnology: A mechanical artificial red cell. Artificial Cells, Blood Substitutes, and Immobilization Biotechnology, 26, 411–430. Retrieved October 15, 2012, from http://www.foresight.org/Nanomedicine/Respirocytes.html.

Freitas, R. (2000, June 30). Clottocytes: Artificial mechanical platelets. Foresight Update, 41, 9–11. Retrieved October 15, 2012, from http://www.imm.org/publications/reports/rep018.

Freitas, R. (2005, April). Microbivores: Artificial mechanical phagocytes using digest and discharge protocol. Journal of Evolution and Technology, 14, 55–106. Retrieved October 15, 2012, from http://www.jetpress.org/volume14/freitas.pdf.

Freitas, R. (2006, September). Pharmacytes: An ideal vehicle for targeted drug delivery. Journal of Nanoscience and Nanotechnology, 6, 2769–2775. Retrieved October 15, 2012, from http://www.nanomedicine.com/Papers/JNNPharm06.pdf.

Gao, R., Strehle, S., Tian, B., Cohen-Karni, T., Xie, P., Duan, X., Qing, Q., & Lieber, C.M. (2012). Outside looking in: Nanotube transistor intracellular sensors. Nano Letters, 12, 3329–3333. Retrieved September 7, 2012, from http://cmliris.harvard.edu/assets/NanoLet12-3329_RGao.pdf.

Grill, W., McDonald, J., Peckham, P., Heetderks, W., Kocsis, J., & Weinrich, M. (2001). At the interface: convergence of neural regeneration and neural prostheses for restoration of function. Journal Of Rehabilitation Research & Development, 38(6), 633–639.

Hochberg, L. R., Bacher, D., Jarosiewicz, B., Masse, N. Y., Simeral, J. D., Vogel, J., Donoghue, J. P. (2012). Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature, 485(7398), 372–5. Retrieved from http://search.proquest.com/docview/1017604144.

Kessel, A. (2009, June 8). Proteus Ingestible Microchip Hits Clinical Trials. Retrieved October 15, 2012, from http://singularityhub.com/2009/06/08/proteus–ingestible-microchip-hits-clinical-trials.

Koene, R.A. (2010). Whole Brain Emulation: Issues of scope and resolution, and the need for new methods of in-vivo recording. Presented at the Third Conference on Artificial General Intelligence (AGI2010). March, 2010. Lugano, Switzerland. Retrieved August 29, 2010, from http://rak.minduploading.org/publications/publications/koene.AGI2010-lecture.pdf?attredirects=0&d=1.

Kraemer, W. (2011, December). To Interface the Future: Interacting More Intimately with Information. Journal of Geoethical Nanotechnology. 6(2). Retrieved December 27, 2011, from http://www.terasemjournals.com/GNJournal/GN0602/kraemer.html.

Montague, R. (2012, June). What we’re learning from 5,000 brains. Retrieved October 15, 2012, from http://video.ted.com/talk/podcast/2012G/None/ReadMontague_2012G-480p.mp4.

Nature Publishing Group (NPG). (2010, December). A critical look at connectomics. Nature Neuroscience. p. 1441. doi:10.1038/nn1210-1441.

Nickell, J. (2005, September). Mystical experiences: Magnetic fields or suggestibility? The Skeptical Inquirer, 29, 14–15. Retrieved from http://search.proquest.com/docview/219355830

Panksepp, J. (1992). A critical role for “affective neuroscience” in resolving what is basic about basic emotions. Psychological Review, 99(3), 554–560. Retrieved October 14, 2012, from http://www.communicationcache.com/uploads/1/0/8/8/10887248/a_critical_role_for_affective_neuroscience_in_resolving_what_is_basic_about_basic_emotions.pdf.

Park, K. I., Teng, Y. D., & Snyder, E. Y. (2002). The injured brain interacts reciprocally with neural stem cells supported by scaffolds to reconstitute lost tissue. Nature Biotechnology, 20(11), 1111–7. doi: 10.1038/nbt751.

Persinger, M. (2003). The sensed presence within experimental settings: Implications for the male and female concept of self. Journal of Psychology, 137(1), 5–16. Retrieved October 14, 2012, from http://search.proquest.com/docview/213833884.

Persinger, M. (2012). Dr. Michael A. Persinger. Retrieved October 27, 2012, from http://142.51.14.12/Laurentian/Home/Departments/Behavioural+Neuroscience/People/Persinger.htm?Laurentian_Lang=en-CA

Ptolemy, R. (Producer & Director). (2009). Transcendent Man [Film]. Los Angeles: Ptolemaic Productions, Therapy Studios.

Solovey, E., Schermerhorn, P., Scheutz, M., Sassaroli, A., Fantini, S. & Jacob, R. (2012). Brainput: Enhancing Interactive Systems with Streaming fNIRS Brain Input. Retrieved August 5, 2012, from http://web.mit.edu/erinsol/www/papers/Solovey.CHI.2012.Final.pdf.

Tran, F. (Director). (2012). Dream Life of Rats [Video]. Retrieved September 21, 2012, from http://www.hulu.com/watch/388493.

UPI. (2012, May 31). People with paralysis control robotic arms to reach and grasp using brain computer interface. UPI Space Daily. Retrieved from http://search.proquest.com/docview/1018542919

Weber, J. L. (2002). The iceland map. Nature Genetics, 31(3), 225–6. doi: http://dx.doi.org/10.1038/ng920

Wolpaw, J. (2010, November). Brain-computer interface research comes of age: Traditional assumptions meet emerging realities. Journal of Motor Behavior, 42(6), 351–353. Retrieved September 10, 2012, from http://www.tandfonline.com/doi/pdf/10.1080/00222895.2010.526471.


…here’s Tom with the Weather.
That right there is comedian/philosopher Bill Hicks, sadly no longer with us. One imagines he would be pleased and completely unsurprised to learn that serious scientific minds are considering and actually finding support for the theory that our reality could be a kind of simulation. That means, for example, a string of daisy-chained IBM Super-Deep-Blue Gene Quantum Watson computers from 2042 could be running a History of the Universe program, and depending on your solipsistic preferences, either you are or we are the character(s).

It’s been in the news a lot of late, but — no way, right?

Because dude, I’m totally real
Despite being utterly unable to even begin thinking about how to consider what real even means, the everyday average rational person would probably assign this to the sovereign realm of unemployable philosophy majors or under the Whatever, Who Cares? or Oh, That’s Interesting I Gotta Go Now! categories. Okay fine, but on the other side of the intellectual coin, vis-à-vis recent technological advancement, of late it’s actually being seriously considered by serious people using big words they’ve learned at endless college whilst collecting letters after their names and doin’ research and writin’ and gettin’ association memberships and such.

So… why now?

Well, basically, it’s getting hard to ignore.
It’s not a new topic, it’s been hammered by philosophy and religion since like, thought happened. But now it’s getting some actual real science to stir things up. And it’s complicated, occasionally obtuse stuff — theories are spread out across various disciplines, and no one’s really keeping a decent flowchart.

So, what follows is an effort to encapsulate these ideas, and that’s daunting — it’s incredibly difficult to focus on writing when you’re wondering if you really have fingers or eyes. Along with links to some articles with links to some papers, what follows is Anthrobotic’s CliffsNotes on the intersection of physics, computer science, probability, and evidence for/against reality being real (and how that all brings us back to well, God).
You know, light fare.

First — Maybe we know how the universe works: Fantastically simplified, as our understanding deepens, it appears more and more the case that, in a manner of speaking, the universe sort of “computes” itself based on the principles of quantum mechanics. Right now, humanity’s fastest and sexiest supercomputers can simulate only extremely tiny fractions of the natural universe as we understand it (contrasted to the macro-scale inferential Bolshoi Simulation). But of course we all know the brute power of our computational technology is increasing dramatically like every few seconds, and even awesomer, we are learning how to build quantum computers, machines that calculate based on the underlying principles of existence in our universe — this could thrust the game into superdrive. So, given ever-accelerating computing power, and given that we can already simulate tiny fractions of the universe, you logically have to consider the possibility: If the universe works in a way we can exactly simulate, and we give it a shot, then relatively speaking what we make ceases to be a simulation, i.e., we’ve effectively created a new reality, a new universe (ummm… God?). So, the question is how do we know that we haven’t already done that? Or, otherwise stated: what if our eventual ability to create perfect reality simulations with computers is itself a simulation being created by a computer? Well, we can’t answer this — we can’t know. Unless…
[New Scientist’s Special Reality Issue]
[D-Wave’s Quantum Computer]
[Possible Large-scale Quantum Computing]

Second — Maybe we see it working: The universe seems to be metaphorically “pixelated.” This means that even though it’s a 50 billion trillion gajillion megapixel JPEG, if we juice the zooming-in and drill down farther and farther and farther, we’ll eventually see a bunch of discrete chunks of matter, or quantums, as the kids call them — these are the so-called pixels of the universe. Additionally, a team of lab coats at the University of Bonn think they might have a workable theory describing the underlying lattice, or existential re-bar in the foundation of observable reality (upon which the “pixels” would be arranged). All this implies, in a way, that the universe is both designed and finite (uh-oh, getting closer to the God issue). Even at ferociously complex levels, something finite can be measured and calculated and can, with sufficiently hardcore computers, be simulated very, very well. This guy Rich Terrile, a pretty serious NASA scientist, cites the pixelation thingy and poses a video game analogy: think of any first-person shooter — you cannot immerse your perspective into the entirety of the game, you can only interact with what is in your bubble of perception, and everywhere you go there is an underlying structure to the environment. Kinda sounds like, you know, life — right? So, what if the human brain is really just the greatest virtual reality engine ever conceived, and your character, your life, is merely a program wandering around a massively open game map, playing… well, you?
[Lattice Theory from the U of Bonn]
[NASA guy Rich Terrile at Vice]
[Kurzweil AI’s Technical Take on Terrile]

Third — Turns out there’s a reasonable likelihood: While the above discussions on the physical properties of matter and our ability to one day copy & paste the universe are intriguing, it also turns out there’s a much simpler and more straightforward issue to consider: there’s this annoyingly simplistic yet valid thought exercise posited by Swedish philosopher/economist/futurist Nick Bostrom, a dude way smarter than most humans. Basically he says we’ve got three options: 1. Civilizations destroy themselves before reaching a level of technological prowess necessary to simulate the universe; 2. Advanced civilizations couldn’t give two shits about simulating our primitive minds; or 3. Reality is a simulation. Sure, a decent probability, but sounds way oversimplified, right?
Well, go read it. Doing so might ruin your day, JSYK.
[Summary of Bostrom’s Simulation Hypothesis]
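Bostrom’s third option isn’t just rhetoric; it rests on simple bookkeeping about observers. Here’s a back-of-the-envelope sketch of that arithmetic (the function name and the example numbers are illustrative, not taken from Bostrom’s paper):

```python
def simulated_fraction(frac_civs_simulating: float, sims_per_civ: float) -> float:
    """Rough fraction of all observer-histories that are simulated,
    given the fraction of civilizations that survive to run ancestor
    simulations, and the average number of simulations each one runs.
    Each real civilization contributes 1 'real' history plus its sims."""
    sims = frac_civs_simulating * sims_per_civ
    return sims / (sims + 1.0)

# If even 1% of civilizations each run 1,000 ancestor simulations,
# simulated histories outnumber real ones roughly 10 to 1.
print(round(simulated_fraction(0.01, 1000), 3))  # 0.909
```

The point of the exercise: unless almost no civilization ever runs simulations, the simulated histories swamp the real ones, which is why Bostrom’s trilemma bites.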

Lastly — Data against is lacking: Any idea how much evidence or objective justification we have for the standard, accepted-without-question notion that reality is like, you know… real, or whatever? None. Zero. Of course the absence of evidence proves nothing, but given that we do have decent theories on how/why simulation theory is feasible, it follows that blithely accepting that reality is not a simulation is an intrinsically more radical position. Why would a thinking being think that? Just because they know it’s true? Believing 100% without question that you are a verifiably physical, corporeal, technology-wielding carbon-based organic primate is a massive leap of completely unjustified faith.
Oh, Jesus. So to speak.

If we really consider simulation theory, we must of course ask: who built the first one? And was it even an original? Is it really just turtles all the way down, Professor Hawking?

Okay, okay — that means it’s God time now
Now let’s see, what’s that other thing in human life that, based on a wild leap of faith, gets an equally monumental evidentiary pass? Well, proving or disproving the existence of god is effectively the same quandary posed by simulation theory, but with one caveat: we actually do have some decent scientific observations and theories and probabilities supporting simulation theory. That whole God phenomenon is pretty much hearsay, anecdotal at best. However, very interestingly, rather than negating it, simulation theory actually represents a kind of back-door validation of creationism. Here’s the simple logic:

If humans can simulate a universe, humans are its creator.
Accept the fact that linear time is a construct.
The process repeats infinitely.
We’ll build the next one.
The loop is closed.

God is us.

Heretical speculation on iteration
Ever wonder why older polytheistic religions involved the gods just kinda setting guidelines for behavior, and they didn’t necessarily demand the love and complete & total devotion of humans? Maybe those universes were 1st-gen or beta products. You know, just as it used to take a team of geeks to run the building-sized ENIAC, the first universe simulations required a whole host of creators who could set some general rules but just couldn’t manage every single little detail.

Now, the newer religions tend to be monotheistic, and god wants you to love him and only him and no one else and dedicate your life to him. But just make sure to follow his rules, and take comfort that you’re right and everyone else is completely hosed and going to hell. The modern versions of god, both omnipotent and omniscient, seem more like super-lonely cosmically powerful cat ladies who will delete your ass if you don’t behave yourself and love them in just the right way. So, the newer universes are probably run as a background app on the iPhone 26, and managed by… individuals. Perhaps individuals of questionable character.

The home game:
Latest title for the 2042 XBOX-Watson³ Quantum PlayStation Cube:*
Crappy 1993 graphic design simulation: 100% Effective!

*Manufacturer assumes no responsibility for inherently emergent anomalies, useless inventions by game characters, or evolutionary cul-de-sacs including but not limited to: the duck-billed platypus, hippies, meat in a can, reality TV, the TSA, mayonnaise, Sony VAIO products, natto, fundamentalist religious idiots, people who don’t like homos, singers under 21, hangovers, coffee made from cat shit, passionfruit iced tea, and the Pacific garbage patch.

And hey, if true, it’s not exactly bad news
All these ideas are merely hypotheses, and for most humans the practical or theoretical proof or disproof would probably result in the same indifferent shrug. But for those of us who like to rub a few brain cells together from time to time, attempting both to understand the fundamental nature of our reality/simulation and to guess at whether or not we too might someday be capable of simulating ourselves — well, these are some goddamn profound ideas.

So, no need for hand wringing — let’s get on with our character arc and/or real lives. While simulation theory definitely causes reflexive revulsion, “just a simulation” isn’t necessarily pejorative. Sure, if we take a look at the current state of our own computer simulations and A.I. constructs, it is rather insulting. So if we truly are living in a simulation, you gotta give it up to the creator(s), because it’s a goddamn amazing piece of technological achievement.

Addendum: if this still isn’t sinking in, the brilliant
Dinosaur Comics might do a better job explaining:

(This post originally published I think like two days ago at technosnark hub www.anthrobotic.com.)

2012 has already been a bad omen for humankind’s ability to solve the dangers ahead. Perhaps an early review will make next January 1 brighter.

There has been strong information questioning the existence of Hawking radiation, which was a major reason most scientists think black-hole collider research is safe, yet without any increase in calls for a safety conference. Once, because classification kept it from the general public, there was a small debate over whether the first atomic explosion would set off a chain reaction that would consume the earth. On March 1, 1954, the lithium that was, for other purposes, put into what was intended to be a small hydrogen-bomb test created by far the dirtiest atomic explosion ever, as the natives of Bikini Atoll woke up to two suns in the sky that morning. History would be different had the first tests gravely injured people. Eventually, people in the future will look back on how humankind dealt with the possibility of instantly destroying itself as more important than how it dealt with slowly producing more doomsday-like weapons.

With genetic engineering the results are amazing: goats whose hair is thousands of times stronger than wool would gain some increased protection from predators. Think what would happen if one-foot-long, indigestible fibers, possibly with some sharp spots, were accidentally inbred into goat meat, or if very un-tasty animals spread through the wild ecosystem. In 2001, a genetic insecticide intended only to protect corn used in animal feed spread by wind and cross-breeding to corn across the northern hemisphere. Bees drinking corn syrup from one discarded soda can can endanger an entire hive. Now there is fear of this gene getting into wheat, rice, and other plants that don’t rely on insects in some way. The effort to require labeling of genetically modified ingredients doesn’t address this issue and may actually distract from warning of the dangers ahead.

There are some who say bad people want to play God and create a God particle; likewise, some say evil Monsanto, with bad motives, is trying to prevent us from buying safe food. This attitude doesn’t help create a safer future, or empower those trying to deal rationally with the danger.

The next danger is the attempt to impose Helter Skelter on the world with an Islam-baiting movie. Coptic Christians were told there would be a movie about their mistreatment in Egypt. Actors were hired to perform in a movie about flying saucers landing 2,000 years ago that helped a man who never needed a shave, had a faraway look in his eyes, loved his donkey, and didn’t know who his father was. Islam-haters fund-raised to create a movie smearing bin Laden while tricking Muslims into seeing it with movie posters in Arabic.

Somehow, despite the false claim of 100 Jewish donors, no one who looked Jewish and rich was attacked in Los Angeles. Prompt exposure by Coptic Christian religious leaders themselves, rather than by someone else, of the fact that the man who pretended to be a Coptic Christian refugee and the supposed rich Jewish businessman were the same person helped: the quick response prevented attacks on Copts in Egypt. None of the relatives of the four Americans killed in Benghazi wanted Romney to use their names for campaign purposes. It is amazing that, of all the people killed in the riots around the world following this hate trailer, none were Americans whose relatives wanted their names used to promote anger at Muslims.

It is a bad omen that this was looked upon by many as a free-speech issue, not a terror attack. When Lebanese Prime Minister Rafik Hariri was killed, to provoke tit-for-tat revenge killings, by a van that had been stolen a year earlier in Japan, the UN took charge of the investigation. When Charles Manson tried to impose a Helter Skelter race war on the earth, he didn’t come close enough to warrant being punished for a separate crime. If these two previous terror attacks on the world had been carried out in a way that killed no one in the initial attack, is the earth really dumb enough to discuss them as a free-speech issue?

Before he died, Michael Jackson’s sister was alarmed, claiming that his handlers were systematically putting him under stress to put her brother in harm’s way. Conrad Murray this year wants a new trial, insisting that he never would have given Michael such a badly mixed dose of anesthesia orally, and, as no one seems to remember, he had been distracted by a call on his cell phone offering an important business deal. The world is full of incidents where professionals commit a crime in such a complex, convoluted way that it is hard to prosecute as a crime. Perhaps all these incidents could be looked into again.

It would be helpful if those stereotyped as unconcerned spoke out: a skydiver warning about the Collider, or an atheist leader and/or smut dealer speaking out on the hateful religious film attack and calling for investigation and prosecution. This Lifeboat site can accomplish more when it joins in where, stereotypically, one wouldn’t expect it to.

January through October 2012 hasn’t been a good omen for humankind’s ability to solve its problems and deal with danger; perhaps doing a year in review in October, instead of waiting till January, will make next January’s review brighter.

Most blogs stop accepting comments after a while; on this one, you can still comment below months from now.

http://readersupportednews.org/pm-section/22-22/14022-ambassador-stevens-is-a-hero-four-heroes-who-ended-a-helter-skelter-chain

http://richardkanepa.blogspot.com/2012/10/it-is-only-human-to-be-angry-over-ones_18.html

A systematic decay rate of white dwarf stars in the galaxy is possibly implicit in the data that the LSAG scientists of CERN just sent you and which you kindly forwarded to me.

This preliminary evidence is quite alarming. It allows one to extrapolate to the effects that the same causally implicated agent (black holes) would have when produced on earth in ultra-slow form at CERN. This is what CERN has been attempting for two years, and with maximum luminosity during the remaining weeks of 2012.

Much as the “cold” (slow) neutrons in nuclear fission possess a much larger cross-section than fast ones, so the artificial cold mini black holes predictably possess a much larger cross-section than their ultra-fast natural cousins in white dwarfs. Hence the nightmare of but a few years remaining to planet earth would be supported by empirical evidence for the first time.

Can you arrange for a first public dialog with CERN?

Thank you very much,

Sincerely yours,

Prof. Otto E. Rossler, University of Tubingen, Germany

P.S.: An Italian court just convicted 7 scientists for not having predicted an earthquake. This judgment will not prevail, I predict, because it amounts to clairvoyance requested from science by the court. CERN’s public behavior for 5 years belongs in an entirely different category, however, since they openly ignore an extant scientific proof that they are actively causing the worst conceivable disaster. I give CERN the kind advice to stop collisions. And I thank the Lifeboat administration for leaving this text online, for this is not a game. (Compare also a recent German-language newspaper article http://newsticker.sueddeutsche.de/list/id/1374980 .)

New whitepaper/critique on nuclear industrial safety — “International Nuclear Services: Putting Business Before Safety and Other Essays on Nuclear Safety” — asserts specific concern over the 2038 clock-wrap issue in old UNIX/IBM control systems. This is an aggregation of previous contributions to the Lifeboat Foundation on the topic of nuclear safety.

http://environmental-safety.webs.com/apps/blog/

http://environmental-safety.webs.com/nuclear_essays.pdf
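The 2038 clock-wrap issue mentioned above is concrete and easy to demonstrate: a signed 32-bit `time_t` counts seconds since 1970 and runs out early on January 19, 2038. A minimal Python sketch of the wrap (the control-system specifics are the whitepaper’s concern and are not modeled here):

```python
import struct
from datetime import datetime, timezone

# A signed 32-bit time_t counts seconds since the Unix epoch
# (1970-01-01 UTC) and tops out at 2**31 - 1 = 2,147,483,647.
MAX_32BIT = 2**31 - 1

print(datetime.fromtimestamp(MAX_32BIT, tz=timezone.utc).isoformat())
# 2038-01-19T03:14:07+00:00

# One second later, a 32-bit counter wraps to the most negative value,
# which naive code then reads as a date late in 1901.
wrapped, = struct.unpack("<i", struct.pack("<I", (MAX_32BIT + 1) & 0xFFFFFFFF))
print(wrapped)  # -2147483648
```

Systems still running 32-bit timekeeping in 2038 will see time jump backwards by about 136 years, which is the failure mode the whitepaper worries about in legacy control hardware.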

Comments welcome.

I proved that black holes are different – they can only grow in a runaway fashion inside matter.

So if anyone were to produce them on earth, earth would be doomed as soon as one stayed. No one disputes this.

Nevertheless the biggest effort at producing them on earth is going to be made during the next 10 weeks. CERN stages it.

I do not ask CERN to stop it: I only ask CERN to explain why they do it.

AND I ASK EVERYONE TO LISTEN

http://www.cinemablographer.com/2012/06/last-night.html
… is too early a movie at the present time (although it is nice). I instead re-iterate that I cannot understand the stubbornness of a whole planet refusing to check whether the offered proof of danger contains an error or not. The planet’s media have never behaved this irresponsibly before: to refuse checking is never intelligent or defensible in retrospect, or is it?

A German higher administrative court (OVG Münster, Az. 16 A 591/11) ruled definitively yesterday that the principle of reversal of the burden of proof is not applicable in this case: You have to prove that the potentially earth-eating black holes are actually being produced before you can lawfully object to the ongoing attempt at their production.

———————————————–

Let me re-iterate how I see the situation in a manner that is maximally self-critical in a Popperian sense:

The 28-year-old Einstein wrote a paper in 1907 which contained a radically new prediction: clocks located lower down in a gravitational field (like at the base of a high-rise building) tick more slowly, in a locally imperceptible way. The G.P.S. later agreed.
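In textbook notation, that 1907 prediction reads, to first order in the (negative) Newtonian potential:

```latex
d\tau \;=\; \left(1 + \frac{\Phi}{c^2}\right) dt , \qquad \Phi < 0 ,
```

so a clock deeper in the potential well (more negative Φ) accumulates less proper time τ per coordinate time t. For G.P.S. satellites this gravitational effect amounts to roughly +45 microseconds per day relative to ground clocks, partly offset by about −7 microseconds per day of velocity time dilation, and the system indeed corrects for it (standard textbook figures, quoted here for orientation).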

It took quite a number of years until the automatic corollaries of this breathtaking result were identified unequivocally: T (clock time) is accompanied by L (meter-stick length), M (particle mass), and Ch (particle charge), all locally invisibly affected by the same factor, the first two going up, the last two going down. Since some people have difficulty remembering 4 items at once, I call this find Telemach (T, L, M, Ch) for short. Note that Telemachus helped his father Ulysses expel his mother’s suitors.
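In compact form, writing ω ≥ 1 for the local clock-slowdown factor, the four claimed scalings of the preceding paragraph read:

```latex
T' = \omega\, T, \qquad L' = \omega\, L, \qquad M' = \frac{M}{\omega}, \qquad Ch' = \frac{Ch}{\omega}.
```

This merely restates the verbal claim above in symbols; whether these scalings actually hold in full general relativity is exactly the disputed point.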

Telemach does not interfere with general relativity: it is implicit in it. But Telemach interferes with some allegedly physical implications of general relativity — Reissner-Nordström and Kerr-Newman among them. And the most famous post-Einsteinian black-hole theorems (including wormhole-based time travel and Hawking evaporation) go down the drain as well. Thus there automatically exists a strong lobby against Telemach, according to the motto “This is not true, and if it is true we don’t believe it.”

Why do I insist that Telemach is important enough to deserve an attempt at falsification by the scientific community? The reason is the eagerly hoped-for production of black holes (“one every second”) at CERN in Switzerland. CERN’s scientists published a big paper a year ago to the effect that they did not find any. Unfortunately, the shunned young Telemach predicts that CERN’s sensors cannot detect its most hoped-for success, black holes.

The prediction of an enhanced success rate combined with sensor blindness goes so much against the honor of CERN that they decided 4 years ago to sit it out. No update of the 2008 safety report has appeared since; no talking to the press about dangers any more; no compliance with a court’s tactful request to hold a “safety conference” (January 27, 2011); no permission given to “CERN’s Young Scientists” to implement the invitation for a talk they extended to a CERN critic 2½ years ago; no permission for CERN’s sister organization (the United Nations Security Council) to conduct an investigation — while simultaneously the fact that this highest terrestrial body had been asked for help blocked all national parliaments from discussing the matter.

No one who is accused of committing a crime — this one could be the worst in history — can be expected to act differently. I sympathize with CERN. Few temptations for mortals are more inescapable. Imagine: you got ten billion dollars to test certain hypotheses with maximally shiny machines, and then it is revealed that if you follow the allotted task, an entirely new risk is encountered: those who gave you the money will not be pleased. Will you have to give the money back if you hesitate to continue on schedule? In other words: no one would have acted differently in CERN’s place. They just had to crank up their attempt to generate these suddenly, allegedly dangerous objects, every year and every month and every day — especially during the last three scheduled months that we are living through right now, before the machine is dismantled for two years in favor of an almost doubled-in-performance successor after the end of 2012.

So everyone fully understands CERN, both in the present situation and from the point of view of a future intelligence looking back (as we hope will exist). Only some blockheads who still believe in individual virtues like honesty and dignity are protesting: “Is this not a crime?” When the emperor asks you to die along with him, you have no choice but to comply.

Some parents love their children more than the emperor. I see no friend for the end of the world as of yet.

Today is Felix-Baumgartner day since creativity wins. And today, I saw an interesting dialog about my potentially planet-saving results on the Internet. The latter was conducted by amateur physicists ( http://www.sciforums.com/showthread.php?113769-Invited-%28peer%29-review-of-article%28s%29-by-Otto-R%F6ssler ) who thereby have earned great merit since the whole rest of the profession refuses to come out.

The young colleagues tried to convince themselves and their readers that my “Telemach” result, which has planet-saving potential if flawless, violates textbook and wiki wisdom and therefore is bound to be false.

Nevertheless I am very grateful to Mr. “rpenner” (pseudonym) and his friends for their being the only scientists so far who dare come out in a not totally anonymous way.

The emphasis they place on the Rindler metric at the beginning is especially meritorious. The Rindler metric is arguably the most important post-Einsteinian discovery. It implies the Telemach theorem – on the truthfulness of which the survival of the planet is predicated as no one denies.

But is the Rindler metric not well known and no one ever extracted fundamental new implications from it? Let me take this topic up for you.

The Rindler metric describes a one-light-year-long rocketship which has 1 g acceleration (earth’s gravity) at the tip and infinite acceleration at the rear end. It consists of a very large number of “rocket rings” lined up between tip and bottom that all stick together spontaneously without touching, because their constant accelerations vary in a lawfully graded manner. The best textbook is still Robert M. Wald’s “General Relativity” of 1984. It correctly reproduces (on page 151) the everywhere-equal coordinate ticking rate valid over the whole length of the ship — which, however, does not reproduce the local clocks’ readings, as the book correctly stresses. The local clocks rather tick more and more slowly toward the tail end, to become effectively frozen there. This “local reality” of unit time intervals T inside the Rindler rocket has three corollaries: L (a meter stick’s length) is locally imperceptibly increased in proportion to T; M (a unit mass like that of an electron) is locally imperceptibly decreased by the same factor; and Ch (a unit charge) is likewise locally imperceptibly reduced in proportion.
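For reference, one standard form of the Rindler line element (in coordinates where the horizon sits at x = 0; conventions vary between textbooks) is:

```latex
ds^2 \;=\; -\left(\frac{g\,x}{c^2}\right)^{\!2} c^2\, dt^2 \;+\; dx^2 + dy^2 + dz^2 ,
\qquad \alpha(x) = \frac{c^2}{x} ,
```

so the proper acceleration α(x) diverges toward the rear (x → 0), while a clock at height x ticks at the rate dτ/dt = gx/c², consistent with the “frozen” tail end described above.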

This is maximally strange, since the same rocketship — when briefly interrupted in its acceleration everywhere in the external simultaneity, while momentarily at rest along the horizontal axis — is not infinitely long (only one light year long), and is neither infinitely mass-reduced nor charge-reduced at the tail. Nevertheless these interior artifacts are “ontological”: an astronaut who descends from the rocket’s tip inside is, after having been hauled back up again, indeed empirically younger on return, in accordance with the equations. Thus it is the internal picture T, L, M, Ch (“Telemach”), and not the external one, which proves to be physically relevant.

My message is that this new ontology is presently being neglected by humankind at the risk of self-extinction. A “safety conference” is all that I am requesting for 4 years.

It will be my privilege — and perhaps not only mine — to learn more about these matters in the continued dialog with Mr. rpenner and his friends, Ms. Trooper and the others, on this forum here. Or, if they so prefer, on theirs, to be mirrored here, since the present discussion started out on Lifeboat. And if we are lucky, we will even be granted a word of kind advice from grandmaster Wolfgang Rindler himself.