
It may have gone unnoticed by most, but the first expedition for mankind’s first permanent undersea colony will begin in July of next year. These aquanauts will be the first humans to move to such a habitat (~2015) and stay, with no intention of ever calling dry land home again. Further details: http://underseacolony.com/core/index.php

Of the 100 billion humans who have ever lived, not one has gone undersea to live permanently. The Challenger Station habitat, the largest manned undersea habitat ever built, will establish the first permanent undersea colony, with aspirations that the ocean will form a new frontier of human colonization. Could it be a long-term success?

The knowledge gained from how to adapt and grow isolated ecosystems in unnatural environs, and the effects on the mentality and social well-being of the colony, may provide interesting insights into how to establish effective off-Earth colonies.

One can start to pose the questions: what makes the colony self-sustainable? What makes the colony adaptive and able to expand its horizons? What socio-political structure works best in a small interdependent colony? Perhaps it is not in the first six months of sustainability, but after decades of regeneration, that the true dynamics become apparent.

Whilst one does not find a lawyer, a politician or a management consultant on the initial crew, one can be assured that if the project succeeds, it may start to require professions not previously considered. At what size of colony does it become important to have a medical team, and not just one part-time doctor? What about teaching skills and schooling for the next generation, to ensure each mandatory skill set is sustained across generations? In this light, it could become the first social project to determine the minimal crew balance for a sustainable permanent off-Earth Lifeboat. One can muse back to the satire of the Golgafrincham B Ark in The Hitch-Hiker’s Guide to the Galaxy, where Golgafrinchan telephone sanitisers, management consultants and marketing executives were persuaded that their planet was under threat from an enormous mutant star goat, packed into Ark spaceships, and sent to an insignificant planet… which turned out to be Earth. It is a satirical reminder that the choice of crew and colony on a real Lifeboat would require the utmost social research.

1) Unchargedness (Reissner disproved)
2) Arise more readily (string theory confirmed)
3) Are indestructible (Hawking disproved)
4) Are invisible to CERN’s detectors (CERN publication disconfirmed)
5) Slowest specimens will stay inside earth (conceded by CERN)
6) Enhanced cross section due to slowness (like cold neutrons)
7) Exponential growth inside earth (quasar-scaling principle)

The final weeks of 2012 will again double the danger that the earth is going to be shrunk to 2 cm after a delay of a few years. No one on the planet demands investigation. The African Journal of Mathematics did the most for the planet. I ask President Obama to demand a safety statement from CERN immediately. The planet won’t forget it. Nor will America the beautiful. P.S. I thank Tom Kerwick who deleted all my latest postings on Lifeboat for his demanding a “substantiated” posting. I now look forward to his response.

Appendage: “It may Interest the World that I just found T, L, M in Einstein’s 1913 paper on Nordström (“On the present state of the problem of gravitation”) – so that it can no longer be ignored. The result is inherited by the full-fledged theory of general relativity of 1915 but was no longer remembered to be implicit. I give this information to the planet to show that my black-hole results (easy production, no Hawking evaporation, exponential voraciousness) can no longer be ignored by CERN. They call for an immediate stop of the LHC followed by a safety conference. I renew my appeal to the politicians of the world, and especially President Obama, to support my plea. Everyone has the human right to be informed about a new scientific result that bears on her or his survival. I recommend http://www.pitt.edu/~jdnorton/papers/einstein-nordstroem-HGR3.pdf for background information” — 2nd Nov.

FutureICT have submitted their proposal to the FET Flagship Programme, an initiative that aims to facilitate breakthroughs in information technology. The vision of FutureICT is to

integrate the fields of information and communication technologies (ICT), social sciences and complexity science, to develop a new kind of participatory science and technology that will help us to understand, explore and manage the complex, global, socially interactive systems that make up our world today, while at the same time paving the way for a new paradigm of ICT systems that will leverage socio-inspired self-organisation, self-regulation, and collective awareness.

The project could provide us with profound insights into societal behaviour and improve policymaking. The project echoes the Large Hadron Collider at CERN in its scope and vision, only here we are trying to understand the state of the world. The FutureICT project combines the creation of a ‘Planetary Nervous System’ (PNS), where Big Data will be collated and organised, a ‘Living Earth Simulator’ (LES), and a ‘Global Participatory Platform’ (GPP). The LES will simulate the data and provide models for analysis, while the GPP will provide the data, models and methods to everyone. People will be able to collaborate and research in a very different way. The availability of Big Data to participants will strengthen our ability to understand complex socio-economic systems, and it could help build a new dialogue between nations on how to solve complex global societal challenges.

FutureICT aim to develop a ‘Global Systems Science’, which will

lay the theoretical foundations for these platforms, while the focus on socio-inspired ICT will use the insights gained to identify suitable designs for socially interactive systems and the use of mechanisms that have proven effective in society as operational principles for ICT systems.

It is exciting to think about the possible breakthroughs that could be made. What new insights and scientific discoveries could be made? What new technologies could emerge? The Innovation Accelerator (IA) is one feature of the venture that could create both disruptive technology and politics. Next year will open up a new world of possibilities. A possible project for the Lifeboat Foundation to be involved in.

To achieve interstellar travel, the Kline Directive instructs us to be bold, to explore what others have not, to seek what others will not, to change what others dare not. To extend the boundaries of our knowledge, to advocate new methods, techniques and research, to sponsor change not status quo, on 5 fronts, Legal Standing, Safety Awareness, Economic Viability, Theoretical-Empirical Relationships, and Technological Feasibility.

In this post I discuss the third and final part, Concepts and Logical Flow, of how to read or write a journal paper, a skill not taught in colleges.

A paper consists of a series of evolving concepts expressed as paragraphs. If a concept is too complex to be detailed in a single paragraph, then break it down into several sub-concept paragraphs. Make sure there is logical evolution of thought across these sub-concepts, and across the paper.

As a general rule, your sentences should be short. Try very hard not to exceed two lines of Letter or A4 size paper at font size 11. Use commas judiciously. Commas are not meant to extend sentences or divide a sentence into several points! They are used to break a sentence into sub-sentences, to indicate a pause when reading aloud. How you use commas can alter the meaning of a sentence. Here is an example.

And this I know with confidence, I remain and continue …

Changing the position of the comma changes the meaning to

And this I know, with confidence I remain and continue …

We see how ‘confidence’ changes from the speaker’s assessment of his state of knowledge, to the speaker’s reason for being. So take care.

When including mathematical formulae, always wrap them with an opening paragraph and a closing paragraph. Why? This enhances the clarity of the paper. The opening paragraph introduces the salient features of the equation(s): what the reader needs to be aware of in the equation(s), an explanation of the symbols, or why the equation is being introduced.

The closing paragraph explains what the author found by stating the equations, and what the reader should expect to look for in subsequent discussions, or even why the equation(s) is or is not relevant to subsequent discussions.

Many of these concept-paragraphs are logically combined into sections, and each section has a purpose for its inclusion. Though this purpose may not always be stated in the section, it is important to identify what it is and why it fits in with the overall schema of the paper.

The basic schema of a paper consists of an introduction, body and conclusion. Of course there are variations to this basic schema, and you need to ask the question: why does the author include other types of sections?

In the introduction section(s) you summarize your case: what your paper is about, and what others have reported. In the body sections you present your work. In the conclusion section you summarize your findings and the future direction of the research. Why? Because a busy researcher can read your introduction and conclusion and then decide whether your paper is relevant to his or her work. Remember, we are working within a community of researchers in an asynchronous manner, an asynchronous team, if you will. As more and more papers are published every year, we don’t have the time to read all of them completely. So we need a method of eliminating papers we are not going to read.

An abstract is usually a summary of the body of the paper. It is difficult to do well and should only be written after you have completed your paper. That means you are planning ahead and have your paper written and abstracts completed when you receive the call for papers.

An abstract tells us if the paper could be relevant, and whether to include it in the list of papers considered for the shortlist of papers to be read. The introduction and conclusion tell us if the paper should be removed from our shortlist. If the conclusion fits in with what we want to achieve, then don’t remove the paper from the shortlist.

I follow a rule when writing the introduction section. If I am writing to add to the body of consensus, I state my case and then write a review of what others have reported. If I am negating the body of consensus, then I write a review of what others have reported, and then only do I state my case of why not.

As a general rule, you write several iterations of the body first, then the introduction, and finally the conclusion. You’d be surprised by how your thinking changes if you do it this way. This is because you have left yourself open to other inferences that had not crossed your mind between the time you completed your work and the time you started writing your paper.

If someone else has theoretical or experimental results that apparently contradict your thesis, then discuss why and why not, and you might end up changing your mind. It is not a ‘sin’ to include contradictory results, but make sure you discuss them intelligently and impartially.

Your work is the sowing and growing period. Writing the paper is the harvesting period. What are you harvesting? Wheat, weeds or both? Clearly the more wheat you harvest the better your paper. The first test for this is the logical flow of your paper. If it does not flow very well, something is amiss! You the author, and you the reader beware! There is no substitute but to rethink your paper.

The second test is whether you have tangential discussions in your paper that seem interesting but are not directly relevant. Prune, prune & prune. If necessary, split into multiple concise papers. A concise & sharp paper that everyone remembers is more valuable than a long one that readers have to plough through.

Go forth, read well and write more.

Previous post in the Kline Directive series.

Next post in the Kline Directive series.

—————————————————————————————————

Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.

Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.


In this post I discuss part 2 of 3, Mathematical Construction versus Mathematical Conjecture, of how to read or write a journal paper, a skill not taught in colleges.

I did my Master of Arts in Operations Research (OR) at the best OR school in the United Kingdom, University of Lancaster, in the 1980s. We were always reminded that models have limits to their use. There is an operating range within which a model will provide good and reliable results. But outside that operating range, a model will provide unreliable, incorrect and even strange results.

Doesn’t that sound a lot like what the late Prof. Morris Kline was saying? We can extrapolate this further, and ask our community of theoretical physicists the question, what is the operating range of your theoretical model? We can turn the question around and require our community of theoretical physicists to inform us or suggest boundaries of where their models fail “ … to provide reasonability in guidance and correctness in answers to our questions in the sciences …”

A theoretical physics model is a mathematical construction that is not necessarily connected to the real world until it is empirically verified or falsified; until then, these mathematical constructions are in limbo. Search the term ‘retrocausality’, for example. The Wikipedia article Retrocausality says a lot about the how and why of the origins of theoretical physics models that are not within the range of our informed common sense. Let me quote:

“The Wheeler–Feynman absorber theory, proposed by John Archibald Wheeler and Richard Feynman, uses retrocausality and a temporal form of destructive interference to explain the absence of a type of converging concentric wave suggested by certain solutions to Maxwell’s equations. These advanced waves don’t have anything to do with cause and effect, they are just a different mathematical way to describe normal waves. The reason they were proposed is so that a charged particle would not have to act on itself, which, in normal classical electromagnetism leads to an infinite self-force.”

John Archibald Wheeler and Richard Feynman are giants in the physics community, and these esteemed physicists used retrocausality to solve a mathematical construction problem. Could they not have asked different questions? What is the operating range of this model? How do we rethink this model so as not to require retrocausality?

This unfortunate leadership in retrocausality has led to a whole body of ‘knowledge’ by the name of ‘retrocausality’ that is in a state of empirical limbo and thus, the term mathematical conjecture applies.

Now, do you get an idea of how mathematical construction leads to mathematical conjecture? Someone wants to solve a problem, which is a legitimate quest because that is how science progresses, but the solution causes more problems (not questions) than previously, which leads to more physicists trying to answer those new problems, and so forth… and so forth… and so forth…

In Hong Kong, the Cantonese have an expression “chasing the dragon”.

Disclaimer: I am originally from that part of the world, and enjoyed tremendously watching how the Indian and Chinese cultures collided, merged, and separated, repeatedly. Sometimes like water and oil, and sometimes like water and alcohol. These two nations share a common heritage, the Buddhist monks, and if they could put aside their nationalistic and cultural pride, who knows what could happen?

Chasing the dragon in the Chinese cultural context “refers to inhaling the vapor from heated morphine, heroin, oxycodone or opium that has been placed on a piece of foil. The ‘chasing’ occurs as the user gingerly keeps the liquid moving in order to keep it from coalescing into a single, unmanageable mass. Another more metaphorical use of the term ‘chasing the dragon’ refers to the elusive pursuit of the ultimate high in the usage of some particular drug.”

Solving a mathematical equation always gives a high, and discovering a new equation gives a greater high. So when we write a paper, we have to ask ourselves, are we chasing the dragon of mathematical conjecture or chasing the dragon of mathematical construction? I hope it is the latter.

Previous post in the Kline Directive series.


The historical context in which Brain Computer Interfaces (BCIs) have emerged was addressed in a previous article called “To Interface the Future: Interacting More Intimately with Information” (Kraemer, 2011). This review addresses the methods that have formed current BCI knowledge, the directions in which the field is heading, and the emerging risks and benefits. It also addresses why neural stem cells can help establish better BCI integration, the overall mapping of where various cognitive activities occur, and how a future BCI could potentially provide direct input to the brain instead of only receiving and processing information from it.

EEG Origins of Thought Pattern Recognition
Early BCI work to study cognition and memory involved implanting electrodes into rats’ hippocampi and recording their EEG patterns in very specific circumstances while the rats explored a track, both when awake and sleeping (Foster & Wilson, 2006; Tran, 2012). Some of these patterns are later replayed by the rat in reverse chronological order, indicating a retrieval of the memory both when awake and asleep (Foster & Wilson, 2006). Dr. John Chapin showed that thoughts of movement can be written to a rat in order to remotely control it (Birhard, 1999; Chapin, 2008).

A few human paraplegics have volunteered for somewhat similar electrode implants into their brains, an enhanced BrainGate2 hardware and software device, to use as a primary data input device (UPI, 2012; Hochberg et al., 2012). Clinical trials of an implanted BCI are underway with the BrainGate2 Neural Interface System (BrainGate, 2012; Tran, 2012). Currently, the integration of the electrodes into the brain or peripheral nervous system can be somewhat slow and incomplete (Grill et al., 2001). Nevertheless, research to optimize the electro-stimulation patterns and voltage levels in the electrodes, combine cell cultures and neurotrophic factors into the electrode, and enhance “endogenous pattern generators” through rehabilitative exercises is likely to bring the integration closer to full functional restoration in prostheses (Grill et al., 2001), and to improved functionality in other BCIs as well.

When integrating neuro-chips into the peripheral nervous system for artificial limbs, or even directly into the cerebral sensorimotor cortex as has been done for some military veterans, neural stem cells would likely help heal the damage at the site of the lost limb and speed up the rate at which the neuro-chip is integrated into the innervating tissue (Grill et al., 2001; Park, Teng, & Snyder, 2002). Neural stem cells are best known for their natural regenerative ability, which would also help re-establish the effectiveness of the damaged original neural connections (Grill et al., 2001).

Neurochemistry and Neurotransmitters to be Mapped via Genomics
Cognition is electrochemical, and thus the electrodes tell only part of the story. The chemicals are more clearly coded for by specific genes. Jaak Panksepp is breeding one line of rats that is particularly prone to joy and social interaction, and another that tends towards sadness and more solitary behavior (Tran, 2012). He asserts that emotions emerged from genetic causes (Panksepp, 1992; Tran, 2012) and plans to genome-sequence members of both lines to determine the genomic causes of, or correlations with, these core dispositions (Tran, 2012). Such causes are quite likely to apply to humans, as similar or homologous genes are likely to be present in the human genome. Candidate chemicals like dopamine and serotonin may be confirmed genetically, new neurochemicals may be identified, or both. It is a promising long-term study, and large databases of human genomes accompanied by the medical histories of each individual could yield similar discoveries. A private study of the medical and genomic records of the population of Iceland is underway; in the last 10 years it has produced unique genetic diagnostic tests for increased risk of type 2 diabetes, breast cancer, prostate cancer, glaucoma, high cholesterol/hypertension and atrial fibrillation, as well as a personal genomic testing service for these genetic factors (deCODE, 2012; Weber, 2002). By breeding 2 lines of rats based on whether they display joyful behavior or not, the lines should likewise develop uniquely different genetic markers in their respective populations (Tran, 2012).

fMRI and fNIRS Studies to Map the Flow of Thoughts into a Connectome
Though EEG-based BCIs have been effective in translating movement intentionality of the cerebral motor cortex for neuroprostheses, or for moving a computer cursor or other directional or navigational device, they have not advanced the understanding of the underlying processes of other types or modes of cognition or experience (NPG, 2010; Wolpaw, 2010). The use of functional Magnetic Resonance Imaging (fMRI) machines, functional Near-Infrared Spectroscopy (fNIRS), and sometimes Positron Emission Tomography (PET) scans for literally deeper insights into the functioning of brain metabolism, and thus neural activity, has increased in order to determine the relationships or connections among regions of the brain, now known collectively as the connectome (Wolpaw, 2010).

Dr. Read Montague explained broadly how his team linked several fMRI centers around the world across the Internet so that various economic games could be played, and the region-specific brain activity of all the participating players could be recorded in real time at each step of the game (Montague, 2012). The publication on this fMRI experiment shows the interaction between baseline suspicion in the amygdala and the ongoing evaluation of the specific situation that may increase or decrease that suspicion, which occurred in the parahippocampal gyrus (Bhatt et al., 2012). Since fMRI equipment is very large, immobile and expensive, it cannot be used in many situations (Solovey et al., 2012). To essentially substitute for the fMRI, the fNIRS was developed; it can be worn on the head and is far more convenient than the traditional full-body fMRI scanner, which requires a sedentary or prone position to work (Solovey et al., 2012).

In a study of people multitasking on the computer while wearing the head-mounted fNIRS device called Brainput, the device automatically modified the behavior of 2 remotely controlled robots whenever it detected an information overload in the multitasking brain of the human navigating both robots simultaneously over several differently designed terrains (Solovey et al., 2012).

Writing Electromagnetic Information to the Brain?
These 2 examples of the Human Connectome Project, led by the National Institutes of Health (NIH) in the US and also underway in other countries, show how early the mapping of brain-region interaction is for higher cognitive functions beyond sensory-motor interactions. Nevertheless, one Canadian neuroscientist has taken volunteers for an early example of writing some electromagnetic input into the human brain to induce paranormal kinds of subjective experience, and has been doing so since 1987 (Cotton, 1996; Nickell, 2005; Persinger, 2012). Dr. Michael Persinger uses small electrical signals across the temporal lobes in an environment with partial audio-visual isolation to reduce neural distraction (Persinger, 2003). These microtesla magnetic fields, especially when applied to the right hemisphere of the temporal lobes, often induced a sense of an “other” presence, generally described as supernatural in origin by the volunteers (Persinger, 2003). This early example shows how input can be received directly by the brain as well as recorded from it.

Higher Resolution Recording of Neural Data
Electrodes from EEGs and electromagnets from fMRI and fNIRIS still record or send data at the macro level of entire regions or areas of the brain. Work on intracellular recording such as the nanotube transistor allows for better understanding at the level of neurons (Gao et al., 2012). Of course, when introducing micro scale recording or transmitting equipment into the human brain, safety is a major issue. Some progress has been made in that an ingestible microchip called the Raisin has been made that can transmit information gathered during its voyage through the digestive system (Kessel, 2009). Dr. Robert Freitas has designed many nanoscale devices such as Respirocytes, Clottocytes and Microbivores to replace or augment red blood cells, platelets and phagocytes respectively that can in principle be fabricated and do appear to meet the miniaturization and propulsion requirements necessary to get into the bloodstream and arrive at the targeted system they are programmed to reach (Freitas, 1998; Freitas, 2000; Freitas, 2005; Freitas, 2006).

The primary obstacle is the tremendous gap between assembling at the microscopic level and the molecular level. Dr. Richard Feynman described the crux of this struggle to bridge the divide between atoms in his now famous talk given on December 29, 1959 called “There’s Plenty of Room at the Bottom” (Feynman, 1959). To encourage progress towards the ultimate goal of molecular manufacturing by enabling theoretical and experimental work, the Foresight Institute has awarded annual Feynman Prizes every year since 1997 for contribution in this field called nanotechnology (Foresight, 2012).

The Current State of the Art and Science of Brain Computer Interfaces
Many neuroscientists think that cellular or even atomic level resolution is probably necessary to understand and certainly to interface with the brain at the level of conceptual thought, memory storage and retrieval (Ptolemy, 2009; Koene, 2010) but at this early stage of the Human Connectome Project this evaluation is quite preliminary. The convergence of noninvasive brain scanning technology with implantable devices among volunteer patients supplemented with neural stem cells and neurotrophic factors to facilitate the melding of biological and artificial intelligence will allow for many medical benefits for paraplegics at first and later to others such as intelligence analysts, soldiers and civilians.

Some scientists and experts in Artificial Intelligence (AI), such as Ben Goertzel, Ray Kurzweil, Kevin Warwick, Stephen Hawking, Nick Bostrom, Peter Diamandis, Dean Kamen and Hugo de Garis, express the concern that AI software is on track to exceed human biological intelligence before the middle of the century (Bostrom, 2009; de Garis, 2009; Ptolemy, 2009). The need for fully functioning BCIs that integrate higher-order conceptual thinking, memory recall and imagination into cybernetic environments gains ever more urgency if we consider the existential risk to the long-term survival of the human species, or the eventual natural descendants of that species. This call for an intimate and fully integrated BCI then acts as a shield against the possible emergence of an AI independent of us as a life form, and thus a possible rival and intellectually superior threat to the human heritage and dominance on this planet and its immediate solar-system vicinity.

References

Bhatt MA, Lohrenz TM, Camerer CF, Montague PR. (2012). Distinct contributions of the amygdala and parahippocampal gyrus to suspicion in a repeated bargaining game. Proc. Nat’l Acad. Sci. USA, 109(22):8728–8733. Retrieved October 15, 2012, from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3365181/pdf/pnas.201200738.pdf.

Birhard, K. (1999). The science of haptics gets in touch with prosthetics. The Lancet, 354(9172), 52–52. Retrieved from http://search.proquest.com/docview/199023500

Bostrom, N. (2009). When Will Computers Be Smarter Than Us? Forbes Magazine. Retrieved October 19, 2012, from http://www.forbes.com/2009/06/18/superintelligence-humanity-oxford-opinions-contributors-artificial-intelligence-09-bostrom.html.

BrainGate. (2012). BrainGate — Clinical Trials. Retrieved October 15, 2012, from http://www.braingate2.org/clinicalTrials.asp.

Chapin, J. (2008). Robo Rat — The Brain/Machine Interface [Video]. Retrieved October 19, 2012, from http://www.youtube.com/watch?v=-EvOlJp5KIY.

Cotton, I. (1996). Dr. Persinger’s god machine. Free Inquiry, 17, 47–51. Retrieved from http://search.proquest.com/docview/230100330.

de Garis, H. (2009, June 22). The Coming Artilect War. Forbes Magazine. Retrieved October 19, 2012, from http://www.forbes.com/2009/06/18/cosmist–terran-cyborgist-opinions-contributors-artificial-intelligence-09-hugo-de-garis.html.

deCODE genetics. (2012). deCODE genetics – Products. Retrieved October 26, 2012, from http://www.decode.com/products.

Feynman, R. (1959, December 29). There’s Plenty of Room at the Bottom, An Invitation to Enter a New Field of Physics. Caltech Engineering and Science. 23(5)22–36. Retrieved October 17, 2012, from http://calteches.library.caltech.edu/47/2/1960Bottom.pdf.

Foresight Institute. (2012). FI sponsored prizes & awards. Retrieved October 17, 2012, from http://www.foresight.org/FI/fi_spons.html.

Foster, D. J., & Wilson, M. A. (2006). Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature, 440(7084), 680–3. doi: 10.1038/nature04587.

Freitas, R. (1998). Exploratory Design in Medical Nanotechnology: A Mechanical Artificial Red Cell, Artificial Cells, Blood Substitutes, and Immobil. Biotech.26(1998):411–430. Retrieved October 15, 2012, from http://www.foresight.org/Nanomedicine/Respirocytes.html.

Freitas, R. (2000, June 30). Clottocytes: Artificial Mechanical Platelets. Foresight Update, (41)9–11. Retrieved October 15, 2012, from http://www.imm.org/publications/reports/rep018.

Freitas, R. (2005, April). Microbivores: Artificial Mechanical Phagocytes using Digest and Discharge Protocol. J. Evol. Technol., (14)55–106. Retrieved October 15, 2012, from http://www.jetpress.org/volume14/freitas.pdf.

Freitas, R. (2006, September). Pharmacytes: An Ideal Vehicle for Targeted Drug Delivery. J. Nanosci. Nanotechnol., (6)2769–2775. Retrieved October 15, 2012, from http://www.nanomedicine.com/Papers/JNNPharm06.pdf.

Gao, R., Strehle, S., Tian, B., Cohen-Karni, T. Xie, P., Duan, X., Qing, Q., & Lieber, C.M. (2012). “Outside looking in: Nanotube transistor intracellular sensors” Nano Letters. 12(3329−3333). Retrieved September 7, 2012, from http://cmliris.harvard.edu/assets/NanoLet12-3329_RGao.pdf.

Grill, W., McDonald, J., Peckham, P., Heetderks, W., Kocsis, J., & Weinrich, M. (2001). At the interface: convergence of neural regeneration and neural prostheses for restoration of function. Journal Of Rehabilitation Research & Development, 38(6), 633–639.

Hochberg, L. R., Bacher, D., Jarosiewicz, B., Masse, N. Y., Simeral, J. D., Vogel, J., Donoghue, J. P. (2012). Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature, 485(7398), 372–5. Retrieved from http://search.proquest.com/docview/1017604144.

Kessel, A. (2009, June 8). Proteus Ingestible Microchip Hits Clinical Trials. Retrieved October 15, 2012, from http://singularityhub.com/2009/06/08/proteus–ingestible-microchip-hits-clinical-trials.

Koene, R.A. (2010). Whole Brain Emulation: Issues of scope and resolution, and the need for new methods of in-vivo recording. Presented at the Third Conference on Artificial General Intelligence (AGI2010). March, 2010. Lugano, Switzerland. Retrieved August 29, 2010, from http://rak.minduploading.org/publications/publications/koene.AGI2010-lecture.pdf?attredirects=0&d=1.

Kraemer, W. (2011, December). To Interface the Future: Interacting More Intimately with Information. Journal of Geoethical Nanotechnology. 6(2). Retrieved December 27, 2011, from http://www.terasemjournals.com/GNJournal/GN0602/kraemer.html.

Montague, R. (2012, June). What we’re learning from 5,000 brains. Retrieved October 15, 2012, from http://video.ted.com/talk/podcast/2012G/None/ReadMontague_2012G-480p.mp4.

Nature Publishing Group (NPG). (2010, December). A critical look at connectomics. Nature Neuroscience. p. 1441. doi:10.1038/nn1210-1441.

Nickell, J. (2005, September). Mystical experiences: Magnetic fields or suggestibility? The Skeptical Inquirer, 29, 14–15. Retrieved from http://search.proquest.com/docview/219355830

Panksepp, J. (1992). A Critical Role for “Affective Neuroscience” in Resolving What Is Basic About Basic Emotions. 99(3)554–560. Retrieved October 14, 2012, from http://www.communicationcache.com/uploads/1/0/8/8/10887248/a_critical_role_for_affective_neuroscience_in_resolving_what_is_basic_about_basic_emotions.pdf.

Park, K. I., Teng, Y. D., & Snyder, E. Y. (2002). The injured brain interacts reciprocally with neural stem cells supported by scaffolds to reconstitute lost tissue. Nature Biotechnology, 20(11), 1111–7. doi: 10.1038/nbt751.

Persinger, M. (2003). The Sensed Presence Within Experimental Settings: Implications for the Male and Female Concept of Self. Journal of Psychology. (137)1.5–16. Retrieved October ‎October ‎14, ‎2012, from http://search.proquest.com/docview/213833884.

Persinger, M. (2012). Dr. Michael A. Persinger. Retrieved October 27, 2012, from http://142.51.14.12/Laurentian/Home/Departments/Behavioural+Neuroscience/People/Persinger.htm?Laurentian_Lang=en-CA

Ptolemy, R. (Producer & Director). (2009). Transcendent Man [Film]. Los Angeles: Ptolemaic Productions, Therapy Studios.

Solovey, E., Schermerhorn, P., Scheutz, M., Sassaroli, A., Fantini, S. & Jacob, R. (2012). Brainput: Enhancing Interactive Systems with Streaming fNIRS Brain Input. Retrieved August 5, 2012, from http://web.mit.edu/erinsol/www/papers/Solovey.CHI.2012.Final.pdf.

Tran, F. (Director). (2012). Dream Life of Rats [Video]. Retrieved ?September ?21, ?2012, from http://www.hulu.com/watch/388493.

UPI. (2012, May 31). People with paralysis control robotic arms to reach and grasp using brain computer interface. UPI Space Daily. Retrieved from http://search.proquest.com/docview/1018542919

Weber, J. L. (2002). The iceland map. Nature Genetics, 31(3), 225–6. doi: http://dx.doi.org/10.1038/ng920

Wolpaw, J. (2010, November). Brain-computer interface research comes of age: traditional assumptions meet emerging realities. Journal of Motor Behavior. 42(6)351–353. Retrieved September 10, 2012, from http://www.tandfonline.com/doi/pdf/10.1080/00222895.2010.526471.

To achieve interstellar travel, the Kline Directive instructs us to be bold, to explore what others have not, to seek what others will not, to change what others dare not. To extend the boundaries of our knowledge, to advocate new methods, techniques and research, to sponsor change not status quo, on five fronts: Legal Standing, Safety Awareness, Economic Viability, Theoretical-Empirical Relationships, and Technological Feasibility.

I was not intending to write Part 5, but judging from the responses I thought it was necessary to explain how to read a journal paper – and a good read cannot be done without pen and paper. If you are writing a paper, I would suggest that once you have completed it, you set it aside for at least a week. Don’t think about your paper or its topic during this shmita period. Then come back to it, pen and paper in hand, and read it afresh. You’d be surprised by the number of changes you make, which means you have to start well before your deadline.

Note, you can find many articles on how to review or write papers; one good one, by the IOP (Institute of Physics, UK), is titled Introduction to Refereeing, and it is worth reading before you read or write a paper. It is aimed at physics, but applies to all the sciences and engineering disciplines.

Note, for those who have been following the comments on my blog posts, IOP explicitly states “Do not just say ‘This result is wrong’ but say why it is wrong…” and “be professional and polite in your report”. So I hope, we as commentators, will be more professional in both our comments and the focus of our comments. Thanks.

In this post I will address what is not taught in colleges. There are three things to look out for when reading or writing a paper, Explicit and Implicit Axioms, Mathematical Construction versus Mathematical Conjecture, and finally, Concepts and Logical Flow. In this first part I discuss Explicit and Implicit Axioms.

This may sound silly, but 1 + 1 = 2 is not an axiom. Alfred North Whitehead and Bertrand Russell proved, in Principia Mathematica, that 1 + 1 equals 2. The immense success of modern civilization, compared to all previous civilizations, rests on the quiet mathematical rigor worked into our daily lives by nameless, faceless scientists, engineers and technicians. Now that is something to ponder. If we lose that rigor we lose our society. We can discuss economic and political theory, but without this mathematical rigor nothing else works.
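In Principia Mathematica that derivation famously required hundreds of pages of groundwork; in a modern proof assistant the same fact follows almost directly from the definitions. A minimal sketch in Lean:

```lean
-- 1 + 1 = 2 is a theorem, not an axiom: with the natural numbers
-- defined via zero and successor, both sides unfold to the same
-- term, so reflexivity (`rfl`) closes the proof.
example : 1 + 1 = 2 := rfl
```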

Any theoretical work is based on axioms. For example, in Euclidean geometry one assumes that surfaces are flat, such that the sum of the angles of a triangle is 180°. In Riemannian geometry this is not the case. Explicit axioms are those stated in the paper.
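The contrast between the two sets of axioms can be checked with a toy computation. This sketch (illustrative only) uses Girard's theorem, which says that on a unit sphere a triangle's angle sum exceeds 180° by its area in radians:

```python
import math

# Euclidean axiom: the angles of any planar triangle sum to 180 degrees.
planar = [60, 60, 60]
assert sum(planar) == 180

# On a sphere (Riemannian geometry) the axiom fails. Girard's theorem:
# for a triangle on a unit sphere, angle sum = 180 deg + spherical excess,
# where the excess (in radians) equals the triangle's area.
# Example: the north pole plus two equator points 90 deg apart gives
# three right angles -- one octant of the sphere, area = 4*pi/8 = pi/2.
octant_area = 4 * math.pi / 8
angle_sum_deg = 180 + math.degrees(octant_area)
print(angle_sum_deg)  # approximately 270, not 180
```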

Implicit axioms are axioms that are taken for granted to be true and therefore not stated, or considered too trivial to be mentioned. More often than not, the author is not aware he or she is using or stating an implicit axiom.

For example, “mass causes a gravitational field” is an implicit axiom, as we cannot, with our current theoretical foundations or our current technologies, prove either way that mass is or is not the source of a gravitational field. This axiom is also considered trivial, because what else could the source be?

But wait, didn’t Einstein…? Yes, he did…

Mass is a carryover from Newton. It shows how difficult it is to break from tradition, even when we are breaking from tradition! Because Newton found that mass was an excellent means (a “proxy”, to be technically rigorous) of determining gravitational acceleration in mathematical form, mass had to be the source. All tests of Einstein’s relativity test the field’s characteristics, not how the source creates the gravitational field.

But our understanding of the world has changed substantially since Newton and Einstein. We know that quarks lie at the center of matter and are present in the same ‘amount’ as mass. So how does one tell the difference between quark interactions and mass as the gravitational source?

The importance of implicit axioms in particular, and axioms in general, is that once we recognize them we can change them, and so drive fundamental changes in theory and technology. I asked the questions: what is gravity modification, and how can we do it? These questions are at best vague, but they were as good a starting point as any. Life, however, happens backwards. We get the answer, and only then do we recognize the precise question we were attempting to ask!

When I started researching gravity modification in 1999, I just had this sense that gravity modification should be possible in our lifetimes, but I did not know what the question was. It was all vague and unclear at that time, but I was very strict about the scope of my investigation. I would only deal with velocity and acceleration.

I spent 8 years searching, examining, discarding, testing and theorizing about anomalies, trying to get a handle on what gravity modification could be. Finally, in 2007, I started building numerical models of how gravitational acceleration could work in spacetime. In February 2008 I discovered g = τc², and at that moment I knew the question: can gravitational acceleration be described mathematically without knowing the mass of the planet or star?

So the implicit axiom that mass is required for gravitational acceleration is no longer valid, and because of that we now have propulsion physics.

If, in the spirit of the Kline Directive, you want to explore what others have not, and seek what others will not, my advice is that when you read a paper ask yourself, what are the implicit and explicit axioms in the paper?

Previous post in the Kline Directive series.

Next post in the Kline Directive series.

—————————————————————————————————

Benjamin T Solomon is the author and principal investigator of the 12-year study into the theoretical and technological feasibility of gravity modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.

Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.


…here’s Tom with the Weather.
That right there is comedian/philosopher Bill Hicks, sadly no longer with us. One imagines he would be pleased and completely unsurprised to learn that serious scientific minds are considering and actually finding support for the theory that our reality could be a kind of simulation. That means, for example, a string of daisy-chained IBM Super-Deep-Blue Gene Quantum Watson computers from 2042 could be running a History of the Universe program, and depending on your solipsistic preferences, either you are or we are the character(s).

It’s been in the news a lot of late, but — no way, right?

Because dude, I’m totally real
Despite being utterly unable to even begin thinking about how to consider what real even means, the everyday average rational person would probably assign this to the sovereign realm of unemployable philosophy majors or under the Whatever, Who Cares? or Oh, That’s Interesting I Gotta Go Now! categories. Okay fine, but on the other side of the intellectual coin, vis-à-vis recent technological advancement, of late it’s actually being seriously considered by serious people using big words they’ve learned at endless college whilst collecting letters after their names and doin’ research and writin’ and gettin’ association memberships and such.

So… why now?

Well, basically, it’s getting hard to ignore.
It’s not a new topic, it’s been hammered by philosophy and religion since like, thought happened. But now it’s getting some actual real science to stir things up. And it’s complicated, occasionally obtuse stuff — theories are spread out across various disciplines, and no one’s really keeping a decent flowchart.

So, what follows is an effort to encapsulate these ideas, and that’s daunting — it’s incredibly difficult to focus on writing when you’re wondering if you really have fingers or eyes. Along with links to some articles with links to some papers, what follows is Anthrobotic’s CliffsNotes on the intersection of physics, computer science, probability, and evidence for/against reality being real (and how that all brings us back to well, God).
You know, light fare.

First — Maybe we know how the universe works: Fantastically simplified, as our understanding deepens, it appears more and more the case that, in a manner of speaking, the universe sort of “computes” itself based on the principles of quantum mechanics. Right now, humanity’s fastest and sexiest supercomputers can simulate only extremely tiny fractions of the natural universe as we understand it (contrasted to the macro-scale inferential Bolshoi Simulation). But of course we all know the brute power of our computational technology is increasing dramatically like every few seconds, and even awesomer, we are learning how to build quantum computers, machines that calculate based on the underlying principles of existence in our universe — this could thrust the game into superdrive. So, given ever-accelerating computing power, and given that we can already simulate tiny fractions of the universe, you logically have to consider the possibility: If the universe works in a way we can exactly simulate, and we give it a shot, then relatively speaking what we make ceases to be a simulation, i.e., we’ve effectively created a new reality, a new universe (ummm… God?). So, the question is how do we know that we haven’t already done that? Or, otherwise stated: what if our eventual ability to create perfect reality simulations with computers is itself a simulation being created by a computer? Well, we can’t answer this — we can’t know. Unless…
[New Scientist’s Special Reality Issue]
[D-Wave’s Quantum Computer]
[Possible Large-scale Quantum Computing]
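As a toy illustration of why even our fastest machines can simulate only "extremely tiny fractions" of nature: the classical cost of tracking a quantum state doubles with every particle added. This hypothetical sketch brute-forces a few qubits with a full state vector (nothing here is from the articles above, it is just an illustrative minimal simulator):

```python
import numpy as np

# The state vector of n qubits has 2**n complex amplitudes, so the
# classical memory cost doubles with each added qubit -- the reason
# only minuscule pieces of nature can be simulated exactly.
n = 3                                # 3 qubits -> 8 amplitudes
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                       # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def apply_to_qubit(gate, qubit, state, n):
    """Apply a one-qubit gate to the given qubit of an n-qubit state."""
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, gate if q == qubit else np.eye(2))
    return op @ state

for q in range(n):                   # put every qubit in superposition
    state = apply_to_qubit(H, q, state, n)

probs = np.abs(state)**2             # Born rule: uniform over 8 outcomes
print(np.round(probs, 3))            # each outcome has probability 0.125
```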

Second — Maybe we see it working: The universe seems to be metaphorically “pixelated.” This means that even though it’s a 50 billion trillion gajillion megapixel JPEG, if we juice the zooming-in and drill down farther and farther and farther, we’ll eventually see a bunch of discrete chunks of matter, or quanta, as the kids call them — these are the so-called pixels of the universe. Additionally, a team of lab coats at the University of Bonn think they might have a workable theory describing the underlying lattice, or existential re-bar in the foundation of observable reality (upon which the “pixels” would be arranged). All this implies, in a way, that the universe is both designed and finite (uh-oh, getting closer to the God issue). Even at ferociously complex levels, something finite can be measured and calculated and can, with sufficiently hardcore computers, be simulated very, very well. This guy Rich Terrile, a pretty serious NASA scientist, cites the pixelation thingy and poses a video game analogy: think of any first-person shooter — you cannot immerse your perspective into the entirety of the game, you can only interact with what is in your bubble of perception, and everywhere you go there is an underlying structure to the environment. Kinda sounds like, you know, life — right? So, what if the human brain is really just the greatest virtual reality engine ever conceived, and your character, your life, is merely a program wandering around a massively open game map, playing… well, you?
[Lattice Theory from the U of Bonn]
[NASA guy Rich Terrile at Vice]
[Kurzweil AI’s Technical Take on Terrile]

Thirdly — Turns out there’s a reasonable likelihood: While the above discussions on the physical properties of matter and our ability to one day copy & paste the universe are intriguing, it also turns out there’s a much simpler and straightforward issue to consider: there’s this annoyingly simplistic yet valid thought exercise posited by Swedish philosopher/economist/futurist Nick Bostrom, a dude way smarter than most humans. Basically he says we’ve got three options: 1. Civilizations destroy themselves before reaching a level of technological prowess necessary to simulate the universe; 2. Advanced civilizations couldn’t give two shits about simulating our primitive minds; or 3. Reality is a simulation. Sure, a decent probability, but sounds way oversimplified, right?
Well go read it. Doing so might ruin your day, JSYK.
[Summary of Bostrom’s Simulation Hypothesis]
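The core of the argument is just a fraction, and it can be sketched numerically. Every number below is a hypothetical placeholder, chosen only to show how quickly simulated observers outnumber unsimulated ones once any simulating civilizations exist at all:

```python
# Toy version of the simulation-argument fraction (numbers hypothetical):
# even if 99% of civilizations never bother, the ones that do run so many
# histories that simulated observers swamp the unsimulated ones.
real_civilizations = 1_000
fraction_that_simulate = 0.01        # only 1% ever run ancestor simulations
sims_per_simulating_civ = 1_000      # each runs a thousand full histories
observers_per_history = 1            # normalize population per civilization

simulated = (real_civilizations * fraction_that_simulate
             * sims_per_simulating_civ * observers_per_history)
unsimulated = real_civilizations * observers_per_history

p_simulated = simulated / (simulated + unsimulated)
print(round(p_simulated, 3))  # ~0.909 despite 99% abstention
```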

Lastly — Data against is lacking: Any idea how much evidence or objective justification we have for the standard, accepted-without-question notion that reality is like, you know… real, or whatever? None. Zero. Of course the absence of evidence proves nothing, but given that we do have decent theories on how/why simulation theory is feasible, it follows that blithely accepting that reality is not a simulation is an intrinsically more radical position. Why would a thinking being think that? Just because they know it’s true? Believing 100% without question that you are a verifiably physical, corporeal, technology-wielding carbon-based organic primate is a massive leap of completely unjustified faith.
Oh, Jesus. So to speak.

If we really consider simulation theory, we must of course ask: who built the first one? And was it even an original? Is it really just turtles all the way down, Professor Hawking?

Okay, okay — that means it’s God time now
Now let’s see, what’s that other thing in human life that, based on a wild leap of faith, gets an equally monumental evidentiary pass? Well, proving or disproving the existence of god is effectively the same quandary posed by simulation theory, but with one caveat: we actually do have some decent scientific observations and theories and probabilities supporting simulation theory. That whole God phenomenon is pretty much hearsay, anecdotal at best. However, very interestingly, rather than negating it, simulation theory actually represents a kind of back-door validation of creationism. Here’s the simple logic:

If humans can simulate a universe, humans are its creator.
Accept the fact that linear time is a construct.
The process repeats infinitely.
We’ll build the next one.
The loop is closed.

God is us.

Heretical speculation on iteration
Ever wonder why the older polytheistic religions involved the gods just kinda setting guidelines for behavior, without necessarily demanding the love and complete & total devotion of humans? Maybe those universes were 1st-gen or beta products. You know, just like it used to take a team of geeks to run the building-sized ENIAC, the first universe simulations required a whole host of creators who could make some general rules but just couldn’t manage every single little detail.

Now, the newer religions tend to be monotheistic, and god wants you to love him and only him and no one else and dedicate your life to him. But just make sure to follow his rules, and take comfort that you’re right and everyone else is completely hosed and going to hell. The modern versions of god, both omnipotent and omniscient, seem more like super-lonely cosmically powerful cat ladies who will delete your ass if you don’t behave yourself and love them in just the right way. So, the newer universes are probably run as a background app on the iPhone 26, and managed by… individuals. Perhaps individuals of questionable character.

The home game:
Latest title for the 2042 XBOX-Watson³ Quantum PlayStation Cube:*
Crappy 1993 graphic design simulation: 100% Effective!

*Manufacturer assumes no responsibility for inherently emergent anomalies, useless
inventions by game characters, or evolutionary cul de sacs including but not limited to:
The duck-billed platypus, hippies, meat in a can, reality TV, the TSA,
mayonnaise, Sony VAIO products, natto, fundamentalist religious idiots,
people who don’t like homos, singers under 21, hangovers, coffee made
from cat shit, passionfruit iced tea, and the pacific garbage patch.

And hey, if true, it’s not exactly bad news
All these ideas are merely hypotheses, and for most humans the practical or theoretical proof or disproof would probably result in the same indifferent shrug. For those of us who like to rub a few brain cells together from time to time, attempting both to understand the fundamental nature of our reality/simulation, and to guess at whether or not we too might someday be capable of simulating ourselves, well — these are some goddamn profound ideas.

So, no need for hand wringing — let’s get on with our character arc and/or real lives. While simulation theory definitely causes reflexive revulsion, “just a simulation” isn’t necessarily pejorative. Sure, if we take a look at the current state of our own computer simulations and A.I. constructs, it is rather insulting. So if we truly are living in a simulation, you gotta give it up to the creator(s), because it’s a goddamn amazing piece of technological achievement.

Addendum: if this still isn’t sinking in, the brilliant
Dinosaur Comics might do a better job explaining:

(This post originally published I think like two days
ago at technosnark hub www.anthrobotic.com.
)

A recent article in Science Daily reported on efforts to measure Cesium-137 and Cesium-134 in bottom-dwelling fish off the east coast of Japan, to understand the lingering effects and potential public health implications. Given that this was the largest accidental release of radiation into the ocean in history, it is not surprising that many demersal fish are found above the limits for seafood consumption. What is more significant is that the contamination in almost all classifications of fish is not declining, suggesting that contaminated sediment on the seafloor could be providing a continuing source. This raises the concern that fallout from any further nuclear accidents would accumulate over time.
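The isotope pair itself carries the evidence: physical decay alone should pull Cesium-134 (half-life roughly 2.06 years) down far faster than Cesium-137 (roughly 30.1 years), so flat measured levels point to a continuing source rather than the original release. A quick sketch of the decay arithmetic (half-lives approximate):

```python
import math

# Fraction of an isotope remaining after `years`, from pure radioactive
# decay: N(t)/N0 = exp(-ln(2) * t / half_life).
def remaining_fraction(years, half_life):
    return math.exp(-math.log(2) * years / half_life)

# If fish contamination tracked decay alone, Cs-134 would fall off
# quickly while Cs-137 lingered; levels staying flat for both instead
# suggest resupply, e.g. from contaminated seafloor sediment.
for t in (1, 2, 5):
    cs134 = remaining_fraction(t, 2.06)   # Cs-134 half-life ~2.06 y
    cs137 = remaining_fraction(t, 30.1)   # Cs-137 half-life ~30.1 y
    print(f"after {t} y: Cs-134 {cs134:.2f}, Cs-137 {cs137:.2f}")
```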

One would question whether the IAEA is taking a strong enough position on the permitted locations of nuclear power stations. It perplexes me that the main objections to Iran attaining nuclear power are strategic/military. Whilst Iran does not face the tsunami threat that Japan does, Iran is one of the most seismically active countries in the world, where destructive earthquakes often occur, because it is crossed by several major fault lines that cover at least 90% of the country. How robust are nuclear power stations to a major quake? The IAEA needs to expand its role to advise countries on which regions would be unsuitable for nuclear power stations, such as in Iran and Japan. Otherwise we are risking a lasting environmental impact; it is only a matter of time.

How the Diablo Canyon nuclear plant, which sits just miles from the notoriously active San Andreas fault, was allowed to be located there, let alone operate for a year and a half with its emergency systems disabled (according to a 2010 safety review by the federal Nuclear Regulatory Commission), is hard to fathom. It seems as if there is a missing link worldwide between the IAEA and regional planning authorities. Or perhaps it comes down simply to responsible government.

2012 has already been a bad omen when it comes to humankind solving the dangers ahead. Perhaps an early review will make next January 1 brighter.

There have been serious arguments questioning the existence of Hawking radiation, which is a major reason most scientists think black-hole collider research is safe, yet there has been no increase in calls for a safety conference. Once, because classification kept the matter from the general public, there was only a small debate over whether the first atomic explosion would set off a chain reaction that consumed the earth. On March 1, 1954, the lithium that had been put, for other purposes, into what was intended to be a small hydrogen-bomb test created by far the dirtiest atomic explosion ever, as the natives of Bikini Atoll woke up to two suns in the sky that morning. History would be different had the first tests gravely injured people. People in the future will eventually look back on how humankind dealt with the possibility of instantly destroying itself as more important than how it dealt with slowly producing ever more doomsday-like weapons.

With genetic engineering the results are amazing: goats with hair thousands of times stronger than wool would gain some added protection from predators. But think what would happen if foot-long, indigestible fibers, possibly with some sharp spots, were accidentally bred into goat meat, or if very un-tasty animals spread in the wild throughout the ecosystem. In 2001 a genetic insecticide intended only to protect corn destined for animal feed spread, by wind and cross-breeding, to corn throughout the northern hemisphere. Bees drinking corn syrup from one discarded soda can can endanger an entire hive. Now there is fear of this gene getting into wheat, rice and other plants that don’t rely on insects in some way. The effort to require food to be labeled for genetically modified ingredients doesn’t address this issue, and may actually distract from warning of the dangers ahead.

There are some who say bad people want to play God and create a god particle; likewise, some say evil Monsanto, with bad motives, is trying to prevent us from buying safe food. This attitude doesn’t help create a safer future, or empower those trying to rationally deal with the danger.

The next danger is the attempt to impose Helter Skelter on the world with an Islam-baiting movie. To Coptic Christians, it was to be a movie about their mistreatment in Egypt. The actors were hired to perform in a movie about flying saucers landing 2,000 years ago to help a man who never needed a shave, had a faraway look in his eyes, loved his donkey, and didn’t know who his father was. Islam-haters fund-raised to create a movie smearing bin Laden, while Muslims were tricked into seeing it by movie posters in Arabic.

Somehow, despite the false claim of 100 Jewish donors, no one who looked Jewish and rich was attacked in Los Angeles. A prompt exposé by Coptic Christian religious leaders, rather than by anyone else, of the fact that the man who pretended to be a Coptic Christian refugee and the supposed rich Jewish businessman were the same person helped prevent attacks on Copts in Egypt. Not one relative of the four Americans killed in Benghazi wanted Romney to use their name for campaign purposes. It is amazing that, among all the people killed in the riots around the world following this hate trailer, no American victim had relatives who wanted their name used to promote anger at Muslims.

It is a bad omen that this was looked upon by many as a free-speech issue and not a terror attack. When Lebanese Prime Minister Rafik Hariri was killed, to provoke tit-for-tat revenge killings, by a van that had been stolen a year earlier in Japan, the UN took charge of the investigation. When Charles Manson tried to impose a Helter Skelter race war on the earth, he didn’t come close enough to warrant being punished for a separate crime. If these two previous terror attacks on the world had been carried out in a way that no one was killed in the initial attack, is the earth really dumb enough to discuss them as a free-speech issue?

Michael Jackson’s sister, before he died, was alarmed, claiming that Michael Jackson’s handlers were systematically putting him under stress to place her brother in harm’s way. Conrad Murray this year wants a new trial, insisting that he never would have given Michael orally such a badly mixed dose of anesthesia; as no one seems to remember, he had been distracted by a call on his cell phone offering an important business deal. The world is full of incidents where professionals commit a crime in such a complex, convoluted way that it is hard to prosecute as a crime. Perhaps all these incidents could be looked into again.

It would be helpful if those stereotyped as unconcerned spoke out: a skydiver warning about the Collider, or an atheist leader and/or smut dealer speaking out on the hateful religious film attack and calling for investigation and prosecution. This Lifeboat site can accomplish more when it joins in where, stereotypically, one wouldn’t expect it to.

January through October 2012 hasn’t been a good omen for humankind’s ability to solve its problems and deal with danger; perhaps doing a year in review in October, instead of waiting till January, will make next January’s review brighter.

Most blogs expire their comments; here, one can still comment below months from now:

http://readersupportednews.org/pm-section/22-22/14022-ambassador-stevens-is-a-hero-four-heroes-who-ended-a-helter-skelter-chain

http://richardkanepa.blogspot.com/2012/10/it-is-only-human-to-be-angry-over-ones_18.html