
Bioquark, Inc. (http://www.bioquark.com), a company focused on the development of novel biologics for complex regeneration and disease reversion, and Revita Life Sciences (http://revitalife.co.in), a biotechnology company focused on translational therapeutic applications of autologous stem cells, have announced that they have received IRB approval for a study focusing on a novel combinatorial approach to clinical intervention in the state of brain death in humans.

This first trial within the portfolio of Bioquark’s Reanima Project (http://www.reanima.tech) is entitled “Non-randomized, Open-labeled, Interventional, Single Group, Proof of Concept Study With Multi-modality Approach in Cases of Brain Death Due to Traumatic Brain Injury Having Diffuse Axonal Injury” (https://clinicaltrials.gov/ct2/show/NCT02742857?term=bioquark&rank=1). It will enroll an initial 20 subjects and be conducted at Anupam Hospital in Rudrapur, Uttarakhand, India.


“We are very excited about the approval of our protocol,” said Ira S. Pastor, CEO, Bioquark Inc. “With the convergence of the disciplines of regenerative biology, cognitive neuroscience, and clinical resuscitation, we are poised to delve into an area of scientific understanding previously inaccessible with existing technologies.”

Death is defined as the termination of all biological functions that sustain a living organism. Brain death, the complete and irreversible loss of brain function (including involuntary activity necessary to sustain life) as defined in the 1968 report of the Ad Hoc Committee of the Harvard Medical School, is the legal definition of human death in most countries around the world. Whether directly through trauma or indirectly through secondary disease indications, brain death is the final pathological state that over 60 million people globally pass through each year.

While human beings lack substantial regenerative capabilities in the CNS, many non-human species, such as amphibians, planarians, and certain fish, can repair, regenerate and remodel substantial portions of their brain and brain stem even after critical life-threatening trauma.


Additionally, recent studies on complex brain regeneration in these organisms have highlighted unique findings in relation to the storage of memories following destruction of the entire brain, which may have wide-ranging implications for our understanding of consciousness and the stability of memory persistence.

“Through our study, we will gain unique insights into the state of human brain death, which will have important connections to future therapeutic development for other severe disorders of consciousness, such as coma, and the vegetative and minimally conscious states, as well as a range of degenerative CNS conditions, including Alzheimer’s and Parkinson’s disease,” said Dr. Sergei Paylian, Founder, President, and Chief Science Officer of Bioquark Inc.

Over the years, clinical science has focused heavily on preventing such life-and-death transitions and has made some initial progress with suspended-animation technologies, such as therapeutic hypothermia. However, once humans transition through the brain death window, currently defined by the medical establishment as “irreversible”, they are technically no longer alive, despite the fact that human bodies can still circulate blood, digest food, excrete waste, balance hormones, grow, sexually mature, heal wounds, spike a fever, and gestate and deliver a baby. Thought leaders even acknowledge that recently brain-dead humans may still have residual blood flow and electrical nests of activity in their brains, just not enough to allow for integrated functioning of the organism as a whole.


“We look forward to working closely with Bioquark Inc. on this cutting edge clinical initiative,” said Dr. Himanshu Bansal, Managing Director of Revita Life Sciences.

About Bioquark, Inc.

Bioquark Inc. is focused on the development of natural, biologic-based products, services, and technologies, with the goal of curing a wide range of diseases, as well as effecting complex regeneration. Bioquark is developing both biological pharmaceutical candidates and products for the global consumer health and wellness market segments.

About Revita Life Sciences

Revita Life Sciences is a biotechnology company focused on the development of stem cell therapies that target areas of significant unmet medical need. Revita is led by Dr. Himanshu Bansal, MD, PhD, who has spent over two decades developing novel MRI-based classifications of spinal cord injuries, as well as comprehensive treatment protocols using autologous tissues, including bone marrow stem cells, dural nerve grafts, nasal olfactory tissues, and omental transposition.

“Virtually all new fossil fuel-burning power-generation capacity will end up ‘stranded’. This is the argument of a paper by academics at Oxford University. We have grown used to the idea that it will be impossible to burn a large portion of estimated reserves of fossil fuels if the likely rise in global mean temperatures is to be kept below 2C. But fuels are not the only assets that might be stranded. A similar logic can be applied to parts of the capital stock.”


“He is not here; He has risen,” — Matthew 28:6

As billions of Christians around the world get ready to celebrate the Easter festival and holiday, we pause to appreciate the awe-inspiring phenomenon of resurrection.


In religious and mythological contexts, in both Western and Eastern societies, well-known and less common names appear, such as Attis, Dionysus, Ganesha, Krishna, Lemminkainen, Odin, Osiris, Persephone, Quetzalcoatl, and Tammuz, all of whom were reborn in the spark of the divine.

In the natural world, other names emerge, which are more ancient and less familiar, but equally fascinating, such as Deinococcus radiodurans, Turritopsis nutricula, and Milnesium tardigradum, all of whose abilities to rise from the ashes of death, or turn back time to start life again, are only beginning to be fully appreciated by the scientific world.


In the current era, from an information-technology-centric angle, proponents of a technological singularity and transhumanism are placing bets on artificial intelligence, virtual reality, wearable devices, and other non-biological methods to create a future connecting humans to the digital world.

This Silicon Valley “electronic resurrection” model has prompted extensive deliberation, and various factions have formed, from those minds that feel we should slow down and understand the deeper implications of a post-biologic state (Elon Musk, Stephen Hawking, Bill Gates, the Vatican), to those that are steaming full speed ahead (Ray Kurzweil / Google), betting that humans will shortly be able to “transcend the limitations of biology”.


However, deferring an in-depth Skynet/Matrix discussion for now: is this debate clouding other possibilities that we have forgotten about, or have not yet fully considered?

Today, we find ourselves at an interesting point in history where the disciplines of regenerative sciences, evolutionary medicine, and complex systems biology are converging to give us an understanding of the cycle of life and death that is orders of magnitude more complex than what we had only a few years ago.

In addition to the aforementioned species that are capable of biologic reanimation and turning back time, we show no less respect for those who possess other superhuman capabilities, such as magnetoreception, electrosensing, infrared imaging, and ultrasound detection, all of which nature has been optimizing over hundreds of millions of years, and which provide important clues to the untapped possibilities that currently exist in direct biological interfaces with the physical fabric of the universe.


The biologic information processing occurring in related aneural organisms and multicellular colony aggregators is no less fascinating, and potentially challenges the notion of the brain as the sole repository of long-term encoded information.

Additionally, studies on memory following the destruction of all, or significant parts, of the brain in regenerative organisms such as planarians, amphibians, metamorphic insects, and small hibernating mammals have wide-ranging implications for our understanding of consciousness, as well as for the centuries-long debate between materialists and dualists as to whether we should focus our attention “in here” or “out there”.

I am not opposed to studying either path, but I feel that we have the potential to learn a lot more about the topic of “out there” in the very near future.


The study of brain death in human beings, and the application of novel tools for neuro-regeneration and neuro-reanimation, for the first time offer us amazing opportunities to start from a clean slate, and answer questions that have long remained unanswered, as well as uncover a knowledge set previously thought unreachable.

Aside from myriad applications across the range of degenerative CNS indications, as well as disorders of consciousness, such work will allow us to open a new chapter related to many other esoteric topics that have baffled the scientific community for years and fallen into the realm of obscure curiosities.


From the well-documented phenomenon of terminal lucidity in end-stage Alzheimer’s patients, to the mysteries of induced savant syndrome, to more arcane topics such as the thousands of cases of children who claim to remember previous lives: by studying death, and subsequently the “biotechnological resurrection” of life, we can for the first time peek through the window and build a whole new knowledge base about our place in, and our interaction with, the very structure of reality.

We are entering a very exciting era of discovery and exploration.


About the author

Ira S. Pastor is the Chief Executive Officer of Bioquark Inc. (www.bioquark.com), an innovative life sciences company focusing on developing novel biologic solutions for human regeneration, repair, and rejuvenation. He is also on the board of the Reanima Project (www.reanima.tech).

At least in public relations terms, transhumanism is a house divided against itself. On the one hand, there are the ingenious efforts of Zoltan Istvan – in the guise of an ongoing US presidential bid — to promote an upbeat image of the movement by focusing on human life extension and other tech-based forms of empowerment that might appeal to ordinary voters. On the other hand, there is transhumanism’s image in the ‘serious’ mainstream media, which is currently dominated by Nick Bostrom’s warnings of a superintelligence-based apocalypse. The smart machines will eat not only our jobs but eat us as well, if we don’t introduce enough security measures.

Of course, as a founder of contemporary transhumanism, Bostrom does not wish to stop artificial intelligence research, and he ultimately believes that we can prevent worst case scenarios if we act now. Thus, we see a growing trade in the management of ‘existential risks’, which focusses on how we might prevent if not predict any such tech-based species-annihilating prospects. Nevertheless, this turn of events has made some observers reasonably wonder whether indeed it might not be better simply to put a halt to artificial intelligence research altogether. As a result, the precautionary principle, previously invoked in the context of environmental and health policy, has been given a new lease on life as a generalized world-view.

The idea of ‘existential risk’ capitalizes on the prospect of a very unlikely event that, were it to come to pass, would be extremely catastrophic for the human condition. Thus, the high value of the outcome psychologically counterbalances its low probability. It’s a bit like Pascal’s wager, whereby the potentially negative consequences of not believing in God – to wit, eternal damnation – rationally compel you to believe in God, despite your instinctive doubts about the deity’s existence.
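To see the arithmetic behind that counterbalancing in its crudest form (the numbers below are invented purely for illustration and are not drawn from Bostrom or any published risk estimate): suppose a catastrophe has a probability of 1 in 1,000,000 per year but would wipe out something valued at 10,000,000,000 units.

  expected annual loss = probability × magnitude
                       = (1 / 1,000,000) × 10,000,000,000
                       = 10,000 units

Even with a vanishingly small probability, the sheer size of the stake keeps the expected loss larger than that of many everyday risks whose probabilities are far higher but whose stakes are far smaller.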

However, this line of reasoning underestimates both the weakness and the strength of human intelligence. On the one hand, we’re not so powerful as to create a ‘weapon of mass destruction’, however defined, that could annihilate all of humanity; on the other, we’re not so weak as to be unable to recover from whatever errors of design or judgement that might be committed in the normal advance of science and technology in the human life-world. I make this point not to counsel complacency but to question whether ‘existential risk’ is really the high concept that it is cracked up to be. I don’t believe it is.

In fact, we would do better to revisit the signature Cold War way of thinking about these matters, which the RAND Corporation strategist Herman Kahn dubbed ‘thinking the unthinkable’. What he had in mind was the aftermath of a thermonuclear war in which, say, 25–50% of the world’s population is wiped out over a relatively short period of time. How do we rebuild humanity under those circumstances? This is not so different from ‘the worst case scenarios’ proposed nowadays, even under conditions of severe global warming. Kahn’s point was that we need now to come up with the relevant new technologies that would be necessary the day after Doomsday. Moreover, such a strategy was likely to be politically more tractable than trying actively to prevent Doomsday, say, through unilateral nuclear disarmament.

And indeed, we did largely follow Kahn’s advice. And precisely because Doomsday never happened, we ended up in peacetime with the riches that we have come to associate with Silicon Valley, a major beneficiary of the US federal largesse during the Cold War. The internet was developed as a distributed communication network in case the more centralized telephone system were taken down during a nuclear attack. This sort of ‘ahead of the curve’ thinking is characteristic of military-based innovation generally. Warfare focuses minds on what’s dispensable and what’s necessary to preserve – and indeed, how to enhance that which is necessary to preserve. It is truly a context in which we can say that ‘necessity is the mother of invention’. Once again, and most importantly, we win even – and especially – if Doomsday never happens.

An interesting economic precedent for this general line of thought, which I have associated with transhumanism’s ‘proactionary principle’, is what the mid-twentieth century Harvard economic historian Alexander Gerschenkron called ‘the relative advantage of backwardness’. The basic idea is that each successive nation can industrialise more quickly by learning from its predecessors without having to follow in their footsteps. The ‘learning’ amounts to innovating more efficient means of achieving and often surpassing the predecessors’ level of development. The post-catastrophic humanity would be in a similar position to benefit from this sense of ‘backwardness’ on a global scale vis-à-vis the pre-catastrophic humanity.

Doomsday scenarios invariably invite discussions of our species’ ‘resilience’ and ‘adaptability’, but these terms are far from clear. I prefer to start with a distinction drawn in cognitive archaeology between ‘reliable’ and ‘maintainable’ artefacts. Reliable artefacts tend to be ‘overdesigned’, which is to say, they can handle all the anticipated forms of stress, but most of those never happen. Maintainable artefacts tend to be ‘underdesigned’, which means that they make it easy for the user to make replacements when disasters strike, which are assumed to be unpredictable.

In a sense, ‘resilience’ and ‘adaptability’ could be identified with either position, but the Cold War’s proactionary approach to Doomsday suggests that the latter would be preferable. In other words, we want a society that is not so dependent on the likely scenarios – including the likely negative ones — that we couldn’t cope in case a very unlikely, very negative scenario comes to pass. Recalling US Defence Secretary Donald Rumsfeld’s game-theoretic formulation, we need to address the ‘unknown unknowns’, not merely the ‘known unknowns’. Good candidates for the relevant ‘unknown unknowns’ are the interaction effects of relatively independent research and societal trends, which while benign in themselves may produce malign consequences — call them ‘emergent’, if you wish.

It is now time for social scientists to present both expert and lay subjects with such emergent scenarios and ask them to pinpoint their ‘negativity’: What would be potentially lost in the various scenarios which would be vital to sustain the ‘human condition’, however defined? The answers would provide the basis for future innovation policy – namely, to recover if not strengthen these vital features in a new guise. Even if the resulting innovations prove unnecessary in the sense that the Doomsday scenarios don’t come to pass, nevertheless they will make our normal lives better – as has been the long-term effect of the Cold War.

References

Bleed, P. (1986). ‘The optimal design of hunting weapons: Maintainability or reliability?’ American Antiquity 51: 737–47.

Bostrom, N. (2014). Superintelligence. Oxford: Oxford University Press.

Fuller, S. and Lipinska, V. (2014). The Proactionary Imperative. London: Palgrave (pp. 35–36).

Gerschenkron, A. (1962). Economic Backwardness in Historical Perspective. Cambridge MA: Harvard University Press.

Kahn, H. (1960). On Thermonuclear War. Princeton: Princeton University Press.

In professional cycling, it’s well known that a pack of 40 or 50 riders can ride faster and more efficiently than a single rider or small group. As such, you’ll often see cycling teams with different goals in a race work together to chase down a breakaway before the finish line.

This analogy is one way to think about collaborative multi-agent intelligent systems, which are poised to change the technology landscape for individuals, businesses, and governments, says Dr. Mehdi Dastani, a computer scientist at Utrecht University. The proliferation of these multi-agent systems could lead to significant systemic changes across society in the next decade.

Image credit: ResearchGate

“Multi-agent systems are basically a kind of distributed system with sets of software. A set can be very large. They are autonomous, they make their own decisions, they can perceive their environment,” Dastani said. “They can perceive other agents and they can communicate, collaborate or compete to get certain resources. A multi-agent system can be conceived as a set of individual softwares that interact.”

As a simple example of multi-agent systems, Dastani cited Internet mail servers, which connect with each other and exchange messages and packets of information. On a larger scale, he noted eBay’s online auctions, which use multi-agent systems to let a buyer find an item, enter a maximum price, and then, if needed, raise the bid on the buyer’s behalf as the auction closes. Driverless cars are another great example of a multi-agent system, in which many software agents must communicate to make complicated decisions.
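As a toy illustration of the auction example, the sketch below models proxy-bidding agents that raise a bid on a buyer’s behalf up to a private maximum. The agent names, bid increment, and auction loop are invented for illustration; this is not eBay’s actual implementation.

# Minimal sketch of proxy-bidding agents in a single-item auction (illustrative only).
class BidderAgent:
    def __init__(self, name, max_price):
        self.name = name
        self.max_price = max_price  # private ceiling this agent will never exceed

    def respond(self, current_price, increment):
        """Return a counter-bid if the agent can still afford one, otherwise None."""
        next_bid = current_price + increment
        return next_bid if next_bid <= self.max_price else None


def run_auction(agents, start_price=1.0, increment=1.0):
    price, leader = start_price, None
    bidding = True
    while bidding:
        bidding = False
        for agent in agents:
            if agent is leader:
                continue  # the current leader has no reason to outbid itself
            bid = agent.respond(price, increment)
            if bid is not None:
                price, leader = bid, agent
                bidding = True
    return leader, price


winner, final_price = run_auction(
    [BidderAgent("alice", 25.0), BidderAgent("bob", 40.0), BidderAgent("carol", 32.0)]
)
print(f"{winner.name} wins at {final_price:.2f}")  # prints: bob wins at 32.00

The agents act autonomously on local rules, yet the auction outcome emerges from their interaction, which is the sense in which Dastani describes intelligence as an emergent property of the system.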

Dastani noted that multi-agent systems dovetail nicely with today’s artificial intelligence. In the early days of AI, intelligence was a property of one single entity of software that could, for example, understand human language or perceive visual inputs to make its decisions, interact, or perform an action. As multi-agent systems have been developed, those single agents interact and receive information from other agents that they may lack, which allows them to collectively create greater functionality and more intelligent behavior.

“When we consider (global) trade, we basically define a kind of interaction in terms of action. This way of interacting among individuals might make their market more efficient. Their products might get to market for a better price, as the amount of time (to produce them) might be reduced,” Dastani said. “When we get into multi-agent systems, we consider intelligence as sort of an emergent phenomena that can be very functional and have properties like optimal global decision or situations of state.”

Other potential applications of multi-agent systems include designs for energy-efficient appliances, such as a washing machine that can contact an energy provider so that it operates during off-peak hours or a factory that wants to flatten out its peak energy use, he said. Municipal entities can also use multi-agent systems for planning, such as simulating traffic patterns to improve traffic efficiency.

Looking to the future, Dastani notes the parallels between multi-agent systems and Software as a Service (SaaS) computing, which could shed light on how multi-agent systems might evolve. Just as SaaS combines various applications for on-demand use, multi-agent systems can combine functionalities of various software to provide more complex solutions. The key to those more complex interactions, he added, is to develop a system that will govern the interactions of multi-agent systems and overcome the inefficiencies that can be created on the path toward functionality.

“The idea is the optimal interaction that we can design or we can have. Nevertheless, that doesn’t mean that multi-agent systems are by definition, efficient,” Dastani said. “We can have many processes that communicate, make an enormous number of messages and use a huge amount of resources and they still can not have a sort of interesting functionality. The whole idea is, how can we understand and analyze the interactions? How can we decide which interaction is better than the other interactions or more efficient or more productive?”

A Lifeboat guest editorial

Richelle Ross is a sophomore at the University of Florida, focusing on statistics and data science. As a crypto consultant, she educates far beyond the campus. Her insight on the evolution and future of Bitcoin has been featured in national publications. Richelle writes for CoinDesk, LinkedIn, and Quora, providing analysis on Bitcoin’s evolving economy.


In 2003, I remember going to see my first IMAX 3D film, Space Station. My family was touring NASA at Cape Canaveral, Florida. The film was an inside view into life as an astronaut enters space. As the astronauts tossed M&Ms to each other in their new gravity-free domain, the other children and I gleefully reached our hands out to try and touch the candy as it floated towards us. I had never experienced anything so mind-blowing in my seven years of life. The first 3D film was released in 1922. Yet, surprisingly, flat entertainment has dominated screens in the 9½ decades that followed. Only a handful of films have been released in 3D—most of them animated. But now, we are gradually seeing a shift in how people experience entertainment. As methods evolve and as market momentum builds, it promises to be one of the most groundbreaking technologies of the decade. I foresee Virtual Reality reaching a point where our perception of virtual and real-life experiences becomes blurred—and eventually—the two become integrated.

Ever since pen was put to paper, and camera to screen, audiences have enjoyed being swept into other worlds. For those of us “dreamers” being able to escape into these stories is one way we live through and expand our understanding of other times and places—even places that may not be accessible in our lifetimes. Virtual reality is the logical progression and natural evolution of these experiences.

I caught the VR bug after one of my Facebook contacts began posting about it and sharing 360-degree videos that were of no use to me unless I too had the headset. Having been a Samsung user for the last several years, I purchased the Samsung VR headset to understand what all the hype was about. Just as with my childhood experience visiting the space station, the VR introduction video sent me floating across the universe. But this time, it was much more compelling. I could turn my head in any direction and experience a vast heavenly realm in 3D vision, tied to my own movements. Behind me was a large planet and in front were dozens of asteroids slowly moving by.

Similar to visiting the Grand Canyon, this is one of those novel experiences you really have to experience to appreciate. Within about ten seconds of trying it out, I had become hooked. I realized that I was experiencing something with far greater potential than an amusement park roller coaster, yet I also recognized that any applications I might imagine barely scratch the surface. This unexpected adrenaline rush is what leads tinkerers to the imaginative leaps that push new technologies into the next decades ahead.

Video games are probably the industry everyone thinks of being affected by this new paradigm. I immediately thought about the Star Wars franchise with its ever expanding universe. It will be a pretty exciting day when you can hold a lightsaber hilt that comes to life when you wear a headset and allows you to experience that universe from your living room. You could even wear a sensored body suit that allows you to feel little zaps or vibrations during gameplay. With more connected devices, the possibility of Li-Fi replacing Wi-Fi and so on, video games are just scratching the surface.

I discussed what the future of VR could offer with Collective Learning founder, Dan Barenboym. We explored various difficulties that impede market adoption. Barenboym was an early enthusiast of virtual reality, having worked with a startup that plans to deploy full-body scanners that give online life to gamers. The project began long before the film Avatar. Barenboym suggests ways that this would improve online shopping by allowing people to see their avatar with their own personal measurements in various outfits. This doesn’t have to be limited to at-home experiences though. Dan suggests that instead of walking into the boutique changing room, you walk into one with mirrors connected to VR software. Your reflection ‘tries on’ different virtual outfits before you pull your favorite one off the store rack.

We also discussed the current obstacles of VR like the headset itself, which is a hindrance in some respects as it is a bit uncomfortable to wear for prolonged use. The other looming issue is money. There are many ideas similar to the ones we brainstormed, but startups may struggle to get off the ground without sufficient funding. The Oculus Rift is one great example of how crowdfunding can help entrepreneurs launch their ideas. It is easier than ever before to share and fund great ideas through social networking.

Facebook creator, Mark Zuckerberg, shared his own vision in 2014 after acquiring the Oculus Rift. Zuckerberg eloquently summarized the status of where we’re headed:

“Virtual reality was once the dream of science fiction. But the internet was also once a dream, and so were computers and smartphones. The future is coming and we have a chance to build it together.”

What could this mean for the social networking that Zuckerberg pioneered? I’d venture to say the void of a long distance relationship may be eased with VR immersion that allows you to be with your family at the click of a button. You could be sitting down in your apartment in the U.S., but with the help of a 360 camera, look around at the garden that your mother is tending to in the U.K. The same scenario could be applied to a classroom or business meeting. We already have global and instant communication, so it will serve to add an enriched layer to these interactions.

The concept of reality itself is probably the biggest factor that makes virtual reality so captivating. Reality is not an objective experience. Each of us has a perspective of the world that is colored by our childhood experiences, personality, and culture. Our inner dialogues, fantasies of who we want to become, and areas of intelligence determine so much of what we’re able to accomplish and choose to commit to outside of ourselves. Michael Abrash describes how VR works with our unconscious brain perceptions to make us believe we’re standing on the edge of a building that isn’t really there. At a conscious level, we accept that we are staring at a screen, but our hearts still race—based on an unconscious perception of what is happening. Tapping into this perception-changing part of our brain allows us to experience reality in new ways.

As VR becomes more mainstreamed and incorporated into all areas of our lives such as online shopping, socializing, education, recreation, etc., the degrees of separation from the real world that society applies to it will lessen. Long-term, the goal for VR would be to allow us to use any of our senses and body parts. We should see continued improvements in the graphics and interaction capabilities of VR, allowing for these experiences to feel as real as they possibly can.

One can only imagine the new vistas this powerful technology will open—not just for entertainment, but for education, medicine, working in hazardous environments or controlling machines at a distance. Is every industry planning to incorporate the positive potential of virtual reality? If not, they certainly should think about the potential. As long as we pay attention to present day needs and issues, engineering virtual reality in the Internet of Things promises to be a fantastic venture.

Author’s Note:

Feedback from Lifeboat is important. I’ll be back from time to time. Drop me a note on the comment form, or better yet, add your comment below. Until then, perhaps we will meet in the virtual world.
— RR

As recently as 50 years ago, psychiatry lacked a scientific foundation, the medical community considered mental illness a disorder of the mind, and mental patients were literally written off as “sick in the head.” A fortunate turn in progress has yielded today’s modern imaging devices, which allow neuroscientists and psychiatrists to examine the brain of an individual suffering from a mental disorder and provide the best treatment options. In a recent interview, Columbia University Psychiatry Chair Dr. Jeffrey Lieberman stated that new research into understanding the mind is growing at an accelerated pace.

(iStock)

Lieberman noted that, just as Galileo couldn’t prove heliocentrism until he had a telescope, psychiatry lacked the technological sophistication, tools, and instruments necessary to get an understanding of the brain until the 1950s. It wasn’t until the advent of psychopharmacology and neuroimaging, he said, that researchers could look inside the so-called black box that is the brain.

“(It began with) the CAT scan, magnetic resonance imaging (MRI) systems, positron emission tomography (PET scans) and then molecular genetics. Most recently, the burgeoning discipline of neuroscience and all of the methods within, beginning with molecular biology and progressing to optogenetics, this capacity has given researchers the ability to deconstruct the brain, understand its integral components, its mechanisms of action and how they underpin mental function and behavior,” Lieberman said. “The momentum that has built is almost like Moore’s law with computer chips, (and) you see this increasing power occurring with exponential sort of growth.”

Specifically, the use of MRIs and PET scans has allowed researchers to study the actual functional activity of different circuits and regions of the brain, Lieberman noted. Further, PET scans provided a look at the chemistry of the brain, which has allowed for the development of more sophisticated pathological theories. These measures, he said, were used to develop treatments while also allowing measurement of the effectiveness of both medication-based therapies and psychotherapies.

As an example, Lieberman cited the use of imaging in the treatment of post-traumatic stress disorder (PTSD). The disorder, a hyperarousal that chronically persists even in the absence of threatening stimulation, is treated through a method called desensitization. Over time, researchers have been able to fine-tune the desensitization therapies and treatments by accessing electronic images of the brain, which can show if there’s been a reduction in the activation of the affected amygdala.

Lieberman noted that despite progress in this area, technology has not replaced interaction with the individual patient; however, as technology continues to evolve, he expects the diagnoses of mental disorders to be refined.

“By the use of different technologies including genetics (and) imaging, including electrophysiological assessments, which are kind of EEG based, what we’ll have is one test that can confirm conditions that were previously defined by clinical description of systems,” Lieberman said. “I think, of all the disciplines that will do this, genetics will be the most informative.”

Just as genetics is currently used to diagnose cancer using anatomy and histology, Lieberman said the expanding field is helping researchers distinguish mental illness in individuals with certain genetic mutations. He expects that in the future, doctors will use “biochips” to routinely screen patients and provide a targeted therapy against the gene or gene product. These chips will have panels of genes known to be potentially associated with the risk for mental illness.

“Someone used the analogy of saying the way we treat depression now is as if you needed to put coolant into your car. Instead of putting it into the radiator, you just dump it on the engine,” he said. “So genetics will probably be the most powerful method to really tailor to the individual and use this technique of precision and personalized medicine.”

Lieberman also sees additional promise in magnetic stimulation, deep brain stimulation through the surgical implanting of electrodes, and optogenetics. Though he has plenty of optimism for these and other potential treatments for mental illness, much of their continued growth may hinge on government policy and budgets. Recent coverage of gun violence in the United States, and a public call for better means by which to screen individuals for mental health afflictions, may be an unfortunate catalyst in moving funding forward in this research arena. A recent article from the UK’s Telegraph discusses Google’s newfound interest in this research, with the former head of the US National Institute of Mental Health now in a position at Google Life Sciences.

“Science, technology and healthcare are doing very well, but when it comes to the governmental process, I think we’re in trouble,” he said. “A welcome development in this regard is President Obama’s Human Brain Initiative, which if you look at the description of it, (is) basically to develop new tools in neurotechnology that can really move forward in a powerful way of being able to measure the function of the brain. Not by single cells or single circuits, but by thousands or tens of thousands of cells and multiple circuits simultaneously. That’s what we need.”

Ask the average passerby on the street to describe artificial intelligence and you’re apt to get answers like C-3PO and Apple’s Siri. But for those who follow AI developments on a regular basis and swim just below the surface of the broad field, the idea that the foreseeable AI future might be driven more by Big Data than by big discoveries is probably not a huge surprise. In a recent interview with data scientist and entrepreneur Eyal Amir, we discussed how companies are using AI to connect the dots between data and innovation.

Image credit: Startup Leadership Program Chicago

According to Amir, the ability to connect the dots across big data has quietly become a strong force in a number of industries. In advertising, for example, companies can now tease apart data to discern the basics of who you are, what you’re doing, and where you’re going, and then tailor ads to you based on that information.

“What we need to understand is that, most of the time, the data is not actually available out there in the way we think that it is. So, for example I don’t know if a user is a man or woman. I don’t know what amounts of money she’s making every year. I don’t know where she’s working,” said Eyal. “There are a bunch of pieces of data out there, but they are all suggestive. (But) we can connect the dots and say, ‘she’s likely working in banking based on her contacts and friends.’ It’s big machines that are crunching this.”
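A minimal sketch of that kind of dot-connecting, assuming a simple log-odds combination of weak signals (the signal names, weights, and prior are invented for illustration and merely stand in for whatever the “big machines” actually crunch):

# Toy example: combine several weak, "suggestive" signals into a single
# estimate that a user works in banking. Weights and prior are made up.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Each suggestive signal contributes a small amount of log-odds evidence.
evidence = {
    "many_contacts_work_at_banks": 1.2,
    "follows_finance_news": 0.6,
    "checks_in_near_financial_district": 0.8,
    "profile_lists_employer": 0.0,  # missing signal contributes nothing
}

prior_log_odds = -2.0  # assume the hypothesis is a priori unlikely
posterior = sigmoid(prior_log_odds + sum(evidence.values()))
print(f"P(works in banking | signals) ~ {posterior:.2f}")  # ~ 0.65

No single signal is decisive, but the combination pushes the estimate well above the prior, which is the sense in which suggestive data can be connected into a usable inference.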

Amir used the example of image recognition to illustrate how AI is connecting the dots to make inferences and facilitate commerce. Many computer programs can now detect the image of a man on a horse in a photograph. Yet many of them miss the fact that, rather than an actual man on a horse, the image is actually a statue of a man on a horse. This lack of precision in the analysis of broad data is part of what’s keeping autonomous cars on the curb until the use of AI in commerce advances.

“You can connect the dots enough that you can create new applications, such as knowing where there is a parking spot available in the street. It doesn’t make financial sense to put sensors everywhere, so making those connections between a bunch of data sources leads to precise enough information that people are actually able to use,” Amir said. “Think about, ‘How long is the line at my coffee place down the street right now?’ or ‘Does this store have the shirt that I’m looking for?’ The information is not out there, but most companies don’t have a lot of incentive to put it out there for third parties. But there will be the ability to…infer a lot of that information.”

This greater ability to connect information and deliver more precise information through applications will come when everybody chooses to pool their information, said Eyal. While he expects a fair bit of resistance to that concept, Amir predicts that there will ultimately be enough players working together to infer and share information; this approach may provide more benefits on an aggregate level, as compared to an individual company that might not have the same incentives to share.

As more data is collected and analyzed, another trend that Eyal sees on the horizon is more autonomy being given to computers. Far from the dire predictions of runaway computers ruling the world, he sees a ‘supervised’ autonomy in which computers have the ability to perform tasks using knowledge that is out of reach for humans. Of course, this means developing a sense of trust and allowing the computer to make more choices for us.

“The same way that we would let our TiVo record things that are of interest to us, it would still record what we want, but maybe it would record some extras. The same goes with (re-stocking) my groceries every week,” he said. “There is this trend of ‘Internet of Things,’ which brings together information about the contents of your refrigerator, for example. Then your favorite grocery store would deliver what you need without you having to spend an extra hour (shopping) every week.”

On the other hand, Amir does have some potential concerns about the future of artificial intelligence, comparable to what’s been voiced by Elon Musk and others. Yet he emphasizes that it’s not just the technology we should be concerned about.

“At the end, this will be AI controlled by market forces. I think the real risk is not the technology, but the combination of technology and market forces. That, together, poses some threats,” Amir said. “I don’t think that the computers themselves, in the foreseeable future, will terminate us because they want to. But they may terminate us because the hackers wanted to.”

I administer the Bitcoin P2P discussion group at LinkedIn, a social media network for professionals. A frequent question posed by newcomers and even seasoned venture investors is: “How can I understand Bitcoin in its simplest terms?”

Engineers and coders offer answers that are anything but simple. Most focus on mining and the blockchain. In this primer, I will take an approach that is both familiar and accurate…

Terms/Concepts: Miners, Blockchain, Double-Spend

First, forget about everything you have heard about ‘mining’ Bitcoin. That’s just a temporary mechanism to smooth out the initial distribution and make it fair, while also playing a critical role in validating the transactions between individuals. Starting with this mechanism is a bad way to understand Bitcoin, because its role in establishing value, influencing trust or stabilizing value is greatly overrated.

The other two terms are important to a basic understanding of Bitcoin and why it is different, but let’s put aside jargon and begin with the familiar. Here are three common analogies for Bitcoin. #1 is the most typical impression pushed by the media, but it is least accurate. Analogy #3 is surprisingly on target.

1. Bitcoin as Gold

You can think of Bitcoin as a natural asset, but with a firm, capped supply. Like gold, the asset is a limited commodity that a great many people covet. But unlike gold, the supply is completely understood and no one organization or country has the potential to suddenly discover a rich vein and extract it from the ground.
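That capped supply can be checked directly from Bitcoin’s published issuance schedule: the block subsidy started at 50 BTC and halves every 210,000 blocks, so the total that can ever be mined converges to just under 21 million BTC. A short sketch of the arithmetic:

# Sum Bitcoin's block subsidies: 50 BTC per block at first, halving every
# 210,000 blocks. Rewards are paid in satoshis (1 BTC = 100,000,000 satoshis)
# and halve by integer division, so the series converges just short of 21M BTC.
SATOSHIS_PER_BTC = 100_000_000
BLOCKS_PER_HALVING = 210_000

reward = 50 * SATOSHIS_PER_BTC
total = 0
while reward > 0:
    total += reward * BLOCKS_PER_HALVING
    reward //= 2  # the subsidy halves (rounding down) each era

print(f"Maximum supply ~ {total / SATOSHIS_PER_BTC:,.4f} BTC")  # ~ 20,999,999.9769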

2. Bitcoin as a Debit or Gift Card

Bitcoin is also a little like a prepaid debit card: you can exchange cash for it and then use it to buy things—either locally (subject to growing recognition and acceptance) or across the Internet. But here, too, there is a difference. A debit card must be loaded with a prepaid balance. That is, it must be backed by something else, whereas Bitcoin has an intrinsic value based on pure market supply and demand. A debit card is a vehicle to transmit or pay money—but Bitcoin is the money itself.

3. Bitcoin as a Foreign Currency

Perhaps the most accurate analogy for Bitcoin (or at least for where it is headed) is as a fungible, convertible, bankable foreign currency.

Like a foreign currency, Bitcoin…

  • Can be easily exchanged with cash
  • Can be easily transmitted for purchases, sales, loans or gifts
  • Can be stored & saved in an online account or in your mattress (Advantage: it can also be stored in a smart phone or in the cloud—and it can be backed up!)
  • Has a value that floats with market conditions
  • Is backed by something even more trustworthy than a national government

Unlike the cash in your pocket or bank account, your Bitcoin wallet can be backed up with a mouse click. And, with proper attention to best practices, it will survive the failure of any exchange, bank or custodian. That is, with proper key management and the use of multisig, no one need lose money when a Bitcoin exchange fails. The trauma of past failures was exacerbated by a lack of tools, practices and user understanding. These things are all improving with each month.

So, What’s the Big Deal?

So, Bitcoin is a lot like cash or a debit card. Why is this news? Bitcoin is a significant development because the creator has devised a way to account for moving money between buyer and seller (or any two parties) that does not require any central bank, bookkeeper or authority to keep tabs. Instead, the bookkeeping is crowdsourced.

For example, let’s say that Alice wants to purchase a $4 item from Bob, an Internet merchant in another country.

a) Purchase and settlement with a credit card

With a credit card, wire transfer or check, Alice can pay $4 easily. But many things occur in the background and they represent an enormous transaction overhead. Alice must have an account at an internationally recognized bank. The bank must vouch for Alice’s balance or credit in real time and it must then substitute its own credit for hers. After the transaction, two separate banks at opposite ends of the world must not only adjust their client account balances, they must also settle their own affairs through an interbank-settlement process.

The two banks use different national currencies and are subject to different laws, oversight and reporting requirements. Over the course of the next few days, the ownership of gold, oil or reserve currencies is transferred between large institutions to complete the affairs of Alice’s $4 purchase.

b) Now, consider the same transaction with Bitcoin

Suppose that Alice has a Bitcoin wallet with a balance equal to $10. Let’s say that these characters represent $10 in value: 5E 7A 44 1B. (Bitcoin value is expressed as a much longer character string, but for this illustration we are keeping it short). Alice wants to buy a $4 item from Bob. Since she has only this one string representing $10, she must somehow get $6 in change.

Bitcoin Transaction

With Bitcoin, there is no bank or broker at the center of a transaction. The transaction is effected directly between Alice and Bob. But there is a massive, distributed, global network of bookkeepers standing ready to help Alice and Bob to complete the transaction. They don’t even know the identities of Alice or Bob, but they are like a bank and independent auditor at the same time…

If Alice were to give Bob her secret string (worth $10), and if Bob gives her a string of characters worth $6 as change, one might wonder what prevents Alice from double-spending her original $10 secret. It can’t happen, because the miners and their distributed blockchain are the background fabric of the ecosystem. In the Bitcoin world, Alice is left with a brand new secret string that represents her new bank balance. It can be easily tested by anyone, anywhere. It is worth exactly $6.

This example is simplified and without underlying detail. But the facts, as stated, are accurate.
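For readers who want to see the moving parts, here is a deliberately over-simplified sketch of the Alice-and-Bob exchange. A dictionary of secret strings stands in for unspent outputs, and dollar values stand in for BTC; real Bitcoin transactions are cryptographically signed and validated by the miners’ distributed blockchain rather than by a single program.

# Toy ledger: secret string -> dollar value (a stand-in for unspent outputs).
import secrets

ledger = {}

def new_output(value):
    secret = secrets.token_hex(4)  # e.g. "5e7a441b"
    ledger[secret] = value
    return secret

def spend(secret, amount):
    """Spend an output: pay `amount` and return (payment_secret, change_secret)."""
    value = ledger.pop(secret)  # the old string is consumed, so it cannot be double-spent
    if amount > value:
        raise ValueError("insufficient funds")
    return new_output(amount), new_output(value - amount)

alice = new_output(10)  # Alice holds a string worth $10
bobs_payment, alices_change = spend(alice, 4)
print(f"Bob holds ${ledger[bobs_payment]}, Alice's change is ${ledger[alices_change]}")
print(f"Alice's old string is still spendable: {alice in ledger}")  # False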

Conclusion

For Geeks, Bitcoin is the original implementation of a blockchain distributed ledger. Miners uncover a finite reserve of hidden coins while validating the transactions of strangers. As such, Bitcoin solves the double-spend problem and enables person-to-person transactions without the possibility of seizure or choke points.

But for the rest of us, Bitcoin offers a very low cost transaction network that will quickly replace checks and debit cards and may eventually replace cash, central banks, and regional monetary authorities. The safeties, laws and refund mechanisms offered by banks and governments can still be applied to Bitcoin for selected transactions (whenever both parties agree to oversight), but the actual movement of value will be easier, less expensive and less susceptible to 3rd party meddling.

  • Bitcoin is a distributed, decentralized and low cost payment network
  • It is adapted to a digital economy in a connected world: fluid & low friction, trusted, secure
  • More zealous proponents (like me) believe that it is gradually becoming the value itself (i.e., it needn’t be backed by assets, a promise of redemption, or a national government). In this sense, it is like a very stable foreign currency


Philip Raymond sits on Lifeboat’s New Money Systems Board and administers Bitcoin P2P, a LinkedIn community. He is co-chair of CRYPSA and host of The Bitcoin Event. He writes for Lifeboat, Quora, Sophos and Wild Duck.

Recently, I was named Most Viewed Writer on Bitcoin and cryptocurrency at Quora.com (writing under the pen name, “Ellery”). I don’t typically mirror posts at Lifeboat, but a question posed today is relevant to my role on the New Money Systems board at Lifeboat. Here, then, is my reply to: “How can governments ban Bitcoin?”


Governments can enact legislation that applies to any behavior or activity. That’s what governments do—at least the legislative arm of a government. Such edicts distinguish activities that are legal from those that are banned or regulated.

You asked: “How can governments ban Bitcoin?” But you didn’t really mean to ask in this way. After all, legislators ban whatever they wish by meeting in a congress or committee and promoting a bill into law. In the case of a monarchy or dictatorship, the leader simply issues an edict.

So perhaps, the real question is “Can a government ban on Bitcoin be effective?”

Some people will follow the law, no matter how nonsensical, irrelevant, or contrary to the human condition. These are good people who have respect for authority and a drive toward obedience. Others will follow laws, because they fear the cost of breaking the rules and getting caught. I suppose that these are good people too. But, overall, for a law to be effective, it must address a genuine public need (something that cries out for regulation), it must not contradict human nature, and it must address an activity that is reasonably open to observation, audit or measurement.

Banning Bitcoin fails all three tests of a rational and enforceable law.

Most governments, including China and Italy, realize that a government ban on the possession of bits and bytes can be no more effective than banning feral cats from mating in the wild or legislating that basements shall remain dry by banning ground water from seeking its level.

So, the answer to the implied question is: A ban on Bitcoin could never be effective.

For this reason, astute governments avoid the folly of enacting legislation to ban Bitcoin. Instead, if they perceive a threat to domestic policy, tax compliance, monetary supply controls or special interests, they discourage trading by discrediting Bitcoin or raising concerns over safety, security, and criminal activity. In effect, a little education, misinformation or FUD (fear, uncertainty and doubt) can sometimes achieve what legislation cannot.

Reasons to Ban Bitcoin … a perceived threat to any of the following:

  • domestic policy
  • tax compliance
  • monetary supply controls
  • special interests

Methods to Discourage Trading (rather than a ban)

  • Discredit Bitcoin (It’s not real money)
  • Raise concerns over safety & security
  • Tie its use to criminal activity

Avoiding both a ban—and even official discouragement

There is good news on the horizon. In a few countries—including the USA—central bankers, monetary czars and individual legislators are beginning to view Bitcoin as an opportunity rather than a threat. Prescient legislators are coming to the conclusion that a distributed, decentralized trading platform, like Bitcoin, does not threaten domestic policy and tax compliance—even if citizens begin to treat it as cash rather than a payment instrument. While a cash-like transition might ultimately undermine the Federal Reserve monetary regime and some special interests, this is not necessarily a bad thing—not even for the affected “interests”.

If Bitcoin graduates from a debit/transmission vehicle (backed by cash) to the cash itself, citizens will develop more trust and respect for their governments. Why? Because their governments will no longer be able to water down citizen wealth by running the printing press, nor borrow against unborn generations. Instead, they will need to collect every dollar that they spend or convince bond holders that they can repay their debts. They will need to balance their checkbooks, spend more transparently and wear their books on their sleeves. All good things.

Naturally, this type of change frightens entrenched lawmakers. The idea of separating a government from its monetary policy seems—well—radical! But this is only because we have not previously encountered a technology that placed government accountability and transparency on par with the private sector requirement to keep records and balance the books.

What backs your currency? Is it immune from hyperinflation?

Seven sovereign countries use the US Dollar as their main currency. Why? Because the governments of these countries were addicted to spending, which led to out-of-control inflation. They could not convince citizens that they could wean themselves of the urge to print bank notes with ever-increasing zeros. And so, by switching to the world’s reserve currency, they demonstrate a willingness to settle debts with an instrument that cannot be inflated by edict, graft or sloppy bookkeeping.

But here’s the problem: Although the US dollar is more stable than the Zimbabwe dollar, this is a contest in relative trust and beating the clock. The US has a staggering debt that is sustained only by our creditors’ willingness to bear the float. Like Zimbabwe, Argentina, Greece and Germany between the wars, our lawmakers raise the debt ceiling with a lot of bluster, but nary a thought.

Is there a way to instill confidence in a way that is both trustworthy and durable? Yes! —And it is increasingly likely that Bitcoin is the way to the trust and confidence that is so sorely needed.

Philip Raymond sits on the New Money Systems board. He is also co-chair of Cryptocurrency Standards Association and editor at A Wild Duck.