
The Singularity Institute will be holding the fourth annual Singularity Summit in New York in October, featuring talks by Ray Kurzweil, David Chalmers, and Peter Thiel.

New York, NY (PRWEB) July 17, 2009 — The fourth annual Singularity Summit, a conference devoted to the better understanding of increasing intelligence and accelerating change, will be held in New York on October 3–4 in Kaufmann Hall at the historic 92nd St Y. The Summit brings together a visionary community to further dialogue and action on complex, long-term issues that are transforming the world.

Participants will hear talks from cutting-edge researchers and network with strategic business leaders. The world’s most eminent experts on forecasting, venture capital, emerging technologies, consciousness and life extension will present their unique perspectives on the future and how to get there. “The Singularity Summit is the premier conference on the Singularity,” says Ray Kurzweil, inventor of the CCD flatbed scanner and author of The Singularity is Near. “As we get closer to the Singularity, each year’s conference is better than the last.”

The Singularity Summit has previously been held in the San Francisco Bay Area, where it has been featured in numerous publications including the front page of the San Francisco Chronicle. It is hosted by the Singularity Institute, a 501(c)(3) nonprofit devoted to studying the benefits and risks of advanced technologies.

Select Speakers

* Ray Kurzweil is the author of The Singularity is Near (2005) and co-founder of Singularity University, which is backed by Google and NASA. At the Singularity Summit, he will present his theories on accelerating technological change and the future of humanity.

* Dr. David Chalmers, director of the Centre for Consciousness at Australian National University and one of the world’s foremost philosophers, will discuss mind uploading — the possibility of transferring human consciousness onto a computer network.

* Dr. Ed Boyden is a joint professor of Biological Engineering and of Brain and Cognitive Sciences at MIT. Discover Magazine named him one of the 20 best brains under 40.

* Peter Thiel is the president of Clarium, seed investor in Facebook, managing partner of Founders Fund, and co-founder of PayPal.

* Dr. Aubrey de Grey is a biogerontologist and Director of Research at the SENS Foundation, which seeks to extend the human lifespan. He will present on the ethics of this proposition.

* Dr. Philip Tetlock is Professor of Organizational Behavior at the Haas School of Business, University of California, Berkeley, and author of Expert Political Judgment: How Good Is It? How Can We Know?

* Dr. Jürgen Schmidhuber is co-director of the Dalle Molle Institute for Artificial Intelligence in Lugano, Switzerland. He will discuss the mathematical essence of beauty and creativity.

* Dr. Gary Marcus is director of the NYU Infant Language Learning Center, professor of psychology at New York University, and author of the book Kluge.

See the Singularity Summit website at http://www.singularitysummit.com/.

The Singularity Summit is hosted by the Singularity Institute for Artificial Intelligence.

MediaX at Stanford University is a collaboration between the university’s top technology researchers and companies innovating in today’s leading industries.

Starting next week, MediaX is putting on an exciting series of courses in The Summer Institute at Wallenberg Hall, on Stanford’s campus.

Course titles that are still open are listed below, and you can register and see the full list here. See you there!

————–

July 20: Social Connectedness in Ambient Intelligent Environments, Clifford Nass and Boris deRuyter

July 23: Semantic Integration, Carl Hewitt

August 3–4: Social Media Collaboratory, Howard Rheingold

August 5–6: New Metrics for New Media: Analytics for Social Media and Virtual Worlds, Martha Russell and Marc Smith

August 7: Media and Management Bridges Between Heart and Head for Impact, Neerja Raman

August 10–11: Data Visualization: Theory and Practice, Jeff Heer, David Kasik and John Gerth

August 12: Technology Transfer for Silicon Valley Outposts, Jean Marc Frangos, Chuck House

August 12–14: Collaborative Visualization for Collective, Connective and Distributed Intelligence, Jeff Heer, Bonnie deVarco, Katy Borner

————-

Unique opportunity to sponsor research investigating an infectious cause and potential treatment for Alzheimer’s disease



Alzheimer’s disease afflicts some 20 million people worldwide, more than 5 million of whom reside in the United States. It is currently the seventh-leading cause of death in the US. The number of people with the disease is predicted to increase by over 50% by 2030. The economic as well as emotional costs are huge: the direct and indirect costs of all types of dementia to Medicare, Medicaid, and businesses are estimated at more than $148 billion each year.

The causes of Alzheimer’s disease are unknown, apart from the very small proportion with familial disease. We are investigating the involvement of infectious agents in the disease, with particular emphasis on the virus that causes oral herpes/cold sores/fever blisters. We discovered that most elderly humans harbour this virus in their brains and that in those (and only those) who possess a certain genetic factor, the virus confers a strong risk of developing Alzheimer’s disease. Also, we found that the virus is directly involved with the characteristic abnormalities seen in the brains of Alzheimer’s disease patients.

There are several treatment possibilities available to combat this virus and all would be suitable candidates as therapies in Alzheimer’s disease. However, much more research is needed before trials of these agents for Alzheimer’s disease in humans can begin.

In these financially difficult times, many funding bodies have to prioritise projects based on long-established hypotheses. Projects involving new avenues of investigation can receive very positive comments from scientific reviewers, yet are rarely funded, as they almost always appear risky compared with projects that largely confirm or expand existing ideas. Such conservative projects are almost guaranteed to produce useful data, though with modest impact. As a result, research proposals with the potential to transform our understanding of a disease and offer new approaches to its treatment may never reach the threshold for funding, even when the potential and quality of the science are acknowledged by reviewers and the funding panel.

It appears that our work examining a viral cause for Alzheimer’s disease is in this category. Despite our publishing a large number of potentially very exciting papers on this topic, and despite our research projects being reviewed favourably by scientific referees, few funding panels are prepared to commit resources to fund our work, as by doing so they deny funding to other more straightforward, very low risk projects.

We are therefore actively seeking sponsorship for several projects of varying costs to investigate the interaction of virus and specific genetic factor, the pathways of viral damage in the brain, and the effects of antiviral agents. All the projects would provide significant evidence strengthening the case for trialling antiviral agents in Alzheimer’s disease.

Antiviral agents would inhibit a likely major cause of the disease, in contrast to current treatments, which merely alleviate the symptoms.

If any Lifeboat member knows of a company or individual that would be interested in sponsoring some of our research on Alzheimer’s disease then please contact me for further details.

Ruth Itzhaki

Contact details:

[email protected]

Faculty of Life Sciences, Moffat Building, The University of Manchester, Manchester M60 1QD, UK


Further reading:

The Times, London

http://www.timesonline.co.uk/tol/news/uk/health/article5295794.ece

Journal of Pathology
http://www3.interscience.wiley.com/journal/121411445/abstract

The Lancet
http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(96)10149-5/abstract


I would gladly email any further information.

It will probably come as a surprise to those who are not well acquainted with the life and work of Alan Turing that in addition to his renowned pioneering work in computer science and mathematics, he also helped to lay the groundwork in the field of mathematical biology (1). Why would a renowned mathematician and computer scientist find himself drawn to the biosciences?

Interestingly, it appears that Turing’s fascination with this sub-discipline of biology most probably stemmed from the same source as the one that inspired his better known research: at that time all of these fields of knowledge were in a state of flux and development, and all posed challenging fundamental questions. Furthermore, in each of the three disciplines that engaged his interest, the matters to which he applied his uniquely creative vision were directly connected to central questions underlying these disciplines, and indeed to deeper and broader philosophical questions into the nature of humanity, intelligence and the role played by evolution in shaping who we are and how we shape our world.

Central to Turing’s biological work was his interest in mechanisms that shape the development of form and pattern in autonomous biological systems, and which underlie the patterns we see in nature (2), from animal coat markings to leaf arrangement patterns on plant stems (phyllotaxis). This topic of research, which he named “morphogenesis,” (3) had not been previously studied with modeling tools. This was a knowledge gap that beckoned Turing; particularly as such methods of research came naturally to him.

In addition to the diverse reasons that attracted him to the field of pattern formation, a major ulterior motive for his research had to do with a contentious subject which, astonishingly, is still highly controversial in some countries to this day. In studying pattern formation he was seeking to help invalidate the “argument from design” (4) concept, which we know today as the hypothesis of “Intelligent Design.”

Turing was intent on demonstrating that the laws of physics are sufficient to explain our observations in the natural world; or in other words, that our findings do not need an omnipotent creator to explain them. It is ironic that Turing, whose work played a central role in laying the groundwork for the creation of Artificial Intelligence (AI), took a clear stance against creationism. This is testament to his acceptance of scientific evidence and rigorous research over weak analogy.

Unfortunately, those who do not and will not accept Darwinian natural selection as the mechanism of evolution will see nothing compelling in Turing’s work on morphogenesis. To those individuals, the development of AI can be taken as “proof,” or a convincing analogy, of the necessity and presence of a creator, the argument being that the Creator created humanity, and humanity creates AI.

However, what the supporters of intelligent design do not acknowledge is that natural selection is itself precisely the cause underlying the development of both humanity and its AI progeny. Just as natural selection resulted in the phenomena that Turing sought to model in his work on morphogenesis (which brings about the propagation of successful traits through the development of biological form and pattern), it is also the driver for the development of intelligence. Itself generated via internalized neuronal selection mechanisms (5, 6), intelligence allows organisms to adapt to their environment continually during life.

Intelligence is the ultimate tool, the development of which allows organisms to survive; it enables them to learn, respond to their environment and adapt their behavior within their own lifetime. It is the fruit of the natural process that brings about successive development over time in organisms faced with scarcity of resources. Moreover, it now allows humans to defy generational selection and develop intelligences external to our own, making use of computational techniques, including some which utilize evolutionary mechanisms (7).

The eventual development of true AI will be a landmark in many ways, notably in that these intelligences will have the ability to alter their own circuits (their version of neurons), immediately and at will. While the human body is capable of some degree of non-developmental neuronal plasticity, this takes place slowly and control of the process is limited to indirect mechanisms (such as varied forms of learning or stimulation). In contrast, the high plasticity and directly controlled design and structure of AI software and hardware will render them well suited to altering themselves and hence to developing improved subsequent AI generations.

In addition to a jump in the degree of plasticity and its control, AIs will constitute a further step forward with regard to the speed at which beneficial information can be shared. In contrast to the exceedingly slow rate at which advantageous evolutionary adaptations were spread through the populations observed by Darwin (over several generations), the rapidly increasing rates of communication in current society result in successful “adaptations” (which we call science and technology) being distributed at ever-increasing speeds. This is, of course, the principal reason why information sharing is beneficial for humans – it allows us to better adapt to reality and harness the environment to our advantage. It seems reasonable to predict that ultimately the sharing of information in AI will be practically instantaneous.

It is difficult to speculate what a combination of such rapid communication and high plasticity combined with ever-increasing processing speeds will be like. The point at which self-improving AIs emerge has been termed a technological singularity (8).

Thus, in summary: evolution begets intelligence (via evolutionary neuronal selection mechanisms); human intelligence begets artificial intelligence (using, among others, evolutionary computation methods), which at increasing cycle speeds, leads to a technological singularity – a further big step up the evolutionary ladder.

Sadly, because he was considerably ahead of his time and lived in an environment that castigated his lifestyle and drove him from his research, Turing did not live to see the full extent of his work’s influence. While he did not survive to an age in which AIs became prevalent, he did fulfill his ambition of taking part in the defeat of the argument from design in the scientific community, and witnessed Darwinian natural selection becoming widely accepted. The breadth of his vision, the insight he displayed, and his groundbreaking research clearly place Turing on an equal footing with the most celebrated scientists of the previous century.

For any assembly or structure, whether an isolated bunker or a self-sustaining space colony, to function perpetually, the ability to manufacture any of the parts necessary to maintain or expand the structure is an obvious necessity. Conventional metalworking techniques, consisting of forming, cutting, casting, or welding, present extreme difficulties in size and complexity that make them hard to integrate into a self-sustaining structure.

Forming requires heavy, high-powered machinery to press metals into their final desired shapes. Cutting procedures, such as milling and turning, also require large, heavy, complex machinery, and in addition waste tremendous amounts of material as bulk stock is cut away to reveal the final part. Casting requires complex mold construction and preparation: not only must a negative mold of the final part be constructed, but the mold must be prepared, usually by coating it in ceramic slurries, before the molten metal is poured. Unless thousands of parts are required, the molds are a waste of energy, resources, and effort. Joining, usually achieved by welding or brazing, is a flexible process that works by melting metal between two fixed parts in order to join them, but the fixed parts present the same manufacturing problems.

Ideally, then, in any self-sustaining structure, metal parts should be produced directly in their final desired shape, without a mold and with very limited cutting or joining. In a salient step toward this necessary goal, NASA has demonstrated the innovative Electron Beam Freeform Fabrication process (http://www.aeronautics.nasa.gov/electron_beam.htm). A rapid metal fabrication process, it essentially “prints” a complex three-dimensional object by feeding metal wire into a molten pool created by a computer-controlled electron beam gun, building the part layer by layer and adding metal only where it is wanted. It requires no molds and little or no tooling, and the material properties are similar to those of other forming techniques. The complexity of the part is limited only by the imagination of the programmer and the dexterity of the wire feeder and heating device.

Electron beam freeform fabrication process in action

According to NASA materials research engineer Karen Taminger, who is involved in developing the EBF³ process, extensive NASA simulations and modeling of long-duration space flights found no discernible pattern in the types of parts that failed, but the mass of the failed parts remained remarkably consistent across the studies. This is a favorable finding for in-situ parts manufacturing, and because of it the EBF³ team at NASA has been developing a desktop version. Taminger writes:

“Electron beam freeform fabrication (EBF³) is a cross-cutting technology for producing structural metal parts…The promise of this technology extends far beyond its applicability to low-cost manufacturing and aircraft structural designs. EBF³ could provide a way for astronauts to fabricate structural spare parts and new tools aboard the International Space Station or on the surface of the moon or Mars”

NASA’s Langley group working on the EBF³ process took their prototype desktop model for a ride on NASA’s microgravity-simulating aircraft and found that the process works just fine in microgravity, and even against gravity.

A structural metal part fabricated from EBF³

The advantages this system offers are significant. Near-net-shape parts can be manufactured, significantly reducing scrap. Unitized parts can be made: instead of multiple parts that need riveting or bolting, complex integral structures can be produced in final form. An entire spacecraft frame could be ‘printed’ in one sitting. The process also creates minimal waste and is highly efficient in energy and feedstock, which is critical for self-sustaining structures. Metal can be placed only where it is wanted, and the material and chemical properties can be tailored through the structure; the technical seminar features a structure with a smooth gradient from one alloy to another. Structures can also be designed specifically for their intended purposes, without being tailored to the manufacturing process. For example, stiffening ridges can be curvilinear, following the applied forces, instead of the typical grid patterns that suit conventional manufacturing techniques. Manufacturers such as Sciaky Inc. (http://www.sciaky.com/64.html) are already adopting the process.

In combination with similar 3D part-‘printing’ innovations in plastics and other materials, the difficulty of sustaining all the mechanical and structural components of a self-sustaining structure is dropping drastically. Isolated structures could survive on a feedstock of scrap that is perpetually recycled: worn parts are replaced by freeform manufacturing, and the old ones are melted down to make new feedstock. Space colonies could combine such manufacturing technologies and scrap feedstock with resource collection, creating a viable system of minimal volume and energy consumption that could perpetually repair the structure, or even build more. Technologies like these show that the atomic-level control promised by nanotechnology manufacturing proposals is not necessary to create self-sustaining structures, and that with minor developments of current technology, self-sustaining structures could be built and operated successfully.

The link is:
http://www.msnbc.msn.com/id/31511398/ns/us_news-military/

“The low-key launch of the new military unit reflects the Pentagon’s fear that the military might be seen as taking control over the nation’s computer networks.”

“Creation of the command, said Deputy Defense Secretary William Lynn at a recent meeting of cyber experts, ‘will not represent the militarization of cyberspace.’”

And where is our lifeboat?

Asteroid hazard in the context of technological development

It is easy to see that the direct risk of collisions with asteroids decreases with technological development. First, it decreases (or, more precisely, our estimate of it decreases) through more accurate measurement: by detecting dangerous asteroids and measuring their orbits more precisely, we may eventually find that the real chance of an impact in the next 100 years is zero. (If, however, the hypothesis that we are living through an episode of comet bombardment is confirmed, the risk estimate would increase to 100 times the background level.) Second, it decreases through our growing ability to deflect asteroids.
On the other hand, the consequences of asteroid impacts grow with time, not only because population density increases, but also because the growing connectedness of the world system means that damage in one place can spread across the globe. In other words, although the probability of a collision is decreasing, the indirect risks associated with the asteroid danger are increasing.
The main indirect risks are:
A) The destruction of hazardous facilities at the impact site, for example a nuclear power plant. The entire mass of the station would be vaporized, and the release of radiation would be greater than at Chernobyl. In addition, sudden compression of the station by the asteroid strike might trigger additional nuclear reactions. The chance of a direct asteroid hit on a nuclear plant is small, but it grows with the growing number of plants.
B) Even a small group of meteors, entering at a specific angle over a particular place on the Earth’s surface, could trigger a missile-attack early-warning system and lead to an accidental nuclear war. A small airburst of an asteroid a few meters in size could have similar consequences. The first scenario is more likely for developed superpowers with warning systems (which may nonetheless have flaws or unprotected sectors in their early-warning and ABM coverage, as in the Russian Federation); the second is more likely for regional nuclear powers (such as India, Pakistan, or North Korea) that cannot track missiles by radar but could react to a single explosion.
C) The technology for deflecting asteroids will in the future create the hypothetical possibility of directing asteroids not only away from Earth but also toward it. Even a purely accidental impact could then prompt claims that the asteroid was sent deliberately. In practice, hardly anyone would direct an asteroid at Earth: such an action could easily be detected, the accuracy is low, and preparations would have to begin decades before the event.
D) Deflecting hazardous asteroids will require the creation of space weapons, which could be nuclear, laser, or kinetic. Such weapons could also be used against the Earth or against an opponent’s spacecraft. Although the risk of their use against the Earth is small, they still represent more potential damage than falling asteroids do.
E) Destroying an asteroid with a nuclear explosion would increase its destructive power through its fragments: a larger number of blasts over a larger area, plus radioactive contamination of the debris.
Modern technological means can move only relatively small asteroids, which are not a global threat. The real danger is dark comets several kilometers in size, moving at high speeds on elongated elliptical orbits. In the future, however, space could be explored quickly and cheaply by self-replicating robots based on nanotechnology. These would make it possible to build huge radio telescopes in space to detect dangerous bodies throughout the solar system. Moreover, it would be enough to land a single self-replicating microrobot on an asteroid: after multiplying, the robots could break the asteroid into pieces or build engines that would change its orbit. Nanotechnology would also help us create self-sustaining human settlements on the Moon and other celestial bodies. All this suggests that the problem of asteroid hazard will be obsolete within a few decades.
Thus, in the coming decades, the problem of preventing collisions of the Earth with asteroids may serve only to divert resources from the real global risks:
First, because we are still unable to change the orbits of the objects that could actually cause the complete extinction of humanity.
Second, because by the time a nuclear-missile system for destroying asteroids is created (or shortly thereafter), it will already be obsolete: nanotechnology could quickly and cheaply harness the solar system by the middle of the 21st century, and perhaps earlier.
Third, because at a time when the Earth is divided into warring states, such a system would itself be a weapon in the event of war.
Fourth, because the probability of human extinction from an asteroid impact during the narrow window when an asteroid-deflection system is deployed but powerful nanotechnology is not yet established is very small. That window may span about 20 years, say from 2030 to 2050, and the chance of a 10-km body falling during that time, even if we assume we live in a period of comet bombardment with 100 times the usual intensity, is about 1 in 15,000 (based on an average impact frequency of one such body every 30 million years). Moreover, given the dynamics, we would be able to deflect the truly dangerous objects only at the end of that period, and perhaps even later, since the larger the asteroid, the more extensive and long-term the deflection project required. Although 1 in 15,000 is still an unacceptably high risk, it is commensurate with the risk of space weapons being used against the Earth.
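The 1-in-15,000 figure can be checked with a few lines of arithmetic. A minimal sketch, assuming (as stated above) an average of one 10-km impact every 30 million years, a hypothetical 100× bombardment multiplier, and a 20-year window; since the resulting probability is tiny, the small-probability approximation rate × time is adequate:

```python
# Probability of a 10 km impact during a 20-year deployment window,
# assuming one such impact every 30 million years on average,
# scaled up 100x for a hypothesized comet-bombardment episode.
base_interval_years = 30_000_000        # average gap between 10 km impacts
bombardment_multiplier = 100            # assumed elevated comet flux
window_years = 20                       # deployment window, ~2030-2050

rate_per_year = bombardment_multiplier / base_interval_years
p_impact = rate_per_year * window_years  # small-probability approximation

print(f"annual rate: 1 in {1 / rate_per_year:,.0f}")
print(f"probability over {window_years} years: 1 in {1 / p_impact:,.0f}")
```

With these inputs the window probability comes out to exactly 1 in 15,000, matching the figure quoted above.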
Fifth, anti-asteroid protection diverts limited human attention and financial resources from other global risks. This is because the asteroid danger is very easy to understand: it is easy to imagine, its probabilities are easy to calculate, and it is clear to the public. There is no doubt about its reality, and there are clear methods of protection. (By contrast, the probability of a volcanic disaster of comparable energy is, by various estimates, 5 to 20 times higher than that of an asteroid impact, yet we have no idea how to prevent one.) In this the asteroid hazard differs from risks that are difficult to imagine and impossible to quantify, but which may carry extinction probabilities of tens of percent: the risks of AI, biotech, nanotech, and nuclear weapons.
Sixth, for relatively small bodies such as Apophis, it may be cheaper to evacuate the impact area than to deflect the asteroid; most likely, the impact area would be ocean in any case.
This is not a call to abandon anti-asteroid protection, because we first need to find out whether we live in a comet-bombardment period. If we do, the probability of a 1-km body falling in the next 100 years is about 6%. (This estimate is based on hypothesized impacts over the last 10,000 years: the Younger Dryas impact event http://en.wikipedia.org/wiki/Younger_Dryas_impact_event, traces of which may include some 500,000 crater-like features known as Carolina Bays http://en.wikipedia.org/wiki/Carolina_bays; the Mahuika crater near New Zealand, dated to 1443 http://en.wikipedia.org/wiki/Mahuika_crater; and two other impacts in the last 5,000 years; see the work of the Holocene Impact Working Group http://en.wikipedia.org/wiki/Holocene_Impact_Working_Group.) We should first put our effort into monitoring dark comets and analyzing fresh craters.
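The 6% figure is consistent with a simple Poisson model. A sketch under the assumption (not stated explicitly above) of roughly three 1-km-class impacts in the last 5,000 years:

```python
import math

# Assumption for illustration: ~3 impacts of ~1 km bodies in the last
# 5,000 years (e.g. Mahuika plus the two other impacts cited above).
impacts = 3
span_years = 5_000
window_years = 100

rate = impacts / span_years                          # impacts per year
p_at_least_one = 1 - math.exp(-rate * window_years)  # Poisson: P(>=1 event)

print(f"P(>=1 impact in {window_years} yr) = {p_at_least_one:.1%}")
```

With these assumed inputs the model gives about 5.8%, close to the 6% quoted; the exact value depends on which hypothesized impacts one chooses to count.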

Here’s a story that should concern anyone wanting to believe that the military has a complete and accurate inventory of chemical and biological warfare materials.

“An inventory of deadly germs and toxins at an Army biodefense lab in Frederick found more than 9,200 vials of material that was unaccounted for in laboratory records, Fort Detrick officials said Wednesday. The 13 percent overage mainly reflects stocks left behind in freezers by researchers who retired or left Fort Detrick since the biological warfare defense program was established there in 1943, said Col. Mark Kortepeter, deputy commander of the U.S. Army Medical Research Institute of Infectious Diseases.”

The rest of the story appears here:
http://abcnews.go.com/Health/wireStory?id=7863828

Given that the material “was in tiny, 1mm vials that could easily be overlooked” and included serum from Korean hemorrhagic fever patients, the lack of adequate inventory controls up to this point creates the impression that any number of these vials could be outside the lab. Of course, they assure us they have it all under control. That will be cold comfort if we don’t have a lifeboat.