
Reposted from Next Big Future (formerly advancednano).

A 582,970 base pair sequence of DNA has been synthesized.

It is the first time a genome the size of a bacterium’s has been chemically synthesized; the molecule is about 20 times longer than any DNA molecule synthesized before.

This is a huge increase in capability. It has broad implications for DNA nanotechnology and synthetic biology.

It is particularly relevant to the Lifeboat Foundation BioShield project.

This means that the Venter Institute is on the brink of synthesizing a new bacterial life form.

The process to synthesize and assemble the synthetic version of the M. genitalium chromosome began by resequencing the native M. genitalium genome to ensure that the team was starting with an error-free sequence. After obtaining this correct version of the native genome, the team designed fragments of chemically synthesized DNA to build 101 “cassettes” of 5,000 to 7,000 base pairs of genetic code. To differentiate the synthetic genome from the native genome, the team created “watermarks” in the synthetic genome. These are short inserted or substituted sequences that encode information not typically found in nature. Other changes the team made to the synthetic genome included disrupting a gene to block infectivity. To obtain the cassettes, the JCVI team worked primarily with the DNA synthesis company Blue Heron Technology, as well as DNA 2.0 and GENEART.

From here, the team devised a five-stage assembly process where the cassettes were joined together in subassemblies to make larger and larger pieces that would eventually be combined to build the whole synthetic M. genitalium genome. In the first step, sets of four cassettes were joined to create 25 subassemblies, each about 24,000 base pairs (24kb). These 24kb fragments were cloned into the bacterium Escherichia coli to produce sufficient DNA for the next steps, and for DNA sequence validation.

The next step involved combining three 24kb fragments together to create 8 assembled blocks, each about 72,000 base pairs. These 1/8th fragments of the whole genome were again cloned into E. coli for DNA production and DNA sequencing. Step three involved combining two 1/8th fragments together to produce large fragments approximately 144,000 base pairs or 1/4th of the whole genome.

At this stage the team could not obtain half-genome clones in E. coli, so the team experimented with yeast and found that it tolerated the large foreign DNA molecules well, and that they were able to assemble the fragments together by homologous recombination. This process was used to complete the assembly, from 1/4 genome fragments to the final genome of more than 580,000 base pairs. The final chromosome was again sequenced to validate its complete, accurate chemical structure.
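The five-stage hierarchy can be sketched as back-of-the-envelope arithmetic. This is a toy model only: real cassette and fragment sizes varied, and the divisors below are approximations taken from the fractions quoted above.

```python
# Approximate fragment size at each stage of the hierarchical assembly.
# Divisors are illustrative: 101 cassettes, then roughly 1/24, 1/8,
# 1/4, 1/2, and the whole genome.

GENOME_BP = 582_970

stages = {
    "synthesized cassette (1 of 101)": GENOME_BP / 101,
    "stage 1: four-cassette subassembly": GENOME_BP / 24,
    "stage 2: 1/8-genome block": GENOME_BP / 8,
    "stage 3: 1/4-genome fragment": GENOME_BP / 4,
    "stage 4: half genome (assembled in yeast)": GENOME_BP / 2,
    "stage 5: complete synthetic genome": GENOME_BP / 1,
}

for name, bp in stages.items():
    print(f"{name}: ~{bp / 1000:.1f} kb")
```

The computed sizes (~5.8 kb cassettes, ~24 kb subassemblies, ~73 kb blocks, ~146 kb quarters) line up with the figures reported in the text.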

The synthetic M. genitalium has a molecular weight of 360,110 kilodaltons (kDa). Printed in 10 point font, the letters of the M. genitalium JCVI-1.0 genome span 147 pages.

In a recent conversation on our discussion list, Ben Goertzel, a rising star in artificial intelligence theory, expressed skepticism that we could keep a “modern large-scale capitalist representative democracy cum welfare state cum corporate oligopoly” going for much longer.

Indeed, our complex civilization currently does seem to be under a lot of stress.

Lifeboat Foundation Scientific Advisory Board member and best-selling author David Brin’s reply was quite interesting.

David writes:


Today’s “modern large-scale capitalist representative democracy cum welfare state cum corporate oligopoly” works largely because the systems envisioned by John Locke and Adam Smith have burgeoned fantastically, producing synergies in highly nonlinear ways that another prominent social philosopher — Karl Marx — never imagined. Ways that neither Marx nor the ruling castes of prior cultures even could imagine.

Through processes of competitive creativity and reciprocal accountability, the game long ago stopped being zero-sum (I can only win if you lose) and became prodigiously positive-sum. (We all win, though I’d still like to win a little more than you.) (See Robert Wright’s excellent book “Non-Zero”.)

Yes, if you read over the previous paragraph, I sound a lot like some of the boosters of FIBM or Faith In Blind Markets… among whom you’ll find the very same neocons and conspiratorial kleptocrats who I accuse of ruining markets! Is that a contradiction?

Not at all. Just as Soviet commissars recited egalitarian nostrums, while relentlessly quashing freedom in the USSR, many of our own right-wing lords mouth “pro-enterprise” lip service, while doing everything they can to cheat and foil competitive markets. To kill the golden goose that gave them everything.

The problem is that our recent, synergistic system has always had to push uphill against a perilous slope of human nature. The Enlightenment is just a couple of centuries old. Feudalism/tribalism had uncountable millennia longer to work a selfish, predatory logic into our genes, our brains. We are all descended from insatiable men, who found countless excuses for cheating, expropriating the labor of others, or preserving their power against challenges from below. Not even the wisest of us can guarantee we’d be immune from temptation to abuse power, if we had it.

Some, like George Washington, have set a pretty good example. They recognize these backsliding trends in themselves, and collaborate in the establishment of institutions, designed to let accountability flow. Others perform lip-service, then go on to display every dismal trait that Karl Marx attributed to shortsighted bourgeois “exploiters.”

Indeed, it seems that every generation must face this ongoing battle, between those who “get” what Washington and many others aimed for — the positive-sum game — and rationalizers who are driven by our primitive, zero-sum drives. A great deal is at stake, at a deeper level than mere laws and constitutions. Moreover, if the human behavior traits described by Karl Marx ever do come roaring back, to take hold in big ways, then so might some of the social scenarios that he described.


Do you, as an educated 21st Century man or woman, know very much about the controversy that transfixed western civilization for close to a century and a half? A furious argument, sparked by a couple of dense books, written by a strange little bearded man? Or do you shrug off Marx as an historical oddity? Perhaps a cousin of Groucho?

Were our ancestors — both those who followed Marx and those who opposed him — stupid to have found him interesting or to have fretted over the scenarios he foretold?

I often refer to Marx as the greatest of all science fiction authors, because — while his long-range forecasts nearly all failed, and some of his premises (like the labor theory of value) were pure fantasy — he nevertheless shed heaps of new light and focused the attention of millions upon many basics of both economics and human nature. As a story-spinner, Marx laid down some “if this goes on” thought-experiments that seemed vividly plausible to people of his time, and for a century afterwards. People who weren’t stupid. People who were, in fact, far more intimate with the consequences of social stratification than we have been, in the latest, pampered generation.

As virtually the inventor of the term “capitalism,” Marx ought to be studied (and criticized) by anyone who wants to understand our way of life.

What’s been forgotten, since the fall of communism, is that the USSR’s ‘experiment’ was never even remotely “Marxism.” And, hence, we cannot simply watch “The Hunt For Red October” and then shrug off the entire set of mental and historical challenges. By my own estimate, he was only 50% a deluded loon — a pretty good ratio, actually. (I cannot prove that I’m any better!) The other half was brilliant (ask any economist) and still a powerful caution. Moreover, anyone who claims to be a thinker about our civilization should be able to argue which half was which.

Marx’s forecasts seem to have failed not because they were off-base in extrapolating the trends of 19th Century bourgeois capitalism. He extrapolated fine. But what he never imagined was that human beings might intelligently perceive, and act to alter those selfsame powerful trends! While living amid the Anglo Saxon Enlightenment, Marx never grasped its potential for self-criticism, reconfiguration and generating positive-sum alternatives.

A potential for changing or outgrowing patterns that he (Marx) considered locked in stone.

Far from the image portrayed by simplistic FIBM cultists, we did not escape Marx’s scenarios through laissez-faire indolence. In fact, his forecasts failed — ironically — because people read and studied Karl Marx.


This much is basic. We are all descended from rapacious, insatiable cheaters and (far worse) rationalizers. Every generation of aristocrats (by whatever surface definition you use, from soviet nomenklatura, theocrats, or royalty to top CEOs) will come up with marvelous excuses for why they should be allowed to go back to oligarchic rule-by-cabal and “guided allocation of resources” (GAR), instead of allowing open competition/cooperation to put their high status under threat. Indeed, those who most stridently tout faith in blind markets are often among the worst addicts of GAR.

In particular, it is the most natural thing in the world for capital owners and GAR-masters to behave in the way that Karl Marx modeled. His forecast path of an ever-narrowing oligarchy — followed ultimately by revolution — had solid historical grounding and seemed well on its way to playing out.

What prevented it from happening — and the phenomenon that would have boggled poor old KM — was for large numbers of western elites and commonfolk to weigh alternatives, to see these natural human failure modes, and to act intelligently against them. He certainly never envisioned a smart society that would extend bourgeois rights and social mobility to the underclasses. Nor that societies might set up institutions that would break entirely from his model, by keeping things open, dynamic, competitive, and reciprocally accountable, allowing the nonlinear fecundity of markets and science and democracy to do their positive-sum thing.

In his contempt for human reasoning ability (except for his own), Marx neglected to consider that smart men and women would actually read his books and decide to remodel society, so that his scenario would not happen. So that revolution, when it came, would be gradual, ongoing, moderate, lawful, and generally non-confiscatory, especially since the positive sum game lets the whole pie grow, while giving bigger slices to all.

In fact, I think the last ninety years may be partly modeled according to how societies responded to the Marxian meme. First, in 1917, came the outrageously stupid Soviet experiment, which simply replaced Czarist monsters with another clade of oppressors, that mouthed different sanctimonious slogans. Then the fascist response, which was a deadly counter-fever, fostered by even more-stupid European elites. Things were looking pretty bleak.


Only then did this amazing thing happen — especially in America — where a subset of wealthy people, like FDR, actually read Marx, saw the potential pathway into spirals of crude capital formation, monopolization, oppression and revolution… and decided to do something about it, by reforming the whole scenario away! By following Henry Ford’s maxim and giving all classes a stake — which also meant ceding them a genuine share of power. A profoundly difficult thing for human beings to do.

Those elites who called FDR a “traitor to his class” were fools. The smart ones knew that he saved their class, and enabled them to enjoy wealth in a society that would be vastly more successful, vibrant, fun, fair, stable, safe and fantastically more interesting.

I believe we can now see the recent attempted putsch by a neocon-kleptocrat aristocratic cabal in broad but simple and on-target context. We now have a generation of wealthy elites who (for the most part) have never read Marx! Who haven’t a clue how chillingly plausible his scenarios might be, if enlightenment systems did not provide an alternative to revolution. And who blithely assume that they are in no danger, whatsoever, of those scenarios ever playing out.

Shortsightedly free from any thought or worry about the thing that fretted other aristocracies — revolution — they feel no compunction or deterrence from trying to do the old/boring thing… giving in to the ancient habit… using influence and power to gather MORE influence and power at the expense of regular people, all with the aim of diminishing the threat of competition from below. And all without extrapolating where it all might lead, if insatiability should run its course.

What we would call “cheating,” they rationalize as preserving and enhancing a natural social order. Rule by those best suited for the high calling of rulership. Those born to it. Or Platonic philosopher kings. Or believers in the right set of incantations.


Whatever the rationalizations, it boils down to the same old pyramid that failed the test of governance in nearly 100% of previous civilizations, always and invariably stifling creativity while guiding societies to delusion and ruin. Of course, it also means a return to zero-sum logic, zero-sum economics, zero-sum leadership thinking, a quashing of nonlinear synergies… the death of the Enlightenment.

Mind you! I am describing only a fraction of today’s aristocracy of wealth or corporate power. I know half a dozen billionaires, personally, and I’d wager none of them are in on this klepto-raid thing! They are all lively, energetic, modernistic, competitive and fizzing with enthusiasm for a progressive, dynamic civilization. A civilization that’s (after all) been very good to them.

They may not have read Marx (in this generation, who has?), but self-made guys like Bezos and Musk and Page etc. share the basic values of an Enlightenment. One in which some child from a poor family may out-compete overprivileged children of the rich, by delivering better goods, innovations or services. And if that means their own privileged kids will also have to work hard and innovate? That’s fine by them! Terrific.

When the chips come down, these better billionaires may wind up on our side, weighing the balance and perceiving that their enlightened, long range self-interest lies with us. With the positive-sum society. Just the way FDR and his smart-elite friends did, in the 1930s… while the dumber half of the aristocracy muttered and fumed.

We can hope that the better-rich will make this choice, when the time comes. But till then, the goodguy (or, at least with-it) billionaires are distracted, busy doing cool things, while the more old-fashioned kind — our would-be lords — are clustering together in tight circles, obeying 4,000 years of ingrained instinct, whispering and pulling strings, appointing each other to directorships, awarding unearned golden parachutes, conniving for sweetheart deals, and meddling in national policy…

…doing the same boring thing that human beings will always do — what you and I would be tempted to do — whenever you mix un-curbed ego with unaccountable privilege, plus a deficit of brains.

Planning for the first Lifeboat Foundation conference has begun. This FREE conference will be held in Second Life to keep costs down and ensure that you won’t have to worry about missing work or school.

While an exact date has not yet been set, we intend to offer you an exciting line up of speakers on a day in the late spring or early summer of 2008.

Several members of Lifeboat’s Scientific Advisory Board (SAB) have already expressed interest in presenting. However, potential speakers need not be Lifeboat Foundation members.

If you’re interested in speaking, want to help, or you just want to learn more, please contact me at [email protected]

What’s the NanoShield you ask? It’s a long-term scientific research project aimed at creating a nanotechnological immune system. You can learn more about it here.

Facebook users — please come join the cause and help fund the Lifeboat Foundation’s NanoShield project.

Not a Facebook user? No worries. By joining the Lifeboat Foundation and making even a small donation you can have a hugely positive impact on humanity’s future well-being.

So why not join us?

Robert Freitas, Jr., Lifeboat Foundation Fellow and head of the Lifeboat Foundation’s Nanomedicine Division, has won the 2007 Foresight Institute Feynman Prize in Communication.

Dr. Pearl Chin, President of the Foresight Nanotech Institute, said Freitas received the award for “pioneering the study and communication of the benefits to be obtained from an advanced nanomedicine that will be made possible by molecular manufacturing [and for having] worked to develop and communicate a path from our current technology base to a future technology base that will enable advanced nanomedicine.”

Prior to his Feynman Prize win, Robert shared the Lifeboat Foundation’s 2006 Guardian Award with technology legend Bill Joy. Freitas and Joy shared the Guardian Award for their many years of work on mitigating risks posed by advanced technologies.

A better atomic force microscope from Japan:

Credit: Oscar Custance, Osaka University

“A new type of atomic force microscope (AFM) has been developed that can “fingerprint” the chemical identity of individual atoms on a material’s surface. This is one step ahead of existing AFMs, which can only detect the position of atoms. The device determines local composition and structure using a precise calibration method, and can even be used to manipulate specific atomic species. The team demonstrated their “fingerprinting” technique by using an atomic force microscope (AFM) to distinguish atoms of tin (blue) and lead (green) deposited on a silicon substrate (red).”

Here is the associated article (subscription req’d).

Here’s the graphene transistor everyone’s been talking about:

One atom thick, 50 atoms wide. Here is an article going over the advance. It states that the transistors are not likely to be completely ready by 2025, but this estimate seems conservative.

Scientists from Duke recently set a new size record for a programmable synthetic nanostructure:

These DNA grids were formed by hierarchical self-assembly. The article on the development states that the “grid-like structures consist of components that can be independently modified to create arbitrary patterns for different purposes”.

Reminds me of CRN’s cubic micron DNA structure ideas.

The trillion-dollar question is, “when will these advances lead to freely programmable, self-replicating molecular assemblers?” Some scientists are betting on the 2015–2020 timeframe, others say “never”.

“The importance of the space sector can be emphasized by the number of spacecraft launched. In the period from 1957 till 2005, 6,376 spacecraft were launched, an average of 133 per year. There has been a decrease in the number of spacecraft launched in recent years, with 78 launched in 2005. Of the 6,376 launches, 56.8% were military spacecraft and 43.2% were civilian. 245 manned missions were launched in this period, as were 1,674 communication or weather satellites. The remaining launches have been exploration missions.”

Read the entire report here (requires free registration).

Like the Lifeboat Foundation, The Bulletin of the Atomic Scientists is an organization formed to address catastrophic technological risks. In catastrophic risk management, vision and foresight are essential. You look at technological, social, and political trends which are happening today — for example, steps towards mechanical chemistry, increasing transparency, or civil atomic programs — and brainstorm with as many experts as possible about what these trends indicate about what is coming 5, 10, or 20 years down the road. Because catastrophic risk management is a long-term enterprise, one where countermeasures are ideally deployed before a threat has even materialized, the further and more clearly you try to see into the future, the better.

Traditionally, The Bulletin has focused on the risk from nuclear warfare. Lately, they have expanded their attention to all large-scale technological risks, including global warming and future risks from emerging technologies. However, the language and claims used on their website show that the organization’s members are only just beginning to get informed about the emerging technologies, and the core of their awareness still lies with the nuclear issue.

From The Bulletin’s statement regarding their decision to move the clock 5 minutes to midnight, from the “emerging technologies” section specifically:

The emergence of nanotechnology — manufacturing at the molecular or atomic level — presents similar concerns, especially if coupled with chemical and biological weapons, explosives, or missiles. Such combinations could result in highly destructive missiles the size of an insect and microscopic delivery systems for dangerous pathogens.

“Highly destructive missiles the size of an insect”? Depressingly, statements like this are a red flag that the authors and fact-checkers at The Bulletin are poorly informed about nanotechnology and molecular manufacturing. To my knowledge, no one in the entire defense research industry has ever proposed creating highly destructive missiles the size of an insect. Highly destructive missiles the size of an insect are impossible for the same reason that meals in a pill are impossible — chemical bonds only let you pack so much energy into a given space. We cannot improve the energy density of explosives like we can improve the speed of computers or the resolution of satellite imagery. There can be incremental improvements, yes, but suggesting that nanotechnology has something to do with highly destructive missiles the size of insects is not just dubious from the point of view of physics, but particularly embarrassing because it seems to have been made up from scratch, and was missed by everyone in the organization who reviewed the statement.

The general phrasing of the statement makes it seem like the scientists that wrote it are still stuck in the way of thinking that says “molecular manufacturing has to do with molecules, and molecules are small, so the products of molecular manufacturing will be small”. This is also a bias frequently displayed by the general media, although early products based on nanotechnology (not molecular manufacturing), including stain-resistant pants and sunscreen, also subtly direct the popular perception of nanotech. It’s natural to think that nanotechnology, and therefore, molecular manufacturing, means small. However, this natural tendency is flawed. We should recall that the world’s largest organisms, up to 6,600 tons in weight, were manufactured by the molecular machines called ribosomes.

Molecular manufacturing (MM) would greatly boost manufacturing throughput and lower the cost of large products. While some associate MM with smallness, it is better thought of in connection with size and grandeur. Although microscopic killing machines built by MM will definitely become a risk by 2015–2020, the greatest risk will come from the size, performance, and sheer quantity of products.

A nanofactory would need to output its own weight in product in roughly 12 hours or less, or it wouldn’t have been developed in the first place: scaling up from a single molecular manipulator to many trillions requires 33 or so doublings, which would take far too long if the product cycle were not measured in hours. Given raw materials and energy, these factories could therefore produce new factories at an exponential rate. Assuming a doubling time of 12 hours, a 100 kg tabletop nanofactory could be used to produce 819,200 kg worth of nanofactory in only a week. As long as the nanofactories can support their own weight and be supplied with adequate matter and energy, they can be made almost arbitrarily large. Minimal labor would be necessary: the manufacturing components are so small that they must be automated to work at all. Regulations and structural challenges from excess height can be circumvented by fabricating nanofactories that are long and wide rather than tall and fragile.

Once created, these factories could be programmed to produce whatever products are technologically possible with the tools at hand: at the very least, products as sophisticated as the nanofactories themselves. Unscrupulous governments could use the technology to mass produce missiles, helicopters, tanks, and entirely new weapons, as long as their engineers are capable of designing diamondoid versions of these products. Their rate of production, and quality of hardware, would outclass that of non-nano-equipped nations by many orders of magnitude.
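The doubling arithmetic above can be checked directly. A minimal sketch, assuming the post’s 12-hour doubling time; 13 doublings (about six and a half days) reproduce the 819,200 kg figure:

```python
# Exponential self-replication of nanofactories under a fixed doubling time.
# The 12-hour doubling time is the post's assumption, not an established figure.

def factory_mass(initial_kg, hours, doubling_hours=12.0):
    """Total factory mass after the complete doublings that fit in `hours`."""
    doublings = int(hours // doubling_hours)
    return initial_kg * 2 ** doublings

# One 100 kg tabletop unit, doubling every 12 hours:
print(factory_mass(100, 13 * 12))  # 819200 kg after ~6.5 days
```

The same function shows why the product cycle matters: at 12 hours per doubling the growth is explosive, while a cycle measured in weeks would stretch the 33 doublings from one manipulator to a useful population out over years.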

Because unregulated, exponentially replicating molecular manufacturing units would create a severe threat to global security, it seems prudent to regulate them with care. Restrictions should be placed on what products can be manufactured and in what quantity and quality. Just as permits and inspections are required to operate industrial machinery, restrictions should be placed on industrial-scale molecular manufacturing. In some cases, preexisting regulatory infrastructure will be sufficient. In others, we’ll need to augment or expand the purview of historical regulations and customize them to address the specific challenges that MM represents.

Further Reading:

30 Essential Nanotechnology Studies
Lifeboat Foundation NanoShield
Nanotechnology Category on Accelerating Future

From the Unenumerated blog, this piece was originally written in 1993:

Using materials native to space, instead of hauling everything from Earth, is crucial to future efforts at large-scale space industrialization and colonization. At that time we will be using technologies far in advance of today’s, but even now we can see the technology developing for use here on earth.

There are a myriad of materials we would like to process, including dirty organic-laden ice on comets and some asteroids, subsurface ice and the atmosphere of Mars, platinum-rich unoxidized nickel-iron metal regoliths on asteroids, etc. There are an even wider array of materials we would like to make. The first and most important is propellant, but eventually we want a wide array of manufacturing and construction inputs, including complex polymers like Kevlar and graphite epoxies for strong tethers.

The advantages of native propellant can be seen in two recent mission proposals. In several Mars mission proposals[1], H2 from Earth or Martian water is chemically processed with CO2 from the Martian atmosphere, making CH4 and O2 propellants for operations on Mars and the return trip to Earth. Even bringing H2 from Earth, this scheme can reduce the propellant mass to be launched from Earth by over 75%. Similarly, I have described a system that converts cometary or asteroidal ice into a cylindrical, zero-tank-mass thermal rocket. This can be used to transport large interplanetary payloads, including the valuable organic and volatile ices themselves into high Earth and Martian orbits.

Earthside chemical plants are usually far too heavy to launch on rockets into deep space. An important benchmark for plants in space is the thruput mass/equipment mass, or mass thruput ratio (MTR). At first glance, it would seem that almost any system with MTR>1 would be worthwhile, but in real projects risk must be reduced through redundancy, time cost of money must be accounted for, equipment launched from earth must be affordable in the first place (typically
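The MTR benchmark defined above can be written out as a one-line calculation. The reactor numbers below are hypothetical, purely for illustration:

```python
# Mass thruput ratio (MTR): lifetime mass processed / equipment mass.
# MTR > 1 is the naive break-even point discussed in the text; real
# projects need a much higher ratio to cover risk and financing costs.

def mass_thruput_ratio(thruput_kg_per_year, equipment_kg, lifetime_years):
    return (thruput_kg_per_year * lifetime_years) / equipment_kg

# Hypothetical plant: 500 kg of equipment processing 2,000 kg of
# feedstock per year over a 5-year operating life.
print(mass_thruput_ratio(2000.0, 500.0, 5.0))  # 20.0
```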

A special consideration is the operation of chemical reactors in microgravity. So far all chemical reactors used in space — mostly rocket engines, and various kinds of life support equipment in space stations — have been designed for microgravity. However, Earthside chemical plants incorporate many processes that use gravity, and these must be redesigned. Microgravity may be advantageous for some kinds of reactions; this is an active area of research. On moons or other planets, we are confronted with various fixed low levels of gravity that may be difficult to design for. With a spinning tethered satellite in free space, we can get the best of all worlds: microgravity, Earth gravity, or even hypergravity where desired.

A bigger challenge is developing chemical reactors that are small enough to launch on rockets, have high enough thruput to be affordable, and are flexible enough to produce the wide variety of products needed for space industry. A long-range ideal strategy is K. Eric Drexler’s nanotechnology [2]. In this scenario small “techno-ribosomes”, designed and built molecule by molecule, would use organic material in space to reproduce themselves and produce useful product. An intermediate technology, under experimental research today, uses lithography techniques on the nanometer scale to produce designer catalysts and microreactors. Lithography, the technique which has made possible the rapid improvement in computers since 1970, has moved into the deep submicron scale in the laboratory, and will soon be moving there commercially. Lab research is also applying lithography to the chemical industry, where it might enable breakthroughs to rival those it produced in electronics.

Tim May has described nanolithography that uses linear arrays of 1e4-1e5 AFM’s that would scan a chip and fill in detail to 10 nm resolution or better. Elsewhere I have described a class of self-organizing molecules called _nanoresists_, which make possible the use of e-beams down to the 1 nm scale. Nanoresists range from ablatable films, to polymers, to biological structures. A wide variety of other nanolithography techniques are described in [4,5,6]. Small-scale lithography not only improves the feature density of existing devices, it also makes possible a wide variety of new devices that take advantage of quantum effects: glowing nanopore silicon, quantum dots (“designer atoms” with programmable electronic and optical properties), tunneling magnets, squeezed lasers, etc. Most important for our purposes, they make possible the mass production of tiny chemical reactors and designer catalysts. Lithography has been used to fabricate a series of catalytic towers on a chip [3]. The towers consist of alternating layers of SiO2 4.1 nm thick and Ni 2–10 nm thick. The deposition process achieves nearly one-atom thickness control for both SiO2 and Ni. Previously it was thought that positioning in three dimensions was required for good catalysis, but this catalyst’s nanoscale 1-d surface forces reagents into the proper binding pattern. It achieved six times the reaction rate of traditional cluster catalysts on the hydrogenolysis of ethane to methane, C2H6 + H2 –> 2CH4. The thickness of the nickel and silicon dioxide layers can be varied to match the size of molecules to be reacted.

Catalysts need to have structures precisely designed to trap certain kinds of molecules, let others flow through, and keep still others out, all without getting clogged or poisoned. Currently these catalysts are built by growing crystals of the right spacing in bulk. Sometimes catalysts come from biotech, for example the bacteria used to grow the corn syrup in soda pop. Within this millennium (only 7.1 years left!) we will start to see catalysts built by new techniques of nanolithography, including AFM machining, AFM arrays, and nanoresists. Catalysts are critical to the oil industry, the chemical industry and to pollution control — the worldwide market is in the $100’s of billions per year and growing rapidly.

There is also a big market for micron-size chemical reactors. We may one day see the flexible chemical plant, with hundreds of nanoscale reactors on a chip, the channels between them reprogrammed via switchable valves, much as the circuits on a chip can be reprogrammed via transistors. Even a more modest, large version of such a plant could have a wide variety of uses.

Their first use may be in artificial organs to produce various biological molecules. For example, they might replace or augment the functionality of the kidneys, pancreas, liver, thyroid gland, etc. They might produce psychoactive chemicals inside the blood-brain barrier, for example dopamine to reverse Parkinson’s disease. Biological and mechanical chemical reactors might work together, the first produced via metabolic engineering [7], the second via nanolithography.

After microreactors, metabolic engineering, and nanoscale catalysts have been developed for use on Earth, they will spin off for use in space. Microplants in space could manufacture propellant and a wide variety of industrial inputs, and perform life-support functions more efficiently. Over 95% of the mass we now launch into space could be replaced by materials produced from comets, asteroids, Mars, etc. Even if Drexler’s self-replicating assemblers are a long time in coming, nanolithographed tiny chemical reactors could open up the solar system.

[1] _Case for Mars_ conference proceedings, Zubrin et al., papers on “Mars Direct”
[2] K. Eric Drexler, _Nanosystems_, John Wiley & Sons, 1992
[3] Science, 20 Nov. 1992, p. 1337
[4] Ferry et al., eds., _Granular Nanoelectronics_, Plenum Press, 1991
[5] Geis & Angus, “Diamond Film Semiconductors”, Sci. Am., 10/92
[6] ???, “Quantum Dots”, Sci. Am., 1/93
[7] Science, 21 June 1991, pp. 1668, 1675

These microreactors have many uses in Lifeboat-relevant endeavors, including making human beings more resistant to harmful diseases. Molecular nanotechnology, rather than being a long-range prospect, is likely to be developed between 2010 and 2020; the Center for Responsible Nanotechnology has written at length in favor of this view.

There were several significant nanotechnology-related developments and announcements.

The UK Ideas Factory Sandpit announced three ambitious but, in my opinion, achievable projects in the 2–5 year timeframe:

1. A system with software-based control for the assembly of DNA oligomers, nanoparticles, and other small molecules. This would be a significant advance over current DNA synthesis if it succeeds.

2. Computer-directed actuators with sub-angstrom precision, based upon novel surface-bound, reconfigurable nanoscale building blocks, and a prototype computer-controlled matter manipulator (akin to a nanoscale conveyor belt).

3. A matter compiler project, which is to build the engineering control system that directs molecular assembly.

These announced projects could prompt the funding of more projects with aggressive molecular nanotechnology objectives. If so, this could be the beginning of a technological race.

D-Wave Systems has announced the date for the demonstration of its 16-qubit quantum computer.

D-Wave’s current roadmap calls for well over 1,000 qubits by the end of 2008.

There are some quantum algorithms that can’t be run on the current architecture. The technical reason is that the devices coupling qubits i and j are of the \sigma_z^{i} \sigma_z^{j} (ZZ) type, and some 16-qubit states can’t be generated with the X + Z + ZZ Hamiltonian. Their roadmap includes the addition of an XZ coupler to the architecture, which will make their systems universal. The motivation is that they plan to build processors specifically for quantum simulation, which represents a big commercial opportunity.
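To make the X + Z + ZZ Hamiltonian concrete, here is a two-qubit version built from Pauli matrices via Kronecker products. This is a generic textbook construction, not D-Wave's actual hardware Hamiltonian; the coupling strength J and the uniform unit coefficients are illustrative assumptions:

```python
# Illustrative two-qubit Hamiltonian of the X + Z + ZZ form discussed above:
# H = X1 + X2 + Z1 + Z2 + J * Z1 Z2, with J chosen arbitrarily.
# Pure-Python matrix algebra; not any real device's parameters.

I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def kron(a, b):
    """Kronecker product of two matrices given as lists of lists."""
    return [[a[i][j] * b[k][l]
             for j in range(len(a[0])) for l in range(len(b[0]))]
            for i in range(len(a)) for k in range(len(b))]

def add(*ms):
    """Element-wise sum of same-shaped matrices."""
    return [[sum(m[i][j] for m in ms) for j in range(len(ms[0][0]))]
            for i in range(len(ms[0]))]

J = 0.5   # illustrative ZZ coupling strength
H = add(kron(X, I), kron(I, X), kron(Z, I), kron(I, Z),
        [[J * v for v in row] for row in kron(Z, Z)])
for row in H:
    print(row)
```

Note that every entry of H is real, which reflects the restricted character of this Hamiltonian family; adding an XZ coupler term (\sigma_x^{i} \sigma_z^{j}) enlarges the set of reachable states.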

Their roadmap introduces a quantum-simulation processor line in 2009. NOTE: 1,000 qubits would span 2**1000 basis states, or about 10**301; for comparison, 10**80 is roughly the number of atoms in the observable universe. The 2009, 1,000+ qubit quantum-simulation processor would be a big boost for molecular nanotechnology research.
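The state-count comparison is easy to verify directly, since Python handles arbitrary-precision integers:

```python
# Check the note above: n qubits span 2**n basis states, and 1,000 qubits
# give about 10**301, dwarfing the ~10**80 atoms in the observable universe.

import math

n_states = 2 ** 1000
print(len(str(n_states)))        # 302 decimal digits, i.e. about 10**301
print(math.log10(2) * 1000)      # ~301.03, confirming the exponent
print(n_states > 10 ** 80)       # True
```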

Honeycomb nanotubes have been created by a team in China. They appear to transfer the high single-tube strength to the macroscale. These, along with carbon nanotube Superthreads (announced in 2006), seem like part of a wave of big carbon nanotube developments. They should have significant commercial impact, and the potential of carbon nanotubes to strengthen and alter products will start to be significantly realized in 2007. The other thing I draw from this is that the advances are happening in North America, Europe, and China.