
Proof that The End of Moore’s Law is Not The End of The Singularity

Posted in futurism, robotics/AI


Samsung 850 Pro: The solution to Moore’s Law ending.

During the last few years, the semiconductor industry has been having a harder and harder time miniaturizing transistors with the latest problem being Intel’s delayed roll-out of its new 14 nm process. The best way to confirm this slowdown in progress of computing power is to try to run your current programs on a 6-year-old computer. You will likely have few problems since computers have not sped up greatly during the past 6 years. If you had tried this experiment a decade ago you would have found a 6-year-old computer to be close to useless as Intel and others were able to get much greater gains per year in performance than they are getting today.

Many are unaware of this problem as improvements in software, and the current trend of having software rely on specialized GPUs instead of CPUs, have made this slowdown in performance gains less evident to the end user. (The more specialized a chip is, the faster it can run its target workload.) But despite such workarounds, people are already changing their habits, such as upgrading their personal computers less often. Recently, people upgraded their ancient Windows XP machines only because Microsoft forced the issue by discontinuing support for the still-popular operating system. (Windows XP was the second most popular desktop operating system in the world the day after Microsoft ended all support for it. At that point it was a 12-year-old operating system.)

If we depended on Moore’s Law alone to create the hardware for AIs to run on, it would be unlikely that AIs would become as smart as we are by 2029, as Ray Kurzweil has predicted. But all is not lost. Previously, electromechanical technology gave way to relays, then to vacuum tubes, then to solid-state transistors, and finally to today’s integrated circuits. One possibility for a sixth paradigm to provide exponential growth of computing has been to go from 2D integrated circuits to 3D integrated circuits. There have been small incremental steps in this direction; for example, Intel introduced 3D tri-gate transistors with its first 22 nm chips in 2012. While these transistors were slightly taller than the previous generation’s, the performance gains from this technology were not great. (Intel is simply making its transistors taller and thinner. It is not stacking such transistors on top of each other.)

But quietly this year, 3D technology has finally taken off. The recently released Samsung 850 Pro, which uses 42 nm flash memory, is competitive with rival products that use 19 nm flash memory. Considering that, on a conventional flat chip, a 42 nm cell takes up (42 × 42) / (19 × 19) ≈ 4.9 times the area of a 19 nm cell, and therefore stores about 4.9 times less data per unit of silicon, how did Samsung pull this off? They used their new 3D V-NAND architecture, which stacks 32 cell layers on top of one another. It wouldn’t be that hard for them to go from 32 layers to 64, then to 128, and so on. Expect flash drives to have greater capacity than hard drives in a couple of years! (Hard drives are running into their own version of an end-of-Moore’s-Law situation.) Note that by using 42 nm flash memory instead of 19 nm flash memory, Samsung is able to use bigger cells that can handle more read and write cycles.
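The density argument above is just arithmetic, and it is worth seeing how decisively the layer count wins. A back-of-the-envelope sketch (the 42 nm, 19 nm, and 32-layer figures are from the article; the "net gain" framing is my own simplification, ignoring peripheral circuitry and layer overhead):

```python
# Cell area scales roughly with the square of the process node, so a
# 42 nm cell takes up (42/19)^2 ~ 4.9x the area of a 19 nm cell.
planar_penalty = (42 * 42) / (19 * 19)
print(f"42 nm vs 19 nm planar area ratio: {planar_penalty:.1f}x")  # 4.9x

# Stacking cells vertically recovers the density: 32 layers more than
# offsets the coarser process node.
layers = 32
net_gain = layers / planar_penalty
print(f"Net density gain with {layers} layers: {net_gain:.1f}x")  # 6.5x
```

So even at the older, more reliable 42 nm node, 32 layers leave Samsung ahead of a flat 19 nm chip on raw bits per unit of wafer area.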

Samsung is not the only one with this 3D idea. For example, Intel has announced that it will be producing its own 32-layer 3D NAND chips in 2015. And 3D integrated circuits are, of course, not the only potential solution to the end of Moore’s Law. For example, Google is getting into the quantum computer business which is another possible solution. But there is a huge difference between a theoretical solution that is being tested in a lab somewhere and something that you can buy on Amazon today.

Finally, to give you an idea of how fast things are progressing, a couple months ago Samsung’s best technology was based on 24-layer 3D MLC chips and now Samsung has already announced that it is mass producing 32-layer 3D TLC chips that hold 50% more data per cell than the 32-layer 3D MLC chips currently used in the Samsung 850 Pro.
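The 50% figure follows directly from bits per cell: MLC (multi-level cell) flash stores 2 bits per cell, while TLC (triple-level cell) stores 3. A one-line check:

```python
# MLC stores 2 bits per cell; TLC stores 3. The extra bit is the
# article's "50% more data per cell".
mlc_bits, tlc_bits = 2, 3
gain = (tlc_bits - mlc_bits) / mlc_bits
print(f"TLC holds {gain:.0%} more data per cell than MLC")  # 50%
```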

The Singularity is near!

19 Comments so far

  1. Also expect another look at wafer-scale integration and low-resolution printing on large areas, e.g. how many transistors can a square metre of graphene hold?

  2. Eric,

    I also found myself wondering at the more recent trends in CPU processing power increases. Usually I try to avoid looking at trends of five years or less, but in this case, I gave it a try. My full post is here: http://www.williamhertling.com/2013/12/recent-rate-of-computer-processing-growth/

    What I found is that the compound annual growth rate (CAGR) in processing power did indeed fall off, from 53% annually from 2003 to 2008 to 9% annually from 2008 to 2013.

    However, I thought about what growth in processing power means. Does it mean the power of a given computer in isolation? Or the total computing power dedicated to me personally? For example, “In 2008, I had a laptop (Windows Intel Core 2 T7200) and a modest smartphone (a Treo 650). In 2013, I have a laptop (MBP 2.6 GHz Core i7), a powerful smartphone (Nexus 5), and a tablet (iPad Mini). I’m counting only my own devices and excluding those from my day job as a software engineer.”

    When I plotted the increase in total personal computing power, I found that it increased at 51% annually over the five years from 2008 to 2013, effectively the same as the longer term annual growth.
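The CAGR figures in this comment can be reproduced with the standard formula. The absolute benchmark numbers live in the linked post, not here, so the sketch below just shows the formula and what those rates imply over five years (illustrative values, not the commenter’s raw data):

```python
def cagr(start, end, years):
    """Compound annual growth rate between two measurements."""
    return (end / start) ** (1 / years) - 1

# The two rates imply very different 5-year multiples:
print(f"53%/yr over 5 years -> {1.53 ** 5:.1f}x total growth")  # 8.4x
print(f" 9%/yr over 5 years -> {1.09 ** 5:.1f}x total growth")  # 1.5x

# Sanity check: a 5-year doubling corresponds to ~14.9%/yr.
print(f"Doubling in 5 years = {cagr(1, 2, 5):.1%}/yr")
```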

    So it could be that although we’re running into some constraints with single-chip performance, it doesn’t matter, because we’ve already started to see the distribution of computing among many devices.

  3. Nice article, well balanced, kudos.

    Computing technologies continue to be broadly researched internationally. Memory storage, data processing, data routing and related bottleneck reduction, hyperscaling, … man-machine interfaces … are all advancing, but there seems to be a significant gap in data-processing development.

    I’m buying 8 Gbyte micro-SD memory for $4, and 256 Gbyte SD memory for around $30. The data transfer bottleneck needs to be addressed. Serial access, instead of parallel direct addressing, is currently a major speed bottleneck.

    This is a data-structure issue, not a foundational physics issue, so tremendous gains can be realized through new developments in parallel concurrent read/write data structures: redundant banks of memory, with electronics to interleave data being read and written concurrently by many-core processors.

    This redundant memory and interleave processing is the beginning of true parallel processing machines. Instead of the interleave processors managing memory for the many-core serial processors, the interleave processors and memory block states become the main processors and the many-core processors are reduced to extracting information based upon man-machine human interface needs.

    Basic mathematics and textual processing are human needs that are centered in serial processing. However, physics and economics modeling require the nuances of influence that parallel processing can support; i.e. things humans cannot directly relate to (neural networks, mutually influenced systems of relativity, manipulating moderators of space/time components of energy, force, momentum, angular acceleration…).

    I believe a technological singularity has already occurred. We are just not usefully engaging communications with it.

    Consider a civilization developed 9 billion years ago, before our solar system even formed. That technology development proceeded similarly to our own. Then the singularity already exists.

    Based on this assumption, the singularity is part of all of us physically, and part of our thoughts.

    What would be USEFUL for the singularity related to human thought?

    Praying for useless, self-centered, greed-based things would seem to be annoying, and would not be responded to.

    Could the singularity provide us with USEFUL insights and tools, if we met a fundamental communications criterion?

    Is controlling space/time a matter of USEFULLY connecting to an already developed tool?

    Ascension

    James Dunn

  4. The singularity is low-quality science fiction.

    There has been no progress in AI in accord with processor speed.

    You can’t “prove” that something isn’t over when you never proved that it started.

  5. Wow this is one of the sneakiest ways to sneak an amazon affiliate link into an article ever! However, still an interesting read.

  6. The singularity is near? If you believe that, then you probably don’t understand the relationship between computational power and lateral programming. AI is not a simple problem of clock cycles, cycles per operation, or the amount of data on a hard drive.

  7. For most people, bandwidth has replaced raw processing power as the key element of computing performance. Arguments for the singularity have never been about Moore’s Law; they are about convergence. Standalone computers have been replaced by computers and devices acting as access points to the internet and to its global repository of data and cloud-based processes and services. The singularity will come as we develop a way to converge advances in biology with advances in electronics, allowing humanity to directly interface our 3 pounds of gray matter with the internet of the year 20??

  8. Wow, I’m sure glad the reddit neckbeards showed up to express their intellectual superiority over the author about the singularity. Yes, this article may not provide proof, but it is a step in the right direction.

  9. @Marty

    The word “proof” was used to show that 3D chips are being mass produced and sold today. They are not just a curiosity in a lab. In fact, the Samsung 850 Pro is already a second-generation product with the third-generation using 32-layer 3D TLC chips coming out next year. (And Intel will be releasing a similar product next year as well.)

  10. The cloud will soon perform the most demanding computing, database, and storage tasks at a level portable devices cannot match. No surprise: portables are only one element of the internet of things, a vast and complex business of the near future that will demand huge computing, database, and knowledge capabilities in order to become exponentially profitable. These capabilities will surely be delivered by a large distributed, integrated, and highly specialized network of hardware and software. Industry has understood this point well: Apple, Google, IBM, Microsoft, Samsung, and many other large corporations, as well as startups, are focusing their efforts in this direction. Clock speed and 3D architecture are only part of the problem; a much more integrated approach is needed: a different architecture allowing faster communication between CPU and memory (using light, too), specialized neuromorphic chips for A.I., memristors… A plethora of new technologies are emerging in all fields of informatics (medicine included). Probably within five years the HP mini supercomputer will be a reality, as well as the optical supercomputer that others are experimenting with in Europe. Plans to increase top supercomputer speed are under way, and in 10–15 years, given the recent progress in quantum computing, quantum machines will also be available. The success factor in this case is the convergence and integration of many different technologies, hardware and software alike, all driven by huge market opportunities and projected demand.

  11. @ Eric Klien

    I have to say your title does not reflect your statement. 3D chips, especially in their current state, are not the answer to Moore’s Law (or the singularity, for that matter), but they are a good starting point. I was just trying to defend you against the horde of neckbeards!

  12. Just a final provoking question: where will the first superintelligences be generated? In the labs, within top-security supercomputers? Or will they emerge in the net, silently, without humanity even realizing it, thanks to the incomparable power of distributed computing and a far more stimulating learning environment? In conclusion, 3D architecture is only a small element of the puzzle leading to the singularity, since the driving forces are far more complex and pervasive. Moore’s Law itself shows apparent limits in describing these trends; it is just one element of more complex indexes that can help us manage strategies toward the singularity and, later, try to integrate with it (if self-generated super A.I.s allow us). Connection and synergies are other fundamentals. I personally believe that new actors and forces in this complex reality, other than humanity, have already entered the stage leading to the singularity. The future, in my opinion, no longer belongs to us. Sorry if I did not limit my comments to the central topic of 3D systems and, more generally, Moore’s Law.

  13. @James Dunn

    “The data transfer bottleneck needs to be addressed.”

    Currently, the SSD interface is transitioning from SATA to PCI Express and the related M.2 interface. Pretty much all new SSDs can transfer data as fast as the latest SATA interface allows, so this is a big deal.

    While this doesn’t fully address the many issues that your detailed comment covered, it is a step in the right direction.

    @endthedisease

    Being able to do 3D will give manufacturers yet another way to increase computing power. The more options they can pick from to increase computing power, the more likely that computing power will increase faster than ever. Note that it will be a few years before NAND chip (SSD) manufacturers get really good at 3D manufacturing. And at this point we don’t know how soon such developments will come to CMOS chip (CPU) manufacturers. I expect that the success of 3D NAND chips will inspire more research into 3D CMOS chips, and this is a big deal.

    Here’s a bit of info about the 3D CMOS chip situation:

    Besides the 3D tri-gate transistors which were introduced by Intel in 2012, chips are getting more and more metal layers in them. For instance, here are the stats for the last 8 Intel generations:

    Merom, 65nm = 8 layers.
    Penryn, 45nm = 9 layers.
    Nehalem, 45nm = 9 layers.
    Westmere, 32nm = 9 layers.
    Sandy Bridge, 32nm = 9 layers.
    Ivy Bridge, 22nm = 9 layers.
    Haswell, 22nm = 11 layers.
    Broadwell, 14nm = 13 layers.

    Note the recent jump in layers for the last two generations!
