The Death of Moore’s Law: What it means and what might fill the gap going forward (2024)

Written By: Audrey Woods

In 1965, engineer and businessman Gordon Moore observed a trend that would go on to define the unprecedented technological explosion we’ve experienced over the past fifty years. Noting that the number of transistors in an integrated circuit was doubling on a predictable schedule, Moore laid out his eponymous law, which has since become the engine behind the growing computer science industry, making everything we now enjoy—cellphones, high-resolution digital imagery, household robots, computer animation, etc.—possible.

However, Moore’s Law was never meant to last forever. Transistors can only get so small before the more permanent laws of physics get in the way. Transistors can already be measured on an atomic scale: the smallest commercially available are built on a 3-nanometer process, barely wider than a strand of human DNA (2.5nm). While there’s still room to make them smaller (in 2021, IBM announced the successful creation of 2-nanometer chips), such progress has become prohibitively expensive and slow, putting reliable gains into question. And there remains a hard physical limit: wires cannot be thinner than atoms, at least not with our current understanding of material physics.

THE REALITY: MOORE’S LAW IS OVER

If you ask MIT Professor Charles Leiserson, Moore’s Law has been over since at least 2016. In conversation with CSAIL Alliances, he points out that it took Intel five years to go from 14-nanometer technology (2014) to 10-nanometer technology (2019), rather than the two years Moore’s Law would predict. Although miniaturization is still happening, the Moore’s Law standard of doubling the components on a semiconductor chip every two years has been broken. The implications are far-reaching and, Professor Leiserson admits, concerning, especially with the recent frenzy around generative AI and large language models (LLMs). He says, “the only way to get more computing capacity today is to build bigger, more energy-consuming machines. If we’re in an AI arms race with our adversaries, it could have a dramatically bad impact on climate.”
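To see the scale of that slip, consider a rough back-of-the-envelope check (a simplification that treats node names as literal feature sizes, which modern process naming no longer strictly guarantees): shrinking linear dimensions from 14 nanometers to 10 nanometers multiplies transistor density by about

(14 / 10)^2 ≈ 2

so that transition bought roughly one doubling of density, the gain Moore’s Law budgets two years for, and it took five.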

Does this mean that the pace of progress we’ve all grown accustomed to will slow down? Or are there ways to continue enjoying the technological gains of the past half-century? Together with CSAIL Research Scientist Neil Thompson, MIT Professor of the Practice Joel Emer, MIT Adjunct Professor Butler Lampson, Research Scientist Tao B. Schardl, and other MIT scholars, Professor Leiserson released a paper in 2020 titled “There’s plenty of room at the Top: What will drive computer performance after Moore’s law?” The publication put forward several ideas for how improvements in computer performance can be found at the “top” of the computing stack rather than at the transistor level. The authors concluded that there are still significant gains to be had through software performance engineering, proposing solutions in software, algorithms, and hardware architecture that make systems more efficient and therefore faster.

One problem, the authors explain, is that programmers have grown accustomed to treating consistent performance improvement as a given, which has led to practices that value productivity over performance. This might mean reusing code that worked for one problem on a different problem where it’s less efficient, or writing simple code because it’s easier than more complicated but faster alternatives. In their paper, the authors achieved five orders of magnitude of speedup on certain applications just by optimizing how the code was written. Although most applications can’t be sped up by that much, software performance engineering offers a promising way to add substantial computing capability even as hardware advances yield diminishing returns. “You can think about it like retirement,” Professor Leiserson says. “While you’re earning, it may be more productive to increase your earning ability than to cut costs. When you retire and are on a fixed income, you cut costs as your only option. The post-Moore era is like retirement.”
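As a concrete illustration of the kind of optimization the paper describes (a generic sketch, not the authors’ actual code), consider multiplying two large matrices in C. Simply reordering the loops so that memory is read sequentially, rather than strided, can speed the computation up severalfold on typical hardware without changing the result:

#include <stddef.h>

#define N 1024

/* Naive (i, j, k) loop order: the inner loop walks down a column of b,
 * touching a new cache line on nearly every iteration. */
void matmul_ijk(const double *a, const double *b, double *c) {
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++) {
            double sum = 0.0;
            for (size_t k = 0; k < N; k++)
                sum += a[i * N + k] * b[k * N + j];
            c[i * N + j] = sum;
        }
}

/* Reordered (i, k, j): b and c are now accessed row by row, so reads
 * and writes are sequential and cache-friendly. */
void matmul_ikj(const double *a, const double *b, double *c) {
    for (size_t i = 0; i < N; i++) {
        for (size_t j = 0; j < N; j++)
            c[i * N + j] = 0.0;
        for (size_t k = 0; k < N; k++) {
            double aik = a[i * N + k];
            for (size_t j = 0; j < N; j++)
                c[i * N + j] += aik * b[k * N + j];
        }
    }
}

Both functions compute exactly the same product; only the memory-access pattern differs. Layering parallelism, vectorization, and cache blocking on top of changes like this is how the authors reached their much larger speedups.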

OPTIONS & ALTERNATIVES

Transitioning to coding techniques that prioritize performance is easier said than done, though, especially when a whole generation of programmers has been trained in a Moore’s Law environment. Faster code is, unfortunately, slower to write and harder to reason about. To address this, Professor Leiserson aims to develop tools that make it easier and more enjoyable to write efficient code. One such tool is OpenCilk, a platform for task-parallel programming whose development Dr. Schardl has led; it simplifies parallel coding and incorporates software productivity tools for correctness, scalability, and performance. As leaders of the Supertech Research Group, Professor Leiserson and Dr. Schardl are also exploring parallel applications, adaptive computing, cache-oblivious algorithms, productivity tools, and other methods that support scalable, high-performance computing.
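To give a flavor of what that looks like in practice (a minimal sketch modeled on the canonical parallel-Fibonacci teaching example from OpenCilk’s documentation, not code from any CSAIL project), OpenCilk extends C/C++ with a few keywords that let the runtime schedule recursive calls across cores:

#include <cilk/cilk.h>
#include <stdio.h>

/* Deliberately naive parallel Fibonacci: the standard teaching example
 * for fork-join task parallelism. */
long fib(long n) {
    if (n < 2) return n;
    long x, y;
    cilk_scope {
        x = cilk_spawn fib(n - 1); /* may run on another core */
        y = fib(n - 2);            /* continues in the current task */
    }                              /* implicit sync: both results ready */
    return x + y;
}

int main(void) {
    printf("fib(40) = %ld\n", fib(40));
    return 0;
}

Built with the OpenCilk toolchain (for example, clang -fopencilk -O3 fib.c), the program lets the runtime’s scheduler balance spawned tasks across available cores, so the programmer declares where parallelism is allowed rather than managing threads by hand.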

Are there other innovations in the pipeline that could replace Moore’s Law? Some have hailed quantum computing as a technology that might take off the way digital computing did. Professor Leiserson is skeptical, though, since quantum computing is still such a young science and is unlikely to replace general-purpose computing anyway. Quantum computers are projected to be superior at solving problems like breaking encryption, modeling complex interactions such as protein bonding, and boosting the potential of AI and machine learning. But, at least for the foreseeable future, quantum computers are no better and, in some cases, worse than classical computers at the great many tasks we require of our computers every day. All this means that quantum computers are far from guaranteed to fuel a digital boom comparable to that of the past fifty years.

There are other “exotic technologies,” as Professor Leiserson calls them, such as 3D integration, photonic computing, carbon nanotube transistors, and neuromorphic computing, any one of which might someday emerge as a contender to replace Moore’s Law. But Professor Leiserson says, “I doubt we’ll see its like again. It is unique in history. There’s always been ample opportunity to innovate in a free society, regardless of the current state of technology. But as for getting free increases in computational capability year after year, I’m afraid that’s over. Now, we’ll have to work for our gains.”

On the bright side, he believes such scarcity will inspire more innovation and creativity. Where before developers didn’t have to think too hard about the structure of their software since “if there’s a speed problem, Moore’s Law would solve that for you in a couple of years,” now noticeable performance growth will require new tools, languages, hardware, and ways of thinking that will challenge the computer science community.

WHAT CAN BE DONE RIGHT NOW

In the short term, this means companies looking to deal with the increasing cost of computing power can start by prioritizing education and up-to-date training for their programmers, with a particular focus on eliminating silos between software, algorithms, and hardware. Dr. Thompson points out, “for tech giants like Google and Amazon, the huge scale of their data centers means that even small improvements in software performance can result in large financial returns.” Understanding new tools like OpenCilk, applying methods such as parallel computing, and paying attention to efficiency techniques coming out of research institutions like MIT CSAIL can help those looking to prolong the upward trend of progress.

Dr. Thompson has also raised awareness about the need for massive public investment in successor technologies, pointing to the CHIPS Act as an “incredibly important place to start.” With the U.S. losing its historic lead in advanced computing, it’s important for our leaders to look ahead and support research, startups, and growth in new technological sectors. While there’s no guarantee that such investments will pay off, one thing’s certain: society will only need more computing power going forward. As our digital tools become more interconnected, integrated, and intelligent, it will be increasingly necessary to squeeze everything we can out of the technology we have. Which means that while Moore’s Law might be over, research in this space is just getting started.

To learn more about the work happening at CSAIL in this area, visit our website at https://cap.csail.mit.edu/ or contact Lori Glover at lglover@mit.edu.

FAQs

What is the death of Moore's Law?

As we continue to miniaturize chips, we'll no doubt bump into Heisenberg's uncertainty principle, which limits precision at the quantum level, thus limiting our computational capabilities. James R. Powell calculated that, due to the uncertainty principle alone, Moore's Law will be obsolete by 2036.

What will happen after Moore's Law ends?

Reducing the CPU consumption of workloads has always been a smart move for companies that want to save money on hosting costs. But in a post-Moore's Law world, workload optimization will become even more crucial. This means we're likely to see more workloads move to containers, for example.

What will replace Moore's Law?

There are other “exotic technologies,” as Professor Leiserson calls them, such as 3D integration, photonic computing, carbon nanotube transistors, and neuromorphic computing, any one of which might someday emerge as a contender to replace Moore's Law.

What is Moore's Law and why is it important?

Moore's Law is the observation that the number of transistors on an integrated circuit will double every two years with minimal rise in cost. Intel co-founder Gordon Moore predicted a doubling of transistors every year for the next 10 years in his original paper published in 1965.
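Written as a simple growth formula (a standard formalization, not something from Moore's paper itself): if N0 is the transistor count in a baseline year, then after t years the count is roughly

N(t) = N0 × 2^(t/2)

where the exponent t/2 reflects the two-year doubling period; under Moore's original one-year cadence from 1965, the exponent would be t.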

How does Moore's Law affect you and your daily life?

Economic Implications of Moore's Law

One of the economic impacts of the law is that computing devices continue to show exponential growth in complexity and computing power while effecting a comparable reduction in cost to the manufacturer and the consumer.

What is the problem with Moore's Law?

The primary negative implication of Moore's law is that obsolescence pushes society up against the Limits to Growth. As technologies continue to rapidly "improve", they render predecessor technologies obsolete.

Is Moore's Law still going strong?

In 1975, Moore revised his observation and predicted that the number of components would double every two years. This prediction remained fairly accurate for nearly 50 years, and in 2024, engineers and scientists are still attempting to keep up; they have succeeded in printing transistors just a few nanometers across, approaching atomic scale.

Why is Moore's Law not applicable today?

Because Moore's Law concerns the number of transistors on a chip, each successive doubling becomes harder to achieve, even with technological advancements. "Moore's Law is slowing down, and the cost-per-function benefits of further transistor shrinkage have gone," says Furber. "There are clearly physical limits here!"

What do you think will happen if Moore's Law runs out of steam?

While its ending has been more of a soft wane than a dramatic cliff, its implications are anything but gentle. Technological innovation won't stop, but it will be radically different from what the industry has known. Some see this as a call to arms for creative disruption, others as the freedom to innovate.

What are the solutions to Moore's Law?

One proposed way to bypass the death of Moore's Law is a new quantum computing paradigm that would replace electron-based transistors with quantum-mechanical transistors. This might be accomplished with Bose-Einstein condensates (BECs); atomic BECs were first achieved in 1995.

What is beyond Moore's Law? ›

The end of Moore's Law will affect all devices—both processing and storage—that depend on shrinking feature size to make progress. Increasing circuit or storage density will require a technology that supports signal gain and reduces the energy that data movement consumes.

What does Moore's Law predict?

Moore's Law is an observation that the number of transistors in a computer chip doubles every two years or so. As the number of transistors increases, so does processing power. The law also states that, as the number of transistors increases, the cost per transistor falls.

Will semiconductors become obsolete?

There are a multitude of reasons that semiconductors become obsolete, reflecting the dynamic nature of technology and the demands of the market.

What is "More than Moore"?

"More than Moore" is the functional diversification of the integrated circuit: rather than packing in ever more transistors, it focuses on integrating new materials and functionality on the chip by incorporating non-digital components.

Does Moore's Law apply to AI?

Now, enter the world of artificial intelligence, where the pace set by Moore's Law seems almost leisurely in comparison. AI is on a sprint, with its computational power doubling not every two years, but approximately every six months.
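To put those cadences side by side: doubling every six months means four doublings in the time Moore's Law allows for one. Over a two-year span, that works out to

2^(24/6) = 16× for AI compute, versus 2^(24/24) = 2× under Moore's Law

(a stylized comparison; the six-month figure is the estimate cited above, not a physical law).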

Why is Moore's Law no longer valid?

Strictly speaking, chip densities are no longer doubling every two years, so Moore's Law no longer holds by its strictest definition. In a looser sense, however, the law isn't dead: computing is still delivering exponential improvements, albeit at a slower pace.

What threatens Moore's Law?

As chips get smaller and more powerful, they get hotter and present power-management challenges. And at some point, Moore's Law will stop because we will no longer be able to shrink the spaces between components on a chip.

What does Moore's Law refer to?

Moore's Law states that the number of transistors on a chip doubles every 24 months. More precisely, the law is an empirical observation that the density of semiconductor integrated circuits one can most economically manufacture doubles about every two years.

What change did Moore make to his prediction?

In his original 1965 paper, Moore predicted that it was possible that by 1975 there would be 65,000 components on an integrated circuit. In 1975, he revised his observation and predicted that the number of components would double every two years rather than every year.
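The 65,000 figure is simply the original doubling-every-year cadence carried forward for a decade (a back-of-the-envelope check, assuming roughly 64 components on the densest chips of 1965):

64 × 2^10 = 65,536 ≈ 65,000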
