Google is using machine learning to help design its next generation of machine learning chips. The algorithm's designs are "comparable or superior" to those created by humans, say Google's engineers, but can be generated much, much faster. According to the tech giant, work that takes months for humans can be accomplished by AI in under six hours.
Google has been working on how to use machine learning to create chips for years, but this recent effort, described this week in a paper in the journal Nature, seems to be the first time its research has been applied to a commercial product: an upcoming version of Google's own TPU (tensor processing unit) chips, which are optimized for AI computation.
"Our method has been used in production to design the next generation of Google TPU," write the paper's authors, co-led by Google research scientists Azalia Mirhoseini and Anna Goldie.
AI, in other words, is helping accelerate the future of AI development.
In the paper, Google's engineers note that this work has "major implications" for the chip industry. It should allow companies to more quickly explore the possible architecture space for upcoming designs and more easily customize chips for specific workloads.
An editorial in Nature calls the research an "important achievement," and notes that such work could help offset the forecasted end of Moore's Law, an axiom of chip design from the 1970s that states that the number of transistors on a chip doubles every two years. AI won't necessarily solve the physical challenges of squeezing more and more transistors onto chips, but it could help find other paths to increasing performance at the same rate.
The specific task that Google's algorithms tackled is known as "floorplanning." This usually requires human designers who work with the aid of computer tools to find the optimal layout on a silicon die for a chip's sub-systems. These components include things like CPUs, GPUs, and memory cores, which are connected together using tens of kilometers of minuscule wiring. Deciding where to place each component on a die affects the eventual speed and efficiency of the chip. And, given both the scale of chip manufacture and the number of computational cycles involved, nanometer-scale changes in placement can end up having huge effects.
Google compared the challenge of chip design to a board game
Google's engineers note that designing floor plans takes "months of intense effort" for humans, but, from a machine learning perspective, there is a familiar way to tackle this problem: as a game.
AI has proven time and time again that it can outperform humans at board games like chess and Go, and Google's engineers note that floorplanning is analogous to such challenges. Instead of a game board, you have a silicon die. Instead of pieces like knights and rooks, you have components like CPUs and GPUs. The task, then, is simply to find each board's "win conditions." In chess that might be checkmate; in chip design, it's computational efficiency.
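To make the analogy concrete, here is a minimal sketch of floorplanning treated as a game: the "board" is a small grid standing in for the die, and each "move" places one component in an empty cell. The grid size, component names, and placement rules below are invented for illustration; they are not Google's actual setup.

```python
# Toy sketch: floorplanning as a sequential placement "game."
# The 8x8 grid stands in for the silicon die; each move drops one
# component onto an empty cell. Everything here is illustrative.
GRID = 8
components = ["cpu", "gpu", "sram", "io"]  # hypothetical sub-systems

def legal_moves(occupied):
    """Every empty grid cell is a legal spot for the next component."""
    return [(r, c) for r in range(GRID) for c in range(GRID)
            if (r, c) not in occupied]

def play_episode(policy):
    """Place components one at a time, like moves in a board game."""
    placement = {}
    for part in components:
        placement[part] = policy(part, legal_moves(placement.values()))
    return placement

# A trivial "policy" that always takes the first available cell.
if __name__ == "__main__":
    print(play_episode(lambda part, moves: moves[0]))
```

A real placement policy would be learned rather than hard-coded, but the game structure, a board, a sequence of moves, and a final score, is the same.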
Google's engineers trained a reinforcement learning algorithm on a dataset of 10,000 chip floor plans of varying quality, some of which had been randomly generated. Each design was tagged with a specific "reward" function based on its success across different metrics like the length of wire required and power usage. The algorithm then used this data to distinguish between good and bad floor plans and generate its own designs in turn.
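The paper describes the reward as a combination of metrics such as wirelength and power. As a rough, hypothetical sketch, scoring a finished layout might look something like the code below; the Manhattan-distance wirelength proxy and the 0.5 power weighting are assumptions for illustration, not the formula Google used.

```python
# Illustrative reward for a finished layout: shorter wiring and lower
# estimated power earn a higher (less negative) score. The specific
# proxies and weights are assumptions, not Google's actual reward.
def wirelength(placement, nets):
    """Sum of Manhattan distances between connected components."""
    total = 0
    for a, b in nets:
        (r1, c1), (r2, c2) = placement[a], placement[b]
        total += abs(r1 - r2) + abs(c1 - c2)
    return total

def reward(placement, nets, power_estimate):
    """Higher is better: penalize long wiring and estimated power use."""
    return -(wirelength(placement, nets) + 0.5 * power_estimate)

# Example: score a hypothetical four-component layout.
nets = [("cpu", "sram"), ("gpu", "sram"), ("cpu", "io")]
layout = {"cpu": (0, 0), "gpu": (0, 1), "sram": (1, 0), "io": (7, 7)}
print(reward(layout, nets, power_estimate=3.0))
```

In a reinforcement learning setup, a score like this is the feedback signal: layouts that earn higher rewards nudge the policy toward similar placement decisions in the future.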
As we've seen when AI systems take on humans at board games, machines don't necessarily think like humans and often arrive at unexpected solutions to familiar problems. When DeepMind's AlphaGo played human champion Lee Sedol at Go, this dynamic led to the infamous "move 37," a seemingly illogical piece placement by the AI that nevertheless led to victory.
Nothing quite so dramatic happened with Google's chip-designing algorithm, but its floor plans nevertheless look quite different from those created by a human. Instead of neat rows of components laid out on the die, sub-systems look like they've almost been scattered across the silicon at random. An illustration from Nature shows the difference, with the human design on the left and the machine learning design on the right. You can also see the general difference in the image below from Google's paper (orderly humans on the left; jumbled AI on the right), though the layout has been blurred as it's confidential.
This paper is noteworthy, particularly because its research is now being used commercially by Google. But it's far from the only example of AI-assisted chip design. Google itself has explored using AI in other parts of the process, like "architecture exploration," and rivals like Nvidia are looking into other methods to speed up the workflow. The virtuous cycle of AI designing chips for AI looks like it's only just getting started.
Update, Thursday June 10th, 3:17PM ET: Updated to clarify that Google's Azalia Mirhoseini and Anna Goldie are co-lead authors of the paper.