Chip War: The Fight for the World’s Most Critical Technology by Chris Miller – Here are my Six Lessons and Takeaways

Members of Congress would no doubt have been furious had they learned that DARPA—ostensibly a defense agency—was wining and dining professors of computer science as they theorized about chip design. But it was efforts like these that shrank transistors, discovered new uses for semiconductors, drove new customers to buy them, and funded the subsequent generation of smaller transistors.

We rarely think about chips, yet they’ve created the modern world. The fate of nations has turned on their ability to harness computing power.  

This book contends that semiconductors have defined the world we live in, determining the shape of international politics, the structure of the world economy, and the balance of military power.

A next-generation chip emerged roughly once every two years, requiring new facilities and new machinery.

Strategists in Beijing and Washington now realize that all advanced tech—from machine learning to missile systems, from automated vehicles to armed drones—requires cutting-edge chips, known more formally as semiconductors or integrated circuits. A tiny number of companies control their production.

Globalization as we know it wouldn’t exist without the trade in semiconductors. 

Around a quarter of the chip industry’s revenue comes from phones; much of the price of a new phone pays for the semiconductors inside. For the past decade, each generation of iPhone has been powered by one of the world’s most advanced processor chips. In total, it takes over a dozen semiconductors to make a smartphone work, with different chips managing the battery, Bluetooth, Wi-Fi, cellular network connections, audio, the camera, and more. …Apple makes precisely none of these chips. It buys most off-the-shelf: memory chips from Japan’s Kioxia, radio frequency chips from California’s Skyworks, audio chips from Cirrus Logic, based in Austin, Texas.

Kilby called his invention an “integrated circuit,” but it became known colloquially as a “chip,” because each integrated circuit was made from a piece of silicon “chipped” off a circular silicon wafer.   

At his parents’ fiftieth wedding anniversary party in 1972, Bob Noyce interrupted the festivities, held up a silicon wafer, and declared to his family: “This is going to change the world.” Now general logic could be mass-produced. Computing was ready for its own industrial revolution and Intel had the world’s most advanced assembly lines. 

However, “globalization” of chip fabrication hadn’t occurred; “Taiwanization” had. Technology hadn’t diffused. It was monopolized by a handful of irreplaceable companies. American tech policy was held hostage to banalities about globalization that were easily seen to be false.

Chris Miller: Chip War: The Fight for the World’s Most Critical Technology

 —————-

I am generally reluctant to use the word “war” in a context that isn’t actually…war.  But, this book is not wrong to use the word war in its title.

The world runs on these tiny, ever-smaller chips that we find inside practically…everything. Cars, TVs, Smartphones, computers, missiles, power plants…everything. Elon Musk even wants to implant them inside our human brains.

The book Chip War: The Fight for the World’s Most Critical Technology by Chris Miller tells us about the history of these remarkable chips.  And, he tells us enough about how they are made, and where they are made, and how many countries are involved in their making, that it is easy to see why the word war is appropriate.

Let me state the obvious: if the world loses access to one specific company and its plant in Taiwan, we will be…sent back to the dark ages; to the time BC – Before Chips.

In my synopses, I ask What is the point?  Here is the point for this book:  The world – the entire world – is being run on those tiny computer chips.  The “war” we fight is for (the future of) those chips. 

And I ask Why is this book worth our time?  Here are my three reasons for this book:

#1 – This book provides a remarkable history of the computing world, and the shrinking of transistors into ever-smaller, ever-tinier chips.

#2 – This book helps us understand how global (sort of) this computer chip world is. 

#3 – This book shows us the vast, gigantic challenge facing any country that wants to do all of its needed chip building within its own borders.  It may not be possible!

I include a few pages of Quotes and Excerpts from the book—the “best of” Randy’s highlighted Passages.  Here are a number of the best of the best from this book: 

China now spends more money each year importing chips than it spends on oil.   

China is devoting its best minds and billions of dollars to developing its own semiconductor technology in a bid to free itself from America’s chip choke. If Beijing succeeds, it will remake the global economy and reset the balance of military power. World War II was decided by steel and aluminum, and followed shortly thereafter by the Cold War, which was defined by atomic weapons. The rivalry between the United States and China may well be determined by computing power. 

At the core of computing is the need for many millions of 1s and 0s. …these numbers don’t actually exist. They’re expressions of electrical currents, which are either on (1) or off (0). 
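
That point deserves one tiny illustration. Here is a minimal sketch in Python (my illustration, not the book's): every number a chip handles is just a pattern of on/off electrical states.

```python
# Each number in a computer is stored as a pattern of bits.
# Reading a 1 as "current on" and a 0 as "current off" shows
# that the "numbers" are really just electrical states.

value = 42
bits = format(value, "08b")  # the eight bits of one byte: "00101010"

for position, bit in enumerate(bits):
    state = "on" if bit == "1" else "off"
    print(f"bit {position}: {bit} (current {state})")
```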

Last year, the chip industry produced more transistors than the combined quantity of all goods produced by all other companies, in all other industries, in all human history. Nothing else comes close.   

Looking forward from 1965, Moore predicted a decade of exponential growth—but this staggering rate of progress has continued for over half a century. In 1970, the second company Moore founded, Intel, unveiled a memory chip that could remember 1,024 pieces of information (“bits”). It cost around $20, roughly two cents per bit. Today, $20 can buy a thumb drive that can remember well over a billion bits. 
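
The arithmetic in that passage is easy to verify. A quick sketch using only the numbers the excerpt gives ($20 for 1,024 bits in 1970; "well over a billion" bits for $20 today, with one billion taken here as a deliberately conservative floor):

```python
# Price per bit, 1970 vs. today, using the excerpt's own numbers.

price = 20.0                        # dollars, both then and now
bits_1970 = 1024                    # Intel's 1970 memory chip
bits_today = 1_000_000_000          # "well over a billion" -- a floor

per_bit_1970 = price / bits_1970    # ~$0.0195, i.e. roughly two cents
per_bit_today = price / bits_today  # $0.00000002 at most

print(f"1970:  ${per_bit_1970:.4f} per bit")
print(f"today: ${per_bit_today:.8f} per bit (at most)")
print(f"improvement: at least {per_bit_1970 / per_bit_today:,.0f}x")
```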

Chips from Taiwan provide 37 percent of the world’s new computing power each year. Two Korean companies produce 44 percent of the world’s memory chips. The Dutch company ASML builds 100 percent of the world’s extreme ultraviolet lithography machines, without which cutting-edge chips are simply impossible to make. OPEC’s 40 percent share of world oil production looks unimpressive by comparison. 

A state-of-the-art computer called ENIAC, built for the U.S. Army at the University of Pennsylvania in 1945 to calculate artillery trajectories, had eighteen thousand vacuum tubes. On average, one tube malfunctioned every two days, bringing the entire machine to a halt and sending technicians scrambling to find and replace the broken part.

Like Chang, Noyce and Moore saw no limits to the growth of the chip industry so long as they could figure out mass production.   

The Nobel Prize for inventing the transistor went to Shockley, Bardeen, and Brattain. Jack Kilby later won a Nobel for creating the first integrated circuit; had Bob Noyce not died at the age of sixty-two, he’d have shared the prize with Kilby. 

In 1965, Moore was asked by Electronics magazine to write a short article on the future of integrated circuits. He predicted that every year for at least the next decade, Fairchild would double the number of components that could fit on a silicon chip. If so, by 1975, integrated circuits would have sixty-five thousand tiny transistors carved into them, creating not only more computing power but also lower prices per transistor. As costs fell, the number of users would grow. This forecast of exponential growth in computing power soon came to be known as Moore’s Law. It was the greatest technological prediction of the century. 
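
That forecast is worth working out by hand. A short sketch, assuming the roughly 64 components of a 1965-era chip as the starting point (an assumption on my part; the excerpt states only the 1975 endpoint):

```python
# Moore's 1965 forecast: components per chip double every year.
# Starting from ~64 components (assumed 1965 baseline), ten
# annual doublings bring us to 1975.

components = 64
for year in range(1966, 1976):
    components *= 2
    print(f"{year}: {components:>6,} components per chip")

# 1975 lands on 65,536 -- the "sixty-five thousand" Moore predicted.
```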

Alongside new scientific discoveries and new manufacturing processes, this ability to make a financial killing was the fundamental force driving forward Moore’s Law. As one of Fairchild’s employees put it in the exit questionnaire he filled out when leaving the company: “I… WANT… TO… GET… RICH.”

Simply stealing a chip didn’t explain how it was made, just as stealing a cake can’t explain how it was baked. The recipe for chips was already extraordinarily complicated.   

But letting Japan build an electronics industry was part of U.S. Cold War strategy, so, during the 1960s, Washington never put much pressure on Tokyo over the issue.   

Handheld calculators were the iPhones of the 1970s. 

“In the past 200 years we have improved our ability to manufacture goods and move people by a factor of 100,” Mead calculated. “But in the last 20 years there has been an increase of 1,000,000 to 10,000,000 in the rate at which we process and retrieve information.” A revolutionary explosion of data processing was coming. “We have computer power coming out of our ears.” 

The U.S. military lost the war in Vietnam, but the chip industry won the peace that followed, binding the rest of Asia, from Singapore to Taiwan to Japan, more closely to the U.S. via rapidly expanding investment links and supply chains.  

Sony’s research director, the famed physicist Makoto Kikuchi, told an American journalist that Japan had fewer geniuses than America, a country with “outstanding elites.” But America also had “a long tail” of people “with less than normal intelligence,” Kikuchi argued, explaining why Japan was better at mass manufacturing. 

“Disruptive innovation” sounded attractive in Clayton Christensen’s theory, but it was gut-wrenching in practice, a time of “gnashing of teeth,” Grove remembered, and “bickering and arguments.”

One popular Soviet joke from the 1980s recounted a Kremlin official who declared proudly, “Comrade, we have built the world’s biggest microprocessor!” 

In 2001, Apple released the iPod, a visionary product showing how digital technology could transform any consumer device.

Every PC maker, from IBM to Compaq, had to use an Intel or an AMD chip for their main processor, because these two firms had a de facto monopoly on the x86 instruction set that PCs required. There was a lot more competition in the market for chips that rendered images on screens.  

GPUs, by contrast, are designed to run multiple iterations of the same calculation at once. This type of “parallel processing,” it soon became clear, had uses beyond controlling pixels of images in computer games. It could also train AI systems efficiently.  
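
To make “parallel processing” concrete, here is a minimal sketch; NumPy's batched array operations stand in for a GPU's many simultaneous lanes (an analogy run on a CPU, not actual GPU code):

```python
import numpy as np

# The same multiply-and-add step applied to a million inputs.
# A serial loop touches one element at a time, the way a single
# CPU core would; one vectorized call expresses the whole batch
# at once -- the programming model GPUs are built to exploit.

rng = np.random.default_rng(0)
inputs = rng.random(1_000_000)
weights = rng.random(1_000_000)

# Serial: one iteration per element.
serial_total = 0.0
for x, w in zip(inputs, weights):
    serial_total += x * w

# Batched: one call over all elements at once.
batched_total = float(np.dot(inputs, weights))

print(serial_total, batched_total)  # equal up to floating-point error
```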

The U.S. government in general buys a smaller share of the world’s chips than ever before. The U.S. government bought almost all the early integrated circuits that Fairchild and Texas Instruments produced in the early 1960s. By the 1970s, that number had fallen to 10–15 percent. Now it’s around 2 percent of the U.S. chip market.

Big tech firms—Google, Amazon, Microsoft, Apple, Facebook, Alibaba, and others—are now pouring money into designing their own chips. There’s clearly no deficit of innovation.

It’s undeniable that the microprocessor, the workhorse of modern computing, is being partially displaced by chips made for specific purposes.

What’s surprising, though, is how easy it is for almost anyone to access the fast lane by buying an Nvidia chip or by renting access to an AI-optimized cloud.

And in my synopses, I share key insights and principles, and some stories, from the books that I present.  Here are a few from this book. (Sentences in italics are excerpts directly from the book):

  • That time when a bridge was destroyed in Vietnam…
  • The Sparrow III anti-aircraft missiles that U.S. fighters used in the skies over Vietnam relied on vacuum tubes that were hand-soldered. …The Sparrow missile’s radar system broke on average once every five to ten hours of use. …This time, American bombs scored direct hits. Dozens of other bridges, rail junctions, and other strategic points were hit with new precision bombs. A simple laser sensor and a couple of transistors had turned a weapon with a zero-for-638 hit ratio into a tool of precision destruction.
  • Let’s remember – we ain’t seen nothing yet!
  • Yesterday’s chips are so, so far behind today’s chips.
  • Tomorrow’s chips will make today’s chips seem ancient and quaint…
  • Let’s also remember
  • it takes big money, thus…government has been crucial in chip development; and it may become less crucial, or more crucial, in the years ahead
  • What are we talking about?
  • Computer Chips (a phrase that encompasses the different kinds of chips) are in…everything. From cars, to Smartphones, to climate control systems, to TVs, to missile guidance systems, to power plants and building energy systems, to all computers, to…everything.
  • And, it is not “a computer chip,” but “chips,” in most everything.
  • The chips are primarily made in only a handful of countries, especially – the United States, and South Korea, and Japan, and TAIWAN.
  • And the most important chips are made in one place: Taiwan.
  • After over two decades with Texas Instruments, Morris Chang had left the company in the early 1980s after being passed over for the CEO job and “put out to pasture,” he’d later say.
  • Why “copying” and stealing does not really work.
  • it is not possible to “copy” all the details of how to actually mass produce these chips
  • by the time you successfully copy, it is almost out of date…
  • Moreover, the cutting edge was constantly changing, per the rate set out in Moore’s Law.  …No other technology moved so quickly—so there was no other sector in which stealing last year’s design was such a hopeless strategy. 
  • It is really, really, really expensive to produce the modern chips. And the next ones may cost even more; much, much more…
  • Taiwan:
  • Today, Apple’s most advanced processors—which are arguably the world’s most advanced semiconductors—can only be produced by a single company in a single building, the most expensive factory in human history, which on the morning of August 18, 2020, was only a couple dozen miles off the USS Mustin’s port bow.
  • Fabricating and miniaturizing semiconductors has been the greatest engineering challenge of our time. Today, no firm fabricates chips with more precision than the Taiwan Semiconductor Manufacturing Company, better known as TSMC.
  • Apple sold over 100 million iPhone 12s, each powered by an A14 processor chip with 11.8 billion tiny transistors carved into its silicon.
  • Moore’s Law at work (a quick arithmetic check of these numbers appears just before the lessons below)
  • Fairchild cofounder Gordon Moore noticed in 1965 that the number of components that could be fit on each chip was doubling annually as engineers learned to fabricate ever smaller transistors. This prediction—that the computing power of chips would grow exponentially—came to be called “Moore’s Law.”
  • It was only sixty years ago that the number of transistors on a cutting-edge chip wasn’t 11.8 billion, but 4.
  • The making of Moore’s Law is as much a story of manufacturing experts, supply chain specialists, and marketing managers as it is about physicists or electrical engineers. 
  • From Silicon Valley, to the rest of the world…
  • Today’s semiconductor supply chain requires components from many cities and countries, but almost every chip made still has a Silicon Valley connection or is produced with tools designed and built in California.
  • A typical chip might be designed with blueprints from the Japanese-owned, UK-based company called Arm, by a team of engineers in California and Israel, using design software from the United States.
  • The design is carved into silicon using some of the world’s most precise machinery, which can etch, deposit, and measure layers of materials a few atoms thick. These tools are produced primarily by five companies, one Dutch, one Japanese, and three Californian, without which advanced chips are basically impossible to make. Then the chip is packaged and tested, often in Southeast Asia, before being sent to China for assembly into a phone or computer.
  • Today, thanks to Moore’s Law, semiconductors are embedded in every device that requires computing power—and in the age of the Internet of Things, this means pretty much every device. Even hundred-year-old products like automobiles now often include a thousand dollars’ worth of chips. Most of the world’s GDP is produced with devices that rely on semiconductors. For a product that didn’t exist seventy-five years ago, this is an extraordinary ascent.
  • It takes government money; lots and lots of government money…
  • Three days after Noyce and Moore founded Fairchild Semiconductor, at 8:55 p.m., the answer to the question of who would pay for integrated circuits hurtled over their heads through California’s nighttime sky. Sputnik, the world’s first satellite, launched by the Soviet Union, orbited the earth from west to east at a speed of eighteen thousand miles per hour. …The U.S. thought it was the world’s science superpower, but now it seemed to have fallen behind. …Bob Noyce suddenly had a market for his integrated circuits: rockets.
  • America’s vast reserve of scientific expertise, nurtured by government research funding and strengthened by the ability to poach the best scientists from other countries, has provided the core knowledge driving technological advances forward.
  • AND… Asian governments, in Taiwan, South Korea, and Japan, have elbowed their way into the chip industry by subsidizing firms, funding training programs, keeping their exchange rates undervalued, and imposing tariffs on imported chips.
  • AND YET… – the balancing act:  we want government money; we want little to no government regulation… 
  • How vulnerable are we? Too vulnerable!
  • The global network of companies that annually produces a trillion chips at nanometer scale is a triumph of efficiency. It’s also a staggering vulnerability. 
  • A tale of two decisions…
  • By the mid-2000s, just as cloud computing was emerging, Intel had won a near monopoly over data center chips, competing only with AMD. Today, nearly every major data center uses x86 chips from either Intel or AMD. The cloud can’t function without their processors.

…AND THEN…

  • Despite eventually pouring billions of dollars into products for smartphones, Intel never had much to show for it.  …Just a handful of years after Intel turned down the iPhone contract, Apple was making more money in smartphones than Intel was making selling PC processors. Intel tried several times to scale the walls of Apple’s castle but had already lost first-mover advantage. …So Intel never found a way to win a foothold in mobile devices, which today consume nearly a third of chips sold. It still hasn’t.
  • The formula
  • innovation in all aspects (product design + tool/machine development + manufacturing excellence) + hard work + usefulness
  • The danger
  • Taiwan’s chip factory and China’s military…
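
One more piece of arithmetic, promised in the Moore’s Law bullets above: the jump from 4 transistors on a cutting-edge chip sixty years ago to the A14’s 11.8 billion implies the familiar cadence of a new chip generation roughly every two years. The endpoints come from the excerpts; the calculation is mine.

```python
import math

# How many doublings separate 4 transistors from 11.8 billion,
# and how often must the industry have doubled over ~60 years?

start, end, years = 4, 11.8e9, 60

doublings = math.log2(end / start)  # ~31.5 doublings
cadence = years / doublings         # ~1.9 years per doubling

print(f"{doublings:.1f} doublings in {years} years")
print(f"one doubling every {cadence:.1f} years -- roughly every two")
```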

And here are my six lessons and takeaways:

#1 — It might really help to know our history better.

#2 – The chips will keep getting smaller in size, yet bigger in capability.

#3 – The greater the chip capability, the greater the products.

#4 – The greater the chip capability, the more capable the weapons.

#5 – Are we possibly going to see a huge new breakthrough with multi-talented chips?

#6 – If you fall behind, you will be left behind.  In business. In war/conflict. In… 

There are a lot of good reasons to read Chip War.  It fills in gaps in your history knowledge.  It gives you a sense of the critical role played by these tiny chips in so much of modern life.

And, it gives you reason to worry.

It is a book worth reading!  I am very glad I read it.

—————— 

My synopsis of Chip War will be available soon on our web site.  You can purchase our synopses presentations from the buy synopses tab at the top of this page.  On that page, you can search by book title. And click here for our newest additions.

Each synopsis comes with my comprehensive, multi-page synopsis handout, plus the audio recording of my presentation delivered at the First Friday Book Synopsis in Dallas.
