From Horses To Hyperloops

David Donnelly
15 min read · Apr 29, 2024

--

A Brief History Of The Great Acceleration

The headline of an Associated Press newspaper article from 1953 reads: “There’ll Be No Escape In Future From Telephones.” The article quotes a telephone company representative speculating that telephones will soon be carried and worn like watches, and will eventually let you see the person you are calling. The representative even anticipates the device being able to translate languages. The article became a widely discussed curiosity: some readers were fascinated, but most simply scoffed at such absurd predictions. Seventy years later, smartphones, video calls, and Google Translate are used by billions of people around the globe.

A few years after the article was published, two innovations would ignite the most radical technological revolution in human history: the creation of the monolithic integrated circuit, or microchip, and the use of the relatively cheap element silicon as a semiconductor. Together, they presented seemingly unlimited potential. Future developments in emerging technologies were now just a matter of how many of these new, smaller transistors could fit onto a single integrated circuit.

In 1965, a man by the name of Gordon Moore observed that the number of transistors that could fit on a microchip was doubling approximately every two years, increasing the power of these chips while also shrinking them. This observation became known as Moore’s Law, and it would accurately predict what is often called “The Great Acceleration.”

Apollo 11 Command Module/National Air and Space Museum

To convey the magnitude of Moore’s Law, consider this: the first computer to use silicon integrated circuits, NASA’s Apollo Guidance Computer, which flew aboard the spacecraft that took us to the moon, had only 12,000 transistors. In 2019, the chip inside the iPhone 11 had more than 8.5 billion. In other words, the astronauts who went to the moon did so with little more than the computing power of a calculator, while the device you likely have in your pocket is exponentially more powerful.
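Here is a rough, back-of-the-envelope sketch of that comparison in Python. It uses only the two transistor counts quoted above and assumes the textbook two-year doubling period, so the implied timeline is illustrative rather than exact chip history.

```python
import math

# Transistor counts as quoted above (illustrative, not precise chip history)
agc_transistors = 12_000              # Apollo Guidance Computer
iphone11_transistors = 8_500_000_000  # chip inside the iPhone 11

ratio = iphone11_transistors / agc_transistors
doublings = math.log2(ratio)

print(f"The iPhone 11 chip packs roughly {ratio:,.0f}x more transistors.")
print(f"That is about {doublings:.1f} doublings, or ~{2 * doublings:.0f} years "
      f"at Moore's pace of one doubling every two years.")
```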

Arthur C. Clarke famously wrote that “Any sufficiently advanced technology is indistinguishable from magic.” But just like anything else, once this magic becomes part of our everyday lives, we become immune to its marvels and fixated on the next best thing. Suddenly the iPhone 11 isn’t good enough, and we need the iPhone 12. Does anybody want to use an old computer when there is something faster? Of course not. We have become wired to adopt and move on, without a thought for the consequences of bringing new technologies into our minds, homes, societies, nations, and planet. We do this for several understandable economic and political reasons, but we are overdue for an honest, updated diagnosis.

While I was filming my last documentary, The Cost of Convenience, a remarkable breakthrough occurred in the world of nuclear fusion.

For the first time in history, scientists produced a fusion reaction that released more energy than was used to trigger it, a milestone known as “ignition,” or net “energy gain.” This first step into a new era of clean energy should have been celebrated as an international sign of hope. Instead, it barely got a day or two in the news cycle. We are so accustomed to the magic of technology that it’s hard to be amazed by anything at all. This is why we need to understand how we got here. By revisiting our past, we can rekindle that sense of awe and wonder, and in doing so we gain perspective and relearn a healthy fear of the raw power of the technologies we’ve invited into our lives.

*****************

CALIFORNIA, 2022

I’m sitting inside a small winery just outside of San Francisco. We rented out the entire venue to interview one person on camera. My cinematographer asks me to sit in the chair where our guest will soon be sitting, and he makes last-minute lighting adjustments as my phone rings. It’s Roger McNamee. He has arrived. You may not recognize the name Roger McNamee, but it is synonymous with Silicon Valley, where he moved in 1982 as an analyst and investor. Until recent years, he was known for his venture financing deals, including a $250 million early investment in Facebook (now Meta) through his VC firm Elevation. But I was here to ask him about his more recent endeavor: his work as an activist. A man who, according to Google, has a net worth surpassing a billion dollars, largely from investments in tech startups, has now dedicated his life to calling out the very internet platforms he helped create.

Roger was an early mentor to Mark Zuckerberg. He introduced Zuckerberg to Sheryl Sandberg, who became his long-time number two. In 2019, after the Cambridge Analytica scandal broke, Roger became increasingly concerned about the company’s behavior and was one of the first industry voices to speak out. He was ignored. The influence of the mentee had surpassed that of the mentor. And the mentor became an outspoken activist against not only Zuckerberg, but the industry he had been a part of for nearly four decades.

David Donnelly Interviews Silicon Valley legend Roger McNamee

As Roger explains: “Silicon Valley exists because in 1956, the federal government entered into a consent decree with the telephone monopoly, AT&T, that created a separate computer industry and then took AT&T’s intellectual property and put it in the public domain, which meant their patent on the transistor became available to everyone. That regulatory event created Silicon Valley. And, you can make a case, and I would make it, that nearly every major cycle that followed was triggered one way or another by an antitrust intervention.”

It was this antitrust intervention against AT&T that sparked the digital revolution. The unfolding Cold War added fuel to the fire, inspiring a sense of urgency to dominate the space race. The ingredients were in place for a technological boom. And as Roger explains, the culture driving innovation was much different from today’s:

“Until about 2003, the technology industry was idealistic. It had a culture that was the merger of the values of the space program, which is what got it going, and the hippie values that Atari and Apple brought into the equation. And what was interesting is you wouldn’t think those two things would mesh very well, but they both came from a common idealism about the notion of using technology to empower the people who used it and those who were affected by it.”

By the 1980s, America was a technological powerhouse. Electronics of all kinds kept getting smaller and faster. This was the decade of personal computers, fax machines, pagers, CDs, camcorders, and (very large) cellular phones. Although these innovations might not seem so revolutionary to millennials, it’s important to remember what they were replacing: notebooks, snail mail, landlines, cassette tapes, and so on. The way humans communicated and absorbed information was being transformed. The analog world was beginning to fade, one innovation at a time.

Popular culture was also changing. The idealistic drivers that fueled the space program and companies like Apple and Atari gave way to a mentality focused on growth and profit above all. This was the era of leveraged buyouts, corporate raiders, junk bonds, and “high-tech” live stock tickers that conveyed the transfer of huge sums of wealth second by second. Opportunity was right there in front of your eyes, up for the taking. If there’s one film encapsulating this culture of the ’80s business world, it’s Oliver Stone’s Wall Street. Gordon Gekko, the ruthless businessman brought to life by Michael Douglas, was originally written by Stone as a villain; the film was a morality tale about greed and corruption. But the “greed is good” mantra might as well have been a mission statement for the average, ambitious Wall Street banker. This period of excess came to a temporary halt on October 19th, 1987, known as Black Monday, when global stock exchanges experienced a sudden and unexpected plunge. By the time the market recovered, investors were looking for a new path to extreme wealth. They would find it in an unlikely place: a Cold War-era military experiment limited to government and academic elites. It was called the Internet.

*****************

The 1990s brought excitement and hope. The internet inspired a slew of dot-com startups, but the speed at which the internet could be accessed lagged behind the enthusiasm of founders. I can still remember, as a high schooler, the frustration of having to wait for my mom to get off the phone so that I could go online. Anyone born in the ’80s or earlier remembers the unique sound of the internet connecting via landline. It was a tedious process that required a tremendous amount of patience.

The potential was still there. I remember the amazement of being able to put a Microsoft Encarta CD-ROM into a computer and access information with a few clicks. For a high school student, that kind of offline access to concentrated data digitized the entire process of research. I was now hunting and pecking through a digital encyclopedia instead of combing through physical books and copying information by hand into a notebook. Our prized Encyclopedia Britannica collection instantly became ornamentation, and the door-to-door salesmen who peddled its educational importance with persuasive guilt met the same fate as the dinosaurs.

As internet infrastructure improved and website coding allowed for faster loading speeds, an organized portal to the world’s information became a real possibility. The first search engines were clunky, buggy, and slow. In 1998, two visionaries in the computer science department at Stanford outlined a plan for a superior search engine in their now-famous paper: “The Anatomy of a Large-Scale Hypertextual Web Search Engine.”

Their names were Sergey Brin and Larry Page, and the name for their new idea was a deliberate misspelling of the mathematical term for the number one followed by 100 zeroes: a googol. “Google,” as we’ll learn through this book, changed everything.

If there was an omen for what was about to happen in the stock market, it was the notorious “Y2K” bug. For months leading up to the New Year’s celebration of the millennium, there was a barrage of articles and news stories about a mysterious error in the code of financial institutions that was sure to cause a banking apocalypse at the stroke of midnight. Of course, the ball dropped, and the apocalyptic bug never reared its ugly head. But something else culminated in 2000: the overvaluation and saturation of dot-com startups, many of which had already begun to falter, reached a breaking point. The dot-com bubble burst.

For the Google guys, who had already accepted millions of dollars from investors, it was a make-or-break moment. In the words of Roger McNamee: “When the internet bubble burst, Google’s investors panicked, and they insisted that Google come up with a business model.” The business model they came up with was personalized ads, which would forever change the relationship between consumers and marketers. It was also antithetical to the mission described in the famous Stanford paper. Associate Professor of AI Ethics at Oxford and author Carissa Veliz elaborates: “Larry Page and Sergey Brin had written a paper in 1998 in which they explicitly say that search engines that rely on ads have a conflict of interest and that they won’t be as good, because they won’t be catering to their users, but to publishers of ads. And yet years later, they couldn’t figure out how to come up with a better business model. They just couldn’t.”

It wasn’t as if these two brilliant founders had set out to create the most effective marketing machine in the history of human civilization. Google’s original motto, after all, was “Don’t Be Evil.” More likely, they got caught up in a powerful system that would come to define the Silicon Valley of the 21st century. Entrepreneur and MIT computer science alum Riz Virk explains this further: “I do think that many entrepreneurs have started well-intentioned, because they thought a particular technology would bring certain benefits to society. But over time, as these companies have grown, they’ve become public companies, and now they have to meet earnings numbers every quarter. Fundamentally, it’s a capitalist system, and Silicon Valley is like capitalism on steroids.”

This increasing pressure for insane growth and profit was a far cry from the hippie and space program values that inspired the original Silicon Valley. Roger McNamee explains this paradigm shift: “That culture lasted for nearly 50 years. When it went away, it went away because a critical thing changed. Before 2003, if you were an engineer, you had to deal with the fact there was never enough processing power, memory, storage or bandwidth to do everything you wanted to do. You had to pick your spots. As a result, you typically would find a customer, understand what that customer values most, and focus all your energy on delivering that. And that was super healthy because it kept the creators of technology and the people who used technology in alignment. Beginning in 2003, all those limitations started to evaporate…”

But there was another dynamic redefining the investment landscape: the cost of starting a new technology company. As the world became more digitized, founders didn’t need the same amount of tangible assets to start a company. Dollars could be stretched further. Servers could be rented from larger companies. Organizational software could be licensed. Investors were salivating. As Roger describes, “…a big change took place that took the cost of creating a startup from $150 million to create all the technology to run your website to $10 million, because all you needed was a credit card. You could go to someone like Amazon Web Services and get all of your infrastructure as you need it without all the capital costs. That changed everything because suddenly venture capitalists and other investors had much less risk. A $10 million cost is much less risky than $150 million, which meant that instead of needing entrepreneurs who were in their 40s and 50s, you could suddenly work with 20-year-olds and all their buddies from college.”

Less risk, combined with little regulation and a new generation of young, talented coders, presented the perfect opportunity for entrepreneurs like Mark Zuckerberg, who officially launched Facebook in 2004. Other social networking sites existed, like Friendster and MySpace, but nobody had achieved the “network effect” yet. A network effect is the phenomenon by which the value or utility a user derives from a good or service depends on the number of users of compatible products. In other words, if everyone you know is on a particular internet platform, it becomes incredibly challenging for another platform to compete, because doing so would require a critical mass of users. This is important to understand because it explains why investors could justify investments in the hundreds of millions at a time. It was a race to achieve the network effect and lock in market share. Growth at all costs was crucial to the business model. As we soon learned, that growth was attained with little regard for its impact on consumers.
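One rough way to put numbers on this, added here purely as an illustration rather than something from the interviews, is Metcalfe’s law: a network’s potential value scales with the number of possible user-to-user connections, n(n−1)/2, so ten times the users means roughly a hundred times the connections. A minimal Python sketch with hypothetical user counts:

```python
def potential_connections(users: int) -> int:
    """Distinct user-to-user links possible in a network of this size."""
    return users * (users - 1) // 2

incumbent = 100_000_000   # hypothetical platform with 100 million users
challenger = 10_000_000   # hypothetical rival with 10 million users

# 10x the users yields roughly 100x the potential connections, which is why
# a challenger needs a critical mass of users, not just incremental growth.
print(potential_connections(incumbent) / potential_connections(challenger))  # ≈ 100
```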

By the end of 2005, the total number of cell phone users in the U.S. alone had grown 14% year-on-year to 207.9 million, approximately 69% of the entire population at the time. Cellular phones were woven into our cultural norms before we could even have a conversation about them. Suddenly we were always connected, forced to create rules and etiquette along the way. But for the most part, the cell phones people were using were still limited to calls and texts. BlackBerrys were gaining in popularity but still had limited data capability. Then, on June 29th, 2007, Apple launched the first iPhone. In the same year, Netflix, initially a DVD rental company, launched its streaming service.

As Moore’s Law continued to hold, more powerful smartphones were released year after year, immediately rendering the previous models archaic. The infrastructure began to catch up as well. WiFi, originally introduced in the late ’90s, was becoming more and more common. Eventually, consumers in the West would expect it everywhere: in coffee shops, hotels, shopping malls, and especially in their homes. This presented more than just an opportunity to connect to the internet. Now, large-volume data transfers could occur in far more places.

Video game entrepreneur and Silicon Valley investor Rizwan Virk explains this transition in greater detail: “So even though mobile smartphones arrived in 2007, they didn’t take off until 2010. And so the first half of the decade was very much about getting away from the old idea of the internet, which is the web, where you have to log into a computer and you see a web page, to an always-connected model with the phone. And that’s where 3G, 4G, and now 5G come into play. In the early days of smartphones, even in 2010, you couldn’t download many apps on wireless; if an app was larger than, say, 10 megabytes, you had to go and plug your phone into your computer. Later, in 2011, they changed it to a hundred megabytes, and then they made it bigger. So today you can pretty much install most apps right over the air, through Wi-Fi or even a cellular network, which is not something you could do back then.”

In 2008, echoes of the dot-com bubble sounded through Wall Street as a massive housing crisis threatened the entire global economy. The pressure on tech companies to deliver profits intensified. However, some key innovations created an opportunity for the fastest accumulation and concentration of wealth in the history of human civilization. Despite the recession, people were still spending hours online every day. Smartphone hardware was improving, and so was the software. High-definition screens and better cameras made the smartphone something you couldn’t leave home without. Companies like Google and Facebook took full advantage of this period in world history and introduced a new era in big tech, what Harvard professor Shoshana Zuboff calls The Age of Surveillance Capitalism.

In the 2010s, a perfect storm started to form in the palm of our hands. The rise of social media. High-quality cameras. Ubiquitous WiFi. Unlimited storage in the cloud. Popular culture normalized phone use at all times. And selfies. Lots of selfies. The red flags were everywhere. Here are just a few of the scandals that happened in the 2010s (that we know about):

In 2013, Edward Snowden released files showing that the NSA had accessed Google and Yahoo servers, giving it unauthorized access to the data of millions of users. That same year, Yahoo suffered a data breach later revealed to have compromised all 3 billion of its accounts. In 2014, Yahoo was hacked again, this time exposing 500 million accounts, and Uber came under fire for its internal “God View” tool, which let employees track riders’ locations. In 2016, Russia used Facebook to interfere in the American presidential election. In 2017, Equifax was hacked, and the private data of 147 million Americans, including Social Security numbers, was compromised. In 2018, investigators from the United Nations blamed Facebook for playing a role in the genocide of Rohingya Muslims in Myanmar. In the same year, Facebook acknowledged that the data-analysis firm Cambridge Analytica had misused millions of users’ private data.

In 2020, the COVID-19 pandemic hit, and Americans adopted technology at a rate that would normally take seven years. By 2022, the average person was spending the equivalent of an entire day each week staring at a screen connected to the internet. And what is fueling this entire economy? It’s not only your attention, your time, and often your sanity…it’s your data, a commodity now said to be more valuable than oil. And it’s being used to manipulate you, starting with algorithms engineered specifically to hijack your brain and penetrate your most intimate thoughts, dreams, and desires.

Yes, this is all f*cking terrifying, and it elicits serious anxiety about our future. But this is our reality, and we must first acknowledge the reflection in our societal mirror before we can change it. Understanding what’s REALLY happening behind our screens, and finding ways for that knowledge to permeate the canon of common knowledge, is a great challenge. But regulation without cultural change will create a repeating loop in which we face the same problem with each new iteration of technology. Look no further than our government grilling Meta executives about past actions while AI is unleashed on the general public. Chasing big tech this way is no different than a dog chasing its tail.

Our brains (and our government) cannot evolve at the speed of technological growth, but by reflecting on how we got here, we can predict where we are going. Learning the fundamentals of the mechanisms driving internet platforms is one of the most important things we can do as citizens, family members, and humans. It’s essential education for the digital age, required to maintain our sanity amid the incessant noise of information overload, to protect our rights while we are constantly surveilled and processed, and to preserve our democracy at a time when we have lost a sense of shared reality.

This is a complex issue with many variables. That’s why we distilled three years of research and hundreds of hours of interviews into a 92-minute documentary, The Cost Of Convenience. Our team worked with a cast of experts in a variety of fields from around the globe to identify the root cause of this problem.

If you want to learn more, then watch the film. Share it with others. And use it to start a conversation.
