Chapter 1: The Silicon Valley Creation Myth
The opening chapter dismantles the familiar tale of Silicon Valley as a place where solitary innovators revolutionized the world through sheer brilliance and determination. While stories of garage startups and maverick founders make for compelling folklore, the Valley's growth was in reality deeply intertwined with government funding and Cold War priorities. After the Second World War, the U.S. government invested vast sums in research and development to secure technological superiority over the Soviet Union. Stanford University played a pivotal role in this effort, especially under the leadership of Frederick Terman, who actively cultivated ties with the Department of Defense and funneled military contracts to companies spun out of academic research.
Early pioneers in semiconductor technology, from William Shockley's laboratory to the eight researchers who left it to found Fairchild Semiconductor, found their footing largely because their transistors and circuits had immediate military applications. Missile guidance systems and other defense technologies created a ready market, ensuring a steady flow of government orders that offset the risks of technological experimentation. These public investments built the infrastructure and expertise that would later fuel the commercial microelectronics revolution, yet the story was retold afterward as if it had been driven purely by private initiative.
The chapter traces the rise of artificial intelligence research to the same pattern of state patronage. Organizations like the Defense Advanced Research Projects Agency (DARPA, founded as ARPA) and the National Science Foundation (NSF) provided early funding for core AI disciplines: natural language processing, robotics, and machine vision. Programs such as DARPA's Strategic Computing Initiative of the 1980s aimed to develop intelligent systems that could process vast amounts of information for military applications, laying the groundwork for technologies that now underpin everything from search engines to autonomous vehicles.
A central theme is that AI’s early champions were not merely inventors but also evangelists who courted publicity and resources by making bold promises about the imminent arrival of intelligent machines. Figures like John McCarthy and Marvin Minsky gained attention for predicting that human-level AI was just around the corner. While these predictions spurred investment and attracted talent, they also set unrealistic expectations that contributed to repeated cycles of enthusiasm and disappointment, known as AI winters.
During the Cold War, defense money flowed so abundantly that many companies could survive for years without proving commercial viability. But as the geopolitical climate shifted in the late 1980s and 1990s, defense budgets contracted. Companies that had relied on military procurement began to reorient their technologies for civilian markets. This transition was gradual and often messy, with many firms repurposing tools originally designed for warfare into consumer products, rather than inventing them from scratch to meet civilian needs.
The mythology of the self-made innovator served a deeper ideological purpose: it justified the concentration of wealth and influence among a small cadre of entrepreneurs by implying that their success was the product of merit alone. This narrative obscured how state subsidies and policy choices determined which technologies advanced and which were sidelined. By focusing public attention on heroic origin stories, it became harder to question whether the benefits of innovation were being shared equitably.
The emergence of venture capital as a dominant force in Silicon Valley further entrenched this mythology. Investors quickly learned that charismatic founders with compelling personal narratives could attract outsized valuations and media coverage. Even unproven technologies could be hyped into market dominance when presented as the vision of a singular genius. The interplay between venture capital, speculative investment, and military funding formed a self-reinforcing system that rewarded grandiose claims.
Another consequence of this dynamic was the normalization of using public resources for private gain. Companies that had been nurtured with government contracts transitioned into commercial juggernauts while retaining the benefits of public support. Yet when criticism arose about monopolistic practices or social harms, these same companies invoked the language of individual achievement to deflect scrutiny.
Throughout the chapter, the recurring message is that the structural conditions of Silicon Valley—access to federal money, alignment with national security priorities, and the cultivation of captivating myths—created an environment where technological development was both accelerated and distorted. The incentives that propelled the semiconductor industry are the same forces driving today’s AI boom.
Recognizing the true origins of Silicon Valley is presented as an essential step toward understanding AI’s present trajectory. Without confronting the reality that innovation has always been shaped by politics, funding priorities, and ideological storytelling, there is a risk of repeating the same mistakes and concentrating power even further in the hands of a small elite.
Key Points
- The popular image of self-made inventors in garages overlooks the decisive role of government funding.
- Cold War defense spending built the infrastructure and early markets for computing technologies.
- DARPA and NSF investments laid the foundation for core AI disciplines.
- Early AI advocates made optimistic predictions that fueled hype cycles and disillusionment.
- The transition from military to commercial applications was gradual and driven by necessity.
- The myth of the lone genius legitimized concentrated wealth and influence.
- Venture capital leveraged these stories to attract speculative investment.
- Public resources were routinely used for private enrichment.
- The same structural incentives now shape the AI industry.
- A clear-eyed view of this history is essential to understand and challenge contemporary AI power dynamics.
Chapter 2: Scaling Up the Machine
The second chapter focuses on how AI research evolved from a collection of academic experiments into a massive industrial enterprise, requiring unprecedented resources to train increasingly complex models. In the early decades, artificial intelligence was largely confined to university labs, where computing power was scarce and data even scarcer. Researchers often relied on small datasets and toy problems, which limited the ambitions and performance of their systems. As computing capacity improved and digital data proliferated, a new approach emerged that emphasized scale over handcrafted rules.
This shift was marked by the recognition that machine learning algorithms, especially deep learning, could achieve remarkable results if provided with vast amounts of data and computation. Breakthroughs in areas like image recognition and natural language processing were no longer driven by conceptual leaps alone but by the ability to harness huge datasets and specialized hardware such as graphics processing units (GPUs). The landmark victory of the AlexNet deep neural network in the 2012 ImageNet competition demonstrated that brute-force scaling could outperform more elegant but narrowly defined methods.
Industrial labs began to devote enormous resources to acquiring and labeling data, building distributed computing clusters, and hiring teams of engineers to maintain these sprawling infrastructures. What had once been an academic discipline was now moving into the corporate sphere, where the ability to scale up operations conferred decisive competitive advantages. The economic barriers to entry rose rapidly as training a single large model could require millions of dollars in cloud compute costs.
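To make the scale of that barrier concrete, here is a rough back-of-envelope sketch; the GPU count, run length, and hourly rate are illustrative assumptions, not figures from the chapter.

```python
# Back-of-envelope estimate of cloud compute cost for one large training run.
# All inputs below are illustrative assumptions, not figures from the chapter.

def training_cost_usd(num_gpus: int, hours: float, usd_per_gpu_hour: float) -> float:
    """Total cost = number of GPUs x wall-clock hours x hourly rental rate."""
    return num_gpus * hours * usd_per_gpu_hour

# Hypothetical run: 1,000 GPUs rented for 30 days at $2.50 per GPU-hour.
cost = training_cost_usd(num_gpus=1_000, hours=30 * 24, usd_per_gpu_hour=2.50)
print(f"Estimated cost: ${cost:,.0f}")  # -> Estimated cost: $1,800,000
```

Even with these conservative inputs, a single run lands in the millions of dollars, which is the entry barrier the chapter describes.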
The chapter details how companies like Google, Facebook, and Amazon leveraged their control over consumer data and cloud infrastructure to dominate AI research. This concentration of technical capacity made it increasingly difficult for smaller firms and academic researchers to compete. While some open-source tools and datasets were released to the broader community, the cutting edge remained largely in the hands of a few well-capitalized players.
The environmental consequences of scaling were profound. Training state-of-the-art models consumed enormous amounts of electricity, much of it generated by fossil fuels. One study cited in the chapter estimated that developing a single natural language model could produce as much carbon dioxide as five cars over their entire lifespans. Yet despite these impacts, there was little incentive within companies to slow down, as the race to create bigger and more capable systems had become an end in itself.
The pursuit of scale also shaped research culture. Engineers and scientists were increasingly evaluated on their ability to deliver performance gains through scaling rather than through novel theoretical insights. This emphasis on benchmarks and leaderboards rewarded those who could marshal the most resources, reinforcing a dynamic in which power and credibility accrued to the largest organizations.
The chapter highlights how this scaling imperative led to the centralization of talent. Researchers who once operated within academia were recruited by tech giants offering generous salaries and access to infrastructure unavailable elsewhere. While this migration accelerated progress in some respects, it also narrowed the field of inquiry, as commercial priorities began to dictate what problems were worth solving.
An undercurrent of concern runs through the discussion of scaling’s social costs. As models became larger and more complex, they also became harder to interpret and audit. The opacity of these systems made it difficult for regulators, and even their own creators, to understand why models behaved the way they did. This lack of transparency carried risks not only for fairness and accountability but also for safety in high-stakes applications.
The financial arms race to scale up AI systems further deepened the divide between a handful of companies and the rest of society. Small research labs and startups that could not afford to compete for compute resources were left to occupy the margins of the field. Meanwhile, the public’s ability to shape AI’s trajectory was constrained by the technical and economic barriers that protected the industry from external scrutiny.
The chapter closes by arguing that the fixation on ever-larger models has become a defining feature of contemporary AI, reshaping both the industry’s internal culture and its relationship to the broader world. Scaling is not simply a technical strategy but an organizing principle that reinforces concentration of power, accelerates environmental costs, and narrows the scope of what is possible in AI research.
Key Points
- AI evolved from small academic projects into an industrial-scale enterprise.
- Breakthroughs in deep learning were achieved by scaling data and computation rather than purely conceptual advances.
- Tech giants leveraged their infrastructure and user data to dominate AI research.
- Training large models consumes massive amounts of energy and produces significant carbon emissions.
- Research culture shifted to reward scaling performance over theoretical innovation.
- Talent centralized in corporate labs with resources unavailable to smaller players.
- Scaling increased the opacity and unpredictability of advanced models.
- Financial barriers to entry excluded smaller firms and academic researchers.
- The public’s ability to influence AI development was diminished by technical complexity and corporate control.
- Scaling has become the central organizing logic of contemporary AI, with profound social and environmental consequences.
Chapter 3: The Shadow Factory
This chapter uncovers the hidden labor force that underpins modern AI systems, showing that behind the promise of automation lies an immense reliance on low-paid human work. While the public narrative often emphasizes the sophistication of machine learning algorithms, many of the capabilities attributed to artificial intelligence depend on armies of people performing repetitive tasks to create training data and maintain system performance. These invisible workers operate in what the chapter calls “shadow factories,” distributed across the globe and largely shielded from public view.
The proliferation of data labeling services has become a defining feature of the AI supply chain. Every time an algorithm learns to recognize faces, parse natural language, or identify objects, it relies on meticulously labeled examples created by human annotators. Platforms such as Amazon Mechanical Turk, Appen, and Scale AI have industrialized this process, creating piecework economies where workers are paid per task, often earning less than minimum wage in their countries. The work is typically outsourced to regions with lower labor costs, including parts of Africa, South Asia, and Southeast Asia.
Content moderation is another essential form of hidden labor. Companies operating large-scale AI-driven platforms depend on moderators to review flagged material, remove harmful content, and train systems to recognize abuse. The psychological toll on these workers can be severe, as they are often required to sift through graphic violence, hate speech, and exploitation. Yet their contributions are rarely acknowledged, and they frequently lack adequate mental health support or labor protections.
The chapter describes how this invisible workforce maintains the illusion of fully autonomous AI. When chatbots respond smoothly to queries or recommendation systems surface relevant content, it is easy to believe that machines are making sense of the world on their own. In reality, vast amounts of human judgment are embedded in these systems, from defining labels to verifying model outputs and correcting errors. The boundary between human and machine intelligence is far more porous than popular narratives suggest.
Economic precariousness is a defining feature of shadow factory labor. Tasks are often distributed via digital platforms that classify workers as independent contractors, exempting companies from providing benefits or job security. Pay rates can fluctuate unpredictably, and workers have little recourse if they are suddenly deactivated from the platforms they rely on for income. While tech firms reap enormous profits from scaling their AI capabilities, the people performing this foundational labor remain in conditions of chronic insecurity.
The chapter also explores how data work has become entangled with global inequalities. Developed economies extract value by sourcing cheap labor from poorer regions, creating a digital supply chain that mirrors earlier patterns of colonial extraction. Even as companies market AI as an emancipatory force that will liberate humanity from drudgery, they depend on the hidden exploitation of marginalized workers to sustain their business models.
Some efforts have been made to improve conditions, including initiatives to set minimum pay standards and provide more transparency about task pricing. However, enforcement remains weak, and many workers are reluctant to speak out for fear of losing access to the platforms altogether. The opacity of the system makes it difficult for journalists, researchers, and regulators to assess working conditions or hold companies accountable.
The consequences of this hidden labor extend beyond economic injustice. Because many annotation and moderation tasks are subjective, the values and judgments of workers shape AI systems in subtle but significant ways. The categories they create, the examples they select, and the decisions they make about borderline cases all become part of the datasets that train models to “understand” the world. Yet these contributions are almost never credited or examined publicly, reinforcing the myth that AI is an objective, value-neutral technology.
This chapter argues that the reliance on shadow factory labor is not a temporary phase on the way to true automation but an enduring feature of how AI operates. As systems grow more complex and the demand for clean, well-labeled data increases, the dependence on human workers has only deepened. The labor-intensive reality of AI contradicts the popular vision of seamless machine autonomy.
By exposing the invisible workforce behind modern AI, the chapter challenges readers to reconsider what counts as innovation and whose contributions are valued. It concludes by warning that unless these labor dynamics are addressed, the AI industry will continue to reproduce global inequalities and conceal the true costs of its progress behind a polished veneer of technological inevitability.
Key Points
- Modern AI systems rely heavily on low-paid human labor to create training data and maintain performance.
- Data labeling services have become industrialized through global digital platforms.
- Content moderation exposes workers to traumatic material with inadequate support.
- The apparent autonomy of AI systems conceals extensive human judgment and effort.
- Economic insecurity and lack of protections define the working conditions of data annotators.
- The AI supply chain perpetuates global inequalities reminiscent of colonial extraction.
- Efforts to improve labor conditions are limited and often lack effective enforcement.
- Human decisions embedded in datasets shape AI models in subjective ways.
- Dependence on hidden labor is a structural feature, not a transitional phase.
- Recognizing and valuing this invisible workforce is essential to understanding the real costs of AI innovation.
Chapter 4: The Panopticon State
This chapter examines the rise of AI-powered surveillance systems, focusing in particular on how China has integrated these technologies into a pervasive apparatus of social control. The story begins with the rapid proliferation of cameras and biometric sensors across Chinese cities, where the combination of facial recognition, gait analysis, and networked databases has created an unprecedented capacity to monitor individuals in real time. The government has justified this expansion as necessary for public safety, crime prevention, and social stability, framing surveillance as a benign tool to improve citizens’ lives.
A central example is the development of the “Sharp Eyes” program, which aims to achieve near-total visual coverage of urban and rural areas alike. Under this initiative, live camera feeds are linked not only to police command centers but also to neighborhood watch groups and community organizations. Citizens are encouraged to participate in monitoring their own communities, blurring the line between state surveillance and social pressure. This distributed model draws on China’s longstanding practices of grassroots governance, reconfigured through digital technology.
The chapter describes how commercial incentives have helped drive the spread of surveillance infrastructure. Chinese AI companies such as SenseTime, Megvii, and Hikvision have received government contracts and subsidies to develop advanced recognition systems. In turn, these firms have become global exporters of surveillance technologies, marketing their products to governments in Asia, Africa, and Latin America. The integration of commercial and state interests has created a powerful engine for the expansion of the surveillance industry both within China and abroad.
Beyond cameras, the state has developed data fusion platforms that integrate disparate information streams: travel records, financial transactions, social media posts, and personal relationships. These systems are designed to create comprehensive dossiers on individuals, enabling predictive policing and risk scoring. Authorities claim these capabilities help identify threats before they materialize, but critics argue that such tools erode any meaningful distinction between suspicion and guilt.
A prominent example is the use of predictive policing in Xinjiang, where extensive monitoring has been combined with algorithmic risk assessments to determine who should be detained in “re-education” facilities. The chapter emphasizes that the same core technologies—pattern recognition, network analysis, automated alerts—are widely deployed in other parts of China, though often with less overt repression. This continuity underscores that surveillance practices in Xinjiang are not an isolated aberration but an extreme expression of broader trends.
The chapter also challenges the view that China’s model is wholly unique. In the United States and Europe, many of the same technologies have been adopted by law enforcement agencies and private companies. Predictive policing software, license plate readers, and facial recognition tools have been deployed in American cities with minimal oversight. The difference, it argues, is not only in scale but in the degree of transparency and the existence of legal constraints.
Public acceptance of surveillance has been shaped by narratives about safety and modernization. In China, official messaging frames data collection as a patriotic duty and a sign of technological progress. Citizens are encouraged to see themselves as contributors to national stability. In Western contexts, surveillance is more often justified through the language of consumer convenience and counterterrorism, but the effect is similar: expanding the state’s and corporations’ capacity to observe and influence behavior.
The chapter discusses how the normalization of constant monitoring risks undermining trust in social institutions. When people know they are being watched, they may self-censor or conform to perceived expectations. Over time, this dynamic erodes the space for dissent and experimentation. Surveillance becomes not just a means of enforcing rules but a mechanism for shaping what people believe is possible.
While Chinese authorities claim that their approach offers a model of efficient governance, the chapter raises questions about the costs to civil liberties and individual autonomy. It argues that AI-powered surveillance tends to expand incrementally, justified by emergencies or exceptional circumstances, until it becomes woven into everyday life. Once established, these systems are difficult to dismantle, as they create vested interests and dependencies among security agencies and technology providers.
The chapter concludes by warning that the Panopticon is not a distant dystopian possibility but an emerging reality. Whether in China or elsewhere, the spread of AI surveillance requires democratic societies to confront hard questions about privacy, power, and the role of technology in governing human behavior.
Key Points
- China has built a vast AI-powered surveillance system integrating cameras, biometrics, and data fusion.
- Programs like Sharp Eyes encourage citizen participation in monitoring.
- Chinese AI companies have become major suppliers of surveillance technology domestically and globally.
- Predictive policing combines data streams to assign risk scores and justify detentions.
- Practices in Xinjiang are an intensified form of wider national trends.
- Similar technologies are spreading in Western democracies with less transparency.
- Narratives of safety and progress normalize surveillance.
- Constant monitoring undermines trust and encourages conformity.
- Surveillance systems expand gradually and become entrenched.
- Democratic societies must confront the implications of pervasive AI monitoring.
Chapter 5: A Tale of Two Internets
This chapter explores how China and the West have developed fundamentally different models of internet governance, infrastructure, and ideology, resulting in a global digital landscape marked by fragmentation and competition. It begins by tracing China’s efforts to build a sovereign internet architecture capable of insulating the country from external influence. The Great Firewall, first implemented in the late 1990s, became the backbone of this system, combining technical filtering, content controls, and legal restrictions to shape what information flows into and within China’s networks.
The creation of this separate digital ecosystem was driven by both political and economic motivations. On the political side, Chinese leaders viewed information sovereignty as essential to maintaining social stability and regime legitimacy. Allowing foreign platforms unrestricted access was seen as a risk to national security and ideological control. On the economic side, blocking Western tech companies created space for domestic firms to thrive without facing entrenched global competitors.
Over time, this approach enabled the rise of internet giants like Alibaba, Tencent, and Baidu, whose platforms now permeate nearly every aspect of Chinese life, from payments to messaging to e-commerce. These companies operate in a regulatory environment that obliges them to assist the state in monitoring and controlling content. The result is a hybrid model where private enterprise and government authority are closely intertwined.
In contrast, the Western internet evolved around principles of openness, decentralization, and minimal state interference, at least in its early decades. Companies like Google, Facebook, and Amazon flourished in an environment where regulators were slow to impose limits on data collection, platform power, or algorithmic influence. While this hands-off approach encouraged innovation, it also led to widespread abuses, including privacy violations, disinformation campaigns, and anti-competitive practices.
The chapter emphasizes that the divergence between these two models is not simply a matter of censorship versus freedom. Both systems centralize power in large technology companies, but they differ in how state authority interacts with corporate interests. In China, regulation is overt and political objectives are explicit, while in the West, governments have often ceded oversight to the market or acted reactively in the face of scandals.
Cross-border tensions have intensified as each side increasingly frames the other’s model as a threat. Chinese leaders criticize Western platforms for undermining sovereignty and social cohesion, while American officials warn that Chinese technology companies export authoritarian values through surveillance tools and censorship infrastructure. These narratives fuel geopolitical competition over whose digital ecosystem will prevail.
The chapter discusses how the Belt and Road Initiative has extended China’s internet model to other countries, offering funding and expertise to build infrastructure embedded with monitoring capabilities. This “Digital Silk Road” has attracted governments eager to expand connectivity but also drawn criticism for promoting practices incompatible with democratic norms. Meanwhile, American and European firms continue to export their own systems of data extraction and targeted advertising, creating parallel patterns of influence.
A key theme is the growing difficulty of maintaining a truly global internet. Technical standards, regulatory frameworks, and commercial alliances are diverging, raising the prospect of a splintered digital environment often referred to as the “Splinternet.” In such a scenario, data flows, content access, and even hardware compatibility could increasingly depend on geopolitical alignments.
The chapter also addresses how ordinary users experience these divisions. While Chinese internet users have access to sophisticated platforms and services, their interactions are bounded by censorship and pervasive surveillance. Western users, though formally more free, are subject to opaque algorithmic manipulation and commercial exploitation of their personal data. In both contexts, individuals have limited control over how their information is collected and used.
It concludes by arguing that the contest between these models is shaping the future of global connectivity. The struggle is not only about technological supremacy but about which values and institutions will define digital life. Without concerted efforts to develop alternative approaches that protect rights while promoting innovation, the world risks being locked into a binary choice between state control and corporate dominance.
Key Points
- China built a sovereign internet with extensive censorship and surveillance.
- The Great Firewall enabled domestic platforms to grow without foreign competition.
- Chinese tech firms operate in close alignment with government objectives.
- The Western internet emphasized openness and market-driven growth but produced widespread abuses.
- Both models concentrate power in a small set of companies.
- Each side increasingly portrays the other’s system as a geopolitical threat.
- China’s Digital Silk Road exports surveillance infrastructure to other nations.
- The Splinternet describes the fragmentation of the global internet into competing blocs.
- Users in both systems face limits on autonomy and transparency.
- The rivalry over internet models will define the values underpinning global digital life.
Chapter 6: The Data Wars
This chapter examines the intensifying global competition over data, describing it as a new form of geopolitical contest that rivals the struggles over oil or rare earth minerals in earlier eras. Data has become the most valuable strategic resource of the digital economy, enabling nations and corporations to develop advanced AI systems, train predictive models, and gain insights into populations’ behaviors and preferences. As machine learning depends on vast and diverse datasets to improve accuracy, whoever controls data effectively controls the future trajectory of technological power.
The chapter begins by explaining how the volume and richness of available data grew exponentially in the past two decades. Smartphones, sensors, and online platforms now generate streams of information about every dimension of human life. From biometric markers and purchasing histories to geolocation trails and social relationships, this data can be collected, combined, and monetized on an unprecedented scale. The capacity to process and exploit these flows is heavily concentrated in a handful of companies and states.
Competition over data has taken on explicit geopolitical dimensions. Governments view the ability to harvest, store, and analyze massive datasets as a national security imperative. In China, data localization requirements ensure that sensitive information remains within the country’s borders, accessible to security services when needed. In the United States, intelligence agencies have partnered with technology firms to tap into commercial data streams for counterterrorism and foreign surveillance purposes.
The chapter describes how this scramble for data has fueled rivalries among firms as well as between nations. Tech giants acquire startups primarily to gain access to proprietary datasets, often paying extraordinary sums not just for technology but for the troves of user data that come with it. Corporations race to secure exclusive arrangements with data-rich platforms or to lock down supply chains that can deliver continuous streams of fresh information for model training.
Data protection laws have emerged as both a shield and a weapon in these contests. The European Union’s General Data Protection Regulation (GDPR) set new standards for user consent and data portability, reshaping how companies collect and process information. While privacy advocates welcomed these measures, some policymakers also saw them as tools to limit American and Chinese firms’ dominance in European markets. In other countries, governments have adopted similar policies partly as economic strategies to protect local industries from foreign data extraction.
The chapter highlights the tension between public expectations of privacy and corporate incentives to accumulate data. Users often have little visibility into how their information is used or traded, and consent mechanisms are frequently opaque or misleading. At the same time, companies argue that restricting data flows undermines innovation and slows progress in fields such as healthcare, where large datasets can improve diagnostic accuracy.
A recurring theme is the growing asymmetry between those who generate data and those who control it. Ordinary people’s daily activities produce raw material of immense value, yet they typically receive no compensation or meaningful control over how it is deployed. This dynamic has given rise to calls for new models of data ownership or benefit sharing, though practical frameworks remain elusive.
The chapter also addresses the role of data in shaping social and political outcomes. Algorithms trained on biased or incomplete datasets can reinforce discrimination, while targeted advertising and recommendation engines can manipulate opinions and behaviors at scale. The Cambridge Analytica scandal, in which political consultants harvested Facebook data to influence elections, is presented as a cautionary tale of how data can be weaponized.
Data flows have become so entangled with national interests that they are now routinely invoked in trade negotiations and diplomatic conflicts. Disputes over whether data should be treated as a commodity, a public good, or a sovereign asset have surfaced in forums from the World Trade Organization to bilateral talks between major powers. The lack of consensus on governance norms increases the risk of fragmentation and conflict.
The chapter concludes by warning that unless clear frameworks are established to manage data responsibly, the world will continue drifting toward a landscape dominated by a few entities with unchecked power. Addressing the data wars requires not only technical solutions but also a reimagining of how value, consent, and accountability operate in the digital age.
Key Points
- Data has become a strategic resource comparable to oil or rare earth minerals.
- Smartphones and sensors produce vast quantities of personal and behavioral data.
- Governments see data control as essential for national security and economic power.
- Corporations compete fiercely to acquire proprietary datasets and lock in exclusive access.
- Data protection laws serve both privacy aims and economic interests.
- Users have limited visibility and control over how their data is exploited.
- The benefits of data accumulation flow to companies, not individuals.
- Biased datasets and opaque algorithms can distort social and political outcomes.
- Data governance has become a flashpoint in international trade and diplomacy.
- New frameworks are urgently needed to ensure data is managed ethically and equitably.
Chapter 7: The Algorithmic Leviathan
This chapter focuses on how algorithms have moved from peripheral tools to central mechanisms of governance, shaping decisions across economic, social, and political life. Algorithms now determine what people see online, how credit is allocated, which job applicants are shortlisted, and even who becomes a target for police attention. The chapter argues that as algorithmic systems have spread, they have created a new form of concentrated power—a Leviathan that is less visible than traditional institutions but just as capable of enforcing norms and distributing rewards.
The proliferation of algorithmic decision-making began as an effort to improve efficiency and objectivity. In theory, removing human judgment promised more consistent and less biased outcomes. However, in practice, algorithms often embed and amplify the prejudices present in their training data and design assumptions. When historical inequalities are encoded in the datasets that train predictive models, the results systematically disadvantage marginalized communities.
The chapter provides examples of algorithmic bias in areas such as criminal justice, where risk assessment tools used by courts have been shown to overestimate the likelihood of reoffending among Black defendants. In the financial sector, credit-scoring models routinely penalize individuals for factors correlated with poverty or racial identity. These systems are presented as neutral and scientific, but their decisions can entrench structural inequities behind a veneer of technological legitimacy.
Opacity is another defining feature of the algorithmic Leviathan. Even when companies disclose that algorithms are in use, the internal workings of these models remain inaccessible to most people affected by them. Proprietary claims and technical complexity combine to create a “black box,” making it nearly impossible for outsiders to understand why a particular decision was made or to challenge it effectively. This lack of transparency undermines accountability and reinforces public distrust.
The chapter highlights how algorithms increasingly operate at scale without meaningful oversight. Recommendation engines determine what information circulates on social media platforms, shaping public discourse and influencing democratic processes. By optimizing for engagement and profit, these systems often prioritize sensationalism, polarization, and misinformation. The cumulative effect is to distort collective understanding of reality.
Economic consequences are also significant. Algorithms facilitate winner-takes-all dynamics by favoring established players who can invest in optimizing for algorithmic visibility. Small businesses and independent creators struggle to compete in an environment where platform rules can change abruptly and without explanation. The result is a further concentration of economic power in a handful of dominant firms.
A recurring theme in the chapter is the illusion of neutrality. Companies frequently claim that algorithms merely reflect user preferences, but the design of these systems involves a series of value-laden choices: which goals to optimize, which data to collect, and which trade-offs to accept. These choices are rarely made democratically, yet they have profound impacts on how societies allocate resources and define opportunity.
Some jurisdictions have begun to experiment with regulatory responses, such as requiring algorithmic impact assessments or mandating transparency about how models function. However, enforcement remains patchy, and the rapid pace of technological change outstrips the capacity of regulators to keep up. The chapter warns that without more robust frameworks, algorithms will continue to operate as de facto institutions, shaping lives without consent or scrutiny.
The chapter also explores psychological effects. When people experience decisions made by opaque systems, they can feel powerless and alienated. The sense that unseen forces are constantly evaluating and categorizing individuals can erode trust in social institutions and foster resignation rather than civic engagement.
The chapter concludes by arguing that the spread of algorithmic governance represents a fundamental shift in how power is exercised. Algorithms do not merely automate decisions; they create new forms of influence that are difficult to contest or even perceive. Confronting the algorithmic Leviathan requires not only technical reforms but also a collective reckoning with how authority and accountability should operate in a digitally mediated world.
Key Points
- Algorithms have become central mechanisms of governance across many domains.
- They often embed and perpetuate historical biases despite claims of neutrality.
- Examples in criminal justice and finance illustrate discriminatory impacts.
- Opacity and proprietary protections make algorithmic decisions difficult to challenge.
- Recommendation engines distort public discourse by promoting sensational content.
- Algorithmic systems intensify economic concentration and market dominance.
- Design choices reflect values and priorities that are rarely made transparent.
- Regulatory efforts are emerging but remain fragmented and limited.
- The psychological impact includes alienation and loss of trust.
- Addressing algorithmic power requires both technical and democratic solutions.
Chapter 8: Collisions and Convergence
This chapter explores how the AI development paths of the United States and China, while shaped by different political and cultural contexts, have begun to converge in their tactics, ambitions, and unintended consequences. Early in their trajectories, each country framed its technological strategy in contrasting ideological terms: the U.S. positioned itself as the champion of market-driven innovation and personal freedom, while China emphasized state-led modernization and collective prosperity. Over time, however, the practical demands of building powerful AI systems have led both nations to adopt increasingly similar approaches.
One area of convergence is the embrace of massive data collection. In China, data harvesting is openly integrated into state policy and social governance, with clear legal mandates requiring companies to share information with authorities. In the U.S., commercial surveillance is driven by the profit motive rather than explicit state directives, but the outcome is remarkably similar: pervasive tracking of individuals’ behaviors, preferences, and movements. Whether the justification is national security or personalized advertising, the result is a continuous expansion of data extraction infrastructure.
The chapter describes how AI research culture in both countries has shifted toward prioritizing scale and performance benchmarks over transparency and accountability. The pressure to publish record-breaking results and secure funding has created incentives for researchers to focus on optimizing large models, even when their societal impacts are poorly understood. This competitive dynamic has produced a global arms race in compute resources and data acquisition.
Cross-border flows of talent and capital have further blurred distinctions between the two systems. Many Chinese AI researchers train or work in American institutions before returning to domestic companies or labs. U.S. venture capital firms have invested billions into Chinese startups, especially in sectors like computer vision and autonomous driving. These relationships complicate efforts to portray AI competition purely as a zero-sum struggle between nations.
The chapter also discusses how each country has adopted elements of the other’s governance models. In the U.S., rising public anxiety about disinformation, privacy violations, and algorithmic harms has led to calls for stronger regulation, echoing aspects of China’s more interventionist approach. Meanwhile, Chinese companies have experimented with forms of self-regulation and consumer-focused branding to build trust and expand globally, borrowing techniques from Silicon Valley playbooks.
Despite ideological differences, the business models of leading AI firms in both countries rely on similar foundations: monetizing personal data, scaling compute-intensive infrastructure, and maintaining opaque proprietary systems. This convergence has created a shared set of incentives that prioritize rapid deployment over caution or ethical reflection.
The chapter highlights how these parallel trajectories have contributed to a decline in ethical standards across the industry. As each country accelerates development to avoid falling behind, there is little appetite for imposing safeguards that could slow progress. AI safety researchers and civil society advocates warn that this dynamic creates a race to the bottom, where companies are reluctant to adopt best practices unless compelled by regulation.
This competition has geopolitical implications beyond technology itself. As American and Chinese firms export AI products and platforms to other nations, they bring embedded values and governance assumptions with them. Emerging economies find themselves pressured to align with one camp’s standards and infrastructure, raising concerns about digital colonialism and diminished sovereignty.
The chapter discusses how these dynamics have made it more difficult to create shared norms or cooperative frameworks. Even as experts agree that some challenges, like algorithmic bias or AI-driven misinformation, are global in nature, trust deficits between governments impede collective action. Attempts at multilateral agreements often stall over disagreements about transparency, enforcement, and geopolitical rivalries.
The chapter concludes by arguing that while the U.S. and China began with divergent visions for AI, the convergence of their practices reflects deeper structural pressures. The combination of profit motives, national security imperatives, and competitive fear has produced systems that resemble each other more than either side admits. Recognizing this convergence is necessary to chart a different course that prioritizes human well-being over narrow strategic advantage.
Key Points
- The U.S. and China started with different AI development philosophies but have converged in practice.
- Both countries rely on large-scale data harvesting, albeit justified in different terms.
- AI research prioritizes performance and scale over transparency and ethics.
- Cross-border flows of talent and capital complicate simplistic narratives of rivalry.
- Each country has borrowed elements of the other’s governance and business models.
- Leading firms share incentives to monetize data and maintain proprietary systems.
- Competitive pressures have weakened ethical standards across the industry.
- AI exports embed national governance assumptions in recipient countries.
- Geopolitical rivalries hinder the creation of shared norms and safeguards.
- Convergence of practices demands critical examination to avoid a race to the bottom.
Chapter 9: The Fight for the Future
This chapter profiles the growing movement of people and organizations working to confront the harms of AI and reclaim its development for the public good. While much of the narrative around artificial intelligence has focused on competition between states and corporations, a diverse coalition of ethicists, whistleblowers, policymakers, and activists has emerged to challenge the status quo. Their efforts highlight the possibility of alternative paths grounded in accountability, justice, and democratic oversight.
One major strand of this resistance comes from within the tech industry itself. Employees at leading AI firms have organized walkouts, signed petitions, and resigned in protest over projects they consider unethical. High-profile departures, such as those of researchers who raised concerns about algorithmic bias or military applications, have drawn public attention to the internal conflicts over AI’s direction. These actions have sometimes forced companies to cancel controversial contracts or adopt new ethical guidelines, though enforcement often remains inconsistent.
The chapter describes how civil society organizations have played a pivotal role in documenting and publicizing AI’s unintended consequences. Groups dedicated to digital rights, privacy, and social justice have produced influential reports that reveal how automated systems can exacerbate discrimination, concentrate economic power, and undermine democratic norms. Their investigations have pressured regulators to act and provided resources for communities affected by algorithmic harms.
Regulatory interventions are another front in this fight. Some governments have introduced laws requiring greater transparency about how AI systems function and what data they use. The European Union’s proposed Artificial Intelligence Act, for example, would establish risk-based categories and impose strict obligations on high-risk applications. While such regulations face intense lobbying from industry groups, they represent significant attempts to set guardrails around AI development.
Grassroots activism has also contributed to shifting public attitudes. Campaigns against facial recognition surveillance, for example, have persuaded several cities to ban or restrict its use by law enforcement. These victories demonstrate that collective action can achieve tangible outcomes even in the face of powerful corporate and state interests. Community-led efforts have also explored alternatives to dominant AI models, such as participatory design processes and data trusts that empower people to control how their information is used.
The chapter emphasizes that one of the most important challenges is bridging the gap between technical expertise and democratic accountability. AI systems are complex, and debates about their risks can seem inaccessible to the public. However, initiatives to educate citizens and policymakers about how these systems work have begun to lay the groundwork for more informed oversight.
An undercurrent of urgency runs throughout the discussion. The pace of AI deployment means that decisions made today will shape societal structures for decades to come. Once automated systems are entrenched in critical areas like health care, criminal justice, and employment, reversing course becomes exponentially harder. This recognition has motivated many advocates to push for a precautionary approach that prioritizes harm reduction over unbridled innovation.
The chapter also addresses tensions within the movement. While some reformers believe that responsible development and stronger regulation can align AI with democratic values, others argue that the technology’s underlying incentives are inherently incompatible with fairness and equality. This debate has led to diverging strategies: some groups focus on improving governance frameworks, while others call for scaling back or banning certain applications altogether.
The role of international cooperation is presented as both essential and fraught. AI’s global nature means that no single jurisdiction can address all its challenges in isolation. Yet geopolitical rivalries and conflicting legal systems often impede collaboration. Despite these obstacles, transnational alliances of researchers, advocates, and policymakers continue to work toward shared principles and standards.
The chapter concludes by asserting that the fight for the future of AI is ultimately about who gets to decide how technology serves society. While the forces pushing for unchecked expansion are formidable, the persistence and creativity of those resisting them demonstrate that alternative visions are possible. Ensuring that AI promotes collective well-being rather than narrow interests will require sustained engagement and vigilance.
Key Points
- A diverse coalition is challenging the dominant trajectory of AI development.
- Whistleblowers and employee activism have exposed internal ethical conflicts.
- Civil society organizations document and publicize algorithmic harms.
- Regulatory efforts like the EU’s AI Act seek to impose accountability.
- Grassroots campaigns have won victories against surveillance technologies.
- Public education initiatives aim to democratize understanding of AI systems.
- The urgency of addressing AI’s impacts grows as systems become entrenched.
- Debates within the movement reflect differing visions for reform or abolition.
- International cooperation is necessary but hampered by geopolitical tensions.
- The struggle over AI’s future is fundamentally about democratic control and collective well-being.
Conclusion: Choosing Our Empire
The conclusion synthesizes the central themes of the book and calls for a collective reckoning with the forces driving AI’s development. It begins by arguing that the trajectory of artificial intelligence is not predetermined or inevitable, despite the aura of technological determinism often surrounding it. Instead, the systems that have taken hold reflect a series of deliberate choices—by corporations, governments, and researchers—that have prioritized scale, surveillance, and profit over transparency, accountability, and equity.
The chapter asserts that the metaphor of empire is useful because it captures how AI functions as both a tool and a structure of power. Like past empires, the AI industry extracts resources—in this case, data and human labor—from vast populations, reorganizes societies around its own priorities, and presents its expansion as a form of progress. Yet, also like empires, it generates resistance, contestation, and counter-narratives that challenge its legitimacy.
One central question raised is whether societies will continue to cede authority to opaque technical systems or reclaim democratic agency over how technology is designed and deployed. The chapter highlights that deference to algorithmic authority has already reshaped expectations about privacy, fairness, and even the scope of human judgment. Many people have come to accept that AI systems are too complex to question, reinforcing a dynamic in which accountability is diffused or denied.
The text argues that confronting this dynamic requires first acknowledging the full scope of AI’s social and ecological costs. From carbon emissions generated by training large models to the psychological toll of content moderation and the exploitation of low-paid data workers, these costs are too often rendered invisible by narratives of innovation. Recognizing them is the starting point for developing more sustainable and humane alternatives.
The chapter also calls for rethinking how value is defined in AI development. Rather than measuring progress solely in terms of performance benchmarks or market capitalization, societies could adopt metrics that center well-being, equity, and ecological health. This shift would require not just new policies but a cultural transformation in how success is understood and rewarded.
Policy proposals are presented as necessary but insufficient on their own. While stronger regulations, impact assessments, and transparency requirements are essential, the conclusion argues that structural change will also depend on empowering workers, communities, and civil society organizations to participate meaningfully in decision-making. Democracy must be embedded into the governance of AI, not treated as an afterthought.
International cooperation is highlighted as a crucial element. Given that AI systems operate across borders, piecemeal national regulations can only go so far. Building shared frameworks for accountability and ethical standards will be challenging in an era of geopolitical rivalry, but the alternative is a world where power accrues to whichever actors are most willing to ignore safeguards in pursuit of dominance.
The chapter emphasizes that technological innovation does not have to be synonymous with social harm. AI could be redirected to address pressing collective challenges—like improving healthcare access, mitigating climate change, or supporting democratic participation—if institutions and incentives were realigned. Achieving this will require confronting entrenched interests and rejecting the fatalism that portrays current trajectories as unchangeable.
A recurring message is that individuals are not powerless. While it is easy to feel overwhelmed by the scale and complexity of AI systems, the conclusion argues that social movements, policy reforms, and cultural shifts have repeatedly altered the course of other technologies. Small actions can accumulate into momentum that reshapes norms and expectations.
The book closes by affirming that the choice is not between adopting AI or rejecting it wholesale, but between different visions of what kind of society AI will help build. Choosing an empire defined by democratic values, shared prosperity, and ecological sustainability is still possible—but only if enough people decide to act.
Key Points
- AI’s trajectory results from deliberate choices, not technological inevitability.
- The metaphor of empire captures AI’s extractive and organizing power.
- Deference to opaque systems undermines democratic agency.
- The social and ecological costs of AI are often hidden by narratives of progress.
- New measures of success should center well-being and equity.
- Regulation is necessary but must be paired with empowerment and participation.
- International cooperation is essential to prevent a race to the bottom.
- AI can be redirected toward solving collective challenges if priorities shift.
- Individuals and communities have agency to influence outcomes.
- The core choice is between competing visions of the society AI will help create.
