Technology…is not intrinsically bad. Much of it … is brilliant and beneficial – at least to humans. But invention often originates in short-term or siloed thinking. And even more frequently, its application fails because of political and economic decisions taken with little heed for non-humans and future generations. … The old idea of conquering nature has never really gone away. Instead of changing ourselves, we adapt the environment … The United States, though, pays little heed to its pre-industrial history. The country’s identity is deeply enmeshed with technology, which is treated as the great enabler of progress and freedom (Watts, 2021).
A successful society is a progress machine. It takes in the raw material of innovations and produces broad human advancement. America’s machine is broken. The same could be said of others around the world. And now many of the people who broke the progress machine are trying to sell us their services as repairmen (Giridharadas, 2019).
This book began with a literature review and the identification of emerging issues and case studies. The latter included the Internet of Things (IoT) and the prospect of ‘driverless cars.’ Related evidence from these and other sources suggested that the broad, rapid and largely unreflected-upon adoption of Silicon Valley’s high-tech offerings, while impressive in many respects, evolved from surprisingly narrow and inherently problematic foundations. A wide variety of human and social concerns have emerged that cast serious doubt on the viability of this trajectory and outlook. Among them are:
- Questionable values (unbounded profit, growth of monopoly power, size and over-reach in multiple domains).
- The calculated use of strategies intended to conceal how high tech and the growth of corporate power compromise and degrade many aspects of public and private life.
- Inadequate conceptions of human identity and purpose that contradict standards of safety, respect and dignity as defined, for example, in the UN Declaration of Human Rights.
- Equally thin and instrumental views of socially vital concepts such as ‘friends’, ‘communication’ and ‘progress.’
- One-dimensional views of high tech that bestow upon it an assumed and unquestioned ontological status that can neither be justified nor sustained.
- Failure to question self-serving practices that permit high-tech innovations to be released into social and economic contexts without due regard for unintended effects, drawbacks and long-term implications.
- How foresight and provident care have been overtaken by the naked power of speculative investments in ill-considered innovation, marketing and the resurgence of monopoly practices on a global scale (Slaughter, 2018b).
Chapter three considered some features of ‘compulsive innovation,’ took a brief look at artificial intelligence (AI) and also drew attention to the apparently unstoppable rise of surveillance systems around the world. Its main emphasis, however, was to begin the task of ‘framing solutions.’ It was proposed that certain ‘blind spots’ that afflict Silicon Valley, its investors and supporters, could be reconceptualised as opportunities to reframe and re-direct the entire enterprise. A four-quadrant model from Integral enquiry re-focused attention away from the over-hyped exteriors of IT systems to highlight dynamic but widely overlooked interior phenomena such as worldviews and values. Habermas’ insistence on the primacy of what he calls ‘constitutive human interests’ also served to anchor the discussion in these vital domains. The chapter reviewed a variety of strategies for better understanding and intervening in systems that undermine humanity’s autonomy and well-being. They included:
- Transcending reductionism and re-purposing the Internet;
- Productive innovation; and,
- Humanising and democratising the IT revolution (Slaughter, 2018c).
It is clear, however, that the IT revolution is anything but static. It is therefore unsurprising that a multi-faceted ‘pushback’ against the continued expansion and power of the Internet oligarchs has continued to grow and develop. In an Atlantic essay in mid-2019, Madrigal outlined 15 entities that he referred to as ‘an ecosystem of tech opponents’ (Madrigal, 2019). This chapter draws on some of these newly emerging insights to extend the scope of the critique and provide further support for possible solutions. It begins with a view of the ‘fractured present’ and continues with four contrasting accounts by individuals who have, in quite specific ways, acted as ‘witnesses’ to this unprecedented upheaval. The upcoming chapters also employ a metaphor from The Matrix film trilogy to consider how the real-world matrix of high-tech entities and systems can be better understood, or ‘decoded.’ Overall, the chapter suggests that the clarity of insight now emerging from such sources may begin to resolve the digital dilemmas we collectively face, and helps to establish the grounds for hope and effective action. Finally, we should be under no illusion that we are dealing with a stable situation or outlook. The over-reach of high-tech innovation and its thoughtless implementation has multiple costs and brings with it quite new dimensions of hazard and risk. In other words, we are treading unstable ground that is ripe for change. But what kind of change, and whose interests will prevail?
The fractured present
Many features of human history are known to work against integration and the smooth functioning of society. They include poverty, revolution, war, disease, the exhaustion of physical resources and imagination (Tainter, 1988). During recent centuries, and especially since the Industrial Revolution, new forms of human organisation and technology progressively extended this list, giving rise to new versions of old problems as well as entirely new ones. During the early 21st Century, a particularly perverse combination of IT capability and capitalist values created powerful waves of change and dis-integration that now permeate our own fractured present. While it suited the institutional beneficiaries of the IT revolution (Silicon Valley behemoths, associated start-ups, investors, certain government agencies) to evoke the myth of progress and portray this ‘revolution’ as a broadly liberating force, that view has steadily lost credibility. A particular series of events occurring within a very specific historical context, sometimes known as the ‘Neoliberal ascendancy,’ unfortunately arrived at precisely the wrong moment. As global dilemmas became increasingly evident, the view that ‘markets’ should prevail over ‘governance’ was used to repeatedly delay or destroy many of the very adaptive responses upon which more far-sighted policies could have been based. US governments in particular failed to fully comprehend or restrain the aggressive, monopolistic strategies that arose in their midst. Consequently, no-one in positions of power and authority succeeded in subjecting these developments to sufficiently thorough-going assessment, technological or otherwise.
In retrospect, few people paused to consider the future repercussions of these developments. Some may argue that this apparent blindness should be attributed to inherent human limitations, including plain, old-fashioned naivety. Yet the fact remains that the Internet oligarchs intentionally obscured the growing costs of their activities behind a wall of self-serving propaganda, marketing glitz, distraction and outright deception of the general public. The costs include undermining human agency, weakening democracy, destroying livelihoods, fracturing social systems and creating new sources of conflict and violence. The following vignettes evoke the ‘lived quality’ of situations replete with disturbing human consequences (Fazzini, 2019).
- A mother discovers that her 12-year-old son has become addicted to the hard porn he first encountered via a friend’s phone in a school playground.
- A student who’d sent intimate images of herself to her boyfriend finds herself being ogled and trolled months later by school acquaintances as well as strangers on the internet.
- New parents who’d installed a video monitor on their child’s crib find out later that the feed was intercepted by thieves who used it to compromise their home network.
- A young man is hauled before a court for furiously striking his pregnant partner because she challenged his addiction to multi-player online gaming.
- The owners of an organisation with an online presence switch their computers on one morning only to find that they have become victims of ‘ransomware’ and have been ‘locked out’ of all their data. To have any chance of retrieving it they are required to pay a sum of money in Bitcoin to a remote and unknown entity. Help is available but there is no guarantee the data will ever be recovered.
- On the internet, a mature, affluent woman falls for a good-looking former soldier who claims to have run into hard times. As their relationship develops, he asks for financial help. After several such transactions the victim discovers that she has been sending money to a 20-something scammer in Nigeria.
- The would-be purchasers of a new property discover that the deposit paid into their lawyers’ authorised account was diverted elsewhere by scammers and could not be recovered. The bank denies all responsibility.
These and countless similar examples have occurred, and are occurring, almost everywhere. Table 4 provides an indicative overview under three broad headings.
Table 4: Human, Social and Geopolitical Costs of the IT Revolution

Human costs:
• The loss of privacy on a vast scale.
• Loss of control over private data and the uses to which it is put.
• A steady decline in respect and tolerance for ‘others’ and other ways of being.
• A growing tendency to stereotype, blame, exploit and attack from a distance.
• Misuse of passwords to threaten, steal and control; the rise of identity theft.
• The rise of hacking, phishing, cyber-bullying and scams of every possible kind.
• The rise of online predatory behaviour, including the sexual abuse of children.
• Diminution of the right to be free of such abuse, and of the right to sanctuary.
• Evisceration of the inner lives of countless individuals, especially in developing nations.
• Propagation of false solutions and of solutions to problems that do not exist (‘solutionism’).
• Propagation of vacuous ‘entertainment’ that degrades human life and experience.
• The rise of equally vacuous ‘influencers’ who are richly rewarded for showcasing trash.
• The active promotion of outrage as a means of creating ‘user engagement.’
• Careless and repeated abrogation of the 1948 UN Declaration of Human Rights.
• Denial of the right to an open and ‘surveillance-free’ life now and in the future.

Social costs:
• Repeated assaults on the value of truth and the integrity of scientific knowledge.
• The consequent weakening of social integration and clear-sighted decision making.
• Radical questioning and undermining of precedent and authority in almost every domain.
• The compromising of core human institutions such as government, health and education.
• The decay of social capital, traditions and ways of life built up over generations.
• The deliberate or careless resourcing of ‘bad actors’ at every level and in every country.
• The broadcasting of demeaning ideas, memes, narratives and images of every kind.
• The curation, replication and use of anti-social ‘performances’ (including sexual assault and mass killings) that in turn promote further violence and destructive responses.
• The deliberate use of dopamine reward responses to create and sustain addiction for commercial gain.
• The deliberate and systematic appropriation of creative work – including that of artists, writers, musicians and journalists – without adequate (or any) payment.
• The associated ‘starvation’ of traditional news media through direct theft of material and loss of funding through declining advertising income.
• The attempt to replace government services funded by formal taxation with for-profit charges levied by private companies in their own interests (for example, aged care, health care, education and related social services).
• The re-orientation of intra-nation security services from the protection of native populations to the wholesale invasion of their privacy and autonomy.
• The corresponding inability of governments to protect themselves or their citizens from random external cyberattacks.

Geopolitical costs:
• A continuing shift from the Internet as a positive enabler of legitimate civil functions to a multi-dimensional liability, i.e. an expanding series of hard-to-fix vulnerabilities.
• The willingness of nation states to develop increasingly powerful surveillance capabilities and high-risk interventions in the IT systems of other countries for purposes of intimidation and control.
• The resulting ‘dismal dialectic’ by which competing nation states seek temporary advantage over others by pursuing ever more dangerous and threatening internet- and satellite-enabled offensive capabilities.
• The growing likelihood of autonomous ‘soldiers,’ ‘smart’ drones and the like, bringing the prospect of cyber warfare ever closer.
• The asymmetric benefits that accrue to ‘bad actors’ at every level: the costs of committing Internet-enabled crime (money laundering, financial scams, illegal transfers to and from rogue administrations) tend to be very low, while the costs of pursuing offenders in terms of time, money and expertise are prohibitively high.
• Multiple vulnerabilities arising from the lack of coordination and cooperation in the digital arena between the three largest centres of power and control: China, Russia and the USA.
• The global emergency, however, recognises no political boundaries whatsoever. Although IT systems have achieved global reach, few effective human and political organisations have emerged that are capable of providing integration and coordination on a similar scale.
• Effective global governance appears to be a remote possibility at present.
These examples demonstrate how profoundly the IT revolution – as implemented by Silicon Valley and its clients – has helped to fashion the dangerous and unstable world that we now inhabit. It is a world that blunders into new dilemmas while failing to resolve those it already has. What many have overlooked, for example, is that to maintain what are now considered ‘normal’ operations, the high-tech world can no longer function without recourse to vast numbers of very complex devices operating silently in the background. The entire system is, in principle, vulnerable and needs to be constantly protected from entropic malfunction and deliberate on-line aggression (Galloway, 2020). Assurances regarding these endless liabilities have never been fulfilled. It is unlikely that they ever will be (Gent, 2020).
To summarise, Western civilisation has embarked on a process of high-tech development with certain well-known benefits and other less well-known costs for which there appear to be very few easy or ready-made solutions. It is therefore worthwhile to enquire whether the IT revolution itself may constitute a new and dangerous progress trap (Lewis and Maslin, 2018). Rather than being passively accepted, the technology onslaught needs to be subjected to sustained critical enquiry. Exactly how does this historical condition affect life, culture, tradition and meaning? How, under these chaotic circumstances, can solutions be crafted that hold out real hope of recovering the collective future? In order to decode the matrix we first need to understand how it developed and why.
Understanding the matrix
RED PILL, BLUE PILL?
In the first Matrix movie the lead character, Neo, is offered a choice between red and blue pills (Warner Bros, 1999). One will wipe his memory and return him to the world of conventional surfaces with which he is familiar. The other will open his eyes so that he can not only see The Matrix for what it is but penetrate into it, and perhaps even influence it. He opts for the latter and, as the mundane world slumbers, begins his ‘deep dive’ into reality. The trilogy’s narrative may not be entirely coherent, but it certainly tapped some deep and perhaps obscured aspects of human psychology. In so doing it arguably triggered half-conscious questions or fears about ‘what is really going on’ with succeeding waves of technology over which we appear to have little or no control. The key word here is ‘appear,’ since what is at stake are not immutable natural forces or God-like injunctions handed down from above. Rather, the high-tech world has been created by real people, in real times and places, making critical decisions at the behest of vested interests and imperatives.
In the ‘blue pill’ version of ‘the real,’ the global monopoly platforms created by Google, Facebook and others are believed to exist to help us access information, explore human knowledge and connect with others around the world. We are led to believe that the power of modern technology is at everyone’s fingertips to do with as they will. In exchange for what are described as ‘free’ services, personal data from everyday lives and activities is scanned, recorded, used and sold. This information helps ever-attentive suppliers to better know and anticipate human needs. By drawing on as much information as possible, dedicated Google users are, it is said, enabled to navigate their way more efficiently through an ever more complex world. For reasons best known to themselves, some appear happy to install various ‘digital assistants’ that record their daily conversations. Some choose to unburden themselves of familiar low-grade tasks such as remembering train times, navigating a city or knowing which groceries to buy and when, which encourages them to use these services in real time. Dedicated ‘always-on’ monitoring devices that connect the young to their parents and friends, and the elderly to medical support, seem to have wide appeal. Yet spying on everyone, even in their most private moments, are hidden armies of ‘data aggregators’ that sift, sort and organise the flood of information about what people do, where, how and even why they do it. It can be claimed that such technologies protect individuals from external harm and perhaps protect society from certain kinds of criminal activity. Overall, it is presumed that the ‘blue pill’ provides a pretty fair bargain.
Such passive and generalised assumptions that these technologies, and the systems in which they are embedded, are benign and useful have been widely accepted. We know this because the monopoly platforms (and their investors) have grown so immensely rich and powerful on the proceeds (Bagshaw, 2019). A ‘business-as-usual’ view simply assumes that these arrangements are broadly acceptable – albeit requiring routine upgrades and related changes from time to time (improved ‘personalisation,’ longer battery life, sleeker handsets and so on). In the absence of countervailing perspectives and clear evidence, alternative views of high-tech modernity can be difficult or impossible to articulate. This is especially the case in less affluent nations where Facebook, for example, and its subsidiary WhatsApp are used by large numbers of people who confuse these invasive and heavily monetised apps with the Internet per se. Given the strong tendency of social media to exacerbate dissent, extremism and even direct violence, the consequences can be tragic. This has been seen in mass shootings, some of which have been streamed in real time. But a similar dynamic has occurred in other situations where social dissent has risen to such extremes that community violence and ‘ethnic cleansing’ have resulted. Two examples are the descent of the ‘Arab Spring’ into chaos and the expulsion of the Rohingya from their homes and villages in Myanmar to a precarious existence in nearby Bangladesh. Nor, given recent events, is the US immune from such consequences.
Clearly a ‘red pill’ account requires real effort over time and a certain tolerance for discomfort and uncertainty. It raises disturbing questions that not everyone may be ready or able to pursue. It acknowledges the reality of what some regard as a true existential crisis, with ‘forks in the road’ and pathways to radically different future outcomes. This view also suggests that the continuation and further development of surveillance capitalism leads directly to the kind of over-determined dystopian oppression already emerging in China (Needham, 2019). It therefore seeks to clarify just how the juggernaut works, to identify and name hidden factors, and to expose the intangible forces that are working behind the scenes to shape our reality, and ourselves, in a variety of perverse ways. Yet before the juggernaut can be tamed or directed toward different ends, society needs to understand in some depth how we arrived at the point of being confronted by deformed versions of high tech and a fundamentally compromised Internet. Such an account clearly goes beyond the critique of technical arrangements to questions of purpose, history and context.
MISCONCEPTIONS, MERCHANDISING AND ADDICTION
The view explored here is that the IT revolution owes at least as much to human and cultural factors as it does to purely technical ones. For example, the barely qualified optimism with which it has been associated arguably owes more to marketing and merchandising – America’s great unsought ‘gifts’ to the world – than it does to the services and distractions of any device whatsoever. The close association that’s claimed to exist between technical innovations on the one hand and human progress on the other tells only part of the story and therefore remains problematic. Such generic ‘optimism’ is, perhaps, little more than a handy distraction used to conceal the predations of corporate power in this singularly heartless industry. As digital devices continue to penetrate nearly every aspect of human life, the forces driving them need close attention. They are shaped and enabled every bit as much by unconscious pre-suppositions and cultural myths as they are by computer chips, hard drives and servers. Such underlying intangibles – values, cultures and worldviews – powerfully determine what forms technologies take and the uses to which they are put.
John Naughton, a seasoned observer of the shifting IT landscape, has identified what he refers to as ‘two fundamental misconceptions.’ The first is ‘implicit determinism,’ which he describes as:
The doctrine that technology drives history and society’s role is to adapt to it as best it can… that capitalism progresses by “creative destruction” – a “process of industrial mutation that continuously revolutionises the economic structure from within” (Naughton, 2020).
In this view, the second critical flaw in the worldview of Silicon Valley is ‘its indifference to the requirements of democracy’:
The survival of liberal democracy requires a functioning public sphere in which information circulates freely…Whatever public sphere we once had is now distorted and polluted by… Google, YouTube, Facebook and Twitter, services in which almost everything that people see, read or hear is curated by algorithms designed solely to increase the profitability of their owners (Naughton, 2020).
The ‘determinism’ and ‘indifference’ that Naughton refers to are two of many unacknowledged features that characterise this particular high-tech culture and degrade so many of its offerings. Another is the addiction to digital devices and the services they provide. Their appeal was ‘designed in’ with enormous care and strenuously promoted using every available marketing tool and technique. The language of advertising is, quite obviously, a projection of corporate interests and, as such, has no place for what might be called ‘autonomous needs.’ Its intrinsic conceptions of human beings and human life are irredeemably reductive. The fact that advertising has become the central pillar of the Internet is not something to be passively accepted; it requires an explanation.
During the post-war years, routine sales were regarded as too slow and uncertain, meaning that profits were always going to suffer. The modern advertising industry grew as a response to this highly ‘unsatisfactory’ situation: the whole point was to boost ‘demand.’ The strategy was so successful that over subsequent years ‘consumer demand’ became a ‘meta-product’ of a particular worldview (growthism) that expressed specific values (materialism, envy, consumerism and so on). Buying and selling in this high-pressure mode made a kind of sense in the heady years of post-war America. The big mistake was to allow it to become so embedded, so much a part of the ‘American way of life,’ that it became normalised thereafter (Packard, 1962). Clearly times have changed, and those early imperatives make less sense than ever. Yet the present wave of IT-related selling continues to draw heavily on the very same manipulative tradition. One clear difference with this new flood of products and services, however, is that entirely novel features appeared that seemed to by-pass rational thought and ethical evaluation. Compelling new devices and the apparently ‘free’ services they enabled seemed to meet people’s authentic needs for organisation, communication, agency and so on. At the time they were mistaken for gifts. More recently, however, the nature, extent and costs of addiction to digital devices, especially for children and young people, have become impossible to ignore (Krien, 2020). Yet even now responses to such concerns remain slow, uncertain and largely cosmetic (Exposure Labs, 2020).
Heavily curated projections of IT as a neutral or positive enabler have clearly succeeded up to a point. But as more people experience the social, cultural and economic ramifications, the legitimacy of digital manipulation will likely attract ever greater scrutiny. Societies permeated by powerfully networked digital devices not only operate along unconventional lines, they also overturn earlier ways of life (Klein, 2020). The era of large-scale, targeted and pervasive merchandising may not be over, but it does face new challenges that emerge from lived experience and the deep, irrepressible need for human autonomy. As people seek to understand their reality, their world, in greater depth, they will be more willing to look beyond the photo app, the chat group and those innocent-looking Facebook pages where powerful AIs stare coldly back into their souls. They will want to know why this unauthorised invasion happened and how it can be prevented from recurring. They will need a clearer understanding of the nuances of innovation and will demand more honest explanations from those who shaped this revolution without regard to the consequences.
MONETISING DATA, INVENTING ‘BEHAVIOURAL SURPLUS’
Google was incorporated in the USA in 1998, a few years after the Mosaic web browser had opened up the Internet to the public. Data collected at that early stage was seen merely as raw research material, for which authorisation was neither sought nor granted. Indexing the World Wide Web (WWW) provided reams of data which were analysed and fed back into the system for users’ own benefit, allowing them, for example, to fine-tune their own searches. This arrangement recognised what had long been a standard feature of commercial practice – the inherent reciprocity between a company and its customers. But since Google did not have a distinctive product of its own, the company was considered insufficiently profitable (itself a social judgement based on particular values and priorities). Subsequent discoveries, such as ‘data mining,’ constituted a ‘tipping point’ that changed everything. Rich patterns of human behaviour were progressively revealed, but the research interest no longer applied; it was overtaken by commercial imperatives. These covert profit-making operations were regarded as highly secret and were shielded from public view. A further critical shift occurred when it was realised that the avalanche of new data could be manipulated and monetised. The vast potential was eagerly welcomed by Google’s equity investors who, as Google announced at a 1999 press conference, had contributed some US$25 million to the company. These investors, with their value focus on money, expansion and profit, brought strong pressures to bear with the sole aim of boosting the company’s financial returns, in which they now held a powerful interest. None of these activities apparently broke any laws or regulations as they existed at the time, so they were not considered illegal. The best that can be said is that they were, perhaps, ‘non-legal’ in that they took place in secret and within a regulatory vacuum.
Very few understood at the time that this constituted a critical point of transition from one form of commercial activity to another. But it was consistent with Google’s priorities, which had never centred on improving people’s lives or contributing to society in any meaningful way. A couple of years later one of Google’s founders, Larry Page, spoke about further options that lay beyond mere search operations. This was made explicit when he declared that ‘People will generate huge amounts of data… Everything you’ve heard or seen or experienced will become searchable… Your whole life will be searchable’ (Zuboff, 2019, p. 98). As Zuboff (2019, pp. 68-69) notes, ‘Google’s users were not customers – there is no economic exchange, no price and no profit. Users are not products but sources of raw-material supply.’ She adds that:
Google turned its growing cache of behavioural data, computer power and expertise to the single task of matching ads with queries… It would cross into virgin territory. Search results were…put to use in service of targeting ads to individual users… Some data would continue to be applied to service improvement, but growing stores of collateral signals would be repurposed to improve profitability both for Google and its advertisers. These behavioural data available for use beyond service improvement constituted a surplus, and it was on the strength of this behavioural surplus that the young company would find its way to the “sustained and exponential profits” that would be necessary for survival (Zuboff, 2019, p.74-5).
To achieve this ambition the company simply ignored social, moral and legal issues in favour of technological opportunism and unilateral power. These were, and are, all human decisions, human inventions, not ‘an inherent result of digital technology nor an expression of information capitalism.’ The new model was ‘intentionally constructed at a moment in history (and represented) a sweeping new logic that enshrined surveillance and the unilateral expropriation of behavior as the basis for a new market form. (It) resulted in a huge increase in profits in less than four years’ (Zuboff, 2019, pp. 85-87).
Greed and opportunism were, however, not the only factors involved. The dominant Neoliberal ideology succeeded in reducing the scope and power of government regulation and promoting a structural shift toward market-led practices. Anti-trust strategies that had previously been used to constrain monopolies were also set aside, leaving companies to expand seemingly without limit. As mentioned below, Zuboff and Snowden both refer to the aftermath of the 9/11 disaster, when the CIA and other government agencies formed a powerful but hidden alliance with Google. The agencies made a fateful choice to draw as fully and deeply as possible on the very surveillance techniques pioneered commercially by Google. These two highly secretive entities then found ways to conceal their surveillance operations not merely from the public but also from Congress. The immediate result was a decisive shift away from ‘privacy’ toward a new and dangerous type of ‘security’ (Snowden, 2019; Greenwald, 2015). Earlier aspirations for an ‘open Internet,’ and the long-standing value assumption that human rights were paramount, were abandoned. The scope of these changes was admitted in 2013 by former CIA Director Michael Hayden when he acknowledged that ‘the CIA could be fairly charged with militarising the World Wide Web’ (Zuboff, 2019, p. 114). These developments arguably set the stage for the dangerous and unstable geopolitical situation we now face.
Google became progressively stronger. Its targeted advertising methodology was patented in 2003 and the company went public in 2004. Profits rose precipitously and it soon became one of the world’s richest companies. In its rush for dominance and profit it pursued a series of unsanctioned, non-legal projects such as Google Earth (2001), an eventually unsuccessful attempt to ‘digitise the world’s books’ (2004) (Guion, 2012) and Street View (2007). While all have their uses, the company’s supreme over-confidence and ignorance of common values repeatedly demonstrated its complete lack of interest in seeking or gaining legitimate approval. What it did obtain within the US was ‘regulatory capture’ of government policy. The question that will not go away, however, is whether any private company should be allowed to have this power, and whether that power is better invested in public utilities charged with pursuing social well-being rather than private profit. Such distinctions matter a great deal and have implications beyond IT. In 2012, for example, Google paid its dues to its ideological friends by bestowing generous grants upon conservative anti-government groups that opposed regulation and taxes and actively supported climate change denial (Zuboff, 2019, p. 126). Hence the regressive aspects of Google’s business model and sense of entitlement clearly extend far beyond the surveillance economy per se.
Having opened up vast new and undefended territories of ‘behavioural surplus,’ Google saw its model emulated by many others, beginning with Facebook (Taplin, 2017). Today Google’s penetration into nearly every aspect of social and economic life is more extensive, and more powerful, than that of any nation state. Yet the legitimacy of these operations remains as problematic as ever. In order to understand and confront the Matrix, cultural factors, powerful individuals and obscure decisions all need to be taken into account.