Witnesses to the revolution

The application of Hayek’s Big Idea to every aspect of our lives negates what is most distinctive about us. It assigns what is most human about human beings – our minds and our volition – to algorithms and markets, leaving us to mimic, zombie-like, the shrunken idealisations of economic models… As a result – the space where we offer up reasons and contest the reasons of others – ceases to be a space for deliberation, and becomes a market in clicks, likes and retweets. The internet is…magnified by algorithm; a pseudo-public space that echoes the voice already inside our head. (Metcalf, 2017).

You only have to spend billions marketing something if its worth is in doubt (Meadows, 2001).

The steady emergence of publications and new sources of insight into the substantive character of the IT revolution arguably constitutes a counter-trend in its own right, since understanding precedes action. Although it is beyond the scope of any single paper to survey these in detail, four sources warrant particular attention. They are Permanent Record (Snowden, 2019), The Psychology of Silicon Valley (Cook, 2020), The Age of Surveillance Capitalism (Zuboff, 2019) and How to Destroy Surveillance Capitalism (Doctorow, 2020). Snowden’s (2019) focus is primarily on his experience as a trusted member of the US security apparatus. He explains how, in the normal course of his work, he was confronted by critical changes in the way his government reacted to geopolitical shifts and events. He was shocked to discover how the surveillance options enabled by newly emerging technologies were turned upon the American people. Cook’s career began as co-founder of a non-profit organisation focusing on the effects of technology. This, in turn, led her to consider how high tech affects society more generally. From here it was a short step to exploring the psychological dimensions of Silicon Valley, the single most influential incubator of these changes. Her conclusions add compelling detail to the overall picture.

Zuboff (2019) was a university business professor with long-standing interests in how new technology affected workers and organisations. This earlier focus provided a sound basis for her detailed investigation into how the Oligarchs were created. Of greatest significance, perhaps, was her in-depth exposure of the stealth methods embedded in their business models that allowed them to avoid detection and regulation for so long. From here she provided a rich account of how they undermined democracy and social norms in the pursuit of larger profits. Doctorow (2020), on the other hand, is a radical thinker with strong and well-established links within the IT subculture. His work embraces fictional and non-fictional approaches to IT-related issues. Thus, he has a distinctive ‘insider’s’ view both of the tech itself and the critiques advanced against it. As such he provides his own critique of Zuboff’s contention that the main culprit here is ‘rogue capitalism.’ For Doctorow (2020) the main issues concern the resurgence of monopolies and the need for far more comprehensive digital rights.

Taken together the authors of these works qualify as ‘witnesses to the revolution.’ As such, they serve as a corrective to the prevailing view that this revolution is primarily about technology and the growing array of high-tech digital devices.  Readers of earlier works will also be aware that Integral approaches distinguish between inner and outer realities as well as individual and collective ones. Hence much of our interest here is how this revolution has affected, and is continuing to affect, the inner lives of people, organisations and cultures.

Snowden’s dilemma

In contrast to other, more in-depth treatments, Snowden’s account is straightforward, almost banal. After being injured during army training, his proficiency in IT enabled him to begin working in the security sector. He worked his way up through various government agencies and eventually earned the envied ‘most trusted’ status. With an unquestioned belief in the goals and purposes of this work he became adept at handling highly classified material. Until 9/11, after which everything changed. He discovered incontrovertible evidence that, contrary to accepted practice and in direct contravention of the US constitution, the US government had started spying on its own people. Back in 2004/2005 he’d been aware of an unclassified report that outlined some superficial details of the President’s Surveillance Program (PSP). This allowed for ‘warrantless wiretapping’ of citizens’ communications and was supposed to wind down within a couple of years. Several years later, however, the classified version intended only for a very highly restricted group turned up on his desk. It described a secret program known as STELLARWIND, under which ‘the agency’s mission had been transformed from using technology to defend the country to using it to control it.’ This had been achieved by ‘redefining citizens’ private Internet communications as potential signals intelligence.’ He realised that ‘the activities it outlined were so deeply criminal that no government would ever allow it to be released unredacted.’ The National Security Agency (NSA) argued that ‘the speed and volume of contemporary communication had outpaced, and outgrown, American law … and that a truly global world required a truly global intelligence agency.’ This, in turn, and according to ‘NSA logic,’ led to ‘the necessity of the bulk collection of internet communications’ (Snowden, 2019, p.177).
In summary, the way that STELLARWIND was being used meant that instead of working to defend the US and its citizens, the NSA had started to identify their private communications as standard ‘intel’ ripe for unlimited collection and analysis.

What Snowden had unwittingly discovered was what he called a ‘culture of impunity’ that had somehow circumvented the legislative branch, the judiciary, civil society representatives and even the US executive branch. Notions of ‘privacy’ that, as noted earlier, had supposedly been enshrined in the post-war UN Declaration of Human Rights had been trashed without any real public justification, debate or explanation. These were political decisions taken under the protective cover of ‘security’ – but that was not all. There was something about the technology itself that opened it to such egregious misuse. Snowden realised that while regulatory regimes were specific to each country, technology crossed borders with impunity and remained largely intact. This meant that the spread of personal data was, in principle, unlimited. Moreover, its unconstrained proliferation extended throughout and beyond individual lives. It also struck him forcefully that no previous generation had ever had to face such a profound symbolic assault on their privacy and continued well-being. Since we were the first, it was essential that we faced up to what was happening and dealt with it.

Such conclusions are decidedly ‘non-trivial.’ They indicate global changes of state that cannot but affect humanity in powerful but little-understood ways. Among these are that the overreach of high tech and unconstrained power appear to lead, in Snowden’s words, to ‘a vision of an appalling future.’ He is therefore justified in asking: is this indeed what we are willing to impose on present and future generations? In this view humanity appears to have reached what might be called ‘a historical pivot’ of unknown dimensions. While Snowden has been portrayed as a ‘whistleblower’ or even ‘traitor’ it’s clear that he is neither speaking for himself, nor pursuing merely personal interests. He seeks to act on behalf of humanity and, indeed, of future generations. As such the values being expressed here are clearly world-centric in scope and the worldview post-conventional. His decision to leave the US for what could well become a lonely and isolated life in exile became a moral imperative. Robert Mann’s (2014) account of the Snowden story is exemplary. It not only accurately captures other personal aspects but also shows how decisions after the 9/11 attack at the very highest levels of the US government contradicted the constitution and normalised criminal uses of the internet. This, in turn, established a series of precedents that made it that much easier for other nations to follow suit. It was, at heart, a fatal abnegation of world leadership with immense long-term costs into the future.

Two points stand out here. First, his view from the inner recesses of the US security apparatus raises deeply concerning questions about just what values are operating there. Second, if those values and their associated motivations serve to undermine, rather than protect, civilised life, then the capacity of US governance to deal firmly and decisively with the many dilemmas raised by its own agents of high-tech innovation is also compromised. It follows that the identity, values and culture of Silicon Valley (SV) are central and need to be taken fully into account. The myths and stories it tells, the narratives it projects upon the wider world, have real consequences, some of them contradictory and severe. A psychological profile of the Valley helps to provide a more nuanced understanding of how we arrived at this particular point in history. Equally, such a profile, if credible, might well provide useful insights into just what changes in its culture and worldview may be required.

Psychology of Silicon Valley

Katy Cook’s decision to explore the psychology of Silicon Valley began with questions that have occurred to many others. How, for example, was it that so many people were becoming addicted to successive waves of high-tech devices? What might be the cumulative effects on health, wellbeing and relationships? Where is all this unregulated innovation taking us? Her initial involvement was with a non-profit organisation that considered the effects of technology and ran awareness campaigns on possible responses. The perspective she later developed is useful here because, in contrast to more common everyday external views of the IT revolution, she focuses on internal aspects that normally remain implicit, out of sight, and thus seldom considered. Viewed from a psychological perspective, however, the Valley, and all it represents, looks decidedly darker and more problematic than the upbeat public persona it presents to the world. It highlights, for example, the fact that there are major differences between what this world-shaping entity would like others to believe and what it actually is. Cook’s view is essentially that Silicon Valley has been ‘corrupted’ because it prioritises the wrong (i.e. socially damaging) things. These include making profit and growth the ultimate values, owners and shareholders the ultimate beneficiaries, and the use of outright lies and manipulative evasions as core strategies. At heart, she believes, the Valley fails to understand itself. This may seem an obvious point, but it has real implications. It means, for example, that in spite of its wealth and power (or perhaps because of them) it lacks the qualities that psychologists have long associated with ‘emotional intelligence.’ These are serious charges so it’s worth summarising the evidence.

Under ‘identity’ she notes that the Valley sees itself as an ‘ideas culture.’ Whereas in earlier times this was linked with counter-cultural aspirations for a more open and democratic future, established businesses and their investors remained doggedly focused on the same old ‘extractive’ culture. Big ideas are said to thrive in Silicon Valley but they are narrowly applied in the search for technical solutions. This makes greater sense when key traits of programmers and computer specialists are revealed. A considerable body of evidence shows that they are skilled at puzzle solving but they neither like, nor are much interested in, people. Moreover, the industry actively selects for ‘anti-social, mathematically inclined males’ (Cook, 2020, p.24). The author is not alone in suggesting that the ‘high-fliers’ of Silicon Valley should be considered, in some crucial respects, as ‘under-educated.’ This initially startling conclusion is supported by evidence that their educational backgrounds are strongly associated with science, maths and engineering but lacking when it comes to the human sciences. With this in mind we need look no further to explain what Cook (2020) regards as ‘a staggering amount of unconscious bias.’ In summary, she identifies three key issues:

  • Tech tends to be an uncommonly homogenous culture, marked by a lack of diversity and an unwillingness to embrace pluralism.
  • It is rife with discrimination, including sexism, ageism, and racism, as well as harassment.
  • There is a disturbing level of immaturity that permeates many corporations, often emanating from the highest levels (Cook, 2020, p.39).

For these and related reasons the author concludes that, industry-wide, there’s evidence of a ‘working environment that is fundamentally broken and unhealthy.’ It’s entirely consistent with this view that the myths and stories promulgated by Silicon Valley have been carefully curated at huge expense by marketing experts with the sole purpose of exerting desired effects on affluent, but distinctly naïve, populations. A litany of manufactured ‘sound bites’ familiar to many reveals attempts to portray Silicon Valley’s major companies in a more positive light. They include ‘Bring the world closer together’ and ‘Give everyone a voice’ (Facebook); ‘Organise the world’s information’ (Google); ‘Broadcast yourself’ (YouTube); ‘Make tools that advance humankind’ (Apple); and ‘Work hard. Have fun. Make History’ (Amazon) (Cook, 2020). Thus, while they may claim to reflect ‘lofty aspirations’ and ‘benevolent ideals’ they are just as likely to be ‘false and toxic aphorisms designed to mask the true intentions of the companies who craft them.’ Such slogans are intended to distract attention from the underlying aims of the industry, which are to ‘bring in the largest amount (sic) of users, for the longest period possible, at the most frequent rate.’ Hence, overall, Silicon Valley ‘has managed to paint a self-serving picture of itself that fails to reflect the reality of its priorities and intentions’ (Cook, 2020, p.70). The key point to note is the divergence between what Silicon Valley says and what it actually does. ‘Capital,’ she notes, ‘doesn’t want to change the world. (It just) wants to make more capital’ (Cook, 2020). And this really is the heart of the issue. Many of the claims that emerge from Silicon Valley seek to promote ‘desirables’ such as engagement, connection, friendship and the like. But behind such pronouncements there is a barely concealed moral vacuum.
There is no reality at all in shared ‘background myths’ such as ‘tech knows best’ or that these companies can in any way be considered ‘trustworthy custodians.’ The motivations and values underlying what they actually do clearly point in a quite different direction.

Cook (2020) points to the tension between ‘socially liberal values and techno-capitalist incentives,’ noting that the latter remain focused on the kinds of limited, short-term, profit-oriented values mentioned above. But what she calls the ‘transgression’ of Silicon Valley is not so much a result of its ‘for-profit’ status and ‘corporate priorities’ as its ‘gross misrepresentation of its motives’ (Cook, 2020). Sufficient time has now passed for some of the consequences to become clear. She adds:

SV has spent years and billions of dollars persuading the public to worship an industry that claims to have its best interests at heart. (However) the tech industry is driven by the same market forces as any other market-driven industry … Placing greater importance on making money than on taking care of people’s needs results in a society with deeply unhealthy values, in which people come second to financial objectives. A society built on such values loses a great deal of its capacity for humanity. We have allowed the tech industry, through a lack of regulation and the proliferation of unhealthy behavioural norms, to become the bastion of an economic order that has abandoned morality in favour of dividends for an elite few. (Furthermore), ‘research has found evidence of an inverse relationship between elevated social power and the capacity for empathy and compassion’ (Cook, 2020).

The divergence between what Silicon Valley claims to have delivered and what it has actually achieved is undoubtedly one of the chief underlying causes of the deep social divisions, disunity and perpetual conflict that have sadly become distinguishing features of American society. Having failed to rein in the Oligarchs and related financial and corporate interests, the US appears to have suffered a ‘collective breakdown of order, truth, and the psychological orientation they provide.’ The profit and ad-driven business model that Silicon Valley adopted thrived on the back of social trends that have progressively undermined the coherence and status of truth, respect and fact-based debate. Those trends include radical individualism, market fundamentalism, polarisation, volatile dissent and a callous indifference to the well-being of others. Hence, ‘digital disinformation’ now constitutes a serious risk not only to the US but to the whole world.

Clearly, the spread of such disruptions and distortions across entire populations does not end at the level of damaged individual lives. The deliberate and forceful ramping up of ‘engagement’ by any means deemed necessary ensured that the overall costs continued to mount, such that a full accounting is unlikely ever to be rendered. While the potential for good certainly existed at the outset, the combination of naivety, greed and lack of oversight and regulation allowed a toxic ecology of dangerous technology-enabled innovations not merely to emerge but also to be normalised. Collectively these drove the overall costs of the IT revolution into quite new territory. It was no longer simply a medium for individuals and powerful groups; it swelled with ‘bad actors’ of every kind, from petty criminals to nation states. What has since emerged even exceeds what the ‘dark market’ could achieve (Glenny, 2011). Both the disastrous 2016 US election and Brexit demonstrated that entire societies are no longer protected from digital manipulation. Which helps to explain why during 2019-2020 the world found itself backing uncertainly into a state of geopolitical instability and the ever-growing threat of global cyber war (Zappone, 2020).

Finding our bearings, challenging legitimacy

At close to 700 pages, The Age of Surveillance Capitalism is not, by any means, a ‘quick read.’ The language makes few concessions and the barely concealed passion behind some sections is perhaps not entirely consistent with standard academic conventions. Yet the effort to come to grips with this revelatory and courageous work could hardly be more worthwhile. In effect the author re-frames key aspects of the last few decades, the time when IT took on new forms and invaded human awareness and ways of life before anyone grasped the significance of what was happening. Now that the details of this invasion have been documented in compelling detail, a fundamental reorientation (both to the high-tech systems and, more importantly, to those in whose interests the present deceptions are maintained) can be envisaged. Which is no small achievement. At the macro level revised understandings of the recent past allow for a re-consideration of the present, from which may emerge distinctively different futures than earlier, more anodyne, default views had perhaps allowed. For example, Peter Schwartz’s over-optimistic vision in The Long Boom (2000) is one of many that saw the coming IT revolution in overwhelmingly positive terms.

One question answered early on is: who was responsible for this invasion? There’s a distinct cast of characters, prominent among which are the owners and investors of Google, Facebook and similar companies. Behind these organisations, however, are many others including neo-liberal ideologists, venture capitalists, several US presidents and powerful agencies closely associated with the US government. Yet even that’s too simple. As is clear from Snowden’s account, Bin Laden, the prime mover of the 9/11 attack, also had an influence, since it was this event that led US security agencies to pivot away from earlier concerns about ‘privacy’ in favour of a particularly invasive form of ‘security.’ It’s a bit like the ‘rabbit hole’ featured in the Matrix film trilogy: the further down you go, the more you find. Zuboff, however, is far from getting lost. She locates dates, events, players and consequences in a highly disciplined and comprehensible way. Her almost forensic methods open up the possibility of knowing what has happened, understanding it and gaining clarity about what responses may be needed.

Part of Zuboff’s contribution is terminology. She provides a language and a framework that serve to reveal much of what’s been hidden and to resource the projects and actions that are clearly needed. It’s necessary to note, however, that no language is objective, and early attempts to create one based on quite new phenomena are bound to require critique and modification over time. Language is, of course, anything but static. A couple of examples will suffice to demonstrate the relevance of these interventions. One is the notion of the ‘two texts,’ while a second is about learning to distinguish between ‘the puppet’ and ‘the puppet master.’ In the former case she makes a strong distinction between what she calls the ‘forward text’ and the ‘shadow text.’ The forward text refers to that part of the on-line world that users of, say, Google and Facebook can see, use and generally be aware of. This embraces the whole gamut of design features intended to keep people in the system where their actions and responses can be constantly harvested and sold to others (data processors, advertising companies, political parties and the like). The simplest way to think of this ‘text’ is to view it as the ‘bait’ that keeps people returning for repeated dopamine hits. The ‘shadow text’ refers to the vast hidden world owned by, controlled by, and singularly benefitting from what Zuboff (2019) calls the ‘extraction imperative.’ This is a secretive world that, even at this late stage, has experienced minimal regulatory oversight, especially in the US, the country of origin. Similarly, in the second case, a so-called ‘smart phone’ can be regarded as ‘the puppet’ that appears to operate according to its proximate owner’s bidding. Whereas the remote owners of hidden intelligences (a vast network of dedicated AI applications) are the invisible and currently unaccountable masters. Knowing how to use the former as a tool and enabler is one thing.
Coming to grips with the hidden imperatives of the puppet masters is quite another. The separation between the two is corrosive, sustained and entirely deliberate. Knowing this can provide part of the motivation to respond by acting in defence of human autonomy itself.

The author carefully explores how this system became established and how it morphed from being something useful that initially supported people’s authentic needs (for connection, communication, identity, location etc.) into an all-out assault on each person’s interior life. The shift from serving customers with high quality search functions to ruthlessly exploiting their personal details is described in detail. Even now, following the Cambridge Analytica and similar scandals, few have yet grasped just how far this process of yielding their interiority to what Zuboff (2019) calls ‘Big Other’ has gone. For example, she documents how it exerts particularly savage consequences on young people at the very time when their identities, sense of self etc. are already unstable as they proceed through the upheavals of adolescence. She has strong words for what is involved (Zuboff, 2019). For example:

Young life now unfolds in the spaces of private capital, owned and operated by surveillance capitalists, mediated by their ‘economic orientation’ and operationalised in practices designed to maximise surveillance revenues… (Consequently) …Adolescents and emerging young adults run naked through these digitally mediated social territories in search of proof of life… (Zuboff, 2019, pp. 456 & 463).

Immersion in social media is known to be associated with a range of symptoms such as anxiety and depression, but this particular rabbit hole goes deeper. Viewed through the evidence presented here, the combination of ‘rogue capitalism’ with the far-reaching capabilities of digital technology is bearing down on matters of primary and non-negotiable interest to all human beings. That is, the capacity of everyone to know, value and, indeed, to maintain their inner selves. It’s here that Zuboff (2019) introduces a pivotal concept – the primacy of what she calls ‘the latency of the self.’ She writes:

What we are witnessing is a bet-the-farm commitment to the socialisation and the normalisation of instrumental power for the sake of surveillance revenues… In this process the inwardness that is the source of autonomous action and moral judgement suffers and suffocates (Zuboff, 2019, p.468).

Thus, far from being the fulfilment of humanity’s aspirations and dreams, what she calls surveillance capitalism leads to ‘the blankness of perpetual compliance’ (Zuboff, 2019). Attentive readers may well ask: ‘have we not seen this before?’ We have, not only in the great dystopian fictions of our time but also in recent history. History shows that when entire populations are deprived of their inner lives, their deepest sense of self, they become depressed, diminished and even disposable. Zuboff gives credit to some of the early responses, many by the European Union and some member states. Yet there’s a long way to go before the myths promulgated by the Internet oligarchs are recognised by entire populations (and the politicians who represent them) and seen for what they are: a sustained assault by secretive but radically indifferent private entities on the very foundations of their humanity.

Perils of monopoly

Zuboff’s opus has obviously contributed much to the process of ‘de-mythologising’ the IT revolution and revealing the practices of some of its key players. It is both an analytic triumph and, to some extent, a personal crusade. It is to be expected that other observers will exhibit different and contrasting responses. Cory Doctorow’s account is informed by a more close-up, participant view of what the IT revolution is and does. His detailed view of how the new media actually work in practice suggests that the ‘surveillance’ side of the story, while dangerous and objectionable, may not be quite as effective and all-powerful as it first appears. In his understanding it is also, to some extent, a kind of double-edged sword with its own distinct weaknesses. So, rather than take on the Internet Oligarchs in a kind of ‘frontal assault,’ he considers some of the traps and issues that make them appear less monolithic and somewhat less threatening. Specifically, he suggests that the primary focus needs to shift from surveillance per se to the raft of problems he associates with monopolies. For example:

Zuboff calls surveillance capitalism a ‘rogue capitalism’ whose data-hoarding and machine-learning techniques rob us of our free will. But influence campaigns that seek to displace existing, correct beliefs with false ones have an effect that is small and temporary while monopolistic dominance over informational systems has massive, enduring effects. Controlling the results to the world’s search queries means controlling access both to arguments and their rebuttals and, thus, control over much of the world’s beliefs. If our concern is how corporations are foreclosing on our ability to make up our own minds and determine our own futures, the impact of dominance far exceeds the impact of manipulation and should be central to our analysis and any remedies we seek (Doctorow, 2020).

Or again:

Data has a complex relationship with domination. Being able to spy on your customers can alert you to their preferences for your rivals and allow you to head off your rivals at the pass. More importantly, if you can dominate the information space while also gathering data, then you make other deceptive tactics stronger because it’s harder to break out of the web of deceit you’re spinning. Domination — that is, ultimately becoming a monopoly — and not the data itself is the supercharger that makes every tactic worth pursuing because monopolistic domination deprives your target of an escape route (Doctorow, 2020, p.10).

From this point of view the very real dangers and dysfunctions that Facebook, for example, imposes on users have a simple solution: break the company up into smaller elements and divest it of those it has monopolistically acquired. Of great interest in the present context, however, is that while Facebook’s surveillance regime is ‘without parallel in the Western world’ and constitutes a ‘very efficient tool for locating people with hard-to-find traits,’ it cannot allow normal discussions to run unmolested. This is because the latter cannot deliver sufficient ads (or hits on ads) in the high-intensity mode demanded by the business model. The company therefore chose to boost what it calls ‘engagement’ by injecting streams of inflammatory material in order to create ‘artificial outrage.’ The fact that these can be dangerous and costly in the real world amply demonstrates the perversity of the model and completely undermines any pretence that Facebook might contribute to social well-being. Thus, the writer is less concerned about the data capture per se than he is about the way the growth of monopolies forces people to consume the kind of material that makes them miserable! In this account the ‘big four’ (Facebook, Google, Amazon and Apple) all rely on such positions in order to dominate their respective market segments. In summary:

  • Google’s dominance isn’t a matter of pure merit – it’s derived from leveraged tactics that would have been illegal under ‘classical’ (pre-Reagan) anti-trust regulations.
  • Similarly, Amazon’s self-serving editorial choices determine what people buy on that platform. Consumers’ rights are overwhelmed because the company’s wealth and power enable it to simply buy up any significant rivals or would-be competitors.
  • On the other hand, Apple is the only retailer permitted to sell products via its own platforms. It alone controls what products are allowed into its ‘walled garden’ (the app store). It monitors its customers and uses its dominance to exploit other software companies as ‘free-market researchers’ (Doctorow, 2020, p.16).

The fact that these monopolistic conditions have remained for well over a decade with little or no regulation once again reveals the inability of successive US governments to understand or respond to what has been happening in their midst. As Doctorow (2020) notes, ‘only the most extreme ideologues think that markets can self-regulate without state oversight.’ He suggests three reasons why these companies continue to collect and hoard data:

  1. They’re locked into a ‘limbic-system arms race’ with our capacity to reinforce the attentional defences that resist new persuasion techniques. They’re also locked in an arms race with their competitors to find new ways to target people for sales pitches.
  2. They believe the surveillance capitalism story. Data is cheap to aggregate and store, and both proponents and opponents of surveillance capitalism have assured managers and product designers that if you collect enough data, you will be able to perform sorcerous acts of mind control, thus supercharging your sales.
  3. The penalties for leaking data are negligible (Doctorow, 2020, p.17).

This is where things can appear confusing because, as Snowden’s account suggested, state surveillance that had earlier been focused outward on the wider world was re-purposed to focus on the American people. In the process public/private distinctions became blurred. Similarly, big tech regularly ‘rotates its key employees in and out of government service,’ meaning that one or two years at Google could easily be followed by a similar period at the Department of Defense (DoD) or the White House. This ‘circulation of talent’ leads to what is known as ‘regulatory capture.’ It indicates a diffuse but powerful sense of mutual understanding that emerges between organisations that previously had clear and distinct boundaries and quite different purposes. One of the consequences of such capture is that liability for questionable security practices can be shifted on to the customers of big tech and thence to the wider society. The question ‘who is responsible?’ then becomes more difficult to answer.

Doctorow (2020, pp. 21-22) asserts that ‘big tech is able to practice surveillance not just because it is tech but because it is big,’ and that it ‘lies all the time, including in their sales literature.’ It got this way not because it was tech but because the industry arose ‘at the very moment that anti-trust was being dismantled’ (Doctorow, 2020). The role that Robert Bork played in this process has been told by Taplin and others (Taplin, 2017). In essence, it meant that some 40 years ago Bork ensured that anti-trust enforcement focused less on limiting corporate size and power than on attempting to restrain the costs of products to consumers. This judgement, and the legislative loophole in section 230 of the Communications Decency Act of 1996 (which ensured that media companies were protected from the consequences of any material that might appear on their sites), along with the lack of effective Congressional oversight, are essentially what allowed these companies to grow beyond any reasonable limit. The key clause in the legislation reads ‘No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider’ (Harcher, 2020). The fact, as Cook noted, that ‘capital wants to make more capital’ supplied the motive and the rationale. And as Zuckerberg once pronounced, this also enabled them to ‘move fast and break things.’

Doctorow (2020) differs most clearly from other commentators in his refusal to see surveillance capitalism as anything other than plain, old-fashioned capitalism. Thus, in his view, it does not need to be ‘cured.’ Rather, what needs to be beefed up and applied more widely are ‘trust-busting’ and bans on monopolistic mergers. For him, big tech is not as powerful as it would like others to believe and, although it has largely escaped thus far, it cannot actually overturn the rules to protect itself from the resurgence and renewal of anti-trust measures. For him the issue is: are we up to it? It is clear that the ‘we’ he has in mind is considerably wider than government agencies and the technically adept. For Doctorow (2020) the ‘fake news’ generated by monopolistic systems that have shredded what was earlier regarded as shared reality is not merely an irritant but ‘an epistemological crisis.’ A widespread breakdown of shared meanings, and the radical uncertainty it creates, suggest the ‘terrifying prospect’ of a widespread loss of control and capability. Yet one of the distinctive points of this account is that at the heart of any technologically advanced society is a need for integration. This is what Doctorow (2020) calls ‘the hard problem’ of our species: if we cannot coordinate different activities across multiple domains, such a civilisation cannot but fail.

While for Zuboff (2019) the high-tech path to the future is what she calls a ‘bet-the-farm’ commitment or choice, here it is portrayed as the only real option, albeit one framed through two different strategies. Ultimately, Doctorow (2020, p. 33) believes, ‘we can try to fix Big Tech by making it responsible for bad acts by its users, or we can try to fix the internet by cutting Big Tech down to size. But we can’t do both.’ On this view the preferred option is for a broad-based coalition spanning government and civil society to break up the monopolies, reform big tech and drive ‘up and out’ of the present dilemma.