Most people will have heard of various forthcoming ‘next big things’ such as ‘augmented reality,’ ‘self-driving cars’ and the ‘Internet of Things’ (IoT). Yet the chances are that they won’t have heard about them from personal or local sources, since claims about their alleged benefits do not originate there. Rather, such claims issue from campaigns mounted elsewhere – that is, by a handful of the world’s most powerful organisations and their associates. As things stand, entire populations are regularly subjected to powerful marketing operations intended solely to prepare them for so-called miraculous new services that no one has ever wanted or needed. As Morozov and others have suggested, ‘the Internet’ is a domain where numerous ‘solutions’ are offered for problems that do not currently exist – a phenomenon he calls ‘solutionism’ (Morozov, 2013). Hence it is difficult to find credible evidence of any real ‘demand’ for an IoT. Rather, it is all about power and accumulation on a vast scale. Powerful organisations insist that these latest innovations are inevitable. They claim that ‘the genie is already out of the bottle’ without offering any plausible account of what this ‘genie’ actually is or what kind of ‘bottle’ it may have escaped from. Subtlety and depth of meaning are uncommon in such claims. Superficial, overly positive views of high-tech innovation not only reflect pretentious assumptions, they also speak volumes about the overriding self-serving priorities of the organisations involved. Yet there should be no doubt that the innovation ‘push’ model is certainly disruptive and frequently dangerous. The reasons are straightforward – it constantly injects random elements into complex social systems that are then forced to adapt, often at considerable cost to people, professions and organisations at large. Reflecting on the 2016 US election one observer commented that:
We have fetishised “disruption”. Governments have stood by and watched it take down all industries in its path – the market must do what the market must do. Only now, the wave is breaking on its shore. Because what the last week of this presidential campaign has shown us is that technology has disrupted, is disrupting, is threatening to upturn the democratic process itself – the best, most stable, most equitable form of governance the world has yet come up with (Cadwalladr, 2016).
Despite this malaise an IoT per se should not necessarily be considered a category mistake. Well-designed devices installed in robust networks with exacting technical and safety standards would have a variety of uses. A host of specialised applications can be readily envisaged in education, surgery, disaster management and so on. The elderly, disabled and sick could gain greater autonomy and enhanced capability to run their own lives. Potentially positive uses like these may well be unlimited. But the dangers and costs of the IoT as envisaged by the power hungry appear to outweigh these benefits.
Standing behind the seductive merchandising are questions such as: who is promoting the IoT? Who stands to gain and who will lose? Can we be sure that it will protect privacy and enhance human wellbeing or will it further erode both? Answering the ‘who’ question is straightforward. The main drivers and beneficiaries of this particular ‘radically transformative innovation’ are the corporate tech giants from Silicon Valley, their like-minded associates and high-tech manufacturers ever on the lookout for new markets. They share an expansionist worldview that remains virtually unchallenged. In fact, following the 2016 US presidential election the Neo-Conservative ascendancy was reinvigorated. Central to its ideology is an assumption that equates ‘progress’ with single-minded technical innovation and development. Such a view, however, works against shared interests as it arguably rests on category errors and inadequate views of culture, human identity and human autonomy. Such limitations and costs were perhaps best expressed by Lewis Mumford who declared that: ‘I have taken life itself to be the primary phenomenon, and creativity, rather than the “conquest of nature,” as the ultimate criterion of man’s biological and cultural success’ (Mumford, 1971). He would, of course, be unemployable in Silicon Valley.
This is not because Trump supported Neo-Conservatism directly. His antagonism towards it is well known. Nor is it because Silicon Valley has entirely abandoned its leaning toward Libertarian values. In the former case it is rather that a rich minority that has thrived under Trump remains deeply immersed in the ‘Neo-Con’ world from which it continues to derive significant financial and other benefits. In the latter, Silicon Valley exhibits a profound disconnect from Democratic politics and the growing social costs of its own activities. The Neo-Cons therefore remain free to go about their business in the absence of any serious constraints.
Disruptions and consequences
In some ways the high-tech sector resembles a wayward child that challenges authority and ignores boundaries. So it is unsurprising that, as existing product categories become saturated, it seeks to invent new ones. But what’s good for Internet oligarchs and giant corporations may not be good for everyone else. Long before the IT revolution informed observers such as C.S. Lewis, Ivan Illich, E.F. Schumacher and many others understood that the ‘conquest of nature’ has a nasty habit of rebounding on people by compromising their humanity and riding roughshod over their rights. The entire high-tech sector has expanded rapidly over recent decades and, as a result, many of the organisations involved have become financially wealthy. But if they are not rich in humanity, perceptiveness, or the ability to sustain people and cultures, then this becomes an empty and regressive form of wealth.
The high-tech sector has exhibited a dangerous and apparently unquenchable obsession with ‘inventing the future backwards.’ That’s to say, it pours millions into speculative technical operations with little thought as to whether the outputs are necessary or helpful. There’s an abiding preoccupation with beating the immediate competition (including other high-tech behemoths) regardless of other considerations. Many will remember how the ‘information superhighway’ evoked images of openness, safety, productivity, and social benefits spread far and wide. A range of new tools certainly came into wide use. Information on virtually any topic became almost instantly available. Useful knowledge is another matter entirely, and wisdom may be the scarcest resource of all.
None of the above can be blamed on the Internet pioneers who built early versions of these systems and devices. Many appear to have believed that what they were doing was useful and constructive (Taplin, 2017). Unfortunately, once the new tools were released into wide use the aims, ambitions, values and so on of the pioneers counted for little. New, poorly understood, world-shaping forces came into play. Yet the power apparently granted to the latter does not, in fact, reside with innovators and disruptors. In a more considered view it resides in the domain of ‘the social’ from which countervailing power (for example in the form of sanctions or legitimation) may eventually arise.
The entrepreneurial marketplace and a new arms race
In the meantime, left to the vagaries of ‘the market,’ further waves of high-tech innovation will continue to generate highly polarised consequences. It doesn’t really matter what the high-tech gurus and the Internet oligarchs like to claim at any particular time in terms of the efficacy and usefulness of new products and services. Nor does it matter how glossy the marketing, how many times stimulating or provocative TED talks are viewed on YouTube or how enticing the promises appear. The very last entities to entrust with the future of humanity and its world are those who make ‘innovation’ their ultimate value and selling their core profession. High-tech promises based on pragmatic, utilitarian and commercial values overlook or omit so much that’s vital to people and societies that they have little or no chance of creating or sustaining open and egalitarian societies. (The ideology of ‘value-free technology’ is discussed below.)
Proponents of the IoT, however, seek to convince the public that it will be widely useful. Homes can be equipped to respond to every need, whim and requirement. Owners won’t need to be physically present since they can communicate remotely with their home server. What could possibly go wrong? The honest answer is: just about everything. Perhaps the greatest weakness and enduring flaw in the IoT is this: connecting devices together is one thing, but securing them is quite another. As one well-qualified observer put it, ‘IoT devices are coming in with security flaws which were out-of-date ten years ago’ (Palmer, 2016). Naughton (2016) acknowledges that ‘there’s a lot to be said for a properly networked world.’ He adds: ‘what we’ve got at the moment, however, is something very different – the disjointed incrementalism of an entrepreneurial marketplace.’ He continues:
There are thousands of insecure IoT products already out there. If our networked future is built on such dodgy foundations, current levels of chronic online insecurity will come to look like a golden age. The looming Dystopia can be avoided, but only by concerted action by governments, major companies and technical standards bodies (Naughton, 2016).
Even now private e-mail cannot be considered secure. One slip, one accidental click on a nasty link, can initiate a cascade of unwelcome consequences. There’s no reason to believe that anyone’s wired-up electronic cocoon will be any different. Consider this: a creepy Russian website was allowing users to watch more than 73,000 live streams from unsecured baby monitors (Mendelsohn, 2016). In the absence of careful and effective system-wide redesign what remains of our privacy may well disappear. First world societies are on the cusp of being caught up in the classic unwinnable dialectic of an offensive / defensive arms race. Currently, few understand this with sufficient clarity. It’s therefore likely that many will continue to sign up for this new, interconnected fantasy world with little or no idea of the dangers involved or the precautions required. Some will ask why they were not warned. The fact is that such warnings have been plentiful but have fallen on deaf ears.
No discussion of the Internet and its pervasive effects is complete without reference to a persistent – some would say extreme – view that technology is ‘value free.’ Technology is said to be ‘neutral’; what matters is how it is applied. This represents a distinct philosophical position supporting a specific worldview that eludes many, especially in the U.S. where such issues tend to remain occluded. So it’s not surprising that the limitations, not to say defects, of such a view are, on the whole, seen more clearly beyond the U.S. and far removed from Silicon Valley (Beck, 1999). For those who have absorbed the pre-conscious assumptions of U.S. culture the ‘IT revolution’ and its products are more likely to be described in glowingly positive terms (tinged, of course, with varying degrees of national self-interest). Yet such views are far from universal. Wherever healthy forms of scepticism thrive it’s obvious that information processing – once restricted to the world of machines – has already colonised the interior spaces of everyday life to an unwise extent (see Zuboff, 2015, below). Allowing it to penetrate ever further into human life is clearly fraught with adverse consequences.
Greenfield (2017) has considered how these processes operate at three scales: the human body, the home and public spaces. To take just one example, in his view the rise of ‘digital assistants’ … ‘fosters an approach to the world that is literally thoughtless, leaving users disinclined to sit out any prolonged frustration of desire, and ever less critical about the processes that result in gratification’ (Greenfield, 2017). They operate surreptitiously in the background according to the logic of ‘preemptive capture.’ The services they offer are designed to provide the companies concerned with ‘disproportionate benefits’ through the unregulated acquisition (theft) of personal data. Lying behind such operational factors, however, is ‘a clear philosophical position, even a worldview … that the world is in principle perfectly knowable, its contents enumerable and their relations capable of being meaningfully encoded in a technical system, without bias or distortion.’ When applied to cities Greenfield regards this as:
Effectively an argument that there is one and only one correct solution to each identified need; that this solution can be arrived at algorithmically, via the operations of a technical system furnished with the proper inputs; and that this solution is something that can be encoded in public policy, without distortion (Greenfield, 2017).
Hence ‘every aspect of this argument is questionable.’ Similarly, the view that ‘anything at all is perfectly knowable’ he regards as perverse since so many aspects of individual and collective life cannot be reduced to digital data. Differences of value, identity, purpose, meaning, interest and interpretation – the very attributes that make human life so rich and varied – are overlooked or eliminated. It follows that:
The bold claim of ‘perfect’ knowledge appears incompatible with the messy reality of all known information-processing systems, the human individuals and institutions that make use of them and, more broadly, with the world as we experience it. In fact, it is astonishing that any experienced engineer would ever be so unwary as to claim perfection on behalf of any computational system, no matter how powerful (Greenfield, 2017).
In summary, claims for ‘perfect confidence’ in the social applications of digital systems are ‘incommensurate with everything we know about how technical systems work.’ In other words the dominant ideology behind the rapid expansion of the IoT and related systems is clearly unfit for many of the purposes to which it is currently being applied. Or to put this differently ‘hard’ empiricism involves systemic reductionism that works directly against the wider human and social interests outlined above.
Fiction informs foresight
It’s no secret that high-tech nightmares exploring the dark side of ‘progress’ have been a staple of science fiction (SF) for well over a century. Far from being idly ‘negative’ they can be viewed as useful reminders to, for example, not proceed too far too fast with these powerful, seductively networked technologies. H.G. Wells attempted an early expression of this concern in his 1895 novel The Time Machine in the contrasts he drew between the effete and vulnerable Eloi and the brutal Morlocks (Wells, 1895). Then in 1909 E.M. Forster made an even more deliberate attempt to identify the likely effects of becoming over-dependent on technology in his novella The Machine Stops (Forster, 1909). More than a century later it still carries a forceful message that is both credible and explicit. Then, in the early 1970s, J.G. Ballard began his decades-long explorations of ennui and decay in the ruins of high-tech environments – the abandoned high-rise, the empty swimming pool and so on. One of the most evocative is a short story in his 1973 collection Vermilion Sands called ‘The Thousand Dreams of Stellavista’ (Ballard, 1973). It portrays a house constructed to exquisitely mirror the needs of its inhabitants in real time. Unfortunately it turns out that a previous occupant was insane. Over time the house begins to exhibit similar symptoms – which places later owners in peril of their lives. This is obviously not merely a metaphor. Daniel Suarez’s Daemon picks up the familiar theme of runaway technology and gives it a powerful new twist. He draws on a wealth of information technology (IT) know-how to explore how a dormant entity – or daemon – is activated, becoming a self-replicating, highly destructive virtual menace (Suarez, 2010).
Finally Dave Eggers’ prescient 2013 novel The Circle brings the story up to date in a highly relevant and insightful critique of the digital utopianism that arguably characterises the current thinking and practice of IT corporations (Eggers, 2013). It’s a salutary tale in which human ideals become subordinated to an ever more dominating technical infrastructure. This is, of course, only a small sample of a vast literature exploring almost every aspect of technological dystopias.
Futurists and foresight practitioners often recognise such sources as essential background. But they also earn their living by scanning the environment for more specific and empirically based ‘signals of change.’ The art and science of ‘environmental scanning’ is, however, arguably more advanced in theory than it is in broad, commonly accepted practice. In terms of social governance in a digital era, this is a serious oversight. Consequently the relative absence of high-quality foresight places entire societies at significantly greater risk than they need be. Here, for example, are a couple of ‘scanning hits’ on surveillance and the IoT.
“The Internet of Things (IoT) has particular security and privacy problems…it affects the physical world, sometimes controlling critical infrastructure, and sometimes gathering very private information about individuals” (Seitz, 2015).
The IoT “network is responsible for collecting, processing, and analyzing all the information that passes through the network to make decisions, in other words, millions of devices permanently connected to the Internet act and interact intelligently with each other to feed and benefit thousands of applications that are also connected to the network” (Alvarez, 2021).
A plausible trajectory
During these dangerous and uncertain times much is at stake – not least of which is how to manage a world severely out of balance. More competent, imaginative and far-sighted leadership would help, as would a growing society-wide resistance to the values, and indeed many of the products, of the high-tech giants. Strategies of this kind would contribute toward a thorough re-appraisal of various pathways toward viable futures (Floyd & Slaughter, 2012). Those who are fortunate enough to be living in still-affluent areas are being taken on a ride intended to distract them, to still their growing fears for the future, through the many diversions provided by new generations of technological devices. But the above suggests that it’s time to push back and seek answers to questions such as the following:
- Does it make sense to accept the current, deeply flawed, vision of the IoT that promises so much but ticks so few essential boxes, especially in relation to privacy and security?
- Are whole populations really willing to passively submit to a technical and economic order that grows more dangerous and Dystopian with each passing year?
- To what extent should time, resources and attention be focused on the kinds of long-term solutions that preserve human and social options? (Slaughter, 2015).
If things continue to proceed along the present trajectory the system is likely to misbehave, to be hacked or militarised, or to fail just when it needs to work faultlessly. In this eventuality domestic users may start backing out and rediscovering the virtues of earlier analogue solutions. Although simpler and less flexible, the latter could gain new appeal since they lack the ability to exact hidden costs and turn people’s lives upside down in unpredictable ways. Some might well opt wholesale for a simpler life (Kingsnorth, 2017). Early adopters of the IoT are, however, not restricted to householders. They include businesses, government agencies and public utilities. It is often forgotten that such organisations are structurally predisposed toward greater socio-political complexity – which also contributes to the growth imperative. Thus, according to Tainter, large-scale organisations are unlikely to pursue deliberate simplification strategies while at the same time becoming increasingly vulnerable to collapse (Tainter, 1988). Given the overall lack of effective social foresight, as well as the parlous state of government oversight in general, present modes of implementation may proceed unabated for some time. Security breaches on an unprecedented scale would then take place, disruptions to essential services would occur and privacy for many would all but vanish. The costs would be painful but they would also constitute a series of ‘social learning experiences’ par excellence. At that point serious efforts to raise standards and secure the IoT become unavoidable.