Compulsive innovation

‘You can’t stop progress.’ One of the themes that emerges from this enquiry is the need and opportunity for large-scale, democratically mediated social design and a commitment to long-term social innovation in the public interest. At first sight it may be difficult to see how the motivation for such efforts could arise, or from whom. But these are early days and motivation can emerge from a variety of sources. To begin with, in a context of radically ambiguous technical innovation, with its accompanying upheavals and disruptions, the widely held view that ‘you can’t stop progress’ clearly lacks credibility when used fatalistically, and should be set aside. Modifying this slightly to ‘you can’t stop technical innovation’ is a small step forward but doesn’t take us very far. Of far greater value is a more nuanced understanding of what terms such as ‘progress’ and ‘innovation’ actually mean, what values they spring from, whose interests are represented (or extinguished) and what longer-term impacts and consequences may plausibly occur.

Such issues are hardly part of common conversation, but if society is to regain any say in its own prospects they need to be brought into the open and debated much more widely. Similarly, the social, political, technical and environmental consequences of neo-liberal formulas of economic growth, along with ever-increasing inequality, are no longer in doubt across the globe. People are becoming ever more concerned about these issues and, moreover, the Earth system itself is responding to multiple human impacts with glacially slow but unstoppable momentum. Faulty notions of ‘progress’ in this context clearly need to be unpacked, as they are fraught with ambiguity and increasingly divorced from genuine human interests (Metcalf, 2017). Australia’s Gold Coast illustrates this dilemma rather well. The mode of development on display there is a living testament to a worldview characterised by profit-seeking, denial and short-termism. These are not characteristics that bode well for the future (Slaughter, 2016).

‘Progress’ is often seen as synonymous with technical innovation, but such notions do not withstand close scrutiny. Similarly, a continuing free-for-all dialectic of innovation and counter-innovation quickly becomes irrational in our currently divided world. In what may be an inexact but tellingly perverse reversal of Moore’s Law, the stakes grow ever more extreme with each new level of technical capability. Yet business leaders and decision makers seem largely unaware of this. We can see it in the current wave of IT company investments in powerful real-world applications such as automation and advanced robotics that look set to destroy most, if not all, semi-skilled jobs (Murphy, 2017). We see it in the irrationality of emerging autonomous weapon systems (Begley & Wroe, 2017). We also see it on the mid-term horizon in the systemic threats that plausibly arise from quantum computing (Majot & Yampolskiy, 2015). A more immediate example is the rise of GPS spoofing. The early development of GPS itself was undoubtedly useful, as it introduced precise, reliable navigation into countless transportation applications. Now features of its design are being quite deliberately exploited to subvert it. According to reports, anomalous results were first spotted by Pokémon Go players near sensitive sites in Moscow and then began appearing elsewhere. Alarms began to sound, for example, when the master of a ship in the Black Sea discovered that his reported position was over 30 kilometres from where it was supposed to be. Russia is thought to be one entity experimenting with the technology. But of equal or perhaps greater concern is that spoofing software can now be downloaded from the Internet and employed by anyone with the knowledge and will to do so (Hambling, 2017).
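The Black Sea incident also hints at how blunt the first line of defence can be: a spoofed fix tends to imply physically impossible movement. The sketch below (in Python) is purely illustrative – the function names and thresholds are invented, not drawn from any real receiver firmware – but it shows the kind of plausibility check a navigation system might apply before trusting a new position fix.

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def looks_spoofed(prev_fix, new_fix, elapsed_s, max_speed_kmh=60.0):
    """Flag a new GPS fix whose implied speed is physically implausible.

    prev_fix and new_fix are (lat, lon) tuples; elapsed_s is the time
    between fixes. max_speed_kmh is an invented threshold for a ship.
    """
    distance_km = haversine_km(*prev_fix, *new_fix)
    implied_speed_kmh = distance_km / (elapsed_s / 3600.0)
    return implied_speed_kmh > max_speed_kmh

# A fix that jumps roughly 30 km in one minute, as in the Black Sea case,
# implies a speed of about 1,800 km/h and is trivially flagged.
print(looks_spoofed((44.60, 37.97), (44.87, 38.00), elapsed_s=60))  # True
```

The deeper problem, of course, is that most systems downstream of a GPS receiver never ask such questions at all: the reported position is simply believed.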

A similar dialectic is apparent in countless other examples, sometimes even in advance. Actively scanning the environment for signals of change does, in theory, provide time to respond. Separate scanning hits may interact to reveal previously hidden possibilities. For example, public media announce that trials of driverless ‘autonomous vehicles’ (AVs) will occur along a public motorway. A UK Minister of Transport announces that AVs will be in use by 2021 (Topham, 2017). Such developments are now becoming technically feasible. Yet around the same time a radical group publishes details of how, with a little imagination, vehicle-derailing devices can be easily and cheaply constructed and set in place, leaving those responsible to disappear without trace (Thiessen, 2017). Little imagination is required to suggest that both high- and low-tech devices will be developed to intervene in and disrupt the smooth operation of AV technology wherever it is deployed. Once again we are reminded that new technologies are never ‘value free’; they always come with hidden weaknesses and costs, winners and ‘losers’. Those who put their faith in complex systems will eventually need to recognise that such systems are not infallible. Those with different values and what one might call ‘oppositional’ social interests will continue to take advantage of any weaknesses or blind spots (Bartlett, 2017). It follows that the ‘hidden’, non-material side of any technology is at least as significant as its physical form. It therefore requires much closer attention.

Artificial intelligence

Bill Gates and Stephen Hawking are among many who have warned of the dangers of artificial intelligence (AI) and the very real possibility that it may represent an existential threat to humanity. Fresh impetus to this debate was provided when Mark Zuckerberg and Elon Musk clashed over the issue. While Musk echoed previously expressed concerns, Zuckerberg would have none of it. For him such talk was ‘negative’ and ‘irresponsible’, and he was dead against any ‘call for a slowdown in progress’ with AI (Frier, 2017). So it fell to James Cameron, director of Terminator 2 and other movie blockbusters, to inject some reality into the proceedings by reminding everyone of the mammoth in the room: namely, that it is ‘market forces (that) have put us into runaway climate change, and the sixth mass extinction.’ He then added that ‘we don’t seem to have any great skill at not experimenting on ourselves in completely unprecedented ways’ (Maddox, 2017).

What is significant here is that it falls to a movie director to draw attention to the links between the products of an advanced techno-economic system and the growing likelihood of irrational outcomes. Such concerns are fundamental to the maintenance of public safety and wellbeing. Yet careful consideration of the social implications of technical change by public authorities has declined even as the need for it has increased. The race to create artificial intelligence is being pursued in many places, yet few of the key players appear willing to pull back, rigorously assess the risks or seek guidance from wider constituencies. Whether East or West, to passively ‘follow the technology wherever it leads’ is technological determinism writ large. It’s clearly an inadequate basis upon which to make decisions, let alone to gamble with the future of humanity.

We cannot assume that advanced AI will take over the world and either destroy humanity or render it redundant. Such outcomes are certainly possible, but there are genuine differences of opinion on these very questions (Caughill, 2017; Brooks, 2017). Of more immediate concern is that various agencies have been looking to AI for military and security ‘solutions’ for some years. Roboticised figures have been common in the entertainment industry for several decades, but wider appreciation of the risks involved in their use in real-world situations has thus far been minimal. Now, however, robot soldiers are being designed and tested. In 2017, for example, a group called the Campaign to Stop Killer Robots met at the United Nations in Geneva. The program included a film illustrating the potential of ‘assassin drones’ to sweep into a crowded area, identify targets using facial recognition, apply lethal force and vanish. Concerned scientists were attempting to ‘highlight the dangers of developing autonomous weapons that can find, track and fire on targets without human supervision’ (Sample, 2017). This may sound like science fiction (SF), but a leading AI scientist offered at least two reasons for believing that such devices are closer than one might think. In his view:

The technology illustrated in the film is simply an integration of existing capabilities. It is not science fiction. In fact, it is easier to achieve than self-driving cars, which require far higher standards of performance. (Also) because AI-powered machines are relatively cheap to manufacture, critics fear that autonomous weapons could be mass produced and fall into the hands of rogue nations or terrorists who could use them to suppress populations and wreak havoc. (Sample, 2017)

This is merely one branch of a rapidly evolving area of research and innovation, but the prospects are clearly terrifying. Another key question raised was: who, or what locus of authority, gave the green light to arms manufacturers, the disruptors of Silicon Valley, or indeed anyone else, to carry out these unprecedented experiments? Reinventing the world in a high-tech era – whether by innovation or disruption or both – is a non-trivial matter. To routinely and relentlessly create new dangers and hazards cannot but threaten the viability of humanity and social life. Yet somehow these entities continue to operate openly and with confidence, while lacking anything remotely like a social licence. Some consider that the development of AI could be the test case that decides the matter once and for all. Here is Taplin again on how what he regards as the benign legacy of Douglas Engelbart – an Internet pioneer – was turned toward darker ends. Engelbart, he writes, ‘saw the computer as primarily a tool to augment – not replace – human capability’. Yet ‘in our current era, by contrast, much of the financing flowing out of Silicon Valley is aimed at building machines that can replace humans’ (Taplin, 2017, p. 55). At this point, the ghost of Habermas might well be heard whispering something along the lines of ‘whatever happened to our communicative and emancipatory interests?’ To what extent does their absence from dominant technical discourses mean they are also missing from the products and outcomes those discourses produce?

The panopticon returns

The original panopticon, as envisaged by Jeremy Bentham in the 18th century, was a design for a prison in which all the inmates could be continuously monitored without their knowledge. Since they could never know whether they were being observed, they were constrained to act at all times as if they were. Hence they became adept at controlling their own behaviour (Wikipedia, 2017). In recent years newer versions have arisen that bring this oppressive model to mind. One is in China; the other is much more widely distributed. Chinese intentions to use IT for social control are revealed by Kai Strittmatter (2020), who states that ‘China’s new drive for repression is being underpinned by unprecedented advances in technology’, including:

  • facial and voice recognition
  • GPS tracking
  • supercomputer databases
  • intercepted cell phone conversations
  • monitoring of app use
  • millions of high-resolution security cameras

‘This digital totalitarianism has been made possible not only with the help of Chinese private tech companies, but the complicity of Western governments and corporations eager to gain access to China’s huge market’ (Strittmatter, 2020).

This may not seem like a particularly significant departure from what’s already occurring elsewhere. What is different is that China already has totalitarian tendencies, since it is ruled by an inflexible party machine that shows no interest in human rights or related democratic norms. While the US has long been hamstrung by deadlocked and ineffectual governments, it does have a constitution that protects certain core rights (such as free speech). Despite systematic predation (through copyright theft and monopoly power) by Internet oligarchs, the US also retains elements of a free press and it certainly has an independent judiciary. Furthermore, the European Union (EU) has already taken the first steps towards establishing a more credible regime of regulation. In so doing it has shown that it is willing and able to take on the Internet oligarchs and force them to change their behaviour. So in the West there are real prospects of reining in at least some of the excesses.

[Image: a city of walking people beneath a giant eye, with the words ‘We’re watching you, always’ appearing on the surrounding walls.]

But China is a very different story. According to reports, its ‘grid system’ of systematic surveillance has been operating in Beijing since 2007. Aspects of this oppressive new system were summarised as long ago as 2013 in a Human Rights Watch report. For example:

The new grid system divides the neighbourhoods and communities into smaller units, each with a team of at least five administrative and security staff. In some Chinese cities the new grid units are as small as five or ten households, each with a ‘grid captain’ and a delegated system of collective responsibility … Grid management is specifically intended to facilitate information-gathering by channelling disparate sources into a single, accessible and digitized system for use by officials. … In Tibet the Party Secretary told officials that ‘we must implement the urban grid management system. The key elements are focusing on … really implementing grid management in all cities and towns, putting a dragnet into place to maintain stability.’ … By 2012 the pilot system was in ‘full swing’ (as it had stored) nearly 10,000 items of basic data (and collected) hundreds of pieces of information about the conditions of the people (Human Rights Watch, 2013).

By 2015 this vast modern panopticon was ready to be rolled out to enable full-scale mass surveillance of China’s 1.4 billion citizens. According to the Metamorphosis Foundation (2020):

Any society that looks to stratify people based on how they look, based on their health, based on their data and things about them, is an incredibly authoritarian and sinister society. The societies throughout history that have tried to separate and stratify people based on data about them are (those) that we want to stay as far away as possible from… Collaboration of all stakeholders and demand for public debate are key to preventing situations in which the power to decide is taken from citizens and lies only in the hands of private companies or police forces…

Since then, further details of this oppressive and inescapable surveillance system have emerged. For example, a Wired article by Rachel Botsman revealed that two Chinese data giants – China Rapid Finance and Sesame Credit – had been commissioned by the government to create the required infrastructure using copious amounts of big data. Free access to this vast resource means that people can be monitored, rated and evaluated in depth throughout their normal lives. It turns out that ‘individuals on Sesame Credit are measured by a score ranging between 350 and 950 points.’ While the algorithms remain secret, the five factors employed are not: credit history, fulfilment capacity (or the ability to abide by contractual arrangements), personal characteristics, behaviour and preferences and, finally, interpersonal relationships. Those with high scores get consumer choices, easy credit and the chance to travel; those with low scores become the new underclass, with few meaningful choices at all. These are described as ‘private platforms acting essentially as spy agencies for the government.’ The author then adds that ‘the government is attempting to make obedience feel like gaming. It is a method of social control dressed up in some points-reward system. It’s gamified obedience’ (Botsman, 2017).
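Even with the algorithms kept secret, the published details are enough to show how reductive such a system must be. The following sketch is entirely hypothetical – the weights and inputs are invented for illustration – but it maps the five named factors onto the reported 350–950 scale and shows how a single penalised factor drags a person down the scale regardless of context.

```python
# Invented weights for the five publicly named factors; the real Sesame
# Credit algorithm is secret, so these numbers are illustrative only.
FACTORS = {
    "credit_history": 0.35,
    "fulfilment_capacity": 0.25,
    "personal_characteristics": 0.15,
    "behaviour_and_preferences": 0.15,
    "interpersonal_relationships": 0.10,
}

SCORE_MIN, SCORE_MAX = 350, 950  # the reported scoring range

def citizen_score(inputs):
    """Map factor inputs (each normalised to 0.0-1.0) onto the 350-950 scale."""
    weighted = sum(FACTORS[name] * inputs[name] for name in FACTORS)
    return round(SCORE_MIN + weighted * (SCORE_MAX - SCORE_MIN))

# A 'model citizen' scoring 0.9 on every factor...
print(citizen_score({f: 0.9 for f in FACTORS}))  # 890

# ...versus the same person marked down on behaviour and relationships,
# whatever the context of that behaviour may have been.
print(citizen_score({**{f: 0.9 for f in FACTORS},
                     "behaviour_and_preferences": 0.1,
                     "interpersonal_relationships": 0.2}))  # 776
```

Whatever the real weights may be, the structural point stands: a life is compressed into a single number, and everything the number cannot see simply does not count.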

What’s particularly curious here is the inevitability of non-trivial perverse outcomes, foremost among which are the immense cultural and human costs. Masha Gessen’s mesmerising and sometimes painful account of life in post-Soviet Russia demonstrates how hard it is to imagine that a cowed and passive population could retain sufficient awareness or creativity to contribute much of value to any culture, however instrumentally powerful it may appear (Gessen, 2017). In Botsman’s view, ‘where these systems really descend into nightmarish territory is that the trust algorithms used are unfairly reductive. They don’t take into account context.’ Yet without a keen sense of context, meaning becomes free-floating and elusive. Finally, there’s the inevitable emergence of ‘reputation black markets selling under-the-counter ways to boost trustworthiness’ (Botsman, 2017). Overall, this may turn out to be the world’s prime contemporary example of a ‘deformed future’ in the making.

A second and equally subversive example of recent years is the growing use of voice-activated ‘digital assistants’. Skilfully packaged as mere ‘assistants’ and ‘helpers’, they are ‘on’ all the time and thus set to respond to every request and whim. Some are equipped with female voices intended to exert a distinctly seductive effect, as shown in Spike Jonze’s 2013 film Her. What is less obvious (at least to the user) is that with each and every use individuals reveal ever more information about their not-so-private lives. Before long, comprehensive profiles are assembled, preferences noted and rich fields of data produced.
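How little machinery this requires is easily shown. The sketch below is deliberately simplified – the categories, keywords and profile structure are invented and bear no relation to any vendor’s actual system – yet even a handful of casual requests is enough to yield a marketable interest profile.

```python
from collections import Counter

# Invented keyword lists standing in for whatever classifiers a real
# assistant might run over each transcribed request.
CATEGORY_KEYWORDS = {
    "health": {"headache", "pharmacy", "sleep"},
    "finance": {"loan", "credit", "mortgage"},
    "travel": {"flight", "hotel", "taxi"},
}

class AssistantProfile:
    """Accumulates an interest profile from everyday voice requests."""

    def __init__(self):
        self.interests = Counter()

    def log_query(self, query):
        # Every request, however casual, updates the stored profile.
        words = set(query.lower().split())
        for category, keywords in CATEGORY_KEYWORDS.items():
            if words & keywords:
                self.interests[category] += 1

    def top_interest(self):
        return self.interests.most_common(1)

profile = AssistantProfile()
for q in ["book me a flight to Sydney",
          "find a hotel near the airport",
          "remind me to take my headache tablets"]:
    profile.log_query(q)

print(profile.top_interest())  # [('travel', 2)] -- ready for targeted offers
```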

As things stand, the operators of these systems own this treasure trove of information and suggest new products and services in the light of those already consumed. Sales go up, but consumers become ever more tightly bound to their own induced impulses and proclivities. Thus, instead of having open-ended learning experiences, responding to challenges and deepening their knowledge and understanding of their own authentic needs and human qualities, those who succumb can end up having ‘feelings’ for, and an ersatz bond to, a remote impersonal network that exists only to exploit them. A further consequence of becoming over-reliant on such ‘immersive’ technologies is that the real-world skills and capacities of human beings start to atrophy. Memory, timekeeping and spatial awareness are among the capabilities that wind down over time, leaving people ever more dependent and at risk (Aitkin, 2016). People are seduced into becoming a core component of the ‘product’ being sold. As the human interiors shrink and fall away, identity itself becomes elusive and problematic.

In summary, leaving the high-tech disruptors in any field to their own devices, so to speak, simply means that the human enterprise is subjected to random shocks and abuses that place it in ever-greater peril. For Naomi Klein this is part of a deliberate playbook designed to provide a minority with greater dominance and power (Klein, 2007). But it’s also the result of a certain kind of blindness that comes from over-valuing the technical and under-valuing the human and the social. If there’s a consistent theme here, it’s that power in the wrong hands creates more problems than it solves. High-tech innovation therefore needs to be separated from simple notions of ‘progress’. It is fundamentally a question of values and power – instrumental, cultural and symbolic. If humanity wants to avoid dystopian outcomes, human societies will need to find new ways to retain their power and control, parting only with what they judge necessary to governance structures that meet their real needs. In other words, it’s time to disrupt the disruptors. They’ve had their moment in the sun and the clouds are gathering. It’s time for them to stand aside so that a different world can emerge.
