Transcending reductionism, re-purposing the Internet

Interior drivers, scales of implementation

Virtually everyone outside the Silicon Valley bubble who has paused to consider the complex tangle of issues thrown up by the IT revolution in general, and the Internet in particular, tends at some point to reach a central conclusion – that the issues before us are not primarily technical. Technology provides the physical substrate and software an artificial ‘nervous system’ that reaches ever more deeply into human lives. But merely following technical capabilities as far as they can be driven appears to place humanity on a fast train to Dystopia and perhaps the end of human civilisation itself. Yuval Harari unintentionally provided a rehearsal, or test case, for that thesis in his book Homo Deus (Harari, 2015). There the main driver of change was considered to be the ingenuity of large groups of people, and their most significant achievements were said to be those associated with high technology. Yet by relentlessly following this technologically determinist path, what the author refers to as ‘unaugmented’ humans are expected to fall by the wayside and become the ‘road kill’ of history. It is a severe and uncompromising conclusion, but unavoidable given the starting assumptions. If, on the other hand, the uses of high-tech are shaped and conditioned by progressive social drivers – such as life-affirming values and expanded worldviews – the outcomes would certainly be very different. So in playing a reductionist game that degrades the very forces that moderate raw technical power – language, values, worldviews and similar culturally derived sources of meaning and capability – Harari actually demonstrates how vitally necessary they really are (Slaughter, 2017). Nor is this the only source that confirms this vital insight. As mentioned above, the idea that repressing or turning away from human qualities and social phenomena is exceptionally damaging receives powerful support from Masha Gessen in her book The Future is History (Gessen, 2017).

There are clearly many aspects to this story and a growing number of informed observers of this rapidly changing scene. Greenfield, for example, is by no means alone in viewing the IT revolution as a full-on invasion. He is therefore alert to the implications of what he calls ‘the colonisation of everyday life by information processing.’ As with other critical approaches, he is interested not merely in raw outcomes but also in the motives of promoters, the ideas behind the hardware and the social interests involved. Working at this more fine-grained level and acknowledging such interests helps to re-frame core assumptions within corporate and business environments. In 2015 John Naughton reported on work by Doc Searls on what Searls calls the ‘intention economy.’ Of direct relevance to the issue of there being human interests beyond the purely technical is his view that ‘many market problems … can only be solved from the customer side: by making the customer a fully-empowered actor in the market place, rather than one whose power in many cases is dependent upon exclusive relationships with vendors, by coerced agreement provided entirely by those vendors’ (Naughton, 2015).

Having considered the IoT at three scales of implementation, Greenfield probes more deeply into what they mean through actual case studies (Greenfield, 2017). As we have seen repeatedly, the marketing of high-tech devices commonly asserts assumed benefits to users while obscuring underlying corporate benefits. At the individual human scale, biometric devices such as the Fitbit and the Apple Watch monitor a variety of health and fitness indices. Yet these personal data are valued, analysed and used as inputs to advertising and sales. Insurance companies have vested interests in these skewed transactions, offering, for example, reductions in premiums in exchange for such personal data. Truck and public service drivers are especially vulnerable to the imposition of more heavy-handed versions. Unless this trend is halted, the intensive collection of personal data may eventually be required of all drivers and other persons responsible for vehicles and related machinery. The logical end of this insidious process is akin to the imposition of total surveillance.

That these observations are not ‘merely’ theoretical or personal but extend to other scales is confirmed by the emergence of ‘Google Urbanism,’ an ambitious plan by Google’s parent company Alphabet to reconfigure cities in its own image. Its pilot project on the Toronto waterfront sought to ‘reimagine urban life in five dimensions – housing, energy, mobility, social services and shared public spaces.’ What caused most concern, however, was a proposed ‘data-harvesting, wi-fi beaming digital layer’ intended to provide a ‘single unified source of information about what is going on.’ This layer was to gather ‘an astonishing level of detail’ such that ‘each passing footstep and bicycle tire could be accounted for and managed.’ Issues of privacy and the blurring of public and private interests were set aside, confirming the suspicion that ‘the role of technology in urban life is obvious: It is a money-maker’ (Bliss, 2018). Fortunately, opposition to this project grew to the point where it was eventually cancelled. For Morozov, ever on the alert for new forms of Internet solutionism, heavy-handed developments of this kind signal ‘the end of politics.’ He comments that:

Even neoliberal luminaries like Friedrich Hayek allowed for some non-market forms of social organisation in the urban domain. They saw planning as a necessity imposed by the physical limitations of urban spaces: there was no other cheap way of operating infrastructure, building streets, avoiding congestion. For Alphabet, these constraints are no more: continuous data flows can replace government rules with market signals. (Morozov, 2017c)

Seen in this light the emergence of high-end ‘smart cities’ represents a further incursion of technical expertise into the lifeworlds of people, the ethos of cultures and the character of the settlements where much of humanity lives. More recently Sadowski has suggested that such environments may best be referred to as ‘captured cities’ (Sadowski, 2020). Such conclusions clearly challenge the legitimacy of this entire process. Greenfield’s own recommendations include the following.

  • The use of algorithms to guide the distribution of public resources should be regarded as a political act.
  • Claims of perfect competence in relation to ‘smart city’ rhetoric should be rejected.
  • Any approach to the whole IT domain should include a healthy dose of skepticism.
  • Commercial attempts to gather ever more data about people should be resisted (Greenfield, 2017).

Taming the ubiquitous algorithm

Standing at the core of a vast number of IT processes is the ubiquitous algorithm. Its relative obscurity and foundation in mathematics mean that for many people it remains a mystery. But this need not continue. Cathy O’Neil was originally employed as a ‘quant’ in the heart of the New York financial district prior to the Global Financial Crisis (GFC). She saw first-hand how the algorithms that exploit ‘big data’ can be used productively or as instruments of power and exploitation. In her view most people are unaware of how these new capabilities have proliferated. Consequently, the reliance of bureaucratic systems on them is seldom appreciated. In the US she notes that ‘getting into college, getting a job, being assessed as a worker, getting a credit card or insurance, voting, and even policing are in many cases done algorithmically.’ She adds that:

The technology introduced into these systematic decisions is largely opaque, even to their creators, and has so far largely escaped meaningful regulation, even when it fails. That makes the question of which of these algorithms are working on our behalf even more important and urgent (O’Neil, 2016).

She uses a ‘four-layer hierarchy’ in relation to what she calls ‘bad algorithms.’ At the first level are those with ‘unintentional problems that reflect cultural biases.’ Next are those that ‘go bad through neglect.’ Third are those she regards as ‘nasty but legal’, and finally there are ‘intentionally nefarious and sometimes outright illegal algorithms.’ In relation to the latter she adds that:

There are hundreds of private companies…that offer mass surveillance tools. They are marketed as a way of locating terrorists or criminals, but can be used to target and root out citizen activists. And because they collect massive amounts of data, predictive algorithms and scoring systems are used to filter out the signal from the noise (O’Neil, 2016).

The scam run by Volkswagen to conceal the results of emissions tests is, in her view, perhaps the most well-known example; but the sale of surveillance systems to repressive regimes looms larger as a serious future threat. In her 2016 book Weapons of Math Destruction she looks into numerous contexts only to find the same dynamic at work. In one case a school district attempted to identify its weakest teachers by designing a set of algorithmic tests of ‘teacher effectiveness.’ Many of the criteria, however, such as how well students were learning year to year, could not be measured directly. The use of unverifiable proxies produced wildly varying results – but teachers were sacked anyway. From this and other cases O’Neil concluded that many algorithms are poorly designed, and that proxies used in place of real data invisibly distort the results. Another common trap occurs where hidden feedback loops render data progressively more meaningless each time they are run through a system. What is also significant about this account is that the underlying issues are less about mathematics, statistics or data than they are about transparency (or its lack), power and control. Currently in the US, for example, the well-off can usually afford human representation whereas the poor are left with poorly performing data and a bureaucracy they can neither influence nor communicate with. In summary, used well, algorithms can be tools that usefully extract value from big data. Used poorly, they can certainly ramp up the efficiency of operations, but at the cost of unreliable or unjust results and increasing inequality.
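The proxy problem can be made concrete with a minimal sketch – entirely hypothetical numbers, not O’Neil’s actual data or the district’s actual model – of the kind of ‘teacher effectiveness’ scoring described above. A teacher’s true, unobservable effectiveness is held constant, but the observed proxy (year-to-year change in student test scores) is dominated by factors outside the teacher’s control, so the derived rating swings wildly from year to year:

```python
# Hypothetical sketch of the proxy problem: a constant underlying quality
# is scored via a noisy stand-in measure, producing erratic verdicts.
import random

random.seed(42)

TRUE_EFFECTIVENESS = 0.6   # the unobservable quantity the district wants (0..1)
YEARS = 8
COHORT_NOISE = 0.5         # class composition, home life, test variance, etc.

for year in range(1, YEARS + 1):
    # Observed proxy = true signal plus large noise from everything else.
    proxy = TRUE_EFFECTIVENESS + random.uniform(-COHORT_NOISE, COHORT_NOISE)
    # The district treats the noisy proxy as if it were the real measure.
    rating = max(0.0, min(1.0, proxy))
    verdict = "sack" if rating < 0.3 else "keep"
    print(f"year {year}: rating {rating:.2f} -> {verdict}")
```

Run over a handful of years, the same teacher is rated anywhere from near-failing to outstanding, and is occasionally ‘sacked’, purely on noise. A similar toy simulation of a hidden feedback loop – say, patrol allocations driven by past arrest data that in turn generate more arrests in the patrolled areas – would show the scores drifting ever further from the reality they claim to measure.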

O’Neil (2016) suggests a number of solutions, none of which are short term or particularly easy to implement without wider social support. ‘First and foremost’, she argues, ‘we need to start keeping track.’ For example, ‘each criminal algorithm we discover should be seen as a test case. Do the rule-breakers get into trouble? How much? Are the rules enforced, and what is the penalty?’ She continues:

We can soon expect a fully-fledged army of algorithms that skirt laws, that are sophisticated and silent, and that seek to get around rules and regulations. They will learn from how others were caught and do it better the next time. It will get progressively more difficult to catch them cheating. Our tactics have to get better over time too (O’Neil, 2016).

Finally she suggests that:

We need to demand more access and ongoing monitoring, especially once we catch them in illegal acts. For that matter, entire industries, such as algorithms for insurance and hiring, should be subject to these monitors, not just individual culprits. It’s time to gird ourselves for a fight. It will eventually be a technological arms race, but it starts, now, as a political fight. We need to demand evidence that algorithms with the potential to harm us be shown to be acting fairly, legally, and consistently. When we find problems, we need to enforce our laws with sufficiently hefty fines that companies don’t find it profitable to cheat in the first place. This is the time to start demanding that the machines work for us, and not the other way around (O’Neil, 2016).

O’Neil’s program for re-purposing algorithms is certainly ambitious but, given the plethora of unresolved issues in this area, it seems entirely appropriate. In her book she also calls for a ‘model builder’s pledge’ (similar to the Hippocratic Oath taken by medical practitioners), a full-scale regulatory system, algorithmic audits and greater investment in research. In this light she speaks approvingly of Princeton’s Web Transparency and Accountability Project and of European approaches (noted below) that are starting to dictate a new raft of terms and conditions that the Internet giants will have to recognise. Ultimately, she returns to the same ground that others have indicated in arguing that such choices are fundamentally moral, hence also ethical and social.

Defensive measures, key questions

Many options are available to those who are willing to invest the time and effort in responding to these issues and concerns. In mid-2017, for example, Australian reporter Rose Donahue interviewed Helen Nissenbaum in New York about the ‘obfuscation movement.’ This was described as a ‘David and Goliath’ strategy that relies on the fact that David has more freedom to act than his opponent (Donahue, 2017). Donahue noted that Nissenbaum had developed tools specifically designed to disrupt Google’s tracking and ad delivery systems. One, called ‘TrackMeNot’, allows users to browse undisturbed under the cover of randomly generated searches. Another, dubbed ‘AdNauseam’, collects data from every site visited by the user and stores them in a vault. This vastly overstates the user’s activity and therefore serves Google false information. While such tools may at present appeal only to a minority, there are undoubtedly many more to come. A high-tech defensive war against the overreach of Internet oligarchs is increasingly likely, and personal agency will be enhanced as these tools become easier to use and more people adopt them.
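The core idea behind decoy-query tools such as TrackMeNot can be illustrated with a minimal sketch: genuine searches are hidden inside a stream of randomly generated ones, degrading the value of any profile built from the search log. The word list, timing and query construction below are hypothetical – the real tool is a browser extension with far more sophisticated behaviour – but the obfuscation principle is the same:

```python
# Hypothetical sketch of search obfuscation: flood the log with plausible
# decoy queries so that real interests are hard to distinguish from noise.
import random
import time

SEED_TERMS = [
    "weather radar", "banana bread recipe", "used bicycles",
    "league results", "train timetable", "garden pests",
    "movie times", "second hand books", "local news",
]

def decoy_query() -> str:
    """Compose a plausible-looking search from two random seed terms."""
    return " ".join(random.sample(SEED_TERMS, k=2))

def run_decoys(n: int = 5) -> None:
    """Emit n decoy searches at irregular intervals (printed, not sent)."""
    for _ in range(n):
        print(f"decoy search: {decoy_query()!r}")
        time.sleep(random.uniform(0.1, 0.5))  # irregular timing resists filtering

if __name__ == "__main__":
    run_decoys()
```

The design choice matters: rather than hiding data (encryption) or refusing to provide it (blocking), obfuscation devalues the data by drowning it in plausible noise – a strategy available to individual users even when they cannot opt out of being tracked.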

In summary, the Internet has evolved – or ‘de-evolved’ – into its present condition over an extended period. It will therefore not easily be prised from the grasp of giant corporations. Repurposing the Internet will take time, concerted social and political action, and extensive technical backup. Charles Arthur credits online rights activist Aral Balkan with the following insight: ‘If you see technology as an extension of the self, then what is at stake is the integrity of our selves.’ He continues: ‘Without that – without individual sovereignty – we’re looking at a new slavery’ (Arthur, 2017). So key issues include the following.

  • What kind of society do we want to live in?
  • What visions of human life, society and culture do we believe in?
  • What kinds of futures arise from our collective decisions?

These are exactly the kinds of questions that have driven futures/foresight thinking and practice for several decades. As the wider implications of the IT revolution cause more and more people to focus upon them, new players will need to become involved in the search for solutions. Governments, city authorities and civic administrators at all levels will need to be open to new forms of social engagement. They, in turn, will also need greater support from an informed public.

 
