An article by Karen Hao from April 15th described a (long-overdue) trend in the capitalist centres to start taking seriously the idea of regulating artificial intelligence. A bill before Congress, the Algorithmic Accountability Act, “would require big companies to audit their machine learning systems for bias and discrimination and take corrective action in a timely manner if such issues were identified”. Needless to say, this shift owes a lot to the work on algorithmic bias and the technological reification of social structures in algorithms by Safiya Noble, Virginia Eubanks, Zeynep Tufekci and others. This work has exposed the very real problems with the technologies of the current AI Summer.

In another recent article, Bianca Wylie, whose sustained criticism of Toronto’s Sidewalk Labs project has made the social and political consequences of the project impossible to ignore, points out a very important facet of the “discursive struggle” around AI/technology in the current conjuncture. When Sidewalk Labs states that “sensors have become a part of our daily lives,” Wylie identifies the rhetorical ploy being used:

Passive language, without agency. As though the sensors and cameras just sprouted into the world, without creators or purchasers, without contracts or decisions. This subtle normalization is the work to guarantee that a quantified city becomes the social norm. And of course it will come with a side of greenwashing in the rhetoric, to make sure that you might hesitate or feel a sense of social pressure to get with the program - you’re not anti-sustainability, are you? The quantified ‘resilient’ city is key to opening up new markets.

The key here, from my perspective, is markets. In response to the Karen Hao article, I wrote:

Regulating AI/technology under capitalism is meaningless, because technology plays an intrinsic role in capitalist development. Technological change is not some independent process, or subject merely to the whims of Silicon Valley.

In other words, as Wylie points out, machine learning and Big Data did not just “sprout into the world”, but are the products of the decisions made by real people in a very real mode of production. However, the creators and purchasers also did not appear in the world fully-formed, independent, and autonomous. Their choices and decisions are also structured and constrained by the logic of capital. To paraphrase Voltaire, capitalists and workers did not exist, so it was necessary for capitalism to invent them. (NB: I’m leaning pretty heavily on the structuralist account of capitalism, which plays down the agency of workers and capitalists; the truth is, as always, more dialectical).

So what do I mean by technology’s intrinsic role in capitalist development? For Marx and Engels, technology is a fundamental element in humanity’s relationship to the world. Tools connect the human being and the objective world, and tools/technology evolved alongside human beings through the work we accomplished by means of the tools themselves. In The Part Played by Labour in the Transition from Ape to Man, Engels argued that the further we moved from the direct employment of tools (including our bare hands), the more mystified our relationship to technology became:

All merit for the swift advance of civilisation was ascribed to the mind, to the development and activity of the brain. Men became accustomed to explain their actions as arising out of thought instead of their needs… and so in the course of time there emerged that idealistic world outlook which, especially since the fall of the world of antiquity, has dominated men’s minds.

This idealism expresses itself in today’s world in many different ways: in the idea that algorithms are “just math” (pure, ideal, and therefore without bias); in the idea that passing a law automatically affects the material conditions of production; and in Apple’s injunction to just “think different” in order to achieve the kind of lifestyle we have been marketed and sold since the Second World War.

But the most insidious kind of idealism is that which sees technology as neutral tools whose use has been perverted by amoral Silicon Valley entrepreneurs, corrupt politicians, and greedy CEOs. This is the view of technology (and technological progress) shared not only by the capitalist world as a whole, but also by the Marxists of the Second International and the Bolsheviks. Until Raniero Panzieri’s re-evaluation of technological progress in the early 1960s, technology was believed to be “class neutral”, and the same technological restructuring of work was held to be bad for workers in the US and good for workers in the USSR.

For Marx, on the other hand, technology under capitalism is capitalist technology - that is, when capitalism is the dominant mode of production, all technology serves the purpose of capitalist exploitation and expansion. Technology, in Marx’s view, is a means of lowering the cost of labour and (in the short term at least) of increasing profits. Technological development is a site of the struggle between labour and capital, but it is also a site of competition between capitalists. Every technological advance lowers the cost of labour for the first capitalist that employs it, but as that technological change becomes widespread, every capitalist shares in the lowered labour costs, and the competitive advantage of the first capitalist is lost.

AI, machine learning, and Big Data are simply the latest technological developments in this ongoing struggle between labour and capital, and between capitalists themselves. The automation of factory work that began in the 1950s and 1960s dispensed with the need for human workers on assembly lines, replacing them with robots. The current AI summer is simply seeking to automate facets of labour (intellectual and emotional labour) which were previously beyond the reach of automation. In addition, surveillance and platform capitalism - neither of which is radically new - use surveillance and platform technologies to harvest micromoments of labour, represented as data, both for direct exploitation and for more subjective conditioning and structuration of workers (as Phoebe Moore’s work on The Quantified Self and Maurizio Lazzarato’s on “machinic subjection” can attest).

In this context, the attempt to regulate AI technologies without dismantling the capitalist system can be seen as today’s equivalent of safety regulations around assembly lines. While such regulations may save some lives, they also play their part in the disciplining of labour (because they are capitalist regulations), and they allow for their own quantified logic of injury and death: how many injured or dead workers are acceptable compared to the cost of compliance? Technology and regulation under capitalism - capitalist regulation and technology - are constrained by this kind of profit/loss logic precisely because capitalist technology is a tool for the increase of profit. Indeed, this is the logic surrounding the gradual improvement of facial recognition software. The racist biases built into facial recognition are, it is argued, temporary, and a small price to pay for the eventual benefits of facial recognition. On this logic, the people whose lives are ruined by false positives and other algorithmic injustices must be sacrificed in order to benefit from future profit. This has always been the logic of capitalist development: the sacrifice of the poor, oppressed, and marginalized in order to wring more and more profit from the system over time. This is the logic, after all, that has led us to the brink of unavoidable climate disaster.

The idea that we can wish away (with laws formulated by the capitalist state) the worst abuses of capitalism is a pipe dream. Despite the imputation that anti-capitalists are literally insane, we can’t pick and choose individual parts of the mode of production for our analysis, critique, or material attack. Capitalism as a whole has to go; only then can we regain some relationship with our tools and with the natural world not driven by the inhuman and dehumanizing logic of profit and the cold calculus of the expendability of human lives. The climate catastrophe that is nearly upon us makes this problem (and project) more urgent than ever.


Sam Popowich

Discovery and Web Services Librarian, University of Alberta