Notes for Counterpower Group, Session 2.

Algorithm and Order


Morozov is a trenchant critic of Silicon Valley— not of its technologies per se, but of the blinkered and potentially disastrous cyber-utopianism that has come with it. He doesn’t decry Silicon Valley as evil. Instead, he warns us about the evils that await if we blindly welcome its hegemony. In To Save Everything, Click Here, he diagnoses two main failings of its worldview: “Internet-Centrism” and “Solutionism,” both his terms. Internet-Centrism is a kind of epochalism, a belief that the Internet is so unique that it’s incommensurable with history— so totally new that we have nothing to compare it with (except the printing press, apparently). For true devotees, the Internet is a special kind of tool geared for social goods and against social bads, predisposed toward things like democracy and enlightenment and against things like domination and ignorance. If it’s correctly harnessed, they say, humanity could rosily dispense with strife, scarcity and alienation altogether. Right from the start, Morozov pokes fun at the notion that there is this one unified thing called “the Internet”— a single technology with an essence rather than an ensemble of diverse and often contradictory networked technologies— much less a technology with an inherently good essence, one that can want or urge things like empathy or equality. And just as the Internet has no good essence, it equally harbors nothing essentially bad or pernicious. Morozov is not a technophobe and has been vocal about the good that could be accomplished with data, sensors, networks, and devices, given the right political understandings. Unfortunately for humanity, we often have to relearn our lessons about the “ambivalence” of technologies, or more generally the ambivalence of power and empowerment: no technology is intrinsically good or bad or anything; everything rests on the uses, circumstances, and social configurations of power.

Solutionism— somewhat related to this— is the misguided belief that most of our problems can be solved merely through engineering, technics and the hunt for the holy grail of the optimum. This hasty “Will to Improve,” this Northern Californian desire to “make the world a better place,” deserves our skepticism on several registers. “Alas, all too often, this never-ending quest to ameliorate— or what the Canadian anthropologist Tania Murray Li, writing in a very different context, has called ‘the will to improve’— is short-sighted and only perfunctorily interested in the activity for which improvement is sought. Recasting all complex social situations either as neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized— if only the right algorithms are in place!— this quest is likely to have unexpected consequences that could eventually cause more damage than the problems that they seek to address” (5). Problems that ought to be tackled through political deliberation, personal introspection, interpersonal redress, or poetic invention get shoehorned into an algorithmic logic. Worse yet, Morozov notes, these older approaches lose their popularity with leaders and the public alike precisely because they are (necessarily) so cumbersome, messy, and often intractable, and we become further seduced by the ease of algorithms. We try to solve crime through predictive policing (à la Minority Report) rather than addressing the structural conditions that create crimes and privation in the first place. We externalize our self-control into devices without realizing that “externalizing self-control” is just another term for “control.” We falsely assume that communication inherently tends toward consensus and mutual respect (rather than what looks to me like the “tragedy of the comments”).
And because of the impoverishment of public institutions, we turn to ever more “public-private partnerships” that serve ad hoc needs without guarantees of ultimately serving public interests (that is, raw deals signed under duress).

In his warnings against reliance upon the unreliable, Morozov also sounds somewhat like Lessig cautioning us against the subtle corrosions of “improper dependence” in Republic, Lost. Some of Morozov’s targets are certainly doing some good. The problem is that they’re also nurturing unhealthy reliances, as with public dependency upon private (that is, corporate) means or human discernment upon algorithms. This point comes out sharply in his critique of gamification. His point is not that these civic badges or Farmville points earn money for corporate sponsors (though they usually do), or that gamifying duties and chores can’t successfully nudge or incentivize good public behavior (they often can). The problem is that these incentives— usually stoked by self-interest— might come to structurally supplant nobler motivations. We shouldn’t help leukemia patients for a chance to win a vacation to Curaçao; we should help them in order to cure their leukemia. Morozov notices that “there’s an element of self-fulfilling prophecy at work here: if policymakers believe that self-interest is the only option available, they will shape social and legal institutions accordingly. Perhaps they might even solicit the desired behavior— thus getting the much-needed confirmation that the world does indeed work the way they think” (303). Put another way, let’s imagine two different parenting styles: the first gives the child a Baby Ruth every time they mow the neighbor’s lawn; the second instead conveys to the child the intrinsic rewards of neighborliness. Which parenting style, do you think, would raise the better citizen?

Clearly, one danger of the rise of Silicon Valley is still that corporations like Google, Facebook, Amazon, and Uber are precisely that— corporations. They are firms and as such are, all sloganeering aside, committed to profit-maximization above all else. When these firms had either the free-wheeling liberty of a start-up or comfortable margins above the competition, they could flirt with other aims and unprofitable-but-interesting endeavors. Once these firms get larger or go public, serious market pressures erode everything except that profit-maximizing imperative. Two questions then arise. The first: how do these firms make their money? The second: how might they incidentally reorganize social being as they do so? The first question is important because, for many, entities like Google and Facebook seem “free” and so come across as little more than benevolent and generous services. Add a cute color palette, or a funky photo generator, and we can’t help but love them. We haven’t, as a society, come to any real legal or social understanding about data, social graphs or networks, so it’s hard to judge these exchanges, or their fairness and social effects. We don’t really understand the power equations involved, and Silicon Valley has been very quick to exploit our fuzziness. As for the second question, many have been so wowed by the success and initial ease of industry disruptors like Amazon and Uber, that they haven’t asked what exactly this disruption, in the final tally, has disrupted. Morozov continually warns about the erosion of the democratic public sector, for instance, and others have lamented the collapse of certain other social relations and institutions, but the widest question, which I don’t have room or ability to answer, is what choices about the reorganization of social being are we making by just rolling with Silicon Valley? 
And this is a question that goes beyond the fact that Silicon Valley is corporate or run by historically clueless tech-bros trying to become billionaires before thirty. Many of the forms and aims of Silicon Valley could turn ugly even without market imperatives. Hence Morozov’s attacks on Silicon shibboleths like openness, transparency, connectivity, efficiency, perfection, and even the eradication of crime are not attacks on the Valley’s ability to carry them out or its ulterior motives for doing so. He is attacking them as absolutes. Take transparency, for instance:

“British transparency theorist David Heald draws a useful distinction between transparency as an intrinsic value, as an end in itself, and transparency as an instrumental value, as merely a means to some more important goal, like accountability. Thus, writes Heald, ‘the “right” varieties of transparency are valued because they are believed to contribute to, for example, effective, accountable, and legitimate government and to promoting fairness in society.’ This means, among other things, that there are also ‘wrong’ varieties of transparency, which might lead to populism, thwart deliberation, and increase discrimination. It’s hard to believe that when Vladimir Putin ordered workers to install Web cams at polling stations across Russia, his invocation of transparency rhetoric serves functions other than legitimizing his own stay in power by pretending that Russian elections are even more democratic and transparent than those of Russia’s Western critics” (80).

Beyond corporate pressures, beyond Morozov, beyond just “Internet Centrism,” we can think more generally about the ambivalence of technics and power. They are ambivalent and not neutral— an important distinction. They are never neutral; whatever the tool, it will accomplish a mixed bag of goods and bads, depending on the use and circumstance. The idea then is not to locate a neutral technology— there ain’t such a thing— it’s to fully consider the oncoming goods and bads of each ambivalent technology.

What are the ambivalences of, say, a search engine like Google? Naturally we’re wary of having an advertising company as the custodian of our new universal library; we see this backfiring down the line. But I’d be less concerned about advertising distorting the results of the search. Google is very aware of its dependence on public trust, and the fickleness of Googlers; with a few exceptions, I think Google is greatly incentivized to try to keep PageRank as unbought as possible. A more serious danger lies in minor alterations in the algorithm that can radically redirect eyes and ears, and thus the entire human conversation, as in the recent warnings about Google’s power to influence our elections. However, fears like these— while very, very real— focus on deliberate manipulations by Google, what researchers call SEME (the Search Engine Manipulation Effect), and overlook the dangers even in the hypothetical case of a wholly angelic or best-case-scenario Google.
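For readers who haven’t met it, the core of the PageRank idea mentioned above can be sketched in a few lines: a page’s score is the score that flows to it along incoming links, plus a small “teleport” term. This is a minimal sketch of the recurrence Brin and Page published in 1998 only; the four-page web below is my invention, and Google’s production ranking folds in many more signals than links.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}           # start with a uniform score
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}   # "teleport" share
        for p, outs in links.items():
            if outs:
                share = rank[p] / len(outs)      # split p's score over its links
                for q in outs:
                    new[q] += damping * share
            else:
                for q in pages:                  # dangling page: spread evenly
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# A hypothetical four-page web: C is linked to by A, B, and D,
# so it ends up with the highest score; D, linked by no one, the lowest.
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(web)
```

Note that no one “decides” C is the most relevant page; the order falls out of the pattern of links, which is exactly the catallactic quality discussed below.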

For one, as we’ve seen, the mere overlay of an order, however apparently objective, neutral, or innocent, will impose both goods and bads. Order is always created and exploited by powers, and search engines are certainly no exception. These powers might be as democratic and humanist as you can imagine; there will still be goods and bads to come. In the case of an order as monopolizing as Google, it becomes the very grid through which certain things are possible or thinkable, and others not. “The fundamental codes of culture— those governing its language, its schemas of perception, its exchanges, its techniques, its values, the hierarchy of its practices— establish for every man, from the very first, the empirical orders with which he will be dealing and within which he will be at home.” This is from the beginning of Foucault’s The Order of Things, and with each year the ordering principles of the internet become more and more the invisible and “fundamental codes of culture,” and thus more and more deserving of scrutiny and skepticism.

Placing aside concerns about market pressures or biases in opaque, proprietary algorithms though, and assuming the results are conditioned by a “perfect” algorithm, the basic hierarchies of Google search results are created by us, the users. At first glance, people might be tempted to call this ordering “democratic” because it is performed by the people. However, I’d insist— and I think Morozov would agree— that this is a shallow conception of democracy. Yes, the crowds are there, and the ultimate form is a function of their behavior. This isn’t enough though. Democracy means, etymologically, rule by the people, and as it’s been said: with great power comes great responsibility. It’s not enough that the crowd— the demos— merely shows up; they also have to engage in real decision-making. Google search results provide no space for deliberation, debate, dissent, or decision, and hence no space for anything that we should honor with the name of democracy. It would be more accurate to call Google or any similar search engine “catallactic” and “hedonic,” and I’ll explain why.

Catallactic or catallaxy was a term brought to life by the Austrian economist Friedrich Hayek in attempting to explain the behavior of pricing mechanisms in free markets among actors with wildly diverse goals. More generally, it describes how stable orders arise from the billions of tinier, local, uncoordinated interactions (for Hayek, exchanges creating prices; for our purposes, clicks and links creating search engine rankings). No one necessarily decides on the prices just as no one necessarily decides on the order of search results (still speaking hypothetically of course). Generally, this is a good thing. However, sometimes things should be decided, at least in part, and so it’s important to see the limitations of the catallactic model. For instance, the order of search results is not just an order like the alphabet but a hierarchy of visibility. It determines what we see and hear and know and debate; it determines without deciding what is “relevant.” And while Google and others tinker with their algorithms to make them cleverer, in much of the results— especially with or within social media— orders of visibility are sorted by something of a pleasure principle. This is what I mean by Google being “hedonic,” or more accurately hedonistic: flitting like a moth toward pleasure and away from displeasure. Connecting back to earlier, “relevance” is not subject to discussion, debate, or dissent; it is calculated by nudges and incentives however self-interested, subconscious, or sick-minded. It won’t necessarily adhere to norms or ideals. Search engines sort a thick, philosophical concept like “relevance” according to what Lazzarato calls “asignifying semiotics”— by indices and signals that do not rise to the level of concepts and discussability. Numbers. Clicks. Links. Metrics. Patterns. Is this the wrong way to make a search engine? No, it’s probably the best and most efficient way in general.
But there will be times when it’s not, or when the bads go unnoticed, or even when it works too well.

One maxim I’ve been rubbing a lot lately is that power compounds. Power differentials tend to widen exponentially, as power is used to gain yet more power, as with financial capitalization or the war-time acquisition of an enemy’s weapons and resources. This is also true for popularity (and search engines), as visibility begets more visibility without a proportional change in relevance; slight differences between, say, Slot 1 and Slot 2 in your search for “funny news bloopers” get amplified through a feedback loop of order and attention. This distortion is not a manipulation by Google, but if anything, by a public with finite time and attention. I would say that most deliberate manipulation comes from users, as well. This includes, of course, manipulations like “search engine optimization” and “online reputation management,” but should widen to include attempts not just to hack the search engine but public behavior and institutions as well. As with the third register of corruption that we spoke about in the first session— by which you could become a good politician, a good businessman, and even a good novelist without necessarily bettering our political, economic, or cultural life— here we uncover gimmicks for becoming “good at relevance” without becoming any more relevant. This means gaming the conditions of success in order to clutch a kind of oversuccess, a suroptimum. Then, when everybody gets into the game, crafting content purely for gaining visibility, the whole thing goes to shit, and does so catallactically (as we see plenty of “bad” catallaxy at work in the marketplace).
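The compounding dynamic can be made concrete with a toy simulation (my illustration, not Morozov’s): two results of almost identical intrinsic appeal, where whichever currently has more clicks occupies the top slot, and the top slot is seen roughly three times as often. The names, appeal values, and boost factor are all invented for the sketch.

```python
import random

def simulate(appeal=None, top_slot_boost=3.0, searches=100_000, seed=1):
    """Toy model: the result with more accumulated clicks holds the top
    slot, and the top slot's appeal is amplified by top_slot_boost."""
    if appeal is None:
        appeal = {"A": 0.51, "B": 0.50}   # nearly identical intrinsic appeal
    random.seed(seed)
    clicks = dict.fromkeys(appeal, 1)     # start effectively tied
    for _ in range(searches):
        top, bottom = sorted(clicks, key=clicks.get, reverse=True)
        w_top = appeal[top] * top_slot_boost   # visibility amplifies appeal
        w_bottom = appeal[bottom]
        if random.random() < w_top / (w_top + w_bottom):
            clicks[top] += 1              # attention feeds back into position
        else:
            clicks[bottom] += 1
    return clicks

result = simulate()
```

The near-tie does not persist: whichever result claims the top slot early keeps it, and accumulates a multiple of the runner-up’s clicks, even though the two were separated by a hundredth of a point of “relevance.”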

Part of the problem lies with the project itself: to quantify and cardinally order “relevance.” What is relevance? And take this question philosophically because no one has ever answered it. Perhaps a thick concept like relevance cannot be grasped algorithmically (hint: it can’t). Perhaps what would be relevant is not simply what is current or proximate. Perhaps the most relevant idea is something no one has thought of, or linked to, or insinuated before, and will be a complete rupture in the conversation. The more an order becomes dominant though, such as Google search results, the more it dominates its contents and banishes even much-needed disorder. However, Morozov charges, Silicon Valley doesn’t trouble itself too much over the lossiness of its methods. “The problem with Silicon Valley’s quest to organize the world’s information (Google is only one of the culprits here) is that it tends to succumb to the worst excesses of ‘information reductionism’— a tendency to view all knowledge through the prism of information that sociologist Haridimos Tsoukas faults for assuming that a ‘set of indices’ can ‘adequately describe, to represent, the phenomena at hand.’ The quest to organize the world’s knowledge cannot proceed without doing at least some violence to the knowledge it seeks to organize; making knowledge ‘legible,’ to borrow James Scott’s phrase, is tricky regardless of whether a totalitarian government or a Silicon Valley start-up does it” (87).

Instead of searching for a single perfect order, it would be better to not rely on any one order— or orderer— if for no other reason than that our “fundamental codes of culture” become less invisible and insidious when offered an alternative or competitor. Not just two or three Googles, or Facebooks, or Twitters, not just close competitors à la Ford and Chevy, but genuinely alternative visions and off-kilter engines that would introduce thickness, randomness, debate, and irrelevance, or that call upon users or other entities to reuse or misuse the data collected by Google, inverting its hierarchies, introducing noise or gumminess, or deriving their own models or purposes from those data sets (which Morozov wants us to legally reclaim). This plurality of contenders introduces— however piecemeal— some discussion, dissent, and decision into “relevance” that the seamlessness of Google tries so hard to expunge. This would not be enough for Morozov, wary as he is of solutions that put the ball back into the court of Silicon Valley, or its satellites, or even many activists with technological solutions to technological problems. His contention, borrowed from people like Bruno Latour, is that we confront technology not just as an inert object or set of objects, but always as a hybrid of society and technology, in which a technology cannot be separated from its uses and contexts, or the social and political questions surrounding these uses and contexts. And much more than me, when he says “politics,” he often means quite literally “getting officials elected and legislation passed,” and continually warns of vacations into theory— especially those that expound the “essence of technology.”
