
Tuesday, July 13, 2010

How to Have Culture in an Algorithmic Age

Note: this entry was originally posted to The Late Age of Print on June 14, 2010. I'm reposting it here because of the surprising amount of attention it's received and its relevance to my readers here on D&R.


The subtitle of this post ought to be "apparently," since I have growing doubts about substituting digital surveillance systems and complex computer programs for the considered -- humane -- work of culture.

Case in point: about six weeks ago, Galley Cat reported on a new Kindle-related initiative called "popular highlights," which Amazon.com had just rolled out onto the web for beta testing. In a nutshell, Amazon is now going public with information about which Kindle books are the most popular, as well as which passages within them have been the most consistently highlighted by readers.

How does Amazon determine this? Using the 3G connection built into your Kindle, the company automatically uploads your highlights, bookmarks, marginal notes, and more to its server array, or computing cloud. Amazon calls this service "back up," but the phrase is something of a misnomer. Sure, there's goodwill on Amazon's part in helping to ensure that your Kindle data never gets deleted or corrupted. By the same token, it's becoming abundantly clear that "back up" exists as much for Amazon's benefit as it does for your convenience, since the company mines all of your Kindle-related data. The Galley Cat story only confirms this.
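For the technically curious, a single uploaded annotation record might look something like the sketch below. I want to be clear that this is conjecture for illustration only: Amazon has never published its sync format, and every field name here is my own assumption.

# Purely conjectural sketch of one synced annotation record. Amazon's
# actual format is not public; every field name below is an assumption.
annotation_event = {
    "device_id": "kindle-example",        # which Kindle sent it
    "book_id": "example-asin",            # which book it concerns
    "kind": "highlight",                  # or "bookmark", "note", ...
    "span": (10442, 10518),               # position of the passage in the book
    "note_text": None,                    # populated for marginal notes
    "timestamp": "2010-06-01T14:03:00Z",  # when the reader made it
}

Whatever the real format looks like, the point stands: every act of reading becomes a discrete, machine-readable event the moment it leaves your device.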

This isn't really news. For months I've been writing here and elsewhere about the backup/surveillance issue, and I even have an academic journal article appearing on the topic this fall. Now, don't get me wrong -- this is an important issue. But the focus on surveillance has obscured another pressing matter: the way in which Amazon and other tech companies are altering the idea of culture through these types of services. Hence my concern with what I'm calling, following Alex Galloway, "algorithmic culture."

In the old paradigm of culture -- you might call it "elite culture," although I find the term "elite" to be so overused these days as to be almost meaningless -- a small group of well-trained, trusted authorities determined not only what was worth reading, but also which aspects of a given selection were the most important to focus on. The basic principle is similar with algorithmic culture, which is also concerned with sorting, classifying, and hierarchizing cultural artifacts.

Here's the twist, however, which is apparent from the "About" page on the Amazon Popular Highlights site:
We combine the highlights of all Kindle customers and identify the passages with the most highlights. The resulting Popular Highlights help readers to focus on passages that are meaningful to the greatest number of people.

Using its computing cloud, Amazon aggregates all of the information it's gathered from its customers' Kindles to produce a statistical determination of what's culturally relevant. In other words, significance and meaningfulness are decided by a massive -- and massively distributed -- group of readers, whose responses to texts are measured, quantified, and processed by Amazon.
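To see how thin that "statistical determination" can be, consider a minimal sketch of such an aggregation in Python. This is emphatically not Amazon's algorithm, which remains proprietary; it simply counts how many readers highlighted the identical span and ranks the results:

from collections import Counter

def popular_highlights(highlight_events, top_n=10):
    # highlight_events: iterable of (book_id, start_pos, end_pos) tuples,
    # one per reader highlight. "Significance" reduces to a frequency
    # count over identical spans.
    return Counter(highlight_events).most_common(top_n)

events = [
    ("example-asin", 10442, 10518),  # reader A
    ("example-asin", 10442, 10518),  # reader B highlights the same span
    ("example-asin", 20110, 20160),  # reader C
]
print(popular_highlights(events))
# [(('example-asin', 10442, 10518), 2), (('example-asin', 20110, 20160), 1)]

Even this toy version raises questions the real system must answer invisibly: readers rarely highlight byte-identical spans, so some scheme for merging overlapping highlights has to exist -- one of the many decisions hidden inside the black box.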

I realize that in raising doubts about this type of cultural work, I'm opening myself to charges of elitism. So be it. Anytime you question what used to be called "the popular," and what is now increasingly referred to as "the crowd," you open yourself to those types of accusations. Honestly, though, I'm not out to impugn the crowd.

To my mind, the whole elites-versus-crowd debate is little more than a red herring, one that distracts from a much deeper issue: Amazon's algorithm and the mysterious ways in which it renders culture.

When people read, on a Kindle or elsewhere, there's context. For example, I may highlight a passage because I find it to be provocative or insightful. By the same token, I may find it to be objectionable, or boring, or grammatically troublesome, or confusing, or...you get the point. When Amazon uploads your passages and begins aggregating them with those of other readers, this sense of context is lost. What this means is that algorithmic culture, in its obsession with metrics and quantification, exists at least one level of abstraction beyond the acts of reading that first produced the data.
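Continuing the sketch from above, here is all that survives aggregation. The names are hypothetical, as before, but the shape of the loss is the point:

# Output of the toy popular_highlights() above, at scale: each passage is
# now just a span plus a tally. The reader, the occasion, and above all
# the reason for highlighting are gone; there was never a field for "why."
[(("example-asin", 10442, 10518), 1893),
 (("example-asin", 20110, 20160), 1540)]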

I'm not against the crowd, and let me add that I'm not even against this type of cultural work per se. I don't fear the machine. What I do fear, though, is the black box of algorithmic culture. We have virtually no idea how Amazon's Popular Highlights algorithm works, let alone who made it. All that information is proprietary, and given Amazon's penchant for secrecy, the company is unlikely to open up about it anytime soon.

In the old cultural paradigm, you could question authorities about their reasons for selecting particular cultural artifacts as worthy, while dismissing or neglecting others. Not so with algorithmic culture, which wraps abstraction inside of secrecy and sells it back to you as "the people have spoken."

3 comments:

Brett Boessen said...

This is a great point, and at least for me, often easy to forget. The aggregation of data via their "backup" process is as coldly quantitative as a graph or chart (exchanging the divergent and singular specificity of each data point for a kind of simplicity). I consider this very similar to the argument in favor of more ethnographic studies of culture (that the singular can often be revealing in its depth, even if it is limited by its breadth).

Still, one question: is "fear" really the way you would describe your relationship to the "black box"? That seems a little extreme to me (for me it would be more like "concern" or perhaps even "troubled").

Ted Striphas said...

Thanks for the comment, Brett. Yes, "fear" is probably too strong a word, as you rightly note. Thanks for reining in my rhetorical flourish.

scritic said...

Ted, this is a very thought-provoking post. I put up some thoughts on this here.