Robert W Gehl, Assistant Professor of New Media, Department of Communication, University of Utah

Reverse Engineering Social Media

Winner of the 2015 Association of Internet Researchers Nancy Baym Book Award

Reverse Engineering Social Media combines software studies, cultural studies, and Marxian political economy to critique contemporary social media sites such as Facebook, Google, Twitter, Wikipedia, Amazon, and Digg.

Social media are full of contradictions. Sites like Facebook and Twitter are fun to use, yet they can often feel like work as we are compelled to constantly like and tweet. They are free to use, yet they generate billions of dollars for their owners. They allow us to produce our identities, yet they delimit what we can say and do. So how did we get here?

This book reverse engineers social media by tracing their genealogies and decomposing their architectures. It begins with one of the latest developments in social media: socialbots, programs with social media profiles meant to appear human to others. These bots are linked to the long history of Turing-inspired artificial intelligence research. What socialbots reveal is the extent to which programs can interact with us online - and how sites like Facebook, Google+, and Twitter structure us to be a bit more machine-like in order to allow that interaction.

This book critiques Facebook and Twitter's myopic focus on the new - best seen in prompts like "What are you doing now?" and in structures such as new tweets on top - by tracing it to the von Neumann architecture of computation. This architecture has two parts: the processor, which is focused on the immediate, and storage, which is concerned with transposing data out of time. Social media's emphasis on the new casts users as processors, while social media site owners store the results of this processing in rationalized archives - archives the users themselves are not given access to.

The book also traces social media's use of software architectures to direct users-as-laborers, a technique developed in the early days of software engineering and now a prevalent part of that field. While software architectures can direct the labor of programmers who build software for corporations such as Microsoft and Apple, they are also used to direct the work of users who build profiles and connect with one another through social media interfaces.

Finally, Reverse Engineering Social Media explores protocological power in social media, specifically as this form of power appears in marketing and advertising standards. In the 1990s, before the rise of social media, trade groups such as the Interactive Advertising Bureau standardized online ad sizes and metrics. Sites such as Google later relied on these standards to build their businesses out of surveillance and customized advertising.

However, the book doesn't stop at critique. After exploring the architectures of sites such as Facebook and Twitter, Reverse Engineering Social Media looks for ways out of the structural inequalities of contemporary corporate social media. One way out is illustrated by the answer to a simple question: why do we type in "wikipedia.org" instead of "wikipedia.com"? The answer (a labor strike early in Wikipedia's history) reveals conditions by which activists might articulate users, discourses, and technologies into successful resistance to the dominant surveillance- and marketing-based political economy of the Web.

The final chapter considers all of these contradictions and offers a "design speculation" for socialized media. Socialized media would be under the control of users, would be free of corporate and state surveillance, and would allow users to fully express themselves and enjoy the pleasures of social media. While this speculation may seem like an impossible goal, Reverse Engineering Social Media surveys state-of-the-art, activist-led efforts to make it real. There is much to be optimistic about if we reverse engineer social media.