"Clay Shirky may be the finest thinker we have on the internet revolution." Steven Johnson, author of Everything Bad Is Good for You and Emergence

"Clay has long been one of my favorite thinkers on all things internet---not only is he smart and articulate, but he's one of those people who is able to crystallize the half-formed ideas that I've been trying to piece together into glittering, brilliant insights that make me think, yes, of course, that's how it all works." Cory Doctorow, coeditor of Boing Boing and author of Overclocked: Stories of the Future Present

"Shirky writes cleanly and convincingly about the intersection of technological innovation and social change; he makes both the science and the sociology accessible." New York Observer

"[Shirky] looks at self-organization with an academic's eye, tinged by an appreciation of the commerce that underlies a fair amount of the Internet." The Seattle Times

"In story after story, Clay masterfully makes the connections as to why business, society and our lives continue to be transformed by a world of net-enabled social tools. His pattern-matching skills are second to none." Ray Ozzie, Microsoft Chief Software Architect

The author of the breakout hit Here Comes Everybody reveals how new technology is changing us from consumers to collaborators, unleashing a torrent of creative production that will transform our world.

In Cognitive Surplus, internet guru Clay Shirky forecasts the thrilling changes we will all enjoy as new digital technology puts our untapped resources of talent and goodwill to use at last.

Since Americans were suburbanized and educated by the postwar boom, we've had a surfeit of intellect, energy, and time---what Shirky calls a "cognitive surplus." But this abundance had little impact on the common good because television consumed the lion's share of it. Now, for the first time, people are embracing new media that allow us to pool our efforts at vanishingly low cost.
The results of this aggregated effort range from the mind-expanding, like reference tools such as Wikipedia, to the life-saving, like Ushahidi.com, which has allowed Kenyans to report acts of violence in real time.

Shirky charts the vast effects that our cognitive surplus---aided by new technologies---will have on twenty-first-century society, and how we can best exploit those effects. For instance, he acknowledges that new technology brings greater freedom to publish and hence lower quality on average. But it also allows for the sort of experimentation that produces our greatest innovations. Shirky also assesses the transformative impact of online culture, which is by definition more transparent than traditional management structures.

The potential impact of cognitive surplus is enormous. Wikipedia, which was built out of roughly 1 percent of the man-hours that Americans spend watching TV every year, is only the tip of the iceberg. Shirky shows how society and our daily lives will be improved dramatically as we learn to exploit our goodwill and free time like never before.
Cognitive Surplus, the new book by internet guru Clay Shirky, begins with a brilliant analogy. He starts with a description of London in the 1720s, when the city was in the midst of a gin binge. A flood of new arrivals from the countryside meant the metropolis was crowded, filthy, and violent. As a result, people sought out the anesthesia of alcohol as they tried to collectively forget the early days of the Industrial Revolution.
For Shirky, the gin craze of 18th-century London is an example of what happens when societies undergo abrupt changes, such as the shift from rural agriculture to urban factories. Life becomes a bewildering struggle, and so we self-medicate the struggle away. But Shirky isn't a historian, and this isn't a history book. Instead, he's trying to grapple with our future. As he notes, the second half of the 20th century was defined by a similarly difficult social transition, as we moved into a post-industrial world characterized by the incessant flow of information.
So what has been our gin? Shirky's answer is simple, perhaps too simple. He argues that the television sitcom -- those comic soap operas that saturated the airwaves for decades -- was the alcohol of post-war societies, "absorbing the lion's share of the free time available to the developed world." (The numbers are depressing: even today, Americans sit through a hundred million hours of TV commercials every weekend.) Instead of fretting about the dislocations of the Information Age, we sat on the couch and watched Gilligan's Island.
But now, Shirky says, the reign of television is coming to an end. For the first time in decades, a few select cohorts of those under the age of 30 seem to be watching less TV than their parents. (Shirky doesn't mention that overall television consumption is still rising. According to Nielsen's media tracking survey, the amount of time the average American spent in front of the tube reached 153 hours per month in 2009, the highest level ever recorded.) But if young people aren't watching quite as many mindless sitcoms, and they're not drunk in the streets, then what the hell are they doing?
They're online, prowling the world wide web. Shirky describes this shift in media consumption as a net "cognitive surplus," since our brain is no longer mesmerized by the boob tube. Needless to say, he describes this surplus as a wonderful opportunity, a chance to get back some of the productive social interactions that were lost when we all decided to watch TV alone. And when this new pool of free time is combined with the internet -- a tool that enables strangers all across the world to connect with each other -- the end result is a potentially vast new source of productivity. "The wiring of humanity lets us treat free time as a shared global resource," Shirky writes. Furthermore, the web allows people to "design new kinds of participation and sharing that take advantage of that resource."
After Shirky introduces his argument, much of the remaining 170 pages of the book are devoted to outlining what this surplus has produced. The author begins by describing the protests in South Korea over the importation of American beef. Interestingly, a majority of the protesters were teenage girls, who had been motivated to take to the streets by their online conversations. (Many of these conversations took place on a website dedicated to a Korean boy band.) Shirky describes this protest movement in breathless terms: "When teenage girls take to the streets to unnerve national governments, without needing professional organizations or organizers to get the ball rolling, we are in new territory," he writes.
But are we really? There were, after all, a few political protests before the internet. Somehow, the students at Kent State found a way to organize without relying on the chat rooms of Bobdylan.com. While the internet might enable a bit more youthful agitprop, it seems unlikely that we are on the cusp of a new kind of politics, driven by the leisure hours of the young.
The most compelling example of Shirky's cognitive surplus is also the most obvious: Wikipedia. He estimates that the pages of Wikipedia represent more than 100 million hours of human thought. All that unpaid labor has produced, by far, the most comprehensive, thorough, and intelligent summary of human knowledge that has ever existed. (Wikipedia includes more than 15 million free articles in 200 different languages.) And it was all done by perfect strangers, most of whom are not experts in anything.
But Wikipedia is not new; the online encyclopedia was first launched in 2001. After getting enthralled by the opening premise of the book, I expected Shirky to have a long list of exciting new examples of our surplus at work. This is where the book gets slightly disappointing. From Wikipedia, Shirky takes us on a tour of ... lolcats. He cites ICanHasCheezburger.com as an example of what happens when our cognitive surplus is transformed into "the stupidest possible creative act." While Shirky pokes fun at the site, he still argues that it represents a dramatic improvement over the passive entertainment of television. "The real gap is between doing nothing and doing something, and someone making lolcats has bridged that gap."
There are two things to say about this. The first is that the consumption of culture is not always worthless. Is it really better to produce yet another lolcat than watch The Wire? And what about the consumption of literature? By Shirky's standard, reading a complex novel is no different than imbibing High School Musical, and both are less worthwhile than creating something stupid online. While Shirky repeatedly downplays the importance of quality in creative production -- he argues that mediocrity is a necessary side effect of increases in supply -- I'd rather consume greatness than create yet another unfunny caption for a cat picture.
The second thing is that it remains entirely unclear if the creative and generous acts made possible by the internet are really a replacement for time spent watching sitcoms. After all, people have always had hobbies; although they watched plenty of bad television, they also read newspapers and built model airplanes, went on hikes and volunteered at the local shelter. In other words, we weren't quite as mindless or disconnected as Shirky seems to believe. In his zeal to celebrate the revolutionary capabilities of the internet, Shirky downplays the virtues of the world before the web. And then there is the terrifying possibility (not addressed by Shirky) that our online life is detracting, not from time spent watching TV, but from our interest in things that have nothing to do with technology, such as talking with friends or taking walks in the park.
It's easy to find quibbles with an argument as audacious and thought-provoking as that put forward in Cognitive Surplus. The fact is, Shirky has written an important book about an interesting moment in human history. We have arranged our modern lives to maximize free time. Now, thanks to the virtual infrastructure of the internet, we are able to collaborate and interact as never before. The question is what these collaborations will create. A surplus, after all, is easy to squander.
1 Gin, Television, and Cognitive Surplus
2 Means
3 Motive
4 Opportunity
5 Culture
6 Personal, Communal, Public, Civic
7 Looking for the Mouse
A conversation with Andrew Keen
(With additional questions from James Mustich, Editor-in-Chief of the Barnes & Noble Review).
According to media columnist Michael Wolff, the name Clay Shirky is “now uttered in technology circles with the kind of reverence with which left-wingers used to say, ‘Herbert Marcuse’.” Wolff is right. Shirky has emerged as a luminary of the new digital intelligentsia, a daringly eclectic thinker as comfortable discussing 15th-century publishing technology as he is making political sense of 21st-century social media.
In his 2008 book, Here Comes Everybody, Shirky imagined a world without traditional economic or political organizations. Two years later, Shirky has a new book, Cognitive Surplus, which imagines something even more daring -- a world without television. To celebrate the appearance of the revered futurist’s latest volume, we’re delighted to share a February discussion between Shirky, Barnes & Noble Review Editor-in-Chief James Mustich, and BNR contributor Andrew Keen. What follows is an edited transcript of their conversation about the future of the book, of the reader and the writer, and, most intriguingly, the future of intimacy.
James Mustich: Clay, I was very taken with that post you wrote about the early days of the Gutenberg revolution.
Clay Shirky: Oh, yes. Eisenstein’s book.
JM: Right. It had a very insightful historical perspective that’s generally lacking in conversations about today’s publishing turmoil. You also had an interesting piece at edge.org recently, about how publishing is the new literacy. You said, “It is our misfortune to live through the largest increase in expressive capability in the history of the human race -- a misfortune because surplus always breaks more things than scarcity.”
Andrew Keen: This idea of publishing as “the new literacy” sounds like a sexy, kind of Twitter remark, but what actually does that mean?
CS: We have this whole complex of words, “publish,” “publisher,” “publicity,” “publicist,” that all refer to either jobs or the work of making things public. Because it used to be incredibly difficult, complicated, and expensive to simply put material into the public sphere, and now it’s not. So I’m comparing it to literacy -- literacy used to be reserved for a specialist class prior to the printing press, and, much more importantly, prior to the spread of publishers and the rise of a real publishing industry.
AK: What do you mean by “reserved”?
CS: That it was reserved for a professional class. There was no point in educating people to read and write who weren’t also going to have access to books, and the people who had access to books were generally in centers of learning or churches. You couldn’t have mass literacy without also mass availability of things to read, which didn’t happen until after Gutenberg. So literacy went through this curious transition where it became more critical to society, and you could no longer make a living just by the ability to read and write.
So when I say “publishing is the new literacy,” I don’t mean there’s no role for curation, for improving material, for editing material, for fact-checking material. I mean literally, the act of putting something out in public used to be reserved in the same way. You used to have to own a radio tower or television tower or printing press. Now all you have to have is access to an internet café or a public library, and you can put your thoughts out in public.
So what happened to literacy in the 1600, 1700 and 1800s is that it went from being reserved for a specialist class to being a general feature of the middle class. The same thing is happening to publishing -- the ability to put something out in public is becoming more important to society, but the delta between “I can put something out in public” and “I can’t put something out in public” is no longer so great that you can automatically make money simply by having access to the means of publication.
AK: Is that change technological or cultural?
CS: Well, it’s a technological change whose ramifications are mostly cultural, and culture is, I think, lagging the technology, as it often does, because the raw capability isn’t what changes society. As I put it in Here Comes Everybody, “society doesn’t change when people adopt new tools; it changes when people adopt new behaviors.” So in the 1990s, we had a population of some tens of millions of people in this country who had access to online material, but that wasn’t yet the majority of the population. More importantly, those people hadn’t yet fully formed behaviors around what was possible digitally. Now, having gone through the first decade in which digital freedom was a normal part of life for more than half the country, we’re seeing the cultural change that comes about as a result of the technological change. So the ability to publish, the ability to put things in public no matter who you are, as long as you have access to, again, a public library or an internet café -- that’s a technological change. But the change in perception and reaction to what gets published and why, that’s the cultural change.
JM: What interests me in what you wrote about the printing press, and the immediate changes its advent provoked, is how unclear the effect of its influence was to those subject to it at the time. Today, in the throes of another massive technological change, we’re trying to see very clearly what’s happening, and part of the point of your piece is that we can’t see it either, because we don’t know what the behaviors are going to be that are engendered by the technology. But as a book guy, what I have often looked at is how much the book publishing industry defines the situation only in the context of its very recent history. . .
CS: Yes, right. Absolutely.
JM: . . . which makes them think of content in terms of these finished products, books; that’s the only way people get information, or the only way creative work gets to people. But those products in that form are really consequences of industrial organization, or management decisions made in the very recent past. If you look back to the 19th century or before that, printed creative work was much more dynamic. It came about in pieces, then it was collected in books. Very few people were sitting down and writing books the way they do now.
CS: A whole book, right. Dickens was paid by the word in newspapers, and then Pickwick Papers came out of the assembly of that work.
JM: Exactly. It seems to me that what we’re seeing is, in some ways, that technology is allowing us to go backward to a more dynamic form of communicating these works. I’m wondering if you’ve given any thought to that, and what that thought might be.
CS: One of the problems with any kind of talking about the media landscape is that we’ve just been through an unusually stable period in which, for fifty years, English language media was centered in three cities -- London, New York, and Los Angeles -- around a very stable group of people working in a relatively stable set of media. This is the media landscape where getting your television in through a wire rather than through the air constitutes a revolution. That was a really big deal. Cable was supposed to be a huge change. And indeed it was, within the context of television, a large change. It’s just that now we’re actually dealing with a change that’s a shock across the whole environment.
I have this theory. I call it the Russia-Poland Theory. Which is: one of the reasons Poland did better than Russia after the collapse of Communism is that Poland had only had one generation under the Communists, so there were still people who could remember that it had been different. Whereas, in Russia, no one alive remembered what life was like in 1916. When people go through two generations of stability, it’s easy enough to adopt an attitude that it has always been this way. So for somebody entering the book publishing business in, say, the year 2000, some 23-year-old just out of school, it has ALWAYS been this way. No one in the publishing industry has known anything but the postwar landscape. What you get when a situation like that happens is that one word comes to stand in for a business, a production method, a product, a cultural signifier -- the whole range of it is all compacted into that single thing.
You can see it really clearly with television. You go to the store to buy a television, and then you come home and you watch some television. But the television you buy isn’t the television you watch, and the television you watch isn’t the television you buy. We use the same word to refer to the object and the content flow, and nobody gets confused because we all know what television is. Now all of a sudden, we have video spilling out of phones and personal computers, and the question “Is that television?” becomes really complicated.
To books specifically: books are a considered form of long-form writing. They’re a physical product, and then, as you say as “a book guy” -- there are book guys, right? There are people who live their whole lives in the context of producing this long-form writing and turning it into those physical objects. And all that stuff is coming apart.
AK: But books are Russia. Not Poland.
CS: Yes, books are Russia, not Poland. That’s exactly right.
AK: Whereas you could argue television or the music business is more like Poland. They’re all relatively new. Whereas the book business is much older.
CS: You’re right. The book business is, in this metaphor, Russia, which is to say the stability of the book business predates the Second World War. In fact, you read all this stuff about the rise of paperbacks, mass market paperbacks in the 1950s -- people were freaking out that it didn’t have a hard cover. That constituted a revolution in books.
But what we’re dealing with now, I think, is the ramification of having long-form writing not necessarily meaning physical objects, not necessarily meaning commissioning editors and publishers in the manner of making those physical objects, and not meaning any of the sales channels or the preexisting ways of producing cultural focus. This is really clear to me as someone who writes and publishes both on a weblog and books. There are certain channels of conversation in this society that you can only get into if you have written a book. Terry Gross has never met anyone in her life who has not JUST published a book. Right?
JM: [LAUGHS] Right.
CS: It’s like every judge thinking that criminals dress in blue suits all day long. Terry Gross’ experience is only talking to people who have just written books.
AK: Why does Terry Gross only talk to the traditional author?
CS: I think because the cost of writing a book is very large. Someone has committed a lot of time to it. They’ve put a lot of their thinking into it. But also, a whole bunch of other people who have significant amounts of capital on the line have said, “This is worth publishing.” They’ve either said it in the context of the academic press, which says, “This will redound to our credit,” or they’ve said it in the context of the commercial press, which says, “Revenues will exceed expenses.” We use the phrase “self-published author” to mean “vaguely suspect.” Right? Or take painters. Anyone can be a painter, but the question is then, “Have you ever had a show; have you ever had a solo show?” People are always looking for these high-cost signals from other people that this is worthwhile.
That I think is one of the big changes in book culture, that it used to be a pretty safe way to say, “I’m talking to the people who I should be talking to if I’m talking to the people who’ve written books about the subject,” and now that is less the case for two reasons. For one, the book world is opening up -- the maw of production of the book world is opening up -- the iUniverses and so forth of the world. Getting a physical object no longer means somebody else took a big economic flyer on it. At the same time, more thoughtful long-form writing is happening outside of the traditional publishing industry. So the old rough-and-ready, “I’m vetting for quality by only talking to authors of books” model is suddenly up in the air. Books are less valuable as signifiers, and people who you ought to be talking to, some of them don’t write books.
AK: One example of this type of new author is Andrew Sullivan. He is a classic 19th-century guy who now is in the 21st century, who has decided that the long-form world doesn’t work. But he is a star of both the old and the new world. He actually proves that the arguments about elitism of the old world are in some ways just as relevant in the new world.
CS: You said to me on Twitter the other day, “Oh, you’re secretly an elitist.” I remember thinking I’m actually, I think, kind of openly an elitist.
AK: You’ve said it now. That’s the end of your career. [LAUGHS]
CS: That’s the end, right. [LAUGHS] I’ve always adopted the Bill Burroughs mantra, which is, “If a thing is worth doing, it’s worth doing badly.” Which is to say that if there is any intrinsic value in writing or expressing yourself or taking a photo, it’s worth doing even if the results are mediocre. Whenever the production maw has opened more widely, whether it’s cheap photography or it’s weblogs, the average quality falls. The average quality of a piece of writing is now lower because the denominator has exploded. The question becomes how do you find the good stuff in this much larger group. I am not somebody who believes everyone is equally talented; talent remains unequally distributed. What’s interesting now is that the old gatekeepers for identifying, anointing, and promoting talent are different in this generation than they were previously.
There’s an interesting natural experiment going on around this very question of elitism with Nick Denton’s Gawker empire. Denton is the person who discovers Elizabeth Spiers. Denton hires Anna Marie Cox. He finds this great group of early writers, who then all get picked up by traditional media and switch jobs, because whatever else you can say about the platform for Gawker weblogs, they don’t pay that well. So all of a sudden, Nick is now the recruiter for traditional media. And the question becomes: is the talent pool large enough, unlimited enough, that Nick can do that 100 times in a row, always finding somebody else no matter how many times mainstream media recruits these people away from him -- or is he going to run out of talent and end up just being a recruiter? We don’t know how widely distributed the talent he relies on is. If there are only ten writers as good as Cox, he’s got a problem. If there are a hundred writers that good on that subject, then it’s the mass media that has a problem instead. If it turns out that the medium can’t employ everyone who’s talented, the supply and demand equation switches, and the premium you get for talent actually turns out to be a premium for talent plus happening to have the microphone in your hand. We don’t know yet. Plainly, supply is larger than the current number of available slots in the mass media. But we don’t know if it’s 10 times larger, at which point we see a rebalancing, or 100 times larger, at which point we really see a restructuring.
AK: Can we go back to the book? It occurs to me that one of the reasons the traditional book lasted so long -- one of the reasons it was Russia -- is that the form and the function went very well together, and the book was a great way of tracking talent. Take the birth of the 19th-century novel, which is the classic way of putting together a finished product, which then the Industrial Revolution was able to polish and distribute. So when Poland went, it wasn’t so dramatic. But when Russia goes, it’s going to be really dramatic. We haven’t even seen the beginning in the book revolution, have we?
CS: I think we are literally just seeing the beginning now. Just yesterday, Google said, “Our negotiating position vis-à-vis the publishers has changed dramatically in the last 30 days.” Google has been doing this stuff quietly, one way or another, since 2005 -- Google Scholar, Google Books, digitization, negotiating digital rights, and so forth -- because it was essentially going to be the second entrant in a monopolistic environment largely dominated by Amazon. The rise of the iPad, and the at least not completely accidental renegotiation of the MacMillan-Amazon relationship at the same time, has meant that supply and demand are more nearly balanced now, and that the publishers have greater leverage to use that platform.
That is a two-edged sword, which is to say that the ability to engage in price competition with one another cuts both ways in a digital environment because the marginal cost of distribution is still zero. But I think that the last 60 days are the beginning of the real change.
I’ve got a book coming out in June. The first one, I signed -- I started the proposal in 2005, started to work on it in 2006, finished it in 2007, it came out at the beginning of 2008. So I’m sitting down at the beginning of 2010, almost two years to the day since the previous one came out...well, it’s actually 2½ years since I was having these kinds of conversations, and in that 2½ years the eBook has gone from being an afterthought, “let’s try it if we can,” to an absolutely normal part of the process.
AK: What is the book, and who is publishing it?
CS: Oh, it’s Penguin Group. It’s currently called Cognitive Surplus, and it’s about being able to treat people’s free time as an aggregate rather than as a series of individual silos.
AK: Ok. But digitizing a book doesn’t change anything about it in grand historical terms, does it?
CS: I think it does, because it puts it into an ecosystem where more people have access to more books. The digitizing of a book adds to searchability, it adds to portability, it adds to...
CS: Search is essentially the current model of information-finding, where the old model was that you go to the library and they tell you that you have to know which database you’re looking in before you look. That’s fine when there are 500 databases, maybe, and someone can help me decide. But when there’s an unlimited number of data sources, search becomes the intellectual model of the age. I remember knowing when I’d switched over to thinking digitally when I picked up a copy of Naked Lunch, and I wanted to find a little passage called “Hauser and O’Brien,” about two cops in New York City. I realized, “I can’t search for that.” I had to remember that it was about three-quarters of the way through the book, and I can kind of vaguely remember it was midway on the right-hand page or something. That experience of not being able to recapture what you’ve done before is one of the great infelicities of the book world, and I think it’s especially frustrating to people with nonfiction when there is a particular point they want to go back to.
The other thing it does, though -- which is good or bad, depending on your taste -- is it encourages the ability to skip ahead to the parts they want to read. I mean, nonfiction books are going to be transformed, I think, much more dramatically than fiction, precisely because their utility means that people are going to essentially disassemble them mentally even if they’re sold as a single package. So to your point about Dickens being assembled in the book after having been created in this disassembled way, we may potentially be seeing something like that on the demand side, which is: I’d like to be able to take this nonfiction book and take it apart again, and preserve or flag the parts I’m going to refer to continually.
JM: That’s an interesting idea for me, because we’re not just talking about the object. The book shaped our model of knowledge: when the index was invented in the 12th century, that made a book a certain kind of ordered thing, and it led to three centuries of Scholasticism -- the whole university system where knowledge was contained in these ordered things and could be found in these books by looking a certain way. And all the content to come was shaped by this intellectual etiquette, if you will. What’s important about digitization, when the model is search instead of an alphabetical index, is that it changes what one’s model of knowledge is.
JM: Which is really the subtext of Here Comes Everybody -- that there is a different way of apprehending the world now because of this.
CS: Yes. I think one of the ways of apprehending the world that’s actually showing up already in the academy is the so-called “one-box search,” where you don’t have to say, “This is the database I’m looking in.” One-box search privileges interdisciplinary work. Because if I search for a particular string or phrase, I am suddenly getting back results from psychology, sociology, economics, political science -- all in the same search query. Disciplinary boundaries are just assaulted, rather than doubled down; if I have to know the database before I search it, then to become a good political scientist I have to know which journals are relevant.
AK: Do you think that that’s one of the reasons why you’ve been intellectually successful, because you cross boundaries so naturally? You started life as a creative artist. You’ve been a technologist, a theorist...
CS: I’m not sure that I’ve gotten to the level of theory. In the academy, there’s a pretty rigid definition.
AK: But, in a recent Vanity Fair piece, Michael Wolff referred to you as the Marcuse of the early 21st century.
CS: [LAUGHS] That’s on my to-read list. I haven’t read it yet.
AK: So you’re in high company now. You’re the Frankfurt School 2.0.
CS: I will tell my wife, who is a political philosopher. She’ll be tickled. But certainly, there are revolutions in which people’s principal skill is not being afraid of what they don’t understand. These people do well in revolutionary times. I jumped into this not because I was good at it, but because I didn’t have much to lose. That will give way -- in fact, it even is giving way now. I started doing this in a day when you had to understand something about how the Internet works just to use it. Literally. There was no web, there was no graphic interface or anything like that. You had to understand something about the plumbing just to go to the bathroom. It’s like having to know how your car’s engine works just to own a car. Those days are long gone. In fact, some of the interesting commentary on the iPad considers it as a new model for how little you have to know about your computer in order to get it to do what you want to do.
AK: Which is why the techies don’t like it.
CS: Exactly! But I do think that early on in any revolution, people who are comfortable operating without strong disciplinary boundaries are liable to do well, just because nobody knows where the next good idea is going to come from. Louis Menand just put four of his essays in a very interesting book on higher education. It’s clear that disciplinary boundaries are a response to the profusion of knowledge; that response says, “This is where psychology ends and sociology begins, and if you cross the hall, you’re operating in their discipline and not ours, and they have different choices.”
These kinds of boundaries become really significant in two different areas. They become significant intellectually, and they also become significant for the development of things like tenure. So the really mundane -- “This is how the profession works, recognizes quality and promotes itself” -- and also -- “This is the intellectual output that’s consumed by society and shapes people’s ideas” and so forth -- all get bound together tightly, and nobody inside the system can really imagine a change.
I think another question arises right now because, in the early days of this change, we’ve plainly lowered the threshold of disciplinary boundaries -- there are so many inputs. There are people who are willing to dive in and try stuff and are getting things done. But as people get better at things, we are starting to see the return of some kind of discipline -- people specializing in different aspects of the service. At Google there are people who do nothing but optimize the file system all day long. So it’s not just server side and client side. It’s really, really specialized.
But are the disciplinary models of the new medium going to be more like a network or are they going to be more like a series of silos? I’m going to bet on the network model, which is to say it’s likelier that disciplines in the world we’re entering are going to have not so much a canon that says, “This is the edge of what’s important,” so much as, “This is the core of what we’re interested in wherever the currents come from.” There is still going to be a strong difference between psychology and sociology at the center of those two professions -- a concentration on individual thinking versus a concentration on group dynamics. But I think there’s going to be less of a sharp edge between them, and I think there will be more people -- or really, probably, more pairs or groups of people -- who are doing and publishing research that crosses that boundary back and forth.
AK: When it comes to books, though, one of the big traditional boundaries is between fiction and nonfiction. Do you think that that’s done away with?
CS: Ah. The James Frey problem.
AK: Yes. But as the book becomes alive, and the novelists can write more factually...
JM: I’m not sure that this concern isn’t a very recent development as well, created by the industry to shelve books in bookstores and to disseminate books in the trade, and then inflated to another kind of discussion…
CS: Look at the difference between how a library shelves books and how a bookstore does -- serious fiction exists as a category in bookstores but not in libraries.
JM: If we go back to Dickens again, there’s a combination of journalistic and storytelling impulses. Consider our obsession with whether a memoir is true or a novel is based on real events -- in any interview you hear with a novelist, the interviewer is generally asking the same question of the author again and again: “Did this really happen to you?” We don’t -- we can’t -- observe those boundaries in our imaginative lives as clearly as the industry or the media does, or wants to.
CS: There was this moment when Oprah got called out on the James Frey thing -- in what must in retrospect have been a moment of absent-mindedness, she told her audience the truth, and she said something like, Look, if it’s out in the public eye, it’s been massaged. Anyone telling a story is telling a story. There is no such thing as unmediated expression, there is no direct access to truth. Those weren’t the words she used, but that was the message, in which she said, essentially, Frey made you feel something, and your feelings were real, and don’t get so hung up on this. Her audience went berserk. And because they went berserk in an age where they could amplify one another’s anger using all the tools we’re used to, there was a public relations shitstorm -- she was called to task for possibly the only time in her career, or certainly for the most public time in her whole career.
If I want to talk about the border between fiction and nonfiction getting erased, I point to Oprah’s audience. For the mass of the population, I don’t think that we are going to quickly enter a new world in which the truth or falsehood of an assertion is ever thought of in complicated and subtle ways. This may be truer here than it is in Europe. The U.S. is unusual in a lot of cultural ways. People want to know if this really happened to the author. The radio interviewers ask that question over and over again, because the demand for there to be an uncomplicated answer to that question is in no way assuaged by telling them that there isn’t an uncomplicated answer.
AK: It’s no coincidence that technology enables this kind of intimacy.
CS: Yes, exactly. The ability to invent a persona and manage its signature is possible because there’s less face-to-face contact on the Internet, even less telephone contact, and much more reliance on the digital traces people leave on websites. People didn’t just love the Frey memoir because they thought it was true. They loved his memoir because it seemed impossible that it was true, and they were still being told it was true. Augusten Burroughs, same thing. For as long as memoir culture is in its current mode, there’s always going to be a premium on a kind of faking it, because those are the books that sell well. It’s the stuff that’s right on the edge.
I remember years ago, a guy I worked with in the theater found a ten-year-old bottle of moonshine in his basement in North Carolina. He said, “I don’t know if I should drink this or not.” So he called up a friend of his who knew a little bit more about moonshine than he did, and he said, “I found this moonshine; can I drink this?” His friend said, “I’ll tell you what. Just pour out a little capful of it and set it on fire, and if it burns blue, it’s fine -- drink it. If it burns yellow, don’t drink it. If it burns blue with a yellow tip, I’ll pay you ten dollars for a glass.” That’s the memoir, right? If it burns blue with a yellow tip... You can’t even believe it’s true, but also it’s just barely palatable to consume. That’s what James Frey and J.T. Leroy and Augusten Burroughs write: the impossibly crushing circumstances of their lives, after which they acquire a kind of literary ability to tell it as a story.
The demand for that is going to remain there. So I think while the line between fiction and nonfiction may be increasingly blurred in practice, I think the public’s demand to be told there’s a sharp line is unlikely to shift quickly.
AK: Actually, I don’t agree with what you said earlier, Clay. I think technology has caught up with culture, rather than the other way around. In this sense, technology now is feeding our appetite for intimacy.
CS: Yeah, that’s right. I will agree with Andrew here.
AK: Not for the first time.
CS: Not for the first time. [LAUGHS] One of the things that freaks me out about the music scene is that hip-hop preceded the digital encoding of music. They were doing sampling and remixing and intercutting and mashups -- call it whatever you want -- with turntables and a microphone. When you hear what Kool DJ Herc or Double Dee and Steinski were doing -- insane! Insane stuff you would never try to do with only analog equipment, except that that was all they had. So when digital music came on, it was like gasoline on the fire, because all of a sudden, all the stuff that they’d just barely been able to hold together with two turntables and a microphone turned into something that could be cut-and-pasted.
I guess what I’m saying about technology preceding culture versus culture preceding technology is: when there’s deep change, it takes a long time for the culture to catch up. But deep changes never happen without some precursor. Take, for example, the early history of the book. Scholastic culture arose around the book as an object, and it was the automation of the production of that object that Gutenberg was responsible for, not the fact of the original intuition that folded and cut pages were better than rolled parchments.
TV, weirdly, created a grid of intimacy among 10 million. You would not think that a medium that reaches 10 million people would have intimacy as its core virtue.
AK: That’s why the most valuable TV guys were the late-night talk show hosts whose whole premise, whose whole value was building intimacy with their audience.
CS: There was a really interesting article in the Atlantic about George Noory, the guy who does a late-night show called Coast to Coast, and about exactly this -- that late night is when you’re reaching people. QVC -- Quality Value Convenience -- I think that was the original home shopping network. QVC has this long training course for its phone salespeople, because you don’t get on, make the sale, and get off. You get on, you talk to the person, you compliment the person. Because what do you know about the caller? That this is a person who is sitting alone at 3:00am. So it’s very clear what the value of a phone call is at that point, and it’s not just reflected in the transactional value.
So I think that Andrew is right in that the desire for intimacy in a largely dissociated environment, coinciding with the decline of social capital, created a demand that made the Internet, again, like gasoline on the fire.
AK: We’ve talked a little bit about what new books are or might be, but to me a more interesting question is who or what the new author is. Do you want to say something about that?
CS: This is the literacy question again.
JM: It circles back to why it’s important that publishing is the new literacy. What strikes me is that, if you look at other periods of great cross-disciplinary ferment, the early years of the Enlightenment, say, you had people who found ways to communicate across disciplines effectively -- through pamphlets and international newsletters then, rather than the Internet.
JM: Your piece, for instance, on Eisenstein, which we got on the web, because you could publish it there easily, is not that different from what Diderot or Melchior Grimm were doing in sending these newsletters back and forth between Germany and France. It’s just easier now, and everybody can do it. That’s what I was trying to say before about writing being free of the book for a long time before the modern commodity of the book contained it. I’m not talking about the history of the codex and Gutenberg, but of the act of setting something on paper and sending it out into the world without imprisoning it in the book. Self-publishing -- publishing as the new literacy -- allows that on a massive scale.
CS: Yes. Putting something on paper used to be a way of increasing the number of copies in circulation, and now it’s a way of decreasing the number of copies in circulation, by comparison to the digital media.
It’s interesting. From my point of view, I am a writer but not an author, which is to say I am a person who writes. My introduction to this medium was on Usenet, a medium with no graphic capabilities, and so to have a presence, literally to be there at all, was to write all the time. And I write in a very conversational style. It’s not the same style as an essay style. But nevertheless, it’s where I learned to write. I should have learned it in college, but alas, instead I learned it on Usenet.
There are still people in this city -- I went to school with many of them -- for whom the kind of Algonquin Club energy of authorship and being a writer and so forth is the aspiration. My sense is there are fewer of them now, fewer 23-year-olds.
AK: You went to Yale, right?
CS: I went to Yale.
AK: Are you saying that you didn’t learn to write at Yale? You did theater studies at Yale, right?
AK: [LAUGHING] You must have been semi-literate to get in.
CS: [LAUGHS] I was not illiterate prior to applying.
AK: So what do you mean when you say you learned to write on Usenet, having been a Yale grad?
CS: What I mean is that what I wrote at Yale was for an audience of a single person, my professor, and that it was intended to convince him that I knew what I was talking about so he would give me a good grade, rather than being intended to communicate something that would convince him to change his mind, or to give him a framework for thinking about something. In a way, writing a college paper in its current structure is almost custom-designed to crush in the student the idea of writing as a communicative act, because it feels like a long, highly structured interoffice memo rather than an address to the world.
I’ll tell you two things I’ve done here at NYU with the writing my students do for me. One, I assign them to write for each other. So they think, “My peers are going to read this and also my professor is going to read this.” You’d think they’d be more concerned about me reading it, but the quality goes up when they know their friends are going to read it.
The other thing I do, with some of their stuff, is publish it online. I took a whole bunch of papers by my students from a class we did on the effect of the Internet on the 2008 Presidential election, and I just put them in a big folder and put them online. People’s reaction to this was: “Oh, I may actually be communicating something; I’d better get it together here.”
I never had that experience at Yale, not because Yale was not good at teaching writing. In fact, famously, the Daily Themes course is essentially a boot camp for writers. But in my ordinary classes, my experience of writing was that it wasn’t a communicative act to people I didn’t know.
AK: You’re basically saying that the disappearance of privacy might be a good thing for writers. Although I think Proust or certain other confessional writers might disagree with what you say.
CS: Right. The Saint Augustines of the world are always going to need to remove themselves from this. But writing is a big tent. The kind of writing I do has always been designed either to elicit a conversation or to provide some framework for thinking about a problem, and you do that better if you’re dealing with people whom you don’t know in advance and who may not be inclined to agree with you. Usenet is a much better environment for that, frankly, than the Yale campus.
AK: Let’s say some of these kids at NYU grow up to be 21st-century professional authors. Given the kind of training they’re getting and the media they’re growing up surrounded by, why would they be different as authors from you or me?
CS: First of all, I think we will see fewer authors and more writers. There’s this long, long, lonely gap between the 8,000-word New Yorker article and the 80,000-word book. And there are a bunch of interesting things that are about 20,000 words long. In fact, it’s gotten to the point where, if you’re reviewing a nonfiction book, it’s commonplace, if you like it, to assure the readers of the review that this is not just a magazine article inflated to 80,000 words so that it can be sold on the shelves at the bookstore. Which, in a way, is saying there’s a bunch of stuff that actually would be better at 20,000 or 25,000 words than at 80,000 words.
If that stretch opens up, then I think one of the things we’ll see is that an enormous amount of long-form writing that was kind of just pushed past the finish line of 80,000 words is going to revert to 40,000, or 20,000. Given the choice between an 80,000-word book by a science writer about a particular branch of science and a series of 20,000-word essays from scientists working in different disciplines, I’d rather read the essays -- for anybody except the best science writers, the people who are actually adding their own thoughts to the mix rather than just concatenating. The big question for me isn’t so much what happens to writers (although I think it’s an interesting question), but rather, what happens to the support writers typically get from the publisher?
The hard question, I think, is: long-form writing benefits enormously from a second set of eyes, or a second-third-fourth-fifth-sixth set of eyes -- a copy editor, proofreader, etc. When I do a book manuscript and hand it over to Penguin, the amount it improves after I’m nominally done with it is astonishing. I can handle a process of going over it and over it and over it to get a 2,500-word essay, like the Eisenstein one, into that sort of form. I can’t do it for a book-length manuscript. Yet, once the book moves away from the bottleneck that allows the publisher to charge for the scarcity, which is where the copy editor’s fee comes from in the first place, I don’t know how writers of the future, at whatever length, take advantage of those capabilities.
People often ask me, “Why are you writing a book, given that it gets folded between the pages of dead trees?” And so on. My response to this has been, from the beginning, that I’m not getting edited and copy-edited and fact-checked and legally checked as the price I pay for having my name on the spine of the book -- that has really never been a goal of mine. I didn’t grow up with that sort of Algonquin Club energy. It’s the other way around. Right now, the way you get other people to look at your book and comb through it for inconsistencies and talk about more felicitous phrasing is to agree to publish it. If there was some way to support that ecosystem -- the ecosystem of “we are going to make long-form writing better by treating the question of quality and accuracy and felicity as a group effort” -- that would work for weblogs, I’d be all over it. I think a lot of people would.
One of the things I noticed doing the first book is that you learn a lot of things doing a book that are lessons you can only apply to doing another book. They are really specialized things you do that aren’t about the argument you’re making, but about being part of the publishing industry. In a way, the notion of authorship retains its power in part because the bar of that hazing ritual is still high enough that, once you’re on the far side of it, doing another book is the most cost-effective use of your time, because you’ve already mastered these somewhat arcane skills. To the degree that writing -- long-form writing in particular -- becomes more broadly produced, I think the question will be reversed: how can we make the skills that publishers have mastered now flow outwards to new forms of long-form writing? That requires new business models that are not yet on the horizon. And I’m not the business model guy.
That to me is the interesting part -- not so much what the writers of tomorrow will be like, but rather, what’s the ecosystem for improving writing going to be like? Because right now, you’re basically either self-published and there’s no ecosystem, or you’re published by a publisher, and then you get copy-edited and legally edited, and all the rest of it. It’s that second set of values that is, in fact, more at risk than the writing itself in the current environment.
AK: Tell us about the new book, Cognitive Surplus. What’s it about?
CS: It’s about the idea of treating people’s free time as an aggregate resource that’s used for joint collaborative projects, Wikipedia and Open Source being the two most famous ones. But I’m also interested in things like environmental groups, ride-sharing, the Responsible Citizens -- a group of kids in Pakistan cleaning up market streets to try to create a broader civic culture -- all of these ways of trying to use our new tools to create collective and not just personal value. Whereas the last book, Here Comes Everybody, was just “How did we get here?”, Cognitive Surplus is: “We’ve got this set of capabilities, where are we going?”
What’s different I think about Cognitive Surplus is saying that the cultural norms that we set now will determine the difference between how much of what we’re doing online is essentially self-amusement (mutually created value, and so you get something funny to look at on your coffee break or whatever) versus stuff that really throws off a lot of significant public and civic value. I like lol-cats as much as the next guy, and actually maybe more. But the precious end of the scale, and the end of the scale that’s hardest to get going, is the civic value. The book is essentially about why that civic value matters and how to foster it.
Posted May 10, 2011
This brainy book, with its fascinating historical and scientific references, illuminates a central aspect of 21st-century life - what people are doing on the Internet actively and jointly with the thinking time they used to spend watching TV passively and alone - and enables readers to see this slice of human experience in a new way. New York University professor Clay Shirky intelligently and insightfully explains how putting the Internet and its online social media tools into the hands of nearly two billion people who have more than a trillion hours of free time is resulting in a new, optimistic and empowered world. He cites such unique, useful Web developments as Wikipedia, PickupPal, the Apache Project and countless other online wonders. If you don't yet fully understand the potential of social media, you will when you read this book. getAbstract recommends this outstanding work to anyone who wants to know more about how and why the Internet and social media are dramatically changing the world.