Moscow Calling

A couple of weeks ago it was my very great pleasure to attend and give a talk at a conference in Moscow. Yes, Moscow. It went a little something like this... Hold on a sec. How did this come about? Well, thanks go to Brian Green of Editeur, the bibliographic standards body, and the guys at Nature Web Publishing who put him on to me. Of course thanks also go to Biblio-Globus, the fantastic bookshop that took us out there. Anyway, through a complicated series of events I found myself tasked with writing a piece on social networking sites and publishing for a conference on Standards in a Digital Age. Having never done anything like this before, I knew it was going to be interesting.

So where to start? The talk had three parts. Part 1 looked at what we mean by social networking sites, extending the definition from your basic sites like Facebook (or indeed VKontakte) to social media more broadly, including concepts of web 2.0, microblogging sites, portals for user-generated content, business networking and specifically bookish sites like LibraryThing or Shelfari et al. In Part 2 I then tried to illustrate why these sites and services might have some relevance to the whole business of publishing by arguing that 1) they have altered the structure of broadcast communication to some extent, 2) they are the frontier of marketing, and viral marketing in particular, and 3) they are loci of user-generated content, the now hackneyed phrase that nonetheless describes a key concept for contemporary media industries. Part 3 then went on to describe ways in which publishers can get engaged, from the micro scale of building a book group or profile page to the macro scale of building a site like Authonomy or LovelyBooks.

This was the conclusion: "There are no hard and fast rules for publishers and booksellers entering social networking sites. But it does require creativity, engagement, listening and good faith. It can be done cheaply or expensively, it can work or it can fail, but it can't be ignored."

The main challenge of the talk was trying to pitch it at a level where people completely new to the sites and ideas, and people highly familiar with them, would both feel they had got something from it. A further challenge was catering for simultaneous translation into Russian: text-heavy PowerPoint slides were out, so screenshots provided the bulk of the visuals. In the end I think it went quite well. Well, I hope so anyway. Russia has the fastest growing internet usage of any European country and is also seeing the steepest growth in social network usage. The two biggest networks, the aforementioned VKontakte and Odnoklassniki, have over 26 million users between them.

The whole experience of Moscow was fantastic. Huge thanks to our legendary host, Biblio-Globus owner Boris Semenovich, whose hospitality was utterly exceptional. Huge thanks also to Nelly, Zhanna and everyone else who made our stay so memorable and enjoyable. My co-delegates were great fun, hailing from England, Scotland, Germany and Canada. Before going to Russia I had imagined that vodka toast after vodka toast was a slightly mythical ideal of the true Russian dinner. I can happily confirm that it is not. Moscow itself is a vast and vastly impressive city, at once familiar and alien. Awesome stuff.

A book publisher's manifesto - Part VI (The End)

The marathon is almost over. Here's my final posting. Phew. Thanks for staying with me, for all the comments and the links. Great to have stirred up such a debate!

Publishers have always spoken proudly of their role as custodians of copyright, preservers of culture, but how much have they really done to ensure the existence of a digital archive? This – along with developing the interconnections within and across archives of content from multiple publishers – would be a clear role for publishers to take, but has Google already stolen a march there, too? The publishing world awaits the outcome of Google's legal battle with the Authors Guild, but in a way, the bluster about Google's generous interpretation of the fair use clause often only serves to cover up a sense of shame that it was not publishers who first chose to invest in the digitisation of our print archives and to develop the means to access them. Many historians and archivists and librarians are concerned about the possible impact on content quality of a mega-corporation focused in the main on expanding search, adding to its advertising revenue potential and providing 'good enough' information for the attention-poor consumers of today. Robert B Townsend outlines some of the flaws in the content and the metadata provided via Google Book Search and asks:

"…what's the rush? In Google's case the answer seems clear enough. Like any large corporation with a lot of excess cash the company seems bent on scooping up as much market share as possible, driving competition off the board, and increasing the number of people seeing (and clicking on) its highly lucrative ads or "renting" copies of the books. But I am not sure why the rest of us should share the company's sense of haste. Surely the libraries providing the content, and anyone else who cares about a rich digital environment, need to worry about the potential costs of creating a "universal library" that is filled with mistakes and an increasingly impenetrable smog of (mis)information. As historians we should ponder the costs to history if the real libraries take error-filled digital versions of particular books and bury the originals in a dark archive or the dumpster. And we should weigh the cost to historical thinking if the only substantive information one can glean from Google is precisely the kind of narrow facts and dates that earn history classes such a poor reputation. It is time, it seems, to think in a careful and systematic way about how this will affect our discipline, and the new modes of training and apparatus that will make it possible to negotiate the volume and flaws of the emerging digital landscape." (Robert B Townsend, Google Books: Is it good for History?, Perspectives, September 2007)

Whilst Google has led the drive to make book content 'discoverable' online, publishers have been slow to harness web techniques to promote and sell books, both in print and in digital formats. Many, many publishers are still nowhere near even managing the basics of systematically creating, storing and 'seeding' sample chapters, excerpts, audio or video author interviews, schedules of author appearances, links to media coverage, featured material on social networking sites and rich bibliographic material.

Whether publishers will find a way to cohabit with Google and the other search engines, to ensure that their content is discoverable through search but on their terms, and to regain the lead as specialists in the marketing and selling of books, of content, remains to be seen. Publishers certainly could have a role to play in working with Google and the other search engines to ensure that the highest standards of quality are upheld, that the metadata is accurate, and that the future users of the digital archive will find more than simply 'good enough' information and will be able to plough a rich seam of digital marketing materials in support of authors and their books. Let's hope, for a moment at least, that this is possible.

Whichever way it goes, for publishers to break their traditional boundaries and to develop into the publishing companies of tomorrow will require a step change in their form, culture and approach. Digital publishing strategies will need to move from defensive or protective to creative and liberal, with an emphasis on enabling readers to share and to change what they read. A move away from text-centricity and towards multimedia will no doubt be key, and this has repercussions for the kinds of rights that publishers will need to negotiate as well as for the skills they will require of their staff. Publishers will need to view themselves as shapers and enablers rather than producers and distributors, to take a project rather than a product approach and to embrace their position as merely a component element in a reader, writer, publisher circularity. They will need to embrace new business models and they may even need to become media companies rather than publishing companies. They will need to understand and know and connect with their readers far, far better and they will need to develop brands that hold the highest kudos for authors and imply brand values that appeal to readers around identifiable niches. Ultimately they may need to ready themselves sooner rather than later for a fight to the death, not only with their current partners in the distribution chain but also with non-traditional competitors who are rapidly devouring the space which has traditionally been reserved for them.

A book publisher's manifesto - Part V

The weekend brought us a break from my epic article-posting marathon, as our network server connection broke down and I could not retrieve the original article... So after a short break, here's Part V. We're nearly there now.

The question really is no longer, "Will consumers read on screens in the future?" or "Will all content be found on the Internet?" The question is rather, "How will consumers read on screens in the future?" and "How will all content be found on the Internet?" And as publishers have been latecomers to the online party, the question lurking behind all of this is what, if any, role do publishers have in the digital future? It's a future which is not too distant and in which texts are potentially increasingly interrelated, multiple information sources and media types are mashed together, and a combination of search and social networks provides the gateway and the guide to content online. Perhaps publishers might position themselves in new intermediary roles: helping authors to write through platforms, or bringing authors and readers together in new and creative ways. However, by and large, on a strictly technical level at least, publishers aren't needed at all for these functions. There is a tremendous amount of application software available online which can bring most of this about. Initiatives such as Amazon's CreateSpace bring authors and readers together and then apply the 'Wisdom of Crowds' to ensure that the best and most popular content rises to the top. Perhaps it could be argued that publishers will always be required in order to bear – or at least share – the financial risk of publishing a work, but again, with print distribution out of the equation, and with print on demand offering the ability to print a single copy for each single order, financial outlay in terms of production and product storage and delivery disappears.

Publishers need to work quickly to define what the quintessence of publishing is, what the core value provided by the publisher is beyond the technicalities of matching content with readers. When pressed to think about this, much of what publishers have to offer beyond the technicalities is qualitative rather than quantitative: stewardship, consultancy, an imprimatur. Will authors continue to value these things enough to believe that publishers are critical to the publication of their works?

An interesting question is that of scale. Should publishers be joining forces to create multi-publisher platforms, to dominate content networks by developing critical mass across content types and ensuring that content is interlinked in the most valuable and rich ways? If that is the case then publishers are probably mistaken in handing off this role to Google. In its current form, Google Book Search is already providing the access key to multi-publisher book content. It is, in effect, creating the online book platform. It does little to interlink the various texts, but that would be a logical next step. Any publisher which continues to regard Google as a benign partner helping to bring its valuable content to light on the Internet has its head firmly buried in the sand, but in the Internet space, publishers attempting to stand up to Google is a little like a small shoal of fish attempting to push back a tidal wave. In fact, 'standing up to Google' may not be the answer at all, but finding a way to complement Google is difficult when this Internet giant is so easily able to move into and occupy new digital spaces.
And Google's quiet announcement that it will invite Internet users to produce 'Knols' (units of knowledge; introductions to topics that will appear when a user searches on that subject) has been widely touted as a direct competitor to Wikipedia, but, more to the point, it firmly signals the search company's intent to move directly into the publishing space. Perhaps the only way to answer this will be for publishers to focus back on developing specialist expertise around vertical niches, taking advantage of the 'deep niche' provided in the long tail world of the Internet, as described so well by Michael Jensen in his article on the subject in the Journal of Electronic Publishing. In this context publishers would focus value around subject or genre expertise and intimate, direct market knowledge, providing editorial and marketing functions beyond the merely 'technical'. In this scenario publishers would need to move back further into the territory of filter and editorial consultant and to re-focus energies on their (oft forsaken) role as career nurturers for authors, a role currently shared at least with agents in the trade space. They would also need to develop brands around subject or genre niches so that their platforms are able to gain traction over those developed by competitors, and to become far, far better at direct sales and marketing. Publishers will need to press further into the retail space, developing direct relationships with consumers of their content, if they are to become an effective bridge between authors and readers. Whatever shape the future takes, it looks like publishers won't survive unless they regain some of the roles that over the years have been handed off to other partners in the distribution chain.

A book publisher's manifesto - Part III

Continuing the serialised version of my article for Library Trends: And whilst the edges of the book become more porous and the concept of the 'book as unit' slowly disappears further into history, new business models are already emerging. The value in the chain moves from a model which intertwines content with distribution to a model which simply values the content. Tim O'Reilly spotted this years ago and his company built Safari Books Online as a subscription service accessed with a browser, which now has revenues in excess of those widely cited for the entire downloadable eBook industry. As he points out in his recent blog post Bad Math among eBook enthusiasts on O'Reilly Radar (5th December 2007): "… as for the kind of books that you don't read from beginning to end, but just use to do a job like looking up information, or learning something new, the "all you can eat" subscription model may be more appropriate [than unitary pricing]. With Safari, we've increasingly moved from a "bookshelf" model (in which you put books on a bookshelf and can only swap at month end) to an all you can eat model, because we've discovered that people consume about the same amount of content regardless of how much you make available. All you can eat pricing lets people take what they need from more books, but it doesn't increase the total amount of content they consume. It merely changes the distribution, and in particular, favors the long tail over the head."

As Scott Karp observes of O'Reilly's comments in his blog post The Future of Print Publishing and Paid Content (6th December 2007) on Publishing 2.0: "Instant full access to a searchable digital library is a radically different form of distribution from buying reference books one at a time and putting them on your bookshelf. But here's the fascinating part — "it doesn't increase the total amount of content they consume." People still value and use the content in much the same way, despite the radically different distribution model. By unbundling these books into a digital library, consumers essentially repackage them by searching for and selecting specific content items. So even when consumers value content enough to pay for it, they intuitively understand that it doesn't cost the publisher nearly as much to make the content available digitally as it did to put all of those books physically on a shelf. That's why consumers aren't willing to pay for the equivalent of buying ALL the books in print. You can't price a bus ticket the same as a plane ticket simply because they both get you from point A to point B — it costs a lot less to drive a bus than fly a plane."

Online science fiction publisher Baen Books' Webscriptions offering puts a value on material pre-publication and demonstrates a successful, early move from unitary distribution and pricing to a flexible subscription model. This web-based re-creation of the serialised novel offers Baen's science fiction titles in three segments, published one month apart, beginning three months before the actual publication date. Each month four books are made available for $15. About two weeks after the last segment is delivered, print versions of the books become available in bookshops.

Publishers are also slowly waking up to the idea that, whilst the book online can no longer always afford to be an island, neither can the publisher. Consumers of books care very little, if at all, about publisher brands. Some authors are brands, but publishers have largely remained invisible to consumers in terms of branding. In the online space, publishers need to recognise that readers simply want the content they require – and fast, simply, without barriers or walls ring-fencing random selections of content purely because one content set belongs to one publisher and another set to a second, different publisher. A useful network of books will almost always, inevitably, cross the boundaries between a number of publishers. In the journals world this has been recognised and resolved by cross-publisher platforms and linking systems such as CrossRef and IngentaConnect. As books move online, similar developments will be necessary to connect the multiple references between books published by many different publishers, but book publishers have been far slower to develop cross-publisher platforms than journals publishers were, perhaps because the critical nature of citations offered a clearer strategic and commercial driver in the journals world. In the education market at least, the requirement for custom publishing, in which institutions, their academics and students are able to construct bespoke textbooks and course materials drawn from content published by multiple publishers, will no doubt only increase, and publishers will need to get a whole lot better at coming down from their ivory towers and working together.

A book publisher's manifesto - Part II

Continuing my six-part epic essay on the future of publishing. If there is one... As digital reading devices go, Amazon's Kindle is probably the first to at least recognise the importance of the 'connectivity' between our differing modes of reading, the fact that readers might like to follow up references within the text or to conduct a related search. The addition of wireless connectivity to the device and the capacity (although frustratingly limited) to connect to blogs, online newspapers and other web-based content goes some way towards recognising this, as well as acknowledging the fragmented, 'always on' nature of most people's reading habits today, allowing readers to move seamlessly from reading a few pages of a novel, say, to snacking on some news, before picking up a couple of blog feeds. This is absolutely not to say that the Kindle has tied up the future of digital reading and defined what the experience should be; far from it. It signals a step change in that it connects downloadable digital units of reading matter ('eBooks') with the more exploratory style of online reading and researching, and it is the first device to be intrinsically connected to a commercially viable eBook platform. However, the Kindle is merely one device with one very specific agenda and, as such, it provides only one small, rather flawed element of the picture that is emerging of a future for digital reading.

Reading is not an activity that can be defined simply, yet it is all too often described as a solitary, immersive experience, as in the experience of reading a novel for hours at a time. This is only one type of reading, and it is important to recognise that narrative fiction makes up less than 25% of the entire book market. In any case, even if a reader spends some solitary time reading, readers have always liked to swap views and ideas about the content of books, to turn over the corners of pages containing favourite passages they want to refer to again, and to write notes in the margins. Reading is a much less passive activity than it at first appears, and it is connected with many and diverse related activities. The Internet has not created a more active or proactive approach to reading, but it has enhanced it, enabled it to happen across more disparate networks and allowed it to be recorded, aggregated and interlinked in exciting new ways. The way in which books might begin to 'live' on the Internet will perhaps be the most palpable incarnation of Roland Barthes' theories in The Death of the Author, in which the author is no longer the focus of creative influence but merely a scriptor, and every work is "eternally written here and now," with each re-reading, because the "origin" of meaning lies exclusively in "language itself" and its impressions on the reader.

Publishers need to provide the tools of interaction and communication around book content and to be active within the digital spaces in which readers can discuss and interact with their content. It will no doubt become standard for digital texts to provide messaging and commenting functions alongside the core text, to enable readers to connect with other readers of the same text and to open up a dialogue with them. Readers are already connecting with each other – through blogs, discussion forums, social book-marking sites, book cataloguing sites and wikis. Publishers need to be at the centre of these digital conversations, driving their development and providing the tools for readers to engage with the text and with each other, if they are to remain relevant. Bob Stein at the Institute for the Future of the Book talks about "the networked book ... the book as a place, as social software - but basically ... the book at its most essential, a structured, sustained intellectual experience, a mover of ideas - reinvented in a peer-to-peer ecology."

I like Chris Meade's not drowning but waving, which illustrates how publishers should not hold on too tight to the shore as we set sail into future waters:

“We (a novelist friend and I) visit(ed) a fish shop by the river that was flooded out. They’d only just opened an extension built at a height recommended by a local fisherman who had told them, “That’s as high as the tide went nine years ago – you’ll be all right.” They weren’t.

Bloggers mix text with still images with moving pictures embedded from YouTube etc. – young people take that media mix for granted, and as consumers we all do, watching tv adaptations of favourite books, using the web to research more about the author to discuss at our reading group. A new generation of more consciously transliterate reader will take it as read that the text is surrounded by researches, images, networks of reader response to the point where these become an entirely integral part of the work of art, the author’s creative voice distinct but no longer so alone. The flooded fields are rather beautiful and it’s already hard to recall what the landscape looked like before. Nature can adapt instantly to change; it takes longer to redraw the maps.”

Not all books need to be networked books. There will still be a place for that deeply immersive, solitary reading in the future, I hope. But publishers had better be the ones defining what the shape of a 'networked book' should be nonetheless, because if they are not, someone else sure as hell will be.

A book publisher’s manifesto for the 21st century

Over the next few days I am going to blog a piece I have written for a US-based library journal, Library Trends, on how traditional publishers need to position themselves in the changing media flows of a networked era. It's a very long article so I'm gonna serialise it and blog it in six 'bite-sized' chunks over six days. Here's the introduction, which aims to set the scene. Scary.

Print sales are falling. According to the National Endowment for the Arts' 2007 report To Read or Not to Read, both reading standards and voluntary reading rates of traditional print material amongst young people are falling. Textbook publishers are fighting for sales, campaigning to alert students to the necessity of using their products. Hardback fiction has almost gone the way of the dinosaur. The open access debate rages on. Publishers and retailers have consolidated. More and more books are produced, but there is less and less choice on the high street. Leisure time is transferring away from books and reading, away from television even, to the Web; to social networking sites, blogs, instant messaging, video and music file sharing sites. The attention economy is shrinking, fast. Academic research is – for many students – all about search. Let's face it, for most students, actually, it's all about Google. Who needs books anymore? More to the point, who needs publishers?

In an 'always on' world in which everything is increasingly digital, where content is increasingly fragmented and 'bite-sized', where 'prosumers' merge the traditionally disparate roles of producer and consumer, where search replaces the library and where multimedia mash-ups – not text – hold the attraction for the digital natives who are growing up fast into the mass market of tomorrow, what role do publishers still have to play and how will they have to evolve to hold on to a continuing role in the writing and reading culture of the future? Will there even be a writing and reading culture as we know it, tomorrow? Is the publishing industry acting fast enough and working creatively enough to adapt to the new information and leisure economies?

Publishing is an old and established industry with its foundations firmly rooted in print culture. The publishing model has evolved over history in a very slow, organic fashion. The sedate pace of change has suited publishers. Stated simply, the journey of a text from author to reader has been a linear one, with publishers traditionally fulfilling the intermediary roles of arbiter, filter, custodian, marketer and distributor. There has been some blurring at the edges, some tinkering with the process, but little radical change. In the literary world, agents have, at least partially, usurped the arbiter and filter roles. Retailers have become, to some extent, marketers and, occasionally, have even become publishers themselves. However, by and large, the stages in the process have been clearly delineated and the role of the publisher clearly defined. From a print perspective at least, publishers have offered one key, relatively unique set of abilities: to produce, store and distribute the product to the market. The rise and rise of the Internet has begun to disrupt this linear structure and to introduce the circularity of a network. More challengingly, perhaps, it has raised the distinct possibility of publisher disintermediation by more or less removing as an obstacle the one critical offering previously unique to publishers: distribution.

Publishers – and, importantly, authors – will need increasingly to accept huge cultural and social and economic and educational changes and to respond to these in a positive and creative way. We will need to think much less about products and much more about content; we will need to think of 'the book' as a core or base structure, but perhaps one with more porous edges than it has had before. We will need to work out how to position the book at the centre of a network rather than how to distribute it to the end of a chain. We will need to recognise that readers are also writers and opinion formers and that they operate online within and across networks. We will need to understand that parts of books reference parts of other books and that now the network of meaning can be woven together digitally in a very real way, between content published and hosted by entirely separate entities. Perhaps most radically, we will have to consider whether a primary focus on text is enough in a world of multimedia mash-ups. In other words, publishers will need to think entirely differently about the very nature of the book and, in parallel, about how to market and sell those 'books' in the context of a wired world. Crucially, we will need to work out how we can add value as publishers within a circular, networked environment.

One of the key perception shifts that publishers need to make, then, is about the book as 'product'. Whilst the book continues to be viewed as a definable object within covers, as a singular 'unit', publishers will continue to limit their role in its production and distribution, and this is a sure-fire way for publishers to write themselves out of the future of content creation and dissemination. There are two areas of activity in the linear progression of a text between author and reader which have previously remained hidden from the reader: the development of the text itself (the writing and editing process), and the sales, marketing and distribution of the text. Readers have traditionally had no role in the former and only a limited role in the latter, through word of mouth recommendations or viral marketing. It is likely that today's digital natives, who have become 'prosumers' (producer/consumers) with alarming speed and, perhaps even more alarmingly, with widely differing levels of proficiency, will expect a great deal more involvement in both of these areas of activity if they are to be engaged by texts. Witness two mainstream examples, the Star Wars films and the Harry Potter books and films, both of which have developed massive prosumer (or 'superfan') followings, and both of which have seen conflict between the film companies and the fans who are creating content.

A minority of publishers have already begun to experiment with the blurring of these traditionally distinct boundaries. Chris Anderson's The Long Tail was of course written 'in public' via a blog, allowing readers to post comments and to be involved in the very act of writing the book. O'Reilly's Rough Cuts make a virtue of the concept of developing a book online first and have established a business model for combining pre-publication and post-publication access. McKenzie Wark's Gamer Theory was also blogged before it was produced as a book, allowing readers to post comments and to make suggestions about the shape of the book. GAM3R 7H30RY 1.1 was "a first stab at a new sort of 'networked book', a book that actually contains the conversation it engenders, and which, in turn, engenders it." At http://www.futureofthebook.org/mckenziewark/ readers can read the original version (v1.1), view the fully annotated version with all the reader comments alongside the core text, read v2.0, join a related discussion forum or view visualisations of theories within the text.

The locked-in perception of the book as a unit or a product has also led to digital 'strategies' which largely consist of the digitisation of existing print texts in order to create eBooks. This in turn has led to an obsessive focus on the reading device and a perception that the emergence of a 'killer device' will be a key driver in unlocking a digital future for books, in the way that the iPod was, say, for music. This is a flawed perspective in a number of ways, not least because it fails to recognise the enormous amount of online or digital 'reading' that already takes place on non-book-specific devices such as desktop PCs, laptops, PDAs and mobiles, but also because it fails to recognise that the very nature of books and reading is changing and will continue to change substantially. What is absolutely clear is that publishers need to become enablers for reading and its associated processes (discussion, research, note-taking, writing, reference following) to take place across a multitude of platforms and throughout all the varying modes of a reader's activities and lifestyle.

Telling Stories

Chances are that if you're reading this blog you will have come across Penguin's grand projet, We Tell Stories. In case you haven't (where have you been?), it's six digital stories and an ARG from Penguin UK and Six to Start, a funky start-up that builds cool games. Enough has been said, for and against, in terms of content and conception, but this piece on blog powerhouse Gawker got me thinking. It's hard to know exactly what Penguin's criterion of success for this project is: it must have cost a bomb and has no obvious revenue stream. As for traffic figures, I haven't a clue. In terms of coverage I think it can definitely be considered a success, having been featured in Newsweek, USA Today and Wired amongst others despite the ARG being a UK-only affair. If nothing else it has introduced many people to a new way of storytelling and pioneered digital fiction in mainstream publishing.

Gawker don't seem to like this. In the louche style characteristic of the site(s) they ask: "There's got to be a better way for publishers to get people to read more books... using actual books. Um, right?" Um, no. Because I don't think Penguin were trying to get people to read more books.

Jeremy Ettinghausen, the man behind the project and new-found web celebrity, has specifically stated that the project is not about print, in fact quite the reverse, telling Newsweek: "[ebooks] are pretty much the same thing as the print book but delivered in a different way. We thought we'd try something a little more ambitious and actually develop stories designed for the Internet, not adapted to it." Rather than being about books, this is specifically about moving away from them.

Fair enough. As the name suggests, this is part of a view that sees publishers not just as creators of books but as curators of stories. Had this attitude been more prevalent over the past few hundred years, no doubt the media landscape would look very different today. Opportunities missed, from film to gaming, might have been taken and a more integrated approach to narrative entertainment might have prevailed.

When Gawker say "[they] have a new project to tell the stories of books online — using new media, get it?", it's the sneer of a new media company suddenly fearful that its very cutting-edge newness is being eroded by so-called old media companies keen to redefine exactly what that means. Gawker suggest that they read books to get away from the internet, something I can sympathise with, but publishers are still well poised to make entertaining interventions on the web, using capacities built up from the book world to find new species of storytelling.

Publishing, in some areas at least, has been hit hard by the web. Take maps. Why buy a map when Google Maps is free? And better? The 21 Steps was an inventive use of Google Maps that in some small way marked a kind of reclamation of the space. Ok, it might not do anything in itself, but it points to a future where publishers can do more than just co-exist with the web, aloof new media neighbours or no. For me that has to be a good thing.

Meanwhile you can watch the ARG evolve on the unfiction boards, as good as playing for those with no time, I tell myself.

Photo: 16/06/06 Dramatis Personae by Andrew Coulter Enright

The Literary Internet

Faber CEO Stephen Page has caused a mini storm by arguing, in an article written for the Guardian, that the web offers a haven for embattled literary publishing. Much of the fuss seems to be that Faber & Faber, the epitome of highbrow, the aristocracy of publishing etc, is now getting involved with the web, something applauded and decried in roughly equal measure. Page is to be applauded, not least because his sentiments echo some recent posts here on the digitalist. He writes: "So publishers must harness the great power of online networks through enriching reader experience. We must provide content that can be searched and browsed, and create extra materials - interviews, podcasts and the like... The key to this is just to make available and to resist too much control". So far, so commonsensical: a fair point amounting to no more than what is currently the standard modus operandi of most media organisations. His contention that "Literature can thrive in these [web] places" is more interesting, in that Page is arguing that specifically literary fiction, harassed by an indifferent readership and squeezed by the exigencies of economic survival, not only has a role and place on the web, but may find in the web its saviour.

In a fascinating response Sebastian Mary, writing at if:book, argues that in fact the web is antithetical to the whole concept of the literary, that "the ideology of 'literary' is inseparable from print". Key to this are ideas of authorship, originality and publication in the grand, traditional sense, ideas that begin to disintegrate in the participatory, collaborative and imitative forms facilitated by and in web culture. With concrete examples like Protagonize enacting a death of the author beyond anything conceived on the Left Bank of Paris, the ideals of the literary are undercut and circumvented.

While I agree that there are fundamental differences between literary and web cultures, they are more closely aligned than might first appear. The whole concept of authorship and originality would have been alien to Shakespeare; in the Renaissance, plotlines and phrases were freely borrowed, while the whole acting company would have had input into the writing of a play. Yet Shakespeare rests at the very acme of what we consider the literary. This suggests that the literary is a malleable construct, and in the transition to the web, channelling that unique ability to bring together dispersed, niche groups, we will have to once again redefine our concept of the literary, just as Romanticism gave rise to the cult of genius and in doing so created many of our present notions of authorship. The "ideology of literature" has never been immutable; the web is evolving it rather than superseding or conflicting with it.

What Page is doing might seem more rearguard, in that he proposes almost using the web as the last redoubt for literary fiction rather than seeking to alter what we mean by literary in the first place, whereas Mary argues that there is no point in transplanting print cultures to the web in any case. Overall, though, I think Mary and Page are gesturing towards the same conclusion, in that they both see the web as a place primarily for the discussion of literature (Mary: "This isn't to suggest that there's no room for 'the literary' online. Finding new writers; building a community to peer-review drafts; promoting work; pushing out content to draw people back to a publisher's site to buy books"; Page: "publishers can now build powerful online places to showcase their books through their own and others' websites and build communities around their own areas of particular interest and do so with writer"). Crucially, though, neither sees the web as the actual locus of distinctively "literary" creation and delivery.

On that the jury is still out; my feeling is that it will be. What seems assured is that the web is now the central forum for the discussion of literature (especially given the demise of the review pages in the US), that many people already read more on a screen than they do in print, and that digital delivery can be extremely simple and efficient. The conjunction of these factors indicates that the web might well be literary in more than the discursive sense, even as it remoulds that sense.

Scraping Fiction

Following on from James' post about fan fiction, it seems that some of the issues are applicable not just to content as such, but also to the wider concept of data. The idea behind scraping is simple: a program takes information from one web page and re-presents it on another. It means that websites can, in theory, take data and then use it in new ways. Popular scraping services like Dapper make the process easy and efficient, while a whole sub-industry has built up around translating information from one site to another, with tools such as this Ruby on Rails kit being widely available.
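To make the mechanics concrete, here is a minimal sketch of a scraper in Python, using only the standard library; the URL is just a placeholder and the choice of links as the data to extract is purely illustrative. It fetches a page, pulls out structured pieces of information (link targets and their text) and returns them in a form another program or site could re-display or remix. Services like Dapper essentially automate and generalise this kind of extraction.

```python
# A minimal illustration of the scraping idea described above: fetch a page,
# extract some structured data from it (here, the links and their text), and
# return it in a form that another program or site could re-use.
# Standard library only; the URL below is just a placeholder.

from html.parser import HTMLParser
from urllib.request import urlopen


class LinkScraper(HTMLParser):
    """Collect (href, link text) pairs from an HTML document."""

    def __init__(self):
        super().__init__()
        self.links = []            # list of (href, text) tuples found so far
        self._current_href = None  # href of the <a> tag we are inside, if any
        self._current_text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href")
            self._current_text = []

    def handle_data(self, data):
        if self._current_href is not None:
            self._current_text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href is not None:
            text = "".join(self._current_text).strip()
            self.links.append((self._current_href, text))
            self._current_href = None


def scrape_links(url):
    """Download a page and return the (href, text) pairs found on it."""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = LinkScraper()
    parser.feed(html)
    return parser.links


if __name__ == "__main__":
    # The scraped data could now be re-displayed, aggregated or mashed up elsewhere.
    for href, text in scrape_links("http://example.com/"):
        print(text, "->", href)
```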

However, as this feature points out, the whole concept is increasingly problematic. Scraping essentially relies on the co-operation of the sites being scraped, and those tend to be the most popular: Google, eBay, Amazon etc. Most of the time sites are happy to be scraped, as it raises the profile of the site and the data it is displaying. Plus it can be difficult to stop.

As the article makes clear, though, there is an increasing number of sites that are not willing to let their data be scraped. For example, the listings site Craigslist has cracked down on sites that scraped its listings and repackaged them, as did Alexa, the Amazon-owned web information service, which clamped down on sites using Alexa data.
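Stopping scrapers outright is difficult, but a site can at least signal which parts of it automated agents should leave alone via its robots.txt file, and a well-behaved scraper checks those rules before fetching anything. A minimal sketch, again in Python with only the standard library (the domain, path and user-agent string are hypothetical):

```python
# A polite scraper checks a site's robots.txt before fetching anything and
# respects the rules it finds there. Domain, path and user-agent string below
# are hypothetical placeholders.

from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("http://example.com/robots.txt")
robots.read()  # download and parse the site's crawling rules

url = "http://example.com/listings"
if robots.can_fetch("ExampleScraperBot", url):
    print("Allowed to fetch", url)
else:
    print("robots.txt asks us not to fetch", url)
```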

There is an obvious parallel between fan fiction and scraping sites. Both involve taking a proprietary piece of information and converting it for a new and altered consumption that in some way augments or transforms the original. In both there is a delicate balance between, on the one hand, the risks of revenue and reputation damage (e.g. Warner Bros' argument in the protracted Harry Potter wars) and of copyright theft, and, on the other, the massive benefits of visibility for the original owner or producer of the data. This is the critical faultline of the web. What is better: control or visibility?

In a perceptive post Tim O'Reilly draws an analogy between banks trading for their own accounts and websites that formerly directed traffic away to other sites but now keep it within their own, trading for their own screen views rather than those of others. Here is a shift from, say, visibility aid to producer of proprietary content. It suggests a trend towards control, even as DRM and fan fic lawsuits begin to look more and more anachronistic.

Perhaps the best way of balancing these competing demands is through Creative Commons licences: these preserve the integrity of the original work and its revenue stream whilst also increasing the much-needed visibility of the product or data. In the case of Craigslist, for instance, one of the major problems was with Google ads being displayed next to the scraped listings. If there were no ads on the original, they argued, why should someone else profit from the data they had aggregated?

It makes sense, though, for Craigslist entries to be scraped, as anything that increases views of those listings by definition improves the listings; a CC licence obviates the issue of who is monetising the content. Likewise the ugly spectacle of media companies hounding their fans could be eased in a similar way. Seeing as a data/content distinction is fairly meaningless on the web, some kind of Creative Commons licence could be built into future works of HP magnitude, which would allow people to build on them provided they acknowledged the source. Some kind of royalty arrangement could even be built in (if a fan fic became profitable or if a company was willing to take the risk).

Creative Commons licences come with the potential flexibility to allow alteration, a key sticking point in the debate over fan fiction.

A perfect illustration of this movement, skirting between the transformative, free culture of the web, the need for visibility and the demand for greater proprietary content, is the Google Knols project, at least as far as it is currently possible to tell. In the screenshot the Knol is released under a Creative Commons 3.0 licence. The content is thus hosted and displayed by Google, but is available for use elsewhere.

Scraping and fan fiction are raising new questions for viewer-hungry websites and media producers, questions that require a new approach to concepts of ownership and data usage.

For fans of fan fiction

There's been some discussion on blogs recently about the Organization for Transformative Works (OTW), which is a new "nonprofit organization established by fans to serve the interests of fans by providing access to and preserving the history of fanworks and fan culture in its myriad forms."

There is a post about the OTW on if:book:

Interestingly, the OTW defines itself — and by implication, fan culture in general — as a "predominately female community." The board of directors is made up of a distinguished and, diverging from fan culture norms, non-anonymous group of women academics spanning film studies, english, interaction design and law, and chaired by the bestselling fantasy author Naomi Novik.

John Scalzi posts about the OTW on his Whatever blog, making some interesting points, and eliciting a fair few comments:

Among [the OTW's] many plans is “Establishing a legal defense project and forming alliances to defend fanworks from legal challenge,” which basically means that the OTW is planning to make the argument that fan writing is fair use under copyright. Or, as the organization states in its “Our Vision” statement: “We envision a future in which all fannish works are recognized as legal and transformative and are accepted as a legitimate creative activity.”

And another on Everybody's Libraries:

If you maintain a library, you might want to watch the sort of interaction going on here, even if you don’t particularly care about fanfic. Collection building and public service functions in the digital age often have to negotiate similar gray areas that aren’t neatly covered in law, but have important social aspects. It can be useful to look and see what sorts of practices build up owner and user communities, and what tears them down.

Reading these posts, and the OTW site, put me in mind of a (fairly) recent article in Wired about the fanfic equivalent of manga comics in Japan. Daniel Pink writes about the manga industry in Japan, and about how fan manga has found a place alongside manga publishers.

I spent two days at Super Comic City. But an American intellectual property lawyer probably would not have lasted more than 15 minutes. After cruising just one or two aisles, he would have thudded to the floor in a dead faint. About 90 percent of the material for sale — how to put this — borrows liberally from existing works. Actually, let me be blunter: The copyright violations are flagrant, shameless, and widespread.

It seems that publishers unofficially accept this position, in which fans essentially violate intellectual property rights and make more or less money out of it, because experience has taught publishers that a vibrant and open fan fiction market supports sales of the copyrighted works rather than diminishing them.

Given the freedom to fill in back story and plot loopholes, complete unfinished stories and extend characters, fans are bolstering the position of the publishers' manga as the 'original', the 'source'. Readers return, then, to the source and in so doing extend the lifespan of those publications; as it grows longer, the tail feeds the head.