By some accounts, there's an awful lot riding on the success of Scalar, a much buzzed-about digital publishing platform that is still working towards a beta release. As early as last February, Marc Parry suggested in The Chronicle of Higher Ed that if a tool like Scalar were to succeed in bridging some of the gap between popular web media and scholarly work, such a success could "perhaps bolster the agencies that finance it, a timely move now that Republican lawmakers, looking for federal budget cuts, are calling for the elimination of the [National Endowment for the Humanities]."
As consensus continues to coalesce around the idea that academic publishing must and shall do more to embrace digital media, the Scalar software is emerging as perhaps the best bet in what looks more and more like a high-stakes game. An earlier attempt at developing a multimedia authoring tool similar to what the Scalar platform promises produced a first-version failure and considerable frustration, despite high hopes and more than $2.5 million in funding from some of the same backers who are currently betting on Scalar. In the Chronicle article mentioned above, reporter Marc Parry quotes Bob Stein, a director of the Institute for the Future of the Book and a collaborator on the failed first version of that earlier authoring platform (known as Sophie, a 2.0 version of which was released last October), as saying that a new version of the tool would be a "holy grail" of sorts, and one has the sense that others share Stein's sentiment.
The pressure, in other words, is on to find the tool that will revolutionize the publication and reception of scholarly work on the web, and while neither Sophie 2.0 nor 2010's Anthologize platform has revealed itself to be the "chosen" technology, the people who are developing Scalar at the Alliance for Networking Visual Culture think that they, in fact, may have cracked the riddle of the much-anticipated "next big thing" in academic publishing.
Like Sophie, Scalar is premised on the idea that engaging, media-rich scholarly work can be self-published on the web in a way that minimizes both requisite technical know-how and the probability of technical hassle. The difficulty in developing such an authoring tool, however, lies in striking the right balance between ease of use and sophisticated functionality. As Stein told Parry last spring: "It's easy to build an authoring environment that requires experts to use. It's very hard to build an authoring environment that somebody can use after reading two pages of instructions."
The devil here is indeed going to be in the details. Developers, publishers, and academics (the core communities, naturally, represented on the Scalar team) alike realize that any hope of engaging a crossover audience--a trick at which traditional academic publishing hardly excels--hinges crucially on being able to present the kind of immersive, visually compelling, web-based media product that increasingly tech-savvy and tech-demanding consumers are conditioned to expect. But that product also needs to be producible as the end result of a process within the reach of, say, an assistant professor of art history with a heavy teaching load, some substantial research commitments, and a CV that doesn't prominently feature code-wrangling.
If the question, then, is one of delivering the right means of production to the right people at the right time, is Scalar the answer? As yet, it's much too early to tell. The first article published with Scalar appeared only a few months back, and although the platform's immersive "text as network and network as text" repackaging of the traditional scholarly journal article as something more akin to a thoroughly linked and media-rich website is certainly provocative, a great deal remains to be decided about the fate of the software.
For that matter, even if Scalar does turn out to be the "holy grail" authoring tool that everyone is awaiting, there's no guarantee that its adoption (which would, in turn, depend heavily on the academy coming around to the idea that works produced with Scalar could count towards tenure) would be followed by any real expansion of the current audience for scholarly work. Such an expansion (and the salutary effects it might have on funding prospects) certainly constitutes an appealing vision, but for the time being, it remains only that.
A good friend of mine recently purchased a slick, HD-ready, wall-mountable LCD panel TV, and as a consequence, I inherited from her a shockingly heavy old tube that will, in all likelihood, be my last CRT television. With the TV, she offered me a stack of forlorn-looking VHS cassettes, but I passed on those, explaining that I hadn't owned a VCR in years. Later, I realized that I know exactly one person who maintains a commitment to the old cassette format, and he has long since taken to positioning himself as a sort of renegade VHS archivist dedicated to the preservation of 1970s and '80s B-movies that might never see the light of a digital reissue.
While the rest of us are busy celebrating the vast improvements in sound and image quality realized by the new formats, this lonely partisan of VHS technology is actively concerned with what we've cast aside on the merry march of digital progress. When you think about it, his quest is more than just a quixotic exercise in nostalgia. The man has a legitimate point about the lifecycle of certain media, and his point applies to a wealth of material extending far beyond the forgotten action films of his youth. At today's frenetic pace of both production and development, we run a real risk of losing not only artifacts of the analog era but those of the earliest days of digital as well.
A pair of popular press articles from last week nicely illustrate a tension that exists between innovation and preservation in the world of digital media. In the UK's Guardian, Vanessa Thorpe argues that "the fast pace by which technology changes means that many of the earliest works of art created on computer are in danger of being lost, or are already impossible to read, while new interactive digital artworks, such as 3D visualisations and video games, are so complex that scientists are not yet capable of faithfully preserving them." Meanwhile, a profile of U. Chicago's new Joe and Rika Mansueto Library in the tech mag Wired suggests that a hybrid library system offering new-school web resources, streamlined physical collections, and robotic retrieval services strikes a quirky balance that just might be the future of campus-based research.
The Guardian article hinges on the point (previously made here and elsewhere) that in the relentless expansion of the digital, there is a quasi-Schumpeterian process of creative destruction in which new technologies build and improve upon their forebears, leaving behind a wake of obsolete programming and discarded formats. Reporter Thorpe writes that the "race is on against the fast pace of technological change as scientists search for ways to preserve today's most innovative artworks," and a novel tension arises here in that technology gets double billing as both problem and solution. The challenge facing those scientist-preservationists is very much a technical quandary that is thought to have a technical fix, and Thorpe argues that the challenge is made all the more daunting by the fact that the intricacy of some new media artifacts is such that the act of preservation is no longer anything so simple as keeping a copy of a unique work intact and safe from degradation. To a real extent, the context--or the means of accessing and viewing that artifact--must now somehow be preserved as well, thus posing problems that are analogous to those presented by installations and performance art but are, if anything, more multifarious.
Angela Watercutter's Wired profile of the Joe and Rika Mansueto Library opens another window onto the complicated world of digital collecting. The University of Chicago's recent entry into the hotly debated "library of the future" field offers some things we have come to expect (an emphasis on web-based research and the general absence of bookshelves) and some that we might not (an Asimov-meets-Borges physical collection housed underground in space-economizing and vertiginous stacks serviced by robotic cranes). As read by Watercutter, the meaning of the library is that digital collections have limitations that are still handily addressed by the incessantly eulogized but remarkably hardy print volumes. Quoting U. Chicago's head librarian, Judith Nadler, Watercutter argues that copyright issues alone are enough to render even the capacious Google Books collection frustratingly incomplete for the foreseeable future.
Read alongside Thorpe's article on the lifecycle of digital media artifacts, Watercutter's profile makes it easy both to appreciate the Mansueto compromise and to imagine a host of further problems presenting themselves along the road to universal digitization. From that standpoint, the physical collection, invulnerable as it is to issues of compatibility and interfacing, seems downright enduring. And while the musty old book may yet be destined to go the way of the VHS, going digital, as my good friend and the authors of these articles would all contend, is still not without its downside.
It’s easy to forget that Google isn’t actually omnipotent. In the most recent of the periodic reminders that there are significant obstacles on the path to total information organization, a federal judge rejected the massive settlement (details of which were demystified by Berkeley Prof. Pam Samuelson at a THL lunch forum viewable here) that Google reached in 2008 with the Authors Guild and the Association of American Publishers to clear the way for the digitization and inclusion of millions of additional books in the tech giant’s electronic library.
Providing further confirmation that, these days, everything is indeed online, the California Digital Library announced last week that the UC Libraries have now digitized over 3 million books. The mass digitization project is ongoing and has involved collaboration with Google, Microsoft, and the Internet Archive over the past few years.
A book from, say, the Berkeley campus library system begins its journey into cyberspace by first being checked out to the Northern Regional Library Facility (NRLF), where it is packed into a large shipment headed to one of the California Digital Library's partners. At the San Francisco-based Internet Archive, the book is scanned using a digitization device called a "Scribe." Each page is turned carefully (by hand, no less) and photographed, and the scanned copies are checked for image quality. The virtual book is then tagged with additional metadata (publication info, cover, etc.) and parts ways with the physical copy, which returns to its original library location.
The digital book, however, is just beginning a journey of its own. Depending on the sort of partnership through which it was produced (you can see the UC system's contracts with its digitization collaborators here) and the copyright restrictions that apply to the text, the book may be accessible in full or in part through a variety of web outlets--namely, Google Books, the Internet Archive, the HathiTrust, and the UC's own Melvyl catalogue. In a recently produced video on the mass digitization project, CDL estimates that 20% of the scanned collection is available in full online and free of any copyright restrictions.
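For readers who like to see the bookkeeping spelled out, here is a minimal sketch, in Python, of what a digitized-book record with a rights-based access rule might look like. The field names, the sample records, and the "public domain means full view" rule are invented for illustration and do not reflect the CDL's actual metadata schema or access policies.

```python
# Illustrative sketch only: an invented record shape, not the CDL's actual metadata schema.
from dataclasses import dataclass


@dataclass
class DigitizedBook:
    title: str
    author: str
    pub_year: int
    scanning_partner: str  # e.g. "Internet Archive", "Google"
    rights: str            # e.g. "public-domain", "in-copyright"

    def full_view(self) -> bool:
        """Hypothetical rule of thumb: only public-domain scans are shown in full."""
        return self.rights == "public-domain"


books = [
    DigitizedBook("Example Title A", "Author A", 1887, "Internet Archive", "public-domain"),
    DigitizedBook("Example Title B", "Author B", 1978, "Google", "in-copyright"),
]

# Loosely analogous to the CDL's estimate that roughly 20% of scans are freely viewable in full.
open_share = sum(b.full_view() for b in books) / len(books)
print(f"Fraction viewable in full: {open_share:.0%}")
```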
All of this digitization has implications that reach far beyond reducing PhD students' overdue book fines (although that is a plus). Brick-and-mortar university libraries full of aging paper resources have traditionally existed to serve their faculty and students, not the general public per se. Digitization has the potential to vastly increase both reader access and material longevity through open circulation of electronic copies. Additionally, digitized texts promise to facilitate new types of scholarly research, ranging from the simple advantage of enabling rapid full-text searches to the more complicated computational textual analyses that researchers are only beginning to utilize.
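To give a toy sense of what even the simplest of those machine-scale analyses involves, the sketch below counts how often a single term appears across a directory of plain-text volumes. The directory name, file layout, and search term are assumptions made for illustration; this is not any particular library's tool or API.

```python
# Toy illustration of machine-scale "reading": count a term across many plain-text volumes.
# The "scanned_texts" directory and the search term are placeholders, not a real corpus.
import re
from collections import Counter
from pathlib import Path


def term_counts(corpus_dir: str, term: str) -> Counter:
    """Count whole-word occurrences of `term` in every .txt file under `corpus_dir`."""
    counts = Counter()
    pattern = rf"\b{re.escape(term.lower())}\b"
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        counts[path.name] = len(re.findall(pattern, text))
    return counts


if __name__ == "__main__":
    # e.g., how often does "railroad" appear in each digitized volume?
    for volume, n in term_counts("scanned_texts", "railroad").most_common(10):
        print(volume, n)
```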
Heather Christenson, CDL's Mass Digitization Project Manager, sums up the promise of the digital quite succinctly in the recent video: "You can't read a million books, but a machine can." Make that 3 million and counting.
News from the productivity front: the One Week | One Tool summer institute funded by the NEH and hosted by the Center for History and New Media at George Mason met its ambitious goal of developing a new open-source digital humanities tool entirely within the space of a week. What's more, Anthologize, the end result of all that frenzied collaboration, is well worth talking (and blogging) about--hence the digital humanities web hubbub of the past few days.
The new tool is notable both for its functionality and its inspiring origin story, and those two threads are arguably connected by a larger trend that says a lot about what's going on under the big digital humanities tent. On the one hand, the blog-to-book ethos of the Anthologize plugin suggests an exciting new wrinkle in DIY electronic publishing, allowing WordPress 3.0 users to quickly aggregate, edit, and remix blog posts and external feeds with new content and to export the end product in multiple formats. And on the other hand, the impressive utility of this prototype that went from inception to launch in just seven days seems to point toward additional horizons.
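As a rough illustration of the blog-to-book idea (and emphatically not Anthologize's actual code or API), the following Python sketch pulls recent posts from an RSS/Atom feed and stitches them into a single plain-text draft. It assumes the third-party feedparser package and a placeholder feed URL.

```python
# Illustrative sketch of the blog-to-book idea; Anthologize itself is a WordPress plugin
# and works quite differently. Requires the third-party package: pip install feedparser
import feedparser


def anthology_from_feed(feed_url: str, max_posts: int = 10) -> str:
    """Pull recent posts from an RSS/Atom feed and stitch them into one plain-text draft."""
    feed = feedparser.parse(feed_url)
    sections = []
    for entry in feed.entries[:max_posts]:
        title = entry.get("title", "Untitled")
        body = entry.get("summary", "")
        sections.append(f"{title}\n{'=' * len(title)}\n\n{body}\n")
    return "\n\n".join(sections)


if __name__ == "__main__":
    draft = anthology_from_feed("https://example.org/feed")  # placeholder URL
    print(draft[:500])
```

Anthologize itself handles the editing, remixing, and multi-format export inside WordPress; the sketch above only gestures at the aggregation step.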
As others (see CHNM director Dan Cohen's blog) have well noted, the whole process that led to Anthologize is something of an anomaly in the typically deliberative environment of academe, and the idea of brainstorming, crash-developing, and unleashing anything in a hurry runs directly counter to that sort of deliberation--perhaps refreshingly so. In that sense, the apparent success and considerable boldness of the One Week | One Tool institute is an inspiration, and similarly, Anthologize itself seems to encourage a bold new sort of publication--one that pulls back the curtain on the process of scholarship and promises to shine a new light on rough drafts and alpha versions.
Anthologize users may never match the dizzying workflow of the tool's creators, but the events of the past week surely represent a provocative step in that direction.
It is our ongoing mission to introduce our readers to other innovative web-based projects that point to new tools and techniques for humanists. One such website worth checking out is the UK-based The Literary Platform, which was recently written up by the Guardian Books blog. The Guardian editor calls the site "an inspiring browse around some of the innovative and collaborative experiments taking place in the exciting physical-to-digital realm."
The Literary Platform describes itself as a "showcase" that "will demonstrate how traditional publishers and developers are experimenting with multimedia formats, how established authors are going it alone, how first-time novelists are bypassing publishers and how niche literary magazines are finding wider audiences." The site offers an edited compendium of these experiments, each described in a brief profile complete with screenshots and links.
Browsing the different projects in the showcase, one finds that the theme is e-publishing and that the emphasis is on innovative web and iPhone/iPad apps. The editors have culled a fascinating selection of tools and sites that might change the way we read; further, the projects suggest significant developments in the way authors are making a living these days.
A website called BookSeer, for example, takes the last book you read and recommends your next, based on searches of a number of online bookstores and reading lists. The free iPhone app "zehnSeiten" streams videos of new authors reading 10 pages from their latest works. These are just a few of the fascinating projects in e-publishing and online reading that you'll find at The Literary Platform, and as you'd expect, the site is steadily growing.
The THL is pleased to announce a Berkeley Center for New Media campus event and exhibit called "The Future of the Book," a new media work by Judith Donath, Gilad Lotan, and Martin Wattenberg. The exhibit will run from April 19 -- August 6, 2010, in the BCNM Commons, 340 Moffitt. There also will be an Artists' Lecture and Reception, with Judith Donath and Gilad Lotan, in the BCNM Commons on Monday, April 19, 4:30 -- 5:30 pm.
From the BCNM description of the work:
"The installation is a meditation on the turning point in history from the written and printed word to the digital book.
The earliest writings were carved in stone or scratched in clay and tree-bark. By 2400 BCE people had begun rolling papyrus sheets into scrolls. It would take thirteen more centuries for these two technologies to come together to form the book.
Today, books are ubiquitous. We read them on subways, build shelves for them in our houses, and sell them in bookshops, cafes, and on the banks of the Seine. The Future of the Book asks, after six centuries of world-changing influence, is the printed book about to join the clay tablet, the scroll, and the parchment codex as a historic, but obsolete, writing technology?
"The Future of the Book" will be on view from April 19 through August 6, 2010. The exhibition is free and visible in the window of 340 Moffitt Library, next to the Free Speech Movement Café. A reception and lecture with Judith Donath and Gilad Lotan will take place on April 19 in 340 Moffitt Library from 4:30pm -- 5:30pm. Admission is free.
Judith Donath is a Harvard Berkman Faculty Fellow and formerly director of the Sociable Media Group at MIT Media Lab. Gilad Lotan is a designer at the Microsoft Future Social Experiences Labs.
BCNM's mission is to understand what is new about each new media from cross-disciplinary and global perspectives that emphasize humanities and the public interest. Visit bcnm.berkeley.edu."