Recent history, many instructors say, is the most difficult to teach. The dearth of materials and historical commentary is exacerbated by the lack of perspective and the incomplete construction of national memory. But the events of 9/11 might signal a radical shift in the historiography of contemporary events due to developments in digital archiving. The decade following 9/11, after all, was also the decade of Web 2.0.
In a recent post in this space, I discussed the changing shape of the author as “distant reading”—and particularly Culturomics—concomitantly severs writer from text and enables new methods of authenticating their fusion. The aim of fusing writer with text has given rise to quantitative methodologies that resemble forensic science. Like the baroque phrenological catalogues of late 19th-century positivist criminology, these new data mining techniques resolve to identify an individual, to discover "whodunit": in this case, who authored the text in question.
David L. Hoover's chapter, “Quantitative Analysis and Literary Studies” in A Companion to Digital Literary Studies, provides a useful introduction to author attribution methods that rely on digital technologies. For the most part, these methods involve counting the frequency of certain words and comparing word frequencies across a larger set of texts. As such, the focal point of a work, from the perspective of computational author attribution, is a smaller unit of text than that of more traditional literary criticism. Hoover writes:
“The frequencies of various letters of the alphabet and punctuation marks, though not of obvious literary interest, have been used successfully in authorship attribution, as have letter n-grams (short sequences of letters). Words themselves, as the smallest clearly meaningful units, are the most frequently counted items, and syntactic categories (noun, verb, infinitive, superlative) are also often of interest, as are word n-grams (sequences) and collocations (words that occur near each other).”
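To make these counting procedures concrete, here is a minimal sketch in Python of the kind of word-frequency comparison Hoover describes. The candidate texts, the disputed passage, and the choice of cosine similarity as the comparison measure are all illustrative assumptions, not a reconstruction of any particular attribution study.

```python
from collections import Counter
import math
import re

def word_frequencies(text):
    """Relative frequency of each word in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def cosine_similarity(p, q):
    """Cosine similarity between two frequency profiles."""
    dot = sum(p[w] * q[w] for w in set(p) & set(q))
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

# Hypothetical candidate corpora and a disputed passage.
candidates = {
    "Author A": "it was the best of times it was the worst of times",
    "Author B": "call me ishmael some years ago never mind how long precisely",
}
disputed = "it was the age of wisdom it was the age of foolishness"

disputed_profile = word_frequencies(disputed)
for author, sample in candidates.items():
    score = cosine_similarity(word_frequencies(sample), disputed_profile)
    print(f"{author}: {score:.3f}")
```

Real attribution work counts far subtler features (function words, punctuation, letter n-grams) over much larger samples, but the underlying operation, building a frequency profile and measuring its distance from others, is of this kind.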
I concluded my last post by asking how our understanding of authorial authenticity will change as it is produced by data mining techniques rather than traditional philology, and what semiotic unconscious will be bared by so much data. More concretely, we might ask: will the reliance on smaller units of text in quantitative analysis also influence the focus of more traditional interpretive work? Will narratology yield to phraseology? That is, will the frequencies of various punctuation marks, letters, and words, deemed even by Hoover to be of no "obvious literary interest," seem more meaningful as they come to bear the most vivid traces of an artist's hand?
To answer this question we might consider its precursors in criminology, art history, and psychoanalysis—three fields linked by Carlo Ginzburg in "Clues: Roots of an Evidential Paradigm" because of a shared epistemological paradigm developed in the late 19th century: “a method of interpretation based on discarded information, on marginal data, considered in some way significant.”
Narratologists might take comfort in noting that a cigarette butt left at the scene of a crime—though it may help identify the killer—hardly changes the meaning of murder. Similarly, fingerprints and other forensic evidence may lead detectives to a subject, but are not necessarily essential to the experience or understanding of subjectivity.
Whether that is changing--in criminology or in the humanities--remains to be seen.
Six weeks ago, the entrepreneurial tech magazine Fast Company profiled the Spatial History Project over at Stanford in a piece on GIS (Geographic Information Systems) mapping and the digital humanities (read our commentary on the article and the Stanford project here), and just last Tuesday, the web edition of the New York Times ran a front-page story--the latest in Patricia Cohen's "Humanities 2.0" series--that took a wider-angle approach to the same subject. It's safe to say the news is out on two fronts: digital mapping isn't just for planning your commute, and humanists are actually on the cutting edge of developing new uses for the technology.
When the Fast Company piece came out, I argued in this space that despite representing but one methodological branch of the digital humanities, the new spatial history projects can serve as a particularly good point of access for those curious about what the digital humanities bring to the scholarly table. Part of the appeal of a project like Stanford historian Richard White's visualization of western railroad history, I suggested then, "surely lies in the fact that White's subject (the development of the rail network) is fantastically well-suited to his chosen tools--digital mapping and GIS. Paired with a historical subject that we intuitively grasp as being spatially-oriented, White's methodology seems neither ostentatiously novel nor at all whimsical but entirely appropriate, even to the non-specialist."
Reading Cohen's article earlier this week, I had the sense that what I'd already written about White's project might hold for the GIS-based methodology as a whole. This is not to privilege spatial history among the various innovations of the new-school humanists, but rather to isolate an aspect of visualization that is inherent to much historical storytelling. While I had previously suggested that Richard White's project involved a subject that might be intuitively understood as having a particularly strong spatial orientation, it could be argued that a great deal of fine history-writing hinges on the author's ability to reconstruct a place--be it George Chauncey's New York or Antony Beevor's Stalingrad--as it was at a moment in time.
Chauncey's Gay New York and Beevor's Stalingrad are, respectively, a cultural history and a military one, but both works trade on a strong sense of place, a mapping of the action onto a historical urban environment. A feat of visualization is performed, then, as a collaboration between writer and reader; place is evoked by the descriptive power of the writing and the persuasive grain of the detail. And as easy as Beevor's prose makes it to imagine the burned-out hell of the Soviet city-cum-war zone, it is equally easy to imagine how a story like that of Stalingrad could be richly augmented with a GIS project showing, for example, troop movements over historical maps of the ravaged city. Similarly, a project like Chauncey's could layer data about spots of subcultural significance over an urban map of old New York. Such maps could come from a variety of sources (I'm put in mind of the digital collection of the old Sanborn fire insurance maps that cover much of urban California ca. 1900), and more importantly, these maps can then be layered with a virtually endless variety of supplemental data sets, each of which has the potential to illuminate some hitherto unnoticed or underappreciated strand of a story.
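To give a rough sense of what such layering involves in practice, here is a minimal sketch using the open-source Python library folium (not a tool any of these projects necessarily uses). The place names, coordinates, and the georeferenced historical map file are invented placeholders for the example.

```python
import folium

# Hypothetical data layer: sites of subcultural significance in
# early-20th-century Manhattan, in the spirit of Chauncey's Gay New York.
sites = [
    ("Hypothetical cafeteria, Greenwich Village", 40.7336, -74.0027),
    ("Hypothetical bathhouse, East Side", 40.7265, -73.9815),
    ("Hypothetical drag ball venue, Harlem", 40.8116, -73.9465),
]

# Modern base map of Manhattan.
m = folium.Map(location=[40.77, -73.97], zoom_start=12)

# A scanned historical map (e.g., a Sanborn sheet) could be georeferenced
# and draped over the base map; the image path here is a placeholder.
folium.raster_layers.ImageOverlay(
    image="sanborn_sheet_1905.png",
    bounds=[[40.70, -74.02], [40.82, -73.93]],
    opacity=0.5,
).add_to(m)

# One supplemental data layer: a labeled pin for each site.
for name, lat, lon in sites:
    folium.Marker(location=[lat, lon], popup=name).add_to(m)

m.save("gay_new_york_layers.html")
```

Each additional dataset (census returns, police records, transit lines) would simply be another layer added to the same map object, which is what makes the approach so open-ended.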
GIS-empowered spatial history, then, represents a new wrinkle in the tradition of visualization in historical storytelling. The technology of digital mapping transposes some of the act of visualizing from the moment of the telling (the transmission of the story) to an earlier moment in the research process (the construction of the story). Consider an example from Cohen's article describing the digital rendering of the Gettysburg battlefield using a variety of sources to recreate the historical terrain. What researchers are constructing here is more than a new means for the end user or reader to more fully "experience" the story of Gettysburg: the new visualization will, it is hoped, change the story itself as historians are able to reinterpret events with previously unrealizable perspectives on the data now given visual and spatial form.
Almost paradoxically, the innovation of the new spatial histories manages to be both cutting edge and about as profoundly relatable and intuitive as a revolutionary methodology can be. It takes only a second to realize that the telling of a battlefield story could benefit from a familiarity with the topography of that battlefield. This makes sense to us--to specialist and layman alike--and I think the combination of an exciting but relatable methodological advance with a technology (digital mapping) that is at least basically familiar to many people in 2011 sets up the new spatial history to be a field with tremendous potential for groundbreaking research AND relatively broad-based appeal. The methodological swing of the "spatial turn" in the 21st century stands to be a good deal less alienating to the layman than was, say, the theoretical swing of the "linguistic turn" of the 20th.
While digital technologies may drive innovation in virtually all aspects of humanistic study--offering new pedagogical tools, facilitating close collaboration with far-flung colleagues, and enabling online publishing (and thus prompting a reassessment of the peer review system)--in terms of the actual stuff of research, the most momentous breakthrough may be the methodology known as "distant reading." We’ve already devoted some attention to distant reading here on this blog, and its centrality to the young field of the digital humanities was signaled by an article in the New York Times’ "Humanities 2.0" series and by the first installment of the same newspaper's recently launched Mechanic Muse column in the Sunday Book Review.
Most basically, distant reading is the practice of using data to read--or at least to understand the cultural significance of--books: tracking the ebb and flow of given terms, studying the frequency of critical strings of words. (The emergent field of Culturomics, research fueled by Google’s n-gram viewer and its dataset of more than 500 billion words contained in about 5.2 million digitized books, is but one form of distant reading.)
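For readers curious about what "tracking the ebb and flow of given terms" actually means computationally, here is a minimal sketch over an invented toy corpus; the real datasets are vastly larger, but the arithmetic is of this kind.

```python
from collections import Counter, defaultdict
import re

# Toy corpus of (publication year, text) pairs. In practice this would be
# millions of digitized books or a precomputed n-gram dataset.
corpus = [
    (1900, "the telegraph carried news across the continent"),
    (1900, "railroads and the telegraph shrank the continent"),
    (1950, "television brought the news into the living room"),
    (2000, "the internet carried news around the world instantly"),
]

def relative_frequency(term, documents):
    """Occurrences of `term` per thousand words, grouped by year."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for year, text in documents:
        words = re.findall(r"[a-z]+", text.lower())
        totals[year] += len(words)
        hits[year] += Counter(words)[term]
    return {year: 1000 * hits[year] / totals[year] for year in totals}

print(relative_frequency("telegraph", corpus))
print(relative_frequency("internet", corpus))
```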
In this academic context, as scholars step back from individual works to take in a broader panorama, the figure of the author appears at once blurry and bold. Within Culturomics, the author is all but absent: in order to avoid copyright infringement, Google's datasets are composed of sequences of up to five words--associated with a publication year, but not particular authors or works.
To the extent that the digital humanities represent a cohesive movement, we might find their dogma expressed in the Digital Humanities Manifesto, published by UCLA's Center for Digital Humanities. And the Manifesto is fairly unequivocal in its marginalization of authorship by way of a disregard for intellectual property rights:
“Copyright and IP standards must [...] be freed from the stranglehold of Capital. Pirate and pervert Disney materials on such a massive scale that Disney will have to sue… your entire neighborhood, school, or country. Practice digital anarchy by creatively undermining copyright and mashing up media.”
(The updated, "2.0" version of the Manifesto is more cautious with respect to intellectual property, adding the stipulation: "Digital humanists defend the rights of content makers, whether authors, musicians, coders, designers, or artists, to exert control over their creations and to avoid unauthorized exploitation; but this control mustn’t compromise the freedom to rework, critique, and use for purposes of research and education. Intellectual property must open up, not close down the intellect and proprius.")
While the digital humanities in general and distant reading in particular appear to marginalize the figure of the author, the traditional humanities seem to be marked by the return of the author, who was banished almost fifty years ago by the seminal essays of Roland Barthes and Michel Foucault. [Two recent examples of such a return here at Berkeley are Dante and the Making of a Modern Author by Albert Ascoli (Italian Studies) and Shakespeare Only by Jeffrey Knapp (English).]
But even within the digital humanities--and despite the position of the Manifesto--the author may be gaining a new prominence, as distant reading is put to use to identify authors. Here at Berkeley, Associate Professor of English Bryan Wagner recently received a Digital Humanities Start-Up grant from the National Endowment for the Humanities to develop a text analysis tool for examining and visualizing grammatical and stylistic features to assist in authorship identification.
If the distant reading practices of Culturomics and developing author attribution tools suggest opposing ideologies with respect to the figure of the author, the question may not be whether the author stands in the foreground or fades into the background of the digital humanities. Instead we might ask how our understanding of authorial authenticity will change as it is produced by data mining techniques rather than traditional philology, and what semiotic unconscious will be bared by so much data.
Image: Detail of The Donation of Constantine by Gianfrancesco Penni and/or Giulio Romano in the Apostolic Palace at the Vatican. The most famous philological work of author (dis)identification long pre-dates digital technologies: Lorenzo Valla's 15th-century treatise exposing the documentation of Constantine's donation as a forgery.
Although it's too easy to proclaim that any new feature in a glossy, popular publication announces the true arrival of the digital humanities as a tangible force in the larger culture, the recent spot that Stanford's Spatial History Project landed in Fast Company--the progressive entrepreneurial tech mag--definitely warrants a little evangelism. And while the article may not represent a genuine watershed moment in the field itself (Stanford's Matthew Jockers, for instance, is even quoted as saying that his colleagues have "a long history of doing this work, but quietly, without the fanfare digital humanities is getting now."), a great deal of what is exciting about the digital humanities today comes through in the Spatial History Project profile.
A cardinal virtue of the write-up is that it is immediate and accessible, and Fast Company's David Zax has done a fine job of explicating the SHP's work and methodology in easily digestible terms. In his opening line, Zax cuts with commendable clarity to the sine qua non of the new digital humanities: "[v]ast troves of data that are undeniably useful to history--but too complex to make narratively interesting." Here is a beautifully simple articulation of what the new school is all about. But credit doesn't lie exclusively with the author on this account. The Spatial History Project website is itself remarkable for its intelligibility and appeal to the non-specialist.
As Zax makes clear, Stanford historian Richard White and his colleagues are still telling stories in the best tradition of historical research, but the SHP team is using new methods to tell those stories--and to identify them in the first place. Part of the appeal for Zax (and for anyone wishing to use the scholarship coming out of the SHP as an exemplar of some of the new techniques) surely lies in the fact that White's subject--the development of the rail network in the American West--is fantastically well-suited to his chosen tools--digital mapping and GIS. Paired with a historical subject that we intuitively grasp as being spatially-oriented, White's methodology seems neither ostentatiously novel nor at all whimsical but entirely appropriate, even to the non-specialist. His attempt to "represent and analyze visually how and to what degree the railroads created new spatial patterns and experiences in the 19th-century American West" seems entirely like an enterprise whose time, technologically speaking, has come.
And this is important. White is not an upstart geek looking to storm the ivory tower and make a name for himself with faddish tools. Quite the contrary, Professor White is (and has long been) among the foremost historians of the American West, Native American history, and environmental history, and he has made ample contributions to those fields through decades of more traditional scholarship. What's particularly noteworthy about his "Shaping the West" spatial history project, then, is that in it, we see a long-established expert returning to one of the essential stories (the railroad) of his field and finding entirely new ways of understanding and relating the data in the sources.
Surely, this speaks volumes about the potential of digital techniques. And on that note, Berkeley scholars with an inclination toward the spatial-historical should not miss the opportunity to acquaint themselves with the Geospatial Innovation Facility here on campus.
Some days, making a blog worthwhile requires a considerable contribution of time and analytical energy on the part of the blogger; other days, it requires nothing more than posting the right link. Today is one of the latter sort. If THL readers are going to devote some of their web time this week to staying abreast of what's happening at the intersection of education, technology, and the humanities, their best bet is to check out the special Digital Campus issue that The Chronicle of Higher Education published a few days ago.
There's more on offer in the Digital Campus features than I can cover here, but suffice it to say that The Chronicle has served up a perfect end-of-the-academic-year read for anyone interested in a well-rounded discussion of technologies in and of the classroom. Collectively, the pieces survey the changing landscape of education and try to solve the riddle of where we're going before we get there. That new technologies will continue to reshape the learning and teaching environment in higher ed is taken as a given; the question that emerges across the articles is how best--and, ideally, most gracefully--to ensure that those technologies serve educational aims rather than the other way around.
A few "must reads" that pick up topics we regularly cover in this space include: Josh Keller on the easily underestimated difficulties of providing a digital campus with an adequate IT infrastructure; Jennifer Howard on a few pioneering steps toward the academic library of the future; Ryan Cordell's guide to new technologies that actually serve pedagogy; and of course, Kathleen Fitzpatrick's primer piece on the digital humanities.
That's the required reading for the week, class. Get to it.
Despite the strong showing of the digital at January's MLA conference and other big to-dos on the humanities circuit, some good humanists remain skeptical of what the ever-proliferating array of new technologies is really adding to the field. That's a fair and important question, and it's one that too frequently is met with a surprisingly one-dimensional answer.
Card-carrying digital humanists, the partisans of the new school, are quick to point to projects like medievalist Martin Foys' virtual rendering of the Bayeux Tapestry as irrefutable evidence that the windows of the web open onto vast new horizons in humanistic and social science research. And so the argument goes—almost always with a keen focus on what’s new in terms of research. Less attention tends to be paid to what digital technologies are bringing to teaching in the humanities.
It has been refreshing, then, to see that Patricia Cohen’s much talked- and blogged-about “Humanities 2.0” series for The New York Times has finally devoted an installment to the use of digital tools in the classroom. In her piece, Cohen culls some interesting examples from the field. The greatest "ooh-ahh" moment is surely Bryn Mawr Professor Katherine Rowe's use of a virtual re-creation of the Globe Theater for students of her introductory Shakespeare course to stage the Bard's dramas with their web avatars. And beyond this, Cohen does a commendable job of considering the classroom and coursework made possible by the "new" digital humanities as an important space of productive engagement for faculty and students.
In the first "Humanities 2.0" article--published last fall, Princeton historian Anthony Grafton was quoted as saying that "[i]t’s easy to forget the digital media are means and not ends.” In context, the "ends" of Prof. Grafton's formulation were to be the research results produced with the new digital methodology, but it's also important to remember that research is not the only desirable "end" to which digital media promise the "means". In higher education, the scholar is also a teacher, and the promise of the digital extends to the humanist working in both capacities.
At Berkeley, as elsewhere, digital tools are increasingly employed in the humanities classroom. THL and bSpace both offer instructors a project-based platform and Web 2.0 tools to augment class instruction, and to pull only one worthy example from the variety of digital resources recently developed by and for the campus teaching community, the Berkeley Language Center's Library of Foreign Language Film Clips offers language instructors a digital treasure trove of organized, subtitled video clips for use in lessons.
And while skeptics will argue here as well that some of these new tools won't pan out as anything more than novelties in the classroom, more or less the same can be (and always has been) said of any number of traditional teaching strategies. Is Prof. Rowe's new approach to Shakespeare likely to prove the best, last, and only way to teach "Titus Andronicus"? Probably not. But it's a novel technique for engaging with her students and might bring some new insights to teacher and pupils alike.
The one inviolable law of the classroom is that not all learning strategies are equally effective for all students. Arguing from that premise alone, adding some digital arrows to the instructor's educational quiver seems like an unequivocally positive thing, and any assessment of the promise of the digital humanities won't be complete without recognizing what they are bringing not just to cutting-edge research, but also to cutting-edge teaching.
The growing field of the digital humanities has, for some time, been immersed in the most fundamental of struggles: that of self-definition. In a recent post in this space, Jeff Rogers explores the fallout of a contentious MLA position paper that designated the digital humanist a builder who, therefore, must know how to code. Recent definitions of the digital humanities have been more inclusive, ranging from the modest: “The use of digital tools and methods in humanities study and dissemination” (Geoffrey Rockwell, University of Alberta); to the grandiose: “DH is a multi-discipline through which criticism, analysis, and speculation is focused on the past, present, and future of the human condition” (Richard Cunningham, Acadia University). Perhaps most apt is the response of Lou Burnard (Oxford), who defines the digital humanities, “With extreme reluctance.”
These definitions are offered by participants in “A Day in the Life of the Digital Humanities”—more popularly (and graphically) known as Day of DH—a community publication project organized by scholars from the Humanities Computing Program at the University of Alberta. Though the Day of DH asks participants to explain their understanding of the digital humanities, the implicit claim of the project is that the field can be defined only through its practice. And so the Day of DH sets out to answer the question, “Just what do computing humanists really do?” by asking self-identified digital humanists from around the world to collaborate in producing a richly varied answer: a website that weaves together participants' blog-like journals chronicling their daily activities on March 18. (Apply here by March 15 to participate.)
The activities of digital humanists are to some extent predetermined by the standardized tags that will be used to stitch together the disparate participant journals into one Day of DH website: Communication, Email, Data Collection, Editing, Writing, Reading, Blogging, Reflecting, Programming, and Visualization. It turns out, however, that the daily lives of digital humanists, much like those of the rest of us, are composed of many moments more mundane than these activities. And so browsing through the Day of DH 2010 journals is a little like stumbling upon someone else’s Facebook news feed in that it too demonstrates the difficulty of sustaining an interest in, for example, the music strangers listen to en route to campus, or their preferred caffeinated beverages.
The project might be more effective in defining the digital humanities if, rather than chronicling the daily lives of digital humanists, it catalogued their projects. That said, what the Day of DH does achieve is impressive: it forges a notable sense of community among digital humanists. And if the work of the digital humanist includes a consideration of new forms of subjectivity and solidarity that emerge from social media, then the Day of DH constitutes both its study and its performance.
For decades now, there has been an emphasis in the work of academic historians and practitioners of the historically inclined humanities and social sciences on the recovery of "lost voices". That great enterprise, which has found its best expression in the (once) "new" social history and the work of the subaltern studies pioneers, concerns itself with the margins of the historical record and seeks to restore the disenfranchised, the oppressed, the inconvenient, and the voiceless to the larger picture.