Opening Up Digital Humanities Education

Sorry I’ve been gone from the blog for so long. I’ve been absorbed by several wee writing projects, including a CLIR report investigating “Can a New Research Library Be All-Digital?” co-authored with Geneva Henry; an essay on collaboration in the digital humanities for Collaborative Approaches to the Digital in English Studies, edited by Laura McGrath; and “What Is She Doing Here?” for #alt-ac: alternate academic careers for humanities scholars, edited by Bethany Nowviskie. Now I’m working on an essay for Brett Hirsch’s Teaching Digital Humanities: Principles, Practices, and Politics, but this time I plan to blog my research, getting feedback (I hope) and refining my ideas along the way.

Recently several leaders in the digital humanities (DH) have called for greater inclusiveness. As Geoffrey Rockwell warns, DH risks operating as an old boys’ (and girls’) network because “there are few formal ways that people can train.” Developing and demonstrating the skills required to become a digital humanist typically requires apprenticing on a major project, but not everyone has the opportunity to do so. Further, such an education can be partial, exposing participants to the skills needed for a particular project but not necessarily the broader context of digital humanities.

To provide flexible opportunities for professional education, the DH community should experiment with a distributed, mostly online, open certificate program. I propose a certificate program because it would enable graduate students and working professionals to acquire necessary knowledge and skills without re-arranging their lives to pursue a full-fledged master’s or Ph.D. program. Participants would build knowledge, engage in community, produce significant work, and acquire a professional credential that would, one hopes, open up more opportunities.

Not only would such a program offer more paths to entry, but it would also provide a focused way for the DH community to re-imagine how professional education is conceived, structured and delivered. Reforming humanities education should be fundamental to DH’s broader goal to explore “how the humanities may evolve through their engagement with technology, media, and computational method.” DH already has some compelling educational initiatives, such as the intensive learning-by-doing of One Week One Tool; the immersion offered by programs like Digital Humanities Summer Institute and the NEH’s Institutes for Advanced Topics in the Digital Humanities; the student engagement of participatory learning, such as through crowdsourcing grading; the community-based, flexible learning of THATCamp; the multi-campus, aggregated learning of Looking for Whitman; and the networked conversations facilitated by HASTAC and Twitter. Can we build on these efforts and deliver professional education that engages the global DH community, offers project-based learning, leverages the network, and provides participants with key skills and knowledge?

Education, by Xin Li 88

A certificate program in DH would likely have these features:

  1. Open: Anyone should be able to use and even to create or modify course materials produced for the certificate program. Openness promotes the values of DH and indeed of education (as David Wiley argues, “education is sharing”). Moreover, the DH community can share labor in developing course materials and keeping them dynamic, fresh, and relevant. Julie Meloni has proposed a compelling model to “Develop Self-Paced Open Access DH Curriculum for Mid-Career Scholars Otherwise Untrained” that I think can be extended to early-career scholars as well.
  2. Distributed: Rather than assuming that expertise is concentrated at a particular location, the program would give participants access to experts around the world, who would serve as mentors for projects and online seminars. We can see a model for distributed, open online seminars in massive open online courses such as Dave Cormier and George Siemens’ “Education Futures” and David Wiley’s “Introduction to Open Education.”
  3. Community-focused: Already digital humanists are sharing and building knowledge using social technologies such as Twitter, but this approach should extend to education. Students would regularly participate in online forums and other networked conversations with their cohort group, as well as with the larger community. They could also help to coordinate crowdsourcing efforts such as transcriptions or distributed editions, gaining an embedded knowledge of how networked communities work.
  4. Balanced between making and reflecting: The DH community has been divided by debates over whether its focus should be computation or theory, method or communication. Why not both? Participants could both build–collections, networks, tools, methods–and reflect on the process, significance, and theoretical dimensions of what they have built.
  5. Competency- rather than credit-based: The traditional course structure may be too artificial, driven by the institutional need to cut up learning into semesters and credit hours. Instead, the certificate program could focus on core competencies for digital humanities professionals, such as a basic understanding of programming, knowledge representation, media studies, digitization, networked communication, and the history of digital humanities/humanities computing. Prior knowledge and participation at other DH events–summer institutes, hackfests and the like–should count. Participants in the DH certificate program could demonstrate their competencies through open online portfolios that would be evaluated by the community through an open peer review process.

Much of what I’m proposing has been kicked around in the distance education and open education communities for a while. But I think the DH community can actually pull something like this off. I anticipate that the greatest challenges to implementing such a program would be eliciting participation among the mentors and learning module developers and coming up with an effective model for administering, financing and certifying it. Perhaps a funding organization with an interest in distance education and/or DH could invest in the program to get it started. Perhaps the program could be coordinated by a digital humanities center, library school, collaboration among multiple institutions, or even by a professional organization such as ACH. My next posts will look at existing educational programs in DH, the DH curriculum, approaches to open education, the structure of a proposed program, and strategies for assessment and certification.

Note: Parts of this post come from my abstract for Teaching Digital Humanities.

Examples of Collaborative Digital Humanities Projects

My observation in my last post that humanities scholars rarely jointly author articles should come as no surprise.  As Blaise Cronin writes, “Collaboration—for which co-authorship is the most visible and compelling indicator—is established practice in both the life and physical sciences, reflecting the industrial scale, capital-intensiveness and complexity of much contemporary scientific research. But the ‘standard model of scholarly publishing,’ one that ‘assumes a work written by an author,’ continues to hold sway in the humanities” (24).   Just as I found that only about 2% of the articles published in American Literary History between 2004 and 2008 were co-authored, so Cronin et al. discovered that just 2% of the articles that appeared in the philosophy journal Mind between 1900 and 2000 were written by more than one person, although between 1990 and 2000 that number increased slightly to 4% (Cronin, Shaw, & La Barre).   Whereas the scale of scientific research often requires scientists to collaborate with each other, humanities scholars typically need only something to write with and about.  But as William Brockman et al. suggest, humanities scholars do have their own traditions of collaboration, or at least of cooperation:  “Circulation of drafts, presentation of papers at conferences, and sharing of citations and ideas, however, are collaborative enterprises that give a social and collegial dimension to the solitary activity of writing. At times, the dependence of humanities scholars upon their colleagues can approach joint authorship of a publication” (11).

Information technology can speed and extend the exchange of ideas, as researchers place their drafts online and solicit comments through technologies such as CommentPress, make available conference papers via institutional repositories, and share citations and notes using tools such as Zotero.  Over ten years ago, John Unsworth described an ongoing shift from cooperation to collaboration, indicating perhaps both his prescience and the slow pace of change in academia.

In the cooperative model, the individual produces scholarship that refers to and draws on the work of other individuals. In the collaborative model, one works in conjunction with others, jointly producing scholarship that cannot be attributed to a single author. This will happen, and is already happening, because of computers and computer networks. Many of us already cooperate, on networked discussion groups and in private email, in the research of others: we answer questions, provide references for citations, engage in discussion. From here, it’s a small step to collaboration, using those same channels as a way to overcome geographical dispersion, the difference in time zones, and the limitations of our own knowledge.

The limitations of our own knowledge.  As Unsworth also observes, collaboration, despite the challenges it poses, can open up new approaches to inquiry: “instead of establishing a single text, editors can present the whole layered history of composition and dissemination; instead of opening for the reader a single path through a thicket of text, the critic can provide her with a map and a machete. This is not an abdication of the responsibility to educate or illuminate: on the contrary, it engages the reader, the user, as a third kind of collaborator, a collaborator in the construction of meaning.”  With the interactivity of networked digital environments, Unsworth imagines the reader becoming an active co-creator of knowledge.  Through online collaboration, scholars can divide labor (whether in making a translation, developing software, or building a digital collection), exchange and refine ideas (via blogs, wikis, listservs, virtual worlds, etc.), engage multiple perspectives, and work together to solve complex problems.  Indeed, “[e]mpowering enhanced collaboration over distance and across disciplines” is central to the vision of cyberinfrastructure or e-research (Atkins).  Likewise, Web 2.0 focuses on sharing, community and collaboration.

Work in many areas of the digital humanities seems to both depend upon collaboration and aim to support it.  Out of the 116 abstracts for posters, presentations, and panels given at the Digital Humanities 2008 (DH2008) conference, 41 (35%) include a form of the word “collaboration,” whether they are describing collaborative technologies (“Online Collaborative Research with REKn and PReE”) or collaborative teams (“a collaborative group of librarians, scholars and technologists”).  Likewise, 67 out of 104 (64%) papers and posters presented at DH 2008 have more than one author.  (Both the Digital Humanities conference and LLC tend to focus on the computational side of the digital humanities, so I’d also like to see if the pattern of collaboration holds in what Tara McPherson calls the “multimodal humanities,” e.g. journals such as Vectors.  Given that works in Vectors typically are produced through collaborations between scholars and designers, I’d expect to see a somewhat similar pattern.)

I was having trouble articulating precisely how collaboration plays a role in humanities research until I began looking for concrete examples—and I found plenty.   As computer networks connect researchers to content, tools and each other, we are seeing humanities projects that facilitate people working together to produce, explore and disseminate knowledge.  I interpret the word “collaboration” broadly; it’s a squishy term with synonyms such as teamwork, cooperation, partnership, and working together, and it also calls to mind co-authorship, communication, community, citizen humanities, and social networks.  In Here Comes Everybody, Clay Shirky puts forward a handy hierarchy of collaboration: 1) sharing; 2) cooperation; 3) collaboration; 4) collectivism (Kelly).  In this post, I’ll list different types of computer-supported collaboration in the humanities, note antecedents in “traditional” scholarship, briefly describe example projects, and point to some supporting technologies.  This is an initial attempt to classify a wide range of activity; some of these categories overlap.

–FACILITATING COMMUNICATION AND KNOWLEDGE BUILDING–

ONLINE COMMUNITIES/ VIRTUAL ORGANIZATIONS

  • Historical antecedents: conferences, colloquia, letters
  • Supporting technologies: listservs, online forums, blogs, social networking platforms, virtual worlds, microblogging (e.g. Twitter), video conferencing
  • Key functions: fostering communication and collaboration across a distance
  • Examples:
    • Listservs: Perhaps the most well-known online community in the humanities is H-Net, which was founded in 1992 and thus predates Web 2.0 or even Web 1.0.  According to Mark Kornbluh, H-Net provides an “electronic version of an academic conference, a way for people to come together and to talk about their research and their teaching, to announce what was going on in the field, and to review and critique things that are going on in the field.”  Currently H-Net supports over 100 humanities email lists and serves over 100,000 subscribers in more than 90 countries.  Although H-Net has been criticized for relying on an old technology, the listserv, and is facing economic difficulties, it remains valued for supporting information sharing and discussion.  For digital humanities folks, the Humanist list, launched in 1987, serves as “an international online seminar on humanities computing and the digital humanities” and has played a vital part in the intellectual life of the community.
    • Online forums: HASTAC, “a virtual network, a network of networks” that supports collaboration across disciplines and institutions, sponsors lively forums about technology and the humanities, often moderated by graduate students.  HASTAC also organizes conferences, administers a grant competition, and advocates for “new forms of collaboration across communities and disciplines fostered by creative uses of technology.” In my experience, online communities often break down the hierarchies separating graduate students from senior scholars and bring recognition to good ideas, no matter what the source.
    • Online communities: Since 1996, Romantic Circles (RC) has built an online community focused on Romanticism, not only fostering communication among researchers but also collaboratively developing content.  Romantic Circles includes a blog for sharing information about news and events of interest to the community; a searchable archive of electronic editions; collections of critical essays; chronologies, indices, bibliographies and other scholarly tools; reviews; pedagogical resources; and a MOO (gaming environment).  Over 30 people have served as editors, while over 300 people have contributed reviews and essays.  Alan Liu aptly summarizes RC’s significance: “Romantic Circles, which helped pioneer collaborative scholarship on the Web, has become the leading paradigm for what such scholarship could be. One can point variously to the excellence of its refereed editions of primary texts, its panoply of critical and pedagogical resources, its inventive Praxis series, its state-of-the-art use of technology or its stirring commitment (nearly unprecedented on the Web) to spanning the gap between high-school and research-level tiers of education. But ultimately, no one excellence is as important as the overall, holistic impact of the site. We witness here a broad community of scholars using the new media vigorously, inventively, and rigorously to inhabit a period of historical literature together.”  In building a community that supports digital scholarship, NINES focuses on three main goals: providing peer review for digital scholarship in 19th century American and British studies (thus helping to legitimize and recognize emerging scholarly forms), helping scholars create digital scholarship by providing training and content, and developing software such as Collex and Juxta to support inquiry and collaboration.
    • Advanced videoconferencing: With budgets tight, time scarce, and concern about the environmental costs  of travel increasing, collaborators often need to meet without having to travel.  AccessGrid supports communication among multiple groups by providing high quality video and audio and enabling researchers to share data and scientific instruments seamlessly.  AccessGrid, which was developed by Argonne National Laboratory and uses open source software, employs large displays and multiple projectors to create an immersive environment.   In the arts and humanities, AccessGrid has been used to support “telematic” performances, the study of high resolution images, seminars, and classes.
CollabRoom by Modbob

COLLABORATORIES

  • Historical antecedents: laboratories, research centers
  • Supporting technologies: grid technologies/ advanced networking, large displays, remote instrumentation, simulation software, collaboration platforms such as HubZero, databases, digital libraries
  • Key functions: fostering communication, collaboration, resource sharing, and research regardless of physical distance
  • Examples:

William Wulf coined the term collaboratory in 1989 to describe a “center without walls, in which the nation’s researchers can perform their research without regard to physical location, interacting with colleagues, accessing instrumentation, sharing data and computational resources, [and] accessing information in digital libraries.” Most of the collaboratories listed on the (now somewhat-out-of-date) Science of Collaboratories web site focus on the sciences.  For example, scientific collaboratories such as NanoHub, Space Physics and Astronomy Research Collaboratory (SPARC) and Biomedical Informatics Research Network (BIRN) have supported online data sharing, analysis, and communication.

What would a collaboratory in the humanities do? In the humanities, the term has taken on additional meanings, referring to “a new networked organizational form that also includes social processes; collaboration techniques; formal and informal communication; and agreement on norms, principles, values, and rules” (Cogburn, 2003, via Wikipedia).

“Virtual research environment” seems to be replacing “collaboratory” to refer to online collaborative spaces that provide access to tools and content (e.g. Early Modern Texts VRE, powered by Sakai). Through its funding program focused on Virtual Research Environments, JISC has sponsored the Virtual Research Environment for Archaeology, a VRE for the Study of Documents and Manuscripts, Collaborative Research Events on the Web, and myExperiments for sharing scientific workflows.

–SHARING AND AGGREGATING CONTENT–

DIGITAL MEMORY BANKS/ USER-CONTRIBUTED CONTENT

  • Historical antecedents: museums, archives, personal collections
  • Supporting technologies: Web publishing platforms (e.g. Omeka, Drupal), databases
  • Key functions: “collecting & exhibiting” content (to borrow from CHNM)
  • Examples:
    When the Valley of the Shadow project was launched in the 1990s, project team members went into communities in Pennsylvania and Virginia to digitize 19th century documents held by families in personal collections, thus building a virtual archive.  As scanners and digital cameras have become ubiquitous and user-contributed content sites such as Flickr and YouTube have taken off, people can contribute their own digital artifacts to online collections.  For example, The Hurricane Digital Memory Bank collects over 25,000 stories, images, and other multimedia files about Hurricanes Katrina and Rita.  Using a simple interface, people can upload items and describe the title, keywords, geographic location, and contributor.  The archive thus becomes a dynamic, living repository of current history, a space where researchers and citizens come together—or, in the terminology of the Center for History and New Media (CHNM), a memory bank that “promote[s] popular participation in presenting and preserving the past.”  As the editors of Vectors write in their introduction to “Hurricane Digital Memory Bank: Preserving the Stories of Katrina, Rita, and Wilma,” “Their work troubles a number of binaries long reified by history scholars (and humanities scholars more generally), including one/many, closed/open, expert/amateur, scholarship/journalism, and research/pedagogy.”  CHNM also sponsors digital memory banks focused on Mozilla, September 11, and the Virginia Tech tragedy.  Likewise, the Great War Archive, sponsored by the University of Oxford, contains over 6,500 items about World War I contributed by the public.

CONTENT AGGREGATION AND INTEGRATION

  • Historical antecedents: museums, archives
  • Supporting technologies: databases, open standards
  • Key functions: making it easier to discover, share and use information
  • Examples:
    Too often digital resources reside in silos, as each library or archive puts up its own digital collection.  As a result, researchers must spend more time identifying, searching, and figuring out how to use relevant digital collections.  However, some projects are shifting away from a siloed approach and bringing together collaborators to build digital collections focused on a particular topic or to develop interoperable, federated digital collections.  For instance, the Alliance for American Quilts, MATRIX: Center for Humane Arts, Letters and Social Sciences Online, and Michigan State University Museum have created the Quilt Index, which makes available images and descriptions of quilts provided by 14 contributors, including The Library of Congress American Folklife Center and the Illinois State Museum.  As Mark Kornbluh argues, interoperable content enables new kinds of inquiry: “In the natural sciences, large new datasets, powerful computers, and a rich array of computational tools are rapidly transforming knowledge generation. For the same to occur in the humanities, we need to understand the principle that ‘more is better.’ Part of what the computer revolution is doing is that it is letting us bring huge volumes of material under control. Cultural artifacts have always been held by separate institutions and separated by distance. Large–scale interoperable digital repositories, like the Quilt Index, open dramatically new possibilities to look at the totality of cultural content in ways never before possible.” Other examples of content aggregation and integration projects include the Walt Whitman Archive’s Finding Aids for Poetry Manuscripts and NINES.
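Aggregations like the Quilt Index typically depend on a harvesting protocol such as OAI-PMH, in which each contributing repository exposes its metadata (often as unqualified Dublin Core) for a central service to collect and merge. As a rough sketch, with the XML response and record values invented for illustration, an aggregator's harvesting step might parse a ListRecords response like this:

```python
import xml.etree.ElementTree as ET

# Qualified-name prefix for unqualified Dublin Core elements.
DC = "{http://purl.org/dc/elements/1.1/}"

# A miniature ListRecords response; a real aggregator would fetch this
# from a repository's OAI-PMH endpoint. The record content is invented.
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Crazy Quilt, 1884</dc:title>
          <dc:creator>Unknown maker</dc:creator>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def harvest_titles(xml_text):
    """Pull the Dublin Core titles out of a ListRecords response."""
    root = ET.fromstring(xml_text)
    return [title.text for title in root.iter(DC + "title")]
```

Because every repository maps its records to the same minimal schema, the aggregator can index titles (or creators, dates, and subjects) from many institutions in one pass, which is what makes federated search across silos possible.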

DATA SHARING

  • Historical antecedents: informal exchange of data
  • Supporting technologies: databases (MySQL, etc.), web services tools
  • Key functions: support research by enabling discovery and reuse of data sets
  • Example projects:
    By sharing data, researchers can enable others to build on their work and provide transparency.  As Christine Borgman writes, “If related data and documents can be linked together in a scholarly information infrastructure, creative new forms of data- and information-intensive, distributed, collaborative, multidisciplinary research and learning become possible.  Data are outputs of research, inputs to scholarly publications, and inputs to subsequent research and learning.  Thus they are the foundation of scholarship” (Borgman 115).  Of course, there are a number of problems bound up in data sharing—how to ensure participation, make data discoverable through reliable metadata, balance flexibility in accepting a range of formats and the need for standardization, preserve data for the long term, etc.  Several projects focused on humanities and social science data are beginning to confront at least some of these challenges:

    • Open Context “hopes to make archaeological and related datasets far more accessible and usable through common web-based tools.”  Embracing open access and collaboration, Open Context makes it easy for researchers to upload, search, tag and analyze archaeological datasets.
    • Through Open Street Map, people freely and openly share and use geographic data in a wiki-like fashion.  Contributors employ GPS devices to record details about places such as the names of roads, then upload this information to a collaborative database.  The data is used to create detailed maps that have no copyright restrictions (unlike most geographical data).
    • Through the Reading Experience Database researchers can contribute records of British readers engaging with texts.
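The Open Street Map example hints at what shared geographic data actually looks like: contributors upload nodes (points) and ways (ordered lists of nodes) annotated with key/value tags. As a minimal sketch, with the coordinates and street name invented, extracting named roads from an OSM-style XML fragment might look like this:

```python
import xml.etree.ElementTree as ET

# A tiny fragment in the style of contributor-uploaded OSM data;
# the ids, coordinates, and street name are invented.
OSM_XML = """<osm version="0.6">
  <node id="1" lat="51.5074" lon="-0.1278"/>
  <node id="2" lat="51.5075" lon="-0.1280"/>
  <way id="10">
    <nd ref="1"/>
    <nd ref="2"/>
    <tag k="highway" v="residential"/>
    <tag k="name" v="Example Street"/>
  </way>
</osm>"""

def named_roads(xml_text):
    """Return the names of ways tagged as highways."""
    root = ET.fromstring(xml_text)
    roads = []
    for way in root.iter("way"):
        tags = {t.get("k"): t.get("v") for t in way.iter("tag")}
        if "highway" in tags and "name" in tags:
            roads.append(tags["name"])
    return roads
```

Because the format is this simple and openly documented, anyone can both contribute data and build new map renderings or analyses on top of the collective database.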

–COLLABORATIVE ANNOTATION, TRANSCRIPTION, AND KNOWLEDGE PRODUCTION–

CROWDSOURCING TRANSCRIPTION

  • Historical antecedents: genealogical research(?)
  • Supporting technologies: wikis
  • Key functions: share the labor required for transcribing manuscripts
  • Examples:
    Much of the historical record is not yet accessible online because it exists as handwritten documents—letters, diaries, account books, legal documents, etc.  Although work is underway on Optical Character Recognition software for handwritten materials, making these variable documents searchable and easy to read usually still requires a person to manually transcribe the document.  Why not enable people to collaborate to make family documents and other manuscripts available through commons-based peer production? At THATCamp last year, I learned about Ben Brumfield’s FromthePage software, which enables volunteers to transcribe handwritten documents through a web-based interface.  The right side of the interface shows a zoomable image of the page, while on the left volunteers enter the transcription through a wiki-like interface.  Likewise, the FamilySearch Indexing Project, sponsored by the LDS Church, recruits volunteers to transcribe family information from historical documents.   (See Jeanne Kramer-Smyth’s great account of the THATCamp session on crowdsourcing transcription and annotation.)  Not only can collaborative transcription be more efficient, but it can also reduce error.
Martha Nell Smith recounts how she, working solo at the Houghton, transcribed a line of Susan Dickinson’s poetry as “I’m waiting but the cow’s not back.”  When her collaborators at the Dickinson Electronic Archives, Lara Vetter and Laura Lauth, later compared the transcriptions to digital images of Dickinson’s manuscripts, they discovered that the line actually says “I’m waiting but she comes not back.”  As Smith suggests, “Had we not been working in concert with one another, and had we not had the high quality reproductions of Susan Dickinson’s manuscripts to revisit and thereby perpetually reevaluate our keys to her alphabet, my misreading might have been congealed in the technology of a critical print translation and what is very probably a poetic homage to Emily Dickinson would have lain lost in the annals of literary history” (Smith 849).

    Efforts to crowdsource transcription seem similar to the distributed proofreading that powers Project Gutenberg, which has enlisted volunteers to proofread over 15,000 books since 2000.  Likewise, Project Madurai is using distributed proofreading to build a digital library of Tamil texts.
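What makes these wiki-like tools safer than solo transcription is that every volunteer's reading is kept, so collaborators can compare and correct earlier attempts rather than overwrite them (which is exactly what saved the Dickinson line above). A minimal sketch of that revision-keeping data model, with class and method names of my own invention rather than any actual tool's API:

```python
class TranscribablePage:
    """One manuscript page image plus its full transcription history."""

    def __init__(self, image_url):
        self.image_url = image_url
        self.revisions = []  # (volunteer, text) tuples, oldest first

    def transcribe(self, volunteer, text):
        """Record a new transcription attempt without discarding old ones."""
        self.revisions.append((volunteer, text))

    def current_text(self):
        """The latest reading is displayed, but earlier readings remain
        available for comparison and re-evaluation."""
        return self.revisions[-1][1] if self.revisions else ""

# A later collaborator corrects an earlier misreading; both versions
# survive in the history for anyone to revisit against the page image.
page = TranscribablePage("scan-042.jpg")
page.transcribe("reader1", "I'm waiting but the cow's not back")
page.transcribe("reader2", "I'm waiting but she comes not back")
```

The design choice worth noting is the append-only history: in a crowdsourced setting, provenance (who read what, and when) is as valuable as the text itself.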

COLLABORATIVE TRANSLATION

  • Historical antecedents: translation teams, e.g. Pevear and Volokhonsky
  • Supporting technologies: wikis, blogs, machine translation supplemented by human intervention
  • Examples:
    Rather than requiring an individual to undertake the time-intensive work of translating a complex classical text solo, the Suda Online (SOL)  brings together classicists to collaborate in translating into English the Suda, a tenth century encyclopedia of ancient learning written by a committee of Byzantine scholars (and thus itself a collaboration).  In addition to providing translations, SOL also offers commentaries and references, so it serves as a sort of encyclopedic predecessor to Wikipedia.  As Anne Mahoney reports in a recent article from Digital Humanities Quarterly, an email exchange in 1998 sparked the Suda Online; one scholar wondered whether there was an English translation of the Suda (there wasn’t) and others recognized that a translation could be produced through web-based collaboration.  Student programmers at the University of Kentucky quickly developed the technological infrastructure for SOL (a wiki might have been used today, but the custom application has apparently served its purpose well).  Now a self-organizing team of 61 editors and 95 translators from 12 countries has already translated over 21,000 entries, about 2/3 of the total.  Translators make the initial translations, which are then reviewed and augmented by editors (typically classics faculty) and given a quality rating of “draft,” “low,” or “high.”   All who worked on the translation are credited through a sort of open peer review process.  Whereas collaborative projects such as Wikipedia are open to anyone, SOL translators must register with the project.  Mahoney suggests that the collaboration has succeeded in part because it was focused and bounded, so that collaborators could feel the satisfaction of working toward a common goal and meeting milestones, such as 100 entries translated.  According to Mahoney, SOL has made this important text more accessible by offering an English version, making it searchable, and providing commentaries and references.  
Moreover, “[a]s a collaboration SOL demonstrates the feasibility of open peer review and the value of incremental progress.” Other collaborative translation projects include The Encyclopédie of Diderot and d’Alembert, Traduwiki, which aims to “eliminate the last barrier of the Internet, the language”; the WorldWide lexicon project; and Babels.

COLLABORATIVE EDITING

  • Historical antecedents: creating critical editions
  • Supporting technologies: grid computing, XML editors, text analysis tools, annotation tools
  • Example Projects:

As Peter Robinson observed at this year’s MLA, the traditional model for creating a critical edition centralizes authority in an editor, who oversees work by graduate assistants and others.  However, the Internet enables distributed, de-centralized editing.  To create “community-made editions,” a library would digitize texts and produce high quality images, researchers would transcribe those images, others would collate the transcriptions, others would analyze the collations and add commentaries, and so forth.

Explaining the need for collaborative approaches to textual editing, Marc Wilhelm Küster, Christoph Ludwig and Andreas Aschenbrenner of TextGrid describe how three different editors attempted to create a critical edition of the massive “so-called pseudo-capitulars supposedly written by a Benedictus Levita,” each dying before he could complete the work.  Now a team of scholars is collaborating to create the edition, increasing their chances of completion by sharing the labor.  The TextGrid project is building a virtual workbench for collaborative editing, annotation, analysis and publication of texts.  Leveraging the grid infrastructure, TextGrid provides a platform for “software agents with well-defined interfaces that can be harnessed together through a user defined workflow to mine or analyze existing textual data or to structure new data both manually and automatically.” TextGrid recently released a beta version of its client application that includes an XML editor, search tool, dictionary search tool, metadata annotator, and workflow modules. As Küster, Ludwig and Aschenbrenner point out, enabling collaboration requires not only developing a technical platform that supports real-time collaboration and automation of routine tasks, but also facilitating a cultural shift toward collaboration among philologists, linguists, historians, librarians, and technical experts.

SOCIAL BIBLIOGRAPHIES, COLLABORATIVE FILTERING, AND ANNOTATION

  • Historical antecedents: shared references, bibliographies
  • Key functions: share citations, notes, and scholarly resources; build collective knowledge
  • Supporting technologies: social bookmarking, bibliographic tools
  • Projects:
    With the release of Zotero 2.0, Zotero is taking a huge step toward the vision articulated by Dan Cohen of providing access to "the combined wisdom of hundreds of thousands of scholars" (Cohen).  Researchers can set up groups to share collections with a class and/or collaborators on a research project.  I've already used Zotero groups to support my research and to collaborate with others; I discovered several useful citations in the collaboration folder for the digital history group, and with Sterling Fluharty I've set up a group to study collaboration in the digital humanities (feel free to join).  Ultimately Zotero will provide Amazon-like recommendation services to help scholars identify relevant resources.  As Stan Katz wrote in hailing Zotero's collaboration with the Internet Archive to create a "Zotero commons" for sharing research documents, "For secretive individualists, which is to say old-fashioned humanists, this will sound like an invasion of privacy and an invitation to plagiarism. But to scholars who value accessibility, collaboration, and the early exchange of information and insight – the future is available. And free on the Internet."

    Similarly, the eComma project suggests that collaborative annotation can facilitate collaborative interpretation, as readers catalog poetic devices (personification, enjambment, etc.) and offer their own interpretations of literary works.  You can see eComma at work in the Collaborative Rubáiyát, which enables users to compare different versions of the text, annotate the text, tag it, and access sections through a tag cloud.  Likewise, Philospace will allow scholars to describe philosophical resources, filter them, find resources tagged by others, and submit resulting research for peer review. Other projects and technologies supporting collaborative annotation include Flickr Commons, Aus-e-Lit: Collaborative Integration and Annotation Services for Australian Literature Communities, NINES' Collex, and STEVE.

COLLABORATIVE WRITING

  • Historical antecedents: Encyclopedias
  • Supporting technologies: Wikis
  • Key functions: sharing knowledge, synthesizing multiple perspectives
  • Examples:
    With the rise of Wikipedia, academics have been debating whether collaborative writing spaces such as wikis undermine authority, expertise, and trustworthiness.  In "Literary Sleuths Online," Ralph Schroeder and Matthijs Den Besten examine the Pynchon Wiki, a collaborative space where Pynchon enthusiasts annotate and discuss his works.  Schroeder and Den Besten compare the wiki's section on Pynchon's Against the Day with a print equivalent, Weisenburger's A Gravity's Rainbow Companion.  While the annotations in Weisenburger's book are more concise and consistent, the wiki is more comprehensive, more accurate (because many people are checking the information), and more speedily produced (it took only three months for the wiki to cover every page of Pynchon's novel).  Moreover, the book is fixed, while the wiki is open-ended and expansive. Schroeder and Den Besten suggest that competition, community and curiosity drive participation, since contributors raced to add annotations as they made their way through the novel and "sleuthed" together.

GAMING: “Collaborative Play”/ Games as Research

  • Historical antecedents: role playing games, board games, etc.
  • Key functions: problem solving, teamwork, knowledge sharing
  • Supporting technologies: gaming engines, wikis, networks
  • Example Projects:
    Perhaps some of the most intense collaboration comes in massively multiplayer online games, as teams of players consult each other for assistance navigating virtual worlds, team up to defeat monsters, join guilds to collaborate on quests, and share their knowledge through wikis such as the WoWWiki, which has almost 74,000 articles.  Focusing on World of Warcraft, Nardi and Harris explore collaborative play as a form of learning.  They also point to potential applications of gaming in research communities: "Mixed collaboration spaces, whether MMOGs or another format, may be useful in domains such as interdisciplinary scientific work where a key challenge is finding the right collaborators."

    Sometimes those collaborators can be people without specialized training.  Recently Wired featured a fascinating article about FoldIt, a protein-folding game that is attracting devoted teams of participants (Bohannon).  The game was devised by the University of Washington's Departments of Computer Science & Engineering and Biochemistry to crowdsource solutions for the Critical Assessment of Techniques for Protein Structure Prediction (CASP), a community-wide scientific contest to predict protein structures.  Previously biochemist David Baker had used Rosetta@home to harness the spare computing cycles of 86,000 PCs that had been volunteered to help determine the shapes of proteins, but he was convinced that human intelligence as well as computing power needed to be tapped to solve spatial puzzles.  Thus he and his colleagues developed a game in which players fold proteins into their optimal shapes, a sort of "global online molecular speed origami." Over 100,000 people have downloaded the game, and a 13-year-old is one of the game's best players. Using the game's chat function, players formed teams, "and collective efforts proved far more successful than any solo folder."  At the CASP competition, 7 of the 15 solutions contributed through FoldIt worked, and one finished in first place, so "[a] band of gamer nonscientists had beaten the best biochemists."

    How might gaming be used to motivate and support humanities research?  As we see in the example of FoldIt, games provide motivation and a structure for collaboration; teamwork enables puzzles to be solved more rapidly.  I could imagine, for example, a game in which players would transcribe pieces of a diary to unravel the mystery it recounts, describe the features of a series of images (similar to Google's Image Labeler game), or offer up their own interpretations of abstruse philosophical or literary passages.  In "Games of Inquiry for Collaborative Concept Structuring," Mary A. Keeler and Heather D. Pfeiffer envision a "Manuscript Reconstruction Game (MRG)" where Peirce scholars would collaborate to figure out where a manuscript page belongs. "The scholars rely on the mechanism of the game, as a logical editor or 'logical lens,' to help them focus on and clarify the complexities of inference and conceptual content in their collaborative view of the manuscript evidence" (407).  There are already some compelling models for humanities game play.  Dan Cohen recently used Twitter to crowdsource solving a historical puzzle. Ian Bogost and collaborators are investigating the intersections between journalism and gaming.  Jerome McGann describes Ivanhoe as an "online playspace… for organizing collaborative interpretive investigations of traditional humanities materials of any kind," as two or more players come together to re-imagine and transform a literary work (McGann).

PUBLISHING

  • Historical antecedents: exchange of drafts, letters, critical dialogs in journals
  • Supporting technologies and protocols: CommentPress, blogs, wikis, Creative Commons licenses, etc.
  • Projects:
    Bob Stein defines the book as "a place where readers (and sometimes authors) congregate." Recent projects enable readers to participate in all phases of the publishing process, from peer-to-peer review to remixing a work to produce something new.  For instance, LiquidPub aims to transform the dissemination and evaluation of scientific knowledge by enabling "Liquid Publication that can take multiple forms, that evolves continuously, and is enriched by multiple sources."  Using CommentPress, Noah Wardrip-Fruin experimented with peer-to-peer review of his new book Expressive Processing alongside traditional peer review, posting a section of the book each weekday to the Grand Text Auto blog.  Although it was difficult for many reviewers to get a sense of the book's overall arguments when they were reading only fragments, Wardrip-Fruin found many benefits to this open approach to peer review: he could engage in conversation with his reviewers and determine how to act on their comments, and he received detailed comments from both academics and non-academics with expertise in the topics being discussed, such as game designers.  Similarly, O'Reilly recently developed the Open Publishing Feedback System to gather comments from the community.  Its first experiment, Programming Scala, yielded over 7,000 comments from nearly 750 people. New publishing companies such as WeBook and Vook are exploring collaborative authorship and multimedia.

SOCIAL LEARNING

  • Historical antecedents: Students as research assistants?
  • Supporting technologies: blogs, wikis, social bookmarking, social bibliographies
  • Motto: “We participate, therefore we are.” (via John Seely Brown)
  • Example:
    As John Seely Brown explains, “social learning is based on the premise that our understanding of content is socially constructed through conversations about that content and through grounded interactions, especially with others, around problems or actions.”  Social learning involves “learning to be” an expert through apprenticeship, as well as learning the content and language of a domain.  Brown points to open source communities as exemplifying social learning.  I would guess that many, if not most, collaborative digital humanities projects have depended on contributions from undergraduate and graduate students, whether they digitized materials, did programming, authored metadata, contributed to the project wiki, designed the web site, or even managed the project.

    Why not create a network of research projects, so that students studying a similar topic could jointly contribute to a common resource?  Such is the vision of "Looking for Whitman: The Poetry of Place in the Life and Work of Walt Whitman," led by Matthew Gold.  Working together to build a common web site on Whitman, students will document their research using Web 2.0 technologies such as CommentPress, BuddyPress (WordPress plus social networking), blogs, wikis, YouTube, Flickr, Google Maps, etc.  Students at City Tech (CUNY's New York City College of Technology) and New York University will focus on Whitman in New York; those at Rutgers University at Camden will look at Whitman as "sage of Camden"; and those at the University of Mary Washington will examine Whitman and the Civil War.  Similarly, Michael Wesch, the 2008 CASE/Carnegie U.S. Professor of the Year for Doctoral and Research Universities, asks his students to become "co-creators" of knowledge, whether in simulating world history and cultures, creating an ethnography of YouTube, or examining anonymity and new media.

While collaboration in the humanities is certainly not new, these projects suggest how researchers (both professional and amateur) can work together regardless of physical location to share ideas and citations, produce translations or transcriptions, and create common scholarly resources.  Long as this list is, I know I’m omitting many other relevant projects (some of which I’ve bookmarked) and overlooking (for now) the challenges that collaborative scholarship faces.  I’ll be working with several collaborators to explore these issues, but I of course welcome comments….

Works Cited

Atkins, Dan. Report of the National Science Foundation Blue-Ribbon Advisory Panel on Cyberinfrastructure. NSF. January 2003. <http://www.nsf.gov/od/oci/reports/toc.jsp>.
Bohannon, John. “Gamers Unravel the Secret Life of Protein.” Wired 20 Apr 2009. 26 May 2009 <http://www.wired.com/medtech/genetics/magazine/17-05/ff_protein?currentPage=all>.
Borgman, Christine L. Scholarship in the Digital Age: Information, Infrastructure, and the Internet. Cambridge, Mass.: The MIT Press, 2007.
Brockman, William et al. Scholarly Work in the Humanities and the Evolving Information Environment. CLIR/DLF, 2001. 24 Jul 2007 <http://www.clir.org/PUBS/reports/pub104/pub104.pdf>.
Cohen, Daniel J. “Zotero: Social and Semantic Computing for Historical Scholarship.” Perspectives (2007). 27 May 2009 <http://www.historians.org/perspectives/issues/2007/0705/0705tec2.cfm>.
Cronin, Blaise, Debora Shaw, and Kathryn La Barre. “A cast of thousands: Coauthorship and subauthorship collaboration in the 20th century as manifested in the scholarly journal literature of psychology and philosophy.” Journal of the American Society for Information Science and Technology 54.9 (2003): 855-871.
Cronin, Blaise. The hand of science. Scarecrow Press, 2005.
Kelly, Kevin. “The New Socialism: Global Collectivist Society Is Coming Online.” Wired 22 May 2009. 26 May 2009 <http://www.wired.com/culture/culturereviews/magazine/17-06/nep_newsocialism?currentPage=all>.
Kornbluh, Mark. "From Digital Repositories to Information Habitats: H-Net, the Quilt Index, Cyber Infrastructure, and Digital Humanities." First Monday 13.8 (4 Aug 2008).
Küster, M.W., C. Ludwig, and A. Aschenbrenner. "TextGrid as a Digital Ecosystem." Digital EcoSystems and Technologies Conference, 2007. DEST '07. Inaugural IEEE-IES. 2007. 506-511.
Mahoney, Anne. “Tachypaedia Byzantina: The Suda On Line as Collaborative Encyclopedia.”  Digital Humanities Quarterly. 3.1 (2009). 22 Mar 2009 <http://www.digitalhumanities.org/dhq/vol/003/1/000025.html>.
McGann, Jerome J. “Culture and Technology: The Way We Live Now, What Is to Be Done?.” New Literary History 36.1 (2005): 71-82.
Nardi, Bonnie, and Justin Harris. "Strangers and Friends: Collaborative Play in World of Warcraft." Proceedings of the 2006 20th anniversary conference on Computer supported cooperative work. Banff, Alberta, Canada: ACM, 2006. 149-158. 18 May 2009 <http://portal.acm.org/citation.cfm?id=1180875.1180898>.
O’Donnell, Daniel Paul. “Disciplinary Impact and Technological Obsolescence in Digital Medieval Studies.” A Companion To Digital Humanities. 2 May 2009 <http://www.digitalhumanities.org/companion/view?docId=blackwell/9781405148641/9781405148641.xml&chunk.id=ss1-4-2&toc.id=0&brand=9781405148641_brand>.
Schroeder, Ralph, and Matthijs Den Besten. “Literary Sleuths On-line: e-Research collaboration on the Pynchon Wiki.” Information, Communication & Society 11.2 (2008): 167-187.
Smith, Martha Nell. “Computing: What Has American Literary Study To Do with It.” American Literature 74.4 (2002): 833-857.
Unsworth, John M. “Creating Digital Resources: the Work of Many Hands.” 14 Sep 1997. 10 Mar 2009 <http://www3.isrl.uiuc.edu/%7Eunsworth/drh97.html>.

Revisions: fixed From the Page link (6/1/09); changed "Tanya" to "Tara" (6/2/09); fixed typos (6/14/09)

Collaborative Authorship in the Humanities

Recently I heard the editors of a history journal and a literature journal say that they rarely published articles written by more than one author—perhaps a couple every few years.   Around the same time, I was looking over a recent issue of Literary and Linguistic Computing and noticed that it included several jointly-authored articles.  This got me wondering:  is collaborative authorship more common in digital humanities than in “traditional” humanities?

“Collaboration” is often associated with “digital humanities.”  Building digital collections, creating software, devising new analytical methods, and authoring multimodal scholarship typically cannot be accomplished by a solo scholar; rather, digital humanities projects require contributions from people with content knowledge, technical skills, design skills, project management experience, metadata expertise, etc.  Our Cultural Commonwealth identifies enabling collaboration as a key feature of the humanities cyberinfrastructure, funders encourage multi-institutional and even international teams, and proponents of increased collaboration in the humanities like Cathy Davidson and Lisa Ede and Andrea A. Lunsford cite digital humanities projects such as Orlando as exemplifying collaborative possibilities.

As a preliminary investigation, I compared the number of collaboratively-written articles published between 2004 and 2008 in two well-respected quarterly journals, American Literary History (ALH) and Literary and Linguistic Computing (LLC).  Both journals are published by Oxford University Press as part of its humanities catalog. I selected ALH because it is a leading journal on American literature and culture that encourages critical exchanges and interdisciplinary work—and because I thought it would be fun to see what the journal has published since 2004. (The hardest part of my research: resisting the urge to stop and read the articles.)  LLC, the official publication of the Association for Literary and Linguistic Computing and the Association for Computers and the Humanities, includes contributions on digital humanities from around the world—the UK, the US, Germany, Australia, Greece, Italy, Norway, etc.—and from many disciplines, such as literature, linguistics, computer and information science, statistics, librarianship, and biochemistry.  To determine the level of collaborative authorship in each issue, I tallied articles that had more than one author, excluding editors’ introductions, notes on contributors, etc.  For LLC, I counted everything that had an abstract as an article.  While I didn’t count LLC’s reviews, which typically are brief and focus on a single work, I did include the review essays published by ALH, since they are longer and synthesize critical opinion about several works.

So what did I find? Whereas 5 of 259 (1.93%) articles published in ALH (about one a year) featured two authors (none had more than two), 70 of 145 (48.28%) of the articles published in LLC were written by two or more authors.  Most (4 of 5, or 80%) of the ALH articles were written by scholars from multiple institutions, compared with 49% (34 of 70) of the LLC articles.  About 16% (11 of 70) of the LLC articles featured contributors from two or more countries, while none of the ALH articles did.  Two of the five ALH articles are review essays, while three focus on hemispheric or transatlantic American studies.  Although this study should be carried out more systematically across a wider range of journals, the initial results do suggest that collaborative authorship is more common in digital humanities. [See the Zotero reports for ALH and LLC for more information.]
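For transparency, the percentages above can be recomputed from the raw tallies. This is just a back-of-the-envelope sketch (the `rate` helper is mine, not part of the study); the counts are the ones reported in this post, and the in-text figures of 49% and 16% are rounded from 48.57% and 15.71%.

```python
def rate(part, whole):
    """Percentage of articles, rounded to two decimal places."""
    return round(100 * part / whole, 2)

# Collaboratively authored articles, 2004-2008
alh_collab = rate(5, 259)    # ALH: 1.93% (about one a year)
llc_collab = rate(70, 145)   # LLC: 48.28%

# Of those collaborative articles, multi-institution and multi-country shares
alh_multi_inst = rate(4, 5)       # ALH: 80.0%
llc_multi_inst = rate(34, 70)     # LLC: 48.57% (~49%)
llc_multi_country = rate(11, 70)  # LLC: 15.71% (~16%)
```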

Why does LLC feature more collaboratively written articles than ALH? I suspect it is because, as I've already suggested, digital humanities projects often require collaboration, whereas most literary criticism can be produced by an individual scholar who needs only texts to read, a place to write, and a computer running a word processing application (as well as a library to provide access to texts, colleagues to consult and to review the resulting research, a university and/or funding agency to support the research, a publisher to disseminate the work, etc.).  Moreover, LLC represents a sort of meeting point for a range of disciplines, including several (such as computer science) that have a tradition of collaborative authorship.  Whereas collaborative authorship is common (even expected) in the sciences, in the humanities many tenure and promotion committees have not yet developed mechanisms for evaluating and crediting collaborative work. In a recent blog post, for example, Cathy Davidson tells a troubling story about being told (in a public and humiliating way) by a member of a search committee that her collaborative work and other "non-traditional" research didn't "count."  Literary study values individual interpretation, or what Davidson calls "the humanistic ethic of individuality."

While individual scholarship remains valid and important, shouldn't humanities scholarship expand to embrace collaborative work as well?  Indeed, in 2000 the MLA launched an initiative to consider "alternatives to the adversarial academy" and encourage collaborative scholarship.  (By the way, I'm not criticizing ALH; I doubt that it receives many collaboratively-authored submissions, and it has encouraged critical exchange and interdisciplinary research.)  Of course, collaboration poses some significant challenges, such as divvying up and managing work, negotiating conflicts, finding funding for complex projects, assigning credit, etc.  But as Lisa Ede and Andrea A. Lunsford point out, collaborative authorship can lead to a "widening of scholarly possibilities."  In talking to humanities scholars (particularly those in global humanities), I've noticed genuine enthusiasm about collaborative work that allows scholars to engage in community, consider alternative perspectives, and undertake ambitious projects that require diverse skills and/or knowledge.

What kind of collaborations do the jointly-written articles in LLC and ALH represent? Since LLC often lists only the authors’ institutional affiliations, not their departments, tracing the degree of interdisciplinary collaboration would require further research.  However, I did find examples of several types of collaboration (which may overlap):

  • Faculty/student collaboration: In the sciences, faculty frequently publish with their postdocs and students, a practice that seems to be rare in the humanities.  I noted at least one example of a similar collaboration in LLC—involving, I should note, computer science rather than humanities grad students.
    • Urbina, Eduardo et al. “Visual Knowledge: Textual Iconography of the Quixote, a Hypertextual Archive.” Lit Linguist Computing 21.2 (2006): 247-258. 5 Apr 2009 <http://llc.oxfordjournals.org/cgi/content/abstract/21/2/247>.
      This article includes contributions by a professor of Hispanic studies, a professor of computer science, a librarian/archivist/adjunct English professor, and three graduate students in computer science.
  • Project teams: In digital humanities, collaborators often work together on projects to build digital collections, develop software, etc.  In LLC, I found a number of articles written by project teams, such as:
    • Barney, Brett et al. “Ordering Chaos: An Integrated Guide and Online Archive of Walt Whitman’s Poetry Manuscripts.” Lit Linguist Computing 20.2 (2005): 205-217. 5 Apr 2009 <http://llc.oxfordjournals.org/cgi/content/abstract/20/2/205>.
      Members of the project team included an archivist, programmer, digital initiatives librarian, English professor, and two English Ph.Ds who serve as library faculty and focus on digital humanities.
  • Interdisciplinary collaborations: In LLC, I noted several instances of teams that included humanities scholars and scientists working together to apply particular methods (text mining, stemmatic analysis) in the humanities.  For example:
    • Windram, Heather F. et al. "Dante's Monarchia as a test case for the use of phylogenetic methods in stemmatic analysis." Lit Linguist Computing 23.4 (2008): 443-463. 5 Apr 2009 <http://llc.oxfordjournals.org/cgi/content/abstract/23/4/443>.  The authors include two biochemists, a textual scholar, and a scholar of Italian literature.
    • Sculley, D., and Bradley M. Pasanek. “Meaning and mining: the impact of implicit assumptions in data mining for the humanities.” Lit Linguist Computing 23.4 (2008): 409-424. 5 Apr 2009 <http://llc.oxfordjournals.org/cgi/content/abstract/23/4/409>.
      Authored by a computer scientist and a literature professor.
  • Shared interests: Researchers may publish together because they share an intellectual kinship and can accomplish more by working together.  For instance:
    • Auerbach, Jonathan, and Lisa Gitelman. “Microfilm, Containment, and the Cold War.” American Literary History 19.3 (2007).  I noticed that Jonathan Auerbach and Lisa Gitelman thank each other in works that each had previously published as an individual.

Observing that LLC publishes a number of collaboratively-written articles opens up several questions, which I hope to pursue through interviews with the authors of at least some of these articles (if you are one of these authors, you may see an email from me soon….):

1)    What characterizes the LLC articles that have only one author?
Based on a quick look at the tables of contents from past issues, I suspect that these articles are more likely to be theoretical or to focus on particular problems rather than projects.  Here, for example, are the titles of some singly-authored articles:  “The Inhibition of Geographical Information in Digital Humanities Scholarship,” “Monkey Business—or What is an Edition?,” “What Characterizes Pictures and Text?” and “Original, Authentic, Copy: Conceptual Issues in Digital Texts.”

2)    Why was the article written collaboratively?

What led to the collaboration?  Did team members offer complementary skill sets, such as knowledge of statistical methods and understanding of the content? How did the collaborators come together—do they work for the same institution? Did they meet at a conference? Do they cite each other?

3)    What were the outcomes of the collaboration?

What was accomplished through collaboration that would have been difficult to do otherwise?  Would the scale of the project be smaller if it were pursued by a single scholar? Did the project require contributions from people with different types of expertise?

4)    How was the collaboration managed and sustained?

Was one person in charge, or was authority distributed? What tools were used to facilitate communication, track progress on the project, and support collaborative writing? To what degree was face-to-face interaction important?

5)    What was difficult about the collaboration?

What was hard about collaborating: Communicating? Identifying who does what? Agreeing on methods? Coming to a common understanding of results? Finding funding?

We can find answers to some of these questions in Lynne Siemens' recent article "'It's a team if you use "reply all"': An exploration of research teams in digital humanities environments."  Siemens describes factors contributing to the success of collaborative teams in digital humanities, such as clear milestones and benchmarks, strong leadership, equal contributions by members of the team, and a balance between communication through digital tools and in-person meetings.  I particularly liked the description of "a successful team as a 'round thing' with equitable contribution by individual members."

In doing this research, I realized how much it would benefit from collaborators.  For instance, someone with expertise in citation analysis could help enlarge the study and detect patterns in collaborative authorship, while someone with expertise in qualitative research methods could help to interview collaborative research teams and analyze the resulting data.  However, I think anyone with an interest in the topic could make valuable contributions.  This is by way of leading up to my pitch: I’m working on a piece about collaborative research methods in digital humanities for an essay collection and would welcome collaborators.  If you’re interested in teaming up, contact me at lspiro@rice.edu.

Works Cited

Davidson, Cathy N. “What If Scholars in the Humanities Worked Together, in a Lab?.” The Chronicle of Higher Education 28 May 1999. 18 Apr 2009 <http://chronicle.com/weekly/v45/i38/38b00401.htm>.

Ede, Lisa, and Andrea A. Lunsford. “Collaboration and Concepts of Authorship.” PMLA 116.2 (2001): 354-369. 18 Apr 2009 <http://www.jstor.org/stable/463522>.

Siemens, Lynne. "'It's a team if you use "reply all"': An exploration of research teams in digital humanities environments." Lit Linguist Computing (2009): fqp009. 14 Apr 2009 <http://llc.oxfordjournals.org/cgi/content/abstract/fqp009v1>.

Digital Humanities in 2008, III: Research

In this final installment of my summary of Digital Humanities in 2008, I’ll discuss developments in digital humanities research. (I should note that if I attempted to give a true synthesis of the year in digital humanities, this would be coming out 4 years late rather than 4 months, so this discussion reflects my own idiosyncratic interests.)

1) Defining research challenges & opportunities

What are some of the key research challenges in digital humanities? Leading scholars tackled this question when CLIR and the NEH convened a workshop on Promoting Digital Scholarship: Formulating Research Challenges In the Humanities, Social Sciences and Computation. Prior to the workshop, six scholars in classics, architectural history, physics/information sciences, literature, visualization, and information retrieval wrote brief overviews of their field and of the ways that information technology could help to advance it. By articulating the central concerns of their fields so concisely, these essays promote interdisciplinary conversation and collaboration; they're also fun to read. As Doug Oard writes in describing the natural language processing "tribe," "Learning a bit about the other folks is a good way to start any process of communication… The situation is really quite simple: they are organized as tribes, they work their magic using models (rather like voodoo), they worship the word "maybe," and they never do anything right." Sounds like my kind of tribe. Indeed, I'd love to see a wiki where experts in fields ranging from computational biology to postcolonial studies write brief essays about their fields, provide a bibliography of foundational works, and articulate both key challenges and opportunities for collaboration. (Perhaps such information could be automatically aggregated using semantic technologies – see, for instance, Concept Web or Kosmix – but I admire the often witty, personal voices of these essays.)

Here are some key ideas that emerge from the essays:

  1. Global Humanistic Studies: Caroline Levander as well as Greg Crane, Alison Babeu, David Bamman, Lisa Cerrato, and Rashmi Singhal call for a sort of global humanistic studies, whether re-conceiving American studies from a hemispheric perspective or re-considering the Persian Wars from the Persian point of view. Scholars working in global humanistic studies face significant challenges, such as the need to read texts in many languages and understand multiple cultural contexts. Emerging technologies promise to help scholars address these problems. For instance, named entity extraction, machine translation and reading support tools can help scholars make sense of works that would otherwise be inaccessible to them; visualization tools can enable researchers "to explore spatial and temporal dynamism"; and collaborative workspaces allow scholars to divide up work, share ideas, and approach a complex research problem from multiple perspectives. Moreover, a shift toward openly accessible data will enable scholars to more easily identify and build on relevant work. Describing how reading support tools enable researchers to work more productively, Crane et al. write, "By automatically linking inflected words in a text to linguistic analyses and dictionary entries we have already allowed readers to spend more time thinking about the text than was possible as they flipped through print dictionaries. Reading support tools allow readers to understand linguistic sources at an earlier stage of their training and to ask questions, no matter how advanced their knowledge, that were not feasible in print." We can see a similar intersection between digital humanities and global humanities in projects like the Global Middle Ages.
  2. What skills do humanities scholars need? Doug Oard suggests that humanities scholars should collaborate with computer scientists to define and tackle “challenge problems” so that the development of new technologies is grounded in real scholarly needs. Ultimately, “humanities scholars are going to need to learn a bit of probability theory” so that they can understand the accuracy of automatic methods for processing data, the “science of maybe.” How does probability theory jibe with humanistic traditions of ambiguity and interpretation? And how are humanities scholars going to learn these skills?

According to the symposium, major research challenges for the digital humanities include:

  1. “Scale and the poverty of abundance”: developing tools and methods to deal with the plenitude of data, including text mining and analysis, visualization, data management and archiving, and sustainability.
  2. Representing place and time: figuring out how to support geo-temporal analysis and enable that analysis to be documented, preserved, and replicated.
  3. Social networking and the economy of attention: understanding research behaviors online; analyzing text corpora based on these behaviors (e.g., citation networks).
  4. Establishing a research infrastructure that facilitates access, interdisciplinary collaboration, and sustainability. As one participant asked, “What is the Protein Data Bank for the humanities?”

2) High performance computing: visualization, modeling, text mining

What are some of the most promising research areas in digital humanities? In a sense, the three recent winners of the NEH/DOE’s High Performance Computing Initiative define three of the main areas of digital humanities and demonstrate how advanced computing can open up new approaches to humanistic research.

  • text mining and text analysis: For its project on “Large-Scale Learning and the Automatic Analysis of Historical Texts,” the Perseus Digital Library at Tufts University is examining how words in Latin and Greek have changed over time by comparing the linguistic structure of classical texts with works written in the last 2000 years. In the press release announcing the winners, David Bamman, a senior researcher in computational linguistics with the Perseus Project, said that “[h]igh performance computing really allows us to ask questions on a scale that we haven’t been able to ask before. We’ll be able to track changes in Greek from the time of Homer to the Middle Ages. We’ll be able to compare the 17th century works of John Milton to those of Vergil, which were written around the turn of the millennium, and try to automatically find those places where Paradise Lost is alluding to the Aeneid, even though one is written in English and the other in Latin.”
  • 3D modeling: For its “High Performance Computing for Processing and Analysis of Digitized 3-D Models of Cultural Heritage” project, the Institute for Advanced Technology in the Humanities at the University of Virginia will reprocess existing data to create 3D models of culturally-significant artifacts and architecture. For example, IATH hopes to re-assemble fragments that chipped off ancient Greek and Roman artifacts.
  • Visualization and cultural analysis: The University of California, San Diego’s Visualizing Patterns in Databases of Cultural Images and Video project will study contemporary culture, analyzing datastreams such as “millions of images, paintings, professional photography, graphic design, user-generated photos; as well as tens of thousands of videos, feature films, animation, anime music videos and user-generated videos.” Ultimately the project will produce detailed visualizations of cultural phenomena.

Winners received compute time on a supercomputer and technical training.

Of course, there’s more to digital humanities than text mining, 3D modeling, and visualization. For instance, the category listing for the Digital Humanities and Computer Science conference at Chicago reveals the diversity of participants’ fields of interest. Top areas include text analysis; libraries/digital archives; imaging/visualization; data mining/machine learning; information retrieval; semantic search; collaborative technologies; electronic literature; and GIS mapping. A simple analysis of the most frequently appearing terms in the Digital Humanities 2008 Book of Abstracts suggests that much research continues to focus on text—which makes sense, given the importance of written language to humanities research. Here’s the list that TAPoR generated of the 10 most frequently used terms in the DH 2008 abstracts:

  1. text: 769
  2. digital: 763
  3. data: 559
  4. information: 546
  5. humanities: 517
  6. research: 501
  7. university: 462
  8. new: 437
  9. texts: 413
  10. project: 396

“Images” appears 161 times; “visualization,” 46.
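A frequency list like the one above can be reproduced at toy scale; this sketch (my own, not how TAPoR works internally) tallies the most common words in a text, with a small stop-word list standing in for a real tool’s filtering options.

```python
import re
from collections import Counter

STOPWORDS = frozenset({"the", "of", "and", "a", "in", "to", "is"})

def top_terms(text, n=10):
    """Lowercase, tokenize on letter runs, drop stop words, and count."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS).most_common(n)

sample = "Text mining text: the digital humanities mine text and data."
print(top_terms(sample, n=3))  # 'text' tops the list with 3 occurrences
```

Counts like these are only as meaningful as the tokenization and stop-word choices behind them, which is worth keeping in mind when comparing lists across tools.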

Wordle: Digital Humanities 2008 Book of Abstracts

And here’s the word cloud. As someone who got started in digital humanities by marking up texts in TEI, I’m always interested in learning about developments in encoding, analyzing and visualizing texts, but some of the coolest sessions I attended at DH 2008 tackled other questions: How do we reconstruct damaged ancient manuscripts? How do we archive dance performances? Why does the digital humanities community emphasize tools instead of services?

3) Focus on method

As digital humanities emerges, much attention is being devoted to developing research methodologies. In “Sunset for Ideology, Sunrise for Methodology?,” Tom Scheinfeldt suggests that humanities scholarship is beginning to tilt toward methodology, that we are entering a “new phase of scholarship that will be dominated not by ideas, but once again by organizing activities, both in terms of organizing knowledge and organizing ourselves and our work.”

So what are some examples of methods developed and/or applied by digital humanities researchers? In “Meaning and mining: the impact of implicit assumptions in data mining for the humanities,” Bradley Pasanek and D. Sculley tackle methodological challenges posed by mining humanities data, arguing that literary critics must devise standards for making arguments based upon data mining. Through a case study testing Lakoff’s theory that political ideology is defined by metaphor, Pasanek and Sculley demonstrate that the selection of algorithms and representation of data influence the results of data mining experiments. Insisting that interpretation is central to working with humanities data, they concur with Stephen Ramsay and others in contending that data mining may be most significant in “highlighting ambiguities and conflicts that lie latent within the text itself.” They offer some sensible recommendations for best practices, including making assumptions about the data and texts explicit; using multiple methods and representations; reporting all trials; making data available and experiments reproducible; and engaging in peer review of methodology.
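Pasanek and Sculley’s point that the choice of algorithm and representation shapes results can be seen even in a toy example (mine, not one from their paper): with the same term-count vectors, the “closest” document to a query differs depending on whether similarity is measured by cosine or by Euclidean distance.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors (higher = more similar)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Toy term-count vectors: a query and two candidate documents.
query = (3, 3)
docs = {"doc_x": (1, 1), "doc_y": (4, 1)}

# Cosine ignores vector length and favors doc_x (same direction as the
# query); Euclidean distance favors doc_y (numerically closer counts).
by_cosine = max(docs, key=lambda d: cosine(query, docs[d]))
by_euclid = min(docs, key=lambda d: math.dist(query, docs[d]))
print(by_cosine, by_euclid)  # doc_x doc_y
```

On real corpora the disagreement is subtler, which is exactly why Pasanek and Sculley recommend reporting results under multiple methods and representations.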

4) Digital literary studies

Different methodological approaches to literary study are discussed in the Companion to Digital Literary Studies (DLS), which was edited by Susan Schreibman and Ray Siemens and was released for free online in the fall of 2008. Kudos to its publisher, Blackwell, for making the hefty volume available, along with A Companion to Digital Humanities. The book includes essays such as “Reading digital literature: surface, data, interaction, and expressive processing” by Noah Wardrip-Fruin, “The Virtual Codex from page space to e-space” by Johanna Drucker, “Algorithmic criticism” by Stephen Ramsay, and “Knowing true things by what their mockeries be: modelling in the humanities” by Willard McCarty. DLS also provides a handy annotated bibliography by Tanya Clement and Gretchen Gueguen that highlights some of the key scholarly resources in literature, including Digital Transcriptions and Images, Born-Digital Texts and New Media Objects, and Criticism, Reviews, and Tools. I expect that the book will be used frequently in digital humanities courses and will be a foundational work.

5) Crafting history: History Appliances

For me, the coolest—most innovative, most unexpected, most wow!—work of the year came from the ever-inventive Bill Turkel, who is exploring humanistic fabrication (not in the Mills Kelly sense of making up stuff ;), but in the DIY sense of making stuff). Turkel is working on “materialization,” giving a digital representation physical form by using, for example, a rapid prototyping machine, a sort of 3D printer. Turkel points to several reasons why humanities scholars should experiment with fabrication: they can be like da Vinci, making the connection between the mind and hand by realizing an idea in physical form; study the past by recreating historical objects (fossils, historical artifacts, etc.) that can be touched, rotated, and scrutinized; explore “haptic history,” a sensual experience of the past; and engage in “critical technical practice,” in which scholars both create and critique.

Turkel envisions making digital information “available in interactive, ambient and tangible forms.” As Turkel argues, “As academic researchers we have tended to emphasize opportunities for dissemination that require our audience to be passive, focused and isolated from one another and from their surroundings. We need to supplement that model by building some of our research findings into communicative devices that are transparently easy to use, provide ambient feedback, and are closely coupled with the surrounding environment.” Turkel and his team are working on four devices: a dashboard, which shows both public and customized information streams on a large display; imagescapes and soundscapes that present streams of complex data as artificial landscapes or sound, aiding awareness; a GeoDJ, an iPod-like device that uses GPS and GIS to detect your location and deliver audio associated with it (e.g., percussion for an historic industrial site); and ice cores and tree rings, “tangible browsers that allow the user to explore digital models of climate history by manipulating physical interfaces that are based on this evidence.” This work on ambient computing and tangible interfaces promises to foster awareness and open up understanding of scholarly data by tapping people’s natural way of comprehending the world through touch and other forms of sensory perception. (I guess the senses of smell and taste are difficult to include in sensual history, although I’m not sure I want to smell or taste many historical artifacts or experiences anyway. I would like to re-create the invention of the Toll House cookie, which for me qualifies as an historic occasion.) This approach to humanistic inquiry and representation requires the resources of a science lab or art studio—a large, well-ventilated space as well as equipment like a laser scanner, lathes, mills, saws, and calipers.
Unfortunately, Turkel has stopped writing his terrific blog “Digital History Hacks” to focus on his new interests, but this work is so fascinating that I’m anxious to see what comes next–which describes my attitude toward digital humanities in general.

Archival Management Systems Report, Wiki & Webinar

[Note: Typically my blog focuses on digital humanities research, but this post discusses some of my related work examining software that helps archives streamline their workflows.]

As archives acquire collections, arrange them, describe them, manage them, and make them publicly available, they produce data in multiple formats, such as notecards, Word documents, Excel files, Access databases, XML (EAD) finding aids, web pages, etc. Chris Prom suggests that some archives use so many tools in creating this data that their workflows “would make a good subject for a Rube Goldberg cartoon.” As a result, archives replicate data and effort, struggle with version control, face challenges finding and analyzing archival information, and have difficulty making that information publicly available. By using archival management systems such as Archon and Archivists’ Toolkit, however, archives can streamline the production of archival information; make it simpler to find information and generate reports; enable non-professionals to more easily create archival description; conform to archival standards; and share information such as finding aids with the public. To help guide the archival community in selecting the appropriate archival management system, I recently wrote a report for the Council on Library and Information Resources (CLIR).

Working on the report led me to several (admittedly non-revolutionary) insights:

  1. If you want to know what features software users need, ask them. In the course of interviewing over 30 archivists and developers, I gained a greater understanding of key criteria for archival management software, including flexibility, conformity to standards, support for an integrated workflow, ease of use, remote access (since archivists may do initial work processing collections off site), customization capabilities, ability to import and export data, etc.
  2. There is no one-size-fits-all tool.  Some archives prefer to use open source software; others are leery of open source, need a hosted solution, or require lots of support in importing and exporting data, customizing the user interface, etc.  Some archives need a way to publish archival information on the web; others want to export finding aids and pull them into existing publishing tools.
  3. Reports go out-of-date as soon as they are published.  Why not release the report as a wiki so that the community can keep it current and relevant?  With the support of CLIR, I’ve created a wiki called Archival Software.  Right now it more or less replicates the structure and content of my original report, but I hope that it evolves according to the needs of the community.   I invite members of the archival community to update the information, add new sections, restructure the wiki, and do whatever else makes it most useful.
  4. If archival management systems integrate and streamline the archival workflow from accessioning the collection to describing it to managing it to making it publicly available, what would an integrated research tool for the humanities look like–or would such a tool even be desirable or possible, given the variation in research practices? My first thought: Zotero with add-ons for analyzing information (perhaps similar to the tools under development by SEASR), authoring and sharing research  (like the Word plug-in or plug-ins for multimedia authoring or mashup creation, sharing via Internet Archive collaboration), etc.

On March 31, the Society of American Archivists (SAA) will offer a web seminar, Archival Content Management Systems, that is based upon my report.  The webinar will examine the case for archival management systems, explore selection criteria, and provide brief demonstrations of 3 systems.  I think there’s still time to register.  (Apologies for the self-promotion, but I wanted to get the word out…)

Digital Humanities in 2008, II: Scholarly Communication & Open Access

Open access, just like dark chocolate and blueberries, is good and good for you, enabling information to be mined and reused, fostering the exchange of ideas, and ensuring public access to research that taxpayers often helped to fund. Moreover, as Dan Cohen contends, scholars benefit from open access to their work, since their own visibility increases: “In a world where we have instantaneous access to billions of documents online, why would you want the precious article or book you spent so much time on to exist only on paper, or behind a pay wall? This is a sure path to invisibility in the digital age.” Thus some scholars are embracing social scholarship, which promotes openness, collaboration, and sharing research. This year saw some positive developments in open access and scholarly communications, such as the implementation of the NIH mandate, Harvard’s Faculty of Arts & Sciences’ decision to go open access (followed by Harvard Law), and the launch of the Open Humanities Press. But there were also some worrisome developments (the Conyers Bill’s attempt to rescind the NIH mandate, EndNote’s lawsuit against Zotero) and some confusing ones (the Google Books settlement). In the second part of my summary on the year in digital humanities, I’ll look broadly at the scholarly communication landscape, discussing open access to educational materials, new publication models, the Google Books settlement, and cultural obstacles to digital publication.

Open Access Grows–and Faces Resistance

Ask Me About Open Access by mollyali

In December of 2007, the NIH Public Access Policy was signed into law, mandating that any research funded by the NIH would be deposited in PubMed Central within a year of its publication. Since the mandate was implemented, almost 3000 new biomedical manuscripts have been deposited into PubMed Central each month. Now John Conyers has put forward a bill that would rescind the NIH mandate and prohibit other federal agencies from implementing similar policies. This bill would deny the public access to research that it funded and choke innovation and scientific discovery. According to Elias Zerhouni, former director of the NIH, there is no evidence that the mandate harms publishers; rather, it maximizes the public’s “return on its investment” in funding scientific research. If you support public access to research, contact your representative and express your opposition to this bill before February 28. The Alliance for Taxpayer Access offers a useful summary of key issues as well as a letter template at http://www.taxpayeraccess.org/action/HR801-09-0211.html.

Open Humanities?

Why have the humanities lagged behind the sciences in adopting open access? Gary Hall points to several ways in which the sciences differ from the humanities, including science’s greater funding for “author pays” open access and emphasis on disseminating information rapidly, as well as the humanities’ “negative perception of the digital medium.” But Hall is challenging that perception by helping to launch the Open Humanities Press (OHP) and publishing “Digitize This Book.” Billing itself as “an international open access publishing collective in critical and cultural theory,” OHP selects journals for inclusion in the collective based upon their adherence to publication standards, open access standards, design standards, technical standards, and editorial best practices. Prominent scholars such as Jonathan Culler, Stephen Greenblatt, and Jerome McGann have signed on as board members of the Open Humanities Press, giving it more prestige and academic credibility. In a talk at UC Irvine last spring, OHP co-founder Sigi Jöttkandt refuted the assumption that open access means “a sort of open free-for-all of publishing” rather than high-quality, peer-reviewed scholarship. Jöttkandt argued that open access should be fundamental to the digital humanities: “as long as the primary and secondary materials that these tools operate on remain locked away in walled gardens, the Digital Humanities will fail to fulfill the real promise of innovation contained in the digital medium.” It’s worth noting that many digital humanities resources are available as open access, including Digital Humanities Quarterly, the Rossetti Archive, and projects developed by CHNM; many others may not be explicitly open access, but they make information available for free.

In “ANTHROPOLOGY OF/IN CIRCULATION: The Future of Open Access and Scholarly Societies,” Christopher Kelty, Michael M. J. Fischer, Alex “Rex” Golub, Jason Baird Jackson, Kimberly Christen, and Michael F. Brown engage in a wide-ranging discussion of open access in anthropology, prompted in part by the American Anthropological Association’s decision to move its publishing activities to Wiley Blackwell.  This rich conversation explores different models for open access, the role of scholarly societies in publishing, building community around research problems, reusing and remixing scholarly content, the economics of publishing, the connection between scholarly reputation and readers’ access to publications, how to make content accessible to source communities, and much more.   As Kelty argues, “The future of innovative scholarship is not only in the AAA (American Anthropological Association) and its journals, but in the structures we build that allow our research to circulate and interact in ways it never could before.”  Kelty (who, alas, was lured away from Rice by UCLA) is exploring how to make scholarship more open and interactive.  You can buy a print copy of Two Bits, his new book on the free software movement published by Duke UP; read (for free) a PDF version of the book; comment on the CommentPress version; or download and remix the HTML.  Reporting on Two Bits at Six Months, Kelty observed, “Duke is making as little or as much money on the book as they do on others of its ilk, and yet I am getting much more from it being open access than I might otherwise.”  The project has made Kelty more visible as a scholar, leading to more media attention, invitations to give lectures and submit papers, etc.

New Models of Scholarly Communication, and Continued Resistance

To what extent are new publishing models emerging as the Internet enables the rapid, inexpensive distribution of information, the incorporation of multimedia into publications, and networked collaboration? To find out, the ARL/Ithaka New Model Publications Study conducted an “organized scan” of emerging scholarly publications such as blogs, ejournals, and research hubs. ARL recruited 301 volunteer librarians from 46 colleges and universities to interview faculty about new model publications that they used. (I participated in a small way, interviewing one faculty member at Rice.) According to the report, examples of new model publications exist in all disciplines, although scientists are more likely to use pre-print repositories, while humanities scholars participate more frequently in discussion forums. The study identifies eight principal types of scholarly resources:

  • E-only journals
  • Reviews
  • Preprints and working papers
  • Encyclopedias, dictionaries, and annotated content
  • Data
  • Blogs
  • Discussion forums
  • Professional and scholarly hubs

These categories provide a sort of abbreviated field manual to identifying different types of new model publications. I might add a few more categories, such as collaborative commentary or peer-to-peer review (exemplified by projects that use CommentPress); scholarly wikis like OpenWetWare that enable open sharing of scholarly information; and research portals like NINES (which perhaps would be considered a “hub”). The report offers fascinating examples of innovative publications, such as ejournals that publish articles as they are ready rather than on a set schedule and a video journal that documents experimental methods in biology. Since only a few examples of new model publications could fit into this brief report, ARL is making available brief descriptions of 206 resources that it considered to be “original and scholarly works” via a publicly accessible database.

My favorite example of a new model publication: eBird, a project initiated by the Cornell Lab of Ornithology and the Audubon Society that enlists amateur and professional bird watchers to collect bird observation data. Scientists then use this data to understand the “distribution and abundance” of birds. Initially eBird ran into difficulty getting birders to participate, so they developed tools that allowed birders to get credit and feel part of a community, to “manage and maintain their lists online, to compare their observations with others’ observations.” I love the motto and mission of eBird—“Notice nature.” I wonder if a similar collaborative research site could be set up for, say, the performing arts (ePerformances.org?), where audience members would document arts and humanities in the wild–plays, ballets, performance art, poetry readings, etc.

The ARL/Ithaka report also highlights some of the challenges faced by these new model publications, such as the conservatism of academic culture, the difficulty of getting scholars to participate in online forums, and finding ways to fund and sustain publications.  In  Interim Report: Assessing the Future Landscape of Scholarly Communication, Diane Harley and her colleagues at the University of California Berkeley delve into some of these challenges.  Harley finds that although some scholars are interested in publishing their research as interactive multimedia, “(1) new forms must be perceived as having undergone rigorous peer review, (2) few untenured scholars are presenting such publications as part of their tenure cases, and (3) the mechanisms for evaluating new genres (e.g., nonlinear narratives and multimedia publications) may be prohibitive for reviewers in terms of time and inclination.” Humanities researchers are typically less concerned with the speed of publication than scientists and social scientists, but they do complain about journals’ unwillingness to include many high quality images and would like to link from their arguments to supporting primary source material. However, faculty are not aware of any easy-to-use tools or support that would enable them to author multimedia works and are therefore less likely to experiment with new forms.  Scholars in all fields included in the study do share their research with other scholars, typically through emails and other forms of personal communication, but many regard blogs as “a waste of time because they are not peer reviewed.”  Similarly, Ithaka’s 2006 Studies of Key Stakeholders in the Digital Transformation in Higher Education (published in 2008) found that “faculty decisions about where and how to publish the results of their research are principally based on the visibility within their field of a particular option,” not open access.

But academic conservatism shouldn’t keep us from imagining and experimenting with alternative approaches to scholarly publishing.  Kathleen Fitzpatrick’s “book-like-object” (blob) proposal, Planned Obsolescence: Publishing, Technology, and the Future of the Academy, offers a bold and compelling vision of the future of academic publishing.  Fitzpatrick calls for academia to break out of its zombie-like adherence to (un)dead forms and proposes “peer-to-peer” review (as in Wikipedia), focusing on process rather than product (as in blogs), and engaging in networked conversation (as in CommentPress). (If references to zombies and blobs make you think Fitzpatrick’s stuff is fun to read as well as insightful, you’d be right.)

EndNote Sues Zotero

Normally I have trouble attracting faculty and grad students to workshops exploring research tools and scholarly communication issues, but they’ve been flocking to my workshops on Zotero, which they recognize as a tool that will help them work more productively.  Apparently Thomson Reuters, the maker of EndNote, has noticed the competitive threat posed by Zotero, since they have sued George Mason University, which produces Zotero, alleging that programmers reverse engineered EndNote so that they could convert proprietary EndNote .ens files into open Zotero .csl files.  Commentators more knowledgeable about the technical and legal details than I have found Thomson’s claims to be bogus.  My cynical read on this lawsuit is that EndNote saw a threat from a popular, powerful open source application and pursued legal action rather than competing by producing a better product.  As Hugh Cayless suggests, “This is an act of sheer desperation on the part of Thomson Reuters” and shows that Zotero has “scared your competitors enough to make them go running to Daddy, thus unequivocally validating your business model.”

The lawsuit seems to bear out Yochai Benkler’s description of proprietary attempts to control information:

“In law, we see a continual tightening of the control that the owners of exclusive rights are given.  Copyrights are longer, apply to more uses, and are interpreted as reaching into every corner of valuable use. Trademarks are stronger and more aggressive. Patents have expanded to new domains and are given greater leeway. All these changes are skewing the institutional ecology in favor of business models and production practices that are based on exclusive proprietary claims; they are lobbied for by firms that collect large rents if these laws are expanded, followed, and enforced. Social trends in the past few years, however, are pushing in the opposite direction.”

Unfortunately, the lawsuit seems to be having a chilling effect that ultimately will, I think, hurt EndNote.  For instance, the developers of BibApp, “a publication-list manager and repository-populator,” decided not to import citation lists produced by EndNote, since “doing anything with their homegrown formats has been proven hazardous.” This lawsuit raises the crucial issue of whether researchers can move their data from one system to another.  Why would I want to choose a product that locks me in?  As Nature wrote in an editorial quoted by CHNM in its response to the lawsuit, “The virtues of interoperability and easy data-sharing among researchers are worth restating.”

Google Books Settlement

Google Books by Jon Wiley

In the fall, Google settled with the Authors Guild and the Association of American Publishers over Google Book Search, allowing academic libraries to subscribe to a full-text collection of millions of out-of-print but (possibly) in-copyright books.  (Google estimates that about 70% of published books fall into this category).  Individuals can also purchase access to books, and libraries will be given a single terminal that will provide free access to the collection.  On a pragmatic (and gluttonous) level, I think, Oh boy, this settlement will give me access to so much stuff.   But, like others, I am concerned about one company owning all of this information, see the Book Rights Registry as potentially anti-competitive, and wish that a Google victory in court had verified fair use principles (even if such a decision probably would have kept us in snippet view or limited preview for in-copyright materials).  Libraries have some legitimate concerns about access, privacy, intellectual freedom, equitable treatment, and terms of use.  Indeed, Harvard pulled out of the project over concerns about cost and accessibility.  As Robert Darnton, director of the Harvard Library and a prominent scholar of book history, wrote in the NY Review of Books, “To digitize collections and sell the product in ways that fail to guarantee wide access… would turn the Internet into an instrument for privatizing knowledge that belongs in the public sphere.” Although the settlement makes a provision for “non-consumptive research” (using the books without reading them) that seems to allow for text mining and other computational research, I worry that digital humanists and other scholars won’t have access to the data they need.  What if Google goes under, or goes evil? 
But the establishment of the Hathi Trust by several of Google Book’s academic library partners (and others) makes me feel a little better about access and preservation issues, and I noted that Hathi Trust will provide a corpus of 50,000 documents for the NEH’s Digging into the Data Challenge.  And as I argued in an earlier series of blog posts, I certainly do see how Google Books can transform research by providing access to so much information.

Around the same time (same day?) that the Google Books settlement was released, the Open Content Alliance (OCA) reached an important milestone, providing access to over a million books. As its name suggests, the OCA makes scanned books openly available for reading, download, and analysis, and from my observations the quality of its digitization is better than Google’s. Although the OCA’s collection is smaller and it focuses on public domain materials, it offers a vital alternative to GB. (Rice is a member of the Open Content Alliance.)

Next up in the series on digital humanities in 2008: my attempt to summarize recent developments in research.

New MA Program in History & Media at the University at Albany

A few days ago a commenter on my blog asked how he could learn to develop rich historical web sites “that would allow me to bring primary sources/scholarship from centuries ago to a wider audience.” I had a hard time thinking of digital humanities programs that provide training in authoring digital media (George Mason? Georgia Tech?). But then I heard about the new Masters concentration in History and Media at the University at Albany, which promises to prepare students to develop historical web sites, documentary films, oral histories, and other forms of media. Albany seems to be well-positioned to offer such a program; for instance, it published the late lamented Journal for Multimedia History, a groundbreaking journal focused on multimedia explorations of historical topics. In a recent discussion about “The Promise of Digital History” published in the Journal of American History, Amy Murrell Taylor, one of the professors developing Albany’s program, makes a persuasive case for thinking about digital history as a medium, “as the production of something that can stand alongside a book, something that takes a different form but nonetheless raises questions, offers analysis, and advances our historiographical knowledge about a given subject.”

Here’s the announcement, taken from H-Net:

The University at Albany’s Department of History has introduced a new 36-credit History and Media concentration to its Masters program, allowing students to learn and apply specialized media skills — digital history and hypermedia authoring, photography and photoanalysis, documentary filmmaking, oral/video history, and aural history and audio documentary production — to the study of the past. The History and Media concentration builds on the Department’s strengths in academic and public history and its reputation as an innovator in the realm of digital and multimedia history.

Among the History and Media courses to be offered beginning in the fall of 2009 are: Introduction to Historical Documentary Media; Narrative in Historical Media; Readings and Practicum in Aural History and Audio Documentary Production; Readings and Practicum in Digital History and Hypermedia; Readings in the History and Theory of Documentary Filmmaking; Readings in Visual Media and Culture; Introduction to Oral and Video History; Research Seminar and Practicum in History and Media.

Instructors in the History and Media concentration will vary but will include a core faculty including:
Gerald Zahavi, Professor; Amy Murrell Taylor, Associate Professor; Ray Sapirstein, Assistant Professor; Sheila Curran Bernard, Assistant Professor.

For more information, contact Gerald Zahavi, zahavi@albany.edu; 518-442-5427.

Prof. Gerald Zahavi
Department of History
University at Albany
1400 Washington Avenue
Albany, NY 12222
518-442-5427
Email: zahavi@albany.edu
Visit the website at http://www.albany.edu/history/histmedia/historymedia.pdf