
Collaborative Authorship in the Humanities

Recently I heard the editors of a history journal and a literature journal say that they rarely published articles written by more than one author—perhaps a couple every few years.   Around the same time, I was looking over a recent issue of Literary and Linguistic Computing and noticed that it included several jointly-authored articles.  This got me wondering:  is collaborative authorship more common in digital humanities than in “traditional” humanities?

“Collaboration” is often associated with “digital humanities.”  Building digital collections, creating software, devising new analytical methods, and authoring multimodal scholarship typically cannot be accomplished by a solo scholar; rather, digital humanities projects require contributions from people with content knowledge, technical skills, design skills, project management experience, metadata expertise, etc.  Our Cultural Commonwealth identifies enabling collaboration as a key feature of the humanities cyberinfrastructure, funders encourage multi-institutional and even international teams, and proponents of increased collaboration in the humanities like Cathy Davidson and Lisa Ede and Andrea A. Lunsford cite digital humanities projects such as Orlando as exemplifying collaborative possibilities.

As a preliminary investigation, I compared the number of collaboratively-written articles published between 2004 and 2008 in two well-respected quarterly journals, American Literary History (ALH) and Literary and Linguistic Computing (LLC).  Both journals are published by Oxford University Press as part of its humanities catalog. I selected ALH because it is a leading journal on American literature and culture that encourages critical exchanges and interdisciplinary work—and because I thought it would be fun to see what the journal has published since 2004. (The hardest part of my research: resisting the urge to stop and read the articles.)  LLC, the official publication of the Association for Literary and Linguistic Computing and the Association for Computers and the Humanities, includes contributions on digital humanities from around the world—the UK, the US, Germany, Australia, Greece, Italy, Norway, etc.—and from many disciplines, such as literature, linguistics, computer and information science, statistics, librarianship, and biochemistry.  To determine the level of collaborative authorship in each issue, I tallied articles that had more than one author, excluding editors’ introductions, notes on contributors, etc.  For LLC, I counted everything that had an abstract as an article.  While I didn’t count LLC’s reviews, which typically are brief and focus on a single work, I did include the review essays published by ALH, since they are longer and synthesize critical opinion about several works.

So what did I find? Whereas 5 of 259 (1.93%) articles published in ALH (about one a year) feature two authors (none had more than two), 70 of 145 (48.28%) articles published in LLC were written by two or more authors. Most (4 of 5, or 80%) of the ALH articles were written by scholars from multiple institutions, whereas 49% (34 of 70) of the LLC articles were. About 16% (11 of 70) of the LLC articles featured contributors from two or more countries, while none of the ALH articles did. Two of the five ALH articles are review essays, while three focus on hemispheric or transatlantic American studies. Although this study should be carried out more systematically across a wider range of journals, the initial results do suggest that collaborative authorship is more common in digital humanities. [See the Zotero reports for ALH and LLC for more information.]
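For transparency, the percentages above can be recomputed from the raw tallies. A minimal sketch (the counts are those reported in this post; the `pct` helper is my own):

```python
# Recompute the collaboration rates reported above from the raw tallies
# (articles published 2004-2008 in ALH and LLC, as counted in this post).

def pct(part, whole):
    """Percentage of part in whole, rounded to two decimal places."""
    return round(100 * part / whole, 2)

alh_collab = pct(5, 259)      # ALH articles with two authors -> 1.93
llc_collab = pct(70, 145)     # LLC articles with two or more authors -> 48.28
llc_multi_inst = pct(34, 70)  # LLC collaborations spanning institutions -> ~49

print(alh_collab, llc_collab, llc_multi_inst)
```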

Why does LLC feature more collaboratively written articles than ALH? I suspect it is because, as I’ve already suggested, digital humanities projects often require collaboration, whereas most literary criticism can be produced by an individual scholar who needs only texts to read, a place to write, and a computer running a word processing application (as well as a library to provide access to texts, colleagues to consult and to review the resulting research, a university and/or funding agency to support the research, a publisher to disseminate the work, etc.). Moreover, LLC represents a sort of meeting point for a range of disciplines, including several (such as computer science) that have a tradition of collaborative authorship. Whereas collaborative authorship is common (even expected) in the sciences, in the humanities many tenure and promotion committees have not yet developed mechanisms for evaluating and crediting collaborative work. In a recent blog post, for example, Cathy Davidson tells a troubling story about being told (in a public and humiliating way) by a member of a search committee that her collaborative work and other “non-traditional” research didn’t “count.” Literary study values individual interpretation, or what Davidson calls “the humanistic ethic of individuality.”

While individual scholarship remains valid and important, shouldn’t humanities scholarship expand to embrace collaborative work as well? Indeed, in 2000 the MLA launched an initiative to consider “alternatives to the adversarial academy” and encourage collaborative scholarship. (By the way, I’m not criticizing ALH; I doubt that it receives many collaboratively-authored submissions, and it has encouraged critical exchange and interdisciplinary research.) Of course, collaboration poses some significant challenges, such as divvying up and managing work, negotiating conflicts, finding funding for complex projects, assigning credit, etc. But as Lisa Ede and Andrea A. Lunsford point out, collaborative authorship can lead to a “widening of scholarly possibilities.” In talking to humanities scholars (particularly those in global humanities), I’ve noticed genuine enthusiasm about collaborative work that allows scholars to engage in community, consider alternative perspectives, and undertake ambitious projects that require diverse skills and/or knowledge.

What kind of collaborations do the jointly-written articles in LLC and ALH represent? Since LLC often lists only the authors’ institutional affiliations, not their departments, tracing the degree of interdisciplinary collaboration would require further research.  However, I did find examples of several types of collaboration (which may overlap):

  • Faculty/student collaboration: In the sciences, faculty frequently publish with their postdocs and students, a practice that seems to be rare in the humanities.  I noted at least one example of a similar collaboration in LLC—involving, I should add, computer science rather than humanities grad students.
    • Urbina, Eduardo et al. “Visual Knowledge: Textual Iconography of the Quixote, a Hypertextual Archive.” Lit Linguist Computing 21.2 (2006): 247-258. 5 Apr 2009 <>.
      This article includes contributions by a professor of Hispanic studies, a professor of computer science, a librarian/archivist/adjunct English professor, and three graduate students in computer science.
  • Project teams: In digital humanities, collaborators often work together on projects to build digital collections, develop software, etc.  In LLC, I found a number of articles written by project teams, such as:
    • Barney, Brett et al. “Ordering Chaos: An Integrated Guide and Online Archive of Walt Whitman’s Poetry Manuscripts.” Lit Linguist Computing 20.2 (2005): 205-217. 5 Apr 2009 <>.
      Members of the project team included an archivist, a programmer, a digital initiatives librarian, an English professor, and two English Ph.D.s who serve as library faculty and focus on digital humanities.
  • Interdisciplinary collaborations: In LLC, I noted several instances of teams that included humanities scholars and scientists working together to apply particular methods (text mining, stemmatic analysis) in the humanities.  For example:
    • Windram, Heather F. et al. “Dante’s Monarchia as a test case for the use of phylogenetic methods in stemmatic analysis.” Lit Linguist Computing 23.4 (2008): 443-463. 5 Apr 2009 <>.  The authors include two biochemists, a textual scholar, and a scholar of Italian literature.
    • Sculley, D., and Bradley M. Pasanek. “Meaning and mining: the impact of implicit assumptions in data mining for the humanities.” Lit Linguist Computing 23.4 (2008): 409-424. 5 Apr 2009 <>.
      Authored by a computer scientist and a literature professor.
  • Shared interests: Researchers may publish together because they share an intellectual kinship and can accomplish more by working together.  For instance:
    • Auerbach, Jonathan, and Lisa Gitelman. “Microfilm, Containment, and the Cold War.” American Literary History 19.3 (2007).  I noticed that Jonathan Auerbach and Lisa Gitelman thank each other in works that each had previously published as an individual.

Observing that LLC publishes a number of collaboratively-written articles opens up several questions, which I hope to pursue through interviews with the authors of at least some of these articles (if you are one of these authors, you may see an email from me soon….):

1)    What characterizes the LLC articles that have only one author?
Based on a quick look at the tables of contents from past issues, I suspect that these articles are more likely to be theoretical or to focus on particular problems rather than projects.  Here, for example, are the titles of some singly-authored articles:  “The Inhibition of Geographical Information in Digital Humanities Scholarship,” “Monkey Business—or What is an Edition?,” “What Characterizes Pictures and Text?” and “Original, Authentic, Copy: Conceptual Issues in Digital Texts.”

2)    Why was the article written collaboratively?

What led to the collaboration?  Did team members offer complementary skill sets, such as knowledge of statistical methods and understanding of the content? How did the collaborators come together—do they work for the same institution? Did they meet at a conference? Do they cite each other?

3)    What were the outcomes of the collaboration?

What was accomplished through collaboration that would have been difficult to do otherwise?  Would the scale of the project be smaller if it were pursued by a single scholar? Did the project require contributions from people with different types of expertise?

4)    How was the collaboration managed and sustained?

Was one person in charge, or was authority distributed? What tools were used to facilitate communication, track progress on the project, and support collaborative writing? To what degree was face-to-face interaction important?

5)    What was difficult about the collaboration?

What was hard about collaborating: Communicating? Identifying who does what? Agreeing on methods? Coming to a common understanding of results? Finding funding?

We can find answers to some of these questions in Lynne Siemens’ recent article “‘It’s a team if you use “reply all”’: An exploration of research teams in digital humanities environments.”  Siemens describes factors contributing to the success of collaborative teams in digital humanities, such as clear milestones and benchmarks, strong leadership, equal contributions by members of the team, and a balance between communication through digital tools and in-person meetings.  I particularly liked the description of “a successful team as a ‘round thing’ with equitable contribution by individual members.”

In doing this research, I realized how much it would benefit from collaborators.  For instance, someone with expertise in citation analysis could help enlarge the study and detect patterns in collaborative authorship, while someone with expertise in qualitative research methods could help to interview collaborative research teams and analyze the resulting data.  However, I think anyone with an interest in the topic could make valuable contributions.  This is by way of leading up to my pitch: I’m working on a piece about collaborative research methods in digital humanities for an essay collection and would welcome collaborators.  If you’re interested in teaming up, contact me at

Works Cited

Davidson, Cathy N. “What If Scholars in the Humanities Worked Together, in a Lab?” The Chronicle of Higher Education 28 May 1999. 18 Apr 2009 <>.

Ede, Lisa, and Andrea A. Lunsford. “Collaboration and Concepts of Authorship.” PMLA 116.2 (2001): 354-369. 18 Apr 2009 <>.

Siemens, Lynne. “‘It’s a team if you use “reply all”’: An exploration of research teams in digital humanities environments.” Lit Linguist Computing (2009): fqp009. 14 Apr 2009 <>.

Digital Humanities in 2008, III: Research

In this final installment of my summary of Digital Humanities in 2008, I’ll discuss developments in digital humanities research. (I should note that if I attempted to give a true synthesis of the year in digital humanities, this would be coming out 4 years late rather than 4 months, so this discussion reflects my own idiosyncratic interests.)

1) Defining research challenges & opportunities

What are some of the key research challenges in digital humanities? Leading scholars tackled this question when CLIR and the NEH convened a workshop on Promoting Digital Scholarship: Formulating Research Challenges In the Humanities, Social Sciences and Computation. Prior to the workshop, six scholars in classics, architectural history, physics/information sciences, literature, visualization, and information retrieval wrote brief overviews of their field and of the ways that information technology could help to advance it. By articulating the central concerns of their fields so concisely, these essays promote interdisciplinary conversation and collaboration; they’re also fun to read. As Doug Oard writes in describing the natural language processing “tribe,” “Learning a bit about the other folks is a good way to start any process of communication… The situation is really quite simple: they are organized as tribes, they work their magic using models (rather like voodoo), they worship the word “maybe,” and they never do anything right.” Sounds like my kind of tribe. Indeed, I’d love to see a wiki where experts in fields ranging from computational biology to postcolonial studies write brief essays about their fields, provide a bibliography of foundational works, and articulate both key challenges and opportunities for collaboration. (Perhaps such information could be automatically aggregated using semantic technologies—see, for instance, Concept Web or Kosmix–but I admire the often witty, personal voices of these essays.)

Here are some key ideas that emerge from the essays:

  1. Global Humanistic Studies: Both Caroline Levander and Greg Crane, Alison Babeu, David Bamman, Lisa Cerrato, and Rashmi Singhal call for a sort of global humanistic studies, whether re-conceiving American studies from a hemispheric perspective or re-considering the Persian Wars from the Persian point of view. Scholars working in global humanistic studies face significant challenges, such as the need to read texts in many languages and understand multiple cultural contexts. Emerging technologies promise to help scholars address these problems. For instance, named entity extraction, machine translation and reading support tools can help scholars make sense of works that would otherwise be inaccessible to them; visualization tools can enable researchers “to explore spatial and temporal dynamism;” and collaborative workspaces allow scholars to divide up work, share ideas, and approach a complex research problem from multiple perspectives. Moreover, a shift toward openly accessible data will enable scholars to more easily identify and build on relevant work. Describing how reading support tools enable researchers to work more productively, Crane et al. write, “By automatically linking inflected words in a text to linguistic analyses and dictionary entries we have already allowed readers to spend more time thinking about the text than was possible as they flipped through print dictionaries. Reading support tools allow readers to understand linguistic sources at an earlier stage of their training and to ask questions, no matter how advanced their knowledge, that were not feasible in print.” We can see a similar intersection between digital humanities and global humanities in projects like the Global Middle Ages.
  2. What skills do humanities scholars need? Doug Oard suggests that humanities scholars should collaborate with computer scientists to define and tackle “challenge problems” so that the development of new technologies is grounded in real scholarly needs. Ultimately, “humanities scholars are going to need to learn a bit of probability theory” so that they can understand the accuracy of automatic methods for processing data, the “science of maybe.” How does probability theory jibe with humanistic traditions of ambiguity and interpretation? And how are humanities scholars going to learn these skills?

According to the symposium, major research challenges for the digital humanities include:

  1. “Scale and the poverty of abundance”: developing tools and methods to deal with the plenitude of data, including text mining and analysis, visualization, data management and archiving, and sustainability.
  2. Representing place and time: figuring out how to support geo-temporal analysis and enable that analysis to be documented, preserved, and replicated.
  3. Social networking and the economy of attention: understanding research behaviors online; analyzing text corpora based on these behaviors (e.g. citation networks).
  4. Establishing a research infrastructure that facilitates access, interdisciplinary collaboration, and sustainability. As one participant asked, “What is the Protein Data Bank for the humanities?”

2) High performance computing: visualization, modeling, text mining

What are some of the most promising research areas in digital humanities? In a sense, the three recent winners of the NEH/DOE’s High Performance Computing Initiative define three of the main areas of digital humanities and demonstrate how advanced computing can open up new approaches to humanistic research.

  • text mining and text analysis: For its project on “Large-Scale Learning and the Automatic Analysis of Historical Texts,” the Perseus Digital Library at Tufts University is examining how words in Latin and Greek have changed over time by comparing the linguistic structure of classical texts with works written in the last 2000 years. In the press release announcing the winners, David Bamman, a senior researcher in computational linguistics with the Perseus Project, said that “[h]igh performance computing really allows us to ask questions on a scale that we haven’t been able to ask before. We’ll be able to track changes in Greek from the time of Homer to the Middle Ages. We’ll be able to compare the 17th century works of John Milton to those of Vergil, which were written around the turn of the millennium, and try to automatically find those places where Paradise Lost is alluding to the Aeneid, even though one is written in English and the other in Latin.”
  • 3D modeling: For its “High Performance Computing for Processing and Analysis of Digitized 3-D Models of Cultural Heritage” project, the Institute for Advanced Technology in the Humanities at the University of Virginia will reprocess existing data to create 3D models of culturally-significant artifacts and architecture. For example, IATH hopes to re-assemble fragments that chipped off  ancient Greek and Roman artifacts.
  • Visualization and cultural analysis: The University of California, San Diego’s Visualizing Patterns in Databases of Cultural Images and Video project will study contemporary culture, analyzing datastreams such as “millions of images, paintings, professional photography, graphic design, user-generated photos; as well as tens of thousands of videos, feature films, animation, anime music videos and user-generated videos.” Ultimately the project will produce detailed visualizations of cultural phenomena.

Winners received compute time on a supercomputer and technical training.

Of course, there’s more to digital humanities than text mining, 3D modeling, and visualization. For instance, the category listing for the Digital Humanities and Computer Science conference at Chicago reveals the diversity of participants’ fields of interest. Top areas include text analysis; libraries/digital archives; imaging/visualization; data mining/machine learning; information retrieval; semantic search; collaborative technologies; electronic literature; and GIS mapping. A simple analysis of the most frequently appearing terms in the Digital Humanities 2008 Book of Abstracts suggests that much research continues to focus on text—which makes sense, given the importance of written language to humanities research.  Here’s the list that TAPOR generated of the 10 most frequently used terms in the DH 2008 abstracts:

  1. text: 769
  2. digital: 763
  3. data: 559
  4. information: 546
  5. humanities: 517
  6. research: 501
  7. university: 462
  8. new: 437
  9. texts: 413
  10. project: 396

“Images” appears 161 times, “visualization” 46 times.
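TAPoR produced these counts, but the underlying operation is a plain term-frequency tally. Here is a minimal stand-in sketch (the sample string and the `top_terms` helper are mine, not TAPoR’s output or the actual abstracts):

```python
# A bare-bones term-frequency count of the kind shown above,
# run on a stand-in string rather than the DH 2008 Book of Abstracts.
from collections import Counter
import re

def top_terms(text, n=10):
    """Lowercase the text, tokenize on alphabetic runs, return the n most common terms."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(tokens).most_common(n)

sample = "text mining and text analysis of digital text"
print(top_terms(sample, 3))  # [('text', 3), ('mining', 1), ('and', 1)]
```

Note that a real run would also want stopword filtering; “and,” “of,” and similar function words dominate any raw count.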

Wordle: Digital Humanities 2008 Book of Abstracts

And here’s the word cloud. As someone who got started in digital humanities by marking up texts in TEI, I’m always interested in learning about developments in encoding, analyzing and visualizing texts, but some of the coolest sessions I attended at DH 2008 tackled other questions: How do we reconstruct damaged ancient manuscripts? How do we archive dance performances? Why does the digital humanities community emphasize tools instead of services?

3) Focus on method

As digital humanities emerges, much attention is being devoted to developing research methodologies. In “Sunset for Ideology, Sunrise for Methodology?,” Tom Scheinfeldt suggests that humanities scholarship is beginning to tilt toward methodology, that we are entering a “new phase of scholarship that will be dominated not by ideas, but once again by organizing activities, both in terms of organizing knowledge and organizing ourselves and our work.”

So what are some examples of methods developed and/or applied by digital humanities researchers? In “Meaning and mining: the impact of implicit assumptions in data mining for the humanities,” Bradley Pasanek and D. Sculley tackle methodological challenges posed by mining humanities data, arguing that literary critics must devise standards for making arguments based upon data mining. Through a case study testing Lakoff’s theory that political ideology is defined by metaphor, Pasanek and Sculley demonstrate that the selection of algorithms and representation of data influence the results of data mining experiments. Insisting that interpretation is central to working with humanities data, they concur with Steve Ramsay and others in contending that data mining may be most significant in “highlighting ambiguities and conflicts that lie latent within the text itself.” They offer some sensible recommendations for best practices, including making assumptions about the data and texts explicit; using multiple methods and representations; reporting all trials; making data available and experiments reproducible; and engaging in peer review of methodology.
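Pasanek and Sculley’s claim that the choice of algorithm shapes the result can be illustrated even in a toy setting. The sketch below is my own contrived example (not theirs): two standard methods, nearest-centroid and 1-nearest-neighbor, assign different labels to the same one-dimensional point.

```python
# Toy demonstration that method choice changes a data-mining result:
# the same point is classified differently by two reasonable methods.

train = [(0.0, "A"), (1.0, "A"), (10.0, "B")]  # (feature value, label)

def nearest_centroid(x):
    """Label x by the class whose mean feature value is closest."""
    centroids = {}
    for label in ("A", "B"):
        values = [v for v, l in train if l == label]
        centroids[label] = sum(values) / len(values)
    return min(centroids, key=lambda l: abs(x - centroids[l]))

def one_nn(x):
    """Label x by its single nearest training point."""
    return min(train, key=lambda point: abs(x - point[0]))[1]

x = 5.4
print(nearest_centroid(x), one_nn(x))  # B A  -- the two methods disagree
```

Neither answer is “wrong”; which one a researcher reports depends on an implicit modeling assumption, which is exactly why the authors urge reporting all trials and using multiple methods.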

4) Digital literary studies

Different methodological approaches to literary study are discussed in the Companion to Digital Literary Studies (DLS), which was edited by Susan Schreibman and Ray Siemens and was released for free online in the fall of 2008. Kudos to its publisher, Blackwell, for making the hefty volume available, along with A Companion to Digital Humanities. The book includes essays such as “Reading digital literature: surface, data, interaction, and expressive processing” by Noah Wardrip-Fruin, “The Virtual Codex from page space to e-space” by Johanna Drucker, “Algorithmic criticism” by Steve Ramsay, and “Knowing true things by what their mockeries be: modelling in the humanities” by Willard McCarty. DLS also provides a handy annotated bibliography by Tanya Clement and Gretchen Gueguen that highlights some of the key scholarly resources in literature, including Digital Transcriptions and Images, Born-Digital Texts and New Media Objects, and Criticism, Reviews, and Tools. I expect that the book will be used frequently in digital humanities courses and will be a foundational work.

5) Crafting history: History Appliances

For me, the coolest—most innovative, most unexpected, most wow!—work of the year came from the ever-inventive Bill Turkel, who is exploring humanistic fabrication (not fabrication in the Mills Kelly sense of making stuff up, but in the DIY sense of making stuff). Turkel is working on “materialization”: giving a digital representation physical form by using, for example, a rapid prototyping machine, a sort of 3D printer. Turkel points to several reasons why humanities scholars should experiment with fabrication: they can be like da Vinci, making the connection between mind and hand by realizing an idea in physical form; they can study the past by recreating historical objects (fossils, historical artifacts, etc.) that can be touched, rotated, and scrutinized; they can explore “haptic history,” a sensual experience of the past; and they can engage in “critical technical practice,” in which scholars both create and critique.

Turkel envisions making digital information “available in interactive, ambient and tangible forms.”  As Turkel argues, “As academic researchers we have tended to emphasize opportunities for dissemination that require our audience to be passive, focused and isolated from one another and from their surroundings. We need to supplement that model by building some of our research findings into communicative devices that are transparently easy to use, provide ambient feedback, and are closely coupled with the surrounding environment.” Turkel and his team are working on four devices: a dashboard, which shows both public and customized information streams on a large display; imagescapes and soundscapes, which present streams of complex data as artificial landscapes or sound, aiding awareness; a GeoDJ, an iPod-like device that uses GPS and GIS to detect your location and deliver audio associated with it (e.g., percussion for a historic industrial site); and ice cores and tree rings, “tangible browsers that allow the user to explore digital models of climate history by manipulating physical interfaces that are based on this evidence.” This work on ambient computing and tangible interfaces promises to foster awareness and open up understanding of scholarly data by tapping people’s natural way of comprehending the world through touch and other forms of sensory perception. (I guess the senses of smell and taste are difficult to include in sensual history, although I’m not sure I want to smell or taste many historical artifacts or experiences anyway. I would like to re-create the invention of the Toll House cookie, which for me qualifies as an historic occasion.) This approach to humanistic inquiry and representation requires the resources of a science lab or art studio—a large, well-ventilated space as well as equipment like a laser scanner, lathes, mills, saws, calipers, etc.
Unfortunately, Turkel has stopped writing his terrific blog “Digital History Hacks” to focus on his new interests, but this work is so fascinating that I’m anxious to see what comes next, which describes my attitude toward digital humanities in general.

Digital Humanities in 2008, II: Scholarly Communication & Open Access

Open access, just like dark chocolate and blueberries, is good and good for you, enabling information to be mined and reused, fostering the exchange of ideas, and ensuring public access to research that taxpayers often helped to fund.  Moreover, as Dan Cohen contends, scholars benefit from open access to their work, since their own visibility increases: “In a world where we have instantaneous access to billions of documents online, why would you want the precious article or book you spent so much time on to exist only on paper, or behind a pay wall? This is a sure path to invisibility in the digital age.”  Thus some scholars are embracing social scholarship, which promotes openness, collaboration, and sharing research.  This year saw some positive developments in open access and scholarly communications, such as the implementation of the NIH mandate, Harvard’s Faculty of Arts & Science’s decision to go open access (followed by Harvard Law), and the launch of the Open Humanities Press.  But there were also some worrisome developments (the Conyers Bill’s attempt to rescind the NIH mandate, EndNote’s lawsuit against Zotero) and some confusing ones (the Google Books settlement).  In the second part of my summary on the year in digital humanities, I’ll look broadly at the scholarly communication landscape, discussing open access to educational materials, new publication models, the Google Books settlement, and cultural obstacles to digital publication.

Open Access Grows–and Faces Resistance

Ask Me About Open Access by mollyali

In December of 2007, the NIH Public Access Policy was signed into law, mandating that any research funded by the NIH be deposited in PubMed Central within a year of its publication.  Since the mandate was implemented, almost 3000 new biomedical manuscripts have been deposited into PubMed Central each month.  Now John Conyers has put forward a bill that would rescind the NIH mandate and prohibit other federal agencies from implementing similar policies.  This bill would deny the public access to research that it funded and choke innovation and scientific discovery.  According to Elias Zerhouni, former director of the NIH, there is no evidence that the mandate harms publishers; rather, it maximizes the public’s “return on its investment” in funding scientific research.  If you support public access to research, contact your representative and express your opposition to this bill before February 28.  The Alliance for Taxpayer Access offers a useful summary of key issues as well as a letter template at

Open Humanities?

Why have the humanities lagged behind the sciences in adopting open access?  Gary Hall points to several ways in which the sciences differ from the humanities, including science’s greater funding for “author pays” open access and emphasis on disseminating information rapidly, as well as humanities’ “negative perception of the digital medium.”  But Hall is challenging that perception by helping to launch the Open Humanities Press (OHP) and publishing “Digitize This Book.”  Billing itself as “an international open access publishing collective in critical and cultural theory,” OHP selects journals for inclusion in the collective based upon their adherence to publication standards, open access standards, design standards, technical standards, and editorial best practices. Prominent scholars such as Jonathan Culler, Stephen Greenblatt, and Jerome McGann have signed on as board members of the Open Humanities Press, giving it more prestige and academic credibility.  In a talk at UC Irvine last spring, OHP co-founder Sigi Jöttkandt refuted the assumption that open access means “a sort of open free-for-all of publishing” rather than high-quality, peer-reviewed scholarship.  Jöttkandt argued that open access should be fundamental to the digital humanities: “as long as the primary and secondary materials that these tools operate on remain locked away in walled gardens, the Digital Humanities will fail to fulfill the real promise of innovation contained in the digital medium.”  It’s worth noting that many digital humanities resources are available as open access, including Digital Humanities Quarterly, the Rossetti Archive, and projects developed by CHNM; many others may not be explicitly open access, but they make information available for free.

In “ANTHROPOLOGY OF/IN CIRCULATION: The Future of Open Access and Scholarly Societies,” Christopher Kelty, Michael M. J. Fischer, Alex “Rex” Golub, Jason Baird Jackson, Kimberly Christen, and Michael F. Brown engage in a wide-ranging discussion of open access in anthropology, prompted in part by the American Anthropological Association’s decision to move its publishing activities to Wiley Blackwell.  This rich conversation explores different models for open access, the role of scholarly societies in publishing, building community around research problems, reusing and remixing scholarly content, the economics of publishing, the connection between scholarly reputation and readers’ access to publications, how to make content accessible to source communities, and much more.   As Kelty argues, “The future of innovative scholarship is not only in the AAA (American Anthropological Association) and its journals, but in the structures we build that allow our research to circulate and interact in ways it never could before.”  Kelty (who, alas, was lured away from Rice by UCLA) is exploring how to make scholarship more open and interactive.  You can buy a print copy of Two Bits, his new book on the free software movement published by Duke UP; read (for free) a PDF version of the book; comment on the CommentPress version; or download and remix the HTML.  Reporting on Two Bits at Six Months, Kelty observed, “Duke is making as little or as much money on the book as they do on others of its ilk, and yet I am getting much more from it being open access than I might otherwise.”  The project has made Kelty more visible as a scholar, leading to more media attention, invitations to give lectures and submit papers, etc.

New Models of Scholarly Communication, and Continued Resistance

To what extent are new publishing models emerging as the Internet enables the rapid, inexpensive distribution of information, the incorporation of multimedia into publications, and networked collaboration?  To find out, the ARL/Ithaka New Model Publications Study conducted an “organized scan” of emerging scholarly publications such as blogs, ejournals, and research hubs.  ARL recruited 301 volunteer librarians from 46 colleges and universities to interview faculty about new model publications that they used.  (I participated in a small way, interviewing one faculty member at Rice.)  According to the report, examples of new model publications exist in all disciplines, although scientists are more likely to use pre-print repositories, while humanities scholars participate more frequently in discussion forums.  The study identifies eight principal types of scholarly resources:

  • E-only journals
  • Reviews
  • Preprints and working papers
  • Encyclopedias, dictionaries, and annotated content
  • Data
  • Blogs
  • Discussion forums
  • Professional and scholarly hubs

These categories provide a sort of abbreviated field manual for identifying different types of new model publications.  I might add a few more categories, such as collaborative commentary or peer-to-peer review (exemplified by projects that use CommentPress); scholarly wikis like OpenWetWare that enable open sharing of scholarly information; and research portals like NINES (which perhaps would be considered a “hub”).  The report offers fascinating examples of innovative publications, such as ejournals that publish articles as they are ready rather than on a set schedule and a video journal that documents experimental methods in biology.  Since only a few examples of new model publications could fit into this brief report, ARL is making available brief descriptions of 206 resources that it considered to be “original and scholarly works” via a publicly accessible database.

My favorite example of a new model publication: eBird, a project initiated by the Cornell Lab of Ornithology and the Audubon Society that enlists amateur and professional bird watchers to collect bird observation data.  Scientists then use this data to understand the “distribution and abundance” of birds.  Initially eBird ran into difficulty getting birders to participate, so the project developed tools that allowed birders to get credit and feel part of a community, to “manage and maintain their lists online, to compare their observations with others’ observations.”  I love the motto and mission of eBird—“Notice nature.”  I wonder if a similar collaborative research site could be set up for, say, the performing arts, where audience members would document arts and humanities in the wild: plays, ballets, performance art, poetry readings, etc.
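The kind of aggregation eBird supports can be sketched in a few lines. This is a toy illustration only: the field names below are hypothetical and do not reflect eBird's actual schema or API. Given a pile of observation records, tallying sightings per species approximates “abundance,” and collecting the locations where each species appears approximates “distribution”:

```python
from collections import Counter

# Hypothetical observation records; eBird's real data model differs.
observations = [
    {"species": "American Robin", "location": "Ithaca, NY"},
    {"species": "American Robin", "location": "Houston, TX"},
    {"species": "Carolina Wren",  "location": "Houston, TX"},
]

# Abundance: how many sightings of each species overall.
abundance = Counter(obs["species"] for obs in observations)

# Distribution: the set of locations where each species was seen.
distribution = {}
for obs in observations:
    distribution.setdefault(obs["species"], set()).add(obs["location"])

print(abundance["American Robin"])            # 2
print(sorted(distribution["Carolina Wren"]))  # ['Houston, TX']
```

The interesting part of eBird isn't this aggregation, of course, but the incentive design that gets thousands of birders to submit records in the first place.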

The ARL/Ithaka report also highlights some of the challenges faced by these new model publications, such as the conservatism of academic culture, the difficulty of getting scholars to participate in online forums, and finding ways to fund and sustain publications.  In  Interim Report: Assessing the Future Landscape of Scholarly Communication, Diane Harley and her colleagues at the University of California Berkeley delve into some of these challenges.  Harley finds that although some scholars are interested in publishing their research as interactive multimedia, “(1) new forms must be perceived as having undergone rigorous peer review, (2) few untenured scholars are presenting such publications as part of their tenure cases, and (3) the mechanisms for evaluating new genres (e.g., nonlinear narratives and multimedia publications) may be prohibitive for reviewers in terms of time and inclination.” Humanities researchers are typically less concerned with the speed of publication than scientists and social scientists, but they do complain about journals’ unwillingness to include many high quality images and would like to link from their arguments to supporting primary source material. However, faculty are not aware of any easy-to-use tools or support that would enable them to author multimedia works and are therefore less likely to experiment with new forms.  Scholars in all fields included in the study do share their research with other scholars, typically through emails and other forms of personal communication, but many regard blogs as “a waste of time because they are not peer reviewed.”  Similarly, Ithaka’s 2006 Studies of Key Stakeholders in the Digital Transformation in Higher Education (published in 2008) found that “faculty decisions about where and how to publish the results of their research are principally based on the visibility within their field of a particular option,” not open access.

But academic conservatism shouldn’t keep us from imagining and experimenting with alternative approaches to scholarly publishing.  Kathleen Fitzpatrick’s “book-like-object” (blob) proposal, Planned Obsolescence: Publishing, Technology, and the Future of the Academy, offers a bold and compelling vision of the future of academic publishing.  Fitzpatrick calls for academia to break out of its zombie-like adherence to (un)dead forms and proposes “peer-to-peer” review (as in Wikipedia), focusing on process rather than product (as in blogs), and engaging in networked conversation (as in CommentPress). (If references to zombies and blobs make you think Fitzpatrick’s stuff is fun to read as well as insightful, you’d be right.)

EndNote Sues Zotero

Normally I have trouble attracting faculty and grad students to workshops exploring research tools and scholarly communication issues, but they’ve been flocking to my workshops on Zotero, which they recognize as a tool that will help them work more productively.  Apparently Thomson Reuters, the maker of EndNote, has noticed the competitive threat posed by Zotero, since they have sued George Mason University, which produces Zotero, alleging that programmers reverse engineered EndNote so that they could convert proprietary EndNote .ens files into open Zotero .csl files.  Commentators more knowledgeable about the technical and legal details than I have found Thomson’s claims to be bogus.  My cynical read on this lawsuit is that EndNote saw a threat from a popular, powerful open source application and pursued legal action rather than competing by producing a better product.  As Hugh Cayless suggests, “This is an act of sheer desperation on the part of Thomson Reuters” and shows that Zotero has “scared your competitors enough to make them go running to Daddy, thus unequivocally validating your business model.”

The lawsuit seems to bear out Yochai Benkler’s description of proprietary attempts to control information:

“In law, we see a continual tightening of the control that the owners of exclusive rights are given.  Copyrights are longer, apply to more uses, and are interpreted as reaching into every corner of valuable use. Trademarks are stronger and more aggressive. Patents have expanded to new domains and are given greater leeway. All these changes are skewing the institutional ecology in favor of business models and production practices that are based on exclusive proprietary claims; they are lobbied for by firms that collect large rents if these laws are expanded, followed, and enforced. Social trends in the past few years, however, are pushing in the opposite direction.”

Unfortunately, the lawsuit seems to be having a chilling effect that ultimately will, I think, hurt EndNote.  For instance, the developers of BibApp, “a publication-list manager and repository-populator,” decided not to import citation lists produced by EndNote, since “doing anything with their homegrown formats has been proven hazardous.” This lawsuit raises the crucial issue of whether researchers can move their data from one system to another.  Why would I want to choose a product that locks me in?  As Nature wrote in an editorial quoted by CHNM in its response to the lawsuit, “The virtues of interoperability and easy data-sharing among researchers are worth restating.”
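The portability issue is easy to see in miniature. The record below loosely follows the CSL-JSON conventions used by open citation processors; it is an illustrative sketch, not Zotero's or EndNote's actual internal schema. Because the format is open and documented, any tool that speaks JSON can read a researcher's citations back out, which is exactly the lock-in escape hatch a proprietary format forecloses:

```python
import json

# A citation record in an open JSON format (field names loosely
# follow CSL-JSON conventions; this is an illustrative sketch).
record = {
    "type": "book",
    "title": "Two Bits: The Cultural Significance of Free Software",
    "author": [{"family": "Kelty", "given": "Christopher"}],
    "issued": {"date-parts": [[2008]]},
}

# Round-trip: serialize, then read back with no information loss.
# Any other tool could perform the second step -- no lock-in.
serialized = json.dumps(record)
restored = json.loads(serialized)
assert restored == record
print(restored["author"][0]["family"])  # Kelty
```

A closed binary format, by contrast, makes even this trivial round-trip depend on one vendor's goodwill (or on reverse engineering of the sort at issue in the lawsuit).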

Google Books Settlement

Google Books by Jon Wiley

In the fall, Google settled with the Authors Guild and the Association of American Publishers over Google Book Search, allowing academic libraries to subscribe to a full-text collection of millions of out-of-print but (possibly) in-copyright books.  (Google estimates that about 70% of published books fall into this category).  Individuals can also purchase access to books, and libraries will be given a single terminal that will provide free access to the collection.  On a pragmatic (and gluttonous) level, I think, Oh boy, this settlement will give me access to so much stuff.   But, like others, I am concerned about one company owning all of this information, see the Book Rights Registry as potentially anti-competitive, and wish that a Google victory in court had verified fair use principles (even if such a decision probably would have kept us in snippet view or limited preview for in-copyright materials).  Libraries have some legitimate concerns about access, privacy, intellectual freedom, equitable treatment, and terms of use.  Indeed, Harvard pulled out of the project over concerns about cost and accessibility.  As Robert Darnton, director of the Harvard Library and a prominent scholar of book history, wrote in the NY Review of Books, “To digitize collections and sell the product in ways that fail to guarantee wide access… would turn the Internet into an instrument for privatizing knowledge that belongs in the public sphere.” Although the settlement makes a provision for “non-consumptive research” (using the books without reading them) that seems to allow for text mining and other computational research, I worry that digital humanists and other scholars won’t have access to the data they need.  What if Google goes under, or goes evil? 
But the establishment of the Hathi Trust by several of Google Books’ academic library partners (and others) makes me feel a little better about access and preservation issues, and I noted that Hathi Trust will provide a corpus of 50,000 documents for the NEH’s Digging into the Data Challenge.  And as I argued in an earlier series of blog posts, I certainly do see how Google Books can transform research by providing access to so much information.

Around the same time (same day?) that the Google Books settlement was released, the Open Content Alliance (OCA) reached an important milestone, providing access to over a million books.  As its name suggests, the OCA makes scanned books openly available for reading, download, and analysis, and from my observations the quality of its digitization is better than Google’s.  Although the OCA’s collection is smaller and it focuses on public domain materials, it offers a vital alternative to GB.  (Rice is a member of the Open Content Alliance.)

Next up in the series on digital humanities in 2008: my attempt to summarize recent developments in research.

New MA Program in History & Media at the University at Albany

A few days ago a commenter on my blog asked how he could learn to develop rich historical web sites “that would allow me to bring primary sources/scholarship from centuries ago to a wider audience.”  I had a hard time thinking of digital humanities programs that provide training in authoring digital media (George Mason? Georgia Tech?).  But then I heard about the new Masters concentration in History and Media at the University at Albany, which promises to prepare students to develop historical web sites, documentary films, oral histories, and other forms of media.  Albany seems to be well-positioned to offer such a program; for instance, it published the late lamented Journal for Multimedia History, a groundbreaking journal focused on multimedia explorations of historical topics.  In a recent discussion about “The Promise of Digital History” published in the Journal of American History, Amy Murrell Taylor, one of the professors developing Albany’s program, makes a persuasive case for thinking about digital history as a medium, “as the production of something that can stand alongside a book, something that takes a different form but nonetheless raises questions, offers analysis, and advances our historiographical knowledge about a given subject.”

Here’s the announcement, taken from H-Net:

The University at Albany’s Department of History has introduced a new 36-credit History and Media concentration to its Masters program, allowing students to learn and apply specialized media skills — digital history and hypermedia authoring, photography and photoanalysis, documentary filmmaking, oral/video history, and aural history and audio documentary production — to the study of the past. The History and Media concentration builds on the Department’s strengths in academic and public history and its reputation as an innovator in the realm of digital and multimedia history.

Among the History and Media courses to be offered beginning in the fall of 2009 are: Introduction to Historical Documentary Media; Narrative in Historical Media; Readings and Practicum in Aural History and Audio Documentary Production; Readings and Practicum in Digital History and Hypermedia; Readings in the History and Theory of Documentary Filmmaking; Readings in Visual Media and Culture; Introduction to Oral and Video History; Research Seminar and Practicum in History and Media.

Instructors in the History and Media concentration will vary but will include a core faculty including:
Gerald Zahavi, Professor; Amy Murrell Taylor, Associate Professor; Ray Sapirstein, Assistant Professor; Sheila Curran Bernard, Assistant Professor.

For more information, contact Gerald Zahavi; 518-442-5427.

Prof. Gerald Zahavi
Department of History
University at Albany
1400 Washington Avenue
Albany, NY 12222
Visit the website at

Digital Humanities in 2008, Part I

When I wrote a series of blog posts last year summarizing developments in digital humanities, a friend joked that I had just signed on to do the same thing every year.  So here’s my synthesis of digital humanities in 2008, delivered a little later than I intended. (Darn life, getting in the way of blogging!) This post, the first in a series, will focus on the emergence of digital humanities (DH), defining DH and its significance, and community-building efforts.   Subsequent posts will look at developments in research, open education, scholarly communication, mass digitization, and tools.   Caveat lector:  this series reflects the perspective of an English Ph.D. with a background in text encoding and interest in digital scholarship working at a U.S. library who wishes she knew and understood all but surely doesn’t.  Please  add comments and questions.

1.    The Emergence of the Digital Humanities

This year several leaders in digital humanities declared its “emergence.”  At one of the first Bamboo workshops, John Unsworth pointed to the high number of participants and developments in digital humanities since work on the ACLS Cyberinfrastructure report (Our Cultural Commonwealth) began 5 years earlier and noted “we have in fact reached emergence… we are now at a moment when real change seems possible.”  Likewise, Stan Katz commented in a blog post called “The Emergence of the Digital Humanities,” “Much remains to be done, and campus-based inattention to the humanities complicates the task. But the digital humanities are here to stay, and they bear close watching.”

Emergence: Termite Cathedral (Wikipedia)

Last year I blogged about the emergence of digital humanities, and I suspect I will the next few years as well, but digital humanities did seem to gain momentum and visibility in 2008.  For me, a key sign of DH’s emergence came when the NEH transformed the Digital Humanities Initiative into the Office of Digital Humanities (ODH), signaling the significance of the “digital” to humanities scholarship.  After the office was established, Inside Higher Ed noted in “Rise of the Digital NEH” that what had been a “grassroots movement” was attracting funding and developing “organizational structure.”  Establishing the ODH gave credibility to an emerging field (discipline? methodology?).  When you’re trying to make the case that your work in digital humanities should count for tenure and promotion, it certainly doesn’t hurt to point out that it’s funded by the NEH.  The ODH acts not only as a funder (of 89 projects to date), but also as a facilitator, convening conversations, listening actively, and encouraging digital humanities folks to “Keep innovating.”  Recognizing that digital humanities work occurs across disciplinary and national boundaries, the ODH collaborates with funding agencies in other countries, such as the UK’s JISC, Canada’s Social Sciences and Humanities Research Council (SSHRC), and Germany’s DFG; US agencies such as the NSF, IMLS, and DOE; and non-profits such as CLIR.  Although the ODH has a small staff (three people) and limited funds, I’ve been impressed by how much this knowledgeable, entrepreneurial team has been able to accomplish, such as launching initiatives focused on data mining and high performance computing, advocating for the digital humanities, providing seed funding for innovative projects, and sponsoring institutes on advanced topics in the digital humanities.

It also seemed like there were more digital humanities jobs in 2008, or at least more job postings that listed digital humanities as a desired specialization.  Of course, the economic downturn may limit not only the number of DH jobs, but also the funding available to pursue complex projects–or, here’s hoping, it may lead to funding for scanner-ready research infrastructure projects.

2.    Defining “digital humanities”

Perhaps another sign of emergence is the effort to figure out just what the beast is.  Several essays and dialogues published in 2008 explore and make the case for the digital humanities; a few use the term “promise,” suggesting that the digital humanities is full of potential but not yet fully realized.

  • “The Promise of Digital History,” a conversation among Dan Cohen, Michael Frisch, Patrick Gallagher, Steven Mintz, Kirsten Sword, Amy Murrell Taylor, Will Thomas III, and Bill Turkel published in the Journal of American History.  This fascinating, wide-ranging discussion explores defining digital history; developing new methodological approaches; teaching both skills and an understanding of the significance of new media for history; coping with impermanence and fluidity; sustaining collaborations; expanding the audience for history; confronting institutional and cultural resistance to digital history; and much more. Whew! One of the most fascinating discussion threads: Is digital history a method, field, or medium?  If digital history is a method, then all historians need to acquire basic knowledge of it; if it is a medium, then it offers a new form for historical thinking, one that supports networked collaboration.  Participants argued that digital history is not just about algorithmic analysis, but also about collaboration, networking, and using new media to explore historical ideas.
  • In “Humanities 2.0: Promise, Perils, Predictions”  (subscription required, but see Participatory Learning and the New Humanities: An Interview with Cathy Davidson for related ideas), Cathy Davidson argues that the humanities, which offers strengths in “historical perspective, interpretative skill, critical analysis, and narrative form,” should be integral to the information age.  She calls for humanists to acknowledge and engage with the transformational potential of technology for teaching, research and writing.
    Extra Credit, by ptufts

    Describing how access to research materials online has changed research, she cites a colleague’s joke that work done before the emergence of digital archives should be emblazoned with an “Extra Credit” sticker.  Now we are moving into “Humanities 2.0,” characterized by networked participation, collaboration, and interaction.  For instance, scholars might open up an essay for criticism and commentary using a tool such as CommentPress, or they might collaborate on multinational, multilingual teaching and research projects, such as the Law in Slavery and Freedom Project.   Yet Davidson acknowledges the “perils” posed by information technology, particularly monopolistic, corporate control of information.   Davidson contributes to the conversation about digital humanities by emphasizing the importance of a critical understanding of information technology and advocating for a scholarship of engagement and participation.

  • In “Something Called ‘Digital Humanities'”, Wendell Piez challenges William Deresiewicz’s dismissal of “something called digital humanities” (as well as of “Contemporary lit, global lit, ethnic American lit; creative writing, film, ecocriticism”).  Piez argues that just as Renaissance “scholar-technologists” such as Aldus Manutius helped to create print culture, so digital humanists focus on both understanding and creating digital media. As we ponder the role of the humanities in society, perhaps digital humanities, which both enables new modes of communicating with the larger community and critically reflects on emerging media, provides one model for engagement.

3.    Community and collaboration

According to Our Cultural Commonwealth, “facilitat[ing] collaboration” is one of the five key goals for the humanities cyberinfrastructure.   Although this goal faces cultural, organizational, financial, and technical obstacles, several recent efforts are trying to articulate and address these challenges.

To facilitate collaboration, Our Cultural Commonwealth calls for developing a network of research centers that provide both technical and subject expertise.  In A Survey of Digital Humanities Centers in the United States, Diane Zorich inventories the governance, organizational structures, funding models, missions, projects, and research at existing DH centers.  She describes such centers as being at a turning point, reaching a point of maturity but facing challenges in sustaining themselves and preserving digital content.  Zorich acknowledges the innovative work many digital humanities centers have been doing, but calls for greater coordination among centers so that they can break out of siloes, tackle common issues such as digital preservation, and build shared services.   Such coordination is already underway through groups such as CenterNet and HASTAC, collaborative research projects funded by the NEH and other agencies, cyberinfrastructure planning projects such as Bamboo, and informal partnerships among centers.

How to achieve greater coordination among “Humanities Research Centers” was also the topic of the Sixth Scholarly Communications Institute (SCI), which used the Zorich report as a starting point for discussion.  The SCI report looks at challenges facing both traditional humanities centers, as they engage with new media and try to become “agents of change,” and digital humanities centers, as they struggle to “move from experimentation to normalization” and attain stability (6).  According to the report, humanities centers should facilitate “more engagement with methods,” discuss what counts as scholarship, and coordinate activities with each other.  Through my Twitter feeds, I understand that the SCI meeting seems to be yielding results: CenterNet and the Consortium of Humanities Centers and Institutes (CHCI) are now discussing possible collaborations, such as postdocs in digital humanities.

Likewise, Bamboo is bringing together humanities researchers, computer scientists, information technologists, and librarians to discuss developing shared technology services in support of arts and humanities researchers.  Since April 2008, Bamboo has convened three workshops to define scholarly practices, examine challenges, and plan for the humanities cyberinfrastructure.  I haven’t been involved with Bamboo (beyond partnering with them to add information to the Digital Research Tools wiki), so I am not the most authoritative commentator, but I think that involving a wide community in defining scholarly needs and developing technology services just makes sense—it prevents duplication of effort, leverages common resources, and ultimately, one hopes, makes it easier to perform and sustain research using digital tools and resources.  The challenge, of course, is how to move from talk to action, especially given current economic constraints and the mission creep that is probably inevitable with planning activities that involve over 300 people.  To tackle implementation issues, Bamboo has set up eight working groups that are addressing topics like education, scholarly networking, tools and content, and shared services.  I’m eager to see what Bamboo comes up with.

Planning for the cyberinfrastructure and coordinating activities among humanities centers are important activities, but playing with tools and ideas among fellow digital humanists is fun!  (Well, I guess planning and coordination can be fun, too, but a different kind of fun.)  This June, the Center for History and New Media hosted its first THATCamp (The Humanities and Technology Camp), a “user-generated,” organically organized “unconference” (very Web 2.0/open source).

Dork Shorts at THAT Camp

Rather than developing an agenda prior to the conference, the organizers asked each participant to blog about his or her interests, then devoted the first session to setting up sessions based on what participants wanted to discuss.  Instead of passively listening to three speakers read papers, each person who attended a session was asked to participate actively.  Topics included Teaching Digital Humanities, Making Things (Bill Turkel’s Arduino workshop), Visualization, Infrastructure and Sustainability, and the charmingly titled Dork Shorts, where THAT Campers briefly demonstrated their projects.  THAT Camp drew a diversity of folks–faculty, graduate students, librarians, programmers, information technologists, funders, etc.  The conference used technology effectively to stir up and sustain energy and ideas—the blog posts before the conference helped the attendees set some common topics for discussion, and Twitter provided a backchannel during the conference.  Sure, a couple of sessions meandered a bit, but I’ve never been to a conference where people were so excited to be there, so engaged and open.  I bet many collaborations and bright ideas were hatched at THAT Camp.  This year, THAT Camp will be expanded and will take place right after Digital Humanities 2009.

THAT Camp got me hooked on Twitter.  Initially a Twitter skeptic (gawd, do I need another way to procrastinate?), I’ve found that it’s a great way to find out what’s going on in digital humanities and connect with others who have similar interests.  I love Barbara Ganley’s line (via Dan Cohen): “blog to reflect, Tweet to connect.”  If you’re interested in Twittering but aren’t sure how to get started, I’d suggest following digital humanities folks and some of the people they follow.  You can also search for particular topics at  Amanda French has written a couple of great posts about Twitter as a vehicle for scholarly conversation, and a recent Digital Campus podcast features a discussion among Tweeters Dan Cohen and Tom Scheinfeldt and skeptic Mills Kelly.

HASTAC offers another model for collaboration by establishing a virtual network of people and organizations interested in digital humanities and sponsoring online forums (hosted by graduate and undergraduate students) and other community-building activities.  Currently HASTAC is running a lively, rich forum on the future of the digital humanities featuring Brett Bobley, director of the NEH’s ODH.  Check it out!

Digital Humanities Sessions at MLA 2008

A couple of days after returning from the MLA (Modern Language Association) conference, I ran into a biologist friend who had read about the “conference sex” panel at MLA.  She said,  “Wow, sometimes I doubt the relevance of my research, but that conference sounds ridiculous.” I’ve certainly had my moments of skepticism toward the larger purposes of literary research while sitting through dull conference sessions, but my MLA experience actually made me feel jazzed and hopeful about the humanities.  That’s because the sessions that I attended–mostly panels on the digital humanities–explored topics that seemed both intellectually rich and relevant to the contemporary moment.  For instance, panelists discussed the significance of networked reading, dealing with information abundance, new methods for conducting research such as macroanalysis and visualization, participatory learning, copyright challenges, the shift (?) to digital publishing, digital preservation, and collaborative editing.  Here are my somewhat sketchy notes about the MLA sessions I was able to attend; see great blog posts by Cathy Davidson, Matt Gold, Laura Mandell, Alex Reid, and John Jones for more reflections on MLA 2008.

1)    Seeing patterns in literary texts
At the session “Defoe, James, and Beerbohm: Computer-Assisted Criticism of Three Authors,” David Hoover noted that James scholars typically distinguish between his late and early work.  But what does that difference look like?  What evidence can we find of such a distinction?  Hoover used computational/statistical methods such as Principal Components Analysis and the t-test to examine word choice across James’ work and found some striking patterns illustrating that James’ diction during his early period was indeed quite different from his late period.  Hoover introduced the metaphor of computational approaches to literature serving either as a telescope (macroanalysis, discerning patterns across a large body of texts) or a microscope (looking closely at individual works or authors).
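The flavor of this kind of analysis can be gestured at with a toy example. The numbers below are invented, not Hoover's data, and his actual study used richer statistics (PCA across many words at once); this sketch just shows the t-test half of the idea: split each period's writing into chunks, compute each chunk's relative frequency for a word, and ask whether the two periods differ more than chance would allow.

```python
import math
from collections import Counter

def rel_freq(chunk, word):
    """Relative frequency of `word` in a text chunk."""
    words = chunk.lower().split()
    return Counter(words)[word] / len(words)

def welch_t(xs, ys):
    """Welch's t-statistic for two samples of per-chunk frequencies."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    vx = sum((x - mx) ** 2 for x in xs) / (len(xs) - 1)
    vy = sum((y - my) ** 2 for y in ys) / (len(ys) - 1)
    return (mx - my) / math.sqrt(vx / len(xs) + vy / len(ys))

# Invented per-chunk frequencies of one function word in "early"
# vs. "late" chunks; a large |t| suggests the periods really differ.
early = [0.031, 0.028, 0.035, 0.030]
late = [0.012, 0.015, 0.010, 0.014]
t = welch_t(early, late)
print(round(t, 2))
```

Run over hundreds of words at once (which is where PCA comes in), this is how stylometry turns an impression like “late James sounds different” into visible, testable patterns.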

2)    New approaches to electronic editing

The ACH Guide to Digital-Humanities Talks at the 2008 MLA Convention lists at least 9 or 10 sessions concerned with editing or digital archives, and the Chronicle of Higher Ed dubbed digital editing as a “hot topic” for MLA 2008.   At the session on Scholarly Editing in the Twenty-First Century: Digital Media and Editing, Peter Robinson (whose paper was delivered by Jerome McGann and included passages referencing Jerome McGann) presented the idea of “Editing without walls,” shifting from a centralized model where a scholar acts as the “guide and guardian” who oversees work on an edition to a distributed, collaborative model.  With “community made editions,” a library would produce high quality images, researchers would transcribe those images, other researchers would collate the transcriptions, others would analyze the collations and add commentaries, etc. Work would be distributed and layered.  This approach opens up a number of questions: what incentives will researchers have to work on the project? How will the work be coordinated? Who will maintain the distributed edition for the long term?  But Robinson claimed that the approach would have significant advantages, including reduced cost and greater community investment in the editions.  Several European initiatives are already working on building tools and platforms similar to what Peter Shillingsburg calls “electronic knowledge sites,” including the Discovery Project, which aims to “explore how Semantic Web technology can help to create a state-of-the-art research and publishing environment for philosophy” and the Virtual Manuscript Room, which “will bring together digital resources related to manuscript materials (digital images, descriptions and other metadata, transcripts) in an environment which will permit libraries to add images, scholars to add and edit metadata and transcripts online, and users to access material.”
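Robinson's distributed workflow can be pictured as a simple data structure. The field names below are entirely hypothetical, a sketch of the idea rather than any project's actual schema: each page image produced by a library accumulates independent, attributed layers of transcription, collation, and commentary from different contributors.

```python
# Hypothetical sketch of a "community made edition" record: work on a
# page is layered, with each contribution attributed separately.
page = {
    "image": "folio_12r.jpg",  # high-quality image produced by the library
    "layers": [],              # contributions added by the community
}

def add_layer(page, kind, contributor, content):
    """Append an attributed layer (transcription, collation, commentary)."""
    page["layers"].append(
        {"kind": kind, "contributor": contributor, "content": content}
    )

add_layer(page, "transcription", "scholar_a", "In the beginnyng ...")
add_layer(page, "collation", "scholar_b", "variant: 'beginnyng' / 'begynnyng'")
add_layer(page, "commentary", "scholar_c", "Spelling suggests scribe B.")

kinds = [layer["kind"] for layer in page["layers"]]
print(kinds)  # ['transcription', 'collation', 'commentary']
```

Because each layer carries its contributor, the model also hints at an answer to Robinson's incentive question: distributed work remains individually creditable, unlike edits folded anonymously into a single scholar's edition.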

Matt Kirschenbaum then posed a provocative question: if Shakespeare had had a hard drive, what would scholars want to examine? When he began work on King Lear, how long he worked on it, what changes he made, what web sites he consulted while writing?  Of course, Shakespeare didn’t have a hard drive, but almost every writer working now uses a computer, so it’s possible to analyze a wide range of information about the writing process.  Invoking Tom Tanselle, Matt asked, “What are the dust jackets of the information age?” That is, what data do we want to preserve?  Discussing his exciting work with Alan Liu and Doug Reside to make William Gibson’s Agrippa available in emulation and as recorded on video in the early 1990s, Matt demonstrated how emulation can be used to simulate the original experience of this electronic poem.  He emphasized the importance of collaborating with non-academics–hackers, collectors, and even Agrippa’s original publisher–to learn about Agrippa’s history and make the poem available.  Matt then turned to digital preservation.  Even data designed to self-destruct is recoverable, but Matt expressed concern about cloud computing, where data exists on networked servers.  How will scholars get access to a writer’s email, Facebook updates, Google Docs, and other information stored online?  Matt pointed to several projects working on the problem of archiving electronic art and performances by interviewing artists about what’s essential and providing detailed descriptions of how works should be re-created: Forging the Future and Archiving the Avant Garde.

3)    Literary Studies in the Digital Age: A Methodological Primer

At the panel on Methodologies: Literary Studies in the Digital Age, Ken Price discussed a forthcoming book that he is co-editing with Ray Siemens called Literary Studies in a Digital Age: A Methodological Primer.  The book, which is under consideration by MLA Press, will feature essays by John Unsworth on electronic scholarly publishing, Tanya Clement on critical trends, David Hoover on textual analysis, Susan Schreibman on electronic editing, and Bill Kretzschmer on GIS, among others.   Several authors to be included in the volume—David Hoover, Alan Liu, and Susan Schreibman—spoke.

Hoover began with a provocative question: do we really want to get to 2.0, collaborative scholarship? He then described different models of textual analysis:
i.    the portal (e.g. MONK, TAPOR): typically a suite of simple tools; platform-independent; not very customizable
ii.    desktop tools (e.g. TACT)
iii.    standardized software used for text analysis (e.g. Excel)

Next, Alan Liu discussed his Transliteracies project, which examines the cultural practices of online reading and the ways in which reading changes in a digital environment (e.g. distant reading, sampling, and “networked public discourse,” with links, comments, trackbacks, etc.).  These transformations in reading raise important questions, such as how expertise relates to networked public knowledge.  Liu pointed to a number of crucial research and development goals (both my notes and my memory get pretty sketchy here):
1)    development of a standardized metadata scheme for annotating social networks
2)    data mining and annotating social computing
3)    reconciling metadata with writing systems
4)    information visualization for the contact zone between macro-analysis and close reading
5)    historical analysis of past paradigms for reading and writing
6)    authority-adjudicating systems to filter content
7)    institutional structures to encourage scholars to share and participate in new public knowledge

Finally, Susan Schreibman discussed electronic editions.  Editors were among the first humanities folks drawn to the digital environment; they recognized that electronic editions would allow them to overcome editorial challenges and present a history of the text over time, pushing beyond the limitations of the textual apparatus and representing each edition.  Initially the scholarly community focused on building single-author editions such as the Blake and Whitman Archives.  Now the community is trying to get beyond siloed projects by building grid technologies to edit, search, and display texts.  (See, for example, TextGrid.)  Schreibman asked how we can use text encoding to “unleash the meanings of text that are not transparent” and encode themes or theories of text, then use tools such as TextArc or ManyEyes to engage in different spatial/temporal views.

A lively discussion of crowdsourcing and expert knowledge followed, hinging on the question of what the humanities have to offer in the digital age.  Some answers: historical perspective on past modes of reading, writing and research; methods for dealing with multiplicity, ambiguity and incomplete knowledge; providing expert knowledge about which text is the best to work with.  Panelists and participants envisioned new technologies and methods to support new literacies, such as the infrastructure that would enable scholars and readers to build their own editions; a “close-reading machine” based upon annotations that would enable someone to study, for example, dialogue in the novel; the ability to zoom out to see larger trends and zoom in to examine the details; the ability to examine “humanities in the age of total recall,” analyzing the text in a network of quotation and remixing; developing methods for dealing with what is unknowable.

4) Publishing and Cyberinfrastructure

At the panel on publishing and cyberinfrastructure moderated by Laura Mandell, Penny Kaiserling from the University of Virginia Press, Linda Bree from Cambridge UP, and Michael Lonegro from Johns Hopkins Press discussed the challenges that university presses face as they attempt to shift into the digital.  At Cambridge, print sales are currently subsidizing ebooks.  Change is slower than was envisioned ten years ago, more evolutionary than revolutionary.  All three publishers emphasized that presses are unlikely to transform their publishing model unless academic institutions embrace electronic publication, accepting e-publishing for tenure and promotion and purchasing electronic works.  Ultimately, they said, it is up to the scholarly community to define what is valued.  Although the shift to electronic publishing is well under way for journals, academic publishers’ experience lags in publishing monographs.  One challenge is that journals are typically bundled, but there is no comparable model for bundling books.  Getting third-party rights to illustrations and other copyrighted materials included in a book is another challenge.  Ultimately scholars will need to rethink the monograph, determining what is valuable (e.g. the coherence of an extended argument) and how it exists electronically, along with the benefits offered by social networking and analysis.  Although some in the audience challenged the publishers to take risks in initiating change themselves, the publishers insisted that it is ultimately up to the scholarly community.  The publishers also asked why the evaluation of scholarship depends on university presses constrained by economics rather than on scholars themselves–that is, why professional review has been outsourced to the university press.

5) Copyright

The panel on Promoting the Useful Arts: Copyright, Fair Use, and the Digital Scholar, which was moderated by Steve Ramsay, featured Aileen Berg explaining the publishing industry’s view of copyright, Robin G. Schulze describing the nightmare of trying to get rights to publish an electronic edition of Marianne Moore’s notebooks, and Kari Kraus detailing how copyright and contract law make digital preservation difficult.  Schulze asked where the MLA was when copyright was extended through the Sonny Bono Act, limiting what scholars can do, and said she is working on pre-1923 works to avoid the copyright nightmare.  Berg, who was a good sport to go before an audience not necessarily sympathetic to the publishing industry’s perspective, advised authors to exercise their own rights and negotiate their agreements rather than simply signing what is put before them; often they can retain some rights.  Kraus discussed how licenses (such as click-through agreements) are further limiting how scholars can use intellectual works but noted some encouraging signs, such as the James Joyce estate’s settlement with a scholar allowing her to use copyrighted materials in her scholarship.  Attendees discussed ways that literature professors could become more active in challenging unfair copyright limitations, particularly through advocacy work and supporting groups such as the Electronic Frontier Foundation.

6) Humanities 2.0: Participatory Learning in an Age of Technology

The Humanities 2.0 panel featured three very interesting presentations about projects funded through the MacArthur Digital Learning competition, as well as Cathy Davidson’s overview of the competition and of HASTAC.  (For a fuller discussion of the session, see Cathy Davidson’s summary.) Davidson drew a distinction between “digital humanities,” which uses digital technologies to enhance the mission of the humanities, and humanities 2.0, which “wants us to combine critical thinking about the use of technology in all aspects of social life and learning with creative design of future technologies” (Davidson).  Next Howard Rheingold discussed the “social media classroom,” which is “a free and open-source (Drupal-based) web service that provides teachers and learners with an integrated set of social media that each course can use for its own purposes—integrated forum, blog, comment, wiki, chat, social bookmarking, RSS, microblogging, widgets, and video commenting are the first set of tools.”  Todd Presner showcased the HyperCities project, a geotemporal interface for exploring and augmenting spaces.  Leveraging the Google Maps API and KML, HyperCities enables people to navigate and narrate their own past through space and time, adding their own markers to the map and experiencing different layers of time and space.  The project is working with citizens and students to add their own layers of information—images, narratives—to the maps, making available an otherwise hidden history.  Currently there are maps for Rome, LA, New York, and Berlin.  A key principle behind HyperCities is aggregating and integrating archives, moving away from silos of information.  Finally, Greg Niemeyer and Antero Garcia presented a project that engages students and citizens in tracking pollution using whimsically designed sensors.  Students tracked pollution levels at different sites—including in their own classroom—and began taking action, investigating the causes of pollution and advocating for solutions.  What unified these projects was the belief that students and citizens have much to contribute to understanding and transforming their environments.

7) The Library of Google: Researching Scanned Books

What does Google Books mean for literary research?  Is Google Books more like a library or a research tool?  What kind of research is made possible by Google Books (GB)? What are GB’s limitations?  Such questions were discussed in a panel on Google Books, moderated by Michael Hancher, that included Amanda French, Eleanor Shevlin, and me.  Amanda described how Google Books enabled her to find earlier sources on the history of the villanelle than she was able to locate pre-GB, Eleanor provided a book-history perspective on GB, and I discussed the advantages and limitations of GB for digital scholarship (my slides are available here).  A lively discussion among the 35 or so attendees followed; all but one person said that GB was, on balance, good for scholarship, although some people expressed concern that GB would replace interlibrary loan, indicated that they use GB mainly as a reference tool to find information in physical volumes, and emphasized the need to continue to consult physical books for bibliographic details such as illustrations and bindings.

8) Posters/Demonstrations: A Demonstration of Digital Poetry Archives and E-Criticism: New Critical Methods and Modalities

I was pleased to see the MLA feature two poster sessions, one on digital archives and one on digital research methods. Instead of just watching a presentation, attendees could engage in discussion with project developers and see how different archives and tools worked.  That kind of informal exchange allows people to form collaborations and gain a more hands-on understanding of the digital humanities. (I didn’t take notes, and the sessions occurred in the evening, when my brain begins to shut down, so my summary is pretty unsophisticated: wow, cool.)

Reflections on MLA

This was my first MLA and, despite having to leave home smack in the middle of the holidays, I enjoyed it.  Although many of the sessions I attended shifted away from the “read your paper aloud when people are perfectly capable of reading it themselves” model, I noted the MLA’s requirement that authors bring three copies of their paper to provide upon request, which raises a couple of questions: what if you don’t have a paper (just PowerPoint slides or notes)? And why can’t you share copies electronically? And why doesn’t the MLA provide fuller descriptions of the sessions beyond just titles and speakers?  (Or am I just not looking in the right place?)  Sure, in the paper era that would have meant a conference issue of PMLA several volumes thick, but if the information were online, there would be a much richer record of each session.  (Or you could enlist bloggers or twitterers [tweeters?] to summarize each session…) After attending THAT Camp, I’m a fan of the unconference model, which fosters the kind of engagement that conferences should be all about—conversation, brainstorming, and problem-solving rather than passive listening.  But lively discussions often do take place during the Q & A period and in the hallways after sessions (and who knows what takes place elsewhere…)

Tips on Writing a Successful Grant Proposal

The NEH recently announced deadlines for several digital humanities programs, including the NEH Fellowships at Digital Humanities Centers (Sep. 15), Digital Humanities Start-Up Grants (Oct. 8), and the DFG/NEH Joint Digitization Program (Oct. 15).  So how do you win one of these grants?  I’ve had the honor and privilege (really, I mean it) of serving on several review panels, which has given me insight into what sets apart excellent proposals.  (Nope, I’m not going to say which panels I served on—let’s say if you won a grant, I was on the panel, and if you didn’t, I wasn’t.)

Before serving on a grant review panel, I sort of pictured it as a smoke-filled room where fat-cats chomping on big cigars exercised all their political might to get pet projects funded.  (OK, not really–but it was mysterious.) But the process is nothing like that—no smoke, no posturing, no arm-twisting.  Instead, the NEH brings together 5 or so experts in the field—often directors of digital humanities centers, faculty who have led digital projects, and others who have both subject knowledge in the humanities and expertise in technology—to evaluate the proposals.  Prior to coming to DC to serve on the panel, the panelists review each proposal and make detailed comments, using the grant guidelines as a rubric.  Panelists rate each proposal as “excellent,” “very good,” “some merit,” or “not recommended for funding.”  Typically I read each proposal three times: first I give all of the proposals a quick read to get a sense of the whole, then I read more slowly to develop a more detailed understanding of each one, and finally I skim as I write up my comments.  The panel itself typically begins with an NEH official explaining the review process, including the conflict-of-interest rules.  Then panelists discuss each proposal, beginning with the ones rated most highly.  Each panelist provides his or her initial perspective on the proposal, which is followed by an open, respectful debate about its strengths and weaknesses.  Once the discussion is complete, each panelist offers his or her final ranking of the proposal.  I’m fascinated to hear the different perspectives offered by the other panelists; often I am persuaded to change my rankings based on the discussion.  At the end of an exhilarating and exhausting day, the NEH asks panelists for feedback on the proposal guidelines and the review process, demonstrating its commitment to improvement.

Based on my experience as a reviewer, I think I have insight into what makes a strong proposal.  I should say that I’ve never actually received an NEH grant, so take these suggestions with a grain of salt.

  • If you don’t receive a grant, don’t despair. On the panels on which I’ve served, only about 20% of the proposals get funded, which means that some very strong ones just don’t make it.  But you can always reapply, using the reviewers’ comments to strengthen your proposal.
  • Read the Guidelines: Make sure that your proposed project meets the criteria of the grant program.  Would it be better suited for another grant program?  In your narrative, address explicitly how you meet the review criteria–don’t make the reviewers guess.
  • Make an argument for funding your proposal: Don’t just say what you will do, but why it’s important to do it. What impact will your project have on the field, institution, or community?  How?  How is your proposal innovative?  Strong, relevant letters of support can help you make your argument about the proposal’s significance; it’s impressive when leading scholars testify to a project’s importance, but a stack of weak, generic letters can make a proposal seem, well, desperate.
  • Talk to the Program Officers: They’re there to help.  Often they will review a draft proposal prior to submission, provided that you get it to them at least 6 weeks in advance of the grant deadline.  I’m quite impressed by the staff of the Digital Humanities Office: they’re smart, knowledgeable, energetic, all-around good folks, the kind you would trust to lead one of the most visible funding programs in digital humanities.  In the review panels, they focus not on how weak a proposal is, but how they can help the applicant to make it better.
  • Show that you have technical knowledge: Digital humanities projects demand both sophisticated technical and subject knowledge.  Cite the appropriate standards and best practices and explain how you will apply them.
  • Focus. If you attempt to do too much, reviewers will wonder if you can pull it all off, and question what exactly it is you’re trying to do, anyway.
  • Be realistic. It’s always hard to figure out how long a project will take and how much everything will cost.  Talk to others who have done similar work to get a sense of what it will take to pull off your project.  In the work plan, offer a detailed description of what will be accomplished by what deadline and by whom.  Don’t over-promise; remember, if you win the grant, you’ll actually have to do what you said you would do.
  • Sweat the small stuff: Although reviewers focus on the substance of the proposal, a sloppy application can detract from the overall quality.  Proofread carefully to catch grammatical errors.  Think about the design of the document.  If I see huge margins and jumbo fonts, I wonder if the applicant is just trying to fill up space.
  • Ask to see the reviewers’ comments. Whether you’re successful or not, read the reviewers’ comments, which will likely be full of helpful suggestions about how to improve the project and application.  You’re getting free consulting from 5 or more experts in the field—take advantage of it.
  • Consider serving on a grant review panel. Sure, it’s a lot of work, but it’s worth it. You do get a small stipend, but given that it takes about 3-4 hours to review and comment on each proposal, plus the time to travel to DC and serve on the panel, the hourly pay probably works out to about $5 or $6.  But you get to serve the community, spend the day with smart colleagues talking about stuff that matters, and learn about the new ideas and projects that are bubbling up.  Perhaps most importantly, I think I now have a better sense of what it takes to write a strong application.  As a bonus, sometimes you get your very own plate of chocolate—including Special Dark!—for an afternoon boost.

For a detailed, inside-the-NEH perspective on writing successful applications, see Meredith Hindley’s How to Get a Grant from NEH: A public service message.
Good luck!