Monthly Archives: February 2009

Digital Humanities in 2008, II: Scholarly Communication & Open Access

Open access, just like dark chocolate and blueberries, is good and good for you, enabling information to be mined and reused, fostering the exchange of ideas, and ensuring public access to research that taxpayers often helped to fund.  Moreover, as Dan Cohen contends, scholars benefit from open access to their work, since their own visibility increases: “In a world where we have instantaneous access to billions of documents online, why would you want the precious article or book you spent so much time on to exist only on paper, or behind a pay wall? This is a sure path to invisibility in the digital age.”  Thus some scholars are embracing social scholarship, which promotes openness, collaboration, and the sharing of research.  This year saw some positive developments in open access and scholarly communication, such as the implementation of the NIH mandate, Harvard’s Faculty of Arts & Sciences’ decision to go open access (followed by Harvard Law), and the launch of the Open Humanities Press.  But there were also some worrisome developments (the Conyers Bill’s attempt to rescind the NIH mandate, EndNote’s lawsuit against Zotero) and some confusing ones (the Google Books settlement).  In the second part of my summary of the year in digital humanities, I’ll look broadly at the scholarly communication landscape, discussing open access to educational materials, new publication models, the Google Books settlement, and cultural obstacles to digital publication.

Open Access Grows–and Faces Resistance

In December of 2007, the NIH Public Access Policy was signed into law, mandating that any research funded by the NIH be deposited in PubMed Central within a year of its publication.

Ask Me About Open Access by mollyali

Since the mandate was implemented, almost 3,000 new biomedical manuscripts have been deposited into PubMed Central each month.  Now John Conyers has put forward a bill that would rescind the NIH mandate and prohibit other federal agencies from implementing similar policies.  This bill would deny the public access to research that it funded and choke innovation and scientific discovery.  According to Elias Zerhouni, former director of the NIH, there is no evidence that the mandate harms publishers; rather, it maximizes the public’s “return on its investment” in funding scientific research.  If you support public access to research, contact your representative and express your opposition to this bill before February 28.  The Alliance for Taxpayer Access offers a useful summary of key issues as well as a letter template.

Open Humanities?

Why have the humanities lagged behind the sciences in adopting open access?  Gary Hall points to several ways in which the sciences differ from the humanities, including science’s greater funding for “author pays” open access and emphasis on disseminating information rapidly, as well as the humanities’ “negative perception of the digital medium.”  But Hall is challenging that perception by helping to launch the Open Humanities Press (OHP) and publishing “Digitize This Book.”  Billing itself as “an international open access publishing collective in critical and cultural theory,” OHP selects journals for inclusion in the collective based upon their adherence to publication standards, open access standards, design standards, technical standards, and editorial best practices.  Prominent scholars such as Jonathan Culler, Stephen Greenblatt, and Jerome McGann have signed on as board members of the Open Humanities Press, giving it more prestige and academic credibility.  In a talk at UC Irvine last spring, OHP co-founder Sigi Jöttkandt refuted the assumption that open access means “a sort of open free-for-all of publishing” rather than high-quality, peer-reviewed scholarship.  Jöttkandt argued that open access should be fundamental to the digital humanities: “as long as the primary and secondary materials that these tools operate on remain locked away in walled gardens, the Digital Humanities will fail to fulfill the real promise of innovation contained in the digital medium.”  It’s worth noting that many digital humanities resources are available as open access, including Digital Humanities Quarterly, the Rossetti Archive, and projects developed by CHNM; many others may not be explicitly open access, but they make information available for free.

In “ANTHROPOLOGY OF/IN CIRCULATION: The Future of Open Access and Scholarly Societies,” Christopher Kelty, Michael M. J. Fischer, Alex “Rex” Golub, Jason Baird Jackson, Kimberly Christen, and Michael F. Brown engage in a wide-ranging discussion of open access in anthropology, prompted in part by the American Anthropological Association’s decision to move its publishing activities to Wiley Blackwell.  This rich conversation explores different models for open access, the role of scholarly societies in publishing, building community around research problems, reusing and remixing scholarly content, the economics of publishing, the connection between scholarly reputation and readers’ access to publications, how to make content accessible to source communities, and much more.   As Kelty argues, “The future of innovative scholarship is not only in the AAA (American Anthropological Association) and its journals, but in the structures we build that allow our research to circulate and interact in ways it never could before.”  Kelty (who, alas, was lured away from Rice by UCLA) is exploring how to make scholarship more open and interactive.  You can buy a print copy of Two Bits, his new book on the free software movement published by Duke UP; read (for free) a PDF version of the book; comment on the CommentPress version; or download and remix the HTML.  Reporting on Two Bits at Six Months, Kelty observed, “Duke is making as little or as much money on the book as they do on others of its ilk, and yet I am getting much more from it being open access than I might otherwise.”  The project has made Kelty more visible as a scholar, leading to more media attention, invitations to give lectures and submit papers, etc.

New Models of Scholarly Communication, and Continued Resistance

To what extent are new publishing models emerging as the Internet enables the rapid, inexpensive distribution of information, the incorporation of multimedia into publications, and networked collaboration?  To find out, the ARL/Ithaka New Model Publications Study conducted an “organized scan” of emerging scholarly publications such as blogs, ejournals, and research hubs.  ARL recruited 301 volunteer librarians from 46 colleges and universities to interview faculty about new model publications that they used.  (I participated in a small way, interviewing one faculty member at Rice.)  According to the report, examples of new model publications exist in all disciplines, although scientists are more likely to use pre-print repositories, while humanities scholars participate more frequently in discussion forums.  The study identifies eight principal types of scholarly resources:

  • E-only journals
  • Reviews
  • Preprints and working papers
  • Encyclopedias, dictionaries, and annotated content
  • Data
  • Blogs
  • Discussion forums
  • Professional and scholarly hubs

These categories provide a sort of abbreviated field manual for identifying different types of new model publications.  I might add a few more categories, such as collaborative commentary or peer-to-peer review (exemplified by projects that use CommentPress); scholarly wikis like OpenWetWare that enable open sharing of scholarly information; and research portals like NINES (which perhaps would be considered a “hub”).  The report offers fascinating examples of innovative publications, such as ejournals that publish articles as they are ready rather than on a set schedule and a video journal that documents experimental methods in biology.  Since only a few examples of new model publications could fit into this brief report, ARL is making available brief descriptions of 206 resources that it considered to be “original and scholarly works” via a publicly accessible database.

My favorite example of a new model publication: eBird, a project initiated by the Cornell Lab of Ornithology and the Audubon Society that enlists amateur and professional bird watchers to collect bird observation data.  Scientists then use this data to understand the “distribution and abundance” of birds.  Initially eBird ran into difficulty getting birders to participate, so they developed tools that allowed birders to get credit and feel part of a community, to “manage and maintain their lists online, to compare their observations with others’ observations.”  I love the motto and mission of eBird—“Notice nature.”  I wonder if a similar collaborative research site could be set up for, say, the performing arts, where audience members would document arts and humanities in the wild–plays, ballets, performance art, poetry readings, etc.

The ARL/Ithaka report also highlights some of the challenges faced by these new model publications, such as the conservatism of academic culture, the difficulty of getting scholars to participate in online forums, and finding ways to fund and sustain publications.  In  Interim Report: Assessing the Future Landscape of Scholarly Communication, Diane Harley and her colleagues at the University of California Berkeley delve into some of these challenges.  Harley finds that although some scholars are interested in publishing their research as interactive multimedia, “(1) new forms must be perceived as having undergone rigorous peer review, (2) few untenured scholars are presenting such publications as part of their tenure cases, and (3) the mechanisms for evaluating new genres (e.g., nonlinear narratives and multimedia publications) may be prohibitive for reviewers in terms of time and inclination.” Humanities researchers are typically less concerned with the speed of publication than scientists and social scientists, but they do complain about journals’ unwillingness to include many high quality images and would like to link from their arguments to supporting primary source material. However, faculty are not aware of any easy-to-use tools or support that would enable them to author multimedia works and are therefore less likely to experiment with new forms.  Scholars in all fields included in the study do share their research with other scholars, typically through emails and other forms of personal communication, but many regard blogs as “a waste of time because they are not peer reviewed.”  Similarly, Ithaka’s 2006 Studies of Key Stakeholders in the Digital Transformation in Higher Education (published in 2008) found that “faculty decisions about where and how to publish the results of their research are principally based on the visibility within their field of a particular option,” not open access.

But academic conservatism shouldn’t keep us from imagining and experimenting with alternative approaches to scholarly publishing.  Kathleen Fitzpatrick’s “book-like-object” (blob) proposal, Planned Obsolescence: Publishing, Technology, and the Future of the Academy, offers a bold and compelling vision of the future of academic publishing.  Fitzpatrick calls for academia to break out of its zombie-like adherence to (un)dead forms and proposes “peer-to-peer” review (as in Wikipedia), focusing on process rather than product (as in blogs), and engaging in networked conversation (as in CommentPress). (If references to zombies and blobs make you think Fitzpatrick’s stuff is fun to read as well as insightful, you’d be right.)

EndNote Sues Zotero

Normally I have trouble attracting faculty and grad students to workshops exploring research tools and scholarly communication issues, but they’ve been flocking to my workshops on Zotero, which they recognize as a tool that will help them work more productively.  Apparently Thomson Reuters, the maker of EndNote, has noticed the competitive threat posed by Zotero, since they have sued George Mason University, which produces Zotero, alleging that programmers reverse engineered EndNote so that they could convert proprietary EndNote .ens files into open Zotero .csl files.  Commentators more knowledgeable about the technical and legal details than I have found Thomson’s claims to be bogus.  My cynical read on this lawsuit is that EndNote saw a threat from a popular, powerful open source application and pursued legal action rather than competing by producing a better product.  As Hugh Cayless suggests, “This is an act of sheer desperation on the part of Thomson Reuters” and shows that Zotero has “scared your competitors enough to make them go running to Daddy, thus unequivocally validating your business model.”

The lawsuit seems to realize Yochai Benkler’s description of proprietary attempts to control information:

“In law, we see a continual tightening of the control that the owners of exclusive rights are given.  Copyrights are longer, apply to more uses, and are interpreted as reaching into every corner of valuable use. Trademarks are stronger and more aggressive. Patents have expanded to new domains and are given greater leeway. All these changes are skewing the institutional ecology in favor of business models and production practices that are based on exclusive proprietary claims; they are lobbied for by firms that collect large rents if these laws are expanded, followed, and enforced. Social trends in the past few years, however, are pushing in the opposite direction.”

Unfortunately, the lawsuit seems to be having a chilling effect that ultimately will, I think, hurt EndNote.  For instance, the developers of BibApp, “a publication-list manager and repository-populator,” decided not to import citation lists produced by EndNote, since “doing anything with their homegrown formats has been proven hazardous.” This lawsuit raises the crucial issue of whether researchers can move their data from one system to another.  Why would I want to choose a product that locks me in?  As Nature wrote in an editorial quoted by CHNM in its response to the lawsuit, “The virtues of interoperability and easy data-sharing among researchers are worth restating.”

Google Books Settlement

Google Books by Jon Wiley

In the fall, Google settled with the Authors Guild and the Association of American Publishers over Google Book Search, allowing academic libraries to subscribe to a full-text collection of millions of out-of-print but (possibly) in-copyright books.  (Google estimates that about 70% of published books fall into this category).  Individuals can also purchase access to books, and libraries will be given a single terminal that will provide free access to the collection.  On a pragmatic (and gluttonous) level, I think, Oh boy, this settlement will give me access to so much stuff.   But, like others, I am concerned about one company owning all of this information, see the Book Rights Registry as potentially anti-competitive, and wish that a Google victory in court had verified fair use principles (even if such a decision probably would have kept us in snippet view or limited preview for in-copyright materials).  Libraries have some legitimate concerns about access, privacy, intellectual freedom, equitable treatment, and terms of use.  Indeed, Harvard pulled out of the project over concerns about cost and accessibility.  As Robert Darnton, director of the Harvard Library and a prominent scholar of book history, wrote in the NY Review of Books, “To digitize collections and sell the product in ways that fail to guarantee wide access… would turn the Internet into an instrument for privatizing knowledge that belongs in the public sphere.” Although the settlement makes a provision for “non-consumptive research” (using the books without reading them) that seems to allow for text mining and other computational research, I worry that digital humanists and other scholars won’t have access to the data they need.  What if Google goes under, or goes evil? 
But the establishment of the Hathi Trust by several of Google Books’ academic library partners (and others) makes me feel a little better about access and preservation issues, and I noted that Hathi Trust will provide a corpus of 50,000 documents for the NEH’s Digging into the Data Challenge.  And as I argued in an earlier series of blog posts, I certainly do see how Google Books can transform research by providing access to so much information.

Around the same time (same day?) that the Google Books settlement was released, the Open Content Alliance (OCA) reached an important milestone, providing access to over a million books.  As its name suggests, the OCA makes scanned books openly available for reading, download, and analysis, and from my observations the quality of its digitization is better than Google’s.  Although the OCA’s collection is smaller and it focuses on public domain materials, it offers a vital alternative to GB.  (Rice is a member of the Open Content Alliance.)

Next up in the series on digital humanities in 2008: my attempt to summarize recent developments in research.


New MA Program in History & Media at the University at Albany

A few days ago a commenter on my blog asked how he could learn to develop rich historical web sites “that would allow me to bring primary sources/scholarship from centuries ago to a wider audience.”  I had a hard time thinking of digital humanities programs that provide training in authoring digital media (George Mason? Georgia Tech?).  But then I heard about the new Masters concentration in History and Media at the University at Albany, which promises to prepare students to develop historical web sites, documentary films, oral histories, and other forms of media.  Albany seems to be well-positioned to offer such a program; for instance, it published the late lamented Journal for Multimedia History, a groundbreaking journal focused on multimedia explorations of historical topics.  In a recent discussion about “The Promise of Digital History” published in the Journal of American History, Amy Murrell Taylor, one of the professors developing Albany’s program, makes a persuasive case for thinking about digital history as a medium, “as the production of something that can stand alongside a book, something that takes a different form but nonetheless raises questions, offers analysis, and advances our historiographical knowledge about a given subject.”

Here’s the announcement, taken from H-Net:

The University at Albany’s Department of History has introduced a new 36-credit History and Media concentration to its Masters program, allowing students to learn and apply specialized media skills — digital history and hypermedia authoring, photography and photoanalysis, documentary filmmaking, oral/video history, and aural history and audio documentary production — to the study of the past. The History and Media concentration builds on the Department’s strengths in academic and public history and its reputation as an innovator in the realm of digital and multimedia history.

Among the History and Media courses to be offered beginning in the fall of 2009 are: Introduction to Historical Documentary Media; Narrative in Historical Media; Readings and Practicum in Aural History and Audio Documentary Production; Readings and Practicum in Digital History and Hypermedia; Readings in the History and Theory of Documentary Filmmaking; Readings in Visual Media and Culture; Introduction to Oral and Video History; Research Seminar and Practicum in History and Media.

Instructors in the History and Media concentration will vary but will include a core faculty including:
Gerald Zahavi, Professor; Amy Murrell Taylor, Associate Professor; Ray Sapirstein, Assistant Professor; Sheila Curran Bernard, Assistant Professor.

For more information, contact Gerald Zahavi; 518-442-5427.

Prof. Gerald Zahavi
Department of History
University at Albany
1400 Washington Avenue
Albany, NY 12222
Visit the website at

Digital Humanities in 2008, Part I

When I wrote a series of blog posts last year summarizing developments in digital humanities, a friend joked that I had just signed on to do the same thing every year.  So here’s my synthesis of digital humanities in 2008, delivered a little later than I intended. (Darn life, getting in the way of blogging!) This post, the first in a series, will focus on the emergence of digital humanities (DH), defining DH and its significance, and community-building efforts.   Subsequent posts will look at developments in research, open education, scholarly communication, mass digitization, and tools.   Caveat lector:  this series reflects the perspective of an English Ph.D. with a background in text encoding and interest in digital scholarship working at a U.S. library who wishes she knew and understood all but surely doesn’t.  Please  add comments and questions.

1.    The Emergence of the Digital Humanities

This year several leaders in digital humanities declared its “emergence.”  At one of the first Bamboo workshops, John Unsworth pointed to the high number of participants and developments in digital humanities since work on the ACLS Cyberinfrastructure report (Our Cultural Commonwealth) began 5 years earlier and noted “we have in fact reached emergence… we are now at a moment when real change seems possible.”  Likewise, Stan Katz commented in a blog post called “The Emergence of the Digital Humanities,” “Much remains to be done, and campus-based inattention to the humanities complicates the task. But the digital humanities are here to stay, and they bear close watching.”

Emergence: Termite Cathedral (Wikipedia)

Last year I blogged about the emergence of digital humanities, and I suspect I will the next few years as well, but digital humanities did seem to gain momentum and visibility in 2008.  For me, a key sign of DH’s emergence came when the NEH transformed the Digital Humanities Initiative into the Office of Digital Humanities (ODH), signaling the significance of the “digital” to humanities scholarship.  After the office was established, Inside Higher Ed noted in “Rise of the Digital NEH” that what had been a “grassroots movement” was attracting funding and developing “organizational structure.”  Establishing the ODH gave credibility to an emerging field (discipline? methodology?).  When you’re trying to make the case that your work in digital humanities should count for tenure and promotion, it certainly doesn’t hurt to point out that it’s funded by the NEH.  The ODH acts not only as a funder (of 89 projects to date), but also as a facilitator, convening conversations, listening actively, and encouraging digital humanities folks to “Keep innovating.”  Recognizing that digital humanities work occurs across disciplinary and national boundaries, the ODH collaborates with funding agencies in other countries, such as the UK’s JISC, Canada’s Social Sciences and Humanities Research Council (SSHRC), and Germany’s DFG; US agencies such as the NSF, IMLS, and DOE; and non-profits such as CLIR.  Although the ODH has a small staff (three people) and limited funds, I’ve been impressed by how much this knowledgeable, entrepreneurial team has been able to accomplish, such as launching initiatives focused on data mining and high performance computing, advocating for the digital humanities, providing seed funding for innovative projects, and sponsoring institutes on advanced topics in the digital humanities.

It also seemed like there were more digital humanities jobs in 2008, or at least more job postings that listed digital humanities as a desired specialization.  Of course, the economic downturn may limit not only the number of DH jobs, but also the funding available to pursue complex projects–or, here’s hoping, it may lead to funding for scanner-ready research infrastructure projects.

2.    Defining “digital humanities”

Perhaps another sign of emergence is the effort to figure out just what the beast is.  Several essays and dialogues published in 2008 explore and make the case for the digital humanities; a few use the term “promise,” suggesting that the digital humanities is full of potential but not yet fully realized.

  • “The Promise of Digital History,” a conversation among Dan Cohen, Michael Frisch, Patrick Gallagher, Steven Mintz, Kirsten Sword, Amy Murrell Taylor, Will Thomas III, and Bill Turkel published in the Journal of American History.  This fascinating, wide-ranging discussion explores defining digital history; developing new methodological approaches; teaching both skills and an understanding of the significance of new media for history; coping with impermanence and fluidity; sustaining collaborations; expanding the audience for history; confronting institutional and cultural resistance to digital history; and much more.  Whew!  One of the most fascinating discussion threads: Is digital history a method, field, or medium?  If digital history is a method, then all historians need to acquire basic knowledge of it; if it is a medium, then it offers a new form for historical thinking, one that supports networked collaboration.  Participants argued that digital history is not just about algorithmic analysis, but also about collaboration, networking, and using new media to explore historical ideas.
  • In “Humanities 2.0: Promise, Perils, Predictions”  (subscription required, but see Participatory Learning and the New Humanities: An Interview with Cathy Davidson for related ideas), Cathy Davidson argues that the humanities, which offers strengths in “historical perspective, interpretative skill, critical analysis, and narrative form,” should be integral to the information age.  She calls for humanists to acknowledge and engage with the transformational potential of technology for teaching, research and writing.
    Extra Credit, by ptufts

    Describing how access to research materials online has changed research, she cites a colleague’s joke that work done before the emergence of digital archives should be emblazoned with an “Extra Credit” sticker.  Now we are moving into “Humanities 2.0,” characterized by networked participation, collaboration, and interaction.  For instance, scholars might open up an essay for criticism and commentary using a tool such as CommentPress, or they might collaborate on multinational, multilingual teaching and research projects, such as the Law in Slavery and Freedom Project.   Yet Davidson acknowledges the “perils” posed by information technology, particularly monopolistic, corporate control of information.   Davidson contributes to the conversation about digital humanities by emphasizing the importance of a critical understanding of information technology and advocating for a scholarship of engagement and participation.

  • In “Something Called ‘Digital Humanities’”, Wendell Piez challenges William Deresiewicz’s dismissal of “something called digital humanities” (as well as of “Contemporary lit, global lit, ethnic American lit; creative writing, film, ecocriticism”).  Piez argues that just as Renaissance “scholar-technologists” such as Aldus Manutius helped to create print culture, so digital humanists focus on both understanding and creating digital media.  As we ponder the role of the humanities in society, perhaps digital humanities, which both enables new modes of communicating with the larger community and critically reflects on emerging media, provides one model for engagement.

3.    Community and collaboration

According to Our Cultural Commonwealth, “facilitat[ing] collaboration” is one of the five key goals for the humanities cyberinfrastructure.   Although this goal faces cultural, organizational, financial, and technical obstacles, several recent efforts are trying to articulate and address these challenges.

To facilitate collaboration, Our Cultural Commonwealth calls for developing a network of research centers that provide both technical and subject expertise.  In A Survey of Digital Humanities Centers in the United States, Diane Zorich inventories the governance, organizational structures, funding models, missions, projects, and research at existing DH centers.  She describes such centers as being at a turning point, reaching a point of maturity but facing challenges in sustaining themselves and preserving digital content.  Zorich acknowledges the innovative work many digital humanities centers have been doing, but calls for greater coordination among centers so that they can break out of siloes, tackle common issues such as digital preservation, and build shared services.   Such coordination is already underway through groups such as CenterNet and HASTAC, collaborative research projects funded by the NEH and other agencies, cyberinfrastructure planning projects such as Bamboo, and informal partnerships among centers.

How to achieve greater coordination among “Humanities Research Centers” was also the topic of the Sixth Scholarly Communication Institute (SCI), which used the Zorich report as a starting point for discussion.  The SCI report looks at challenges facing both traditional humanities centers, as they engage with new media and try to become “agents of change,” and digital humanities centers, as they struggle to “move from experimentation to normalization” and attain stability (6).  According to the report, humanities centers should facilitate “more engagement with methods,” discuss what counts as scholarship, and coordinate activities with each other.  Through my Twitter feeds, I understand that the SCI meeting seems to be yielding results: CenterNet and the Consortium of Humanities Centers and Institutes (CHCI) are now discussing possible collaborations, such as postdocs in digital humanities.

Likewise, Bamboo is bringing together humanities researchers, computer scientists, information technologists, and librarians to discuss developing shared technology services in support of arts and humanities researchers.  Since April 2008, Bamboo has convened three workshops to define scholarly practices, examine challenges, and plan for the humanities cyberinfrastructure.  I haven’t been involved with Bamboo (beyond partnering with them to add information to the Digital Research Tools wiki), so I am not the most authoritative commentator, but I think that involving a wide community in defining scholarly needs and developing technology services just makes sense–it prevents replication, leverages common resources, and ultimately, one hopes, makes it easier to perform and sustain research using digital tools and resources.  The challenge, of course, is how to move from talk to action, especially given current economic constraints and the mission creep that is probably inevitable with planning activities that involve over 300 people.  To tackle implementation issues, Bamboo has set up eight working groups that are addressing topics like education, scholarly networking, tools and content, and shared services. I’m eager to see what Bamboo comes up with.

Planning for the cyberinfrastructure and coordinating activities among humanities centers are important activities, but playing with tools and ideas among fellow digital humanists is fun!  (Well, I guess planning and coordination can be fun, too, but a different kind of fun.)  This June, the Center for History and New Media hosted its first THATCamp (The Humanities and Technology Camp), a “user-generated,” organically organized “unconference” (very Web 2.0/open source).

Dork Shorts at THAT Camp

Rather than developing an agenda prior to the conference, the organizers asked each participant to blog about his or her interests, then devoted the first session to setting up sessions based on what participants wanted to discuss.  Instead of passively listening to three speakers read papers, each person who attended a session was asked to participate actively.  Topics included Teaching Digital Humanities, Making Things (Bill Turkel’s Arduino workshop), Visualization, Infrastructure and Sustainability, and the charmingly titled Dork Shorts, where THAT Campers briefly demonstrated their projects.  THAT Camp drew a diversity of folks–faculty, graduate students, librarians, programmers, information technologists, funders, etc.  The conference used technology effectively to stir up and sustain energy and ideas—the blog posts before the conference helped the attendees set some common topics for discussion, and Twitter provided a backchannel during the conference.  Sure, a couple of sessions meandered a bit, but I’ve never been to a conference where people were so excited to be there, so engaged and open.  I bet many collaborations and bright ideas were hatched at THAT Camp.  This year, THAT Camp will be expanded and will take place right after Digital Humanities 2009.

THAT Camp got me hooked on Twitter.  Initially a Twitter skeptic (gawd, do I need another way to procrastinate?), I’ve found that it’s a great way to find out what’s going on in digital humanities and connect with others who have similar interests.  I love Barbara Ganley’s line (via Dan Cohen): “blog to reflect, Tweet to connect.”  If you’re interested in Twittering but aren’t sure how to get started, I’d suggest following digital humanities folks and some of the people they follow.  You can also search Twitter for particular topics.  Amanda French has written a couple of great posts about Twitter as a vehicle for scholarly conversation, and a recent Digital Campus podcast features a discussion among Tweeters Dan Cohen and Tom Scheinfeldt and skeptic Mills Kelly.

HASTAC offers another model for collaboration by establishing a virtual network of people and organizations interested in digital humanities and sponsoring online forums (hosted by graduate and undergraduate students) and other community-building activities.  Currently HASTAC is running a lively, rich forum on the future of the digital humanities featuring Brett Bobley, director of the NEH’s ODH.  Check it out!