Monthly Archives: January 2008

Humanities Researchers on the Radio

Rice University (my employer) just announced that it will be working with the PRI program Fair Game to produce a series of segments profiling cutting-edge humanities research. Through this initiative, Rice’s School of Humanities hopes to engage leading humanities researchers more fully in public conversations.  The first shows will focus on democracy, taking on complex issues such as the role of religion in politics and the tension between justice and liberty. The host of Fair Game, Faith Salie, is a Rhodes Scholar and a comedian, so you get smart and funny together. Indeed, the show is a sort of Daily Show for the radio–it doesn’t take itself too seriously even as it examines serious ideas.

As an avid listener to PRI and NPR shows such as This American Life, RadioLab, and Speaking of Faith, I find radio is the best medium for carrying out (or listening in on) rich conversations about ideas, since I can really focus my attention on what is being said. I’m delighted to see the humanities find a new forum.  I hope that future episodes will look at digital arts and culture, since I believe that the shift to online communication and collaboration is a profound cultural transformation that humanities scholarship can help us to understand.  For instance, the show could examine debates over authority and Wikipedia, the changing nature of reading, the ethics of literary scholars retrieving deleted data from hard drives (see Matt Kirschenbaum’s Mechanisms), geographic visualizations, the impact of new technologies on research practices, etc.

By the way, after writing my post on YouTube Scholarship, I’ve come across more examples of scholarly videos. For instance, the Rutgers English department has produced some fantastic YouTube videos making the case for digital humanities. I also noted that the New Media Consortium’s 2008 Horizon Report describes “grassroots video” as a key emerging technology and emphasizes the ease with which it can be produced and shared, making me think that the barriers to producing video aren’t as significant as I previously imagined.

THAT Podcast

I love listening to Digital Campus, a podcast produced by the Center for History and New Media (CHNM) that explores the impact of digital technologies on educational and cultural institutions. Now the folks at CHNM have launched another great podcast: THAT (The Humanities and Technology) podcast. In each episode, hosts Jeremy Boggs and Dave Lester will interview a guest about some aspect of technology and education, then show how to use a tool, mixing together the conversation and demonstration in a tasty gumbo. The first episode features an interview with WordPress founder Matt Mullenweg and a demonstration of Boggs and Lester’s own ScholarPress Courseware course management plugin for WordPress. Mullenweg described some of the inventive ways that WordPress is used by academics to store research notes, suggested that faculty be able to somehow mark their best podcasts to be evaluated as part of their tenure packages, and offered some tips for getting technology projects off the ground. The show illustrates why it’s important to talk with folks outside of academia; Mullenweg brought a fresh perspective and asked questions that led to an insightful discussion of digital humanities and academic blogging.

Sidenote: Man, CHNM should spin off a company to create new product names: THAT is such a witty acronym (“Who’s on first” jokes, anyone?), and Zotero and Omeka just sound so cool.

YouTube Scholarship

Web video isn’t just about dogs attacking toilets and guys skiing down escalators (to mention two recent YouTube videos championed by TopYouTubeVideos.com)–it also can be a powerful mode for disseminating ideas. Universities such as UC Berkeley and USC now have their own YouTube channels, and digital humanities groups such as MITH and HASTAC are making available talks and conference summaries through YouTube. In “Thanks to YouTube, Professors Are Finding New Audiences,” The Chronicle of Higher Education reports that over 100,000 people view some scholars’ YouTube lectures and that web video is becoming an important medium for “public intellectualism.” Billed as “YouTube for ideas,” web sites such as FORA.tv and the recently launched BigThink make available talks by and interviews with prominent thinkers such as Steven Pinker, Isabel Allende, and Kwame Anthony Appiah.

Although it seems that most academic web videos take the form of lectures, some scholars are using video to share research data or develop new approaches to making scholarly arguments. Perhaps the most prominent example of a scholarly digital video is Michael Wesch’s The Machine is Us/ing Us, which has been viewed over 4.3 million times and demonstrates the impact that a researcher can have by disseminating work through YouTube. Wesch, a cultural anthropologist who won a 2007 Rave Award from Wired and the John Culkin Award for Outstanding Media Praxis from the Media Ecology Association, found that the standard academic essay was the wrong medium for exploring Web 2.0–video better captured its dynamic nature. In Wesch’s video, the web is in a sense itself the central character, and the fast-paced editing and techno music reflect the central argument.

Scientists are also beginning to recognize the power of video for documenting experiments, communicating with fellow scientists, and reaching out to the general public, as Maxine Clarke observes in Video as a tool for science communication. For example, SciVee, a sort of YouTube for science, launched last fall; it is operated by the Public Library of Science (PLoS), the National Science Foundation (NSF), and the San Diego Supercomputer Center (SDSC). At SciVee, scientists can increase the visibility of their research by uploading “pubcasts,” “the combination of a scientific publication and video or audio presentation.” You can, for instance, watch a computational biologist discuss his work on the structure of proteins, read the associated paper (which is synchronized to the talk and made available as open access), find related resources, and make comments. As a non-scientist, I have a much easier time understanding someone talking about research than comprehending a scientific paper. SciVee also makes available lectures, such as Fran Berman of the San Diego Supercomputer Center talking about cyberinfrastructure for the humanities. To encourage contributions, SciVee provides tutorials that guide contributors through the process of making pubcasts (the very name makes me crave a Guinness) and explains that pubcasts increase the visibility of research. Scientific journals are also incorporating video, such as the Journal of Visualized Experiments (JoVE), which bills itself as “an online research journal employing visualization to increase reproducibility and transparency in biological sciences.”

Now comes a new model for using video in scholarship: the YouTube conference abstract. HASTAC II, which focuses on “techno-travels,” includes an intriguing requirement in its CFP: “In addition to filling out an application, participants will be required to make a two minute video of their proposal, upload it to YouTube and tag the YouTube video with ‘HASTAC2008.’” Wow! I’ve never before seen such a requirement, but it fits squarely with HASTAC’s aim to promote innovative approaches to communication and collaboration across (and beyond) academia. Since many of the posters and demos will likely focus on GIS maps, mashups, geographic visualizations, and the like, video would represent dynamic, visual media much more effectively than a textual description would. By making the videos available through YouTube, HASTAC can reach a broader audience. Short videos describing multimedia projects might even serve digital preservation, since one could study how the project actually worked. Before attending a conference, I’d love to check out brief video previews of the talks so that I could know what to expect (and get a sense of how good the presenter will be). Video abstracts would also serve those who can’t attend conferences. Since I have two kids under four and therefore don’t travel to many conferences, I would really appreciate being able to see what I missed. For the presenters, HASTAC’s requirement poses a thrilling challenge: how to express complex ideas in a two-minute video. What fun!

But the realist in me (who is much less fun than the dreamer in me) wonders if this requirement may deter some people from submitting conference proposals, since they may lack the skills, time, and resources necessary to put together a short video. Perhaps I’m being too skeptical; the HASTAC folks certainly know the community of likely presenters better than I do, and I would bet that many digital humanities folks would be adept at producing videos. Even if the presenters aren’t experienced video producers, they could find collaborators to help with the technical details, or they could take up the challenge of learning to make a simple video themselves. Although it does take some time to make a video (I’d estimate about 8 hours for a novice to produce a 2-minute piece), the process is lots of fun and isn’t that difficult. After all, hundreds of thousands (millions?) of people have figured out how to upload videos to YouTube, which had over 500,000 user accounts in 2006. One could use an inexpensive camcorder or even a webcam to capture footage and put the video together using free editing tools such as iMovie and MovieMaker–or one could bypass a camcorder altogether and use screen capture or slideshow software. In any case, I think the benefits to be gained from experimenting with this approach to scholarly communications outweigh the risk that some potential presenters might not be up for the challenge.

Indeed, I hope that HASTAC’s requirement will stimulate more humanities scholars to experiment with video. Even as culture shifts toward visual media, I’ve noticed a general lack of interest in video as a means of scholarly communication. Almost every scholar knows how to bang out an essay in a word processor, but far fewer know how to craft a movie. As the manager of a university digital media lab, I’ve encountered few faculty members who want to produce their own videos, although some scientists have borrowed camcorders to capture experimental data. At most of the traditional literature conferences that I’ve attended, you’re considered cutting-edge (or freakish) if you use PowerPoint, never mind video. (I should say that the very best paper I saw at the 2007 American Literature Association conference was a beautifully delivered, brilliant talk on Whitman that Ed Folsom gave without any technological aids except paper–but I sure wish I had a video recording of that talk). I suspect that the obstacles to video scholarship (there’s got to be a better name) are the familiar ones: lack of training, resources, time, incentives, etc.

We need more models for multimedia scholarship, as well as more motivation for scholars to produce it. That’s why HASTAC’s experiment is so exciting, since it gives researchers a reason to play with video. I’ve made a few videos–a twenty-minute documentary about a feminist activist in Houston and a couple of short digital stories, as well as the obligatory kid vids–and I can testify to how exhilarating the process is. I get caught up in thinking about how to use images and audio to express ideas, how to assemble clips into a coherent narrative, what to cut to get to the essence. With video, I can follow that old writing adage of showing rather than telling, whether it’s the passion of an activist or the way a web site works. Then there’s the satisfaction of interacting with the audience. After I released my digital story about Oveta Culp Hobby on YouTube, I got great feedback and heard new stories from her son and grandson, as well as from a history grad student working on a dissertation about her.

What will it take to encourage more scholars to make videos? Two communities provide possible models:

  • Digital storytelling: Since the 1990s, the Center for Digital Storytelling has trained thousands of people to produce their own digital stories, two- to three-minute personal narratives that use multimedia (photos, video, music, sound effects, and the human voice) to tell the story. Teachers are finding that digital storytelling is a powerful way to motivate students and show them both how to craft a narrative and how to use technology, while the BBC is using digital storytelling to engage its audience in sharing their own stories. I attended a digital storytelling workshop last year and was amazed that my colleagues learned how to create such compelling stories in only three days. For instance, David Noah’s Photo Opportunities and Kerry Ballast’s Rituals illustrate the power of digital stories to move viewers and explore ideas such as family, duty, ritual, memory, and the nature of photographic representation.
  • Digital journalism: In 1997, veteran photojournalist Dirck Halstead published The Platypus Papers, which recommended that photojournalists prepare themselves for the increasing prominence of the Internet and learn new skills in multimedia production. Halstead founded The Digital Journalist, which bills itself as “a multimedia magazine for Photojournalism in the Digital Age”; he also runs Platypus Workshops to train photojournalists in multimedia technologies.

Both of these communities provide training programs to help people quickly develop the skills to make their own digital narratives. In both communities people have powerful motives to learn these skills, whether to express themselves, develop communication strategies for community organizations, advance students’ knowledge of narrative and new media, or make a living as a photojournalist. Perhaps one way to encourage scholars to produce videos, then, is to offer training programs that would allow them to learn key skills and would build a community of practice. Universities are beginning to offer courses that explore the rhetoric of video, such as “Writing with video.” I’d love to see the NEH’s Institutes for Advanced Topics in the Digital Humanities program fund a workshop focused on video. Humanities scholars also need more forums (channels?) for publishing research-related video. In disciplines such as art history, film and media studies, and cultural anthropology, video would allow scholars to incorporate visual evidence into their work and explore ideas dynamically. Of course, some humanities journals are already pioneering the publication of multimedia scholarship; for instance, the Journal of Multimedia History provided an early model for using audio and images in historical scholarship, and Vectors is publishing thrilling, fascinating multimedia essays.

While some types of academic arguments certainly work best as books, articles or blog posts, I think that video will eventually become an important scholarly medium. I’m eager to see what the HASTAC folks come up with–I’ll certainly stay tuned.

The Cape Town Open Education Declaration

“[E]veryone should have the freedom to use, customize, improve and redistribute educational resources without constraint.” So states the Cape Town Open Education Declaration, which will be released on January 22. Modeled after the Budapest Open Access Initiative, which helped to spark the open access movement, this Declaration aims to stimulate the international development of open access educational resources and technologies by encouraging educators and learners to create, use and adapt open educational resources; calling on educators, publishers and institutions to release resources openly; and promoting policies that support open education. Open education can expand access to knowledge and support more flexible, learner-centered approaches to education. Those with an interest in education (and isn’t that all of us?) are invited to sign the declaration.

For a wonderful example of an open educational repository, check out Connexions. One of Connexions’ most popular and compelling courses is Understanding Basic Music Theory by Kitty Schmidt-Jones, whose modules have been viewed over 7 million times. (Full disclosure/bragging: Connexions got its start at Rice, and Kitty is my cousin’s cousin–or cousin-in-law?)

UPDATE: Wikipedia founder Jimmy Wales and Connexions founder Rich Baraniuk published an editorial about the  Cape Town Open Education Declaration in the San Francisco Chronicle on January 22.

Cell phone novels

A new literary genre is becoming popular in Japan: the cellphone novel. According to an article in today’s NY Times, five of the ten bestselling novels in Japan this year were originally cellphone novels–as the name suggests, serial fiction delivered to cellphones. Typically written in the first person, these novels have simple plots and rely on dialogue and short sentences. The novels are particularly popular among young people, including many who had not read fiction before. The growth of the genre has been fueled by the affordability of text messaging in Japan, since most companies now offer unlimited texting as part of regular plans. Although writers don’t get paid for the cellphone version of the novel, some have prospered when their work is repackaged as a traditional book. There’s lots to think about here: changing reading habits, business models for the reuse of content, the relationship between technology and literary genre, etc.

Lions and tigers and screens, oh my!

When is something that you watch on a screen more “real” and more compelling than what you experience right in front of you? When you’re watching live footage of birds wheeling around a cliff rather than standing in front of a zoo cage glancing at listless black bears. In “Zoo Keepers’ Dilemma,” a short piece produced by WNYC’s Radio Lab, former zoo director David Hancocks says that his ideal zoo is a “cyberzoo,” where people would watch live, high-definition images from the wild rather than view animals in unnatural environments. According to Hancocks, most zoos focus more on entertaining people than on serving the needs of animals or advancing understanding of nature. The idea of the cyberzoo originated at a cocktail party thrown by wildlife filmmaker Chris Parsons, who set up three jumbo screens arranged like a bay window and piped in live footage of sea birds in the Orkney Islands. Enraptured, party-goers stared silently at the “window on a natural scene” for half an hour. Since the footage was live, Parsons theorized, people were curious about what would happen next. Parsons later established Wildwalk, a wildlife exhibition in Bristol that incorporated footage from nature films as well as walk-through rainforest-like environments. Unfortunately, Wildwalk closed, but the idea is an intriguing one–create immersive, interactive environments where people can come together to see how animals behave in the wild.

By the way, check out Radio Lab, a show that explores “one big idea,” typically related to science. The show brilliantly weaves together interviews with scientists, witty conversations between hosts Jad Abumrad and Robert Krulwich, and exquisitely produced sound effects. My favorite episodes include Emergence, Musical Language and Detective Stories.

Digital Humanities in 2007 [Part 3 of 3]

In previous posts summing up digital humanities developments in 2007, I discussed efforts to develop the humanities cyberinfrastructure through new funding programs and organizations and reflected on questions of authority and reliability. In this final post, I’ll look at emerging forms of digital scholarship in the humanities as well as social networking. I’m sure I’ve missed a lot, so please add your own picks in the comments section.

  • e-Science as a model for the humanities? Funding agencies, scholarly societies, research libraries and the like are promoting e-Science, which the UK Research Council defines as “large scale science that will increasingly be carried out through distributed global collaborations enabled by the Internet. Typically, a feature of such collaborative scientific enterprises is that they will require access to very large data collections, very large scale computing resources and high performance visualisation back to the individual user scientists.” The NSF is investing millions in constructing the cyberinfrastructure for science. While the NEH is making admirable and energetic efforts to support digital humanities, its budget is much smaller than the NSF’s. In order to build more support for digital humanities, I think we need to continue to make the case to potential funders, university administrators, and colleagues, explaining what kind of research problems could be tackled if we had better tools and resources. Certainly papers such as the ACLS Report on Cyberinfrastructure for the Humanities & Social Sciences lay out a vision for digital humanities and describe what is needed, as does Cathy Davidson’s “Data Mining, Collaboration, and Institutional Infrastructure for Transforming Research and Teaching in the Human Sciences and Beyond.” Davidson recommends that humanities scholars move toward what she calls Humanities 2.0, which, like Web 2.0, is collaborative and user-driven. Already scholarship is being transformed by access to massive amounts of data, but Davidson proposes that humanities scholars follow the lead of scientists and embark on larger, more collaborative projects. She calls for collaboration across both disciplines and nations, insisting that humanities scholars provide vital perspective for scientific projects and that we must step outside our own cultural frameworks. Of course, as Davidson recognizes, humanities scholars are typically not rewarded for work on collaborative projects or for research that is not published as a book or article, so academic culture needs to change.
  • Emerging forms of digital scholarship: In 2007, a number of digital humanities projects demonstrated how advanced computing can enable humanities researchers to tackle complex problems. For example:
    • 3D computer modeling: The release of IATH’s Rome Reborn, a precise digital model of Rome in late antiquity, illustrated the potential of computer modeling as a form of historical and archaeological scholarship. As users “walk” or “fly” through ancient Rome, they can come to a better understanding of how the city worked, such as how the massive size of monuments might have affected citizens’ perceptions of the grandeur of Rome. The demos of Rome Reborn are really cool, but I’m particularly interested in the project’s plans to not only make available a detailed model of the city, but also to provide tools for analysis and scholarly communications. When I explore virtual spaces, I wonder how accurate they are and what evidence was used to justify representing a column or a mosaic in a particular way. Of course, you can assume that a research center like IATH will strive towards accuracy, but some decisions must still be based on conjecture, so being able to see the documentation supporting the design decisions would serve scholarship. With Rome Reborn, scholars will be able to add new layers of information using a “moderated wiki.” Rome Reborn may be made available through Second Life, which would make it more widely accessible but might raise other issues…
      After being hyped to death in 2006, Second Life itself is coming under scrutiny. In “Second Thoughts about Second Life,” Michael J. Bugeja outlined the liability risks universities face through their involvement in Second Life, particularly given the harassment (and worse) that regularly occurs in this world. Others have been skeptical about the educational potential of an environment that seems to focus on, er, adult entertainment. Yet I still see potential in SL for teaching and research. I helped to moderate the Second Life version of Rice’s De Lange Conference on Emerging Libraries and was impressed by the lively discussions that occurred during the sessions; being “virtually” present seemed to encourage dialogue. As the Chronicle of Higher Ed reports in Professor Avatar (subscription req), SL has been used successfully in classes on anthropology, as students study behavior in virtual worlds; communications, as students create and comment on virtual spaces; and literature, as students explore literary worlds such as Dante’s Inferno. Although Second Life faces many problems, both technical and cultural, I do think that 3D virtual worlds will play an increasingly important role in education, since they can allow people to explore phenomena that would otherwise be impossible to visualize (past societies, molecules, etc.) and provide immersive, interactive environments.
    • Text mining & visualization: MONK, “a digital environment designed to help humanities scholars discover and analyze patterns in the texts they study,” won a $1 million grant from the Mellon foundation. MONK is already producing great work, such as “‘Something that is interesting is interesting them’: Using Text Mining and Visualizations to Aid Interpreting Repetition in Gertrude Stein’s The Making of Americansby Tanya Clement, et al. Clement not only shows how text mining tools can help scholars explore Stein’s use of repetition, but explains the process of designing and developing tools that meet the needs of literary scholars and enable discovery.
    • The database as a scholarly genre: The October PMLA featured a fascinating discussion of the database as genre with a lead-off essay by the Walt Whitman Archive co-editor Ed Folsom and responses by Jonathan Freedman, N. Katherine Hayles, Jerome McGann, Meredith L. McGill, and Peter Stallybrass. Citing Lev Manovich, Folsom argues that the database is “the genre of the twenty-first century,” a genre opposed to narrative because it accrues details rather than imposing a structure. Folsom says that the Walt Whitman Archive is actually (and virtually) a database that brings together distributed materials and enables reordering and random access. As I’ve found in a research project I did with my colleague Jane Segal, the Whitman Archive is itself a work of scholarship that has made invaluable contributions to Whitman studies, opening up new areas of inquiry by providing access to once-inaccessible work. According to Folsom,

      As databases contain ever greater detail, we may begin to wonder if narrative itself is under threat. We’ve always known that any history or theory could be undone if we could access the materials it ignored, but when archives were physical and scattered across the globe and thus often inaccessible, it was easier to accept a history until someone else did the arduous work of researching the archives and altering the history with data that had before been excluded. (1576)

      Humans define the data model and collect the data (or set up instruments to do so). What do databases leave out, and how might those omissions affect scholarship? How would one make an argument based only on data, without a narrative structure? In his reply to his respondents, Folsom revises his original metaphor of narrative and databases being at battle and instead adopts N. Katherine Hayles’ metaphor of them existing in a symbiotic relationship, with databases supplying the details that narratives arrange into a coherent set of claims. Still, I find the notion that databases, with their supposed comprehensiveness and malleability, allow users to challenge master narratives intriguing. But doesn’t the answer that you get when you query a database depend on how you set up the query and how you interpret the data that you get?

  • Social networking in higher ed. If “Humanities 2.0” entails collaboration, data, and tools, what technologies are required to support collaborative work? The most visible example of social networking is probably Facebook, which took off in 2007 among the post-college crowd, sparking speculation about the potential uses of social networking in academia. By opening up its API to external developers, Facebook expanded its features and its audience, but it angered many users with its Beacon program, which violated privacy by publishing users’ purchase information to their Facebook news feeds. Most Facebook apps seem to be about entertainment (ranking your friends or turning them into zombies), but I think some apps, such as BooksIRead, could be used in an academic context. Through BooksIRead, you can keep track of your own reading, see what friends are reading, and find recommendations and reviews for other books you might like, participating in an intellectual community.
OCLC’s Sharing, Privacy & Trust in Our Networked World shows that the user population for social networking sites is large and growing. I wonder if social networking technologies can be harnessed to bring scholars together to communicate and tackle common problems. Nature may provide a model with its social networking site for scientists, Nature Network, which includes personal profiles, blogs, job postings, discussion boards, tagging, and groups. But it doesn’t seem like there’s been widespread adoption of this site yet, and to succeed sites focused on user-generated content need, um, users. According to the OCLC report, the top reasons that people join social networking sites is because their friends are there and to have fun. What would entice them to join professional social networking sites? I would assume that connecting with colleagues would also be a top reason, along with being able to raise your own research profile and get tangible rewards, such as new knowledge, or at least pointers to articles you really should read. In Social Factors in the Adoption of New Academic Communication Technologies, Paul Dimaggio makes the astute argument that network effects drive the adoption of new technologies: “Only when some critical mass of colleagues adopt the new technological approach do the rest fall into line.” Perhaps the focus should not be so much on networking as on working collaboratively and sharing information; NINES and HASTAC provide models for such collaborative sites in digital humanities.
  • “Green” digital humanities? In 2007, the threat of global warming seems to have finally entered the public consciousness, with the release of the report by the UN climate panel (IPCC) and the Nobel Prize going to Al Gore and the IPCC. I fear a dire future for my kids and am trying to reduce my carbon footprint, whether by putting all of the computers in our lab on power saving settings or by paying for carbon offsets/indulgences and zealously turning off lights. I wonder what role, if any, digital humanities might have in tackling global warming? At first glance, it seems that the goals of digital humanities have little to do with reducing greenhouse gases–if anything, powering all of our servers contributes to the problem. But perhaps digital humanities can make contributions to the cyberinfrastructure that will enable collaboration and innovation in confronting global warming as well as other challenges. And perhaps the tools and resources developed by the digital humanities community can support research by historians, theologians, philosophers, literary scholars, and others into the humanistic dimensions of the environment.
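
As promised above, here is a toy sketch of the kind of repetition counting that underlies text-mining work like Clement’s. It is only an illustration built on my own assumptions: the sample passage simply reuses the essay’s title phrase as placeholder input, and MONK’s actual tools are far more sophisticated than a simple n-gram counter.

    from collections import Counter
    import re

    def repeated_ngrams(text, n=4, min_count=2):
        """Return word n-grams that occur at least min_count times in a passage."""
        words = re.findall(r"[a-z']+", text.lower())
        ngrams = zip(*(words[i:] for i in range(n)))
        counts = Counter(" ".join(gram) for gram in ngrams)
        return [(gram, c) for gram, c in counts.most_common() if c >= min_count]

    if __name__ == "__main__":
        # Placeholder passage reusing the essay's title phrase; not an analysis of the actual novel.
        sample = ("Something that is interesting is interesting them. "
                  "Something that is interesting is interesting to some. "
                  "Something that is interesting is interesting them once more.")
        for gram, count in repeated_ngrams(sample):
            print(f"{count}x  {gram}")

Nothing here is discovery in itself, of course; the interpretive work begins once a scholar starts asking why certain phrases recur and how the repetitions vary.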

I wanted to see if my sense of important digital humanities ideas in 2007 jibed with other people’s perceptions, so, geek that I am, I counted the number of delicious bookmarks for each web page as well as the blogs linking to it using the Bloglines Citations Bookmarklet. I should note that these statistics do not necessarily measure significance, just frequency of citation. This approach is also URL-dependent: if people bookmark or cite a different page on a web site, it wouldn’t be included in the count. The numbers come from late December 2007 and early January 2008, so they have undoubtedly changed. (A small sketch of how such a tally might be automated follows the table.)

Site | delicious bookmarks | Bloglines links
Digital Humanities Quarterly | 22 | 6
“Our Cultural Commonwealth” | 20 | 6
Digital Humanities Centers Summit | 7 | 3
NEH/IMLS Advancing Knowledge Grants | 9 | 7
ACLS Digital Innovation fellowships | 21 | 1
MacArthur/HASTAC Digital Media and Learning Competition | 119 | 38
Digital Americanists | 7 | 0
TEI@20: 20 Years of Supporting the Digital Humanities | 4 | 16
Keen vs. Weinberger | 307 | 205
Andrew Keen v. Emily Bell | 72 | 49
WikiScanner | 2,449 | 1,460
Amazon Kindle | 51 | 275
Grafton, Future Reading | 60 | 254
Caleb Crain, “Twilight of the Books” | 86 | 439
Newsweek: The Future of Reading | 354 | 822
NEA: To Read or Not to Read | 56 | 106
Kirschenbaum, How Reading Is Being Reimagined | 22 | 3
Jensen, The New Metrics of Scholarly Authority | 111 | 50
University Publishing In A Digital Age | 80 | 97
Symposium: The Future of Scholarly Communication | 8 | 1
Google Books: Is It Good for History? | 16 | 2
Inheritance and Loss: A Brief Survey of Google Books | 27 | 75
The Google Exchange: Leary & Duguid | 28 | 20
Google Books: Champagne or Sour Grapes? | 7 | 7
Davidson, Data Mining, Collaboration, and Institutional Infrastructure for Transforming Research and Teaching in the Human Sciences and Beyond | 3 | 3
Rome Reborn | 1,246 | 302
Second Thoughts about Second Life | 50 | 28
Professor Avatar | 10 | 6
MONK | 5 | 20
Folsom, Database as Genre | 0 | 0
OCLC, Sharing, Privacy & Trust in Our Networked World | 267 | 215
Nature Network | 139 | 2,740
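
As noted above, here is a minimal sketch of how such a tally might be assembled once the raw counts have been collected by hand. The CSV file name and its column layout (site, delicious, bloglines) are my own hypothetical format, not anything the bookmarklet produces, and summing the two counts into a single “buzz” number is just a convenience for sorting, not a claim about significance.

    import csv

    def load_counts(path):
        """Read hand-collected rows of (site, delicious bookmarks, Bloglines links) from a CSV."""
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                yield row["site"], int(row["delicious"]), int(row["bloglines"])

    def rank_by_buzz(path, top=10):
        """Sort sites by combined citation count, a crude proxy for attention."""
        rows = [(site, d, b, d + b) for site, d, b in load_counts(path)]
        return sorted(rows, key=lambda r: r[3], reverse=True)[:top]

    if __name__ == "__main__":
        # dh2007_counts.csv is a hypothetical file with header: site,delicious,bloglines
        for site, d, b, total in rank_by_buzz("dh2007_counts.csv"):
            print(f"{total:>5}  {site}  (delicious {d}, Bloglines {b})")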

These statistics suggest that the community of digital humanities folks who are blogging and bookmarking is relatively small, since papers with a specific relevance to digital humanists typically didn’t get cited that much. Indeed, some of the works that I found most stimulating received few citations, which reflects the specialized nature of the field rather than the value of the work being cited. However, some topics of interest to digital humanities seemed to capture broad attention: virtual reality, the future of reading, the reliability of Wikipedia and other Web 2.0 sites, and social networking. Essays that were only available by subscription (such as articles in PMLA) had few citations, perhaps showing that open access publications have a greater impact.

What did I miss, or misunderstand?

Digital Humanities in 2007 [Part 2 of 3]

In my previous post, I highlighted some of the major developments in digital humanities in 2007, focusing on the creation of organizations such as centerNet and the Digital Americanists, journals such as Digital Humanities Quarterly, and funding programs such as the NEH’s Digital Humanities Initiative. Now I’ll broaden the scope to look at conducting and disseminating research: mass digitization, the future of reading, scholarly communications, and notions of authority and expertise in Web 2.0.

  • Debates over mass digitization. What effect will mass digitization projects such as Google Books have on scholarship, intellectual property, reading practices, etc? Will the quality be sufficient for research?
    • Apparently Google has already digitized over a million books, but several observers have criticized the quality of the work thus far. For instance, Robert Townsend’s Google Books: Is It Good for History? describes three problems: “(1) the quality of the scans is decidedly mixed; (2) the information about the books (the “metadata” in infospeak) is often inaccurate; and (3) the public domain is narrowly and erroneously construed, sadly restricting access to materials that should be freely available.” Paul Duguid echoes these concerns in Inheritance and Loss: A Brief Survey of Google Books, using errors from two Google Books versions of Tristram Shandy to illustrate problems in scanning quality and metadata. But, as Dan Cohen argues in Google Books: Champagne or Sour Grapes?, Google is making a defensible trade-off between rapid, mass digitization and quality control; quality issues can be addressed by measures such as allowing readers to mark errors and selective rescanning. Likewise, in his exchange with Duguid, Patrick Leary argues that massive digitization projects entail trade-offs and that the scale, accessibility and searchability of Google Books outweigh problems in quality. In “Google’s Moonshot,” Jeffrey Toobin raises another issue, suggesting that future massive digitization efforts may be harmed if Google were to settle with the publishers suing it for copyright violations. Although I, too, have found errors in Google Books and share the concern that Google may set back the cause of fair use, I’ve been stunned by the range of materials that I’ve found. I agree with Cohen that the key challenge will be in developing tools to search, analyze and manipulate data in Google Books.
    • Although it’s important to be aware of Google Books’ limitations, that shouldn’t keep us from speculating about the impact that it and other mass digitization projects will have on scholarship. How will researchers deal with the problem of information abundance? How will research change with the ability to search across such a range of resources? In “Future Reading,” Anthony Grafton gives historical perspective to the emerging “information ecologies” and contends that digitization will bring us not a “universal library,” but “a patchwork of interfaces and databases, some open to anyone with a computer and WiFi, others closed to those without access or money.” True, there is no single database of or interface to the world’s knowledge, but to what extent can we develop tools and methods to search across databases and find what we need?
  • Speculations about the future of reading. As the culture shifts from print to the screen, what will happen to reading?
    • The NEA’s report To Read or Not To Read: A Question of National Consequence contends that leisure reading has declined (even as time spent viewing TV has increased) and, not surprisingly, links that decline to lower scores on reading tests. According to the NEA, “literary readers lead more robust lifestyles than non-readers” and are more likely to volunteer, vote, attend cultural events, and play sports. As a former literacy volunteer and a devoted bookworm, I by no means wish to diminish the importance of reading. However, as Matt Kirschenbaum argues in his smart essay “How Reading Is Being Reimagined,” the NEA report oversimplifies what it means to read by focusing only on leisure reading, overlooking modes of reading such as skimming and “lateral reading” across texts, and minimizing the importance of online reading, where reading and writing come together as readers comment on blog posts, share annotations and links, etc. Even as the report cites Neil Postman and Sven Birkerts on the value of reading, it ignores proponents of screen literacy such as Henry Jenkins and Elizabeth Daley. I also wonder what 21st-century skills literacy tests are not measuring. Whereas the NEA found that reading has declined, the OCLC’s report on Sharing, Privacy & Trust in Our Networked World reached the opposite conclusion, perhaps because OCLC’s survey took into account online as well as print-based reading. According to OCLC, “Respondents read and indicate that the amount they are reading has increased. Digital activities are not replacements for reading but perhaps increase the options for expanding communication and sharing content.” For a fascinating analysis of the NEA report and Maryanne Wolf’s Proust and the Squid (an account of the history and biology of reading), see Caleb Crain’s “Twilight of the Books.” Crain suggests that reading print books may become an arcane hobby and speculates that mental processes will change as we shift from print to the screen. Some of Crain’s conclusions are debatable, such as his claim that reading promotes comparison and critical thinking more than viewing does, but it’s a stimulating essay nonetheless.
    • The release of the Amazon Kindle wireless ebook reader sparked much speculation about the future of reading, including a cover story in Newsweek. Citing Kevin Kelly, Peter Brantley, and Bob Stein, the article anticipates authorship becoming a collaborative process with readers and books becoming open portals to associated information rather than closed containers. I’ll defer commenting on the Kindle until I actually get one in my hands (and the iPhone ranks higher on my gadget wish list).
  • Transformations in scholarly communications: Increasingly researchers expect to find what they need online, and new researching and publishing environments are emerging. But university presses with tight budgets are finding it difficult to keep up.
    • In University Publishing In A Digital Age, Ithaka acknowledged the crisis in university publishing and called for presses to reinvent themselves by moving content online, collaborating with libraries, aligning themselves with the university’s strengths, and adopting a common technology platform to support digital content. The report notes the strengths of academic presses–they have expertise in reviewing, editing, and marketing scholarly works and disseminate research in more narrowly focused areas than would interest commercial publishers. Yet university presses lack resources, are generally tradition-bound, and often deviate from their core academic focus to bring in revenue. For me, the most interesting part of the report is its prediction that scholars will want to work in integrated, online digital research and publishing environments, which “will provide them with the tools and resources for conducting research, collaborating with peers, sharing working papers, publishing conference proceedings, manipulating data sets, etc.” Already journals such as Digital Humanities Quarterly are trying to realize this vision by allowing for comments, integrating search and analysis tools, and making available draft versions of essays. But if this vision is to be fully realized, much remains to be done. Authoring multimedia content can be time- and resource-intensive–to what extent will journals or other groups provide support to authors? Will this integrated publishing and research environment be organized at the level of the discipline, the university, or the publisher? (Paul DiMaggio has a compelling blog post about why the university is likely not the best organization for planning new approaches to scholarly communications.) How will all of this dynamic digital content be preserved? How will versioning be handled?
    • The Ithaka report sparked a fascinating online symposium on the future of scholarly communications featuring thinkers such as Stan Katz, Paul DiMaggio, Ed Felten, Laura Brown, and Peter Suber. Whereas Ed Felten argues that in computer science a “new system” of scholarly communications is emerging whereby an author posts a paper, the community comments, and then it is submitted to a journal for formal peer review, Stan Katz notes that the humanities is still bound by the “tradition of individualism, privatism and secrecy.” Although I think that our ideas only get better and have a greater impact if we make them available for public discussion, I confess that I, too, fear that if I self-publish something online it will be less likely to be published by a mainstream publisher–yet this concern is allayed somewhat when I search SHERPA RoMEO and find that journals such as American Literature, Critical Inquiry, and New Literary History allow self-archiving. What will it take to change the culture of the humanities–more success stories from scholars who do take on more of a public role? assurances from tenure committees? clear statements from leading publishers that you can provide pre-prints online? I’m hoping that the MLA’s report on tenure and promotion, issued at the end of 2006, will make a difference in validating digital scholarship.
    • The open access movement scored a victory at the end of the year when President Bush signed into law the omnibus spending bill requiring the NIH to mandate that all journal articles resulting from research it funds be made available as open access through PubMed Central.
  • Debates over expertise and authority in a Web 2.0 world. With the controversies over the reliability of Wikipedia and other Web 2.0 sources, cultural commentators are discussing what constitutes expertise and authority.
    • Andrew Keen stirred up debate with The Cult of the Amateur: How Today’s Internet is Killing Our Culture, which decries the “amateurism” of Web 2.0, as “professional” mediators of taste are displaced by “monkeys” pounding out narcissistic blog posts and uploading stupid videos to YouTube. In response, David Weinberger contends that the Web facilitates the “collaborative process” of knowledge building, points out that most cultural gatekeepers are driven by profit rather than quality, and argues that amateurs challenge orthodoxy and gather data difficult for experts to compile. Likewise, Emily Bell argues that the Internet enables high-quality, original work to be discovered and that people can discern authority by exercising critical judgment in reading online content, reviewing comments on blog posts, and using tools such as Technorati’s authority index. Keen’s attack on amateurism overlooks the history of valuable contributions that amateurs, those who work for the love of it, have made to science. Although I think Keen is totally off-base, the debate does raise important questions: How do we evaluate quality information? How can scholars harness Web 2.0 technologies to conduct and disseminate research? Given that the theme of this year’s MLA convention was “The Humanities at Work in the World,” what might be gained (and lost?) by bringing academics and amateurs together through the web?
    • To what extent is the collaborative approach to producing and disseminating knowledge embodied by Wikipedia reliable? As Virgil Griffith’s WikiScanner showed, corporations and interest groups have made egregious edits to Wikipedia entries to advance their own agendas. Developers are creating other tools, such as WikiDashboard and WikiTrust, to reveal how trustworthy a Wikipedia entry is, computing, for instance, the “reputation” of a contributor. As Jonathan Dee shows in “All the News That’s Fit to Print Out,” Wikipedia admins quickly delete “biased” statements as they pursue a “Neutral Point of View”–but it’s not clear that objectivity equates to truth or insight. Several projects–Scholarpedia and, more recently, Google’s Knol–try to improve on the Wikipedia model by having experts write entries, associating entries with authors’ names rather than allowing anonymous authorship and edits, and using peer review. Although Wikipedia may challenge academic practices by allowing anyone to make a contribution anonymously, it also furnishes a model for collaborative production of knowledge and openness and raises fascinating questions about transparency and authority. As we face information overload, I wonder if the academic community can adapt the tools and methods for evaluating reliability developed for Wikipedia and related projects.
    • In his excellent “The New Metrics of Scholarly Authority,” Michael Jensen contends that information abundance and web technologies are giving rise to new ways of measuring authority. Among the metrics for what he calls “Authority 3.0” are the prestige of the publisher, peer reviewers, and commentators; number of links to and citations of the article; number and quality of comments on the article; the author’s reputation; tags; and more. According to Jensen, succeeding in the Authority 3.0 world requires the “digital availability” of the article for indexing, tagging, commenting, etc. As Jensen argues, to ensure that work is visible, a scholar should “[e]ncourage your friends and colleagues to link to your online document. Encourage online back-and-forth with interested readers. Encourage free access to much or all of your scholarly work. Record and digitally archive all your scholarly activities. Recognize others’ works via links, quotes, and other online tips of the hat. Take advantage of institutional repositories, as well as open-access publishers.” Much of this advice echoes that given to a beginning blogger–to make an impact, make your stuff freely available and participate in the community.
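
As a back-of-the-envelope illustration only, the sketch below combines a few such signals into a single number. The choice of signals, the weights, and the log scaling are my own guesses at how an “Authority 3.0”-style score might be computed; Jensen does not propose a formula.

    import math

    # Illustrative weights for a handful of authority signals (my own guesses, not Jensen's).
    WEIGHTS = {
        "inbound_links": 1.0,   # links to and citations of the article
        "comments": 0.5,        # number of comments on the article
        "tags": 0.25,           # times the piece has been tagged or bookmarked
        "open_access": 2.0,     # bonus if the full text is freely available for indexing
    }

    def authority_score(signals):
        """Combine count-based signals on a log scale so one huge number doesn't swamp the rest."""
        score = 0.0
        for name, weight in WEIGHTS.items():
            value = signals.get(name, 0)
            if name == "open_access":
                score += weight * value            # treated as a 0/1 flag
            else:
                score += weight * math.log1p(value)
        return score

    if __name__ == "__main__":
        article = {"inbound_links": 111, "comments": 14, "tags": 50, "open_access": 1}
        print(f"illustrative authority score: {authority_score(article):.2f}")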

Coming up in the third and final post: virtual reality, the database as genre, social networking, green digital humanities (?), and statistics on the number of delicious bookmarks and blog citations of the articles and web sites mentioned in this series of posts.

Digital Humanities in 2007 [Part 1 of 3]

I love reading year-end summaries and lists. Even if the judgments can seem arbitrary, such lists let me know about things I missed and remind me of what matters. Here I offer my own impressions of significant goings-on in and around digital humanities in 2007. Since a lot happened this year, I’ll divide these musings into 3 posts. Post 1 will focus specifically on digital humanities initiatives; post 2 on mass digitization, reading, and scholarly communications; and post 3 will examine databases, virtual reality, social networking, and “green” digital humanities, as well as present some simple stats on the ideas that generated the most buzz. Please see http://del.icio.us/lms4w/DH2007 for links to all of the papers and web sites mentioned here (links are also embedded in the post, of course).

  • New digital humanities initiatives. At the end of 2006, the ACLS published “Our Cultural Commonwealth,” which made the case for building the cyberinfrastructure for humanities and social sciences. Since then, it seems, “digital humanities” has been gaining more recognition, attracting more funding, and stimulating exciting research projects. (Of course, many of these activities got started well before 2007.) Digital humanities centers, researchers, and funders have been actively working to develop the humanities cyberinfrastructure. For instance:
    • In April representatives of leading digital humanities centers and funding agencies convened at the University of Maryland for a summit to discuss establishing a network of digital humanities centers. Such collaboration could reduce duplication of effort, create synergies, and provide an infrastructure for training and supporting digital humanities scholars.
    • Organization building is occurring at the level of academic fields as well as research centers. As an Americanist with an interest in the digital, I was delighted that the Digital Americanists was formed. This organization aims to support the development of digital scholarship in American literature and culture, provide an infrastructure for evaluating and sustaining this scholarship, and stimulate reflection and conversation among Americanists about digital tools and methods. Like NINES and other organizations, the Digital Americanists brings together a community of scholars to tackle some of the problems facing digital scholarship, such as the lack of recognition and need for appropriate tools and training.
    • The first NEH Digital Humanities start-up grants were awarded. I found the diversity of projects funded in the first round of digital humanities start-up grants remarkable; disciplines ranged from Celtic studies to classics to jazz to art, and approaches included digitization, database construction, podcasting, and tool-creation. Some of the usual suspects (Virginia, Kentucky) were represented, but there were also successful proposals by institutions I had not previously associated with work in digital humanities, such as Coastal Carolina University. Other funding programs supporting digital scholarship in the humanities included the NEH/IMLS Advancing Knowledge program, the ACLS Digital Innovation fellowships, and the MacArthur/HASTAC Digital Media and Learning Competition (which focused more on pedagogy and communication than research).
    • Digital Humanities Quarterly (DHQ) was launched. An open access journal, DHQ facilitates the free exchange of ideas, provides a forum for publishing multimedia works, and encourages experiments with scholarly communications by enabling authors to incorporate datasets, video, and other forms of media. All articles are encoded in XML; ultimately this markup will provide the basis for sophisticated searching and visualization. Already DHQ is getting some attention–Dennis G. Jerz’s “Somewhere Nearby is Colossal Cave: Examining Will Crowther’s Original ‘Adventure’ in Code and in Kentucky” (from issue 2) generated much buzz and was even noted by Slashdot and Boing Boing. Recent articles have examined topics such as gaming, pedagogy, digitization, text markup and analysis, and narratology.
    • The Text Encoding Initiative turned twenty. The slogan for the 20th Anniversary conference–“20 Years of Supporting the Digital Humanities”–indicates how “digital humanities” has supplanted “humanities computing” as the preferred term, as well as how digital humanities is not a new fad, but a scholarly approach with a significant history. (Of course, humanities computing has been around much longer than the TEI, dating back to Roberto Busa’s work on the Index Thomisticus in the late 1940s.)
    • The Humanities Research Network (part of the Social Science Research Network) was launched, providing free access to humanities articles. (Top papers focus on the law; perhaps not surprisingly, the most popular paper is titled “Fu*k.”)