Among the many hats I wear, I’m the contributing editor for Digital History Reviews in the Journal of American History. That means I solicit, edit, and submit five to six 500-word reviews every three months on a wide array of digital projects (including sites, tools, apps, databases, and so on). There are very few places that regularly review digital projects, and those of us interested in digital history as a field need to facilitate more and better reviews of those projects. This notion was something that I heard over and over again at the Roy Rosenzweig Center for History and New Media 20th Anniversary Conference last weekend. [There were three different breakout sessions that dealt with peer review of digital history projects in some way, and several of the invited speakers mentioned it in their remarks.]
Fortuitously, it is also time to update the guidelines that I provide to the JAH Digital History Project Reviewers (copied below). Having just had a number of conversations about peer review of digital projects at the RRCHNM conference, I know that I want to push reviewers to think more about who the various contributors are (including undergraduate and graduate students), to contemplate questions of coding/programming, and to look more deeply at claimed and actual historiographic impact. I’m also struggling with notions of reviewing versions of digital projects (should we go back and review a site again after a big update?) as well as questions of how to capture the complexity of some projects in limited space (is it worth doing just three 1,000-word reviews to allow reviewers to explore each site more extensively?).
So, any suggestions you have for changes, updates, clarifications, or additions are welcomed.
Digital History Reviews
“Web Site Reviews” first appeared in the June 2001 issue of the Journal of American History and became “Digital History Reviews” in the September 2013 issue. This section is a collaborative venture with the Web site History Matters: The U.S. Survey Course on the Web (http://historymatters.gmu.edu). This section appears quarterly and normally runs five reviews.
Jeffrey W. McClurken, the department chair and professor of History and American Studies at the University of Mary Washington, is the contributing editor for the “Digital History Reviews” section of the Journal.
The editor welcomes suggestions and may be reached at jmcclurk@umw.edu.
Guidelines
Although these scholarly reviews of digital history projects follow the long tradition of reviewing books in the JAH—as well as the more recent practice of reviewing museum exhibitions, films, and textbooks—digital history reviews have some particular features. The guidelines below provide specific suggestions for dealing with this medium. Please feel free to write to me with any questions you might have, as well as suggested revisions and clarifications to the guidelines.
Digital history projects share a common medium (the World Wide Web), but they are quite diverse in character. Reviewers need to keep that diversity in mind and evaluate each project on its own terms. Generally, most digital history projects fall into one of the following categories, although many sites combine different genres:
- Archive: a site that provides a body of primary documents.
- Electronic Essay/Exhibit: something created/written specifically for the Web—that is, a secondary source that interprets the past in some fashion. This would include “hypertexts” that offer a historical narrative or argument.
- Teaching Resource: a site that provides online assignments, syllabi, and other resources specifically geared toward using the Web for teaching.
- Tool: something that provides functionality related to creating, accessing, or editing digital history content (rather than the content itself).
- Gateway: a site that provides access to other Web-based materials.
- Journal/Webzine: an online publication.
- Organization: a site devoted to providing information on a particular organization.
- Virtual Community: a site on which a historical community—popular or academic—interacts.
Most projects to be reviewed will probably fall into one of the first three categories. The reviewing criteria will vary depending on the category into which the site falls. Thus, for example, an archival site should be evaluated based on the quality of the materials presented; the care with which they have been prepared and perhaps edited and introduced; the ease of navigation; and its usefulness to teachers, students, and scholars. How comprehensive is the archive? Are there biases in what has been included or excluded? Does the archive, in effect, offer a point of view or interpretation? As with other types of reviews, you are providing guidance to readers on the usefulness of the site in their teaching or scholarship. At the same time, you are participating in a community of critical discourse and you are trying to improve the level of work in the field. As you would do in a scholarly book review, then, you are speaking both to potential readers and to producers of similar work.
Even within a single category, the purposes of the digital history projects can vary significantly. An online exhibition or an “electronic essay” can be directed at a largely scholarly audience or a more broadly public audience. It would be unfair to fault a popularly oriented Web site for failing to trace the latest nuances in scholarship, but it would certainly be fair to note that the creators had not taken current scholarship into account. In general, then, online exhibitions and essays should be judged by the quality of their interpretation: What version of the past is presented? Is it grounded in historical scholarship? Is it original in its interpretation or mode of presentation? Again, the goal of the review is to provide guidance to potential readers (who might be reading in their roles as teachers, scholars, or citizens) and to raise the level of digital-based historical work.
Classroom-oriented projects would be judged by the quality of the scholarship underlying them, but naturally you would also want to evaluate the originality and usefulness of the pedagogical approach. Will this project be useful to teachers and students? At what level?
Reviews of digital history projects must necessarily address questions of navigation and presentation. To some extent, this is the same as a book reviewer commenting on whether a book is well written or clearly organized. To be sure, the conventions of book publication are well enough established that book reviewers rarely comment on matters of navigation or design—although they do occasionally note a poorly prepared index or a work with excessive typographical errors. But the digital world is an emerging medium that is visual (and often multimedia), so issues of design and “interface” are necessarily more important. In this sense, digital history reviews share a great deal with film and exhibit reviews. In general, reviewers should consider what, if anything, the electronic medium adds to the historical work being presented. Does the digital format allow the creators of the project to do something different or better than what has been done in pre-digital formats (for example, books, films, museum exhibitions)? Have the creators of the project made effective use of the medium? How easy is it to find specific materials and to find your way around the project?
In summary, most reviews will address the following four areas:
- Content: Is the scholarship sound and current? What is the interpretation or point of view?
- Form: Is it clear? Easy to navigate? Does it function effectively? Does it have a clear, effective, and original design? Does it have a coherent structure?
- Audience/Use: Is it directed at a clear audience? Will it serve the needs of that audience?
- New Media: Does it make effective use of new media and new technology? Does it do something that could not be done in other media—print, exhibition, film?
Because some digital history projects (largely archives) are vast, it is not possible to read every document or visit every link. American Life Histories: Manuscripts from the Federal Writers’ Project, 1936–1940, at the Library of Congress’s American Memory site, http://memory.loc.gov/ammem/wpaintro/wpahome.html, includes 2,900 documents that range from 2,000 to 15,000 words in length. The reviewer could hardly be expected to read what probably amounts to the equivalent of 300 books. In such circumstances, some systematic sampling of the contents can substitute for a review of every single Web page. At the same time, the reviewer of a Web site should devote the same kind of close attention to the work as does a reviewer of a book, exhibition, or film. Because there is no easy way to indicate the size of a Web site (as you can note the number of pages in a book or the number of minutes in a film), you should try (ideally early in your review) to give readers some sense of the kinds of material found and the quantity of each.
One final way that digital history projects differ from books, exhibits, and films is that they are often works in progress. Thus, we ask that the headnote for the review indicate when you examined the project (this could be a range of dates) just as you would indicate in reviewing a performance of a play. Where the project plans some significant further changes, you should say that in the review. If you think that it would make more sense to wait for further changes before reviewing the project, then please let us know and we will put the review off to a later date. If you feel that you need additional information about a project in order to complete a review, we would be happy to contact the author or creator on your behalf.
Because of our scholarly and pedagogical focus, our first priority in selecting reviewers is to find people whose scholarship and teaching parallel the subject areas of the project. We do not favor people who have some “technical” skill any more than we would expect book reviewers to know how books are typeset and printed. But we do have a preference—where possible—for reviewers who are familiar with what has been done in the digital world, since that will give them a comparative context for their evaluation. We recognize, however, that such familiarity is only gradually emerging among professional historians, and that some reviewers will be relatively new to such work.
Headings:
Name of site/title. Address/URL. Who set it up? Who maintains it (if different)? When reviewer consulted it.
EXAMPLES:
Panoramic Maps, 1847–1929, http://memory.loc.gov/ammem/pmhtml/panhome.html. Created and maintained by the Geography and Map Division, Library of Congress, Washington, D.C. Reviewed Dec. 25, 2000–Jan. 2, 2001.
The Triangle Shirtwaist Factory Fire: March 25, 1911, http://www.ilr.cornell.edu/trianglefire. Kheel Center for Labor-Management Documentation and Archives at Cornell University in cooperation with UNITE! (Union of Needle Trades, Industrial, and Textile Employees); edited by Hope Nisly and Patricia Sione. Last site update April 21, 2000. Reviewed Dec. 20, 2000–Jan. 5, 2001.
Jeffrey McClurken
Editor, Digital History Reviews, Journal of American History
Professor and Department Chair, History and American Studies
University of Mary Washington
http://cas.umw.edu/historyamericanstudies/
http://mcclurken.org/
Twitter: @jmcclurken
Phone: 540-654-1475
jmcclurk at umw dot edu