Teaching and Learning with Omeka

We have recently had some inquiries about our work using Omeka with students and teachers in our workshops and courses. I recently attended THATCamp, where I discussed some of the questions, problems, and competencies involved in this process. My initial submission to the THATCamp blog is here and is reposted below for your convenience. Be sure to read the comments at THATCamp.org and leave any new comments or questions at the end of this post.

Teaching Digital Archival and Publishing Skills (reposted from THATCamp.org)

I’ve been putting this off for a while now, especially after seeing some of the really impressive projects other campers are working on. My job is not research-oriented; much of what I do revolves around operationalizing and supporting faculty projects in the History Department where I work. What follows is a rather long description of one such project, in which students in a local history research seminar are tasked with digitizing archival items, cataloging them using Dublin Core, and creating Omeka exhibits that reflect the findings of their traditional research papers. Although the students are typically Education or Public History majors, they are expected to carry out these tasks to standards that can challenge even professional historians and librarians.

I’ve written about some of the practical challenges in projects like this here. For a full description of the project at hand, click through the page break below. What intrigues me right now are the questions such projects raise, particularly those relating to content quality and presentation.

What are realistic expectations for metadata implementation? Is enforcing metadata standards even appropriate in the context of humanities education? Even many trained librarians are not competent or consistent catalogers; how can we expect more from undergraduate History students? It’s not that students don’t gain from the exercise (whether or not they like it or even realize it); it’s that poor metadata might be worse than none. Information architecture is another challenge. Even when students have no role in the initial site design, they can still confuse the navigation scheme and decrease usability through poorly organized contributions. Likewise, the content students create is not always something we want to keep online, for any number of reasons.

Where do you draw the line between a teaching site (that is, a site designed and used for training projects) and one intended for use by the broader public? The line is blurry to me, but I think how you answer that question dictates what you are willing to do and what you end up with. We really want to create something that is generated entirely by students but has a life outside the classroom; ultimately, though, we will make the decisions that best serve our instructional goals. I think the value is in the process, not the result (though it would be nice for them to match up). We have completed some very ambitious, high-quality projects with small, dedicated teams, but working with large class groups has led to some interesting and unforeseen problems. I wonder whether anyone has ideas about how we might replicate that small-team experience and quality at this significantly larger scale.

Has anyone out there done a similar project? I’d love to hear your experiences and suggestions on pedagogy, standards, or documentation.

I think this fits, to some degree, with Jim Calder’s post and Amanda French’s post, among others (sadly, I have yet to read all the posts here, but I will get to them soon and may hit some people up in the comments).

OVERVIEW
This past semester, the Center for Public History and Digital Humanities at CSU has been training teachers, interns, and undergraduate students in the History Department to use Omeka as a tool for exploring archives, sharing research, and curating personal exhibits. Students in our Local History Seminar are trained in archival research, image handling and digitization, and archival description and subject cataloging, including the use of Dublin Core metadata. In the interest of harnessing student labor for the benefit of the library, and of protecting heavily used artifacts from further deterioration, we have tightened the process so that each participant’s labor may yield results that can be transferred directly to the library’s digital archive, Cleveland Memory, which runs on the CONTENTdm platform. Through trial and error, we have devised a bare-bones metadata plan, set digital image processing standards, and crafted a workflow that optimizes the time and labor invested by students, faculty, and department and library staff. We hit a few bumps along the way, but have plans to revise our process next semester.

EDUCATIONAL RATIONALE
Holistic experience in history-making, from archival process to research to public exhibition

  • Creation and collection of student-generated content (images, maps, charts, exhibits, etc.)
  • Hands-on research in physical and digital archival collections
  • Image processing (digitizing physical artifacts according to locally defined best practices; see the sketch after this list)
  • Archival description using common metadata standards (Dublin Core)
  • Increased awareness of how metadata is organized and used in libraries and archives, which may in turn improve students’ use of those collections and their overall research effectiveness
  • Experience using online archival software / publishing platform (Omeka)
  • Curating thematic local history exhibits based on area of research
  • We believe this increases readiness for employment, teaching, and continued education.
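
To make the image-processing bullet concrete, here is a minimal sketch of the kind of resolution check and derivative-creation step such guidelines call for. The threshold, file names, and Pillow-based approach are illustrative assumptions on my part, not our actual local standards:

```python
# Hypothetical digitization QC sketch: verify that a scan meets a minimum
# pixel dimension, then save an archival TIFF master and a web-ready JPEG.
# Requires the Pillow imaging library; the threshold below is invented.
from PIL import Image

MIN_PIXELS = 3000  # assumed minimum for the longest side of an archival scan

def process_scan(src_path, out_base):
    img = Image.open(src_path)
    if max(img.size) < MIN_PIXELS:
        raise ValueError("%s is %dx%d, below the archival minimum"
                         % (src_path, img.size[0], img.size[1]))
    img.save(out_base + ".tif")                 # archival master
    img.convert("RGB").save(out_base + ".jpg",  # web derivative
                            quality=85)

process_scan("scans/bridge_1932.tif", "processed/bridge_1932")
```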

PROCESS
Students choose a research topic in local history, most often a neighborhood, park, district, or institution/building of historical interest. Students are required to write a 15-page analytical research paper based on primary-source research. They collect documents and images from available archival resources, including both digital and physical artifacts. Items are uploaded to an Omeka installation (exhibits.clevelandhistory.org) and described using Dublin Core and local metadata standards. Non-digital items are digitized according to processing guidelines set by CSU Special Collections. Using the items they collect and the content from their research papers, students use Omeka to curate an interpretive exhibit around their topic, which they present to the class at the end of the semester. Professors spend a limited amount of class time providing ongoing instruction and guidance in technical matters, but generally focus on content.
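
For readers unfamiliar with Dublin Core, here is what a minimal item record might look like, shown as a Python dict purely for illustration (the values are invented, and Omeka stores these fields in its own database rather than accepting anything like this directly):

```python
# An invented Dublin Core record of the kind students complete for each item.
item_record = {
    "Title":       "Detroit-Superior Bridge, looking east",
    "Subject":     "Bridges--Ohio--Cleveland",  # controlled vocabulary term
    "Description": "Photograph of the Detroit-Superior (Veterans Memorial) "
                   "Bridge taken from the west bank of the Cuyahoga River.",
    "Creator":     "Unknown",
    "Date":        "1932",
    "Type":        "Still Image",
    "Format":      "image/jpeg",
    "Source":      "CSU Special Collections",
    "Rights":      "Reproduced courtesy of CSU Special Collections.",
}
```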

As Center staff, I met with the class for hands-on sessions in Omeka use and image digitization, and I have created handouts and an online student guide (exhibits.clevelandhistory.org/guide) containing instructions for using Omeka, digitizing items, and employing metadata standards. The guide contains general rules for Dublin Core and, as the first semester progressed, evolved to also address common mistakes and questions. I track and enforce quality control on new items, and I use the MyOmeka plug-in to leave administrative notes on each record containing instructions for correcting errors, as well as other suggestions for improvement. These notes can be seen only by students and administrators who are logged in with the single shared username. At the end of the semester, items and exhibits are graded and vetted to determine which will remain online. Items that contain complete metadata records and meet copyright and quality standards are exported into the Cleveland Memory collection; the rest are deleted. High-quality exhibits remain public; others are deleted or made private.
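
Much of this quality control is manual, but some checks could be scripted. Here is a hypothetical linting pass over records shaped like the item_record dict above; the required-field list and the crude editorial-voice check are my own assumptions, not an actual Center tool:

```python
# Hypothetical metadata lint: flag missing required fields and possible
# editorial voice in a Dublin Core record represented as a Python dict.
REQUIRED = ["Title", "Subject", "Description", "Date", "Rights"]

def lint_record(record):
    """Return a list of problems found in one record."""
    problems = []
    for field in REQUIRED:
        if not record.get(field, "").strip():
            problems.append("missing required field: " + field)
    # First-person pronouns often signal editorial voice in descriptions.
    desc = " " + record.get("Description", "") + " "
    if any(p in desc for p in (" I ", " we ", " my ")):
        problems.append("possible editorial voice in Description")
    return problems

# For example, a record missing its Rights statement:
print(lint_record({"Title": "Detroit-Superior Bridge, looking east",
                   "Subject": "Bridges--Ohio--Cleveland",
                   "Description": "View from the west bank.",
                   "Date": "1932"}))
```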

RESULTS
Despite the extensive documentation, administrative notes, classroom instruction, and my availability for one-on-one consultation, the results of our first run were decidedly mixed. About one-third of students met the expectations for overall quality; another third came very close but made a few significant mistakes. Common mistakes included use of copyright-protected items, grammar and syntax errors in metadata, improper use of controlled vocabulary terms, use of editorial voice in item descriptions, and image processing errors (low resolution, poorly cropped or aligned images, etc.). The remaining students failed to translate their research into well-crafted exhibits, despite the fact that their in-class presentations were almost unanimously excellent.

From an administrative perspective, we also have some work to do to streamline the process.  Some of our challenges involved limitations with the Omeka software, which was not necessarily designed for such projects.

We gave comments via the MyOmeka plug-in, which requires students to log in and find their items via the public view. Once they find an item in need of correction, they must return to the admin view to make corrections, and they cannot see comments without again returning to the public view. At least one student complained about this cumbersome process, and it was equally difficult for administrators. Printing out item records and adding handwritten notes would have been ideal for students and instructors, but our workflow and other commitments made that impossible.

At the end of the semester, we began the vetting process. I went through and reviewed each item, tagging them with “keep,” “revise,” “remove,” “rights,” and “cmp.” “Rights” was assigned to items whose copyright status was uncertain. “CMP” was assigned to items already available via the Cleveland Memory project. The tags were useful in quickly identifying the status of each item in the collection, but moving beyond that point has proven problematic. For one, the University requires that we retain student work for six weeks after the end of the semester. Were the items and exhibits graded as a final exam, we would need to keep them for a full semester (thankfully, the physical research paper was “the final” for this course). Additionally, there is no easy way to batch delete or batch edit items in Omeka. Again, this is not necessarily a shortcoming in Omeka’s architecture, just a limitation of our project design. Because of these issues, we are making items and exhibits public or not public according to our vetting criteria; deletions and revisions will have to wait at least six weeks.
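
Because the tags live in Omeka’s MySQL database, the vetting lists can at least be pulled with a short script. The sketch below assumes the default “omeka_” table prefix and the tag tables found in recent Omeka versions; table and column names vary across releases, so check your own schema first:

```python
# Pull the IDs of items carrying a given vetting tag straight from MySQL.
# Assumes the MySQLdb (mysqlclient) driver and placeholder credentials.
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="omeka",
                       passwd="secret", db="omeka")
cur = conn.cursor()
cur.execute("""
    SELECT rt.record_id
    FROM omeka_records_tags rt
    JOIN omeka_tags t ON t.id = rt.tag_id
    WHERE t.name = %s AND rt.record_type = 'Item'
""", ("remove",))
for (item_id,) in cur.fetchall():
    print(item_id)  # candidates for deletion once the retention window ends
```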

We have decided to postpone plans for migration to Cleveland Memory until we can address some of the problems encountered in our trial run. We are optimistic that we can improve our instructional and administrative processes next semester, but that will require some new approaches and answers to some of the questions that emerged the first time around.

NEW APPROACHES

Next semester we will use the Contribution plug-in to collect items. This will allow us to limit confusion about which fields to fill in and will also allow us to track submissions more effectively. Because we still want students to have some experience with metadata standards, and because we need to collect some additional information for later migration to the Cleveland Memory repository, we have customized the plug-in to include some additional fields.

To solve the issues of grading and revision, as well as required retention, we will use the ScreenGrab plug-in for Firefox, which allows for the capture of complete web pages.  Students will save each item record and exhibit page in JPEG or PNG format, adding them to a printable document that they will submit for review as items and exhibits are added.
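
If manual screenshots prove too cumbersome, one alternative would be to script the capture of each item and exhibit page as static HTML. A minimal sketch follows; the URLs are illustrative, and saved markup complements rather than replaces pixel-perfect screenshots:

```python
# Save dated HTML snapshots of item and exhibit pages for retention.
import os
import time
import urllib.request

PAGES = [
    "http://exhibits.clevelandhistory.org/items/show/42",          # example item
    "http://exhibits.clevelandhistory.org/exhibits/show/my-topic", # example exhibit
]

os.makedirs("retention", exist_ok=True)
stamp = time.strftime("%Y%m%d")
for url in PAGES:
    name = url.rstrip("/").split("/")[-1]
    with urllib.request.urlopen(url) as resp:
        html = resp.read()
    with open("retention/%s-%s.html" % (stamp, name), "wb") as f:
        f.write(html)
```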

We are still trying to figure out a way to modify and delete items in batches. Since most mistakes involved improper use of controlled subject terms, it would be nice if we could identify a recurring term and edit it in a way that would cascade across the entire installation (e.g., locate all instances of the incorrect subject “Terminal Tower” and replace each with “Union Terminal Complex (Cleveland, Ohio)”). This would likely involve a major change in Omeka, which to my knowledge does not collate Subject fields in this way. Batch deletion for superusers, on the other hand, might be easier to accomplish. Any thoughts?
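
In the meantime, the cascade could be approximated outside of Omeka by editing the database directly. The sketch below assumes the default “omeka_” prefix and the element_texts/elements tables of a typical install; as with any direct SQL, back up the database and verify the schema on your own version first:

```python
# Replace every occurrence of an incorrect Subject term across the install
# by updating Omeka's element_texts table directly. Credentials and table
# names are assumptions; back up before running.
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="omeka",
                       passwd="secret", db="omeka")
cur = conn.cursor()
cur.execute("""
    UPDATE omeka_element_texts et
    JOIN omeka_elements e ON e.id = et.element_id
    SET et.text = %s
    WHERE e.name = 'Subject' AND et.text = %s
""", ("Union Terminal Complex (Cleveland, Ohio)", "Terminal Tower"))
conn.commit()
print("%d subject entries updated" % cur.rowcount)
```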

Students will receive more comprehensive training.  Based on common mistakes and frustrations, we will adjust instruction and documentation accordingly.

[Be sure to check out the original comments for this post, which were posted in preparation for the conference.  Leave any future comments or questions below.]

Erin Bell (M.L.I.S.) is Project Coordinator and Technology Director at the Center for Public History + Digital Humanities at Cleveland State University and lead developer for Curatescape, a web and mobile app framework for publishing location-based humanities content. In addition to managing a variety of oral history, digital humanities and educational technology initiatives, he has spoken to audiences of librarians, scholars, and technologists on best practices in web development and publishing.