Artifacts in the Archives

The following slides and text are from a presentation at the Society of Florida Archivists/Society of Georgia Archivists Joint Annual Meeting in Savannah, GA on October 14, 2016.

The full data-set can be downloaded here.

[Slide 1]
So with this transition from the grant-funded project to our regular UF operations, I was tasked with creating a processing plan for what remained unprocessed from the museum collection. This included a large assortment of artifacts and artwork, along with more standard archival documents and photographs. John and I met with the three members of the Panama grant project team, went over the work they had done so far, and tried to gain an understanding of their processes and of what had already been completed. The way processing had been handled over the previous 2 years did not match how we would have preferred the work to be done, and it somewhat complicated things from our archival viewpoint. Still, we decided to continue in the same manner to ensure that the entire collection got processed; while not ideal, keeping with the earlier practices would at least create fairly consistent control and description of the collection.

[Slide 2]
I set out to create the processing plan (actually my first-ever solo processing plan) by surveying the collection holdings at our off-site storage facility, where the majority of the unprocessed items and records were held. The results of the survey showed that we had about 7 linear feet of archival documents; over 200 framed art pieces, maps, and similar works; and almost 4,000 artifacts left to process. Now, I'm well trained in archival processing and come from a long line of MPLP-style work, having received my early hands-on processing training with one of the PACSCL Hidden Collections projects in Philadelphia. I keep stats on my own processing whether administrators request it or not, and I've implemented some of the same processes for metrics tracking at UF. So, I was pretty secure in estimating the needs for processing those 7 linear feet of archival records and photographs.

[Slide 3]
What I wasn't sure about was how to estimate processing of the art and artifacts. At PACSCL, we dealt with a small number of artifacts and tended to keep them within the archival collections. I also worked with the National Park Service for about a year, but there, artifacts were removed and processed by someone else. I headed to the web, as you do, to look for information on processing times for artifacts, but didn't come up with anything of much use. The Park Service has a lot of information on how to budget money for artifact processing, but doesn't include information about time in its manuals. There was scant information available from other sources, so I ended up making an educated guess and crossed my fingers (in the end, I guessed a bit too low).

[Slide 4]
But this made me question, with our love of stats and assessment, why some general numbers for artifact processing aren't available somewhere.

[Slide 5]
I posed this question to John and he agreed. He had looked for this type of data before and found very little. He recalled a few times in the past when archivists or other professionals posed this question to the SAA listserv, and noted that they were generally met with responses citing the unique nature of artifacts and how one couldn't possibly generalize processing times for artifacts or artwork. But archivists used to say the same thing about our own paper collections, and we somehow managed to move on to the understanding that minimal processing usually takes around 4 hours per linear foot and item-level processing tends to take 8 to 10 hours per foot. I thought, we can do better. And with the advice and encouragement of my dear supervisor … a research project was born.
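To make those rules of thumb concrete, here's a minimal back-of-the-envelope sketch in Python applying them to the 7 linear feet from our survey. The function and constant names are mine, chosen for illustration; the per-foot rates are just the conventional figures quoted above, not measurements from this study.

```python
# Rough estimator using the conventional per-linear-foot rates above.
# Names and structure are illustrative, not part of any standard tool.

MINIMAL_HOURS_PER_FOOT = 4       # MPLP-style minimal processing
ITEM_LEVEL_HOURS_PER_FOOT = 10   # upper end of item-level processing

def estimate_hours(linear_feet, hours_per_foot):
    """Estimated processing hours for a run of archival material."""
    return linear_feet * hours_per_foot

# The ~7 linear feet of documents from the collection survey:
print(estimate_hours(7, MINIMAL_HOURS_PER_FOOT))      # 28 hours, minimal
print(estimate_hours(7, ITEM_LEVEL_HOURS_PER_FOOT))   # 70 hours, item-level
```

The point of numbers like these is exactly this kind of quick, defensible planning arithmetic, which is what was missing for artifacts.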

Along with John, I formed a small but professionally diverse group that included Lourdes and Jessica, John's highly knowledgeable wife Laura Nemmers, and a colleague from the Ringling Museum in Sarasota, Jarred Wilson. We started working on a survey to pose to archivists and museum professionals to figure out what data people had and how we could aggregate it into a generalized form that would be useful for budgeting and planning future processing projects. As is the focus of our talk here, this issue is becoming more and more common, and we all thought these sorts of metrics would prove useful to others in the future.

[Slide 6]
In our first meeting we spent a lot of time deciding how to collect the data and also discussing terminology. Having a group with mixed archival and museum backgrounds led to discussions of what exactly each of us meant when we said accessioning, processing, inventorying, and other such terms. Where I say process, Jessica may say accession. Where I say minimal record, she may say inventory entry. Further research and discussions showed that even within one segment of the community, these terms didn't describe the same tasks for everyone. So, we began to think that we should survey people about terminology before surveying them about data, to make sure we asked the right questions.

When we next met, we went over the survey I had devised to try to get a grip on the terminology questions, but it was still confusing and not actually getting at the point we were after. We also knew that surveys tend to have a small response rate, and we didn't want to over-burden the people who might participate in this project. So, back to the drawing board we went. Instead of asking people what they meant by each term and then asking how much time they spent doing the tasks described, we decided to cut to the chase: describe the actions we meant and see whether they had data they could share, or would agree to collect some data and send it to us.

[Slide 7]
I sent out a general email asking for people who might be interested in taking part in a research survey regarding artifact processing within archival settings. From that first request, I received 31 responses from people interested in taking part in or learning more about the project. Then, once we had sorted out exactly what to ask for and how to format the data, I sent another, more specific request to just the people who had initially responded. After that request went out, a number of people dropped out, and in the end only 6 people submitted data.

[Slide 8]
But within those 6 institutions (7 when we include UF) was a wide variety of institution and record types, including archivists, curators, and managers from academic institutions, museums, federal and city government, and public libraries.

[Slide 9]
As for the data, we had devised a set of 9 categories of artifacts that grouped different sorts of items together based on size or complexity, along with a general idea of how long each would take to describe. Of the institutions that participated, 4 used these categories to collect data, while the other 2 sent in more generalized information based on how they normally collect or devise processing times. At UF, we did a bit more processing of the artifacts with these categories in mind, since metrics were not collected during the first 2 years of the project and having our own data involved seemed like a good idea.

[Slide 10]
Here you can see the data parsed out by category, showing the average amount of time for either minimal or full processing across the 9 categories. Entries marked "null" mean that no data was received in that category for that level of processing. (And you may notice one outlier in category 3, where each item took almost 3 hours to process. Those were some pretty intense dioramas that skewed the data wildly for that category, but they don't have much of an impact on the final averages.)
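For readers curious how averages like these come together, here is a minimal sketch of the computation, with "null" standing in for category/level pairs where no data arrived. The observation values below are made-up placeholders for illustration, not the study's actual dataset.

```python
# Sketch of per-category averaging with missing data; values are invented.
from statistics import mean

# minutes per item, keyed by (category, processing_level)
observations = {
    (1, "minimal"): [5.0, 7.5, 6.0],
    (1, "full"):    [18.0, 22.0],
    (3, "full"):    [170.0],   # e.g. an intense-diorama outlier, ~3 hours/item
    (4, "minimal"): [],        # no data received -> reported as "null"
}

for (category, level), minutes in sorted(observations.items()):
    avg = round(mean(minutes), 1) if minutes else "null"
    print(f"category {category}, {level}: {avg}")
```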

[Slide 11]
Here you can see the average overall processing times in a few different ways. Processing time for the categorized items comes in at around 8 minutes per item for minimal processing and almost 22 minutes per item for full processing. All of the categorized processing averages out to just over 19 minutes per item. When I couple this data with the numbers from the other 2 institutions that only sent in generalized data, the final number only goes up by about 20 seconds. So what we have in the end is that, with or without the categories, an artifact can generally be expected to take roughly 20 minutes to process (I had estimated 10 minutes in my processing plan). This is an aggregate, so obviously the processing times of individual items will vary dramatically. But for large collections of objects, knowing that you have, say, 2,000 or so items to process at roughly 20 minutes per item allows an institution to propose a relatively reliable timeline (5 to 6 months) for project planning and budgeting.
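As a quick sanity check on that timeline, here is the arithmetic as a sketch. The hours-per-week figure is my own assumption (processors rarely spend a full 40-hour week on processing alone) and not a number from the study; adjust it to your own staffing reality.

```python
# Back-of-the-envelope project timeline from the ~20 minutes/item aggregate.
# PROCESSING_HOURS_PER_WEEK is an assumed figure, not data from the study.

ITEMS = 2000
MINUTES_PER_ITEM = 20
PROCESSING_HOURS_PER_WEEK = 28   # assumed realistic processing time per week

total_hours = ITEMS * MINUTES_PER_ITEM / 60        # ~667 hours
weeks = total_hours / PROCESSING_HOURS_PER_WEEK    # ~24 weeks
print(f"{total_hours:.0f} hours, ~{weeks / 4.33:.1f} months")  # ~5.5 months
```

Under those assumptions the estimate lands in the 5-to-6-month range quoted above; a larger weekly commitment shortens it accordingly.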

I would like to see a larger data-set to create more useful guidelines for processors going forward, and we’re continuing to collect numbers at UF, but for now, this is what we have. Also, just a quick thanks to everyone who participated in this study.

[Slide 12]
