Artifacts in the Archives

The following slides and text are from a presentation at the Society of Florida Archivists/Society of Georgia Archivists Joint Annual Meeting in Savannah, GA on October 14, 2016.

The full data-set can be downloaded here.

So with this transition from the grant-funded project to our regular UF operations, I was tasked with creating a processing plan for what remained unprocessed from the museum collection. This included a large assortment of artifacts and artwork, along with your more standard archival documents and photographs. John and I met with the three members of the Panama grant project team, went over the work they had done so far, and tried to gain an understanding of their processes and the work that had been completed. The way processing had been handled over the previous 2 years of the project did not match how we would have preferred the work to be done – and somewhat complicated things from our archival viewpoint – but we decided to continue in the same manner to ensure that the entire collection got processed. While it wasn’t ideal, keeping with the earlier practices would at least create fairly consistent control and description of the collection.

I set out to create the processing plan – actually my first ever solo processing plan – by surveying the collection holdings at our off-site storage facility, where the majority of the unprocessed items and records were held. The results of the survey showed that we had about 7 linear feet of archival documents; over 200 framed art pieces, maps, and similar works; and almost 4,000 artifacts left to process. Now, I’m well-trained in archival processing and come from a long line of MPLP-style work, having received my early hands-on processing training with one of the PACSCL Hidden Collections projects in Philadelphia. I keep stats on my own processing whether administrators request them or not, and I’ve implemented some of the same processes for metrics tracking at UF. So, I was pretty secure in estimating the needs for processing those 7 linear feet of archival records and photographs.

What I wasn’t sure about was how to estimate processing of the art and artifacts. At PACSCL, we dealt with a small number of artifacts and tended to keep them within the archival collections. I also worked with the National Park Service for about a year, but there, artifacts were removed and processed by someone else. I headed to the web, as you do, to look for information on processing times for artifacts, but didn’t come up with anything of much use. The Park Service has a lot of information on how to budget money for artifact processing, but doesn’t include information about time in its manuals. There was scant information available from other sources, so I ended up making an educated guess and crossed my fingers (in the end, I guessed a bit too low).

But this made me question – with our love of stats and assessment – why some general numbers for artifact processing aren’t available somewhere.

I posed this question to John and he agreed. He had looked for this type of data before and found very little. He recalled a few times in the past when archivists or other professionals had posed this question to the SAA listserv, and noted that they were generally met with responses about the unique nature of artifacts and how one couldn’t possibly generalize processing times for artifacts or artwork. But archivists used to say the same thing about our own paper collections, and we somehow managed to move on to the understanding that minimal processing usually takes around 4 hours per linear foot and item-level processing tends to take 8 to 10 hours per foot. So I thought: we can do better. And with the advice and encouragement of my dear supervisor … a research project was born.

Along with John, I formed a small but professionally-diverse group of people including Lourdes and Jessica, John’s highly knowledgeable wife Laura Nemmers, and a colleague from the Ringling Museum in Sarasota, Jarred Wilson. We started working on a survey to pose to archivists and museum professionals to try to figure out what data people had and how we could aggregate that into a generalized form that would be useful for budgeting and planning future processing projects. As is the focus of our talk here, this issue is becoming more and more common and we all thought these sorts of metrics would prove useful to others in the future.

In our first meeting we spent a lot of time deciding how to collect the data and also discussing terminology. Having a group with mixed archival and museum backgrounds led to discussions of what exactly each of us meant when we said accessioning, processing, inventorying, and other such terms. Where I say process, Jessica may say accession. Where I say minimal record, she may say inventory entry. Further research and discussions showed that even within one segment of the community, these terms didn’t describe the same tasks for everyone. So, we began to think that we should survey people about terminology before surveying them about data – to make sure we asked the right questions.

When we next met, we went over the survey I had devised to try to get a grip on the terminology questions – but it was still confusing and not actually getting at the point we were after. And we also knew that surveys tend to have a small response rate and we didn’t want to over-burden the people that might participate in this project. So, back to the drawing board we went. We decided instead of asking people what they meant by each term and then asking how much time they spent doing the tasks described, we would cut to the chase and describe the actions we meant and see if they had data they could share or if they would agree to collect some data and send it to us.

I sent out a general email asking for people who might be interested in taking part in a research survey regarding artifact processing within archival settings. From that first request, I received 31 responses from people interested in taking part in or learning more about the project. Then, once we had sorted out exactly what to ask for and how to format the data, I sent another, more specific request to just the people that had initially responded. After sending out that request, a number of people dropped out, and in the end only 6 people submitted data.

But within those 6 institutions (7 when we include UF) was a wide variety of institution and record types – including archivists, curators, and managers from academic institutions, museums, federal and city government, and public libraries.

As for the data, we had devised a set of 9 categories of artifacts that grouped different sorts of items together based on size or complexity, with a general idea of how long they would take to describe. Of the institutions that participated, 4 used these categories to collect data, while the other 2 sent in more generalized information based on how they normally collect or devise processing times. At UF, we did a bit more processing of the artifacts with these categories in mind, since metrics had not been collected during the first 2 years of the project and having our own data in the mix seemed like a good idea.

Here you can see the data parsed out by category, showing the average amount of time for either minimal or full processing across the 9 categories. The entries marked “null” mean that no data was received in that category for that level of processing. (And you may notice one outlier in category 3, where each item took almost 3 hours to process. Those were some pretty intense dioramas that skewed the data wildly for that category, but they don’t have much of an impact on the final averages.)

Here you can see the average overall processing times in a few different ways. Processing time for the categorized items comes in at around 8 minutes per item for minimal processing and almost 22 minutes per item for full processing. All of the categorized processing averages out to just over 19 minutes per item. When I couple this data with the numbers from the 2 institutions that only sent in generalized data, the final number goes up by only about 20 seconds. So, with or without the categories, an artifact can generally be expected to take roughly 20 minutes to process (I had estimated 10 minutes in my processing plan). This is an aggregate, so obviously the processing times of individual items will vary dramatically. But for large collections of objects, knowing that you have, say, 2,000 or so items to process, at roughly 20 minutes per item, allows an institution to at least propose a relatively reliable timeline (5 to 6 months) for project planning and budgeting.
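For anyone who wants to plug in their own numbers, the back-of-the-envelope math can be sketched in a few lines. This is purely illustrative – the 2,000-item collection size and the 30 processing-hours per week are hypothetical assumptions (few of us get to process full-time), not figures from any participating institution; only the ~20 minutes per item comes from the study.

```python
# Rough project-timeline estimate for artifact processing.
# Only MINUTES_PER_ITEM comes from the survey aggregate; the
# collection size and weekly processing hours are assumptions.

MINUTES_PER_ITEM = 20            # aggregate average from the study
ITEMS_TO_PROCESS = 2_000         # hypothetical collection size
PROCESSING_HOURS_PER_WEEK = 30   # assumes staff aren't processing full-time
WEEKS_PER_MONTH = 4.33           # average weeks in a month

total_hours = ITEMS_TO_PROCESS * MINUTES_PER_ITEM / 60
weeks = total_hours / PROCESSING_HOURS_PER_WEEK
months = weeks / WEEKS_PER_MONTH

print(f"{total_hours:.0f} hours ≈ {weeks:.1f} weeks ≈ {months:.1f} months")
```

Under those assumptions the project lands at roughly 5 months of work; ease off the weekly hours a bit and you’re at 6, which is how a “5 to 6 month” planning window falls out of a single per-item average.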

I would like to see a larger data-set to create more useful guidelines for processors going forward, and we’re continuing to collect numbers at UF, but for now, this is what we have. Also, just a quick thanks to everyone who participated in this study.


National Tracing Center of the Bureau of Alcohol, Tobacco, Firearms, and Explosives

by Steve Ammidown* and Steve Duckworth (original post appeared on the SAA I&A Roundtable “Archivists on the Issues” blog on July 7, 2016; this post updated on September 19, 2016 to include new information and articles)

Let’s talk about guns. And records management. And maybe some advocacy.

According to their website, the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF) is

a law enforcement agency … that protects our communities from violent criminals, criminal organizations, the illegal use and trafficking of firearms, the illegal use and storage of explosives, acts of arson and bombings, acts of terrorism, and the illegal diversion of alcohol and tobacco products. [They] partner with communities, industries, law enforcement, and public safety agencies to safeguard the public [they] serve through information sharing, training, research, and use of technology.

That’s a big job. But as we’re about to see, “information sharing” and the “use of technology” are pretty restricted at the ATF.

The National Tracing Center (NTC) is the firearms tracing facility of the ATF. Located in Martinsburg, WV, they are the only facility in the United States that can provide information to law enforcement agencies (local, state, federal, and international) that can be used to trace firearms in criminal investigations, gun trafficking, and other movement of firearms, both domestically and internationally. And they are drowning in records. From recent reports, roughly 1.6 million records arrive at the facility each month. Records usually come from defunct firearms dealers who are required to submit their records when they go out of business. (For dealers still in business, the NTC contacts them after tracing a weapon through its manufacturer.) There appear to be no standards in place for how dealers have to keep or submit these records. There is a form (4473) for the actual gun purchase, but other records can come in on computer media or hand-written documents. They often arrive somewhat damaged, with partially shredded or water-damaged records being frequently cited in news reports. Some people even send theirs in on rolls of toilet paper.

As it is currently illegal to create a registry of firearms in the U.S., the idea of a searchable database is also off the table. This leaves workers at the facility with the task of sifting through these records manually to complete traces. Upwards of 365,000 traces are requested each year, and the number will just keep growing (due in part to the Obama administration’s requirement that every gun involved in a crime be traced). While records are now being digitized to provide some easier access and relief for the physical space needed, the records remain non-searchable and amount to a newer version of microfilmed records. Even these digitization efforts are problematic, as a recent Government Accountability Office report showed. The GAO reported that digital records systems were in violation of the appropriations act restriction because, among other things, the records were kept on a single server and allowed access to too much data.

Some of the problems at the NTC can be traced to the lack of consistent leadership and chronic underfunding. The position of the agency director was unfilled from 2006 to 2013 due to legislation backed by the National Rifle Association (NRA) that requires Senate confirmation to fill the position. In 2013, acting director B. Todd Jones was narrowly confirmed to fill the Director role, but he retired soon after (in 2015) and the position has yet to be filled permanently. Stagnant funding has prevented the ATF from keeping up with demand. In addition to the overwhelmed workers at NTC, the agency has just over 600 inspectors dedicated to inspecting the record-keeping at over 140,000 gun dealers across the country.

The data collected by the NTC is also subject to the NRA’s legislative sway in Washington. As mentioned, they have been successful in heading off any attempts at creating a searchable database, arguing that such a mechanism would be a “registry” in violation of the Second Amendment. Taking it one step further, a set of provisions known as the “Tiahrt Amendments” has been attached to every U.S. Department of Justice appropriations bill since 2003, prohibiting the NTC from releasing information to anyone other than a law enforcement agency or prosecutor in connection with a criminal investigation. The law effectively blocks this data from being used in academic research on criminal gun use or in civil lawsuits against gun sellers or manufacturers. It also prevents the ATF from collecting the inventory information from gun dealers, which would further help identify lost or stolen guns. The Law Center to Prevent Gun Violence argues that these amendments only empower criminals and reckless dealers.

Restrictive laws and a lack of quality management have led to a massive backlog of records and a very limited system for fulfilling the great number of trace requests the NTC receives. The antiquated measures the NTC is required to use restrict law enforcement’s ability to perform its duties. While public opinion regarding gun sales seems to be turning (unlike Congress’s voting record), the idea of a database seems quite far off. An effective and permanent director at the ATF would be a good starting place, but as of this writing, a nomination doesn’t appear to even be in place (and given the current political climate, it’s not likely to happen anytime soon).

Not unlike the rest of the gun debate, the political debate around the NTC and gun tracing data seems intractable and unlikely to change. Luckily for us, however, we’re archivists and records managers! We offer a unique perspective on this subject when we contact our elected officials on this topic. We’ve been in the dusty stacks (yes, we said it) and dealt with unwieldy access systems when time was of the essence. We should be arguing for the modernization and full funding of the NTC and the repeal of the Tiahrt Amendments, at the least to improve access to government information and at the most to help save lives. So consider this your call to advocacy (as mentioned at the start). If you feel this situation warrants some action, take it and contact your legislators now!

*Steve Ammidown is the Manuscripts and Outreach Archivist for the Browne Popular Culture Library at Bowling Green State University in Bowling Green, Ohio.


Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF)

Government Accountability Office, June 2016, “Report to Congressional Requesters: ATF Did Not Always Comply with the Appropriations Act Restriction and Should Better Adhere to Its Policies” (GAO-16-552)

Law Center to Prevent Gun Violence, “Maintaining Records on Gun Sales”

National Tracing Center (NTC) (informational brochure)

National Tracing Center via Wikipedia (good general overview)

The White House, “Presidential Memorandum – Tracing of Firearms in Connection with Criminal Investigations”

News reports:

2016 August 30, GQ, “Inside the Federal Bureau of Way Too Many Guns”

2016 August 24, The Trace, “The ATF’s Nonsensical Non-Searchable Gun Database, Explained”

2016 March 24, America’s 1st Freedom [NRA magazine], “Where the ATF Scans Gun Sales Records”

2016 January 6, The Guardian, “Agency tasked with enforcing Obama’s gun control measures has been gutted”

2015 October 27, USA Today, “Millions of Firearms Records Languish at National Tracing Center”

2015 March 20, USA Today, “ATF director announces resignation”

2013 June 11, Media Matters, “How the NRA Hinders the ATF Director Confirmation Process”

2013 May 20, NPR, “The Low-Tech Way Guns Get Traced”

2013 March 13, InformationWeek, “ATF’s Gun Tracing System is a Dud”

2013 February 19, WJLA ABC7 (Washington, D.C.), “ATF National Tracing Center Traces Guns the Old-Fashioned Way” (YouTube)

2013 January 30, CBS Evening News, “Tracing Guns is Low-tech Operation for ATF”

2011 November 2, Law Center to Prevent Gun Violence, “Federal Law on Tiahrt Amendments”

2010 October 26, Washington Post, “ATF’s Oversight Limited in Face of Gun Lobby”

Research Post: Fire at the Cinemateca Brasileira

This post first appeared on the Society of American Archivists’ Issues & Advocacy Roundtable blog.

I&A Research Teams are groups of dedicated volunteers who monitor breaking news and delve into ongoing topics affecting archives and the archival profession. Under the leadership of the I&A Steering Committee, the Research Teams compile their findings into Research Posts for the I&A blog. Each Research Post offers a summary and coverage of an issue. This Research Post comes from On-Call Research Team #2, which is mobilized to investigate issues as they arise.

Please be aware that the sources cited have not been vetted and do not indicate an official stance of SAA or the Issues and Advocacy Roundtable.

Summary of the Issue

A fire broke out in the film library of the Cinemateca Brasileira in São Paulo on February 3, 2016. The exact cause of the fire was not reported, but the area involved was where nitrate film was stored. This material is known to be volatile and can spontaneously combust due to environmental factors. Sources reported that approximately 1,000 rolls of film burned in the fire. All is not lost, however, as the institution states that all films lost in the fire had been preserved in other media formats (though some reports stated that 80% were preserved in other formats). Reports of the fire came out soon after the event occurred, but updates and further information have not been located. While there are many reports, especially in Portuguese, almost all of them date from February 3 or 4, and each appears to leave some questions on the table.

The fire occurred in one of the institution’s nitrate film warehouses, which are specially designed to house such film; there is no electric grid and interior walls do not reach the ceilings. Most sources report that it took about 30 minutes to contain the fire. Some video footage of the scene can be found here.

The Cinemateca Brasileira holds some 250,000 film rolls, including features, short films, and newsreels, as well as books, papers, movie posters, and other paper records; this loss represents 0.4% of their film holdings. The history of the Cinemateca can be traced back to 1946 as the Second Film Club of São Paulo (after the First had been closed by the Department of Press and Propaganda in 1941). In 1948, the Club became affiliated with the International Federation of Film Clubs and, in 1949, with the film department of São Paulo’s newly created Museum of Modern Art. In 1964, it was incorporated into the Ministry of Culture, becoming a governmental institution. Previous fires have occurred in 1957, 1969, and 1982, all due to nitrate film. The institute moved into its current facilities, built under the technical guidance of the International Federation of Film Archives (FIAF), in 1998.

The Archivist Rising blog reported that the institution suffered somewhat recent budget cuts due to a large financial crisis. Blogger Aurélio Michiles blames the incident on the previous budget cuts as well, but describes the cuts as more of a punishment towards the administration rather than having to do with an overall financial crisis. Further sources state the number of employees has been reduced from over 100 in 2013 to just over 20 currently, though it remains unclear how many employees are governmental workers and how many are actually employed by the Cinematheque’s Friends Society (Sociedade Amigos da Cinemateca) and whether or not that affects the various numbers reported from different sources. The truth behind this budget controversy is left for further research – and preferably by someone proficient in Portuguese.

Bibliography of coverage of the issue:

“Some thousand film rolls burnt in Cinemateca Brasileira fire.” EBC Agencia Brasil. Accessed 2016 March 25. (EBC manages TV Brasil, TV Brasil International, Agência Brasil, Radioagência, and the National Public Broadcast System. Besides the commitment to public communication, their values are characterized by editorial independence, transparency, and participatory management.)

“Cinemateca Brasileira.” Wikipedia. Accessed 2016 March 25.

“Ministério da Cultura não tem plano para evitar novos incêndios na Cinemateca Brasileira” [“Ministry of Culture has no plan to prevent new fires at the Cinemateca Brasileira”]. Estadão. Accessed 2016 March 25.

“Ministério da Cultura mudará gestão da Cinemateca” [“Ministry of Culture will change the Cinemateca’s management”]. Folha de S.Paulo. Accessed 2016 March 25.

The I&A Steering Committee would like to thank Steve Duckworth for writing this post, and Rachel Seale and Alison Stankrauff for doing key research on the issue.

I&A On-Call Research Team #2 is:

Alison Stankrauff, Leader
Katherine Barbera
Anna Chen
Steven Duckworth
David McAllister
Rachel Seale

If you are aware of an issue that might benefit from a Research Post, please get in touch with us:

Code4Lib2016 Conference Review

This post was written for, and first appeared on, SAA’s SNAP roundtable blog.

Code4Lib 2016 was held in Philadelphia, PA from March 7 to 10 along with a day of pre-conference workshops. The core Code4Lib community consists of “developers and technologists for libraries, museums, and archives who have a strong commitment to open technologies,” but they are quite open and welcoming to any tangentially related person or institution. As a processing archivist whose main experience has been with paper documents, I thought I would feel confused and out of place for the length of this conference, but, while I had my moments, I left feeling more knowledgeable about efforts and innovations within the coding community, giddy with ideas of projects to bring to my own workplace, and incredibly glad that I stepped outside of my archival comfort zone to attend (and present at!) this conference. (And I have to thank our university’s Metadata Librarian, Allison Jai O’Dell, for asking me to present with her. Without her reaching out to me, I likely wouldn’t have gotten involved in the conference to begin with.)

So, before Code4Lib, there was Code4Arc – at least, as a preconference workshop. Code4Arc focused on the specific coding and technology needs of the archivist community and on the need to make Code4Arc an actual thing, rather than just an attachment to Code4Lib. While both communities would have quite a bit of overlap, archivists obviously have their own niche problems, and coders can often help sort those problems out. Also, having a direct line between consumer-with-a-problem and developer-with-a-solution would prove quite beneficial to all parties involved. The day was divided up into a series of informal discussions and more focused breakout groups, along with some updates from developers. The end result mainly boiled down to continuing the discussion about our needs as a community, communicating and sharing knowledge and data more openly, and focusing efforts on specific problems that affect many archives. We’ve formed some ad hoc groups and will likely have more to say in the not-too-distant future.

As to the conference proper, I’ll start by noting that a ton of information is available online. The conference site lists presentations, presenter bios, and links to twitter handles and slides where available. Three series of Lightning Talks emerged during the conference; information on those talks can be found on the wiki, which is full of useful information and links. And everything was recorded, so you can watch the presentations from the Code4Lib YouTube channel. The conference presentations were almost a series of lightning talks themselves. Each presentation was allotted 10-20 minutes of time, with 6 groups of presentations given over the course of the conference, along with 2 plenary talks. So, while it was a nice change from the general conference configuration, it did make for a rather exhausting (but engaging) experience. Having said that, I will only specifically mention a few of the presentations that resonated more with me or relate more specifically to archival work (because seriously, I saw over 50 in the course of 2.5 days). But again, I stress, totally worth it! And they feed you. A lot!

So on day one (inserts shameless plug), Allison Jai O’Dell and I presented The Fancy Finding Aid (video | slides). We talked about some front-end design solutions for making finding aids more interactive and attractive. Allison is wicked smart and also offered up a quick lightning talk on day three about the importance of communicating, often informally, with your co-workers (video). Other presentations of note from day one include Shira Peltzman, Alice Sara Prael, and Julie Swierzek speaking about digital preservation in the real world in two separate presentations, “Good Enough” preservation (video | slides) and Preservation 101 (video | slides). Eka Grguric broke down some simple steps anyone can take towards Usability Testing (video | slides) and Katherine Lynch shared great ideas regarding Web Accessibility issues (video | slides). Check out the slides for lots of great links and starting points, like testing out navigability by displacing your mouse or using a screen reader with your monitor off.

Matienzo: Ever to Excel

Later on, Mark Matienzo discussed the ubiquity of the spreadsheet in Ever to Excel (video | slides). The popularity of spreadsheets may come from the hidden framework that shields users from low-level programming, making users feel more empowered. Lightning talks included the programming committee asking for help with diversity in #ProgramSoWhite (video | slides), a focus repeated the following day in a diversity breakout session. Ideas generated from the diversity talks were focused on further outreach with schools and professional organizations, scholarship initiatives for underrepresented populations and newer professionals, and stressing the need for those in the coding community to reach out for collaborators in other areas to bring new voices into the community.

Angela Galvan, in her talk titled “So you’re going to die” (video | related notes), spoke about digital estate management and the need to plan for what happens to digital assets after someone dies. Though humans now post so much of their lives online, we are still relatively silent about death. Yuka Egusa’s talk about how non-coders can contribute to open source software projects was particularly popular (video | slides). She notes that engineers love coding, but generally don’t like writing documentation. Librarians and archivists can write those documents and training manuals, and we can aid with reporting bugs and usability testing. Don’t let lack of coding knowledge keep you from being part of innovative programs that interest you.

Yoose: libtech burnout

Day two: Becky Yoose gave an exhilarating talk about protecting yourself from #libtech burnout (video | slides). In the lightning talks, Greg Wiedeman spoke about his Archives Network Transfer System (video | more info), which is an interesting solution to a problem Code4Arc focused on, but also highlights the need for a simpler way to structure the process of transferring digital materials to the archives.

Andreas Orphanides gave a great talk about the power of design. Architecture is Politics (video | slides) highlighted how, intentionally or not, your web and systems designs are political; likewise politics influence your design. The choices you make in design can control your user, both explicitly and subtly, and politics can influence the choices you make in the same way. Thus, design is a social justice issue and you need to be active in knowing your users, recognizing your own biases, and diversifying your practices. Matt Carruthers talked about Utilizing digital scholarship to foster new research in Special Collections (video | slides). This project at the University of Michigan provides visualization-on-demand customized to a patron’s research question. Though still in the early stages of development, they are extracting data from EAD files they already have to create EAC-CPF connections. This data is then used to visualize the networks, and online access to the visualizations is offered for users. This is the start of a fascinating new way to provide further discovery and access in archives and special collections.

Day three’s lightning talks included Sean Aery from Duke speaking about integration of digital collections and finding aids and some great ways to maintain context while doing so (video | slides); Heidi Tebbe recommended the use of GitHub as a knowledge base, not just a place for code (video | slides); and Steelsen Smith pointed out the various issues that can arise with assorted sign-ons and using single sign-ons to actually open up systems for more users (video | slides).

And lastly, Mike Shallcross discussed a University of Michigan project that I’ve been following closely, the ArchivesSpace-Archivematica-DSpace Workflow Integration (video | slides). They are working to overhaul archival management to bring ArchivesSpace and Archivematica together with a DSpace repository to standardize description and create a “curation ecosystem.” We’re closing in on a similar project where I work and Mike has been making regular (and rather entertaining) blogposts about the Michigan project, so it was good to hear him in person. (If interested in more, check out their blog.)

Orphanides: Architecture is Politics

Oh, the plenary talks. I almost forgot. They were great. The opening talk by Kate Krauss of the Tor project focused on social justice movements in the age of online surveillance (video | slides) and the closing talk by DuckDuckGo founder Gabriel Weinberg (video) similarly focused on privacy and related concerns in online searching.

So, it was a great conference. There were definite themes emerging about creating better access and more privacy for users; trying to get out of your normal routine and envision projects from another perspective; communicating better and more openly within and around our own community; and using all of this to better document and support underrepresented communities around the world. I’ve now said too much. I hate reading long blog posts. But I definitely recommend this conference to anyone in the library and archives fields with any inkling of interest in digital projects. It’s a great way to get new ideas, see that you aren’t alone with your out-of-date systems, and meet some great people who you may not normally get to interact with on a regular basis.

code4lib 2016: The Fancy Finding Aid

Fancy Finding Aid – PowerPoint slides (PDF format, 1 MB)
Fancy Finding Aid – PowerPoint files

Video of presentation:

Reaction on twitter:

Shuck it!

I was recently reminded of the usefulness of the oyster shucker as archival implement. Yes, that’s right, the humble oyster shucker.


It’s not something I ever envisioned using in the archives. It’s not something I took courses on in library school. It was never mentioned during my internships. But it is a tool I feel we all need to be a bit more educated about. It has a multitude of uses, and it’s just kind of fun to say.

It expertly assists with:



and shucking oysters – of course
(though perhaps don’t use the same one you use in the archives)

So, here’s to the wonderful and versatile oyster shucker! And thanks to the National Park Service in Anchorage for bringing this shucker into my life.

Metrics for hybrid collections

Archival repositories are, more and more, finding themselves in the position of processing and housing hybrid collections of standard archival documents and what would generally be referred to as artifacts or museum-style objects. While archivists have become quite adept at tracking timing statistics for processing paper-based collections, few similar, timing-based metrics appear to exist, at least in professional literature, to aid in planning for processing of these hybrid collections.

Our group of archivists and museum professionals is interested in closing this gap in metrics for the information science community as a whole. If you or your institution currently keep metrics on such holdings, and would be willing to participate in a future survey regarding your collecting and processing habits, please email me at I would appreciate hearing from you by February 29, 2016.

Please feel free to share this message with other interested parties. Thank you.

Steve Duckworth | Processing Archivist
Department of Special & Area Studies Collections
George A. Smathers Libraries
University of Florida
200A Smathers Library