Oral History of Museum Computing: Rob Lancefield

This Oral History of Museum Computing is provided by Rob Lancefield, and was recorded on the 21st of May, 2021, by Paul Marty and Kathy Jones. It is shared under a Creative Commons Attribution 4.0 International license (CC-BY), which allows for unrestricted reuse provided that appropriate credit is given to the original source. For the recording of this oral history, please see https://youtu.be/uwKUaOpOYHI.

Here we go. Before joining the museum world, I was actually a professional musician for about ten years. I think Paul knows this. And along with a lot of performance and some composing, I was also supporting myself doing various kinds of self-employed work that almost all lived in this overlapping area between the arts, in some form, and technologies, in some form, mostly analog ones back then. This was the ’80s, for the most part. Between music typesetting, audio recording, photography—all these things that are in that sort of overlapping space. And it actually all sort of amounted to the gig economy before it was called the gig economy.

After that I jumped back into—well, jumped into grad school, and this was at Wesleyan [Wesleyan University, in Middletown, Connecticut]. And I thought it was going to be just two years of reading the same books as a cohort of other people and talking about them, and then going back to being a musician and doing all of this freelance stuff; but partway through [a subsequent phase of] that graduate time I was kind of informally recruited, by the person who became my boss, to apply for a museum job. This was as Registrar of the Davison Art Center Collection, which is also right at Wesleyan, next door to the Music Department, where I was spending so much time.

[…]

So I agonized over whether I could make a two-year commitment to the job, which now strikes me as, like, borderline [laughing] amusing-slash-hilarious because, obviously, that was a long time ago and I haven’t left the sector yet. So I took the plunge and before I knew it, the museum sector had actually become my main professional home for now almost thirty years. So I mentioned I was hired in as a registrar. This is, I guess, because I can sometimes be good at organizing things and I already had this pretty serious interest in photographs, and this is largely a collection of works on paper, including photographs. But about a year after I started the job, I attended MCN 1995 [the annual conference of the Museum Computer Network, a professional organization for people who do digital work in museums] at the Coronado in San Diego. And yay! It was awesome. And this was in large part because, even in the course of my first year as a registrar (and I say this with genuine great respect for registrars, who are awesome at what they do), I knew that it wasn’t really a line of work that I would personally find engaging for more than a couple of years. I was feeling like I really like working in a museum, [and] the things that would engage me more would be thinking about how to make use of new technologies that pop up, in relation to this really cool collection and the ways that people can use it and learn from it and enjoy it.

So at MCN 1995, a good number of the presentations were showing off and talking about museum websites—this new thing, this fairly new thing—and how they were made and all of this. So that seemed pretty interesting. And I got back from MCN, got a big fat book on HTML 2.0 (it’s amazing, thinking back, how fat a book could be published about HTML 2.0, which was not all that extensive a standard; but oh my gosh, this was a fat book on HTML 2.0), and played around in my favorite text-editing software (BBEdit, for the Mac users out there), and a pretty early version of Photoshop—I don’t remember, [version] 3, 4, something like that. I learned a little bit about Apache [web server software in use at Wesleyan] and launched our museum website early in ’96. Pretty minimal, but it was fun to do. And suddenly we didn’t have to put ink on paper to let people know, like, when the gallery would be open, right [while dissemination of those kinds of information also continued in printed form]. The brochureware days, such as they were.

But as I alluded to a second ago, it was already becoming clear to me at that point that the area of museum work that I found most engaging and most likely to keep me engaged over a good long time was using technologies to enable people to get to digital content and have experiences with that content that they would find useful or enjoyable somehow. And that really turned out to be a defining thread through a lot of my work ever since—along with the many other things that need doing, of course. [laughing] It’s not all that [technology work].

But—so, then, where are we? Mid-’90s into late ’90s? When I started the job as registrar, collections data [at the Davison Art Center] was managed in two parallel ways: a physical card catalog (one card per print or photograph, generally) and HyperCard! Anybody remember HyperCard? HyperCard was awesome for what it was. It was not really a collections information management system platform, but you know, it could be made to do all sorts of things. And the person before the person before the person in the job before me—there’d been some quick turnover there—had built this cool little HyperCard stack that was at least a solid little place to start having untold legions of student interns keyboarding in typed content off of—at that point, I guess, it would have been about forty years’ worth of catalog cards. So, HyperCard. Yeah. A good place to sort of stage some content before it lands in a system that can do a little bit more and be, like, robustly multi-user and all the rest.

So back then—knowing that we were a very small shop (the DAC still is; there were just three staff people: a half-time preparator/installer, the curator who was de facto director, and the role that I was in, which at that point was called Registrar of Collections; it soon after acquired a second title to reflect the more technical work)—it was, you know, a tiny little shop, and there proved to be no way that we could figure out to make the case for the kind of actual recurring annual cash budget allocation that it would have taken to license and keep maintenance and support running for a vendor-developed and -supplied collections system. That was always my first choice in principle, because it is a lot more robust across prospective staff transitions, especially when there are only three people who work there, and no one of those positions was really solely focused on the kind of work that would allow an in-house system to be kept running.

That said, it seemed like, “Okay, what do we do here?” And this goes to the theme of invisibility, I should say: at a point, it seemed that, “Okay, we need a more robust system. The one way that we can make that happen, at least for the foreseeable future on a scale of some years, is to develop in-house.” So, in ways that I designed to be as minimally dependent [as possible] on me being there eternally, I ended up using the sometimes loved and sometimes maligned FileMaker Pro—not, for the record, as a platform to build a terribly architected system, but as a legit Rapid Application Development environment and a robust client-server platform, with a lot of stuff way locked down and modeled for data integrity, and all of that good stuff. And did it. Rolled it out. It made sense. It worked for a long time—kept us running for about ten years, up until it became possible to make a case to the University, which is the DAC’s parent institution, for an ongoing commitment for the funding it would take to migrate to an externally developed and supported system [and pay for continued maintenance and support for such a system]. Which, you know, always in concept was the model that made sense, but was not the model that could actually be implemented for some time. Until it could be.

And this—this really goes to invisibility, right, because the cost of licensing a system, a CMS [Collections Management System]: that was very visible. The costs of staff time to develop in-house, because there happened to be somebody working there who was really interested in doing it and had thought some about data modeling in other contexts before museum work and was getting paid to be there anyway—that was effectively invisible. So it looked like, “Wow, cool. You know, all that those people at the Davison Art Center have to do is, like, pay 159 bucks for a FileMaker Pro license, and then once this thing is built after a year or two, figure out in their budget how to pay about a thousand a year for server and client licenses for this rather more generic, far lower-cost thing that has no specific support for an application that’s developed on top of it.” It looked cheap, and in a cash sense, it was.

It actually worked out fine because it did bridge that time until we could move off [the] in-house [system]; but the reason it played out that way really was because technology work in the museum was invisible in certain ways. So things that would require that [work] as a chief resource looked like they were close to free, although of course it was zero-sum with other work that could have been done, and all of that. So, just an interesting aspect of invisibility, I think. After that ten years or so, we were able to migrate out. By then, one of the nice things about being in an in-house system was that for data cleanup and reconciliation, I could build all sorts of custom stuff into it—the ability to do that was something I came to take for granted. [laughing] It was interesting: once we were in a vendor-developed system, it was like, “Wait, I can’t, like, set up calc fields and do this and truncate and re-write to a different field, and whatever else?” [laughing] They’re like, “No. Why would you be able to do that?” It’s like, “Oh, never mind; I’m just going to go, like, shed a tear and figure out another way to do what I want to do.” But it was interesting. And they’re [the Davison Art Center is] still running that [vendor-developed] system, too, so that ended up working out, and now there is outside support for continuity across staff transitions. So, big sigh of relief. And—Kathy?
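[To illustrate the kind of ad hoc “truncate and re-write to a different field” cleanup pass described a moment ago, here is a minimal conceptual sketch in Python rather than FileMaker calc fields; the field names and the truncation rule are hypothetical, not the DAC’s actual data model.]

```python
# Conceptual sketch (Python standing in for FileMaker calc fields) of a
# cleanup pass that truncates one field and writes the result to another.
# Field names and the truncation rule are hypothetical.
records = [
    {"credit_line_raw": "Gift of Jane Doe, 1967; transferred 1982",
     "credit_line_short": ""},
]

MAX_LEN = 40  # hypothetical display limit for the short field

for rec in records:
    raw = rec["credit_line_raw"]
    head = raw.split(";", 1)[0]               # keep only the first clause
    rec["credit_line_short"] = head[:MAX_LEN].rstrip()

print(records[0]["credit_line_short"])        # -> Gift of Jane Doe, 1967
```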

[Jones]: Which system replaced the one that you made?

That was EmbARK.

[Jones]: Oh, EmbARK. Still, they’re using EmbARK?

Yeah.

[Jones]: Okay.

Yeah. Which, I mean, you know, every system has its upsides and downsides. Now, here’s some stuff that I might, like, massage the language around once I’m seeing the transcript. But as with any CMS, there are aspects of EmbARK that drove me completely crazy—I think in any CMS there would be some things that would drive me crazy. In EmbARK, there were also lots of things that I really, really liked. There were things, especially leveraging [EmbARK] Web Kiosk as a web collection search platform (which I’ll get to in a second)—hoops I was able to sort of cajole it into jumping through that were really sort of fun and convenient. And so there were some nice things there. But yeah: monolithic data file, whatever, there are some things that I’m not so crazy about. Incremental updates to the web? Nope, one whole big thing. But you know, tradeoffs: life is full of them. [laughing]

[Jones]: I made the face because the Peabody had EmbARK and moved to TMS [The Museum System]. And so, in any collections management system, you have to be careful what you wish for.

Sure enough. And I’ve migrated myself [in the sense of migrating as a museum technology user] from a Davison Art Center in-house system to EmbARK at the Davison Art Center and now to TMS at the Yale Center for British Art. And, on the one hand, there are lots of great things TMS can do. On the other hand, there are definitely things I miss from EmbARK that TMS cannot do. And you know, since I’d not been so closely involved with TMS until I moved—I’d just known lots of people who have worked with it over the years—yeah, there are certainly things that I found surprising, like, “Wait, what? We can’t do that?” But then there’s a ton of other stuff that we can do because you can, of course, do all sorts of raw SQL [Structured Query Language] stuff under the hood.
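[A minimal sketch of the kind of “raw SQL under the hood” work described here, in Python with pyodbc and assuming a SQL Server backend, which TMS uses; the server name and the table and column names are illustrative only, not the actual TMS schema.]

```python
# Sketch of an ad hoc query against a CMS's SQL Server backend.
# The DSN details, table, and column names below are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=tms-db.example.edu;DATABASE=TMS;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Find object records whose dimension remarks mention an irregular edge.
cursor.execute(
    "SELECT ObjectNumber, DimensionRemarks "
    "FROM ObjectDimensions "
    "WHERE DimensionRemarks LIKE ?",
    ("%irregular%",),
)
for object_number, remarks in cursor.fetchall():
    print(object_number, remarks)
```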

[Marty]: I was just going to jump in and say that I appreciate your highlighting both aspects of the problem there, because there are two kinds of invisibility there. One, you’ve got the invisible work of the people who are already working in the museum—and why should we buy an expensive new system when they can just keep that old one working and it doesn’t cost us anything, right? And then you’ve got the invisible work of when you move to a new system and suddenly things don’t work the way they used to, and you have all kinds of unanticipated challenges. And that adds a lot of work onto the other side that people aren’t seeing either.

Yeah, yeah. And as is sort of common coin in our professional world: even though CMSes nowadays, strictly speaking (aside from a few really terrible, long-ago legacy ones), don’t really have lock-in in a strict technical sense, they have immense costs of switching in a practical, operational sense, due to staff time and other factors. Vendors, at least in all of the cases that I know much about, have really done the right thing and made it perfectly feasible to extract everything and put it into some new environment. (It’s not that there’s no way to export the data; I’ve heard tales of that in certain systems that I won’t mention because it’s all hearsay, [and] I think we know what some of them might be.) But even with that, even without a technical roadblock to it: oh my God, I mean, the switching costs. It’s [true] for any museum, regardless of whether they have three staff or a hundred staff or hundreds of staff—relative to the scale of the staff that they have, how is a CMS migration anything less than a [case of] “everything else kind of goes on hold for two or three years”? That is, especially [close to everything] collections-data related, right? And that’s if everything goes well. [laughing] Well, no, I’m exaggerating. Two or three years, including a little contingency time.

[Marty]: Just a quick question here. Because you went from the paper-based card catalog—you said you also had HyperCard—to a FileMaker Pro migration, then from the FileMaker Pro system to EmbARK. How would you compare and contrast those two migrations?

Yeah. Interesting. Well, happily, basically all of the paper card content had been keyed in before I came on. So it was pretty much all in HyperCard except for, you know, error correction: “We’d better check the card, because this seems to be a typo” kind of stuff. So as far as my migration-planning and -execution moments go, it was HyperCard to in-house and then in-house to EmbARK. HyperCard to in-house was simpler in one way and more complex in another: simpler because the field set was significantly smaller at that point. It was, gosh, I don’t recall exactly, but I would guess there were probably like thirty to forty fields in the HyperCard stack.

But the content in those fields had been freshly keyboarded in by students (whom I’m not meaning to disrespect here: they did tons and tons of work, they did it diligently, and it could not have been done any other way), and it came off catalog cards that had been produced in a time when there was a very, very high level of presumed expertise on the part of anybody who would ever actually have access to look at them. So those cards, as were many in many museums, were full of the world’s most cryptic and often ambiguous abbreviations—the kind where, like, a print curator would totally know what this thing means but other people might not. You know: “B.67”. Is it [a number from a reference work by] Bartsch? Is it [from] one of like six different catalogues raisonnés that also were written by somebody whose name begins with “B,” where the curator knows that it’s not Bartsch because it’s a 20th-century print, so it couldn’t possibly be Bartsch? All of these things. So: really, really, let me say, much-appreciated data content, but really, really not yet cleaned-up data content. So some tricky stuff around there, also in regard to what fields content had been entered into and what that [field location] meant implicitly.
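[As a rough illustration of what an automated flagging pass for these ambiguous abbreviations might look like: a minimal Python sketch in which the abbreviation-to-reference map and the matching pattern are hypothetical, not drawn from the actual DAC cleanup work.]

```python
import re

# Hypothetical map from catalogue raisonné abbreviations to candidate
# references; "B." is deliberately ambiguous, as in the example above.
CANDIDATES = {
    "B.": ["Bartsch", "another catalogue raisonné beginning with B"],
}

REF_PATTERN = re.compile(r"\b([A-Z]\.)\s*(\d+)\b")

def flag_ambiguous_refs(field_value: str) -> list[str]:
    """Return notes for human review wherever a reference is ambiguous."""
    notes = []
    for match in REF_PATTERN.finditer(field_value):
        abbrev, number = match.groups()
        candidates = CANDIDATES.get(abbrev, [])
        if len(candidates) != 1:  # zero or multiple candidates: flag it
            notes.append(f"{abbrev}{number}: candidates {candidates or 'unknown'}")
    return notes

print(flag_ambiguous_refs("B.67"))  # flagged for curatorial review
```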

And also, in that first migration, as I was running test migrations, I was also still building and doing sort of iterative tuning of the design of the FileMaker-based system, based on how that content actually lived there, and based on things in the user interface where I would realize, “Oh gosh, I know people are going to want to be able to do this [some specific, foreseeable user action].” And so there was a fair amount of sort of recursive—you know, “How much of this does it make sense to address with a very large number of systematic cleaning passes through the data in HyperCard? How much does it make sense to do externally to both systems in, you know, big tab files or whatever? And how much of it does it make sense to tactically hold off on until it’s in the new system, where I know I could craft certain other kinds of automated things?” And the affordances for running cleanup, either in automated ways or in ways that can be optimized for visual display and, like, human mind-ear—sorry, mind-eye—recognition. So that was a pretty idiosyncratic process that kind of recursively folded together data cleanup, data migration, system tuning in the new system, and all of that.

The second migration for me, out of the in-house system and into EmbARK, was, among other things, starting with a significantly cleaner set of data content. There were still big chunks of that that I knew still needed a whole lot of normalization, but I knew what they were by that point. And they could be effectively migrated not as clean as I knew I wanted them to be, but I knew that there’d be no tripwires there for things going awry actually during migration. And that was somewhat quicker too, because the system it was moving into was fully built. [laughing] There wasn’t this, like, combined effort to figure out what the system should really look like for its first production version while also, like, getting the data into it. So that was a little bit more of a sort of typical process: do a test migration first and see how a small number of complete records land [after phased data transport for successive groups of fields in those test records]; what wants to be remapped; what needs to be concatenated before it can go in, if it was more granular in the source system; what needs to be split out because it naturally wants to live in a more granular way in the new system; and what wants to stay stuck together in a less granular way in the new system, even though it could be broken out—because if it gets broken out in the new system, then it all has to be inevitably re-concatenated for every single foreseeable display use off the new system, even though it can be [broken out]. And here, like, dimensions, right? They’re sort of a classic case of this. Because, especially in a print collection, they’re often full of things like “Plate: 147 millimeters” [for a width dimension, and] in parens: “irregular at left edge”—or, you know, with all of these weird attributes and qualifiers and hedges and clarifications.

And so there were things like that where, for that migration—I can’t remember which approach I adopted at first—eventually, both in the in-house system that I built and in EmbARK, we actually maintained two representations of dimensions. One was a, you know, numeric field per dimension, per dimension type: plate width would have its own little “132” [numeric value] for millimeters or whatever. There’d also be a human-readable string that would be effectively a concatenation of that [and other dimensions]. And if I recall, eventually in EmbARK I was able to do some finicky-enough concatenation logic [for downstream web display] that we dropped the flat, free-text version of it, and it’s all built off of the granular stuff. But that’s an example, thinking of migration, where a sort of strategic deferral of something that is definitely going to be time-intensive made sense. Where, sure, migrate over the purposely redundant stuff as is, because it serves needs that can’t be met in a more systematically elegant but time-intensive-to-build-up-front kind of way, knowing that sometime after it’s in the new system there will be a time to build that out. So [the two migrations were] sort of different in their nature, but—yeah.
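[A minimal sketch of that dual representation and the concatenation logic described here, in Python; the field names and formatting conventions are hypothetical, not the actual in-house or EmbARK logic.]

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Dimension:
    """One granular dimension record (field names are hypothetical)."""
    dim_type: str                    # e.g. "Plate", "Sheet"
    height_mm: float
    width_mm: float
    qualifier: Optional[str] = None  # e.g. "irregular at left edge"

def display_string(dims: list[Dimension]) -> str:
    """Build the human-readable string from the granular numeric fields."""
    parts = []
    for d in dims:
        part = f"{d.dim_type}: {d.height_mm:g} x {d.width_mm:g} mm"
        if d.qualifier:
            part += f" ({d.qualifier})"
        parts.append(part)
    return "; ".join(parts)

print(display_string([
    Dimension("Plate", 132, 147, "irregular at left edge"),
    Dimension("Sheet", 180, 210),
]))
# -> Plate: 132 x 147 mm (irregular at left edge); Sheet: 180 x 210 mm
```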

So let’s see. Yeah, okay, I know where I was here in my notes for this. In my notes I had sort of two ALL-CAPS callouts to things I wanted to think of as, like, lessons that came to mind for me about how invisibility plays out. So I’ve got Lesson 1: Invisibility can invisibly steer decisions away from important strategic factors. And I think that where that comes from in the story I’ve just been telling is pretty obvious: the invisibility of the very significant amount of time it took for an already on-staff person to design, build, and manage an in-house system drove an institutional decision unavoidably towards going with an in-house system, even though (and I never actually did the [retrospective] arithmetic on this, but I am absolutely sure), adding up all of the various indirect but definable costs of that approach, surely it exceeded what would have been paid to, in that case, Gallery Systems, had we moved to EmbARK back in like 1998 or whenever. But the fact that the vendor cost would have been highly visible, year after year, in an operating budget, while the cost of the allocated staff time was already bundled and not broken out in any way [in the University’s enterprise financial systems] that could be made part of that case-making process, drove that decision in a direction that, while it worked out just fine, to my mind was not the most strategic place for it to land right up front. Which [migrating directly to a vendor-developed system rather than initially to an interim in-house system] also would have avoided a second migration stage, which was by its nature pretty time-intensive.

So that was like late ’90s. The EmbARK moment got us up to about 2009–2010. There’s obviously ten years in there, which is a pretty significant amount of time. Broadly speaking, work just kept churning away. You know, there was data cleaning in the in-house system. There was “Hmm, how do we make the case for EmbARK?” [which continued to be the top candidate system, due in part to its functional parity for Mac client machines]. There was “Hmm, how do we improve the website?” There was “Hmm, how do we make the collection searchable online?” There was “Hmm, how do we create more images of collections objects?”

And a quick sidebar on images: I had started doing direct digital capture back in the ’90s using a 4×5 camera and a digital scan back—a Better Light back, which, I mean, awesome equipment: made beautiful, beautiful captures. Super, super slow and finicky, which was the nature of those things. You know, it would be like a two- or three-minute exposure, and if there was a power hiccup or a bird [hypothetically were to fly by a window and momentarily affect the light] (well, no, I had the windows blacked out!), if a bird were in the reapportioned [temporarily repurposed office] studio and, like, flew in front of a lamp [for a split-second during a long exposure], you know, it would have been all over. But [leaving aside any hypothetical birds subtly affecting the light] if the floor would vibrate a little bit, there’d be a couple of scan lines that would get glitchy [and require reshooting]. So: painstaking, slow, you know, it was low-volume in production for that reason. And it was really useful—some of those images are still out there; [for example,] the Goya Caprichos images that are on the DAC website and downloadable as open access images are Better Light captures from back in the day. That kind of makes me happy. They’re still out there living their life in public.

But anyway, so all these things were happening. And alongside that, and in ways that fed in all sorts of ways into it, I had found it increasingly valuable to become more and more active in MCN, the Museum Computer Network, serving in some leadership roles and finding in that organization a musetech [museum technology] community that obviously transcends any one museum, because there were people there from all different museums. And in that way—thinking about invisibility within one’s own institution, in regard at least to certain kinds of work—both MCN and Museums and the Web—now MuseWeb—have been really central to my career in those ways. Not just for, like, “Things That I Learned There,” and not even just for the professional networks of people, many of whom I now count as friends as well from those worlds (which is really top of list for me [in more personal ways]), but also just [laughing] the times of feeling, from time to time, like, “oh my God, you know, I’m putting all of this effort in and nobody understands me” [said with mock tears of self-ridicule] or whatever. You know, wherever we go in our minds with those things. For the transcript, I was sort of, like, laughing at myself as I said that.

But there’s some truth there still, which is to say: working in a vacuum for an extended period of time—a vacuum not in regard to, you know, “being unappreciated by colleagues” or something in a general way, which [being appreciated by colleagues] I have always felt, but in regard to being able to have more of a nitty-gritty conversation with people who are similarly engaged with similarly finer points of some pretty technical stuff, and all of the human dynamics that happen around that stuff, too: the way that that work lives in an institution, and the way that conversations within an institution are about that kind of work, or implicitly rely on that kind of work (without [colleagues] necessarily understanding that they do). Those forums have been so important for a sense of community that—I hadn’t thought of this [the upcoming point] until this moment, actually, but thinking hypothetically, were there no such thing as MCN or Museums and the Web, I bet that I would have, at some point or other, walked away from this kind of work over the years. Because it’s been often a kind of mutually nurturing and restorative space to be [in] from time to time, and to realize, “Okay, you know, we’re sort of all in this together, each in our way, each in our institution. And not only can we share some things we’ve learned with each other, but we can, by having those conversations, come to understand that, yeah, there are a lot of us who are finding a way through these not just technical things, but the organizational positionings of those technical things, and the sorts of conversations that can be challenging to find ways through that keep that work sufficiently resourced.” Which is something I’ll loop back to a little bit further on as well.

So, I guess in that sense, I had another—this is like “Lesson 1.5” (I wasn’t sure it was a whole lesson, but—[laughing]): We are sometimes most visible to one another across institutions, and that wider community can be a key resource in keeping us going. Which is to say, in regard to the actual, like, “What is it that we do?” aspects of things, there’s less invisibility sometimes when we, you know, walk out of our museum and talk with people who are at other museums.

So, let’s see. Getting back to the narrative thread. I said I’m not good at, like, sticking to a storyline, but I’ll jump back into it now. Now we’re at 2012 or so, when I did one of the things that I will admit to taking the most satisfaction in from my time there, which was leading the development and launch of the DAC Open Access Images policy, and right along with that, preparing and launching the DAC’s online collection search in what I called a public alpha. Strictly speaking it wasn’t really alpha, because it was running on EmbARK Web Kiosk [which is a mature platform]; but it was [such] a highly customized Web Kiosk that, when I launched it, there were still a bunch of things I didn’t really have working right yet. But I knew that people would still find it useful to be able to more or less search for things [in the collection from anywhere with web access] and find some things, even if it wasn’t close to optimized. [laughing]

So—did that, and the open access policy was a case in which being at a really small museum offered a certain kind of nimbleness: the Davison Art Center was able to be a pretty early mover, and by virtue of that make Wesleyan a pretty early mover, in making highly accurate images of public domain objects in the art collection freely available for any use that people wanted. At that point, in the year or so leading up to that, there had been a few high-profile launches of similar policies: the National Gallery [US National Gallery of Art], LACMA [the Los Angeles County Museum of Art], the Yale museums (Yale Center for British Art and Yale University Art Gallery), one or two others here and there, but not a whole lot yet. Clearly this was a thing that was important. Clearly it was a thing that, in my own little worldview, aligned with my sense of why I was doing a lot of the work I was doing: to make these things [images of collections items] as available as possible for people to do whatever they dream up. And since it was such a tiny museum (I mean, three staff people, right?): conversation with my boss, conversation with a Dean, Dean flew it to the Provost (I think, if I recall), and there was an approval. You know, I put together a one-sheet—a one-page PDF—that laid out why I felt that this move would be aligned with the University’s mission, which is of course fundamentally educational, and how, further to that, it could help position the University as a leader in that area. And lo and behold: “Approved, go ahead.” So okay, cool: we’ve got an open access images policy now [for me to finish developing and fly by counsel for legal vetting].

In a larger museum with lots of departments, various concerns from various angles (all of them grounded in experience and legitimate in their way, whether or not I personally would agree with them) would tend to make figuring that out a somewhat longer, slower process. So that was a case where being in a tiny museum, resource-constrained as it was, actually enabled something to happen really quickly. It didn’t take huge technical resources. For actual image delivery, there were ways to largely leverage existing university infrastructure at sort of enterprise scale. You know, there was already a content management and delivery platform that I could sort of repurpose for actually delivering the images. Since then they’ve shifted over to AWS [Amazon Web Services], but there was a way [in designing the initial architecture] to just set up a little one- or two-line redirect [Rewrite] in an Apache config file for our main website so we could have stable URLs [via an abstraction layer that enables durable public URLs to work with future changes to the content delivery platform]. That’s why people can follow the same links now and get the images from AWS that they used to get via a Content Management System called Xythos that Wesleyan used to run and has since decommissioned.
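[To make that mechanism concrete: a hypothetical sketch of the kind of one- or two-line Apache mod_rewrite redirect described here; the paths and hostname are illustrative only, not the actual DAC URLs.]

```
# Stable public image URLs redirect to whatever the current delivery
# platform happens to be; only this rule changes when the platform does.
RewriteEngine On
RewriteRule ^/images/oa/(.*)$ https://current-delivery.example.org/dac/$1 [R,L]
```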

So it was really nice to be able to make that happen, and also some satisfaction in having architected it in a way that actually did survive a couple of major, under-the-hood enterprise architecture changes at Wesleyan right about the time that I moved on from the University, without leaving some terrible mess for people to solve. So, “phew” for that. [laughing]

And, let’s see, 2013. I guess in 2012–13 a lot started happening, or a lot of things that I’d wanted to move forward started getting more traction for various reasons. In 2013, I was able to start running rapid collections imaging projects in some summers. This became feasible because, as digital single-lens reflex [DSLR] camera technology improved in the early to mid-2000s, the really slow, painstaking, beautiful Better Light four-by-five-inch scan-back capture that I mentioned was no longer kind of the only way [aside from medium-format camera systems that would have been prohibitively costly] to do direct digital capture that would give a really good, usable, use-neutral, high-res, highly color-accurate image. And bubbling up around the museum imaging world, more and more places were heading towards, for many kinds of capture at least, relatively high-volume capture using DSLRs: setting up really consistently lit capture stations, moving as much [art] work as they could in front of the camera and capturing it, and having really systematic workflows and empirical quality assessment with color charts, checking Delta-Es [a quantitative measure of color variance], and all of this. So that was fun to get set up.
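[For a sense of what “checking Delta-Es” involves: a minimal Python sketch using the simple CIE76 formula, one of several Delta-E formulas; the patch values and any pass threshold are illustrative, since the project’s actual tooling is not specified here.]

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 Delta-E: Euclidean distance between two CIELAB colors."""
    return math.dist(lab1, lab2)

# Hypothetical reference vs. measured L*a*b* values for one chart patch.
reference = (54.0, 80.0, 67.0)   # published patch value
measured  = (53.2, 81.1, 66.4)   # value read from the test capture

de = delta_e_cie76(reference, measured)
print(f"Delta-E (CIE76) = {de:.2f}")  # smaller is better; a project sets its own pass threshold
```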

We were in a funny position, though, right? Because (a) we didn’t have enough permanent staff for there to be any way to execute on this [at scale and speed] without hiring in for it; (b) we were tiny and we had no track record of ever having done this successfully; and (c)—I don’t know how many letters I’m going to get through here!—I have served on enough IMLS [US Institute of Museum and Library Services] grant review panels to know that if I ever saw an application come in from a 2.5-FTE [Full-Time Equivalent permanent staff] museum saying, “We’re going to do this highly accurate systematic collections imaging, and we’re going to pull together a project team and get everybody up and running together, and do all this stuff, and we’re going to aim to get this many thousand images over three summers” and whatever—I would have looked at it and thought, “Oh, man, this is really great. Their heart’s in the right place. They really want to do this, but there is no way to know what their chances are for success here.” So we knew that trying to get some sort of significant grant right up front would just be—not time yet for that. [laughing]

But happily, there was a really supportive Dean at Wesleyan who was able to direct one summer’s worth of funding for it to us in 2013. That was something that really made all the difference, because I already had the project design sort of roughed out in my head, and I was able to build that out more completely. This is where past, pre-museum life as a photographer and photo lab manager and stuff also kind of helped, because I was sort of wired to engage with that. And I was able to hire in a photographer who had significant experience doing art imaging—she had not done the sort of highly consistent, quantitatively quality-controlled capture that we were going to do, but she was really eager to learn it.

And two people, whom I called Imaging Specialists, were largely sort of second-stage quality control [QC] after the photographer: they would do quick QC and metadata embedding and pushing derivatives around, and fine rotation and crop and all of that kind of thing. And then two student positions that were basically art handlers, but were really encouraged to learn what was going on in the more technical roles that they were not filling in their own capacity. And that turned into this, like, great little team of six of us going crazy for six weeks, shooting as much as we possibly could. That first run, I was ironing out all sorts of [factors] like, “Okay, well, now we know [that] when we’re trying to run network shares and push this number of really big image files across them really fast, strange things happen. Okay, so we’ll go to little hard-drive arrays and run them up and down the hall—and look, that’s working now.” And so, you know, some crazy workarounds here and there, but we shot a whole lot of prints and it came out really well. And then as a—I guess, strictly speaking, secondary kind of value, but highly intentional and in some ways even more important than the actual images we made that summer—we also had a track record to point to.
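[A minimal sketch of the kind of metadata-embedding step described here, in Python calling ExifTool (assuming ExifTool is installed); the file name and field values are illustrative, not the project’s actual metadata schema or tooling.]

```python
import subprocess

def embed_metadata(path: str, title: str, creator: str) -> None:
    """Embed basic descriptive metadata into an image file via ExifTool."""
    subprocess.run(
        [
            "exiftool",
            f"-XMP-dc:Title={title}",     # Dublin Core title in XMP
            f"-XMP-dc:Creator={creator}", # Dublin Core creator in XMP
            "-overwrite_original",
            path,
        ],
        check=True,
    )

# Hypothetical file and values, for illustration only.
embed_metadata("example.tif", "Example print title", "Example Artist")
```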

And that then enabled us to write for IMLS grant support, and we were fortunate to get funding from the IMLS to run very similar projects—of course, each summer iterating a bit, fine-tuning things, but basically adopting the same model—for three summers. And then, subsequent to that, we were able to do so under a second grant as well, which funded further imaging. So, after that first summer of it in kind of really scrappy startup mode, we were able to turn that into a recurrent thing. That said, even with most of the actual work [being performed by temporary project-team members who were paid mostly from the grants], along with me trying to remove obstacles from people’s paths and help them understand what the big aims were and all of this, it was still a really resource-intensive thing for a museum that small to do, right?

For the grant, of course, there was a matching requirement that we were largely able to meet through, you know, shares of the salaries of people who were already being paid and were putting a lot of time into the project, each in their way. But it was still something that would tend to displace a lot of other work that typically would otherwise have happened in the summer, because “We’re running an imaging project again this year! We’d better clear the decks,” and that kind of thing. So things like, you know, big inventory projects or whatever might not be happening at that same time.

But it was really fun to design and direct those projects. And it was also really fun to give some early-career folks a chance to get some real-world experience doing those kinds of work and get a track record themselves for their résumés. Some of the student interns—well, we didn’t call them interns—the student employees (Imaging Project Assistants), and the Imaging Specialists, who were outside hires but really early-career people: it was great to see them then move on to do other things where they were able to leverage some of what they learned, and the reputational value for them, too, of having been part of one of those projects. So they [those imaging projects] were always intensive and exhausting and mostly fun.

And so, let’s see. Oh yeah, mentioning project staff reminds me that I did also want to touch on the invisibility of musetech work and permanent staffing levels. Paul, I know you knew this was coming; it’s something that I’ve thought out loud [about] a little bit in email. And this is something I may be careful to craft in a way that doesn’t risk misreading in any way (before this is published), because it could risk that. But I think that, writ into a single line, my Lesson 2 is: Invisibility can pose a serious challenge to appropriately robust staffing. (I may tune this a little bit at the later stage too, but that’s the sort of clearest short version I could come up with this morning.) And to say a little more about that: in my experience, the frequent invisibility (sorry, I just need to trigger myself with a note here—I’m really not reading it, though [laughing], I do say) of musetech work to internal institutional stakeholders (not so much [to external ones]; I mean, we know it’s invisible outside the museum for the most part, unless it’s a really socially engaged project of some sort)—that internal invisibility, I think, can really readily lead indirectly to a risk of very fragile understaffing in certain areas.

One case in point would be systems administration for, like, core collections systems: the kinds of things that, so long as those systems aren’t visibly breaking or causing trouble in some way, can be pretty invisible. In a good sense and a bad sense, they’re a little bit like plumbing. You know, if they’re working, it’s great; if they’re not, you really know it; but you often don’t think about whatever sorts of ongoing maintenance and proactive engagement they require, which takes very specific kinds of knowledge. So if they’re not breaking, it can look like staffing levels are fine, even if they’re not. And that’s, like, 100% understandable; none of what I’m saying here is to disrespect or impugn people who don’t work in these areas, and I’m not saying that anybody should know better or anything like that. It’s a tough thing to help people understand, and it always has been, at least in my experience.

But when things are working, it can mask the fact that there can still be a dangerous degree of fragility in whether the overall pool of people who are our technical staff [has the capacity to keep those systems performing as required]—if there is a pool, or a “pool of one,” or however many there may be—because people don’t necessarily have time to cross-train in ways that avoid having a single point of failure if one person becomes unavailable. If everybody is maxed out with their primary responsibilities, then there’s not really a great way to ensure that all of the critical things that any one person knows are also known, at least to some degree, by other folks—so that if any one person does become unavailable, then you have some sort of smooth failover, even if it’s not as well-run, as quickly responded to, or as smoothly improved as it would be if the person who really lives and breathes that system every day were there. At least there’s a reasonable level of predictable continuity—foreseeable continuity—across those kinds of transitions. And of course documentation can help with that, but I think another thing that can be difficult to help people really get a handle on, when they haven’t been part of keeping a complex system running, is that no matter how solid documentation about that system is, losing the local technical knowledge that lives in people’s heads—about how to apply that documented knowledge to the way that a system is configured and run—can be really crippling if there isn’t a colleague who can just sort of jump in and say, “Oh yeah, I pretty much know how that works, and I can help take care of that.”

And I think there’s always an aim of fostering that kind of cross-training, but it can bump up against sort of irreducible capacity constraints, where if somebody is spending enough time learning how to be the backup for this invisible thing then they are not doing things that actually are mission-critical that they are the primary person for. So it’s a tough thing, this matter of, you know—these human repositories of hands-on operational knowledge, right, and how shared that can be. So much of this always comes back to people.

I mean, even when we’re talking about technologies and musetech and systems and these things, the only way any of this stuff works is when people know how to keep it working and make it work better. And the more invisible that is, the tougher it can be to help people understand why it should not be left in what’s actually a dangerous state. Or a “fragile” state is probably better than “dangerous” there. (I may amend that one when I’m looking through the transcript.) And this is true, just to be clear, even—this is not about, like, institutional decision-makers not understanding things they should, or something. I mean, all of the people at those levels that I’ve reported to, or up to, in the two museums I’ve worked in have been extremely smart people operating in genuinely good faith and making what are, based on what they know, the best overall strategic decisions for the institution. The super-tricky part is communicating the importance of keeping invisible work happening, and able to continue happening, across any unforeseeable transitions in staff. And I have not found a good solution to that. I’ve had very limited success in moving those conversations in the directions that I think would lend the most resilience to the operations that I’ve been part of over the years. It’s tricky. I mean, it can look like “working properly” is just the natural state of those systems—and it can look that way very reasonably—when, in fact, it’s a condition that comes about, in large part, only because of the expertise that staff are applying to those systems every day to keep them running in ways that people don’t notice, because they are working.

And the other key fact there, as I mentioned, is that that expertise really needs to be shared for operational resilience. And if those systems fail, it’s all too easy for that to read as a failure of an individual rather than a structural issue [of staffing levels]. And by that I mean: systems that complex always fail. They’re not always failing—although in a sort of philosophical sense, I guess they are—but in a very practical sense, you know, they’re always on the edge of failing. You just don’t know when they will. You don’t know how they will. They will. When they do, people need to be able to jump right in and make them work again the way that they’re supposed to work. And if that’s being covered adequately by even just one person, then it’s easy for it to seem like the natural state of that system is that it just works—so great, you know, so be it. And it’s just a Catch-22 in that way, right? If it’s working, then it must be fine that it’s working with the staffing level that we have. If it’s not working, then it’s, “Oh, why is it not working? It needs to work.” And that’s a very reasonable response. Again, I’m not judging that response, but the key piece of it is the invisibility of the ongoing work that it takes to keep the already-working thing working, in a way that makes that seem to be its natural state. That’s a tough one.

So: people, people, people. [laughing] I mean, these are technologies, but it’s people. And all of that was, by way of example, focused on systems—I’m thinking here of things like collections systems, obviously, which would be top of list for that. But I think it also applies in all sorts of ways to other digital-technologies work in museums and, of course, for that matter, probably all work in museums and most work anywhere: ultimately, whatever sorts of technologies are in play, whatever sorts of platforms and systems and tools and software are in play, it’s people who make these things work, who keep them working for other people, who answer a technical support email to help the people who usually make them work figure it out when they can’t figure out what the issue is. And all of that can be so invisible, in ways that can pose continuing risks that can be difficult to communicate. I don’t know, I guess I keep coming back to the risk-mitigation thing, which is not necessarily the most zingy, engaging sort of thing, but it’s where my mind has been a lot in recent years. [laughing] And I think the reason that’s surfacing so much in these comments is because, to my mind, it is so closely bound up with the framing rubric of invisibility. The less visible something is—and by “something” here, I guess I really mean the less visible the particular kinds of expert work that can be required are—the more there is a risk of laying the groundwork for difficulties that could otherwise have been prevented [by, in context-appropriate ways, more robust staffing or other resourcing]. Let me put it that way.

So, I guess, since there has been an occasional shred of narrative in this, let me get back to that to tie it off—I know we’re over time. I was at the Davison Art Center for twenty-five years. In the later years of that run, I had come to realize, “My gosh, I’ve been here for more than twenty years. Everything I know leads me to think that, statistically, I am likely to see my career through here, because people tend not to leave a place after a couple of decades, if they’ve been there that long.” And then, to my surprise, certain things changed. Certain opportunities popped up, and it just proved to be time—now about two years ago—to look into a new possibility, which did pan out. And since 2019 I have been the Head of IT at the Yale Center for British Art, which surely is another reason why so much of what I’ve ended up thinking out loud about here has been systems-related. Because while my former job at the Davison Art Center covered a pretty wide range of areas—including imaging, and data content itself, and a lot of other things as well as systems—my current role is more specifically focused on infrastructure of various sorts, for the most part.

So that tends to be more front-of-mind more of the time for me, although [laughing] I’ve always spent a lot of time obsessing about that sort of thing. And I may as well say, since we’re like 99% of the way through a big curve, that I will actually be retiring June 30. Still, I’m probably an MCN member for life (it would surprise me if I were not), and I’ll be seeing through a couple of professional service roles with AAM [the American Alliance of Museums] and with MCN for another year after that. But I will be, I guess, no longer a fully employed, active musetech professional—though maybe I’ll make room in this field in which many people are eager to work. If all things remain equal in the structural sense of this, who knows: maybe there will be a great opening for somebody else to engage with.