Oral History of Museum Computing: Howard Besser
This Oral History of Museum Computing is provided by Howard Besser, and was recorded on the 12th of April and the 14th of May, 2021, by Paul Marty and Kathy Jones. It is shared under a Creative Commons Attribution 4.0 International license (CC-BY), which allows for unrestricted reuse provided that appropriate credit is given to the original source. For the recording of this oral history, please see https://youtu.be/ElKqWu7Wke0.
I started working as a volunteer for the Pacific Film Archive, part of the University Art Museum in Berkeley, in 1972. It’s now called the Berkeley Art Museum / Pacific Film Archive – it’s “slash Pacific Film Archive” now. I was doing things around cataloguing the collection, using computers to help catalog it. We got a grant from the National Endowment for the Arts specifically for cataloguing our Japanese collection, and so I was essentially doing computerized typesetting that would also feed into an eventual database, but at that point we had no database capabilities. We started that project in about 1977, typesetting the records of different films.
Moving on from there, the administration of the museum saw how kind of sexy that typesetting was on that grant project (and the grant project was paying my salary at the time), so they decided, “Hey, let’s take our regular printed calendar, and let’s make a souped-up version using computers to typeset the calendar.” So they hired a very expensive designer, and I worked with the designer… the designer was only interested in what the calendar looked like. We went from a weekly to a monthly calendar at that point, but I was interested in saving the records that we were producing. At that point in time, the Pacific Film Archive was showing about 15 films per week, and the calendar included all of the information about the filmmaker, the stars, the date, the distribution company, as well as a synopsis of each film. I was interested in saving all of that, and making sure we had it in such a way that it could enter a database. So I developed a very simple user interface, where the computer would ask you for director, film title, film date, running time, etc. It would then nicely typeset all of that, but also save all the information in machine-readable form. The museum was really only interested in the immediate issue of typesetting; they were not interested in saving the data. But I was struggling, making the process a whole lot more complicated, in order to save those records.
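[To make the dual-output idea concrete: a minimal modern sketch, not the original 1970s system, in which the program prompts for each field, emits a typeset-style calendar line, and appends the same record in machine-readable form. All field names and formats below are invented for illustration.]

```python
# A toy sketch of prompt-driven entry producing both a typeset calendar
# line and a machine-readable record (illustrative names only).
import json

FIELDS = ["director", "film title", "film date", "running time"]

def capture_record(answers=None):
    """Collect one film's fields (interactively when answers is None)."""
    return {f: (answers[f] if answers else input(f"{f}: ")) for f in FIELDS}

def typeset(record):
    """The human-facing calendar entry."""
    return (f"{record['film title'].upper()} "
            f"(dir. {record['director']}, {record['film date']}, "
            f"{record['running time']})")

def save(record, path="films.jsonl"):
    """The database-ready side: one JSON record per line."""
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

entry = capture_record({"director": "Yasujiro Ozu", "film title": "Tokyo Story",
                        "film date": "1953", "running time": "136 min"})
print(typeset(entry))  # nicely formatted calendar line
save(entry)            # same data, saved for an eventual database
```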
So I should also add that at this point, mid to late ‘70s, I also started getting involved in, or at least tracking, what was being done in similar institutions. The closest in structure to ours was MoMA, which has a film department in an art museum, so I was tracking what MoMA was doing. I was dealing with Jon Gartenberg at MoMA, who was a member of the Museum Computer Network in those early days. But I was very dismissive of what they were doing. They were using SELGEM [Smithsonian], GRIPHOS [MCN], punch cards that only gave capital letters. I had been approaching all of this from a typesetting point of view, where we could make things look very nice, and this limitation of only capital letters and punch cards just didn’t seem very viable to me. I was working on terminals connected to a minicomputer, versus their 80 characters per keypunched card, and how limiting that was. And I saw everyone there as being really kind of backwards.
Around 1980, there was a meeting of all the film archives to try to develop some type of standards for cataloging their films. This was co-sponsored by the Library of Congress, with, I believe, money from the NEA, and they brought us all together in Washington. There was no money to send me there; I had to foot the bill myself. My museum wouldn’t send me or wouldn’t pay for it. And out of that grew the National Center for Film Preservation, which became an umbrella group for records about films in cultural institutions. It wasn’t a union catalog; it was more that they were trying to push what the American Film Institute had started in terms of cataloging all the films that had been made or produced in the United States. That operation lasted for about eight years and was centered at the American Film Institute in L.A. Our former NEA Program Officer, Stephen Gong, became the head of that particular project, and I’ve interacted with him many, many times over the years on different projects. We did a project last year, in fact. He’s now head of the Center for Asian American Media, which is the producer of the “Asian Americans” series airing on PBS as we speak.
Soon after I started at Pacific Film Archive, once we got this grant (the grant to do the Japanese film collection catalog), I became the expert in computers for the art museum. So I was doing everything from troubleshooting microcomputer problems on people’s desktops, to databases, to planning, and just a wide variety of things. And I was always being paid with soft money; I was never on hard money. By the early to mid 1980s, probably 1983 or 1984… that was the Reagan era, and one of the mantras was “trickle-down economics.” So I kind of took that to heart and thought, “How are we ever going to get the money to catalog the collection? Let’s think big. No one gives people money to catalog a collection. They give people money for some large, outrageous, sexy project, so let me propose a large, outrageous, sexy project and the money will trickle down to cataloging.”
So I wrote a paper (which has since been lost), probably around 1984, saying, “Hey, look. We have these facilities in the Pacific Film Archive. It’s part of the university. We have a mission for the university to help in teaching and education. Well, we have a tremendous program bringing directors to show the premieres of their films. We have a tremendous program of older classical films. How about we invite directors to come in with one of their older films? A film that has been beaten up and destroyed, with lots of frames missing and lots of scratches. Let us digitize those films, every frame. Let us develop a way to let the director color-correct their film, rebalance it, clean up scratches and imperfections, and then we can strike a preservation-quality print of the film as restored. And we can do all of this using computers.” This was 1984, three years after the IBM PC came out. The biggest hard drive you could get was 30 megabytes, right? This was not a viable kind of plan. But I laid out the plan.
Somehow the paper circulated in various places, and by 1985 there was a new Vice Chancellor for Computing at the University of California, Berkeley. The paper happened across his desk. He read it. He saw it as leverage to do what he wanted to do. His main project was to build a network linking the buildings on the campus, and linking them to other places in the world. But networking was not really well received at that point in time. He had on his side people in the sciences, particularly physics, who could see the reason for running network wires to different buildings and having access to material that was not on the campus. But he had no support from the arts or humanities. So he saw this paper that I wrote as an angle he could use: “Hey, film, art! Everyone can relate to that. Getting art on your computer screen, getting film on your computer screen, getting it from the museum to classrooms, to researchers, to people in all different places across a network.”
So we set up a meeting between the Assistant Director of the museum, myself, and this Vice Chancellor, Ray Neff. The Assistant Director of the museum was a guy who really worked angles, and he was insistent that my office stay at the museum. Ray Neff bought this. He agreed to pay my salary. He assigned two computer programmers to this project, and he lumped our project together with another project in the geography department that was looking at images (geospatial images), and a project in the architecture department that was essentially taking their architecture slide library and converting it to digital. And from our side, on the art museum side, we basically gave up the idea of doing this with film, but we decided that we’d do it with art and art objects.
So, we worked on this really hard. In 1985, we developed a prototype system where you could do a query across a network, using simple text fields, and get an image that would show up on your screen, and then we had to build in things like zooming on the image. We had to design it as client-server, and the problem with client-server is, “Who’s going to install client software on their machine?” So we used the only kind of client available at the time that would run in a PC/Windows environment, an Apple Macintosh environment, and a Unix environment: an X Window client. You should think of the X Window [System] as being like a web browser is today: something you have to have on your machine in order to view something remotely, and everyone has to have one or they can’t do it.
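[Stripped of the X Window specifics, the flow described here is: a client sends a text query over the network, and a server returns image bytes for display. Below is a minimal sketch of that pattern in Python; the toy protocol, the one-entry index, and the filename are all invented for illustration, not the original Berkeley system.]

```python
# Minimal client/server image retrieval: client sends a text query,
# server answers with the raw bytes of a matching image file.
import socket
import threading

# Hypothetical query-to-file index standing in for the text-field database.
IMAGE_INDEX = {"rothko": "rothko_untitled_1968.jpg"}

def handle(conn):
    with conn:
        query = conn.recv(1024).decode().strip().lower()
        path = IMAGE_INDEX.get(query)
        if path is None:
            return                       # no match: close with empty reply
        with open(path, "rb") as f:
            conn.sendall(f.read())       # ship the image bytes

def serve(host="127.0.0.1", port=9090):
    srv = socket.create_server((host, port))
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

def fetch(query, host="127.0.0.1", port=9090):
    """Client side: send a query, collect image bytes for local display."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(query.encode())
        conn.shutdown(socket.SHUT_WR)    # signal the server we're done sending
        chunks = []
        while data := conn.recv(65536):
            chunks.append(data)
    return b"".join(chunks)
```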
So in the spring of 1986, both the American Library Association (ALA) and the American Association of Museums (AAM) had their annual meetings in San Francisco, and we had an exhibition-hall booth at each. We had done all our development work on Sun Microsystems workstations (very high-end machines for the time). Sun had been donating equipment to us, and they rented the booths for us at both the AAM and ALA conferences, and it was a huge hit. Most people, at that point in time, had never seen a high-resolution image on a computer before, and certainly not on these Sun workstations that were about 1500 pixels across. At that point most computer images were Pac-Man style, or like the early Mario Brothers: very blocky, and you couldn’t see any detail. We were showing images that were really pristine and looked like photographs. It just became this huge hit.
Let me backtrack a little bit and tell a little anecdote. When we started into the project, there were no scanners. The closest thing you could get to a scanner was a densitometer (a measuring tool for taking image density). So the very first image I scanned was of an oil painting, about 8 by 10 inches, in a frame. I brought it down to the Digital Equipment Corporation (DEC) office in Palo Alto, and they had just bought a scanner from a company called Eikonix (later bought by Kodak), where a digital camera was mounted on a copy stand. You had to have lights at 45-degree angles, and it would essentially scan what was on the platform underneath. But at this point these were so primitive that you could not see what you were scanning; you could not see what the camera was seeing. So what you had to do was scan, FTP that file to another computer that had display software, and then see what you’d got. We had to put pencils down on the surface, capture an image, FTP it to another computer, use display software to look at it, and see, “Oh, we need to move that pencil in a little bit,” or “we need to move that pencil out a little bit,” in order to find the borders of what the camera was viewing. We figured all that out, got the dimensions right, and then took our final scan of the artwork.

The big problem turned out to be something that almost made me give up the whole thing. I had a darkroom thermometer with me; darkroom thermometers go to about 110 degrees. I put the darkroom thermometer next to the painting. The scan took about 50-some minutes, and at the end of that scan the thermometer had gone off the end of its 110-degree scale. So, you know, I was really upset. I was thinking, “What’s the temperature at which the oil in an oil painting dissolves? At what point does it turn liquid?” Luckily, and very purposefully, this was a lesser piece in our collection, but from a conservation standpoint this was not going to work. What we ended up doing was playing around with fans; blowing air across the surface of the painting ended up being our way of dealing with it. Looking back on it, as someone who has since been very involved with conservation and preservation, this was not a good idea. We solved the heat problem with fans, but that didn’t solve the light problem: we were exposing that painting to a lot of light for 50-some minutes, so that was really hellish.

But just think of how we had to deal with computers and scanning before there were scanners. It’s kind of astounding, the difficulties that we had. It’s kind of like early word processors, where you would not see the result: in WordStar, you put in something like “<b>word</b>” to create boldface, but you can’t really see it the way it’s designed to be. [reading the chat] Yeah, control-K, control-C. In the same way, there was such a delay between when we took a scan and when we could see it. And the scan took so long that it was harmful for the material. Luckily, over the course of the next year or so, some real scanners came to market. They were still relatively primitive, low-resolution scanners. Most of them were like the Targa board, which basically used a video camera and video technology, but could scan relatively quickly.
But it was limited to the video resolution of 640 by 480 pixels (a 4:3 aspect ratio), so you couldn’t get a very detailed scan.
Anyway, we never got around to doing that really sexy thing with moving images, but this became, you know, just a big hit. At some point, in late ‘88 or early ’89, The New York Times came out, interviewed me, and did the cover story for the Sunday arts section on what we were doing. So it ended up being this really incredible (although very difficult) project that was really way, way, way ahead of its time. You know, this was the 1980s. This was before the web: the first web browser was 1993, and ’86 was kind of the big debut of our system. But there was no support on the Berkeley campus for continuing it. It turned out that the Vice Chancellor who was heading this project had been cooking the books. The first version of this we did ourselves; for the second version, we hired an outside contractor to work on it. He had me being paid by the outside contractor to kind of hide my salary, and he ended up with a multi, multi, multi-million-dollar deficit that he had been hiding. When he was finally caught, he was fired, and I got caught up in all that. I had to have people testify on my behalf that I really was a university employee, because they had fired the outside contractor and were trying to save money, and the whole thing just fell apart when he left. But what remained was serious basic research and development work on all of this. Paul?
[Marty]: I was just going to jump in and say, I just posted it in the chat, the link to the 1988 New York Times piece that you were just talking about.
Oh cool.
[Marty]: And I don’t know if I’ve ever seen this before, Howard, but what I love about this piece is that in it, you have an editor at the New Criterion, an art critic, saying that digital reproductions of art are another way to defraud the public – which was the counterpoint in the article to your argument that this is democratization, helping more people gain access to art. And isn’t it wonderful to see that that argument was going on in The New York Times back in the 1980s? [Laughs.]
Well, yeah, that’s pretty amazing. But one thing I should add is that the reason I got into all of this stuff was kind of my head going in two different directions. One direction was, “Hey, this is great, and think of all the wonderful things we can do.” And the other was, “Well, there’s a downside to all of this as well.” I published a piece in probably 1988 or ‘89 called “The Changing Museum,” which was a little bit of taking Walter Benjamin’s “The Work of Art in the Age of Mechanical Reproduction” and putting it into the age of digital reproduction. What is the role? And I talked about how it’s democratizing, but it’s also something that removes the aura of the original, and there’s something about the original. That’s always been the theme for me from the very beginning: on the one hand, here are all these cool things; on the other hand, we can’t just do the cool things without thinking about what the ultimate negative ramifications might be.
So, yeah, thank you for getting that article. Cool. Okay, let’s continue chronologically. So, I took a job… things were dissipating on the Berkeley campus, and I didn’t really want to go back to my work at the museum on the calendar and on everyone’s headaches with their desktop machinery.
Through a lot of public speaking and talking about this project in various places, I got to know both David Bearman and Toni Carbo Bearman. Toni was David’s wife at the time, and she was Dean of the Library School at the University of Pittsburgh, and she very heavily recruited me to be a faculty member there. I finally decided, yes, I’m going to do this, but much more to be able to work with David than to work with Toni. David, as you know, had founded something called Archives and Museum Informatics; he was a leading figure at the time and, later, founder of Museums and the Web, among other things. So I moved to Pittsburgh and taught there for three years. My closest colleague my first two years there was a woman named Carla Hayden, who has since become the Librarian of Congress. Like so many things in my life that seem innocuous, it ended up being very important later on. You know, my experience in high school with photography, which was just a fun hobby, really helped with my imaging skills and scanning, things like that. All the threads that go apart and weave back together; there are so many things in my life like that.
So beginning in 1989 I worked with David. I taught in Toni’s program for three years and continued doing work like this, but I was always clear with them that three years were it. When it came to the end of the second year, faculty kept telling me, “Hey, if you ever want to be a professor, you have to stay here ‘til you get tenure.” And I said, “That’s not me.” I moved to Pittsburgh to work with David, and I got that work done; it was kind of wrapping up. And I thought, so maybe I’ll never be a professor; I don’t care. So I started looking at things that I might do post-University of Pittsburgh.
And completely out of the blue came a set of recruiters for the Canadian Centre for Architecture. The Canadian Centre for Architecture is one of the few private museums in Canada, run at that time in a very autocratic manner by the richest woman in Canada, Phyllis Lambert, an heir to the Seagram’s fortune. It’s a very prestigious museum, and it had worked on many, many standardization projects prior to this, with the Getty, the Canadian Conservation Institute, and others. So they started recruiting me in 1991, and I went up there to visit. I had a friend in Montreal that I decided to spend time with while I was up there on my job interview. My friend is a performance artist. He’s an anarchist. He doesn’t think much of rich people. But when I mentioned to him that I was there to be recruited for Phyllis Lambert, his eyes lit up and he said, “She’s wonderful.” She had saved his housing stock: they were going to tear down the neighborhood that he lived in, and because she’s this famous architect with a social view of architecture, she came in and forced the government to take it over and run it as a co-operative. He thought so much of her that I figured, with his endorsement, I’ll go and work at the Canadian Centre for Architecture (CCA).
So, the reason they had so heavily recruited me (I didn’t see their job description until after I had interviewed) was that I was probably the only person in the world who would meet their requirements for who they wanted to hire. They wanted someone with a Ph.D., which narrows things down pretty far. They wanted someone who could work very well in English and in French, which is more limiting still. And they wanted someone who could work on integrating the catalogs and metadata of their three different divisions: library, museum, and archives. With all of those, I was probably the only person in the world who qualified, so basically I got to write my own job. I got whatever I wanted: my salary, my health plan. The way my contract was written, I could live in Berkeley for nine months of the year, and they would fly me to Montreal for one week in each of those nine months. Then I had to live in Montreal over the summer, so I didn’t have to do Montreal winters except a week at a time. They put me up in a hotel just a block from CCA. I telecommuted the rest of the time, and that allowed me to go back to the Berkeley campus to teach; I taught one course a semester on image databases.
So I was at CCA, and the big project I was to undertake was to join together the catalogs of the archive, museum, and library. But shortly after I started there, something interfered with this. At that point, the Collection Manager on the museum side was Jennifer Trant, who you both know (I don’t know whether the audience for this knows her). I worked closely with Jennifer, but she was also managing another project, which I’ll talk about in a minute. At some point Jennifer dropped out of that other project, and eventually she left CCA and moved back to Toronto with her then-husband. So when Jennifer dropped the project, I got put in a coordinating position for this mammoth project. So here’s where I’ve got a long, long description.
Okay, you ready for this? Okay: 1992 was the 350th anniversary of the founding of the city of Montreal, and the 125th anniversary of the confederation of Canada. Summer of ‘92 was big, big anniversary time. The Canadian Centre for Architecture (CCA) had been working for many, many years on an exhibit that would open that summer. They were planning a really big opening, with fireworks, the Mayor present, the Vice Premier of Canada, lots of TV coverage, and the centerpiece of this exhibit was a set of videos of 3D models of the city of Montreal across the 18th Century, plus a kiosk with a user interface that allowed people to wander around different areas in a 3D model of the city. [The exhibit was called “Opening the Gates of Montreal in the 18th Century” http://besser.tsoa.nyu.edu/howard/Papers/gates.html].
So, the exhibit was trying, among other things, to answer the question: why, over the course of the 18th Century, did Montreal go from a Francophone-dominated city to Anglophone control? And one of the things we could do visually was actually show what was happening in that transition. At the beginning, the French were fur trappers and the British were commercial interests. It was a walled city, with, I don’t know, 20 gates or so around it, and what happened was that the Anglophones bought up the property around the gates, so they were buying the furs as soon as the French came back with them. And so they became more powerful from their center of power around the gates, and around the transactions that were happening there. We could actually show that visually. The exhibit included three-minute videos of 3D flyovers, with buildings colored differently for the French versus the British, showing how that changed over time: the buildings, the fortifications, all of these were in those 3D flyovers. Perhaps the most challenging part of the exhibit was an interactive touchscreen kiosk that had three modes for each of three different neighborhoods (we implemented it in just three neighborhoods)…
[…]
One mode was the plan mode, where the type of use of each building could be color-coded on a map…
[Screen Sharing Starts]
Was the property used for commerce, production, service, or institutional use? The type of use was color-coded, and you could look at periods at roughly 20-year intervals, as you see at the bottom. Okay, so first, if you look at this, it says it’s the neighborhood of Place d’Armes, and we have this implemented for 1693, 1725, 1745, and 1785, so you can pick any of those dates and see what the land use was for the different pieces of land. And then you could touch the screen and it would pop up a new window. That would fork a procedure to the database, and it would return the name of the owner, the type of use, and whether it was under Francophone or Anglophone ownership. So you could touch anywhere, and it would just pop this up. And you had two modes to look at it: in terms of what the land use was, and in terms of what the language of the owner was.
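[A hedged sketch of what that touch-to-database round trip might have looked like; the parcel records, bounding boxes, and field names below are invented for illustration, not the CCA data.]

```python
# Plan mode, reduced to its core: hit-test a touch point against parcel
# rectangles for the chosen year, then build the pop-up text.
PARCELS = {
    1725: [
        {"bbox": (40, 60, 120, 140), "owner": "J. Leber",
         "use": "commerce", "language": "Francophone"},
        {"bbox": (130, 60, 210, 140), "owner": "T. Walker",
         "use": "production", "language": "Anglophone"},
    ],
}

def parcel_at(year, x, y):
    """Return the parcel record under a touch at (x, y), or None."""
    for parcel in PARCELS.get(year, []):
        x0, y0, x1, y1 = parcel["bbox"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            return parcel
    return None

def popup_text(year, x, y):
    """Build the pop-up contents the kiosk showed on each touch."""
    p = parcel_at(year, x, y)
    if p is None:
        return "No parcel here."
    return f"Owner: {p['owner']}\nUse: {p['use']}\nOwnership: {p['language']}"

print(popup_text(1725, 75, 100))  # -> J. Leber / commerce / Francophone
```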
Okay, so that’s the “plan” (or map) mode. We also had a 3D mode, where you could touch the arrows on the screen and, kind of like a video game, move forward, backwards, turn, move left, move right, and you could see how far apart the buildings were and what the spatial relationships looked like. And finally, we had document mode, which allowed you to look at building plans and related documents. You could touch the screen and it would open a pop-up zoom window, so you could look at the document more closely. And the videos had this as well, for the different neighborhoods. These were the three neighborhoods that we modeled in great detail and allowed the user to explore intensively.
[…]
[Marty]: Can you remind me what year this is when you’re doing these videos?
This is 1992, summer of 1992… September 1992. So, here you have documents and zooming in, you touch the screen, and it zooms in on the document. You get a zoom-in window.
[Screen Sharing Ends]
So that’s basically it. Now let me talk about the background to all of this, because I think that’s really interesting, and it’s part of what Paul’s been talking about: getting at the behind-the-scenes work. The information gathering for this included 15 to 20 years of poring through church records of births and deaths, building records, property registrations, planning documents, architectural drawings, all kinds of things. Fifteen to 20 years of work went into building three different databases to contain this information. So this was the product of a huge amount of work, and of the vision of Phyllis Lambert, the Museum Director: that she could pull off something like this if she spent all this money and time on people doing very tedious archival research. But let me talk specifically about the kinds of things that were happening when I started my involvement, which was in either December of ‘91 or January of ‘92. We had entire crews. One crew was the regular exhibition team. Another crew was the research and database team that had been doing all that research and populating the databases. A third crew was the University of Toronto Center for Landscape Research, who built the software for the 3D modeling. Essentially, the museum paid them to take an application that did some basic 3D landscape modeling and soup it up to work for the museum’s purposes; they could then take that and market it or use it for their own purposes as well. And the last group was a user interface and labeling team from the Concordia University education department. They were the team that had designed the Metro map for Montreal, and one of the biggest things they did for us was to explore how to label something with a word that means roughly the same thing in English and French. Many of the labels, like the ones I showed you a few minutes ago, use the word “occupation” rather than “type of business,” because “occupation” is not quite the right word, but it’s spelled the same way in French and English. We had to struggle a lot with our labeling everywhere on this project, and that team helped us with that, and with making the videos that played constantly. There were two three-minute videos that played in the gallery.
So what did we have to do? We had to integrate these three different databases of information about the properties, the people, and the buildings. We had to meld the databases to the geographic information system from the Center for Landscape Research. We had really expensive hardware: a Silicon Graphics Crimson, a highly graphics-oriented super workstation the size of a large piece of luggage (I can show you an image of that, too). Just that one box, the one computer, cost us a million dollars to buy. We had issues with getting a touch screen, which we got from another third-party vendor, so getting the touch screen to work with everything else was another issue. And we also contracted out the building of a computer enclosure with a fan, so the machine could sit in the gallery and not get bumped.
So now let me zoom in on our biggest problem, which emerged on a Friday, when we were opening the exhibit the following Tuesday. On Friday we finally got delivery of the enclosure, four days before the big opening. That was quite a bit delayed, but we had specced out everything around the enclosure. The enclosure had a fan in it. It had to be exactly the right height, so that teenagers as well as adults would be able to use the touch screen. We had tested the touchscreen, and they had taken its measurements. So on Friday, four days before the exhibit opened, we put the computer and the touch screen into the enclosure. Everything fit fine. It was the end of the day, and I had to pick up someone at the airport at 5 p.m. Remembering my problem with that initial scan many years earlier, I took a darkroom thermometer, put it in the enclosure, and went to the airport. And luckily, rather than hanging out with the person I picked up at the airport, I came back. I checked the thermometer, and it was over 100 degrees.
I then checked the documentation, which said the computer should never operate above 80 degrees. And here, after an hour and a half or so, it was at 100 degrees. I panicked, and started trying to wrap my brain around this. The only viable solution was to remove the computer from the enclosure and have just the touch screen in the enclosure. But we couldn’t put a million-dollar computer in an open gallery, right? You know, someone’s going to bump into it, someone’s going to…. Plus, it just wouldn’t look nice. Luckily, we were on eastern time and the companies we were dealing with were on west coast time, so I called both the computer company and the touchscreen company, and even though it was after 6:00 our time, there were still people there. I asked how long the cables could be between the computer and the touchscreen, and of course they couldn’t give me a definitive answer, because no one had ever asked that before. But they said it was probably something like 10 to 20 feet. So I found some extra cables and, even though I had the touch screen and the computer next to each other, I rolled out about 20 feet of daisy-chained cables and tried some experiments. By 10 or 15 feet, it was already starting to not work.

So then I had to go work with other people in panic mode. We got floor plans for the building and discovered the only possible place to put the computer was in someone’s office directly below the gallery. Yes, we had to drill through concrete. On this Friday evening, we had to get lots of people involved: the Director, the Assistant Directors of the museum, maintenance people, exhibition people, conservators, and even the person in whose office we planned to put the computer. We ordered drilling through the concrete over the weekend, and had to have staff there to supervise, obviously. We special-ordered single long cables, so we weren’t daisy-chaining cables, so that we might get a longer distance between the screen and the computer. And we finally did it; it was okay. But we had another problem: booting the computer. You booted with a touch screen or mouse; you had to click on things in order to get both the computer and the application to start. But the screen was in the gallery, and the computer and the mouse were one floor below. So we had to borrow walkie-talkies from the security people, and we were going, “Okay, move the mouse one inch to the right, half an inch up. Okay, now click. No, you missed it. Down a little bit. Click…” We had to boot it that way, and then we ended up having to do that about once a week, rebooting it with the walkie-talkies. So anyway, that’s my big story about the difficulties with all this.
[Marty]: I love it. It shows how universal all of these problems are. I know I’ve had similar experiences with technology and cases overheating. I think Koven Smith told us a fantastic story of something just like that that happened to him at the Met when he was there, right.
It’s amazing, but you know, by far this was the most high-pressure situation I’ve found myself in, because this was 15 to 20 years of research, and it had the attention of the entire country: the Vice Premier, the Mayor. They had fireworks out front and a light show by another artist. There was just way, way too much pressure to try to make it work, but we did make it work, and that was pretty amazing.
[…]
Okay, so the MESL (Museum Educational Site Licensing) project actually grew out of some efforts that had been percolating a little bit before it, so let me give a little backstory. Geoff Samuels was a kind of entrepreneur who was a good friend of Karl (I forget Karl’s last name), who ran Muse Film & Television. Karl had been a Director of Exhibits for the Met, he had been a museum director, and he formed this company, Muse TV, to make videos for museums to show in their galleries. [reading chat] Karl Katz, yes, that’s right. Did you know that, or did you look it up? [laughs] So Geoff Samuels was a friend of Karl Katz. Geoff didn’t really work for Muse, but he was really interested in what happens when museum objects are shown on a screen. And so he contacted me because of the project I’d been doing at Berkeley. I probably first communicated with him around ‘91 or ’92, and he wanted to do all kinds of things. What he did accomplish, which kind of started leading towards MESL, was to use Karl’s clout to get a bunch of art museum directors to come to a kind of focus group. It wasn’t as formal as a focus group; it was a meeting in New York City where they could look at museum images on a screen. I supplied the images. He got a workstation, and these museum directors were kind of astounded, because they had read about it or heard about it, but the quality of the image was striking to them. So Geoff really wanted to do something to get over some of their resistance, because some of them had an attitude of, “Well, if we put our images out there, no one’s going to come to the museum.”
Right, well, these were naive notions that people had at the time; no one really knew. So Geoff started pushing hard to do something to try to assuage the resistance from some of these art museum directors. And that intersected with some things that had been happening at Getty AHIP, the Art History Information Program.
Right around that time, Jennifer Trant had left the Canadian Centre for Architecture, where she and I had both been working together, and she went to work as a kind of Program Officer at Getty AHIP, in charge of all things imaging. But there wasn’t much there, so she made a contract with me to write the book “Introduction to Imaging.” That contract would probably have been around 1993, and it was published in ‘95 (http://www.getty.edu/publications/virtuallibrary/0892367334.html). But she also didn’t have a whole lot of other things to take up her time. The other influential thing happening at AHIP then was that Michael Ester left to form Luna Imaging, with Getty money for support, doing high-quality images for museums. And Eleanor Fink replaced him as head of Getty AHIP. We had kind of been pushing Eleanor to get involved in communities beyond the straight museum community, and she was very interested in the educational community. So we brought her to a meeting of the Coalition for Networked Information (CNI), which is Cliff Lynch’s organization. Back then it was really Paul Peters’ organization, though Cliff was very active in it; he didn’t become Director ‘til the late ‘90s, when Paul died. CNI meets twice a year, and we had a couple of sessions around what the educational community could do with the museum community. And out of that, around 1993, came this idea: how can we actually show how the educational community might be using images from museums? So we came up with the idea of MESL, the Museum Educational Site Licensing Project. MESL was really poorly named, because it had very little to do with site licensing, nor much to do with intellectual property issues at all. It was really a proof of concept: how you could get images and rich metadata from seven different repositories (six museums and the Library of Congress); how you could aggregate all of that into one dataset, given that everyone had different metadata and different standards for their images (which we did, ending up with about 10,000 images); and then how you could deploy that in a university environment. We had seven universities, each of which had its own user interface, its own mapping of the metadata, its own search system, its own display devices, its own image-viewing software, things like that. So the project was a very elaborate one. It involved 14 different institutions: seven collections and seven universities. When we had meetings, which we had at least twice a year, those meetings had at least two representatives from each organization, usually a technical person and a metadata person, and occasionally we also had faculty who came, art history faculty who taught with these images. That meant we had a minimum of 28 people at each of these meetings, plus administrators.
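[The aggregation step described here amounts to crosswalking each contributor’s local fields into one shared data dictionary. A minimal sketch, assuming invented repository names, field names, and records:]

```python
# Merge heterogeneous records into one dataset via per-repository crosswalks.
CROSSWALKS = {
    "museum_a": {"artist": "creator", "work_title": "title"},
    "museum_b": {"maker": "creator", "name": "title"},
}

def to_shared(repository, record):
    """Map one local record into the shared data dictionary's fields."""
    crosswalk = CROSSWALKS[repository]
    shared = {crosswalk[k]: v for k, v in record.items() if k in crosswalk}
    shared["source"] = repository   # keep provenance in the merged dataset
    return shared

dataset = [
    to_shared("museum_a", {"artist": "Mary Cassatt", "work_title": "The Boating Party"}),
    to_shared("museum_b", {"maker": "Winslow Homer", "name": "The Gulf Stream"}),
]
print(dataset[0])
# {'creator': 'Mary Cassatt', 'title': 'The Boating Party', 'source': 'museum_a'}
```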
And there was a steering committee. The steering committee consisted of myself and Cliff Lynch, ostensibly representing the educational community, and David Bearman and Max Anderson, essentially representing the museum community, though Max was not hugely involved on an ongoing basis. Max didn’t really come to our regular meetings, but he was someone who was really critical in arm-twisting, in getting other museum directors to make this a priority. He wasn’t involved in the actual ongoing management and administration functions, but he was really critical at certain points, intervening and trying to make things happen. Initially we put out a call for proposals. I’ve actually gone back and looked at dates. (I gave my papers regarding MESL, I think 17 boxes of them, to the Getty Research Institute, so I had to look at the finding aid this morning to get some dates.)
So in early 1994, we put out a call for proposals for museums and universities that were interested, and of course we also did a lot of word of mouth. Then, later in ‘94, the four of us on the management committee, plus Geoff Samuels, plus Jennifer Trant representing the Getty (who were funding this), all met in one of the Disney hotels in Orlando to go over proposals from 20 or 25 museums and about the same number of universities, and to decide whom we would choose. Actually, Max’s role really came out there, because at one point we were looking at the application from the Houston Museum of Fine Art, and we were really concerned about something in their application. So he went to the phone, called his office, got someone to look at his Rolodex, and he called the director of that museum and kind of pressured him into making sure that they could indeed perform what we were expecting them to perform. So that’s just a sample of how Max really worked on this, but he wasn’t involved in the day-to-day stuff.
So MESL went on from about 1994 to about 1998 (http://www.getty.edu/publications/virtuallibrary/0892365099.html). In 1995, we did an application to the Mellon Foundation, and at that point the Mellon Foundation was dominated by economists: the head was an economist, and most of the program officers were economists. So when we started talking to them about funding a study of MESL, they were interested not so much in the mechanics of some future infrastructure for distributing museum images and rich metadata to universities, or even the public; they were more interested in a business model to make it work, and in how much it would cost to do. And that was a constant tension with the team. So, to back up: Mellon gave us money, a healthy sum of money, to evaluate MESL with an eye towards putting something like that into place. But they were constantly pushing us to focus on cost, and we were more interested in the mechanistic part. The Mellon money came to me at Berkeley (I was at Berkeley at that point), and I hired a whole team of people: a full-time postdoc and three 20-hour-a-week students for a couple of years. We came out with an evaluation of the cost and use of digital images online, which is still available on the web (https://groups.ischool.berkeley.edu/archive/mellon/mellon_pdfs/f0-front&intro-0923.pdf). And that eventually became a planning model for the service Mellon later created, Artstor.
The agreement was that MESL would last till 1998, and when we got to about 1996 or ’97, everyone was worried about what was going to happen in 1998, when the contracts were up. Would all the museums continue to let these universities use the images? Would they want to charge for them? What would happen? At that point, about ’97 I would say, David Bearman started working on what became AMICO, plotting out how AMICO would work. The core for AMICO came from the museum side of MESL, and there was no involvement from the university participants in MESL. It was David’s belief that the only way to make something like AMICO work was for museums to make decisions totally on their own, and kind of hope that the universities would go along with paying for it. He thought it would be too difficult if the universities got involved at an early stage. Even though the six museums and the Library of Congress that participated in MESL were all on board with something like AMICO, he didn’t think it would grow, and he didn’t think they could convince other museums, unless the conversations and decision-making stayed totally on the museum side.
[…]
One of the interesting things we found as we were doing MESL: originally, we all decided that we would have one file format, and everyone would use Lossless JPEG as the format for all the images. But about six months into that, the I.T. person from the National Gallery of Art started doing some experiments with some of the software. There was not much software available for scanning or anything back then; not much hardware or software at all. He did an experiment where he saved a file in Lossless JPEG, or converted a file into Lossless JPEG, and then compared that to the original file. And he found that there was loss. I don’t remember what the software was, but this was the main software people were using to create Lossless JPEG, and it was lying about what it was really creating: it was not creating Lossless JPEG. So there were lots of little things like that that we discovered, all of this early on.
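[The National Gallery experiment boils down to a round-trip check you can still run today: encode with the codec that claims to be lossless, decode, and compare pixels with the original. Pillow does not write lossless JPEG, so this sketch uses PNG, a known-lossless format, purely to show the method.]

```python
# Round-trip test for a "lossless" codec: re-encode, decode, compare pixels.
import numpy as np
from PIL import Image

def roundtrip_is_lossless(src_path, fmt="PNG", tmp_path="roundtrip.out"):
    original = np.asarray(Image.open(src_path).convert("RGB"))
    Image.fromarray(original).save(tmp_path, format=fmt)        # encode
    decoded = np.asarray(Image.open(tmp_path).convert("RGB"))   # decode
    return np.array_equal(original, decoded)  # True only if bit-identical

# With PNG this returns True; the MESL-era "Lossless JPEG" tool would have
# failed exactly this comparison, which is how the loss was caught.
```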
MESL started in 1994, and the first web browser had been released in May of 1993, so it was not clear that the web would become this universal interface for people; it wasn’t clear at the time we started that that would happen. And so a couple of the universities started out deploying software that did not run in a web browser. One of those universities switched to a web browser about a year later and had to re-architect everything. The only university that stayed with a non-web-browser approach was the University of Maryland, and I think they converted to a web browser around 1998, near the end of the project. So the project allowed us to look at a whole lot of different things, like crosswalks: we created our own data dictionary, with a set of fields that was wider than a core, much wider than Dublin Core. The first meeting on the creation of Dublin Core was March of ’95, so we were starting before Dublin Core started, but we had the advantage of all the great vocabulary work the Getty had done, like the Categories for the Description of Works of Art and the Art & Architecture Thesaurus. We were able to build on those, as well as some work that had gone on in CIDOC and other international groups.
So we created a data dictionary of, I think, maybe 30 fields, something like that, and each museum had to map its metadata into those fields. And then, what was really interesting was how the universities mapped those 30-some fields to the queries their users would make, because most of them had simple queries and complex queries. Now, this is before Google. This was a different world from today, where we expect that a query means typing a few words into a little box. That wasn’t clearly widespread; people didn’t really think that was what users would do back then. We had these elaborate query systems with pull-down menus, like “artist equals…” where you pull down a list of artists, or you type into one field the artist name, and into another field the title, things like that. Everyone had different user interfaces, and everyone mapped their metadata fields differently. So a system might map two or three different data-dictionary fields into a single query field; you might be combining things like “prints” and “photographs.” At some point I did a paper on this, called something like “If the Data Is the Same, Why Do I Get Different Answers?”, where I explored how the same query over the same 10,000 images at each university would yield different results. I could do a query at one university and get one set of hits, and do the same query at another university and get a completely different set of hits, and that was entirely because of how the universities mapped some of the 30 fields into fewer query fields. Yeah, the article was called “If the museum’s data’s the same, why does this look different” (https://www.museumsandtheweb.com/biblio/comparing_five_implementations_of_the_museum_educat_0.html), yeah.
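[A toy reconstruction of “same data, different answers”: two hypothetical universities collapse the shared fields into query fields differently, so the same search over the same records returns different hits. The records and mappings below are invented.]

```python
# Same records, different field-to-query mappings, different results.
RECORDS = [
    {"object_type": "print", "title": "Melencolia I", "creator": "Dürer"},
    {"object_type": "photograph", "title": "Migrant Mother", "creator": "Lange"},
]

# University A folds type, title, and creator into one search field;
# University B searches titles only.
MAPPINGS = {
    "univ_a": ["object_type", "title", "creator"],
    "univ_b": ["title"],
}

def search(university, term):
    fields = MAPPINGS[university]
    term = term.lower()
    return [r["title"] for r in RECORDS
            if any(term in str(r.get(f, "")).lower() for f in fields)]

print(search("univ_a", "print"))  # -> ['Melencolia I']
print(search("univ_b", "print"))  # -> [] (same data, different answer)
```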
[…]
Just talking about other things: you know, Getty AHIP continued to really lead the way in standards for the art world, and in trying to make things happen. Again, I think Eleanor Fink had a really good vision of where things were headed and deployed Getty money to do some very important things. So after my “Intro to Imaging” book proved wildly popular, she commissioned an “Introduction to Metadata,” and then a series of other “Introduction to…” books. I think these books had a lot of impact on growing the field of people involved in trying to deploy museum-type or art-historical collections to a wider environment: to museums, to universities, and to the public in general. So I think Getty AHIP really had a critical role in all of this.
[…]
Yeah, yeah, yeah. AHIP did so many things that really moved the field, and it was really a shame how things went: they changed the name from the Getty Art History Information Program to the Getty Information Institute, and then maybe a couple of years after that they folded it and moved the vocabulary areas into the Getty Research Institute. But, you know, I was on contract with the Getty to write the book “Intro to Imaging” for a couple of years, and then I was on contract with them periodically for smaller projects after that.
We did a project in 1998 that’s worth mentioning. It was co-sponsored by the Getty Conservation Institute and the Getty Information Institute. It was called “Time and Bits: Managing Digital Continuity,” and it was an attempt to get leading thinkers together for three days to look at the problems of preserving art-historical works over time. The people invited included Brian Eno, as an artist who does experimental music; the technologist Jaron Lanier; the editor of Wired magazine; and Brewster Kahle, founder of the Internet Archive (which was brand new then). He actually brought the Internet Archive in to show us the machine; it was a small machine. This was like a year and a half after he first started it. Stewart Brand was the convener for the meeting, the master of ceremonies or chair, or whatever. So it was an interesting group of people, a really interesting group of people, but the technology people were focused on kind of the wrong area of digital preservation.
They were looking at trying to inscribe something in titanium that would last for a long time. So the actual end results that everyone could agree on were very limited, but the discussions over those three days were really interesting. The first two and a half days were discussions just among the group of about 14 people, but the last half day was open to the public, where we presented our findings to lots of other people in the field. By the public, I don’t mean people who go to museums; I mean people who work in the field participated in that last half day. The proceedings are really interesting; there’s a book that was generated from them (https://books.google.com/books?id=MUNpHfMUTMEC&printsec=copyright#v=onepage&q&f=false), and lots of interesting ideas came out of it.
One of the stories I recount pretty frequently is Jaron Lanier’s story of his early video game, and how the Computer Museum (then in Boston) had wanted to do an exhibit on video games and asked him to bring his game. He had his game on a cartridge, but he didn’t have a TRS-80. They got him a TRS-80, they managed to find that, but the joystick they found was from a later version of the TRS-80, and his game wouldn’t run. Not because they didn’t have the software, not because they didn’t have the computer, but because they didn’t have the right joystick. That just illustrates some of the difficulties of digital preservation. There are lots of little stories like that from “Time and Bits” that are interesting to go back to.
[…]
Oh, there is one more thing I wanted to say. What I felt was a real turning point in museum computing was the MCN meeting in New Orleans, whenever that was. That had to be late ‘80s, early ‘90s… 1986, yeah. That was a real turning point. We had vendors who were showing that you wouldn’t have to have a mainframe computer to do museum computing; there were vendors doing things on workstations. Yeah, Chuck Patch was instrumental in putting together that meeting, and another person instrumental in it was Lenore Sarasan. Lenore was a vendor (Willoughby Associates), and she was on the MCN Board. She is the one who convinced me to go; she paid my way herself, not MCN, but herself. And I think that among the vendors she was particularly instrumental in working on and developing standards for the museum community. And, you know, it made sense from a business perspective for her. I don’t know that the other vendors really saw it right at the beginning, but if museums agree on what their vocabulary standards are, then the same piece of software can be sold to multiple museums.
So standards do make sense from the vendor standpoint. Before that, the vendor had to retool their product for each new customer, for each new museum. So Lenore was really instrumental in that, and she was very involved in vocabulary. And among the people who worked for her, there were at least two who went on to do really fundamental work in museum vocabulary activities. [reading chat] Yeah, Jane Sledge is one. I’m trying to remember the names of some of the others. You know, that MCN meeting, I think, really set the stage for the future. In a way, you could say there was MCN before that meeting and MCN after, and MCN after was much more populist, much more doable, much more affordable in its solutions. It felt like this was opening MCN to a much broader constituency.