ECPN Interviews: Electronic Media Conservation with Yasmin Dessem

To promote awareness and a clearer understanding of different pathways into specializations that require particular training, the Emerging Conservation Professionals Network (ECPN) is conducting a series of interviews with conservation professionals in these specialties. We kicked off the series with Chinese and Japanese painting conservation, and now we are focusing on practitioners in AIC's Electronic Media Group (EMG). These conservators work with time-based media, which can include moving components, performance, light or sound elements, film and video, and analog or born-digital materials. We've asked our interviewees to share some thoughts about their career paths, which we hope will inspire new conservation professionals and provide valuable insight into these areas of our professional field.

This is the third post from ECPN's EMG blog series, for which we first interviewed Nick Kaplan and, more recently, Alex Nichols. For our third interview from the EMG series, we spoke with Yasmin Dessem, currently Head of the Audiovisual Preservation Studio at the UCLA Library, where she serves as technical lead as the library continues to develop its program for the preservation, digitization, and access of its moving image and sound holdings. Previously she managed archive deliverables for new feature releases at Paramount Pictures. She has experience working with a wide variety of moving image and sound formats, as well as pre-film animation devices, silent-era cameras, costumes, and paper collections. Yasmin holds Master's degrees in Art History and Moving Image Archive Studies from UCLA.


Yasmin Dessem (left) and Allie Whalen (right) cleaning and relubricating a Betacam deck. [Photo: Walter Urie]
ECPN: Please tell us a little bit about yourself and your current position.

Yasmin Dessem (YD): I oversee the preservation of moving image and recorded sound materials at the UCLA Library's Preservation Department. For nearly 90 years, the UCLA Library has collected audiovisual materials with content such as home movies, oral histories, and radio broadcasts. Examples include home movies of Susan Sontag's parents sailing to China in the 1920s and field interviews with Watts residents after the 1965 riots. Audiovisual (AV) preservation at the library is a relatively young unit—a dedicated AV preservationist first came on board in 2011. We offer a number of in-house digitization and preservation services and are currently focusing on increasing our capacity and launching a survey.

ECPN: How were you first introduced to conservation, and why did you decide to pursue conservation?

YD: The 1996 re-release of the restored version of Vertigo first made me aware of film restoration and preservation as an actual practice. Later, as I was finishing my Master's in Art History at UCLA, I took a wonderful class on restoration, preservation, and conservation with Professor David A. Scott. The course covered the material care issues and decision-making ethics for a wide breadth of cultural heritage materials. The class struck a deep chord with me, but I was eager to graduate and start working. After graduation, I ended up working in the film industry for about six years. I was tracking down historic stock footage at one job when my mind circled back to the preservation field as I considered how the films were stored and made available. I had entertained the idea of potentially returning to graduate school to study art conservation someday, but around that time the idea of film preservation as a possible career path began to fully materialize for me. As a result, I began exploring potential graduate programs.

ECPN: Of all specializations, what contributed to your decision to pursue electronic media conservation?

YD: My longtime love for film and music intersected with my curiosity for all things historical and technology-related. These were topics that in one form or another always interested me, but I don’t think I had a full grasp on how to combine them meaningfully into a profession. Preservation was the missing key. My exposure to preservation and conservation while studying art history and my later experience working at film studios both helped direct me towards the specialization.

ECPN: What has been your training pathway?  Please list any universities, apprenticeships, technical experience, and any related jobs or hobbies.

YD: I pursued my studies in the Moving Image Archive Studies (MIAS) Program at UCLA—which persists today as a Master of Library and Information Science (M.L.I.S.) with a Media Archival Studies specialization. While in the program, I completed internships with Universal Pictures and the Academy of Motion Picture Arts and Sciences, and volunteered at the Hugh Hefner Moving Image Archive at the University of Southern California. Throughout the two-year MIAS program, I also worked as a fellow at the Center for Primary Research and Training at UCLA Library Special Collections, where I learned archival processing. My experiences weren't limited to preserving moving image and sound media, but included paper-based collections, costumes, and film technology. After graduating I attended the International Federation of Film Archives (FIAF) Film Restoration Summer School hosted by the Cineteca di Bologna and L'Immagine Ritrovata.

ECPN: Are there any particular skills that you feel are important or unique to your discipline?

YD: Digital preservation will continue to be a key area of expertise that's needed in museums and archives. Preserving the original source material and digitizing content is not enough. There are more resources than ever for digital preservation strategies and tools, and it's important to seek them out. Another valuable skill is developing a level of comfort with handling and understanding the unique characteristics of a wide variety of physical analog formats, such as film, videotape, audiotape, and grooved media (LPs, 78s, lacquer discs, wax cylinders, etc.). Similarly, it's helpful to have a familiarity with playback devices for these obsolete media formats (equipment like open-reel decks or video decks). Lastly, metadata can be an unsung hero in media preservation. Often, we're the first to see or hear a recording in decades, so capturing metadata around the point of transfer is critical. Metadata standards can be a rabbit hole of complexities, especially when it comes to describing audiovisual media, but understanding their application is an essential skill.
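For readers curious what capturing metadata around the point of transfer can look like in practice, here is a minimal sketch of writing a JSON "sidecar" file alongside a newly digitized audio file. The field names are illustrative only (a real project would map them to a standard such as PBCore or AES57), and the equipment and file names are made up.

```python
import json
from datetime import datetime, timezone

# Hypothetical transfer metadata captured at digitization time. These
# field names are illustrative, not a published standard; map them to
# PBCore, AES57, or your institution's schema in real workflows.
transfer_metadata = {
    "source_format": "1/4-inch open-reel audiotape",
    "playback_deck": "Studer A810",              # example equipment entry
    "transfer_date": datetime.now(timezone.utc).isoformat(),
    "operator": "A. Archivist",
    "digital_file": "reel_042_transfer.wav",
    "sample_rate_hz": 96000,
    "bit_depth": 24,
    "condition_notes": "minor sticky-shed; baked before transfer",
}

# Write the sidecar next to the preservation master it describes.
with open("reel_042_transfer.wav.json", "w") as f:
    json.dump(transfer_metadata, f, indent=2)
```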

Lacquer disc cleaning and transfer workshop at the Instituto de Historia de Cuba in Havana, Cuba [Photo: Yasmin Dessem]
ECPN: What are some of your current projects, research, or interests?

YD: We're just wrapping up digitization of materials from the Golden State Mutual Life Insurance Company (GSM), an African American-owned and operated insurance firm established in Los Angeles in 1925 in response to discriminatory practices that restricted the ability of African American residents to purchase insurance. GSM operated for 85 years, and their collection is a vibrant resource documenting Los Angeles and the empowerment of a community. We received grants from the National Film Preservation Foundation and the John Randolph Haynes and Dora Haynes Foundation to support this work. The digitized collection is now available on Calisphere. We've just started a crowdsourcing project, working with former GSM staffers to describe any unidentified content. It's been one of the most rewarding experiences of my career, hearing everyone's stories and seeing how much it means to everyone involved to have this collection preserved and made available.

We've also been preparing to launch a large-scale survey that will help us gather data on the Library's audiovisual collections that can be used for long-term planning. Outside of UCLA, we've been involved in ongoing work with cultural heritage institutions in Cuba. Last February, I set up equipment and held a workshop on the digitization of radio transcription discs held at the Instituto de Historia de Cuba (IHC) in Havana. I'm heading back there next week to begin a project to transfer IHC's open-reel audio collections.

ECPN: In your opinion, what is an important research area or need in your specialization?

YD: It's crucial to preserve the expertise related to the operation and repair of playback equipment, which will become more and more difficult to source in the future. Engineers, whose entire careers are dedicated to the use and care of this equipment, are some of the best resources for this knowledge, which is currently shared through conversation, YouTube videos, social media, and professional workshops. Documenting the skills required to handle, maintain, calibrate, and service this equipment in a more formalized way, and sharing that knowledge widely, will ensure that preservationists can keep their equipment viable for longer.

ECPN: Do you have any advice for prospective emerging conservators who would like to pursue this specialization?

YD: Try everything. Media preservation requires a wide variety of skills, from computer coding to soldering decades-old circuit boards. Depending on where your career takes you, it's good to have at least a passing familiarity with the full range of skills you may need to call upon. Apply for internships or fellowships such as the National Digital Stewardship Residency. Volunteer at community-based archives that need help getting their collections in order. Join professional organizations like the Association for Recorded Sound Collections (ARSC) or the Association of Moving Image Archivists (AMIA). Attend conferences like code4lib, the Preservation and Archiving Special Interest Group (PASIG), or the Digital Asset Symposium (DAS). Network with engineers and preservation professionals to continue to grow your own expertise, but also share your own skills when you can. Collaboration and knowledge-sharing are a fundamental part of the profession.

Perforation repair of 16 mm film [Photo: Yasmin Dessem]
ECPN: Please share any last thoughts or reflections.

YD: One thing to be aware of, if you’re a woman in the field of audiovisual preservation, is that you may occasionally run into people who are surprised to see a woman working with technology (much less wielding a screwdriver!). This response persists to some degree despite the presence of many successful female professionals in the field. What’s encouraging, however, is seeing the growth of groups like the Women in Recorded Sound collective at ARSC providing support.

Audiovisual preservation is such a gratifying profession. Having the opportunity to make historic content available is incredibly meaningful work that I feel lucky to be a part of every day. On an even more basic level, figuring out a new workflow or getting a piece of equipment to finally work is just so viscerally satisfying. I'm part of an amazing team whose passion, humor, and willingness to try out new things inspire me every day and make me feel so lucky to be doing this work.

Recap: ECPN's Digital Tips and Tools for Conservators

Back in June we posted a series of tips to the ECPN Facebook page. Now that school is back in full swing, we thought we'd post a reminder. We hope you enjoy this collection of digital resources! Feel free to contribute your own tips in the comments below.
1: Zotero Bibliography management tool (https://www.zotero.org/)
Zotero allows you to make bibliographies easily and keep track of abstracts (it pulls them directly from some sources) or your own notes. It also helps you keep track of artworks from museum collections, and you can keep all the relevant information (catalog information, dimensions, conservation history notes) in one place. Zotero is free, and if you install it as a plug-in to your preferred internet browser, you just click and (ta-da!) it saves all the bibliographic information for you. You can share collected references and notes with other Zotero users through groups as well; a short example of pulling your library back out through Zotero's web API follows the images below.
Image 1: Desktop Zotero application.
 
Image 2: Saving an artwork from a museum’s online catalogue using Zotero on an internet browser (Firefox or Chrome).
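If you want to get your saved references back out of Zotero programmatically, its public web API (version 3) makes that easy. Here is a minimal Python sketch; the user ID and API key are placeholders you would generate in your own Zotero account settings.

```python
import requests

# Minimal sketch of listing items from a Zotero library via the v3 web
# API. USER_ID and API_KEY are placeholders: create real values under
# your account's settings on zotero.org.
USER_ID = "1234567"
API_KEY = "your-api-key-here"

resp = requests.get(
    f"https://api.zotero.org/users/{USER_ID}/items",
    headers={
        "Zotero-API-Version": "3",
        "Authorization": f"Bearer {API_KEY}",
    },
    params={"limit": 5, "format": "json"},
)
resp.raise_for_status()

# Each item carries its bibliographic fields in a "data" dictionary.
for item in resp.json():
    data = item["data"]
    print(data.get("title", "(untitled)"), "-", data.get("itemType"))
```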
 
2: Compound Interest has lots of infographics (http://www.compoundchem.com/infographics/), which are great references for chemistry topics. The site has good information on analytical techniques as well as fun chemistry facts and a weekly roundup of chemistry news. Print materials out for your lab!
Some examples of particular interest to conservators:

 
3: With Inkpad Pro or other vector drawing apps, you can make diagrams for condition mapping, mounts, and packing. These apps are generally far less expensive than the desktop programs they emulate, like Illustrator or Photoshop, and range from free to a few dollars. You can use a stylus on your iPad to trace from photographs and annotate. There are lots of color, line weight, and arrow options, and it's easy to do overlays. Since the iPad is also smaller and more portable, you can do your condition mapping in the gallery or during installations as well. You can export your final drawings as PDFs and share them through Dropbox or email. (A rough code sketch of the same overlay idea appears after the images below.)
Images 3-6: Creating a vector drawing and condition map from a photograph using the iPad app Inkpad Pro.
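For those who prefer scripting to apps, the same overlay idea can be sketched on a desktop with Python's svgwrite library: a base photograph with semi-transparent vector shapes and labels drawn on top. This is a rough illustration rather than a substitute for the apps above, and the file names and coordinates are invented.

```python
import svgwrite

# Rough sketch of a condition-map overlay as an SVG: the photograph sits
# underneath and semi-transparent vector shapes mark condition issues.
dwg = svgwrite.Drawing("condition_map.svg", size=("800px", "600px"))

# Base photograph of the object (invented file name)
dwg.add(dwg.image("object_photo.jpg", insert=(0, 0), size=(800, 600)))

# Semi-transparent polygon marking a hypothetical area of loss
dwg.add(dwg.polygon(
    points=[(120, 80), (200, 95), (185, 160), (110, 150)],
    fill="red", fill_opacity=0.3, stroke="red", stroke_width=2))

# Text label next to the mapped area
dwg.add(dwg.text("loss", insert=(210, 100), fill="red", font_size="16px"))

dwg.save()  # the SVG opens in any browser or vector editor
```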
 
4: Podcasts
We’d like to highlight one of our favorite podcasts, “Chemistry in its Element” by the Royal Society of Chemistry. There are short episodes about all sorts of interesting chemical compounds. Of particular interest to conservators are podcasts on mauveine, carminic acid, citric acid, calcium hydroxide, goethite, vermillion, and PVC, for example. Episodes are about 5 minutes long each.
(link: https://www.chemistryworld.com/podcasts)
 
5: RSS feeds for Cultural Heritage Blogs
Using an RSS feed can help you keep tabs on conservation news reported on blogs. We recommend The Old Reader, a free replacement for Google Reader (https://theoldreader.com/), to keep track of the many conservation blogs. AIC has a blogroll that can help you find conservation blogs: look to the right sidebar here on Conservators Converse.
There are too many great blogs to name, but one favorite is the Penn Museum's "In the Artifact Lab" (http://www.penn.museum/sites/artifactlab/), which is frequently updated with great photos and stories about conservation treatments underway. Another one you might like is Things Organized Neatly (http://thingsorganizedneatly.tumblr.com/), which is not strictly speaking a conservation blog but definitely has some appeal for conservators!
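If you'd rather script your feed reading, the Python feedparser library makes checking a blog's RSS feed a few lines of code. A quick sketch follows; the /feed path is the WordPress default and an assumption here, so check the blog's own RSS link if it doesn't resolve.

```python
import feedparser

# Quick sketch: pull the latest posts from a conservation blog's RSS
# feed. The /feed suffix is the WordPress default (an assumption here).
feed = feedparser.parse("http://www.penn.museum/sites/artifactlab/feed/")

print(feed.feed.get("title", "(no title)"))
for entry in feed.entries[:5]:
    print("-", entry.title, "->", entry.link)
```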
 
Feel free to add your favorite tips and tools below in the comments!
 
All images courtesy of Jessica Walthew, Professional Education & Training Officer, Emerging Conservation Professionals Network (ECPN).

43rd Annual Meeting – Electronic Media Session, May 16, “Tackling obsolescence through virtualization: facing challenges and finding potentials” by Patricia Falcao, Annet Dekker, and Pip Laurenson

The presenters began by explaining that they had changed the title to reflect the emphasis of the presentation. The new title became "An exploration of significance and dependency in the conservation of software-based artwork."

Based upon their research, the presenters decided to focus on dependencies rather than obsolescence per se. The project was related to PERICLES, a pan-European risk-assessment project for preserving digital content. PERICLES was a four-year collaboration that included systems engineers and other specialists, modeling systems to predict change.

The presenters used two case studies from the Tate to examine key concepts of dependencies and significant properties. Significant properties were described as values defined by the artist. Dependency is the connection between different elements in a system, defined by the function of those elements, such as the speed of a processor. The research focused on works of art where software is the essential part of the art. The presenters explained that there were four categories of software-based artwork: contained, networked, user-dependent, and generative. The featured case studies were examples of contained and networked artworks. These categories were defined not only in terms of behavior, but also in terms of dependencies.

Michael Craig-Martin's Becoming was a contained artwork. The changing composition consists of animations of the artist's drawings on an LCD screen, driven by proprietary software. Playback speed is an example of an essential property that could change if, for example, there were a future change in hardware.

Jose Carlos Martinat Mendoza's Brutalism: Stereo Reality Environment 3 was the second case study discussed by the presenters. This work of art is organized around a visual pun, evoking the Brutalist architecture of the Peruvian "Pentagonito," a government Ministry of Defense office associated with the human rights abuses of a brutal regime. Both the overall physical form of the installation, when viewed merely as sculpture, and the photographic image of the original structure reinforce the architectural message. A printer integrated into the exhibit conveys textual messages gleaned from internet searches relating to brutality. While the networked connection permits a degree of randomness and spontaneity in the information flowing from the printer, there is a backup MySQL database to provide content in the event of an interruption in the internet connection.
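That live-search-with-local-fallback arrangement is a classic resilience pattern, and it is easy to sketch. The toy version below is not the artist's code: sqlite3 stands in for the work's MySQL store, and the live search is a stub.

```python
import socket
import sqlite3

def online(host="example.com", port=80, timeout=3):
    """Crude connectivity check: can we open a socket to the web?"""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def live_search():
    """Stub standing in for the installation's real web-search routine."""
    return "live search result"

def text_for_printer(db_path="backup.db"):
    """Prefer the live internet search; fall back to the local backup
    database (sqlite3 here, MySQL in the actual work) when offline."""
    if online():
        return live_search()
    con = sqlite3.connect(db_path)
    row = con.execute(
        "SELECT text FROM snippets ORDER BY RANDOM() LIMIT 1").fetchone()
    con.close()
    return row[0] if row else ""

print(text_for_printer())
```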

The presenters emphasized that the dependencies for software-based art were built around aesthetic considerations of function. A diagram was used to illustrate the connection between artwork-level dependencies. With "artwork" in the center, three spokes radiated outward toward knowledge, interface, and computation. An example of knowledge might be the use of a password to have administrative rights to access or modify the work. A joystick or a game controller would be examples of interfaces. In Brutalism, the printer is an interface. Computation refers to the capacity and processor speed of the computer itself.
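The three-spoke model lends itself to lightweight structured documentation. Below is a toy sketch in Python that paraphrases the presenters' examples; the layout is my own, not a PERICLES schema.

```python
# Toy sketch of the talk's three-spoke dependency model: knowledge,
# interface, and computation radiating from the artwork. Entries
# paraphrase the presenters' examples; the structure is illustrative.
dependencies = {
    "Becoming": {
        "knowledge": ["proprietary playback software"],
        "interface": ["LCD screen"],
        "computation": ["playback speed tied to processor"],
    },
    "Brutalism": {
        "knowledge": ["admin password for access and modification"],
        "interface": ["integrated printer"],
        "computation": ["processor speed", "live internet connection"],
    },
}

for artwork, spokes in dependencies.items():
    for spoke, items in spokes.items():
        for item in items:
            print(f"{artwork}: {spoke} -> {item}")
```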

Virtualization has been offered as an approach to preserving these essential relationships. It separates hardware from software, creating a single file out of many. It can act as a diagnostic tool and as a preservation strategy that guards against hardware failure. The drawbacks are that it can mean copying unnecessary or undesirable files, and that the virtual machine (and the x86 virtualization architecture) could itself become obsolete. Another concern is that virtualization may not capture all of the significant properties that give the artwork its unique character. A major advantage of virtualization is that it permits the testing of dependencies such as processor speed. It also facilitates version control and comparison of different versions. The authors did not really explain the difference between emulation and virtualization, perhaps assuming that the audience already knew it: emulation uses software to replicate the original hardware environment in order to run different operating systems, whereas virtualization uses the existing underlying hardware to do so, avoiding the performance cost of the hardware-emulation step.

The presenters then explained the process that is used at the Tate. They create a copy of the hardware and software, and a copy is kept on the Tate servers. Collections are maintained in a High Value Digital Asset Repository. The presenters also described the relationship of the artist's installation requirements to the dependencies and significant properties. For example, Becoming requires a monitor with a clean black frame of specific dimensions and aspect ratio. The software controls the timing and speed of image rotation and the randomness of image changes, as well as traditional artistic elements of color and scale. With Brutalism, the language (Spanish to English) is another essential factor, along with the "liveness" of the search.

During the question and answer period, the presenters explained that they were using VMware because it was practical and readily available. An audience member asked an interesting question about the limitations of virtualization for the GPU (graphics processing unit): the current methodology at the Tate works for the CPU (central processing unit) only, not the graphics unit. The presenters indicated that they anticipated future support for the GPU.

This presentation emphasized the importance of curatorship of significant properties and documentation of dependencies in conserving software-based art. It was important to understand the artist's intent and to capture the essence of the artwork as it was meant to be presented, while recognizing that the artist's hardware, operating system, applications, and hardware drivers could all become obsolete. It was clear from the presentation that a few unanswered questions remain, but virtualization appears to be a viable preservation strategy.

43rd Annual Meeting – Textiles Specialty Group, May 14th, “Lights, Camera, Archaeology: Documenting Archaeological Textile Impressions with Reflectance Transformation Imaging (RTI)” by Emily Frank

Documenting textile impressions or pseudomorphs on archaeological objects is very challenging. In my own experience, trying to photograph textile pseudomorphs, especially when they are poorly preserved, is very difficult: it involves taking multiple shots with varying light angles, and the results are still often poor-quality images. This is why Emily Frank's paper was of particular interest to me: it presented an alternative to digital photography that is feasible and more effective for documenting textile impressions, Reflectance Transformation Imaging (RTI).
RTI is a computational documentation method that merges multiple photographs of an object into a single image that can be viewed interactively, changing the direction of light so that surface features are enhanced. During capture, the direction of the light is changed for each photo. Using open-source software, a single image is then rendered with various algorithms, allowing the viewer to move a dial and change the direction and angle of the light under which the image is viewed. Additional components in the software allow the image to be viewed with different filters or light effects that make surface features easier to see. RTI is gaining popularity as a documentation tool in conservation due to its low cost and feasibility, and several papers presented at this year's conference touched on the technique (including this paper I also blogged about).
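For the technically curious, the most common RTI flavor, the polynomial texture map (PTM), fits a small per-pixel function of the light direction by least squares. Below is a stripped-down numpy sketch of the fitting and relighting steps, ignoring color and the calibration details of real capture.

```python
import numpy as np

def fit_ptm(images, light_dirs):
    """Fit per-pixel Polynomial Texture Map coefficients.

    images:     (N, H, W) grayscale captures, one per light position
    light_dirs: (N, 2) projected light directions (lu, lv) per capture
    Returns an (H, W, 6) array of biquadratic PTM coefficients."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Biquadratic PTM basis: lu^2, lv^2, lu*lv, lu, lv, 1
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    n, h, w = images.shape
    pixels = images.reshape(n, -1).astype(float)         # (N, H*W)
    coeffs, *_ = np.linalg.lstsq(A, pixels, rcond=None)  # (6, H*W)
    return coeffs.T.reshape(h, w, 6)

def relight(coeffs, lu, lv):
    """Render the fitted surface under a new light direction."""
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return np.tensordot(coeffs, basis, axes=([2], [0]))  # (H, W) image
```

Moving the light dial in an RTI viewer is essentially calling something like relight() with a new (lu, lv) for every frame.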
There are two general light sources used for RTI. One is a dome outfitted with many LED lights that turn on and off as photographs are taken; an RTI light dome used at the Worcester Art Museum is pictured on Cultural Heritage Imaging's website (CHI is a non-profit organization that provides training and tools for this technique). However, most conservators use a lower-tech method where a light source (a camera flash or lamp, for example) is held at a fixed distance from the artifact and manually moved to a different angle for each photo. You can see an example of this method used in the field in this blog post from UCLA/Getty Conservation Program student Heather White.
In her paper, Emily focused on using RTI to document textile or basketry impressions on ceramics, as well as more ephemeral impressions, such as those left in the soil by deteriorated textiles or baskets. By using the various tools offered by the RTI software (changing the light angle, using diffuse light, or inverting the rendering so that concave surfaces of impressions look convex), she was able to see fine features not clearly visible with standard digital photography, such as the angle of fibers, striations on the surface of plant material, and the weave structure. For impressions of textiles left in soil (these were mock-ups she made in potting soil), she noted that digital photography was not very effective because there was no contrast, and the impressions were so fragile that they could not be lifted or moved for better examination or imaging. Using RTI, however, she was able to clearly see that the textiles were crocheted.
In describing her setup and workflow, Emily explained that she took photos of the impressions indoors as well as outdoors (for the soil impressions). She was able to take good images outdoors, but it was better to do RTI at dusk with lower light. She took a minimum of 12 shots per impression at 3 different angles, using a flash as her light source. In all, she said it took about 10 minutes to shoot each impression.
When compared to digital photography, RTI is a useful and feasible technique for the documentation of impressions, and it worked well for most of the impressions Emily tried to record. RTI worked well as a stand-alone documentation method for about 40% of the impressions she photographed, but it is more effective as an examination and documentation tool in combination with standard digital photography. RTI is on its way to becoming a more standardized documentation method in conservation. It appears to be effective for recording low-contrast, low-relief surfaces, such as textile impressions, and may be the best method for recording ephemeral or extremely fragile surfaces that cannot be preserved. I'm excited about the potential of RTI for impressions and look forward to trying it out the next time I have to record textile impressions or organic pseudomorphs on an archaeological object.

42nd Annual Meeting – Paintings, May 30, "Piet Mondrian: Technical Studies and Treatment" by Ana Martins, Associate Research Scientist, MoMA, and Cynthia Albertson, Assistant Conservator, MoMA

NYC’s Museum of Modern Art owns sixteen Piet Mondrian oil paintings, the most comprehensive collection in North America. From this starting point, conservator Cynthia Albertson and research scientist Ana Martins embarked on an impressive project, both in breadth and in consequence—an in-depth technical examination across all sixteen Mondrians. All examined paintings are fully documented, and the primary preservation goal is returning the artwork to the artist’s intended state. Paint instability in the artist’s later paintings will also be treated with insight from the technical examination.
The initial scope of the project focused on nondestructive analysis of MoMA's sixteen oil paintings. As more questions arose, other collections and museum conservators were called upon to provide information on their Mondrians; over 200 other paintings were consulted over the course of the project. Of special importance to the conservators were untreated Mondrians, as they could help answer questions about the artist's original varnish choices and artist-modified frames. Mondrian's technique of reworking areas of his own paintings was also under scrutiny, as it called into question whether newer paint on a canvas was his or a restorer's overpaint. Fortunately, the MoMA research team had a variety of technology at their disposal: X-radiography, Reflectance Transformation Imaging (RTI), X-ray fluorescence (XRF) spectroscopy, and XRF mapping were all tools referenced in the presentation.
The lecture discussed three paintings to provide an example of how preservation issues were addressed and how the research process revealed information on unstable paint layers in later Mondrian paintings. The paintings were Tableau no. 2 / Composition no. V (1914), Composition with Color Planes 5 (1917), and Composition C (1920), but for demonstration’s sake only the analysis of the earliest painting will be used as an example here.
Tableau no. 2 / Composition no. V (1914) was on a stretcher that was too thick; the painting had been wax-lined, was covered in a thick, glossy varnish, and had corrosion products along the tacking edges. Research identified the corrosion as accretions from a gold frame that the artist added for an exhibition. The painting has some obviously reworked areas, distinguished by dramatic variations in texture, and a painted-over signature; these changes are visible in the technical analysis. The same research that identified the source of the corrosion also explained that Mondrian reworked and resigned the painting for the exhibition. XRF mapping of the pigments, fillers, and additives provided an early baseline of materials against which to compare later works, as the paint here did not exhibit the cracking of later examples. Ultimately, the restorer's varnish was removed to return the paint surface to its intended matte appearance, and the wax lining was mechanically separated from the canvas with a specially produced Teflon spatula. Composition no. V (1914) was then strip-lined and re-stretched onto a stretcher of more appropriate width.
It is possible to create a timeline of Mondrian's working methods with information gleaned from the technical examination of all three paintings. His technique evolved from an overall matte surface to variations in varnish glossiness between painted areas. XRF analysis demonstrated a shift in his palette, with the addition of vermillion, cobalt, and cadmium red in his later works. XRF also revealed that the artist used registration lines of zinc and lead whites, both mixed together and on their own. Knowing the chemical composition of Mondrian's paint is vital to understanding the nature of the cracking media and identifying techniques to preserve it.
The underpinning of all this research is documentation. This means both accounting for undocumented or poorly documented past restorations and elaborating upon existing references. Many of the MoMA paintings had minimal photographic documentation, which hinders the ability of conservators to identify changes to the work over time. The wealth of information gathered by the conservation and research team remains within the museum's internal database, but there are plans to expand access to the project's data. Having already worked in collaboration with many Dutch museums for access to their Mondrian collections, it's clear to the MoMA team how groundbreaking a compiled database of all their research and documentation would be for the conservation and art history fields.

AIC's 41st Annual Meeting- Art on Paper Discussion Group

The inaugural meeting for this group took place on May 31, 2013 at the AIC Annual Meeting in Indianapolis, IN. Organized by Nancy Ash, Scott Homolka, Stephanie Lussier and Eliza Spaulding, the session presented the Draft Guidelines for Descriptive Terminology for Works of Art on Paper, a project underway at the Philadelphia Museum of Art and supported by an IMLS 21st Century Museum Professionals Grant.

41st Annual Meeting-Electronic Media Session, May 31, "Technical Documentation of Source Code at the Museum of Modern Art" by Deena Engel and Glenn Wharton

Glenn Wharton began with an overview of the conservation of electronic media at the Museum of Modern Art (MoMA). When he set up the Media Conservation program at MoMA in 2005, there were over 2,000 media objects, mostly analog video, and only 20 software objects. The main focus of the program was digitizing analog video and audio tapes. Wharton was a strong advocate for the involvement of IT experts from the very beginning of the process. Over time, they developed a working group representing all 7 curatorial departments, collaborating with IT and artists to assess, document, and manage electronic media collections.
Wharton described the risk assessment approach that MoMA has developed for stewardship of its collections, which includes evaluation of software dependency and operating system dependency for digital objects.  They have increased the involvement of technical experts, and they have collaborated with Howard Besser and moving image archivists.
The presenters chose to focus on project design and objectives; they plan to publish their findings in the near future. Glenn Wharton described the three case study artworks: Thinking Machine 4, Shadow Monsters, and 33 Questions per Minute. He explained how he collaborated with NYU computer science professor Deena Engel to harness the power of a group of undergraduate students to provide basic research into source code documentation. Thinking Machine 4 and Shadow Monsters were both written in Processing, an open-source programming language based on Java. On the other hand, 33 Questions per Minute was written in Delphi, a language derived from Pascal; Delphi is not very popular in the US, so the students were challenged to learn an unfamiliar language.
Engel explained that source code can be understood by anyone who knows the language, just as one might read and comprehend a foreign language. She discussed the need for software maintenance, which is common across various types of industries and not unique to software-based art projects. Software maintenance is needed when the hardware is altered, the operating system is changed, or the programming language is updated. She also explained four types of code documentation: annotation (comments) in the source code, narratives, visuals, and Unified Modeling Language (UML) diagrams.
Engel discussed the ways that the source code affects the output or the user experience and the need to capture the essential elements of presentation in artwork, which are unique to artistic software. In 33 Questions per Minute, the system configuration includes a language setting with options for English, German, or Spanish. Some functions were operating system-specific, such as the Mac-Unix scripts that allow the interactive artwork Shadow Monsters to reboot if overloaded by a rambunctious school group flooding the gallery with lots of moving shadows. Source code specified aesthetic components such as color, speed, and randomization for all of the case study artworks.
One interesting discovery was the amount of code that was "commented out." Similar to studies, underdrawings, or early states of a print, these were areas of code that had been deactivated without being deleted, and they could be examined as evidence of the artist's working methods.
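Both of these points, aesthetic parameters living in source code and deactivated code preserved in place, are easy to picture in miniature. The hypothetical fragment below is in Python rather than the Processing or Delphi of the case studies, and none of it comes from the actual artworks.

```python
import random

# Hypothetical fragment: the kinds of aesthetic decisions the students
# documented (color, speed, randomization) live directly in source code.
PALETTE = ["#1a1a1a", "#e63946", "#f1faee"]  # color choices
STEP_DELAY_S = 0.04                          # speed: 25 updates per second
random.seed()                                # randomization: unseeded run

# PALETTE = ["#000000", "#ff0000"]           # "commented out": an earlier
                                             # state preserved rather than
                                             # deleted, like an underdrawing

def next_color():
    """Randomization decides which color appears next."""
    return random.choice(PALETTE)

print([next_color() for _ in range(5)])
```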
Engel concluded by mentioning that the field of reproducibility in scientific research is also involved with documenting and preserving source code, in order to replicate data-heavy scientific experiments. Of course, scientists are more concerned with handling very large data sets, while museums are more concerned with replicating the look and feel of the user experience. Source code documentation will be one more tool to inform conservation decisions, complementing the artist interview and other documentation of software-based art.
Audience members asked several questions regarding intellectual property issues, especially if the artists were using proprietary software rather than open-source software. There were also questions raised about artists who were reluctant to share code. Glenn Wharton explained that MoMA is trying to acquire code at the same time that the artwork is acquired. They can offer the option of a sort of embargo or source code "escrow," where the source code would be preserved but not accessed until some time in the future.

39th Annual Meeting – Joint Paintings/Research and Technical Studies Session, June 3, “Speed, Precision, And A Lighter Load: Metigo MAP 3.0, A Great Advancement In Condition Mapping For Large-Scale Projects” by Emily MacDonald-Korth

Emily MacDonald-Korth presented on the usefulness of a new condition mapping program called Metigo MAP 3.0. She began her presentation with a description of a collaborative project between the University of Delaware and Tsinghua University (Beijing), led by Dr. Susan Buck (Winterthur/University of Delaware Program in Art Conservation) and Dr. Liu Chang (Tsinghua University), to examine and document Buddhist murals and polychromy in the Fengguo Temple (Fengguosi), located in Yixian County, Liaoning Province, China. The four interior walls of the temple are lined with murals, which were in very poor condition, their images obscured by loss and other damage.

The Metigo MAP software allowed the conservation team to map the murals' condition issues in a short period of time. The software incorporates mapping, digital imaging, and area measurement tools; it streamlines the mapping process and is easy to use. Emily compared the software to commonly used documentation techniques and illustrated the limitations of each.

Metigo MAP was created by the German company fokus GmbH Leipzig, which is dedicated to architectural surveying and the documentation of large-scale conservation projects.

Maps are produced by uploading images into the software, which can then be drawn on and annotated. The program makes the image true to scale and can rectify skewed images to the proper orientation, which allows the use of images taken from an angle when the subject is not accessible from the front. By inputting the dimensions of the painting, the software can give exact locations of areas of interest and calculate the surface area of damage; this feature can also be useful in making time estimates for proposals on big projects. An image-processing setting allows for photo editing to aid mapping. Mapped images can then be exported as .tif files and opened in other programs.
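The surface-area feature is straightforward arithmetic once an image has been rectified to true scale. Here is a rough numpy sketch of the idea (not Metigo MAP's actual code): count annotated pixels and multiply by the real-world area each pixel covers.

```python
import numpy as np

def damage_area_cm2(mask, image_size_px, real_size_cm):
    """Estimate mapped damage area from a boolean annotation mask,
    assuming the image has already been rectified to true scale.

    mask:          (H, W) boolean array, True where damage was drawn
    image_size_px: (width_px, height_px) of the rectified image
    real_size_cm:  (width_cm, height_cm) of the actual surface"""
    width_px, height_px = image_size_px
    width_cm, height_cm = real_size_cm
    cm2_per_px = (width_cm / width_px) * (height_cm / height_px)
    return mask.sum() * cm2_per_px

# Toy example: a 10 x 10 px damage patch on a 1000 x 800 px image of a
# 300 x 240 cm mural; each pixel covers 0.3 x 0.3 cm, so 9.0 cm^2.
mask = np.zeros((800, 1000), dtype=bool)
mask[100:110, 200:210] = True
print(damage_area_cm2(mask, (1000, 800), (300, 240)))
```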

For the presentation, Emily chose three murals as representative of the condition issues noted overall. The conservators worked as a team, using Metigo MAP to document the condition of the murals. Once the murals were mapped, the maps could easily be compared for condition issues. The software can also be used to map the locations of samples, and annotations can be added to the maps for future reference.

For large-scale projects, or projects that are particularly difficult to photograph, users can employ the software's tiling function to piece together the rectified image, which allows the project to be seen unobstructed.

Emily also illustrated how Metigo MAP can be used to document experiments: she has used the software on a graffiti-removal research project at the Getty to document surface changes and areas of treated surfaces.

Emily summed up the talk with an excellent slide comparing the pros and cons of the software.  The pros included:  easy mapping, image processing, rectification, measurement functions, compatibility with other software, and easy interface.  Cons included:  requires initial training, no white balance (but this can be done on photoshop beforehand), and cost (more expensive than adobe creative but less expensive than autocad).