Tess Colwell, Catherine DeRose and Lindsay King; Yale University
27 May 2021
This presentation demonstrates how Yale's open-source PixPlot [http://github.com/YaleDHLab/pix-plot] software provides a platform for engaging and interpreting Visual Resource Collections (VRC) in new ways and at new scales. The software uses a convolutional neural network to 'featurize' tens of thousands of images into a high-dimensional space, then applies a dimensionality reduction algorithm to display those images in a web browser such that visually similar images cluster near one another. For example, images of human sculptures will cluster together while painted portraits will form their own cluster. These algorithmically generated clusters provide quick, high-level insights into large visual collections. Using Yale's VRC—which comprises more than 370,000 digital items, including lantern slides, 35mm slides, and photographs—as a case study, we will demonstrate how PixPlot works from both a creator and end-user point of view, walking through the steps from dataset ingestion to curation, selection, and export.
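The featurize-then-reduce pipeline described above can be sketched in a few lines. This is an illustrative stand-in, not PixPlot's actual implementation: random vectors substitute for CNN image features, and scikit-learn's PCA stands in for the dimensionality-reduction step that maps each image to 2-D browser coordinates.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in for CNN-"featurized" images: 1,000 images, each a
# 2,048-dimensional feature vector (a common CNN embedding size).
features = rng.normal(size=(1000, 2048))

# Reduce the high-dimensional features to 2-D layout coordinates;
# visually similar images (nearby feature vectors) land near one another.
coords = PCA(n_components=2).fit_transform(features)

print(coords.shape)  # one (x, y) position per image
```

In the real system, each (x, y) pair becomes an image's position in the browser-based visualization, which is what makes the cluster structure of a large collection visible at a glance.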
This presentation was delivered at the (en)coding Heritage Seminar Series, which brought together researchers working at the cutting edge of digital technologies, humanities and heritage science. The session was dedicated to New Directions in Digital Visual Studies. The full programme can be found here.
Organised and chaired by Dr Lia Costiner (University of Oxford) and Dr Leonardo Impett (Durham University) for the Oxford (en)coding Heritage Network.