Analyse, sort, classify and mark up: text encoding and its debt to historical dictionaries and word lists
Dr Julianne Nyhan (University College London).
Though Digital Humanities (DH) has been defined in many ways, it is widely agreed that text is central to it. One of the main achievements of DH has been the development of the TEI Guidelines, which are a de facto standard for the XML encoding of texts in order to make them machine readable (see http://www.tei-c.org/index.xml). Within DH the technical and organisational history of TEI and XML has received some attention (see, for example, Schmidt 2010 and Vanhoutte 2009), but its intellectual history and contexts much less so. The field of lexicography has a long tradition of developing and using text technologies and techniques that make complex texts (albeit to varying degrees) navigable and searchable, through the use of, for example, thematic ordering and alphabetisation. In this talk I will look to the history of lexicography in order to situate text encoding within a wider history of the techniques that have historically been used to make text findable and searchable. Drawing on those who have worked on the sociology and history of knowledge, such as Peter Burke (2000), I will argue that our current understanding of the act of text encoding is impoverished, and propose some future research directions for DH. To close, I will reflect briefly on what the history of lexicography has to contribute to DH (as opposed to the other way around).
Burke, Peter (2000). A Social History of Knowledge: From Gutenberg to Diderot. Cambridge & Malden: Polity Press.
Schmidt, D. (2010). 'The inadequacy of embedded markup for cultural heritage texts.' Literary and Linguistic Computing, 25(3), pp. 337–356.
Vanhoutte, Edward & Ron Van den Branden. 'The Text Encoding Initiative.' In Marcia J. Bates and Mary Niles Maack (eds.), ELIS. Encyclopedia of Library and Information Sciences. Taylor & Francis, pp. 5172–5181. DOI: 10.1081/E-ELIS3-120043748. [Preprint]
Dr Julianne Nyhan is Lecturer in Digital Information Studies in the Department of Information Studies, University College London. Her research interests include the history of computing in the Humanities and most aspects of digital humanities, with special emphasis on meta-markup languages and digital lexicography. She has published widely in Digital Humanities; most recently she has co-edited Digital Humanities in Practice (Facet 2012, http://www.facetpublishing.co.uk/title.php?id=7661) and Defining Digital Humanities: A Reader (Ashgate 2013, http://tinyurl.com/o77ssw2). Among other things, she is a member of the AHRC's Peer Review College, a member of the European Science Foundation's expert working group on Research Infrastructures in the Humanities, and European Liaison Manager in the UCL Centre for Digital Humanities. Find her on Twitter @juliannenyhan; she blogs at http://archelogos.hypotheses.org/author/archelogos.
HiCor: a Cross-Disciplinary Network for History and Corpus Linguistics
Contact name: Barbara McGillivray
Contact email: email@example.com
Audience: Open to all