
#openUpEditions


Matteo Romanello, @mr56k, is presenting Distributed Text Services (DTS) at the #openUpEditions #RoundTable about infrastructures for editions. Paper on DTS: Almas, B., Cayless, H., Clérice, T., Jolivet, V., Liuzzo, P. M., Robie, J., Romanello, M., & Scott, I. (2023). Distributed Text Services (DTS): A Community-Built API to Publish and Consume Text Collections as Linked Data. Journal of the Text Encoding Initiative. doi.org/10.4000/jtei.4352
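The paper describes DTS as an API for publishing and consuming text collections as Linked Data, exchanged as JSON-LD. As a rough illustration of what consuming a DTS Collection response could look like, here is a minimal Python sketch; the sample payload, the server URL, and the member identifiers are invented for illustration, and the field names reflect my reading of the DTS Collection endpoint rather than any particular server's output:

```python
import json

# Hypothetical DTS Collection endpoint response. The URL, identifiers, and
# titles are made up; the "@type", "title", "totalItems", and "member" keys
# follow the JSON-LD shape described by the DTS specification.
sample_response = json.loads("""
{
  "@id": "https://example.org/api/dts/collection/?id=corpus",
  "@type": "Collection",
  "title": "Example Corpus",
  "totalItems": 2,
  "member": [
    {"@id": "letter-1", "@type": "Resource", "title": "Letter 1"},
    {"@id": "letter-2", "@type": "Resource", "title": "Letter 2"}
  ]
}
""")

def list_members(collection: dict) -> list[tuple[str, str]]:
    """Return (identifier, title) pairs for each member of a DTS collection."""
    return [(m["@id"], m["title"]) for m in collection.get("member", [])]

print(list_members(sample_response))
```

In a real client the JSON would of course come from an HTTP request to the server's Collection endpoint rather than a hardcoded string.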

Replied to Stefan Dumont

@stefandumont Thanks for the indirect tip about #openUpEditions. I joined this afternoon.
UI and usability do not seem to be very prominent in the discussion; it is mostly about modelling questions. And the implicitly assumed users are researchers and scholars, i.e. the target audience of the conference itself. I would find it even more exciting if the modelling also kept a generally comprehensible UI in view, one that not only academics can use.
@dta_cthomas

The digital edition of Max Frisch's notebooks is an interesting and well-executed project, built with Oxygen XML, TEI Publisher (here also with NER), ODD, and the DTABf. Interesting aspects include the handling of page order (because Frisch writes out of sequence) and the ODD processing model for the reading and critical versions (if I understood that correctly). #openUpEditions

Ever wondered how to tackle code-switching in historical documents? In our @up_johd contribution (doi.org/10.5334/johd.174) we describe how we tackled Latin, Early New High German, normalisation, machine translation, and handwritten text recognition for Bullinger's multilingual correspondence. The results are all on bullinger-digital.ch. And should you be at the conference @uzh_zde right now and have questions, feel free to ask.

openhumanitiesdata.metajnl.com

Multilingual Workflows in Bullinger Digital: Data Curation for Latin and Early New High German

This paper presents how we enhanced the accessibility and utility of historical linguistic data in the project Bullinger Digital. The project involved the transformation of 3,100 letters, primarily available as scanned PDFs, into a dynamic, fully digital format. The expanded digital collection now includes 12,000 letters: 3,100 edited, 5,400 transcribed, and 3,500 represented through detailed metadata and results from handwritten text recognition. Central to our discussion is the innovative workflow developed for this multilingual corpus. This includes strategies for text normalisation, machine translation, and handwritten text recognition, particularly focusing on the challenges of code-switching within historical documents.

The resulting digital platform features an advanced search system, offering users various filtering options such as correspondent names, time periods, languages, and locations. It also incorporates fuzzy and exact search capabilities, with the ability to focus searches within specific text parts, like summaries or footnotes.

Beyond detailing the technical process, this paper underscores the project's contribution to historical research and digital humanities. While the Bullinger Digital platform serves as a model for similar projects, the corpus behind it demonstrates the vast potential for data reuse in historical linguistics. The project exemplifies how digital humanities methodologies can revitalise historical text collections, offering researchers access to and interaction with historical data. This paper aims to provide readers with a comprehensive understanding of our project's scope and broader implications for the field of digital humanities, highlighting the transformative potential of such digital endeavours in historical linguistic research.
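The abstract describes a search system combining exact and fuzzy matching with metadata filters (correspondent, time period, language, location). The general idea can be sketched in a few lines of Python; this is not the Bullinger Digital implementation, and the record fields, field values, and similarity threshold are all invented for illustration (the fuzzy matching here uses the standard library's `difflib` similarity ratio as a stand-in for whatever the platform actually uses):

```python
from difflib import SequenceMatcher

# Toy letter records: fields and values are made up for illustration,
# not taken from the Bullinger Digital data model.
letters = [
    {"correspondent": "Heinrich Bullinger", "language": "la", "year": 1546},
    {"correspondent": "Ambrosius Blarer", "language": "de", "year": 1549},
    {"correspondent": "Heinrich Bullinger", "language": "de", "year": 1551},
]

def fuzzy_match(query: str, value: str, threshold: float = 0.8) -> bool:
    """Approximate string match via difflib's similarity ratio."""
    return SequenceMatcher(None, query.lower(), value.lower()).ratio() >= threshold

def search(query, language=None, year_range=None, fuzzy=True):
    """Combine a (fuzzy or exact) name query with optional metadata filters."""
    hits = []
    for rec in letters:
        name_ok = (fuzzy_match(query, rec["correspondent"]) if fuzzy
                   else query == rec["correspondent"])
        lang_ok = language is None or rec["language"] == language
        year_ok = year_range is None or year_range[0] <= rec["year"] <= year_range[1]
        if name_ok and lang_ok and year_ok:
            hits.append(rec)
    return hits

# A fuzzy query tolerates the misspelling "Bulinger":
print(search("Heinrich Bulinger", language="de"))
```

Restricting the search to specific text parts (summaries, footnotes) would follow the same pattern, with the match applied only to the selected field.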