
Universal Dependencies 2.6 released!


Check out the new Universal Dependencies (UD) release V2.6! This is the twelfth release of the annotated treebanks at http://universaldependencies.org/.  The project now covers syntactically annotated corpora in 92 languages, including Coptic. The size of the Coptic Treebank is now around 43,000 words, and growing. For the latest version of the Coptic data, see our development branch here: https://github.com/UniversalDependencies/UD_Coptic-Scriptorium/tree/dev. For documentation, see the UD Coptic annotation guidelines.

The inclusion of the Coptic Treebank in the UD dataset means that many standard parsers and other NLP tools trained on all well-attested UD languages now support Coptic out of the box, including Stanford NLP’s Stanza and UFAL’s UDPipe. Feel free to try these libraries on your data! For optimal performance on open-domain Coptic text, we still recommend our custom toolchain, Coptic-NLP, which is highly optimized for Coptic and uses additional resources beyond the treebank. Or try it out online:

Coptic-NLP demo
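If you would like to try the out-of-the-box UD route mentioned above, the snippet below is a minimal sketch using Stanza’s pretrained Coptic model (language code cop); UDPipe can be used in much the same way. Keep in mind that accuracy on unnormalized, out-of-domain text will generally be lower than with the Coptic-NLP pipeline.

```python
import stanza

# Download the pretrained Coptic models (trained on the UD Coptic-Scriptorium treebank)
stanza.download("cop")

# The default pipeline tokenizes, tags, lemmatizes and parses
nlp = stanza.Pipeline("cop")

doc = nlp("ⲉⲃⲟⲗ ϩⲙ ⲡⲣⲟ")  # 'out of the door' (Genesis 18:2)
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.lemma, word.upos, word.deprel, sep="\t")
```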

 

Winter 2020 Corpora Release 3.1.0

It is our pleasure to announce a new data release, with a variety of new sources from our collaborators (including more digitized data courtesy of the Marcion and PAThs projects and other scholars). New in this release are:

All documents have metadata for word segmentation, tagging, and parsing to indicate whether those annotations are machine annotations only (automatic), checked for accuracy by an expert in Coptic (checked), or closely reviewed for accuracy, usually as a result of manual parsing (gold).

You can search all corpora using ANNIS and download the data in four formats (relANNIS database files, PAULA XML files, TEI XML files, and SGML files in TreeTagger format): browse on GitHub. If you just want to read works, cite project data, or browse metadata, you can use our updated repository browser, the Canonical Text Services browser and URN resolver:

http://data.copticscriptorium.org/

The new material in this release includes some 78,000 tokens in 33 documents and represents a tremendous amount of work by our project members and collaborators. We would like to thank the individual contributors (which you can find in the ‘annotation’ metadata), the Marcion and PAThs projects who shared their data with us, and the National Endowment for the Humanities for supporting us. We are continuing to work on more data, links to other resources and new kinds of annotations and tools, which we plan to make available in the summer. Please let us know if you have any feedback!

The Coptic Dictionary Online wins the 2019 DH Award for Best Tool

We are very happy to announce that the Coptic Dictionary Online (CDO) has won the 2019 Digital Humanities Award in the category Best Tool or Suite of Tools! The dictionary interface, shown below, lets users search by Coptic word forms, by definitions in three languages (English, French, and German), by patterns and parts of speech, and more. We have also added links to quantitative corpus data, including attestation and collocation analyses based on data published by Coptic Scriptorium.

The dictionary is the result of a collaboration between Coptic Scriptorium and lexicographers in Germany at the Berlin-Brandenburg and Göttingen Academies of Sciences, the Free University of Berlin, and the Universities of Göttingen and Leipzig. This collaboration has been funded by the National Endowment for the Humanities (NEH) and the German Research Foundation (DFG).

To read more about the dictionary’s structure and creation, see Feder et al. (2018).

A view of the Coptic Dictionary Online

Fall 2019 Corpora Release 3.0.0

Coptic Scriptorium is happy to announce our latest data release, including a variety of new sources thanks to our collaborators (digitized data courtesy of the Marcion and PAThs projects!). New in this release are:

  • Saints’ lives
    • Life of Cyrus
    • Life of Onnophrius
    • Lives of Longinus and Lucius
    • Martyrdom of Victor the General (part 2)
  • Miscellaneous:
    • Dormition of John
    • Homilies of Proclus
    • Letter of Pseudo-Ephrem

We are also releasing expansions to some of our existing corpora, including:

  • Canons of Johannes (new material annotated by Elizabeth Platte and Caroline T. Schroeder, digital edition provided by Diliana Atanassova)
  • Apophthegmata Patrum
  • A large number of corrections to most of our existing corpora, which are being republished in this release.

All documents have metadata for word segmentation, tagging, and parsing to indicate whether those annotations are machine annotations only (automatic), checked for accuracy by an expert in Coptic (checked), or closely reviewed for accuracy, usually as a result of manual parsing (gold).

You can search all corpora using ANNIS and download the data in four formats (relANNIS database files, PAULA XML files, TEI XML files, and SGML files in TreeTagger format): browse on GitHub. If you just want to read works, cite project data, or browse metadata, you can use our updated repository browser, the Canonical Text Services browser and URN resolver:

http://data.copticscriptorium.org/

Our total annotated corpora now comprise over 850,000 words; corpora in which human editors have reviewed the machine annotations now exceed 150,000 words!

We would like to thank Marcion, PAThs and the National Endowment for the Humanities for supporting us – we hope this release will be useful and are already working on more!

New release of Natural Language Processing Tools

Amir Zeldes and Luke Gessler have spent much of the past summer improving Coptic Scriptorium’s Natural Language Processing tools, and are now happy to announce the release of Coptic-NLP V3.0.0. You can read more about what we’ve been doing and the impact on performance in our three-part blog post (part 1, part 2, part 3). Some of the new improvements include:

  • A new three-step normalization framework, which lets us hypothetically normalize bound groups before deciding how to segment them, then normalize each segment again
  • A smart rebinding module that decides, based on context, whether to merge split bound groups (useful for processing messy texts with line breaks mid-word or other segmentation anomalies)
  • A reimplemented segmentation algorithm that is markedly better at handling ambiguous groups in context (e.g. “nau” in “peja|f na|u” vs. “nau ero|f”) and spelling variation
  • A brand-new, more accurate part-of-speech tagger
  • Higher accuracy across tools thanks to hyperparameter optimization
  • A more robust test suite to ensure new errors don’t creep in
  • Various data/lexicon/ruleset improvements and bugfixes

You can download the latest version of the tools here:

https://github.com/CopticScriptorium/coptic-nlp/

Or use our web interface, which has been updated with the latest version:

https://corpling.uis.georgetown.edu/coptic-nlp/

We appreciate your feedback and comments, and hope to release more data processed with these tools very soon!

Dealing with Heterogeneous Low Resource Data – Part III

(This post is part of a series on our 2019 summer’s work improving processing for non-standardized Coptic resources)

In this post, we present some of our work on integrating more ambitious automatic normalization tools that allow us to deal with heterogeneous spelling in Coptic, and give some first numbers on improvements in accuracy through this summer’s work.

Three-step normalization

In 2018, our normalization strategy was a basic statistical one: look up previously normalized forms in our data and choose the most frequent normalization. Because some spelling variations are frequent, we also had a rule-based system post-process the statistical normalizer’s output, to expand, for example, common spellings of nomina sacra (e.g. ⲭⲥ for ⲭⲣⲓⲥⲧⲟⲥ ‘Christ’), even when they appeared as part of larger bound groups (ⲙⲡⲉⲭⲣⲓⲥⲧⲟⲥ ‘of the Christ’, sometimes spelled ⲙⲡⲉⲭⲥ).

One of the problems with this strategy is that for many individual words we may know common normalizations, such as the spelling ⲏⲉⲓ for ⲏⲓ ‘house’, but recognizing that normalization should be carried out depends on correct segmentation – if the system sees ⲙⲡⲏⲉⲓ ‘of the house’, it may not be certain that normalization should occur. Paradoxically, correct normalization vastly improves segmentation accuracy, which is in turn needed for normalization… resulting in a vicious circle.

To address the challenges of normalizing Coptic orthography, this summer we developed a three-step process (a toy code sketch follows the list):

  • We consider hypothetical normalizations which could be applied to bound groups if we spelled certain words together, then choose what to spell together (see Part II of this post series)
  • We consider normalizations for the bound groups we ended up choosing, based on past experience (lookup), rules (finite-state morphology) and machine learning (feature based prediction)
  • After segmenting bound groups into morphological categories, we consider whether the segmented sequence contains smaller units that should be normalized
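Stripped of the actual models, the control flow looks roughly like the toy sketch below. The lookup tables reuse the ⲭⲥ and ⲏⲉⲓ examples above but are otherwise illustrative stand-ins, and the function names are ours, not the real Coptic-NLP API.

```python
# Toy skeleton of the three-step flow; every table and rule is a stand-in.

GROUP_NORM   = {"ⲙⲡⲉⲭⲥ": "ⲙⲡⲉⲭⲣⲓⲥⲧⲟⲥ"}        # step 2: whole-group normalizations
SEGMENT_NORM = {"ⲏⲉⲓ": "ⲏⲓ"}                    # step 3: per-segment normalizations

def step1_rebind(tokens):
    """Decide which whitespace to delete, weighing hypothetically fused groups."""
    # In the real system this is a classifier over hypothetical normalizations
    # (see Part II); here we simply keep the input unchanged.
    return tokens

def step2_normalize_group(group):
    """Normalize a whole bound group by lookup (rules and ML omitted here)."""
    return GROUP_NORM.get(group, group)

def step3_normalize_segments(segments):
    """After segmentation, normalize any smaller units inside the group."""
    return [SEGMENT_NORM.get(seg, seg) for seg in segments]

groups = step1_rebind(["ⲙⲡⲉⲭⲥ", "ⲙⲡⲏⲉⲓ"])
print([step2_normalize_group(g) for g in groups])     # -> ['ⲙⲡⲉⲭⲣⲓⲥⲧⲟⲥ', 'ⲙⲡⲏⲉⲓ']
print(step3_normalize_segments(["ⲙ", "ⲡ", "ⲏⲉⲓ"]))    # -> ['ⲙ', 'ⲡ', 'ⲏⲓ']
```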

To illustrate how this works, we can consider the following example:

Coptic edition:   ⲙ̅ⲡ       ⲉⲓⲙ̅ⲡϣⲁ
Romanized:        mp        ei|mpša
Gloss:            didn’t    I-worthy
Translation:      “I was not worthy”

These words should be spelled together by our conventions, based on Layton’s (2011) grammar, but Budge’s (1914) edition places a space here, and the first person marker is irregularly spelled epsilon-iota, ⲉⲓ ‘I’, instead of just iota, ⲓ. When resolving whitespace ambiguity, we ask how likely it is that mp stands alone (unlikely), but also whether mp+ei… is grammatical, which in the current spelling might not be recognized. Our normalizer needs to recognize that the hypothetical fused group should be spelled ⲙⲡⲓⲙⲡϣⲁ, mpimpša. Since this particular form has not appeared before in our corpora, we rely on data augmentation: our system internally generates variant spellings, for example substituting the common variant ⲉⲓ for ⲓ in words we have seen before, producing the mapping ⲙⲡⲉⲓⲙⲡϣⲁ -> ⲙⲡⲓⲙⲡϣⲁ. The augmentation system relies both on previously seen forms (a normally spelled ⲙⲡⲓⲙⲡϣⲁ, which, however, we have also not seen before) and on combinations produced by a grammar (it considers the negative past auxiliary ⲙⲡ followed by all subject pronouns and verbs in our lexicon, which does yield the necessary ⲙⲡⲓⲙⲡϣⲁ).
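A toy sketch of this augmentation step: we generate candidate respellings of an unseen group by applying a known variation (here only ⲉⲓ ~ ⲓ) and keep a candidate that a small grammar, or a list of previously seen forms, can account for. The pronoun list and function names below are illustrative stand-ins, not the project’s actual lexicon or code.

```python
# Toy sketch of the augmentation step: generate candidate respellings of an
# unseen bound group and keep one that a small grammar (or previously seen
# forms) can account for. All data and names here are illustrative stand-ins.

VARIATIONS = [("ⲉⲓ", "ⲓ")]       # one common spelling variation: ⲉⲓ ~ ⲓ

# Forms a tiny "grammar" can generate: negative past ⲙⲡ + subject pronoun + verb,
# using only a few subject pronouns for illustration
PRONOUNS = ["ⲓ", "ⲕ", "ϥ", "ⲥ", "ⲛ", "ⲟⲩ"]
GRAMMATICAL = {"ⲙⲡ" + pron + "ⲙⲡϣⲁ" for pron in PRONOUNS}

def candidates(group):
    """Yield respellings produced by applying each variation at each position."""
    for variant, normal in VARIATIONS:
        start = group.find(variant)
        while start != -1:
            yield group[:start] + normal + group[start + len(variant):]
            start = group.find(variant, start + 1)

def augmented_normalize(group):
    """Return a grammatical (or previously seen) respelling, else the input."""
    for cand in candidates(group):
        if cand in GRAMMATICAL:
            return cand
    return group

print(augmented_normalize("ⲙⲡⲉⲓⲙⲡϣⲁ"))   # -> ⲙⲡⲓⲙⲡϣⲁ (mpimpša)
```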

The segmenter is then able to successfully segment this into mp|i|mpša, and we therefore decide:

  1. This should be fused
  2. This can be segmented into three segments
  3. The middle segment is the first person pronoun (with non-standard spelling)
  4. It should be normalized (and subsequently tagged and lemmatized)

Even if normalization had failed for the whole word group, there would still be a chance that the machine learning segmenter would recognize mpša ‘worthy’ and split it apart, which means that segmentation is somewhat less affected by normalization errors than it was in our tools a year ago.

How big of a deal is this?

It’s hard to convey what each improvement like this does for the quality of our data, but we’ll try to give some numbers and contextualize them here. The table below shows an evaluation of the 2018 and 2019 tools on the same training and test data: in-domain data comes from the UD Coptic-Scriptorium 2.4 test set, and out-of-domain data comes from two texts from editions by W. Budge, the Life of Cyrus and the Repose of John the Evangelist, previously digitized by the Marcion project. The distinction between in-domain and out-of-domain is important here: in-domain evaluation gives the tools test data drawn from the same distribution of text types they were trained on, and is consequently much less surprising. Out-of-domain data comes from text types the system has not seen before, edited with very different editorial practices.

 

task          | 2018, in domain  | 2018, out of domain | 2019, in domain  | 2019, out of domain
spaces        | NA*              | 96.57               | NA*              | 98.08
orthography   | 98.81            | 95.79               | 99.76            | 97.83
segmentation  | 97.78 (96.86**)  | 93.67 (92.28**)     | 99.54 (99.36**)  | 96.71 (96.25**)
tagging       | 96.23            | 96.34               | 98.35            | 98.11

 

* In-domain data has canonical whitespace and cannot be used to test whitespace normalization

** Numbers in brackets represent automatically normalized input (more realistic, but harder to judge performance of segmentation as an isolated task)

 

The numbers show that several tools have improved dramatically across the board, even for in-domain data – most noticeably the part-of-speech tagger and normalizer modules. The improvement in segmentation accuracy is much more marked on out-of-domain data, especially if we do not give the segmenter manually normalized text as input (numbers in brackets). In that case, automatic normalization is used, and the improvements in normalization cascade into better segmentation as well.

Qualitatively, these differences are transformative for Coptic Scriptorium: the ability to handle out-of-domain data with good accuracy is essential for making available large amounts of digitized text from earlier digitization efforts and partner projects. Although the 2018 accuracies on the left may look alright, error rates have been more than halved in some cases (from 7.72% down to 3.75% for realistic out-of-domain segmentation). Additionally, the remaining errors are qualitatively different: the bulk of accuracy up to 90% represents easy cases, such as tagging and segmenting function words (e.g. prepositions) correctly. The last few percent represent the hard cases, such as unknown proper names, unusual verb forms and grammatical constructions, loan words, and other high-interest items.
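As a quick check on the figures in that sentence: the error rate is simply 100 minus accuracy, taking the bracketed out-of-domain segmentation cells from the table above.

```python
# Error rate = 100 - accuracy, using the bracketed out-of-domain segmentation
# figures from the table above (automatically normalized input).
acc_2018, acc_2019 = 92.28, 96.25
err_2018, err_2019 = 100 - acc_2018, 100 - acc_2019      # 7.72% and 3.75%
print(f"{err_2018:.2f}% -> {err_2019:.2f}% "
      f"({100 * (1 - err_2019 / err_2018):.0f}% relative reduction)")
# 7.72% -> 3.75% (51% relative reduction)
```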

You can get the latest release of the Coptic-NLP tools (V3.0.0) here. We plan to release new data processed with these tools soon – stay tuned!

Dealing with Heterogeneous Low Resource Data – Part II

(This post is part of a series on our 2019 summer’s work improving processing for non-standardized Coptic resources)

The first step in processing heterogeneous data in Coptic is deciding when words should be spelled together. As we described in part I, this is a problem because there are no spaces in original Coptic manuscripts, and editorial standards for how to segment words have varied through the centuries.

Moreover, even within a single edition, there may be inconsistencies in word segmentation. Take, for instance, the word ⲉⲃⲟⲗ (ebol), ‘out, outward’. This word is historically a combination of the preposition ⲉ (e), ‘to, towards’, and the noun ⲃⲟⲗ (bol), ‘outside, exterior’. In some editions, such as those by W. Budge, it is variously spelled as either one (ebol) or two (e bol) words, as in these examples from the Asketikon of Apa Ephraim:

ⲡⲃⲱⲗ ⲉⲃⲟⲗ ⲉ ⲡⲉϩⲟⲩⲟ ϩⲛ̅ ⲟⲩϩⲓⲛⲏⲃ ⲉϥϩⲟⲣϣ̅
pbōl ebol e pehouo hn ouhineb efhorš
`<translation>`

ⲙⲏ ⲙ̅ⲡ ϥ̅ⲃⲱⲗ ⲉ ⲃⲟⲗ ⲛ̅ ⲛⲉⲥⲛⲁⲩϩ
mē mp fbōl e bol n nesnauh
`<translation>`

(Lines 12.2, 15.25 in Asketikon of Apa Ephraim. Transcribed by the Marcion project, based on W. Budge’s (1914) edition.)

Until 2017, we had no automatic tools to ensure consistent word separation, and until recently we used only a simple approach based on how often a word is bound in our existing data: a word that was attached to the following word at least 90% of the time was always attached. For instance, across 4,470 occurrences, the word ⲉ (e) was attached to the following word ~92% of the time. That is above 90%, so our simple system would always attach an ⲉ (e) to the following word, regardless of context. This approach deals effectively with common cases such as prepositions, but it cannot handle more complex cases, e.g. where identically spelled words behave differently, or where a word has never been seen before.
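A toy version of that heuristic looks like this; the rate table below stands in for our corpus counts.

```python
# Toy version of the old binding heuristic: a token is always attached to the
# following word if it was attached at least 90% of the time in existing data,
# regardless of context. The count below is illustrative.

ATTACHED_RATE = {"ⲉ": 0.92}          # ⲉ (e): attached ~92% of ~4,470 occurrences

def always_attach(token, threshold=0.90):
    """Attach `token` to the next word iff it is usually bound, ignoring context."""
    return ATTACHED_RATE.get(token, 0.0) >= threshold

print(always_attach("ⲉ"))            # True: every ⲉ gets attached
```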

In the summer of 2019, we set out to develop new machine learning tools to solve this whitespace normalization problem. We first considered the most obvious framing: a sequence-to-sequence (seq2seq) prediction problem in which, given a sequence of Coptic characters, the model predicts another sequence of Coptic characters, hopefully with spaces inserted in the right places.

The problem is that seq2seq models require a lot of annotated data, much more than we had on hand. At the time, we only had on the order of tens of thousands of words’ worth of hand-normalized text from the type of edition shown in the example. We found that this was much too little data for any usual seq2seq model, like an LSTM (a Long Short-Term Memory neural network).

The key to progress was to observe that in most editions, the difference in whitespace was that there were too many spaces: it almost never happened that two words were spelled together that should have been apart. That left just the case where two words were spelled apart that should have been put together.

The question was now, simply, “for each whitespace that did occur in the edition, should we delete it, or should we keep it?” This is a simple binary classification task, which makes the task we’re asking of the computer much less demanding: instead of asking it to produce a stream of characters, we are asking it for a simple yes/no judgment.

But what kind of information goes into a yes/no decision like this? After a lot of experimentation, we found that the answers to these questions (in addition to others) were most helpful in deciding whether to keep or delete a space in between two words:

  • How common are the words on either side of the space? (Our proxy for “common”ness: how often it appears in our annotated corpora)
  • How common is the word I’d get if I deleted the space between the two words?
  • How long are the two words and the words around them? (This might be a hint—it’s very unlikely, for instance, that a preposition would be more than several characters long.)
  • What are the parts of speech of the words around the space?
  • Does the word to the right consist solely of punctuation?

We tried several machine learning algorithms using this approach. To begin with, we only had ~10,000 words of training data, which is too little for many algorithms to learn effectively. In the end, our XGBoost model performed best, with a “correctness rate” (F1) of ~99%, against the naïve baseline (keep all spaces all the time), which hovered around 78%.
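For the curious, the sketch below shows how such a keep-or-delete classifier can be wired up with XGBoost. The feature extractor is a simplified stand-in for the features listed above, and the frequency table and training rows are placeholders rather than our real corpus data.

```python
# Minimal sketch of the keep-or-delete-the-space classifier. The features are a
# simplified stand-in for those listed above; FREQ and the training rows are
# placeholders, not our real corpora.
from xgboost import XGBClassifier

FREQ = {"ⲉ": 4470, "ⲃⲟⲗ": 310, "ⲉⲃⲟⲗ": 5200,
        "ϩⲙ": 2100, "ⲡⲣⲟ": 85, "ϩⲙⲡⲣⲟ": 60}       # illustrative corpus counts

def features(left, right):
    merged = left + right
    return [
        FREQ.get(left, 0),                        # how common is the left word?
        FREQ.get(right, 0),                       # ... and the right word?
        FREQ.get(merged, 0),                      # ... and the merged form?
        len(left), len(right),                    # word lengths
        int(all(ch in ".,·;:" for ch in right)),  # right side is only punctuation?
    ]

# Label 1 = delete the space (fuse), 0 = keep it
X = [features("ⲉ", "ⲃⲟⲗ"), features("ϩⲙ", "ⲡⲣⲟ"),
     features("ⲡⲃⲱⲗ", "ⲉⲃⲟⲗ"), features("ⲉⲃⲟⲗ", "ϩⲙ")]
y = [1, 1, 0, 0]

clf = XGBClassifier(n_estimators=50, max_depth=3)
clf.fit(X, y)
print(clf.predict([features("ⲉ", "ⲃⲟⲗ")]))        # would we fuse a new "ⲉ ⲃⲟⲗ"?
```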

Dealing with Heterogeneous Low Resource Data – Part I


Image from Budge’s (1914) Coptic Martyrdoms in the Dialect of Upper Egypt (scan made available by archive.org)

(This post is part of a series on our 2019 summer’s work improving processing for non-standardized Coptic resources)

A major challenge for Coptic Scriptorium as we expand to cover texts from other genres, with different authors, styles, and transcription practices, is how to make everything uniform. For example, our previously released data follows very specific transcription conventions: what to spell together (based on Layton’s (2011:22-27) concept of bound groups), how to normalize spellings, what base forms to lemmatize words to, and how to segment and analyze groups of words internally.

An example of our standard is shown below, with segments inside groups separated by ‘|’:

Coptic original:   ⲉⲃⲟⲗ ϩⲙ̅|ⲡ|ⲣⲟ    (Genesis 18:2)
Romanized:         ebol hm|p|ro
Translation:       out of the door

The words hm ‘in’, p ‘the’ and ro ‘door’ are spelled together because they are phonologically bound: much like words spelled together in Arabic or Hebrew, the entire phrase carries one stress (on the word for ‘door’) and no words may be inserted between them. Assimilation processes unique to the environment inside bound groups also occur: hm ‘in’ is normally hn, with an ‘n’ that becomes ‘m’ before the labial ‘p’, a process which does not occur across adjacent bound groups.

But many texts which we would like to make available online are transcribed using very different conventions, such as the following example from the Life of Cyrus, previously transcribed by the Marcion project following the conventions of W. Budge’s (1914) edition:

 

Coptic original:   ⲁ     ⲡⲥ̅ⲏ̅ⲣ̅         ⲉⲓ    ⲉ ⲃⲟⲗ     ϩⲙ̅ ⲡⲣⲟ       (Life of Cyrus, BritMusOriental6783)
Romanized:         a     p|sēr         ei    e bol     hm p|ro
Gloss:             did   the|savior    go    to-out    in the|door
Translation:       The savior went out of the door

 

Budge’s edition usually (but not always) spells prepositions apart, articles together, and the word ebol in two parts, e + bol. These specific cases are not hard to list, but others are more difficult: the past auxiliary is just a, and is usually spelled together with the subject, here ‘savior’. However, ‘savior’ has been spelled as an abbreviation, sēr for sōtēr, making it harder to recognize that a is followed by a noun and is therefore likely to be the past tense marker; moreover, not all cases of a should be bound. This is further complicated by the fact that words in the edition also break across lines, meaning we sometimes need to decide whether to fuse parts of words that are arbitrarily broken across typesetting boundaries as well.

The amount of material available in varying standards is too large to manually normalize each instance to a single form, which raises the question of how we can deal with these differences automatically. In the next posts we will look at how whitespace can be normalized using training data, rule-based morphology, and machine learning tools, and how we can recover standard spellings to ensure uniform searchability and online dictionary linking.

 

References

Layton, B. (2011). A Coptic Grammar. (Porta linguarum orientalium 20.) Wiesbaden: Harrassowitz.

Budge, E.A.W. (1914). Coptic Martyrdoms in the Dialect of Upper Egypt. London: Oxford University Press.

On the Road Summer 2019

Coptic Scriptorium is busy this summer conference season.

I had the privilege of teaching one of the Sunoikisis Digital Classicist summer sessions earlier in July.


The UCLA-St Shenouda Society conference participants, 2019

I also presented some research on girls and girlhood using the Coptic Scriptorium Corpora and the Online Coptic Dictionary at the annual UCLA-St. Shenouda Society Coptic Studies Conference.  This year was the 20th anniversary conference, and the theme was Shenoute and the White Monastery.


C. Schroeder presenting at ACH 2019; photo courtesy Melissa Dollman via Twitter

This week, the American digital humanities organization, the Association for Computers and the Humanities (ACH), held a conference in Pittsburgh. There I talked about colonialism, Coptic manuscripts, and resisting continuing colonialist tendencies in digitizing these manuscripts.

Meanwhile we’ve also been working on digitizing and annotating more texts, which we hope to release in the fall.

Happy summer everyone!

Congratulating our colleagues!

Two pillars in the fields of Digital Humanities, cultural heritage, and the manuscripts of the Eastern Mediterranean world received honors this month, and we at Coptic Scriptorium wish to congratulate them both.


Tito Orlandi at DH2019, photo by Fabio Ciotti via Twitter

Dr. Tito Orlandi was awarded the Busa Prize for lifetime achievement by the Alliance of Digital Humanities Organizations at the annual Digital Humanities Conference in Utrecht. This honor is bestowed only every three years and thus is quite a distinguished award. Tito’s work in text encoding, developing stable identifiers for manuscripts, digital lexica, and digitization has been foundational for Coptic Studies. He founded the Corpus dei Manoscritti Copti Letterari (CMCL) project.


Columba Stewart, OSB, D.Phil, from the HMML site

Dr. Columba Stewart, Director of the Hill Museum and Manuscript Library at St. John’s University, has been named the Jefferson Lecturer by the National Endowment for the Humanities for 2019. Previous recipients of this honor include Toni Morrison and John Updike, among others. Columba’s scholarship on early monasticism—especially Evagrius and Cassian—is well-known, widely respected, and oft-cited. He is being honored by the NEH in particular for his work at HMML collaborating with communities in the Middle East to photograph and preserve manuscripts, from both Christian and Muslim communities and traditions, that are endangered for various political, cultural, and geographic reasons.

On a personal note, Tito was a supportive colleague long before Coptic Scriptorium existed. At my first Congress of the International Association of Coptic Studies in Leiden in 2000, Tito chaired the session in which I gave my paper. I will never forget that when my slides first went up on the screen with one of my photographs of the White Monastery Church, he warmly remarked how happy it made him to see the White Monastery. This sounds like a small thing, but for a grad student attending this international conference for the first time, it was a reassuring way to start my paper. When we began Coptic Scriptorium, Tito shared with us his digital lexica, which allowed us to shave at least a year off our labors. Conversations with Tito over the years have enriched our work.

Likewise, Columba has been a kind and generous colleague and mentor since we first met in 1999 at the Oxford Patristics Conference.  Columba’s research on early monasticism has inspired me for a long time, and his work at HMML and the vHMML online reading room is a model for public-facing cultural heritage preservation and collaborations between American scholars and heritage communities in the Middle East.  Columba’s work is sometimes framed as saving manuscripts from ISIS, but Columba himself talks about the American role in the loss of cultural heritage in the Middle East and is, in my opinion, open about the geopolitical and colonialist power dynamics at work. As I said more informally to some friends on social media, Columba is 100% the real deal.

Additionally, for those of us who work on Christianity in the ancient Eastern Mediterranean and the languages and manuscripts of these communities, these two awards cast a warm glow over the whole field.  Thank you Columba and Tito for your work, and thank you to the ADHO and the NEH for honoring them and by extension their areas of work.

A warm, sincere congratulations to you both!
