
Entities in the Coptic Treebank


With the release of Version 2.6 of Universal Dependencies, our focus has shifted to handling Named and Non-Named Entity Recognition (NER/NNER) in Coptic data. Thanks to intensive work by the Coptic Scriptorium team over the past few months, the development branch of the Treebank now contains complete entity spans and types for all of the data in the Treebank, which can be accessed here. Special thanks are due to Lance Martin, Liz Davidson and Mitchell Abrams for all their efforts!

What’s included?

  • All data from the Coptic treebank (78 documents, approx. 46,000 words)
  • All spans of text referring to a named or unnamed entity, such as “Emperor Diocletian”, “the old woman” or “his cell”.
  • Nested entities contained in other entities, such as [the kingdom of [the Emperor Diocletian]]
  • Entity types, divided into the following 10 classes: (English examples are provided in brackets)
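As an illustration of how nested entity spans can be represented (this is not the Treebank's actual storage format, and the type labels below are invented for the example), here is a minimal Python sketch using token-offset triples, with nesting determined by span containment:

```python
# Illustrative sketch only: represent (possibly nested) entity spans as
# (start, end, type) triples over the token sequence. The type labels
# here are hypothetical, not the Treebank's actual class inventory.

def contains(outer, inner):
    """True if span `outer` fully contains a distinct span `inner`."""
    return outer[0] <= inner[0] and inner[1] <= outer[1] and outer != inner

tokens = ["the", "kingdom", "of", "the", "Emperor", "Diocletian"]
entities = [
    (0, 6, "abstract"),  # [the kingdom of [the Emperor Diocletian]]
    (3, 6, "person"),    # the nested entity
]

# Every (outer, inner) pair where one entity is nested inside another
nested = [(o, i) for o in entities for i in entities if contains(o, i)]
print(nested)  # the person span is nested inside the larger span
```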

 

What do we plan to do with this?

Entity annotations are a gateway to exposing and linking semantic content information from collections of documents. Having such annotations for all of our Coptic data will allow search by entity types (and ultimately names), enable analysis and comparison of texts based on the quantity, proportion and dispersion of entity types, facilitate identification of textual reuse disregarding either the entities involved or the ways in which they are phrased, and much more.

Over the course of the summer, our next goals fall into three packages:

  1. Natural Language Processing (NLP): Develop high-accuracy automatic entity recognition tools for Coptic based on this data, and make them freely available.
  2. Corpora: Enrich all of our available data with automatic entity annotations, which can be corrected and improved iteratively in the future.
  3. Entity linking: Leverage the inventory of named entities identified in the data to carry out named entity linking with resources such as Wikipedia and other DH project identifiers. This will allow users to find all mentions of a specific person or place, regardless of how they are referred to.

Since the tools and annotations are based only on Coptic textual input and subsequent automatic NLP, we envision including search and visualization of entity data for all of our corpora, including ones for which we do not have a translation. This means that data whose content could not be easily deciphered without extensive reading of the original Coptic text will become much more easily discoverable, by exploring entities in which researchers are interested.

Stay tuned for more updates on Coptic entities!

Universal Dependencies 2.6 released!


Check out the new Universal Dependencies (UD) release V2.6! This is the twelfth release of the annotated treebanks at http://universaldependencies.org/.  The project now covers syntactically annotated corpora in 92 languages, including Coptic. The size of the Coptic Treebank is now around 43,000 words, and growing. For the latest version of the Coptic data, see our development branch here: https://github.com/UniversalDependencies/UD_Coptic-Scriptorium/tree/dev. For documentation, see the UD Coptic annotation guidelines.

The inclusion of the Coptic Treebank in the UD dataset means that many standard parsers and other NLP tools trained on all well-attested UD languages now support Coptic out of the box, including Stanford NLP’s Stanza and UFAL’s UDPipe. Feel free to try out these libraries on your data! For optimal performance on open-domain Coptic text, we still recommend our custom tool chain, Coptic-NLP, which is highly optimized for Coptic and uses additional resources beyond the treebank. Or try it out online:

Coptic-NLP demo

 

Winter 2020 Corpora Release 3.1.0

It is our pleasure to announce a new data release, with a variety of new sources from our collaborators (including more digitized data courtesy of the Marcion and PAThs projects and other scholars). New in this release are:

All documents have metadata for word segmentation, tagging, and parsing to indicate whether those annotations are machine annotations only (automatic), checked for accuracy by an expert in Coptic (checked), or closely reviewed for accuracy, usually as a result of manual parsing (gold).

You can search all corpora using ANNIS and download the data in 4 formats (relANNIS database files, PAULA XML files, TEI XML files, and SGML files in Tree-tagger format): browse on GitHub. If you just want to read works, cite project data or browse metadata, you can use our updated repository browser, the Canonical Text Services browser and URN resolver:

http://data.copticscriptorium.org/

The new material in this release includes some 78,000 tokens in 33 documents and represents a tremendous amount of work by our project members and collaborators. We would like to thank the individual contributors (which you can find in the ‘annotation’ metadata), the Marcion and PAThs projects who shared their data with us, and the National Endowment for the Humanities for supporting us. We are continuing to work on more data, links to other resources and new kinds of annotations and tools, which we plan to make available in the summer. Please let us know if you have any feedback!

The Coptic Dictionary Online wins the 2019 DH Award for Best Tool

We are very happy to announce that the Coptic Dictionary Online (CDO) has won the 2019 Digital Humanities Award in the category Best Tool or Suite of Tools! The dictionary interface, shown below, gives users access to searches by Coptic word forms, definitions in three languages (English, French and German), pattern and part of speech searches, and more. We have also added links to quantitative corpus data, including attestation and collocation analyses from data published by Coptic Scriptorium.

The dictionary is the result of a collaboration between Coptic Scriptorium and lexicographers in Germany at the Berlin-Brandenburg and Göttingen Academies of Science, the Free University in Berlin, and the Universities of Göttingen and Leipzig. This collaboration has been funded by the National Endowment for the Humanities (NEH) and the German Research Foundation (DFG).

To read more about the dictionary’s structure and creation, see Feder et al. (2018).

A view of the Coptic Dictionary Online

Dealing with Heterogeneous Low Resource Data – Part III

(This post is part of a series on our 2019 summer’s work improving processing for non-standardized Coptic resources)

In this post, we present some of our work on integrating more ambitious automatic normalization tools that allow us to deal with heterogeneous spelling in Coptic, and give some first numbers on improvements in accuracy through this summer’s work.

Three-step normalization

In 2018, our normalization strategy was a basic statistical one: to look up previously normalized forms in our data and choose the most frequent normalization. Because there are some frequent spelling variations, we also had a rule based system postprocess the statistical normalizer’s output, to expand, for example, common spellings of nomina sacra (e.g. ⲭⲥ for ⲭⲣⲓⲥⲧⲟⲥ ‘Christ’), even when they appeared as part of larger bound groups (ⲙⲡⲉⲭⲣⲓⲥⲧⲟⲥ ‘of the Christ’, sometimes spelled ⲙⲡⲉⲭⲥ).
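A postprocessing rule of this kind can be sketched as a simple substring rewrite. The one-entry rule table below comes from the example above; the real rule component is considerably larger:

```python
# Toy sketch of rule-based nomen sacrum expansion. The single rule below
# is the example from the text; the actual postprocessor has a larger
# rule set (and handles supralinear strokes, which are omitted here).
NOMINA_SACRA = {
    "ⲭⲥ": "ⲭⲣⲓⲥⲧⲟⲥ",  # 'Christ'
}

def expand_nomina_sacra(group):
    """Expand abbreviated nomina sacra, even inside larger bound groups."""
    for short, full in NOMINA_SACRA.items():
        if short in group:
            group = group.replace(short, full)
    return group

print(expand_nomina_sacra("ⲙⲡⲉⲭⲥ"))  # -> ⲙⲡⲉⲭⲣⲓⲥⲧⲟⲥ 'of the Christ'
```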

One of the problems with this strategy is that for many individual words we may know common normalizations, such as the spelling ⲏⲉⲓ for ⲏⲓ ‘house’, but recognizing that normalization should be carried out depends on correct segmentation – if the system sees ⲙⲡⲏⲉⲓ ‘of the house’, it may not be certain that normalization should occur. Paradoxically, correct normalization vastly improves segmentation accuracy, which is in turn needed for normalization… resulting in a vicious circle.

To address the challenges of normalizing Coptic orthography, this summer we developed a three-step process:

  • We consider hypothetical normalizations which could be applied to bound groups if we spelled certain words together, then choose what to spell together (see Part II of this post series)
  • We consider normalizations for the bound groups we ended up choosing, based on past experience (lookup), rules (finite-state morphology) and machine learning (feature based prediction)
  • After segmenting bound groups into morphological categories, we consider whether the segmented sequence contains smaller units that should be normalized
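The second step, per-group normalization, behaves like a backoff cascade. A minimal sketch follows, with an invented lookup table and lexicon, and a single toy rule standing in for the finite-state and machine learning components:

```python
# Backoff sketch of the per-group normalization step (second bullet above):
# 1) look up previously seen normalizations, 2) try a spelling rule checked
# against a lexicon, 3) otherwise leave the form unchanged. The table,
# lexicon and single rule are illustrative, not the real components.

SEEN = {"ⲏⲉⲓ": "ⲏⲓ"}             # previously normalized forms from our data
LEXICON = {"ⲏⲓ", "ⲙⲡⲓⲙⲡϣⲁ"}      # known normal spellings

def rule_normalize(form):
    """Toy stand-in for the rule component: undo the ⲉⲓ spelling of ⲓ."""
    candidate = form.replace("ⲉⲓ", "ⲓ")
    if candidate != form and candidate in LEXICON:
        return candidate
    return None

def normalize(form):
    if form in SEEN:                 # step 1: lookup
        return SEEN[form]
    by_rule = rule_normalize(form)
    if by_rule is not None:          # step 2: rules
        return by_rule
    return form                      # step 3: leave unchanged

print(normalize("ⲙⲡⲉⲓⲙⲡϣⲁ"))  # rule path -> ⲙⲡⲓⲙⲡϣⲁ
```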

To illustrate how this works, we can consider the following example:

Coptic edition:   ⲙ̅ⲡ      ⲉⲓⲙ̅ⲡϣⲁ
Romanized:        mp      ei|mpša
Gloss:            didn’t  I-worthy
Translation:      “I was not worthy”

These words should be spelled together by our conventions, based on Layton’s (2011) grammar, but Budge’s (1914) edition has placed a space here and the first person marker is irregularly spelled epsilon-iota, ⲉⲓ ‘I’, instead of just iota, ⲓ. When resolving whitespace ambiguity, we ask how likely it is that mp stands alone (unlikely), but also whether mp+ei… is grammatical, which in the current spelling might not be recognized. Our normalizer needs to resolve the hypothetical fused group to be spelled ⲙⲡⲓⲙⲡϣⲁ, mpimpša. Since this particular form has not appeared before in our corpora, we rely on data augmentation: our system internally generates variant spellings, for example substituting the common spelling variation of ⲓ with ⲉⲓ in words we have seen before, and generating a solution ⲙⲡⲉⲓⲙⲡϣⲁ -> ⲙⲡⲓⲙⲡϣⲁ. The augmentation system relies both on previously seen forms (a normally spelled ⲙⲡⲓⲙⲡϣⲁ, which we have however also not seen before) and combinations produced by a grammar (it considers the negative past auxiliary ⲙⲡ followed by all subject pronouns and verbs in our lexicon, which does yield the necessary ⲙⲡⲓⲙⲡϣⲁ).
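The augmentation idea can be sketched as follows, reduced to the single ⲓ ~ ⲉⲓ substitution discussed above (the word list is illustrative, and the grammar-based generation of combinations like ⲙⲡ + pronoun + verb is omitted):

```python
# Sketch of lookup-table augmentation: from each known normalized form,
# generate plausible variant spellings (here only the ⲓ ~ ⲉⲓ variation)
# and map each variant back to its normal form. The real system also
# uses combinations produced by a grammar, which is not shown here.

KNOWN_FORMS = ["ⲙⲡⲓⲙⲡϣⲁ", "ⲏⲓ"]  # normal spellings (illustrative list)

def augment(forms):
    variant_to_normal = {}
    for form in forms:
        if "ⲓ" in form and "ⲉⲓ" not in form:
            variant = form.replace("ⲓ", "ⲉⲓ")  # ⲓ spelled as ⲉⲓ
            variant_to_normal[variant] = form
    return variant_to_normal

TABLE = augment(KNOWN_FORMS)
print(TABLE.get("ⲙⲡⲉⲓⲙⲡϣⲁ"))  # -> ⲙⲡⲓⲙⲡϣⲁ
```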

The segmenter is then able to successfully segment this into mp|i|mpša, and we therefore decide:

  1. This should be fused
  2. This can be segmented into three segments
  3. The middle segment is the first person pronoun (with non-standard spelling)
  4. It should be normalized (and subsequently tagged and lemmatized)

If normalization had failed for the whole word group, there is still a chance that the machine learning segmenter would have recognized mpša ‘worthy’ and split it apart, which means that segmentation is slightly less impacted by normalization errors than it would have been in our tools a year ago.

How big of a deal is this?

It’s hard to convey what each improvement like this does for the quality of our data, but we’ll try to give some numbers and contextualize them here. The table below evaluates the 2018 and 2019 tools on the same training and test data: in-domain data comes from the UD Coptic 2.4 test set, and out-of-domain data represents two texts from editions by W. Budge, the Life of Cyrus and the Repose of John the Evangelist, previously digitized by the Marcion project. The distinction between in-domain and out-of-domain is important here: in-domain evaluation gives the tools test data drawn from the same distribution of text types they were trained on, which is consequently much less surprising. Out-of-domain data comes from text types the system has not seen before, edited with very different editorial practices.

 

task           2018 in-domain    2018 out-of-domain    2019 in-domain    2019 out-of-domain
spaces         NA*               96.57                 NA*               98.08
orthography    98.81             95.79                 99.76             97.83
segmentation   97.78 (96.86**)   93.67 (92.28**)       99.54 (99.36**)   96.71 (96.25**)
tagging        96.23             96.34                 98.35             98.11

 

* In domain data has canonical whitespace and cannot be used to test white space normalization

** Numbers in brackets represent automatically normalized input (more realistic, but harder to judge performance of segmentation as an isolated task)

 

The numbers show that several tools have improved dramatically across the board, even for in-domain data – most noticeably the part of speech tagger and normalizer modules. The improvement in segmentation accuracy is much more marked on out-of-domain data, and especially if we do not give the segmenter normalized text as input (numbers in brackets). In this case, automatic normalization is used, and the improvements in normalization cascade into better segmentation as well.

Qualitatively these differences are transformative for Coptic Scriptorium: the ability to handle out-of-domain data with good accuracy is essential for making large amounts of digitized text available that come from earlier digitization efforts and partner projects. Although 2018 accuracies on the left may look alright, the reduction in error rates is more than half in some cases (7.72% down to 3.75% in realistic segmentation accuracy out-of-domain). Additionally, the reduced errors are qualitatively different: the bulk of accuracy up to 90% represents easy cases, such as tagging and segmenting function words (e.g. prepositions) correctly. The last few percent represent the hard cases, such as unknown proper names, unusual verb forms and grammatical constructions, loan words, and other high-interest items.

You can get the latest release of the Coptic-NLP tools (V3.0.0) here. We plan to release new data processed with these tools soon – stay tuned!

Dealing with Heterogeneous Low Resource Data – Part II

(This post is part of a series on our 2019 summer’s work improving processing for non-standardized Coptic resources)

The first step in processing heterogeneous data in Coptic is deciding when words should be spelled together. As we described in part I, this is a problem because there are no spaces in original Coptic manuscripts, and editorial standards for how to segment words have varied through the centuries.

Moreover, even within a single edition, there may be inconsistencies in word segmentation. Take, for instance, the word ⲉⲃⲟⲗ (ebol), ‘out, outward’. This word is historically a combination of the preposition ⲉ (e), ‘to, towards’, and the noun ⲃⲟⲗ (bol), ‘outside, exterior’. In some editions, such as those by W. Budge, it is variously spelled as either one (ebol) or two (e bol) words, as in this example from the Asketikon of Apa Ephraim:

ⲡⲃⲱⲗ ⲉⲃⲟⲗ ⲉ ⲡⲉϩⲟⲩⲟ ϩⲛ̅ ⲟⲩϩⲓⲛⲏⲃ ⲉϥϩⲟⲣϣ̅
pbōl ebol e pehouo hn ouhineb efhorš
`<translation>`

ⲙⲏ ⲙ̅ⲡ ϥ̅ⲃⲱⲗ ⲉ ⲃⲟⲗ ⲛ̅ ⲛⲉⲥⲛⲁⲩϩ
mē mp fbōl e bol n nesnauh
`<translation>`

(Lines 12.2, 15.25 in Asketikon of Apa Ephraim. Transcribed by the Marcion project, based on W. Budge’s (1914) edition.)

Up until 2017, we had no automatic tools to ensure consistent word separation, and until recently we used only a simple approach based on the relative probability of a word being spelled apart: words spelled apart less than 90% of the time in our existing data were attached to the following word. For instance, across 4,470 occurrences, the word ⲉ (e) was attached to the following word ~92% of the time, i.e. spelled apart only ~8% of the time, well under the 90% threshold, so our simple system would always attach ⲉ (e) to the following word, regardless of context. This approach deals effectively with common cases such as prepositions, but it cannot handle more complex cases, e.g. where identically-spelled words exhibit different behaviors, or where a word has never been seen before.
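The old heuristic fits in a few lines. In the sketch below, the counts are invented apart from the ~92% attachment rate for ⲉ (e) mentioned above:

```python
# Sketch of the pre-2019 frequency-threshold heuristic: a word type is
# always attached to the following word iff it was spelled apart less
# than 90% of the time in existing annotated data, regardless of context.

COUNTS = {  # word -> (times spelled apart, total occurrences); illustrative
    "ⲉ":    (358, 4470),   # attached ~92% of the time, as in the text
    "ⲉⲃⲟⲗ": (4000, 4100),  # invented: usually spelled apart
}

def always_attach(word, counts, threshold=0.90):
    apart, total = counts[word]
    return apart / total < threshold

print(always_attach("ⲉ", COUNTS))  # -> True: ⲉ is always attached
```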

In summer of 2019, we set out to develop new machine learning tools for solving this whitespace normalization problem. We first considered the most obvious way to frame this problem, as a sequence to sequence (seq2seq) prediction problem: given a sequence of Coptic characters, predict another sequence of Coptic characters, hopefully with spaces inserted in the right places.

The problem is that seq2seq models require a lot of annotated data, much more than we had on hand. At the time, we only had on the order of tens of thousands of words’ worth of hand-normalized text from the type of edition shown in the example. We found that this was much too little data for a typical seq2seq model, such as an LSTM (Long Short-Term Memory neural network).

The key to progress was to observe that in most editions, the difference in whitespace was that there were too many spaces: it almost never happened that two words were spelled together that should have been apart. That left just the case where two words were spelled apart that should have been put together.

The question was now, simply, “for each whitespace that did occur in the edition, should we delete it, or should we keep it?” This is a simple binary classification task, which makes the task we’re asking of the computer much less demanding: instead of asking it to produce a stream of characters, we are asking it for a simple yes/no judgment.

But what kind of information goes into a yes/no decision like this? After a lot of experimentation, we found that the answers to these questions (in addition to others) were most helpful in deciding whether to keep or delete a space in between two words:

  • How common are the words on either side of the space? (Our proxy for commonness: how often a word appears in our annotated corpora.)
  • How common is the word I’d get if I deleted the space between the two words?
  • How long are the two words and the words around them? (This might be a hint—it’s very unlikely, for instance, that a preposition would be more than several characters long.)
  • What are the parts of speech of the words around the space?
  • Does the word to the right consist solely of punctuation?
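A sketch of the corresponding feature extractor follows. The frequency table is invented, and the part-of-speech feature is omitted here since it requires a tagger:

```python
import string

# Sketch of feature extraction for the keep/delete decision at one space,
# following the bullet list above. FREQ stands in for corpus counts; the
# numbers are illustrative, not real figures from our corpora.

FREQ = {"ⲉ": 4470, "ⲃⲟⲗ": 812, "ⲉⲃⲟⲗ": 3519}

def space_features(left, right):
    merged = left + right
    return {
        "freq_left": FREQ.get(left, 0),
        "freq_right": FREQ.get(right, 0),
        "freq_merged": FREQ.get(merged, 0),  # how common is the fused form?
        "len_left": len(left),
        "len_right": len(right),
        "right_is_punct": all(c in string.punctuation for c in right),
    }

feats = space_features("ⲉ", "ⲃⲟⲗ")
print(feats["freq_merged"])  # -> 3519: the fused form ⲉⲃⲟⲗ is frequent
```

Feature dictionaries like this can then be fed to a standard classifier (in our case XGBoost) to predict keep vs. delete for each space.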

We tried several machine learning algorithms using this approach. To begin with, we only had ~10,000 words of training data, which is too little for many algorithms to effectively learn. In the end, our XGBoost model ended up performing the best, with a “correctness rate” (F1) of ~99%, against the naïve baseline (keep all spaces all the time), which hovered around 78%.

Dealing with Heterogeneous Low Resource Data – Part I

Image from Budge’s (1914) Coptic Martyrdoms in the Dialect of Upper Egypt (scan made available by archive.org)

(This post is part of a series on our 2019 summer’s work improving processing for non-standardized Coptic resources)

A major challenge for Coptic Scriptorium as we expand to cover texts from other genres, with different authors, styles and transcription practices, is how to make everything uniform. For example, our previously released data has very specific transcription conventions with respect to what to spell together, based on Layton’s (2011:22-27) concept of bound groups, how to normalize spellings, what base forms to lemmatize words to, and how to segment and analyze groups of words internally.

An example of our standard is shown below, with segments inside groups separated by ‘|’:

Coptic original:         ⲉⲃⲟⲗ ϩⲙ̅|ⲡ|ⲣⲟ    (Genesis 18:2)

Romanized:                 ebol hm|p|ro

Translation:                 out of the door

The words hm ‘in’, p ‘the’ and ro ‘door’ are spelled together, since they are phonologically bound: much as with words spelled together in Arabic or Hebrew, the entire phrase carries one stress (on the word ‘door’) and no words may be inserted between them. Assimilation processes unique to the environment inside bound groups also occur: hm ‘in’ is normally hn, with an ‘n’, but the ‘n’ becomes ‘m’ before the labial ‘p’, a process which does not occur across adjacent bound groups.

But many texts which we would like to make available online are transcribed using very different conventions, such as (2), from the Life of Cyrus, previously transcribed by the Marcion project following the convention of W. Budge’s (1914) edition:

 

Coptic original:  ⲁ    ⲡⲥ̅ⲏ̅ⲣ̅       ⲉⲓ   ⲉ ⲃⲟⲗ    ϩⲙ̅ ⲡⲣⲟ     (Life of Cyrus, BritMusOriental6783)
Romanized:        a    p|sēr       ei   e bol   hm p|ro
Gloss:            did  the|savior  go   to-out  in the|door
Translation:      The savior went out of the door

 

Budge’s edition usually (but not always) spells prepositions apart, articles together and the word ebol in two parts, e + bol. These specific cases are not hard to list, but others are more difficult: the past auxiliary is just a, and is usually spelled together with the subject, here ‘savior’. However, ‘savior’ has been spelled as an abbreviation: sēr for sōtēr, making it harder to recognize that a is followed by a noun and is likely to be the past tense marker, and not all cases of a should be bound. This is further complicated by the fact that words in the edition also break across lines, meaning we sometimes need to decide whether to fuse parts of words that are arbitrarily broken across typesetting boundaries as well.

The amount of material available in varying standards is too large to manually normalize each instance to a single form, raising the question of how we can deal with these automatically. In the next posts we will look at how white space can be normalized using training data, rule based morphology and machine learning tools, and how we can recover standard spellings to ensure uniform searchability and online dictionary linking.

 

References

Layton, B. (2011). A Coptic Grammar. (Porta linguarum orientalium 20.) Wiesbaden: Harrassowitz.

Budge, E.A.W. (1914) Coptic Martyrdoms in the Dialect of Upper Egypt. London: Oxford University Press.

Coptic Scriptorium wins DHAG grant from the NEH



We are very pleased to report that the National Endowment for the Humanities just announced their approval of a new Stage III Digital Humanities Advancement Grant continuing their long-standing support of our work in Coptic Scriptorium. The new grant is titled:

A Linked Digital Environment for Coptic Studies: Integrating Heterogeneous Data with Machine Learning and Natural Language Processing

The focus of the grant, which is set to run from fall 2018 for three years, is developing more robust tools which can deliver high accuracy analyses with less manual intervention in the face of more heterogeneous data, including Coptic materials from OCR, spelling variation across editions and varying scholarly conventions. This will allow us to grow the collection of corpora made available over the project’s tools in the coming months and years. Additional areas of work include improving our work on Linked Open Data standards connecting the project to other initiatives in the field, and pioneering methods for automatic Named Entity Recognition in Coptic, among other things!

Please stay tuned for more updates on the new project – in the meantime we thank the NEH for their trust in us, our project members, contributors and advisory board for all their work, and the Digital Coptic community for your support!

Wishing the NEH a happy 50th Anniversary!

The Coptic SCRIPTORIUM team would like to wish the National Endowment for the Humanities (NEH) a happy 50th anniversary! We would like to thank the NEH for supporting Coptic SCRIPTORIUM. Cheers to the NEH!


Corpora and how to use ANNIS

Coptic SCRIPTORIUM provides Coptic texts for reading, analysis, and complex searches. For a full list of our text corpora, please click here. We have also added explanations on our main site of who some of the people mentioned in our texts are and what some key terms mean. A video tutorial by Amir Zeldes and Caroline T. Schroeder on how to search our database using the tool ANNIS is also available.

 

(Originally posted in December of 2014 at http://copticscriptorium.org/)
