Tag: tools

New links for tools and services

After our recent server outage, we’ve been re-installing our tools and software. Some of our services are now available at new URLs.

The ANNIS database is now at https://annis.copticscriptorium.org/annis/scriptorium

Our Sahidic Coptic natural language processing tools are at https://tools.copticscriptorium.org/coptic-nlp

Our GitDox annotation tool is at https://tools.copticscriptorium.org/gitdox/scriptorium

The Coptic Dictionary online is still at https://coptic-dictionary.org, and our tool for browsing and reading texts is still at https://data.copticscriptorium.org

Thanks for your patience!

The Coptic Dictionary Online wins the 2019 DH Award for Best Tool

We are very happy to announce that the Coptic Dictionary Online (CDO) has won the 2019 Digital Humanities Award in the category Best Tool or Suite of Tools! The dictionary interface, shown below, gives users access to searches by Coptic word forms, definitions in three languages (English, French and German), pattern and part of speech searches, and more. We have also added links to quantitative corpus data, including attestation and collocation analyses from data published by Coptic Scriptorium.

The dictionary is the result of a collaboration between Coptic Scriptorium and lexicographers in Germany at the Berlin-Brandenburg and Göttingen Academies of Science, the Free University in Berlin, and the Universities of Göttingen and Leipzig. This collaboration has been funded by the National Endowment for the Humanities (NEH) and the German Research Foundation (DFG).

To read more about the dictionary’s structure and creation, see Feder et al. (2018).

A view of the Coptic Dictionary Online

New release of Natural Language Processing Tools

Amir Zeldes and Luke Gessler have spent much of the past summer improving Coptic Scriptorium’s Natural Language Processing tools, and are now happy to announce the release of Coptic-NLP V3.0.0. You can read more about what we’ve been doing and the impact on performance in our three-part blog post (part 1, part 2, part 3). Some of the new improvements include:

  • A new three-step normalization framework, which allows us to hypothetically normalize bound groups before deciding how to segment them, then normalize each segment again (see the sketch after this list)
  • A smart rebinding module which decides whether to merge split bound groups based on context (useful for processing messy texts with line breaks mid-word, or other segmentation anomalies)
  • A re-implemented segmentation algorithm which is substantially better at handling ambiguous groups in context (e.g. “nau” in “peja|f na|u” vs. “nau ero|f”) and spelling variation
  • A brand new, more accurate part of speech tagger
  • Higher accuracy across tools thanks to hyperparameter optimization
  • A more robust test suite to ensure new errors don’t creep in
  • Various data/lexicon/ruleset improvements and bugfixes
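For the curious, the snippet below sketches the three-step normalization idea in Python. All names and mappings here are hypothetical stand-ins for illustration; the real implementation in the coptic-nlp repository is statistical and lexicon-driven.

    import re

    # Toy sketch of the three-step normalization framework
    # (hypothetical names and mappings; the real pipeline is statistical)

    def normalize_group(group):
        # Step 1: hypothetically normalize the whole bound group,
        # e.g. strip supralinear strokes and other combining marks
        return re.sub(r"[\u0300-\u036f]", "", group)

    def segment(group):
        # Step 2: decide how to segment the normalized group;
        # faked here with a tiny lookup table
        lookup = {"ϩⲙⲡⲣⲟ": ["ϩⲙ", "ⲡ", "ⲣⲟ"]}
        return lookup.get(group, [group])

    def normalize_segment(seg):
        # Step 3: normalize each segment again; real mappings come from
        # training data and lexica, so this table stays empty here
        variants = {}
        return variants.get(seg, seg)

    def process(group):
        return [normalize_segment(s) for s in segment(normalize_group(group))]

    print(process("ϩⲙ̅ⲡⲣⲟ"))  # -> ['ϩⲙ', 'ⲡ', 'ⲣⲟ']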

You can download the latest version of the tools here:

https://github.com/CopticScriptorium/coptic-nlp/

Or use our web interface, which has been updated with the latest version:

https://corpling.uis.georgetown.edu/coptic-nlp/

We appreciate your feedback and comments, and hope to release more data processed with these tools very soon!

Dealing with Heterogeneous Low Resource Data – Part I

Image from Budge’s (1914) Coptic Martyrdoms in the Dialect of Upper Egypt (scan made available by archive.org)

(This post is part of a series on our 2019 summer’s work improving processing for non-standardized Coptic resources)

A major challenge for Coptic Scriptorium as we expand to cover texts from other genres, with different authors, styles and transcription practices, is how to make everything uniform. For example, our previously released data has very specific transcription conventions with respect to what to spell together, based on Layton’s (2011:22-27) concept of bound groups, how to normalize spellings, what base forms to lemmatize words to, and how to segment and analyze groups of words internally.

An example of our standard is shown below, with segments inside groups separated by ‘|’:

Coptic original: ⲉⲃⲟⲗ ϩⲙ̅|ⲡ|ⲣⲟ (Genesis 18:2)
Romanized: ebol hm|p|ro
Translation: out of the door

The words hm ‘in’, p ‘the’ and ro ‘door’ are spelled together, since they are phonologically bound: much as with words spelled together in Arabic or Hebrew, the entire phrase carries one stress (on the word ‘door’) and no words may be inserted between them. Assimilation processes unique to the environment inside bound groups also occur: hm ‘in’ is normally hn, with an ‘n’ that becomes ‘m’ before the labial ‘p’, a process which does not occur across adjacent bound groups.
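The conditioning environment can be expressed as a toy rule over the romanized forms (an illustration only, not part of our actual tools):

    import re

    # Nasal assimilation inside a bound group: hn 'in' surfaces as hm
    # when the labial p follows immediately (romanized toy rule)
    def assimilate(bound_group):
        return re.sub(r"\bhn(?=p)", "hm", bound_group)

    print(assimilate("hnpro"))   # -> 'hmpro': assimilation inside the group
    print(assimilate("hn pro"))  # -> 'hn pro': no change across a group boundary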

But many texts which we would like to make available online are transcribed using very different conventions, such as the following example from the Life of Cyrus, previously transcribed by the Marcion project following the conventions of W. Budge’s (1914) edition:

 

Coptic original: ⲁ ⲡⲥ̅ⲏ̅ⲣ̅ ⲉⲓ ⲉ ⲃⲟⲗ ϩⲙ̅ ⲡⲣⲟ (Life of Cyrus, BritMusOriental6783)
Romanized: a p|sēr ei e bol hm p|ro
Gloss: did the|savior go to-out in the|door
Translation: The savior went out of the door

 

Budge’s edition usually (but not always) spells prepositions apart, articles together, and the word ebol in two parts, e + bol. These specific cases are not hard to list, but others are more difficult: the past auxiliary is just a, and is usually spelled together with the subject, here ‘savior’. However, ‘savior’ has been spelled as an abbreviation, sēr for sōtēr, making it harder to recognize that a is followed by a noun and is therefore likely to be the past tense marker; moreover, not all cases of a should be bound. This is further complicated by the fact that words in the edition also break across lines, meaning we sometimes need to decide whether to fuse parts of words that are arbitrarily broken across typesetting boundaries as well.

The amount of material available in varying standards is too large to manually normalize each instance to a single form, raising the question of how we can deal with it automatically. In the next posts we will look at how white space can be normalized using training data, rule-based morphology and machine learning tools, and how we can recover standard spellings to ensure uniform searchability and online dictionary linking.
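As a first taste of the rule-based side, here is a minimal sketch of rebinding for the easy, enumerable cases described above. The romanized input and the hyphen marking a typesetting line break are assumptions for illustration; the genuinely ambiguous cases are left for the trained tools discussed in the following posts.

    import re

    # Toy first pass at rebinding split bound groups (illustration only)
    EASY_FUSIONS = [("e bol", "ebol")]  # Budge spells ebol in two parts

    def rebind(text):
        # Fuse words arbitrarily broken across typesetting line breaks,
        # assumed here to be marked with a trailing hyphen
        text = re.sub(r"-\n\s*", "", text)
        # Merge the enumerable easy cases (naive substring matching;
        # a real module checks context before deciding to merge)
        for split, fused in EASY_FUSIONS:
            text = text.replace(split, fused)
        return text

    print(rebind("ei e bol hm p-\nro"))  # -> 'ei ebol hm pro'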

 

References

Layton, B. (2011). A Coptic Grammar. (Porta linguarum orientalium 20.) Wiesbaden: Harrassowitz.

Budge, E.A.W. (1914). Coptic Martyrdoms in the Dialect of Upper Egypt. London: Oxford University Press.

Corpora release 2.6

We are pleased to announce release 2.6 of our corpora! Some exciting new things:

  • Expanded Coptic Old Testament
  • More gold-standard treebanked texts
  • Updated files of Shenoute’s Abraham Our Father and Acephalous Work 22
  • New metadata fields to indicate whether documents have been machine annotated or if an editor has reviewed the machine annotations

Expanded Coptic Old Testament

Our Coptic Old Testament corpus is updated and expanded, with digital text from our partners at the Digital Edition of the Coptic Old Testament project in Göttingen. All annotations in this corpus are fully machine-processed (no human editing, because it’s BIG). You can read through all the text in two different visualizations online and search it in the ANNIS database:

  1. analytic: the normalized text segmented into words aligned with part of speech tags; each verse is aligned with Brenton’s English translation of the Septuagint
  2. chapter: the normalized text presented as chapters and verses; each word links to the online Coptic dictionary
  3. ANNIS search: full search of text, lemmas, parts of speech, syntactic annotations, etc. (see our ANNIS tips if you’re new to ANNIS)

Please keep in mind this corpus is fully machine-annotated, and we currently do not have the capacity to make manual changes to a corpus of this size. If you notice systematic errors (the same thing tagged incorrectly often, for example) please let us know. Otherwise, please be patient: as the tools improve, we will update this corpus.

We’ve also machine-aligned the text with Brenton’s English translation of the Septuagint. It’s possible there will be some misalignments.  Thanks for your understanding!

Treebanks

We’ve added more documents to our separate gold-standard treebank corpus.  (Want to learn more about treebanks?) In this corpus, the treebank/syntactic annotations have been manually corrected; the documents are part of the Universal Dependencies project for cross-language linguistics research.  New treebanked documents include selections from 1 Corinthians, the Gospel of Mark, Shenoute’s Abraham Our Father, Shenoute’s Acephalous Work 22, and the Martyrdom of Victor.  This means the self-standing treebank corpus is expanded, and any documents we’ve treebanked have updated word segmentation, part of speech tagging, etc., in their regular corpora.

Updated Shenoute Documents

Documents in the corpora for Shenoute’s Abraham Our Father and Acephalous Work 22 have several updates.

First, some documents are in our treebank corpus and are now significantly more accurate in terms of word segmentation, tagging, etc.

Second, we’ve added chapter and verse segmentation to these works. Since there are no comprehensive print editions of these works with versification, we’ve applied our own chapter and verse numbers. We recognize that versification is arbitrary, but it is nonetheless useful for citation. For texts transcribed from manuscripts, chapter divisions typically occur at an ekthesis in the manuscript. (Ekthesis describes a letter hanging in the margin.) Divisions do not necessarily occur at every ekthesis (if ekthesis is very frequent), but we try to place divisions only where ekthesis occurs. Verses typically correspond to one sentence, though a verse may contain more than one very short sentence, and a very long Shenoutean sentence may span more than one verse.

Third, we’ve added “Order” metadata to make it easier to read a work in order if it’s broken into multiple documents.  Check out Abraham Our Father, for example: the first document in the list is the beginning of the work.

Screen shot of list of documents in Abraham Our Father Corpus

When you’re reading through a document, click on “Next” to get the next document in reading order.  (If there are multiple manuscript witnesses to a work, we’ll send you to the next document in order with the fewest lacunae, or missing segments.)

Screen shot of beginning of Abraham Our Father

Of course, you can always click on documents in any order you want to read however you like!

And everything is fully searchable across all documents in ANNIS.

New Metadata Fields Documenting Annotation

We sometimes get asked: which corpora do scholars annotate, and which corpora are machine-annotated?  The answer is complicated — almost everything is machine annotated, with different levels of scholarly review.  So we’re adding three new metadata fields to help show users what kinds of annotation each document gets:

  • Segmentation refers to word segmentation (or “tokenization”) performed by the tokenizer tool.
  • Tagging refers to part of speech, language of origin, and lemma tagging performed by our tagger.
  • Parsing refers to dependency syntax annotations (which are part of our treebanking).

Each of these fields contains one of the following values:

  • automatic: fully machine annotated; no manual review or corrections to the tool output
  • checked: the tool has annotated the text, and a scholar has reviewed the annotations before publication
  • gold: the tools have been run and the annotations have received thorough review; this value usually applies only to documents that have been treebanked by a scholar (requiring rigorous review of word segmentation and tagging along the way)

For example, in the first image of document metadata visible in ANNIS, the document has automatic parsing; a scholar has checked the word segmentation and tagging.

 

Screenshot of document metadata showing checked word segmentation and tagging

In the next image of document metadata, a scholar has treebanked the text, making segmentation, tagging, and parsing all gold.

Screenshot of document metadata showing gold level annotations

 

We are rolling out these annotations with each new and newly edited corpus; not every corpus has them yet — only the ones in this release.  Our New Testament and Old Testament corpora are machine-annotated (automatic) in all annotations.

 

We hope you enjoy!

 

Online lexicon linked to our corpora!

We have a great announcement today.  Along with our German research partners as part of the KELLIA project, we are releasing an online Coptic lexicon linked to our corpora.

For over three years, the Berlin-Brandenburg Academy of Sciences has been working on a digital lexicon for Coptic.  Frank Feder began creating it, encoding definitions for Coptic lemmas in three languages: English, French, and German.  The final entries were completed by Maxim Kupreyev at the academy and Julien Delhez in Göttingen.  The base lexicon file is encoded in TEI-XML.  This summer Amir Zeldes and his student, Emma Manning, created a web interface.  We will release the source code soon as part of the KELLIA project.
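For the curious, here is a minimal sketch of how entries in a TEI-XML lexicon can be read programmatically. The element names (entry, form, orth, def) follow the TEI dictionaries module and the filename is hypothetical; the KELLIA file’s exact structure may differ.

    # Minimal sketch: reading entries from a TEI-XML lexicon file
    import xml.etree.ElementTree as ET

    TEI = "{http://www.tei-c.org/ns/1.0}"   # TEI namespace
    tree = ET.parse("coptic_lexicon.xml")   # hypothetical filename

    for entry in tree.iter(TEI + "entry"):
        # Collect the written forms and the definitions of each entry
        forms = [orth.text for orth in entry.iter(TEI + "orth")]
        defs = [d.text for d in entry.iter(TEI + "def")]
        print(forms, defs)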

It may still need some refinements and updates, but we think it is a useful achievement that will help anyone interested in Coptic.

Entries have definitions in French, German, and English.

You can use the lexicon as a standalone website.  For the pilot launch it’s hosted on the Georgetown server, but make no mistake: this is a major research outcome for the BBAW.

We’ve also linked the dictionary to our texts in Coptic SCRIPTORIUM.  You can click on the ANNIS icon in the dictionary entry to search all corpora in Coptic SCRIPTORIUM for that word.

The link also goes in the other direction.  In the normalized visualization of our texts, you can click on a word and be taken to the entry for that word’s lemma in the dictionary.  You can do this in the normalized visualization in our web application for reading and accessing texts (pictured below), or in the normalized visualization embedded in the ANNIS tool.

Screenshot of the normalized visualization in the web application

Of course there will be refinements and developments to come.  We would love to hear your feedback on what works, what could work better, and where you find glitches.

On a more personal note, when Amir and I first came up with the idea for the project, we dreamed of creating a Perseus Digital Library for Coptic.  This dictionary is a huge step forward.  And honestly, I myself had almost nothing to do with this piece of the project.  It’s an example of the importance and power of collaboration.

Coptic Treebank Released

Yesterday we published the first public version of the Coptic Universal Dependency Treebank. This resource is the first syntactically annotated corpus of Coptic, containing complete analyses of each sentence in over 4,300 words of Coptic excerpts from Shenoute, the New Testament and the Apophthegmata Patrum.

To get an idea of the kind of analysis that treebank data gives us, compare the following examples of an English and a Coptic dependency syntax tree. In the English tree below, the subject and object depend on the verb for their grammatical function – the nominal subject (nsubj) is “I”, and the direct object (dobj) is “cat”.

English dependency tree

We can quickly find out what’s going on in a sentence or ‘who did what to whom’ by looking at the arrows emanating from each word. The same holds for this Coptic example, which uses the same Universal Dependencies annotation schema, allowing us to compare English and Coptic syntax.

He gave them to the poor
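Since the tree images themselves are not reproduced in this text version, here is a hand-made stand-in in Python showing the kind of structure such a tree encodes, and how ‘who did what to whom’ can be read off the arcs. The sentence and tuple encoding are invented for illustration; real treebank data is distributed in the CoNLL-U format.

    # Hand-made stand-in for a dependency tree: (form, head_index, relation),
    # with head 0 as the artificial root; the sentence is invented
    tree = [
        ("I",   2, "nsubj"),  # subject depends on the verb
        ("saw", 0, "root"),
        ("the", 4, "det"),
        ("cat", 2, "dobj"),   # direct object depends on the verb
    ]

    def who_did_what(tree):
        # Read off the arcs emanating from each word
        verb = next(form for form, head, rel in tree if rel == "root")
        subj = next((f for f, h, r in tree if r == "nsubj"), None)
        obj = next((f for f, h, r in tree if r == "dobj"), None)
        return (subj, verb, obj)

    print(who_did_what(tree))  # -> ('I', 'saw', 'cat')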

Treebanks are an essential component for linguistic research, but they also enable a variety of Natural Language Processing technologies to be used on a language. Beyond automatically parsing text to produce more analyzed data, we can use syntax trees for information extraction and entity recognition. For example, the first tree below shows us that “the Presbyter of Scetis” is a coherent entity (a subgraph, headed by a noun); the incorrect analysis following it would suggest Scetis is not part of the same unit as the Presbyter, meaning we could be dealing with a different person.

One time, the Presbyter of Scetis went…

One time, the Presbyter went from Scetis… (incorrect!)
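The difference between the two analyses can be made concrete: using the same tuple encoding as above, collecting the subtree headed by the noun ‘Presbyter’ yields the whole entity under the correct analysis (a hand-made sketch, not actual treebank data).

    # Each token: (index, form, head_index, relation); head 0 is the root
    tree = [
        (1, "the",       2, "det"),
        (2, "Presbyter", 3, "nsubj"),
        (3, "went",      0, "root"),
        (4, "of",        5, "case"),
        (5, "Scetis",    2, "nmod"),  # attached to 'Presbyter': one entity
    ]

    def subtree(tree, head):
        # Gather the head and everything that (transitively) depends on it
        members = {head}
        changed = True
        while changed:
            changed = False
            for idx, _, h, _ in tree:
                if h in members and idx not in members:
                    members.add(idx)
                    changed = True
        return [form for idx, form, _, _ in tree if idx in members]

    print(" ".join(subtree(tree, 2)))  # -> 'the Presbyter of Scetis'

Under the incorrect analysis, ‘Scetis’ would attach to the verb instead, and the same subtree would contain only ‘the Presbyter’.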

To find out more about this resource, check out the new Coptic Treebank webpage. And to read where the Presbyter of Scetis went, go to this URN: urn:cts:copticLit:ap.19.monbeg.

New Coptic morphological analysis

A new component has been added to the Coptic NLP pipeline at:

https://corpling.uis.georgetown.edu/coptic-nlp/

This adds morphological analysis of complex word forms, including multiple affixes (e.g. derived nouns with affixes such as Coptic ‘mnt’, equivalent to English ‘-ness’), compounds (noun-noun combinations) and complex verbs. Using the automatic morphological analysis will substantially reduce the amount of manual work involved in putting new texts online, meaning we will be able to concentrate on getting more texts out there faster, as well as developing new tools and ways of interacting with the data.
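In miniature, the kind of analysis this adds looks like the sketch below (a toy illustration, not the real analyzer, which uses a full lexicon and context):

    # Toy derivational analysis: peel the abstract-noun prefix mnt-
    # (roughly English '-ness') off a complex noun (illustration only)
    PREFIXES = ["ⲙⲛⲧ"]

    def analyze(word):
        for prefix in PREFIXES:
            if word.startswith(prefix) and len(word) > len(prefix):
                return [prefix, word[len(prefix):]]
        return [word]

    # mnt + rmnhēt 'wise person' gives 'wisdom'
    print(analyze("ⲙⲛⲧⲣⲙⲛϩⲏⲧ"))  # -> ['ⲙⲛⲧ', 'ⲣⲙⲛϩⲏⲧ']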

New Coptic NLP pipeline

The entire Coptic Natural Language Processing tool chain has been difficult for some users to get running: it involves several command line tools, and special attention must be paid to coordinating word division expectations between the tools (normalization, tagging, language of origin detection). To make this process simpler, we now offer a web interface that lets you paste in Coptic text and run all tools on the input automatically, without installing anything. You can find the interface here:

https://corpling.uis.georgetown.edu/coptic-nlp/

The pipeline is XML-tolerant (it preserves tags in the input), and there is also a machine-actionable API version for external software to use these resources. Please let the Scriptorium team know if you’re using the pipeline and/or run into any problems.
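For scripted use, a call might look roughly like the sketch below; the endpoint path and form field name are assumptions for illustration, so check the interface page for the actual API details.

    # Hypothetical sketch of calling the pipeline API (endpoint and field
    # names are assumptions; check the site for the real interface)
    import requests

    resp = requests.post(
        "https://corpling.uis.georgetown.edu/coptic-nlp/api",  # assumed path
        data={"data": "ⲁⲡⲥⲱⲧⲏⲣ ⲉⲓ ⲉⲃⲟⲗ ϩⲙⲡⲣⲟ"},               # assumed field
    )
    resp.raise_for_status()
    print(resp.text)  # annotated output, with any input XML tags preserved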

 

Happy processing!

Introducing the Lemmatizer Tool

A new tool available at the Coptic SCRIPTORIUM webpage is the lemmatizer. The lemmatizer annotates words with their dictionary head word. The purpose of lemmatization is to group together the different inflected forms of a word so they can be analyzed as a single item.

For example, in English, the verb ‘to walk’ may appear as ‘walk’, ‘walked’, ‘walks’, and ‘walking’. The base form, ‘walk’, might be the word to look up in the dictionary, and it would be called the lemma for the word.
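In miniature, a lemmatizer can be thought of as a lookup from inflected forms to a shared head word (a toy English sketch; the real tool also handles unseen forms and ambiguity):

    # Toy lemma lookup: map inflected forms to a dictionary head word
    LEMMAS = {"walked": "walk", "walks": "walk", "walking": "walk"}

    def lemmatize(tokens):
        return [(t, LEMMAS.get(t, t)) for t in tokens]

    print(lemmatize(["she", "walks", "and", "walked"]))
    # -> [('she', 'she'), ('walks', 'walk'), ('and', 'and'), ('walked', 'walk')]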

In Coptic, nouns sometimes have plural forms distinct from the singular, and verbs likewise appear in several inflected forms.  A lemmatized corpus is useful for searching all the forms of a word, and also if you want to link all the forms of a word to an online dictionary.

Two of our corpora are annotated with lemmas: Not because a fox barks (Shenoute) and the Apophthegmata Patrum. As illustrated in the image below, I have searched for ⲟⲩⲱϩ, ‘to live or dwell’.

Screenshot of a lemma search for ⲟⲩⲱϩ in ANNIS

Also note that in the corpus list, I have chosen to look in the corpus ‘Not Because a Fox Barks’, as indicated by the highlighted blue selection.

Screenshot of the Coptic SCRIPTORIUM ANNIS corpus search

Notice that the word forms corresponding to the lemma I searched for become highlighted in the chosen corpus.  Two forms of the verb ⲟⲩⲱϩ appear in the results: ⲟⲩⲱϩ and ⲟⲩⲏϩ.  There is also an annotation grid.

Screenshot of the search results

Clicking on the annotation grid reveals a wealth of information, including the translation of the text along with its parts of speech.  Hovering over the text also lets you find related parts.  For example, below, the POS (part of speech) is V (verb), and hovering over V highlights the word in the text that the tag refers to.

Screenshots of the annotation grid and part-of-speech highlighting

The tool is a feature in our part-of-speech tagger, so you can lemmatize at the same time you annotate a corpus for parts of speech.  See https://github.com/CopticScriptorium/tagger-part-of-speech/.

Additional guidelines are available here:  https://github.com/CopticScriptorium/tagger-part-of-speech/blob/master/Coptic%20SCRIPTORIUM%20lemmatization%20guidelines.pdf
