I intend to write about most, if not all, of the presentations I attended this weekend. But I’m going to begin with some reflections about my own, including a summary of what I said and what I forgot to say.
I suppose that my biggest disappointment was that I was not as prepared as I had hoped. When it comes down to it, I made the mistake of forgetting to print my notes before we left, which forced me to jump between viewing my PowerPoint slides and my notes. That basically resulted in my saying “uhm” significantly more than I would have preferred and in my forgetting a number of points that I wanted to emphasize.
I forgot to point out that the lexicon part of the program makes it possible to include both audio pronunciations and pictures of individual words, as well as the ability to reverse the lexicon so that one can go from Greek to English or from English to Greek. And well, you can imagine the benefit of all of that for language learning.
Essentially, Language Explorer as a whole functions as a unified and integrated grammar and lexicon in such a way as to make the dream Dr. William Johnson described in his essay for Biblical Greek Language and Lexicography completely possible for those of us (me included) who do not have any computer programming skills. What Johnson envisioned was an independently created Greek lexicon: one that was not dependent upon past lexica for glosses, examples, translations, etc. This is what he wanted:
- Better Translations
- Real Definitions (He specifically said, “[S]omething which will more accurately and fully delimit and explain the word’s semantic character.”)
- Up-to-date Glosses
- Syntactic Usage
- Common Expressions
- Dialectal Information
- Orthographical Variants
- Morphological Relationships
- Frequency & Usage
- Across All Greek?
- Across individual authors?
And he argued that the vast majority of these tasks could be automated. I argued in my presentation that Language Explorer (FLEx) does just that, because FLEx is a lexicon, grammar, text database, morphological parser, and discourse charting tool all rolled into one. Each of these functions within the program is integrated with the others in such a way that any morphological analysis is independent of the Greek text used, which is currently part of the challenge with the MorphGNT and its licensing issues.
Further, rather than a simple tag for an individual wordform such as VAAI3S (Verb Aorist Active Indicative 3rd Singular), the parser actually separates the word into distinct morphemes. So, for example, λόγος would be divided by the parser as *λογο –ς. The root and lexeme would be annotated for gender and inflection class (masculine, 2nd declension), and the inflectional suffix would be marked as “nom.sg.” This is a simple example; things get more complicated with other words, particularly neuters. But FLEx deals with the ambiguity of identical nominatives, vocatives, and accusatives extremely well. All of this information is stored in the lexicon and grammar rather than in the text itself, which also makes it possible to handle inflectional variants and allomorphs (e.g. dative plurals in –αις, –οις, and –σιν can all fall under a single lexical entry). Allomorphs can then be marked based on the environment in which they occur or on inflectional class/declension. The same can be done for phonological (pronunciation) changes or orthographic changes (e.g. in nominative singulars ending in consonants: κ+ς –> ξ).
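To make the stem-plus-suffix idea concrete, here is a toy sketch in Python. This is not FLEx code and not how FLEx works internally; the data structures and names are all hypothetical, and real Greek morphology is far messier. It only illustrates the principle that one lexical entry plus a small suffix table, stored apart from the text, can analyze multiple surface forms:

```python
# A toy morphological parser illustrating stem-plus-suffix analysis.
# NOT FLEx code; purely illustrative, with hypothetical names and data.

# One lexical entry for the root, annotated in the lexicon
# (not in the text) for gloss, gender, and inflection class.
LEXICON = {
    "λόγο": {"gloss": "word", "gender": "masculine", "class": "2nd declension"},
}

# Inflectional suffixes, each carrying its own grammatical features.
SUFFIXES = {
    "ς": {"case": "nom", "number": "sg"},
    "υ": {"case": "gen", "number": "sg"},
}

def parse(wordform):
    """Segment a wordform into root + suffix and merge their features."""
    for suffix, feats in SUFFIXES.items():
        if wordform.endswith(suffix):
            root = wordform[: -len(suffix)]
            if root in LEXICON:
                return (root, suffix, {**LEXICON[root], **feats})
    return None  # no analysis found

root, suffix, features = parse("λόγος")
print(root, suffix, features["case"], features["number"])  # λόγο ς nom sg
```

The point of the sketch is where the information lives: the text itself carries nothing, and every analysis is recovered from the lexicon and the suffix table.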
As a lexicon, FLEx is highly flexible. It’s possible to replicate entries from BDAG in its fields. And what’s more, it goes well beyond BDAG in that one can include semantic domains similar to Louw & Nida, as well as a significantly larger number of lexical relations than either of them:
Calendar Relations – A calendar relation is a type of scale or sequence set. Sequences include a group of senses that are related in an ordered fashion (e.g. “Monday, Tuesday, Wednesday…”).
Compare Relations – This is for general references to other entries.
FLEx also makes it possible to include dialectal differences for the parser to recognize, which means that the same word, whether spelled with “σσ” or the Attic “ττ,” would be parsed the same.
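What such dialect handling amounts to can be sketched in a few lines, again purely as an illustration of the principle rather than of how FLEx implements it:

```python
# Normalizing Attic "ττ" spellings to "σσ" before lookup, so that both
# spellings resolve to the same analysis. Purely illustrative sketch.
def normalize_dialect(wordform):
    return wordform.replace("ττ", "σσ")

# Attic θάλαττα and θάλασσα normalize to the same string,
# so a parser can treat them as the same word.
print(normalize_dialect("θάλαττα") == normalize_dialect("θάλασσα"))  # True
```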
In terms of finding new examples for lexical entries, one can go directly to the text database in the program, find every occurrence of a word, and choose examples that are clear and helpful for individual senses.
My plan, once I have a functional morphology developed, is to find collaborators who would be willing to work on building a lexicon directly from the parsed texts I’ve been analyzing in the program itself. But my other plan is to get an MA thesis out of it as well, so I won’t be releasing anything more widely until at least the next version is released later this year. I’m hoping that by then it will be possible to set up users and administrators, so that I’ll be able to protect my own work. Syntactic analysis is expected to be available then as well, which I’m really looking forward to.
And hopefully by that time, I’ll have significantly more done.
William Johnson, “Greek Electronic Resources and Lexicographical Function,” in Biblical Greek Language and Lexicography, ed. Bernard A. Taylor et al. (Grand Rapids, Mich.: Eerdmans, 2004), 78.