Let's learn a bit more

Catch "at-risk" language before it becomes a problem

High-value documents are found in nearly every business today, from small tech start-ups to Fortune 500 companies. These documents touch nearly every aspect of product development and are often considered the "knowledge real estate" of the organization.

If your organization writes or reviews requirements documents, proposals, instructions, standards, or briefs, then Vital Text Systems Analytic Service helps you ensure those documents contain ironclad, linguistically hardened, unambiguous language, helping your organization avoid potentially dire consequences.

Why Vital Text Systems

... and what about Grammar Checkers?

Other commercially available requirements engineering tools address requirements quality with project management features. Vital Text Systems is the only tool available that addresses issues of language fidelity at the document authoring and review levels.

Grammar checkers are great! Don't get us wrong. We think they're great tools to help you write better. But, as the name implies, they are all about grammar (and syntax).

No grammar issue ever caused a $100 million misunderstanding and no one ever died from bad grammar (though some may have wished they had).

Vital Text Systems is the only technology that helps comprehensively reduce defects caused by imprecise, vague, ambiguous, or incoherent language, and our patent (USPTO #9,678,949) proves it!

We're also the only solution that provides real-time feedback to document authors and reviewers, increasing their knowledge of terms and language usage and helping reduce at-risk language in their future work.

By fixing errors at their root, Vital Text Systems saves time, money, and even lives.

How Does Vital Text Work?

Below is some sample input text:

Vital Text Systems (VTS) provides a patented analytic software tool, Vital Text Systems Analytics (VTAS). VTAS enhances the comprehensibility and reliability of language in written texts of requirements documents (e.g. technical specifications, training manuals, and instructions). More specifically, VTAS is a language analysis software system that flags “at-risk” or weak language in highly important documents.

VTAS implements multiple layers of analysis, combining computational linguistics technology (natural language processing), machine learning (predictive analytics) that learns progressively from real human assessments of language, and evidence from cognitive science to support the discovery and identification of weaknesses in natural language.

Our patented technology is, in essence, an analysis engine that examines the linguistic structures in large documents, drawing on numerous classes of risky language patterns (e.g., misuse of pronouns, confusing syntactic structures).

What Does Vital Text Do?

We analyze your written text and provide a report covering four areas of at-risk language:

Coordination Ambiguities

These are constructions that use “and” or “or” where it may be unclear which concepts in a list or pair should be grouped together. In cases where an appropriately placed final comma (the Oxford comma) is optional, we recommend it for clarity.

Referent Ambiguities

These are typically pronouns distant from the referent to which they are attached. The more distant the referent, the more likely the pronoun will be thought to apply to an unintended referent.

Speculative Modals

These are usages of language that express uncertainty, necessity, ability, or permission, most often in a weak manner. Examples include words such as can, could, may, ought, and should.

Long Sentences

These findings occur when a sentence is long enough to risk misunderstanding. Studies have shown that as sentences grow longer, the risk of ambiguity increases.
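To give a feel for the first two of these checks in miniature, here is a toy sketch that flags speculative modals with a word list and long sentences with a word-count threshold. It is illustrative only: the modal list and the 30-word threshold are our assumptions for this sketch, not parameters of the actual patented VTAS engine, which performs far deeper linguistic analysis.

```python
import re

# Assumed for illustration only; not the VTAS modal inventory or threshold.
SPECULATIVE_MODALS = {"can", "could", "may", "might", "ought", "should"}
LONG_SENTENCE_WORDS = 30

def flag_at_risk(text):
    """Return (sentence number, category, detail) tuples for a toy analysis."""
    findings = []
    # Naive sentence split: break after ., !, or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    for i, sentence in enumerate(sentences, start=1):
        words = re.findall(r"[A-Za-z']+", sentence)
        modals = sorted({w.lower() for w in words} & SPECULATIVE_MODALS)
        if modals:
            findings.append((i, "speculative modal", ", ".join(modals)))
        if len(words) > LONG_SENTENCE_WORDS:
            findings.append((i, "long sentence", f"{len(words)} words"))
    return findings

sample = ("The valve may be opened during testing. "
          "Operators should verify pressure readings.")
for finding in flag_at_risk(sample):
    print(finding)
```

A real analysis also has to handle abbreviations, quoted speech, and modals used non-speculatively, which is why simple word lists are only a starting point.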

In addition to language analysis, VTS also provides an interactive framework that connects our patented system with the document author.

Below is an excerpt from an analyzed document:

Important facts and conclusions.
As discussed earlier, one of the primary objectives of these DAH rules is to ensure that operators have at least one source of FAA-approved data and documents that they can use to comply with operational requirements.
This objective would be defeated if the required data and documents were not, in fact, approved. Only by retaining authority to approve these materials can we ensure that they comply with applicable requirements and can be relied upon by operators to comply with operational rules.
The system reports to the author four crucial pieces of information:
  • Results from the language analysis (potential language weaknesses)
  • Tools and resources for a more comprehensive understanding of the nature of identified problems in the text
  • Ways to resolve and improve the quality of the document (language education)
  • A Vital Text Comprehension Quality Index (CQI), a statistical ranking of document quality indicating overall language clarity and comprehensibility. (This statistical ranking is especially useful for knowledge workers who further employ the document for product development.)
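As a rough intuition for how a document-level index could summarize per-sentence findings, here is a toy score: the fraction of sentences with no flagged issues, scaled to 0–100. The formula is a hypothetical stand-in for illustration; the actual CQI computation is part of the patented system and is not shown here.

```python
def toy_quality_index(sentence_count, flagged_sentence_count):
    """Toy clarity score: percentage of sentences with no findings.

    Hypothetical formula for illustration only; the real Vital Text
    Comprehension Quality Index is computed differently.
    """
    if sentence_count == 0:
        return 100.0
    clean = sentence_count - flagged_sentence_count
    return round(100.0 * clean / sentence_count, 1)

# e.g., 12 sentences, 3 of them carrying at least one finding:
print(toy_quality_index(12, 3))  # → 75.0
```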

Meet the team

... a vital part of this mission

Gordon Monk



With 30 years of experience managing software teams, Gordon is an expert in agile and plan-driven systems engineering. He has also trained many teams on effective requirements engineering techniques.


Wayne Cowart, PhD

Chief Linguistics Officer

Wayne is a retired professor of linguistics at the University of Southern Maine. He specializes in syntactic aspects of language and is the author of the English4Engineers curriculum.

Chris Gannon


Director, Digital Strategy

Chris has 20+ years of experience in web development, eCommerce, business solutions, and public speaking. Past clients include BarnesAndNoble.com, FedEx, Citigroup, and KPMG. He believes AI should help people do their jobs, not replace them.

Matthew Mains


Lead developer

Matthew has five years of experience in web and desktop software development. He specializes in Microsoft .NET but is always willing to branch out to try new technologies, and he strives to understand the bigger picture of how systems interact.

Eric Mulvihill


Sr. Software Engineer

Eric has 20 years of experience in eCommerce, online learning, multimedia, and systems administration. He enjoys working closely with researchers and has a passion for machine learning applications and improving engineering teams through continuous delivery practices.


Tony Mullen, PhD

Sr. Data Scientist

Tony has a PhD in natural language processing from the University of Groningen and a master's degree in linguistics from Trinity College, Dublin. He has published research in the fields of computational syntax, named entity recognition, and sentiment analysis.


Daniel Tofan, PhD

Sr. Software Engineer

Daniel has a PhD in Chemistry from Georgetown University and a few other degrees. He has worked as a software engineer at several tech companies, mostly with Java, Groovy, and Grails. Now he enjoys UI/UX work using modern frameworks such as Vue and Angular, as well as other tools within the Node.js ecosystem.

Margaret Anne Rowe


Language Scientist

Margaret Anne Rowe is a discourse analyst by trade and a corpus linguist by night. She graduated from Georgetown University in 2018 with a BA in linguistics and French and is completing her accelerated master's in language and communication in 2019. She believes linguistics is at its best when it's interdisciplinary.

Ethan Beaman


Machine Learning Researcher

Ethan is a data scientist from Georgetown University with a particular interest in natural language processing and a growing portfolio of original algorithms for anomaly detection. While he waits for his models to train, he likes playing chess and taking long walks on the beach.

Ready to check it out?