At the heart of all language lies a singular paradox: it is impossible to define a word without using more words. Each of those words is then defined with other words, which are themselves defined with yet more words, and so on. If all words are defined with other words, where does the meaning come from? This is among the biggest questions of abstract linguistics.

One hypothesis asserts that meaning is entirely relative: all that exists for certain is the structure of language, imposed by dictionaries and conversations, and individual people assign their own personal meanings to words based on context. Some find this theory rather troublesome. For example, it would imply that there is no way you can be certain that the meaning you have assigned to the words you’re currently reading matches the meaning I have assigned to the words I am currently writing. Worse still, if there is a disparity between how we define words, it is irresolvable. The only way to explain to someone else that their meaning of a word differs from yours is by using other words, which may also be defined differently by both parties.

These worries are usually hand-waved away by assuming that the structure of language is complex enough that only one mapping of meanings to words satisfies all of it, plus or minus a few quirks here and there. However, this assurance is a conjecture rather than a theorem; while no one has been able to rigorously prove it, no one has been able to disprove it either, and it seems more plausible than implausible. The “Holy Grail” of this field of linguistics is to disprove this conjecture by constructing a proper linguistic isomorphism: a way of assigning substantially different meanings to each word, such that the structure of language is preserved. That is, every meaningful sentence remains meaningful, albeit with a different meaning, and every unmeaningful sentence remains unmeaningful.
Then there are those linguists who maintain that relativity is a matter for physicists, with no rightful place in their field. Linguistic absolutists, as they are commonly called, believe meaning to be derived from The Primal Dictionary: the minimal set of word-idea pairs that can be used to construct a maximally robust language, that is, one capable of expressing any possible (or impossible) idea. Though the phrase “The Primal Dictionary” carries more gravitas than merely “A Primal Dictionary”, the latter is technically more correct; even without knowing whether or not such a dictionary exists, linguists have proven that it could not be unique, as the existence of one implies the existence of many more. For previous definitions of a Primal Dictionary (readers are advised against trying to fathom how one defines that which defines all meaning), it sufficed to construct a language as robust as English. Whether or not this new condition is strictly stronger is a tremendously difficult problem. How can one know whether there exist ideas that are inexpressible by human language? And if one were to find such an idea, how would they convince their colleagues?