>>29
I wouldn't know.
>>31
A database of basic characters would contain just a few entries. I don't believe the dictionary of words is comparable to the Unicode database.
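For scale, here's a rough sketch of the code widths involved; the counts are illustrative, not a real census:

import math

# Illustrative counts; the exact figures don't matter much here.
BASIC_CHARACTERS = 26 + 26 + 10 + 32     # letters, digits, common punctuation
ENGLISH_HEADWORDS = 200_000              # rough order of a large dictionary

def bits_needed(entries: int) -> int:
    # Smallest fixed code width that can distinguish this many entries.
    return math.ceil(math.log2(entries))

print(bits_needed(BASIC_CHARACTERS))   # 7 bits covers the basic characters
print(bits_needed(ENGLISH_HEADWORDS))  # 18 bits, under 3 bytes, covers the words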
Are you trolling?
No. My thoughts are summarized in-part here: http://verisimilitudes.net/2020-11-11
Natural language is continuously changing, and words are constantly moving in a cloud of meaning, with complex interactions with similar words, commonly paired terms, cultural impact...
Sure, but I've found the same people who defend bastardizing language also tend to be the ones telling others how to use it when it suits them. Observe that the cretins who use the singular 'they', or try to push 'latinx' on Spanish, also tend to claim language evolves to justify these changes, while advocating for them without room for disagreement.
There's absolutely nothing eternal about a word.
I disagree. I believe most people around me use English incorrectly, and if everyone were to use English incorrectly, that wouldn't make it right.
Not even concepts are eternal
Is that concept eternal?
What I'm trying to get at is, under what criteria would you delimit a word in an encoding?
They would be organized by form only. While my system could help to correct errors, that's not the primary purpose.
Words also change in their spelling; would you use the British or the American spelling of a word?
I'd include both, and an auxiliary table of variant words could make a connection, were such valuable.
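A minimal sketch of such an auxiliary table, with invented codes; the real table would be derived from the published dictionary:

# Both spellings get their own entry in the main word list, and a
# separate table links them as variants of one another.
WORDS = {
    0x0101: "colour",
    0x0102: "color",
    0x0103: "organise",
    0x0104: "organize",
}

# Pairs of codes recorded as regional spellings of the same word.
VARIANTS = [
    (0x0101, 0x0102),
    (0x0103, 0x0104),
]

def variants_of(code: int) -> set[int]:
    # Return every code recorded as a spelling variant of the given code.
    found = set()
    for a, b in VARIANTS:
        if code == a:
            found.add(b)
        elif code == b:
            found.add(a)
    return found

print(sorted(WORDS[c] for c in variants_of(0x0101)))  # ['color']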
It's not only blacks or some specific community that evolves language.
My point is that Ebonics isn't an evolution. It's a sickening degeneration, and I don't respect it. I'd prefer it didn't exist.
How would you make space for the new words that are coined each year, or that are inevitably overlooked by the initial cataloguing scheme, without pushing them, in no particular order, onto the end of the list?
A new dictionary would be published, and rules for translation could be automatically derived.
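A sketch of how such a translation could be derived, under the assumption that a code is simply a word's position in the published list; the word lists are invented:

# A new edition may insert words anywhere; the mapping from old codes
# to new codes falls out automatically.
OLD_EDITION = ["cat", "dog", "house"]
NEW_EDITION = ["cat", "computer", "dog", "house", "internet"]

def derive_translation(old: list[str], new: list[str]) -> dict[int, int]:
    # Map each old code to the new code of the same word.
    new_codes = {word: code for code, word in enumerate(new)}
    return {code: new_codes[word] for code, word in enumerate(old)}

print(derive_translation(OLD_EDITION, NEW_EDITION))
# {0: 0, 1: 2, 2: 3} -- old texts can be rewritten mechanically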
1. The dictionary is essentially the same as the Oxford dictionary of the English language.
The end goal would be a comprehensive document, yes.
2. The dictionary describes an official version of the English language.
There's no issue with groups having specialized dictionaries, and this could even be valuable. I respect groups which have formed their own words, such as hackers, and a programming forum could include those words.
3. The dictionary is but an extension to a character encoding scheme.
No; the goal is superseding such.
4. The dictionary is made for a language such as Lojban, where most of these issues are simply non-existent, and which has well-defined rules for making new words and the meaning attached to them.
I'm still playing with toki pona as a test bed: http://verisimilitudes.net/2020-06-18
That also needs updating, but the tiny language will work for a demonstration.
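A minimal sketch of the idea applied to toki pona; the word list is truncated and the byte assignment is illustrative, not what that page actually uses:

# The language has on the order of 120 words, so one byte per word suffices.
WORD_LIST = ["a", "akesi", "ala", "alasa", "ale", "anpa", "ante", "anu"]
CODE_OF = {word: code for code, word in enumerate(WORD_LIST)}

def encode(sentence: str) -> bytes:
    # Encode a whitespace-delimited sentence, one byte per word.
    return bytes(CODE_OF[word] for word in sentence.split())

def decode(data: bytes) -> str:
    return " ".join(WORD_LIST[code] for code in data)

encoded = encode("ala anu a")
print(encoded.hex())    # 020700
print(decode(encoded))  # ala anu a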
All things considered, making such a scheme beyond a trivial collection of high-frequency words seems way more complex than every encoding standard put together.
It's a shame it seems that way, since that's incorrect.
Sorry for the rant; I just couldn't not take the bait.
None of this is bait.