amhoanna wrote:The conscious de-emphasis on aesthetics
I’m not entirely sure what exactly you are referring to. While I also observe a trend that things are above all supposed to look pompous rather than beautiful, I’m not quite sure how that relates to the topic of script unification. After all, people can choose an aesthetic script as well as a non-aesthetic one. In fact, unless there are other, more important reasons to choose a non-aesthetic script over an aesthetic one, I would guess that people would tend towards the latter. Of course, aesthetics is a very subjective matter, but cultural influences would probably urge people in roughly the same direction; I would guess this to be one of the main reasons (if not THE main reason) behind the bias towards completely or at least mainly character-based writing systems.
amhoanna wrote:I doubt inconsistency of orthography is a key factor on any level. It is never cited as a reason by non-learners.
I agree up to a point. A lack of a standard definitely poses more problems to learners than to native speakers. However, I am also convinced that there would be a distinct benefit for native speakers, too. Just recently, I read about a study on how familiarity with the visual form of a written word has a significant impact on reading speed (I would give the actual title of the study, but I merely read about it in a secondary source which I don’t have access to at the moment. I can therefore only refer to said secondary source, Mark Sebba’s “Spelling and Society” (2007)). For example, since we are used to the current English orthography, we can read it a lot faster than, for example, ɪf wi: sɑdn̩li ɹout ɪt ɪn IPA. That doesn’t mean one spelling is superior to the other in any way, only that reading a script that you’re used to is easier than one that you’re not used to, because you’re familiar with the visual form of the words and can recognize them at a glance. I’m guessing this is also connected to the common advice given in speed-reading courses, that you should give up subvocalization (the habit of “pronouncing” the words silently in your head while reading) and dissociate the form of the word from its pronunciation, because you (supposedly) read much more quickly if your focus is on recognizing the written form instead of thinking about the pronunciation.
It is in fact debated how big the influence of this effect on reading smoothness and speed actually is. My guess is that it probably varies depending on the script. For example, I expect the impact to be much bigger with rather irregular scripts like English, whose written forms are not that closely connected with the sounds, than with largely phonetic scripts like, say, POJ. Even then, though, it does make a difference, as is visible, for example, when you compare the speed at which you can read POJ with Liim Keahioong’s equally regular Phofsit Daaibun (at least for me there is a significant difference, but then again, I of course don’t even come close to a level where I can count myself among experienced Hokkien users
). To come back to the topic, however: this means that a unified script would most likely lead to texts being easier and quicker to read for native speakers/readers, because the visual forms of words would always be the same.
Also, while the needs of native speakers definitely need to be addressed first, learners’ demand for a standard should not be completely disregarded either. Hokkien admittedly doesn’t count among the languages with the most learners in the world, not least because other languages are much more important for success in business. Despite, or maybe precisely because of, the small number of learners, however, I think it would benefit the promotion of Hokkien not to scare away all but the most persistent ones.
On the other hand, I agree with you that the MoE characters are not exactly ideal from the linguistic point of view, due to too many, and often unjustifiable, borrowings from Mandarin, the 的/个 distinction being the most horrific example of this (although at least they refrained from creating {女尹} and {牜尹} to mirror the Mandarin characters 她 and 它
(which of course are equally unjustifiable for reasons inherent to the Mandarin language)). Even from the pragmatic point of view there are reasons against them, because the anti-local-language policies of the KMT government made quite a few proponents of Hokkien literature oppose anything coming from the government, for the simple reason that it comes from the government.
I also agree that there is the danger that some 借字 could be mistaken for 本字 by non-linguists. However, I think this in itself is not too big an issue. Taking Mandarin 這 as an example again, I doubt that many Chinese outside circles highly educated in classical literature know that this character originally had nothing to do with the meaning “this” but was pronounced “yàn” and meant “to meet”. Still, this doesn’t impede their ability to read and write Mandarin. Of course, if the 借字 are so many, and come from only one language, that the script almost looks like that other language (which I agree is a problem with the MoE characters), it does become a problem, but that has less to do with 借字 being mistaken for 本字 than with too extensive and imbalanced borrowing. Of course I prefer using 本字 when they are available. But when they are not, I think the danger of 借字 being mistaken for 本字 is not really that problematic in practice, because it would have little impact on people’s ability to write Hokkien. Also, you can always publish lists of the 借字 so that people can refer to them if they really do need/want that information.
Still, I agree with Ah-bin that the Nôm way of creating new characters when there is no identifiable pún-jī is preferable to borrowing characters from other languages. For example, you could write, say, {多坐}, {欲末} and {肉巴} for chē, bueh and bah (I feel {多坐} might be more suitable than {多齊}, because as far as I know 齊 doesn’t rhyme with the word for “many” in Chôan-chiu-type dialects). What might make this a little difficult in practice is the way characters are encoded in Unicode (full characters instead of freely combinable components), but in theory it should be no problem to include them in an extension. In Word you can even circumvent this problem (at least for characters which consist of two other characters next to each other) by writing both separately and then adjusting the horizontal scale of the characters (the setting for this can be found under the Home tab, paragraph section, and then the option “character scaling” under the “Asian Layout” button, at least in Word 2007). However, this is of course quite a pain for writing a single character. Also, it only works with characters composed in one particular way, and only if the software you’re writing in has this option; in this forum, for example, there is no such option (or maybe there is, but it would by far exceed my HTML skills
). In my eyes, the better solution would be to revamp the way characters are encoded in Unicode, from the whole-character system to a system where you can combine elements, say “亻+因” or “伊+心/亻+尹+心” for the third person plural pronoun, at least if your character isn’t already among the encoded ones.
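Incidentally, Unicode already takes a small step in this direction: the Ideographic Description Characters (U+2FF0–U+2FFB) let you spell out an unencoded character as a sequence of its components, e.g. ⿰亻因 for “亻 beside 因”. Most renderers will display the sequence of components rather than draw a single composed glyph, so it describes a character more than it displays one. A minimal Python sketch (the helper name `ids_lr` and the component pairings for the Hokkien examples above are my own, purely for illustration):

```python
# Ideographic Description Characters (real Unicode code points):
LEFT_RIGHT = "\u2FF0"   # ⿰ : left component beside right component
TOP_BOTTOM = "\u2FF1"   # ⿱ : top component above bottom component

def ids_lr(left: str, right: str) -> str:
    """Build an Ideographic Description Sequence: `left` beside `right`."""
    return LEFT_RIGHT + left + right

# Hypothetical Hokkien characters from the discussion above:
che = ids_lr("多", "坐")   # {多坐} for chē
bah = ids_lr("肉", "巴")   # {肉巴} for bah

print(che)  # ⿰多坐
print(bah)  # ⿰肉巴
```

Such sequences are plain text, so they survive copy-and-paste and search even where no font has a composed glyph; a future extension could, in principle, render them as single characters.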
While I don’t see kana as the ideal solution, I also agree with amhoanna that the integration of entirely phonetic elements is necessary. Even if we invented characters to write the non-Sinitic words which have been part of Hokkien for centuries if not millennia (such as “bah”, “bueh” and “lâng”), there would still be plenty of more recent loanwords which I feel are not easy to integrate into a wholly character-based system (i.e. not without either using characters purely for phonetic purposes or creating a whole bunch of new characters for just a single word), simply because most of them are polysyllabic. Latin script may be the most obvious solution, but in my eyes (and I don’t think I’m alone in that view), it doesn’t blend well with the characters. Kana might actually be more workable than I originally thought, but I still think they are too inflexible, especially when it comes to finals and tones. Therefore, I still think a Hangeul-type solution would be easier to adapt and to learn, although of course standard Hangeul would have to be modified as well, to fit Hokkien phonology. The more practical problem, though, is of course the bias among many Hoklophones (especially Mandarin-educated ones) against non-character elements (and in Mainland China very likely especially against Japanese kana). Some may call this bias irrational, ignorant in the face of proof of the successful use of non-character scripts in the past, or even arrogant towards languages that don’t use characters. However, such accusations, be they justified or not, will not make the Hoklophone public more accepting of any suggestions we might have. I may believe that a certain way of writing is the most ideal from a linguistic point of view, but if the public doesn’t accept it, it will have been of little practical use. For this reason, I think it makes only limited sense to exclude the needs of the “recipients”, i.e. the people who are actually supposed to use the script, from my considerations when creating a script. That is, if my goal is to have the public accept it in the first place, of course.