Re: Unicode width data inconsistent/outdated
On 2017-08-07 03:28, Corinna Vinschen wrote:
> On Aug 5 21:06, Thomas Wolff wrote:
>> Am 04.08.2017 um 19:01 schrieb Corinna Vinschen:
>>> On Aug 3 21:44, Thomas Wolff wrote:
>>>> My attempt would be to base the functions on a common table of character categories instead.
>>> Keep in mind that the table is not loaded into memory on demand, as on
>>> Linux. Rather it will be part of the Cygwin DLL, and worse in case
>>> newlib, any target using the wctype functions.
>> Maybe we could change that (load on demand, or put them in a shared library
>> perhaps), but...
> That won't work for embedded targets, especially small ones.
> If you want to go that route, you would have to extend struct __locale_t
> or lc_ctype_T (in newlib/libc/locale/setlocale.h) to contain pointers to
> conversion tables (Cygwin-only), and the __set_lc_ctype_from_win function
> or a new function inside Cygwin (but called from __ctype_load_locale)
> could load the tables.
> Then you could create new iswXXX, towXXX, and wcwidth functions inside
> Cygwin using these tables, rather than relying on the newlib code.
> Alternatively, if RTEMS is interested as well, we may strive for a
> newlib solution which is opt-in. Loading tables (or even big tables at
> all) isn't a good solution for very small targets.
>>> The idea here is that the tables take less space than a full-fledged
>>> category table. The tables in utf8print.h and utf8alpha.h and the code
>>> in iswalpha and iswprint combined are 10K, code and data of the
>>> tolower/toupper functions are 7K, wcwidth 3K, so a total of 20K,
>>> covering Unicode 5.2 with 107K codepoints.
>>> A category table would have to contain the category bits for the entire
>>> Unicode codepoint range. The number of potential bits is > 8 as far as I
>>> know so it needs 2 bytes per char, but let's make that 1 byte for now.
>>> For Unicode 5.2 only the table would be at least 107K, and that would
>>> only cover the iswXXX functions.
>> I have a working version now, and it uses much less space, since the
>> category table stores ranges of code points rather than one entry per
>> code point. Another table is needed for case conversion. Size estimates
>> are as follows
>> (based on Unicode 5.2 for a fair comparison, going up a little bit for 10.0
>> of course):
>> Categories: 2313 entries (10.0: 2715)
>> each entry needs 9 bytes, total 20817 bytes
>> I don't know whether that expands by some word-alignment.
>> I could pack entries to 7 bytes, or even 6 bytes if that helps (total 16191
>> or 13878).
>> Case conversion: 2062 entries (10.0: 2621)
>> each entry needs 12 bytes, total 24744
>> packed 8 bytes, total 16496
>> The Categories table could be boiled down to 1223 entries (penalty: double
>> runtime for iswupper and iswlower)
>> The Case conversion table could be transformed to a compact form
>> Case conversion compact: 1201 entries
>> each entry needs 16 bytes, total 19216
>> packed 12 or 11 (or even 10), total 14412 (or 12010)
>> So I think the increase is acceptable for the benefit of simple and
>> automatic generation
> So we're at 40K+ of data, plus code, then.
> newlib targets embedded systems looking for small-sized solutions; simple
> and automatic generation is not the main goal.
>> and also more efficient processing by some of the
>> functions. Also they would apply to more functions, e.g. iswdigit, which
>> would then accept all Unicode digits, not just the ASCII ones.
> Don't do that. There's a collision with C99 if you define other
> characters than ASCII digits to return nonzero from iswdigit. Comment
> from inside Glibc:
> % The "digit" class must only contain the BASIC LATIN digits, says ISO C 99
> % (sections 7.25.2.1.5 and 5.2.1).
>>>>> int wcwidth(wint_t c);
>>>> Why not revert to wcwidth(wint_t)?
>>>> I think for cygwin it is the only solution that makes wcwidth work for
>>>> non-BMP characters and is also compatible (unlike some proposals discussed
>>>> later in the quoted thread).
>>> We can do this, but it may result in complaints from the other
>>> newlib consumers. If in doubt, use #ifdef __CYGWIN__
>> Which other platforms do actually use newlib?
> Lots of embedded and bare-metal targets.
>>>> Issue 2 is the handling of titlecase characters (e.g. "Nj" as one Unicode
>>>> character U+01CB). The current implementation considers them to be both
>>>> upper and lower (iswupper: return towlower (c) != c); I'd rather consider
>>>> them as neither upper nor lower (iswalpha (c) && towupper (c) == c).
>>>> https://linux.die.net/man/3/iswupper allows both interpretations:
>>>>> The wide-character class "upper" contains *at least* those characters wc
>>>>> which are equal to towupper(wc) and different from towlower(wc).
>>> Susv4 says "The iswupper() [...] functions shall test whether wc is a
>>> wide-character code representing a character of class upper." Whatever
>>> does that correctly with a low footprint is fine.
>> The question here is how "character of class upper" is defined, and how to
>> interpret pre-Unicode assumptions in a Unicode context.
> In theory, do it as glibc does and you're fine.
Implementation considerations for handling the Unicode tables, as described
and implemented elsewhere:
ICU (icu4c/icu4j) uses a folded trie of the properties: the unique property
combinations are indexed, strings of those indices are generated for fixed-size
groups of character codes, unique values of those strings are then indexed, and
those indices are assigned to each character code group. The result is a
multi-level indexing operation that returns the required property combination
for each character code.
The FOX Toolkit uses a similar approach, splitting the 21-bit character code
into 7-bit groups, with two higher levels of 7-bit indices, and further tweaks
to minimize the table sizes.
Take care. Thanks, Brian Inglis, Calgary, Alberta, Canada
Problem reports: http://cygwin.com/problems.html
Unsubscribe info: http://cygwin.com/ml/#unsubscribe-simple