So let's put down a few things that help as a primer and get some confusion out of the way.
Unicode started in the late 1980s as a 16-bit character model.
All Unicode encodings except the 32-bit ones are variable width. UTF-16 is a variable-width encoding where each code point (not character! see below why) takes one or two 16-bit code units.
This is because – as of Unicode version 2.0 in 1996 – a surrogate pair mechanism was introduced to allow for more than 64k code points.
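The surrogate arithmetic is easy to see in a short sketch (in Python here for brevity, since the same math applies regardless of language): a code point above U+FFFF is reduced to a 20-bit offset that is split across a high and a low surrogate.

```python
import struct

# Compute the UTF-16 surrogate pair for a code point above U+FFFF.
# U+1F40D (snake emoji) is used purely as an illustration.
cp = 0x1F40D
assert cp > 0xFFFF

offset = cp - 0x10000                  # 20-bit offset into the supplementary planes
high = 0xD800 + (offset >> 10)         # high (lead) surrogate: top 10 bits
low  = 0xDC00 + (offset & 0x3FF)       # low (trail) surrogate: bottom 10 bits

# Cross-check against Python's own UTF-16LE encoder:
units = struct.unpack('<2H', chr(cp).encode('utf-16-le'))
assert units == (high, low)
print(hex(high), hex(low))             # → 0xd83d 0xdc0d
```

This is also why "length of a UTF-16 string" and "number of code points" are not the same thing: the single code point above occupies two 16-bit code units.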
In Unicode, there is a distinction between code points (the mapping of characters to numeric IDs), storage/encoding (Windows now uses UTF-16LE, which supersedes the formerly used UCS-2), and visual representation (glyphs/rendering), which is left to fonts.
There is no font that can display all Unicode code points. By original aim, the first 256 Unicode code points are identical to the ISO 8859-1 character set (which is Windows code page 28591, not Windows-1252!), and most fonts can display most of those characters.
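That identity between the first 256 code points and ISO 8859-1 – and the fact that Windows-1252 is a different beast – can be checked directly (a Python sketch; the encoding names are Python's aliases for these character sets):

```python
# The first 256 Unicode code points coincide with ISO 8859-1 (Latin-1),
# so a single-byte Latin-1 value equals its Unicode code point number.
text = '\u00E9'                                # é, U+00E9
assert ord(text) == 0xE9
assert text.encode('iso-8859-1') == b'\xe9'    # byte value == code point value

# Windows-1252 differs from ISO 8859-1 in the 0x80-0x9F range,
# e.g. the euro sign, which is not an ISO 8859-1 character at all:
assert '\u20AC'.encode('windows-1252') == b'\x80'
```

So treating Windows-1252 data as ISO 8859-1 (or vice versa) silently corrupts exactly that 0x80–0x9F range.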
One thing that complicates matters is that Unicode allows for both combining characters and ready-made (precomposed) composites. This is one way in which different sequences can be equivalent, so there is Unicode equivalence, for which you need some knowledge of Unicode Normalization (be sure to read this StackOverflow question and this article by Michael Kaplan on Unicode Normalization).
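A minimal sketch of that equivalence (Python's `unicodedata` module implements the standard normalization forms):

```python
import unicodedata

# 'é' as a single precomposed code point vs. 'e' + combining acute accent:
precomposed = '\u00E9'        # é, one code point
decomposed  = 'e\u0301'       # 'e' followed by U+0301 COMBINING ACUTE ACCENT

assert precomposed != decomposed   # different code point sequences...

# ...but canonically equivalent: normalization maps one onto the other.
assert unicodedata.normalize('NFC', decomposed) == precomposed
assert unicodedata.normalize('NFD', precomposed) == decomposed
```

A naive byte- or code-point-wise comparison would call these two strings different, which is why comparisons and lookups should normalize first.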
There are many Unicode encodings, of which UTF-8 and UTF-16 are the most widely used (and are variable length). UTF-32 is fixed length. All 16-bit and 32-bit encodings can have big-endian and little-endian storage and can use a Byte Order Mark (BOM) to indicate their endianness. Not all software uses BOMs, and there are BOMs for UTF-8 and other encodings as well (for UTF-8 it is not recommended to include a BOM).
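The BOM is simply the code point U+FEFF serialized in the target encoding, so its byte order reveals the endianness of the stream. A quick sketch using Python's `codecs` constants:

```python
import codecs

# The BOM bytes per encoding – note how the UTF-16 ones mirror each other:
assert codecs.BOM_UTF16_LE == b'\xff\xfe'
assert codecs.BOM_UTF16_BE == b'\xfe\xff'
assert codecs.BOM_UTF8     == b'\xef\xbb\xbf'   # exists, but not recommended

# The generic 'utf-16' codec prepends a BOM (in the platform's byte order);
# the explicit -LE/-BE variants write no BOM at all:
with_bom = 'A'.encode('utf-16')
assert with_bom[:2] in (codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE)
assert 'A'.encode('utf-16-le') == b'A\x00'       # bare code units, no BOM
```

This is also why a BOM-less 16-bit stream is ambiguous: a reader has to guess (or be told) the byte order.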
When only parts of your development environment support Unicode strings, you need to be aware of which do and which don't. For any interface boundary between those, you need to be aware of potential data loss, and need to decide how to cope with that.
For instance, does your database use Unicode or not for character storage? (For Microsoft SQL Server: do you use CHAR/VARCHAR or NCHAR/NVARCHAR? You should aim for NVARCHAR – yes, you really should – and do not use text, ntext and image.) What do you do while transferring Unicode and non-Unicode text to it? Ask the same questions for web services, configuration files, binary storage, message queuing and various other interfaces to the outside world.
The Windows API is almost exclusively Unicode (see this StackOverflow question for more details).
Delphi and Unicode
Let's focus a bit on Delphi now, as the migration towards Unicode at clients raised a few questions over the last couple of months.
One of the key questions is why there are no conversion tools that help you migrate your existing source code to fully embrace Unicode.
The short answer is: because you can’t automate the detection of intent in your codebase.
The longer answer starts with the fact that there are tools that detect the parts of your Delphi source that potentially have problems: the compiler hints, warnings and errors bring your attention to spots that are fishy, are likely to fail, or are plain wrong.
Delphi uses the standard Windows storage format for Unicode text: UTF-16LE.
Next to that, Delphi supports conversion to and from UTF-8 and UTF-32 (in their various endianness forms).
External storage of text is best done as UTF-8, because it has no endianness issues and because it eases the exchange of text with ISO 8859-1 based systems.
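Both properties are easy to demonstrate (a Python sketch of the same point; in Delphi you would reach for `TEncoding.UTF8` instead):

```python
text = '\u00DCn\u00EFc\u00F6d\u00E9'   # 'Ünïcödé'

# UTF-16 has two byte-order variants of the same text...
assert text.encode('utf-16-le') != text.encode('utf-16-be')

# ...whereas UTF-8 serializes to one and the same byte sequence everywhere,
# and round-trips losslessly:
data = text.encode('utf-8')
assert data.decode('utf-8') == text

# The ASCII range is unchanged in UTF-8, which eases exchange with
# legacy single-byte systems:
assert 'plain ASCII'.encode('utf-8') == b'plain ASCII'
```

That byte-order independence is exactly why UTF-8 needs no BOM for interchange.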
A few extra notes on Delphi and Unicode:
With Delphi string types, stick to UnicodeString (the default string as of Delphi 2009) and AnsiString (the default string until Delphi 2007), as their memory management is done by Delphi. WideString management is done by COM, so only use that when you really need to. Also avoid ShortString.
For any interfaces to the external world, you need to decide which ones to keep as the generic string, Char and PChar, which ones to fix at AnsiChar/PAnsiChar/AnsiString (plus an accompanying code page), and which ones to fix at WideChar/PWideChar/UnicodeString.
Of course remnants from the past will catch up with you: if you have Technical Debt from the days when characters were bytes, and you abused Char/PChar/array-of-Char and the like for binary data, you need to fix that and use Byte/PByte/TByteArray/PByteArray instead. Paying off the accrued debt on that can be costly.
- There is even more confusion on character set, code page, etc., which Mihai tries to set straight in the "Why is the default console codepage called “OEM”?" episode of "The Old New Thing"
- Getting your character set (ANSI, Windows-1252, ISO 8859-1) right is a problem of the same order of magnitude as Ben Hutchings shows.
- Notepad supports three kinds of text formats