
PyCon: Pragmatic Unicode, or, How do I stop the pain?

See the website.

See the slides.

This was one of the best talks. The room was packed. It's the best Unicode talk I've ever been to!

Computers deal with bytes: files, networks, everything. We assign meaning to bytes using convention. The first such convention was ASCII.

The world needs more than 256 symbols, though. Single-byte character codes mapped each byte value to a character, but any such scheme is still limited to 256 symbols.

EBCDIC, APO, BBG, OMG, WTF!

Then, they tried two bytes.

Finally, they came up with Unicode.

Unicode assigns characters to code points. There are 1.1 million code points. Only 110k are assigned at this point. All major writing systems have been covered.

"Klingon is not in Unicode. I can explain later."

Unicode has many funny symbols, like a snowman and a pile of poo.

"U+2602 UMBRELLA" is a Unicode character.

Encodings map Unicode code points to bytes.

UTF-16, UTF-32, UCS-2, UCS-4, and UTF-8 are all encodings.

UTF-8 is the king of encodings. It uses a variable number of bytes per character; hence it's a variable-length encoding. ASCII characters are still only one byte each in UTF-8.

No Unicode code point needs more than 4 UTF-8 bytes.
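
A quick sketch of the variable lengths (len() here counts the encoded bytes):
print(len(u'H'.encode('utf-8')))           # 1 byte: ASCII characters stay 1 byte
print(len(u'\u00e9'.encode('utf-8')))      # 2 bytes: e with an accent
print(len(u'\u2602'.encode('utf-8')))      # 3 bytes: the umbrella
print(len(u'\U0001F47D'.encode('utf-8')))  # 4 bytes: a code point beyond the BMP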

Python 2 and Python 3 are radically different.
In Python 2
A str is a sequence of bytes, like 'foo'. A unicode object is a sequence of code points, like u'foo'.

bytes != code points!

unicode.encode() returns bytes. bytes.decode() returns a unicode object.
my_unicode = u"Hi \u2119"               # a unicode object (note the u prefix)
my_utf8 = my_unicode.encode('utf-8')    # unicode -> bytes
my_unicode = my_utf8.decode('utf-8')    # bytes -> unicode
Many encodings only support a subset of Unicode. For instance, .encode('ascii') will fail for any character outside range(128).

Random byte streams cannot successfully be decoded as UTF-8. This is a feature that tells you when you're doing something wrong (i.e. decoding something that isn't actually UTF-8).
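
For instance (a tiny sketch; these particular bytes are arbitrary junk):
b'\x78\x9a\xbc\xde\xf0'.decode('utf-8')   # raises UnicodeDecodeError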

If there are characters that can't be encoded (or bytes that can't be decoded), you can handle the errors in multiple ways. For instance, you can replace unencodable characters with "?" by using my_unicode.encode('ascii', 'replace'). There are other approaches available as well; see the second argument to the encode (and decode) methods.
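
A sketch of a few of the standard error handlers (Python 2 shown; 'replace', 'ignore', and 'xmlcharrefreplace' are all built-in codec error handlers):
my_unicode = u"Hi \u2119"
print(my_unicode.encode('ascii', 'replace'))            # Hi ?
print(my_unicode.encode('ascii', 'ignore'))             # Hi
print(my_unicode.encode('ascii', 'xmlcharrefreplace'))  # Hi &#8473;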

Python 2 tries to implicitly do conversions when you mix bytes and unicode. This is based on sys.getdefaultencoding().

This implicit conversion seems helpful when everything is ASCII. When it isn't, it's PAINFUL!
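
A sketch of both sides (Python 2 only; the bytes in the second line are the UTF-8 snowman, which is not ASCII):
u"Hello " + "world"          # works: "world" is implicitly decoded as ASCII
u"Hello " + "\xe2\x98\x83"   # raises UnicodeDecodeError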

You have both bytes and unicode, and you need to keep them straight.
In Python 3
The biggest change in Python 3, and the one that causes the most pain, is Unicode.

A str is a sequence of code points (i.e. Unicode), such as "Hi \u2119".

A bytes object is a sequence of bytes, such as b"foo".

Python 3 does not provide any automatic conversion between bytes and (unicode) strs.

Mixing bytes and (unicode) strs is always painful in Python 3. You are forced to keep them straight. The pain is much more immediate.
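
A sketch of that immediacy (Python 3):
"Hi " + b"foo"                     # raises TypeError: no implicit conversion
"Hi" == b"Hi"                      # False: a str never equals a bytes object
len("Hi \u2119")                   # 4 code points
len("Hi \u2119".encode('utf-8'))   # 6 bytes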

The data you get from a file depends on how you open it. For instance, if you use open('f', 'rb'), you'll get bytes because of the 'b'. You can instead pass an encoding argument to read text.
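
A sketch of both modes (Python 3; 'f' is a placeholder filename):
with open('f', 'rb') as binary_file:
    data = binary_file.read()    # bytes, exactly as stored on disk

with open('f', encoding='utf-8') as text_file:
    text = text_file.read()      # str (code points), decoded for you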

See: locale.getpreferredencoding()

stdin and stdout are preopened file handles. That can complicate things.
Relieving the Pain
Think of your program as a Unicode sandwich. It should use bytes on the outside and unicode objects on the inside. It should encode and decode at the edges. Beware that libraries might be doing the encoding and decoding for you.
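
A minimal sketch of the sandwich (Python 3; the filenames are placeholders):
# Bottom slice: decode bytes to text at the input edge.
with open('input.txt', 'rb') as f:
    text = f.read().decode('utf-8')

# The filling: all of the program logic works on code points.
result = text.upper()

# Top slice: encode text back to bytes at the output edge.
with open('output.txt', 'wb') as f:
    f.write(result.encode('utf-8'))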

Know what you have. Is it bytes or unicode? If it's bytes, what's the encoding?

The encoding is out-of-band information: you cannot infer it from the bytes themselves. You must be told what encoding the bytes have, or you have to guess (which may not always work).

Data is dirty. Sometimes you get data and you're wrong about what encoding it's in.

Test unicode. Get some exotic text. Upside down text is awesome.

There are a lot more details such as BOMs (byte order marks).

Cherokee was recently given a writing system.

Japanese people have a large set of emoticons, and they were added to Unicode.

U+1F47D (EXTRATERRESTRIAL ALIEN) is an alien emoticon.

He showed a bunch of cool Unicode glyphs.

It's better to not have an implicit encoding. It's better to always be explicit.

Don't mess with the default system encoding!

A BOM shouldn't appear in UTF-8 at all since it is unnecessary. However, Python has an encoding, utf-8-sig, that basically says, "this is UTF-8, but ignore the BOM".

The unicodedata module has a remarkable amount of information.
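
For instance (a quick sketch of a few of its functions):
import unicodedata
print(unicodedata.name(u'\u2603'))      # SNOWMAN
print(unicodedata.lookup('UMBRELLA'))   # the umbrella character itself
print(unicodedata.category(u'A'))       # Lu (uppercase letter)
print(unicodedata.numeric(u'\u2155'))   # 0.2 (VULGAR FRACTION ONE FIFTH)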

The implicit encoding and decoding present in Python 2 doesn't even exist in Python 3.

In Python 2, there's an "io" module that knows how to open files with the correct encoding.
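
A sketch (Python 2; io.open takes the same arguments as the builtin open in Python 3, and 'f' is a placeholder filename):
import io
with io.open('f', encoding='utf-8') as text_file:
    text = text_file.read()   # a unicode object, already decoded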

When piping to a file, stdout defaults to UTF-8. When outputting to terminal, stdout uses the terminal encoding.
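
You can check what you're getting (a quick sketch):
import sys
print(sys.stdout.encoding)   # e.g. UTF-8 at a terminal; may be None when piped (Python 2)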

