Friday, March 23, 2012

PyCon: How the PyPy JIT Works

See the website.

"If the implementation is hard to explain, it's a bad idea." (Except PyPy!)

The JIT is interpreter agnostic.

It's a tracing JIT. They compile only the code that's run repeatedly through the interpreter.
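
A minimal sketch of the kind of code a tracing JIT targets (my example, not from the talk): the loop below becomes "hot" after enough iterations, and the tracer records the operations of one iteration as a linear trace to compile.

```python
# A simple hot loop: after enough iterations, a tracing JIT records
# the operations of one pass through the loop and compiles that trace.
def sum_squares(n):
    total = 0
    i = 0
    while i < n:          # this loop becomes "hot" after many iterations
        total += i * i    # the trace records: load i, multiply, add, store
        i += 1
    return total

print(sum_squares(1000000))
```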

They have to remove all the indirection that's there because it's a dynamic language.

They try to optimize simple, idiomatic Python. That is not an easy task.

(The room is packed. I guess people were pretty excited about David Beazley's keynote.)

There's a metainterpreter. It traces through function calls, flattening the loop.
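
A hedged illustration of what "flattening" means (my sketch, not PyPy output): a call inside a hot loop is traced through and inlined, so the compiled trace for one iteration has no call overhead.

```python
def scale(x, factor):
    return x * factor

def scale_all(values, factor):
    out = []
    for v in values:                  # hot loop: the tracer follows execution
        out.append(scale(v, factor))  # the call to scale() is traced through,
                                      # so its body ends up inlined in the
                                      # flat, linear trace for the loop
    return out

print(scale_all(list(range(10)), 3))
```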

JIT compiler optimizations are different from static compiler optimizations. You're limited by time: the optimizations run while the program is running, so they have to be fast.

If objects are allocated in a loop and don't escape the loop, they don't need to be heap-allocated, and the boxing can be removed.
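
A sketch of the kind of code this helps (my example): the Point object below never leaves the loop iteration, so the optimizer can keep its fields in registers instead of allocating and boxing them.

```python
class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

def total_distance(n):
    total = 0.0
    for i in range(n):
        p = Point(float(i), float(i + 1))  # allocated fresh each iteration...
        total += p.x + p.y                 # ...and never escaping the loop body,
                                           # so the JIT can treat p as "virtual"
                                           # and never actually allocate it
    return total

print(total_distance(1000))
```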

They do loop unrolling to hoist loop-invariant operations out of the loop.
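
A sketch of a loop-invariant computation (hypothetical example): after peeling one iteration, the invariant part only has to be computed once.

```python
import math

def scaled(values, factor):
    out = []
    for v in values:
        # math.sqrt(factor) does not depend on the loop variable, so after
        # peeling one iteration the JIT can compute it once and reuse it.
        out.append(v * math.sqrt(factor))
    return out

print(scaled([1.0, 2.0, 3.0], 4.0))
```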

They have a JIT viewer.

Generating assembly is surprisingly easy. They use a linear register allocator. The GC has to be informed of dynamic allocations.
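
A minimal sketch of linear-scan register allocation in plain Python (intervals, register names, and spilling policy are illustrative; this is the textbook algorithm, not PyPy's backend):

```python
def linear_scan(intervals, registers):
    """intervals: list of (start, end, name); registers: list of register names.
    Returns a {name: register} assignment and a set of spilled names."""
    assignment, spilled = {}, set()
    free = list(registers)
    active = []  # (end, name), kept sorted by interval end
    for start, end, name in sorted(intervals):
        # Expire intervals that ended before this one starts, freeing registers.
        while active and active[0][0] < start:
            _, old = active.pop(0)
            free.append(assignment[old])
        if free:
            assignment[name] = free.pop()
            active.append((end, name))
            active.sort()
        else:
            # No register free: spill whichever live interval ends last.
            last_end, last_name = active[-1]
            if last_end > end:
                assignment[name] = assignment.pop(last_name)
                spilled.add(last_name)
                active[-1] = (end, name)
                active.sort()
            else:
                spilled.add(name)
    return assignment, spilled

regs, spills = linear_scan([(0, 4, "a"), (1, 3, "b"), (2, 6, "c")], ["r1", "r2"])
print(regs, spills)
```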

They use guards, conditions that must stay true for the JITted code to remain valid. E.g., did the code raise an exception?
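
A sketch of what a guard protects against (my example): the trace for the loop below gets compiled assuming an integer, and violating that assumption later makes the guard fail, falling back to the interpreter.

```python
def add_many(x, n):
    total = x
    for _ in range(n):
        total = total + 1   # the trace assumes the machine-level type of total
                            # (e.g. "total is an int"); a guard checks this
                            # assumption in the compiled code
    return total

print(add_many(0, 100000))    # traced and compiled with int operations
print(add_many(0.0, 100000))  # the "is an int" guard fails; the JIT falls back
                              # and can compile a separate trace for floats
```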

They have data structures optimized for the JIT such as map dicts.

They can translate attribute access into an array lookup in certain cases.
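
A hedged sketch of the maps idea in plain Python (class names and details are illustrative, not PyPy's implementation): instances that get the same attributes in the same order share one "map" from attribute name to index, so values live in a flat list and lookups become array indexing.

```python
class Map(object):
    """Shared layout: attribute name -> position in the instance's storage."""
    def __init__(self):
        self.indexes = {}
        self.transitions = {}   # cache: adding attr "x" leads to the same next Map

    def add_attribute(self, name):
        if name not in self.transitions:
            new_map = Map()
            new_map.indexes = dict(self.indexes)
            new_map.indexes[name] = len(self.indexes)
            self.transitions[name] = new_map
        return self.transitions[name]

EMPTY_MAP = Map()

class Instance(object):
    def __init__(self):
        self.map = EMPTY_MAP
        self.storage = []       # attribute values, positions given by self.map

    def setattr(self, name, value):
        index = self.map.indexes.get(name)
        if index is not None:
            self.storage[index] = value
        else:
            self.map = self.map.add_attribute(name)
            self.storage.append(value)

    def getattr(self, name):
        # array lookup via the shared map, not a per-object dict
        return self.storage[self.map.indexes[name]]

p1, p2 = Instance(), Instance()
for obj, (x, y) in zip((p1, p2), ((1, 2), (3, 4))):
    obj.setattr("x", x)
    obj.setattr("y", y)
assert p1.map is p2.map          # both instances share one map
print(p1.getattr("x"), p2.getattr("y"))
```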

The JIT is generated from an RPython description of the interpreter.

The metainterpreter traces hot loops and functions.

They use optimizations that remove indirection.

They adapt to new runtime information with bridges.
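
A sketch of when a bridge gets compiled (illustrative example): the first trace covers the common path; once the guard for the rare branch fails often enough, a bridge is compiled for that path and attached to the failing guard.

```python
def classify(values):
    positives = negatives = 0
    for v in values:
        if v >= 0:          # the first trace follows this branch and leaves a
            positives += 1  # guard where the "v < 0" case exits the trace
        else:
            negatives += 1  # once this guard fails often, a bridge is compiled
                            # for the negative case and attached to the guard
    return positives, negatives

# mostly positive at first, then many negatives show up at runtime
data = [1] * 100000 + [-1] * 100000
print(classify(data))
```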

They added stackless support to the JIT.

They want the JIT to help with STM.

They have Prolog and Scheme interpreters written on top of the PyPy infrastructure.

They don't do much to take advantage of specific CPU features.
