RusTiny: I got Stuck! May 11, 2015

Recently I wrote about RusTiny, the compiler I’m writing. It’s been quite smooth sailing so far, but now I’m stuck. Before getting into that, let’s look at what has been done so far.

The past

In the aforementioned article on RusTiny I’ve described an outline for the RusTiny compiler:

  1. Parse the source file into an Abstract Syntax Tree
  2. Validate the AST (e.g. type checking)
  3. Transform the AST into an Intermediate Representation
  4. Optimize the IR
  5. Transform the IR into machine code

Step 1 was the easiest so far. I’ve already written two or three recursive descent parsers, so I was able to recycle most of the code. Parsing expressions (as opposed to statements) looked very complicated at first, but with the help of the Pratt parsing technique I was able to handle them without much hassle.
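The core idea of Pratt parsing is to give each operator a binding power and let a single recursive function handle all precedence levels. Here is a minimal sketch of that technique, with hypothetical token and expression types (not RusTiny’s actual ones):

```rust
// Minimal Pratt-parsing sketch: hypothetical types, not RusTiny's real AST.

#[derive(Debug, Clone, Copy, PartialEq)]
enum Token {
    Num(i64),
    Op(char),
}

#[derive(Debug, PartialEq)]
enum Expr {
    Num(i64),
    Binary(char, Box<Expr>, Box<Expr>),
}

/// Binding power of an infix operator: higher binds tighter.
fn binding_power(op: char) -> u8 {
    match op {
        '+' | '-' => 10,
        '*' | '/' => 20,
        _ => 0,
    }
}

fn parse_expr(tokens: &[Token], pos: &mut usize, min_bp: u8) -> Expr {
    // A plain number is the only prefix form in this sketch.
    let mut lhs = match tokens[*pos] {
        Token::Num(n) => {
            *pos += 1;
            Expr::Num(n)
        }
        _ => panic!("expected a number"),
    };

    // Keep consuming infix operators as long as they bind tighter
    // than the caller's minimum binding power.
    while let Some(&Token::Op(op)) = tokens.get(*pos) {
        let bp = binding_power(op);
        if bp <= min_bp {
            break;
        }
        *pos += 1;
        // Passing `bp` (not `bp - 1`) makes same-precedence operators
        // left-associative.
        let rhs = parse_expr(tokens, pos, bp);
        lhs = Expr::Binary(op, Box::new(lhs), Box::new(rhs));
    }
    lhs
}

fn main() {
    // 1 + 2 * 3 parses as 1 + (2 * 3) because '*' binds tighter than '+'.
    let tokens = [
        Token::Num(1),
        Token::Op('+'),
        Token::Num(2),
        Token::Op('*'),
        Token::Num(3),
    ];
    let mut pos = 0;
    println!("{:?}", parse_expr(&tokens, &mut pos, 0));
}
```

The nice property is that adding a new operator only means adding one arm to `binding_power`, instead of adding a new level of mutually recursive grammar functions.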

Step 2 was new territory for me, but turned out to be relatively straightforward. My AST validation includes semantic checks (the program obeys the scope rules, break statements aren’t used outside of loops, etc.) and type checking. It was of great help that I implemented a general AST walker that allowed me to express these checks without much overhead.
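To illustrate the kind of check such a walker hosts, here is a sketch of the break-outside-of-loop check over a deliberately tiny, hypothetical statement type (RusTiny’s real walker and AST are richer than this):

```rust
// Sketch of a semantic check expressed as an AST walk.
// The Stmt type here is hypothetical, not RusTiny's actual AST.

#[derive(Debug)]
enum Stmt {
    Break,
    Loop(Vec<Stmt>),
    Other, // placeholder for all other statement kinds
}

/// Walks every statement, tracking whether we are inside a loop,
/// and records an error for each `break` found outside of one.
fn check_breaks(stmts: &[Stmt], in_loop: bool, errors: &mut Vec<String>) {
    for stmt in stmts {
        match stmt {
            Stmt::Break if !in_loop => {
                errors.push("`break` outside of a loop".to_string());
            }
            Stmt::Break => {}
            // Entering a loop body flips the flag for everything inside.
            Stmt::Loop(body) => check_breaks(body, true, errors),
            Stmt::Other => {}
        }
    }
}

fn main() {
    let program = vec![
        Stmt::Loop(vec![Stmt::Break]), // legal
        Stmt::Break,                   // error: not inside a loop
    ];
    let mut errors = Vec::new();
    check_breaks(&program, false, &mut errors);
    println!("{:?}", errors);
}
```

A general walker factors the traversal out of this function, so each check only has to state its rule, not how to reach every node.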

Step 3 was the first one that was tricky. Compilers are all about translating between languages and the compiler I have in mind has two compilation steps: From the source file to an intermediate representation (IR), and from the IR to machine code. And as in real life, translation can be quite difficult to get right.

My intermediate representation is inspired by the LLVM IR. That allowed me to look at the Rust compiler (which translates to LLVM IR) when I got stuck somewhere. For the translation from AST to IR I followed Torben Ægidius Mogensen’s Basics of Compiler Design for the most part, deviating in some places so the IR is in SSA form.
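To give a feel for this translation step, here is a sketch of flattening an expression tree into LLVM-style three-address instructions, where each temporary is assigned exactly once (the defining property of SSA form). The types and instruction syntax are illustrative, not RusTiny’s actual IR:

```rust
// Sketch: lowering an expression tree into SSA-style three-address code.
// Hypothetical types and textual IR, not RusTiny's real representation.

#[derive(Debug)]
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

/// Emits instructions like `%1 = add 1, %0` into `out` and returns the
/// operand (a constant or temporary) holding the expression's value.
/// Every `%n` temporary is defined exactly once: that's SSA.
fn lower(expr: &Expr, next_tmp: &mut u32, out: &mut Vec<String>) -> String {
    match expr {
        Expr::Num(n) => n.to_string(),
        Expr::Add(a, b) | Expr::Mul(a, b) => {
            let op = if matches!(expr, Expr::Add(..)) { "add" } else { "mul" };
            let lhs = lower(a, next_tmp, out);
            let rhs = lower(b, next_tmp, out);
            let tmp = format!("%{}", *next_tmp);
            *next_tmp += 1;
            out.push(format!("{} = {} {}, {}", tmp, op, lhs, rhs));
            tmp
        }
    }
}

fn main() {
    // 1 + 2 * 3
    let expr = Expr::Add(
        Box::new(Expr::Num(1)),
        Box::new(Expr::Mul(Box::new(Expr::Num(2)), Box::new(Expr::Num(3)))),
    );
    let mut next_tmp = 0;
    let mut out = Vec::new();
    lower(&expr, &mut next_tmp, &mut out);
    for line in &out {
        println!("{}", line);
    }
    // Prints:
    // %0 = mul 2, 3
    // %1 = add 1, %0
}
```

Straight-line expressions like this stay in SSA form for free; the tricky part (and where Mogensen’s scheme needs adapting) is control flow, where a variable assigned on two branches needs a phi node at the join point.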

I decided to skip step 4 for now because optimization can be added later without altering the language semantics. It’s certainly nice to have but not essential for an educational compiler.

The present

While the translation from AST to IR is very simple algorithmically, machine code generation (step 5) is a totally different beast. Usually it’s implemented as three separate steps:

  1. Instruction selection. You have to choose which machine instructions to use for the IR you have. That’s complicated because there are a lot of ways to implement an IR instruction and you want to select the most efficient combination of machine instructions.
  2. Instruction scheduling. Not only do you need to know which machine instructions to use, but also in which order to execute them. Compilers try to minimize pipeline stalls and other sorts of timing issues.
  3. Register allocation. Usually the intermediate representation assumes that the computer has an unlimited number of registers. As that’s not the case on real hardware, you have to decide which values to keep in a register and which ones to temporarily spill to memory.
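To make step 3 concrete, here is a stripped-down sketch of one of the classic approaches, linear scan register allocation: walk the values’ live intervals in order of start point, hand out free physical registers, and spill when none is left. Real linear scan picks the interval with the furthest end point to spill; this sketch simply spills the newcomer, and all names are hypothetical:

```rust
// Simplified linear-scan register allocation sketch.
// Real allocators spill more cleverly; this just illustrates the shape.

/// A virtual register's live interval: (first use, last use).
type Interval = (usize, usize);

/// Assigns each interval one of `k` physical registers (0..k),
/// or `None` if the value must be spilled to memory.
/// Expects `intervals` sorted by start point.
fn linear_scan(intervals: &[Interval], k: usize) -> Vec<Option<usize>> {
    let mut assignment = vec![None; intervals.len()];
    // Currently live assignments: (end of interval, physical register).
    let mut active: Vec<(usize, usize)> = Vec::new();
    let mut free: Vec<usize> = (0..k).rev().collect();

    for (i, &(start, end)) in intervals.iter().enumerate() {
        // Expire intervals that ended before this one starts,
        // returning their registers to the free pool.
        active.retain(|&(active_end, reg)| {
            if active_end < start {
                free.push(reg);
                false
            } else {
                true
            }
        });
        if let Some(reg) = free.pop() {
            assignment[i] = Some(reg);
            active.push((end, reg));
        }
        // else: no register free -> spill (assignment stays None).
    }
    assignment
}

fn main() {
    // Three values alive at the same time, two physical registers:
    // one of them has to be spilled.
    let intervals = [(0, 5), (1, 6), (2, 7)];
    println!("{:?}", linear_scan(&intervals, 2));
}
```

Even this toy version shows why the steps interact: a different instruction selection or schedule changes the live intervals, which changes how much spilling the allocator has to do.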

And this is where I got stuck. Each of these steps is an advanced problem from an algorithmic point of view, and there are several approaches to solving each of them – and I don’t know which one to pick. Moreover, I’m unsure how to represent machine code in the compiler and whether I’d need another, more low-level IR.

The future

I don’t have a plan on how to proceed from here. Initially I planned to write a retargetable compiler, but considering my current struggles I’ll probably at least postpone this goal. I’ll also need some time to think through the question of how to represent machine code.

If you’re experienced in these areas, I’d love to hear your input on my thoughts! I also appreciate feedback on my implementation. Feel free to open issues if you have questions about my code. You’ll find the project hosted on GitHub.