Writing a Compiler in C#: Lexical Analysis

October 6, 2010


I’m going to write a compiler for a simple language. The compiler will be written in C#, and will have multiple back ends. The first back end will compile the source code to C, and use cl.exe (the Visual C++ compiler) to produce an executable binary.

But first, a minor digression.

Over my blogging years, I’ve developed a tendency to abandon blog post series just before their final installment. I abandoned the unit testing series, the primality testing series, and many other “series”.

Therefore, I’m not going to call this thing a “series”. I might be able to write another post on the same subject, or I might not; if I don’t, I’ll post the whole lump of source code here and let you decide whether it’s worth continuing on your own.

With that said, let’s start by introducing the language for which we’ll write a compiler. It’s called Jack, and I haven’t made it up—it’s a teaching language used in the book The Elements of Computing Systems by Noam Nisan and Shimon Schocken, with some minor modifications I introduced. The language is designed to make lexical analysis, parsing, and code generation as easy as possible. (Indeed, the HUJI course From NAND to Tetris covers compiler construction in two lessons, and students complete a working Jack compiler—to an intermediate VM representation—in slightly less than three weeks.)

Next, the obligatory “Hello World” program in Jack:

class Main {
    function void main() {
        do System.print("Hello World!");
        do System.println();
    }
}

And now a more realistic example that demonstrates some of Jack’s coding constructs:

class Main {
    function Array initNumbers() {
        var Array numbers;
        let numbers = Array.new(5);
        let numbers[0] = 13;
        let numbers[1] = 14;
        let numbers[2] = 41;
        let numbers[3] = 97;
        let numbers[4] = 101;
        return numbers;
    }
    function void main() {
        var Array numbers;
        var int i, j;
        var boolean prime;
        let numbers = Main.initNumbers();
        let j = 0;
        while (j < 5) {
            let i = 2;
            let prime = true;
            while (i < numbers[j] & prime) {
                if (numbers[j] % i = 0) {
                    do System.printInt(numbers[j]);
                    do System.print(" is composite.");
                    do System.println();
                    let prime = false;
                }
                let i = i + 1;
            }
            if (prime) {
                    do System.printInt(numbers[j]);
                    do System.print(" is prime.");
                    do System.println();
            }
            let j = j + 1;
        }
    }
}

This program’s output is:

C:\JackCompiler>JackCompiler.exe HelloWorld.jack

C:\JackCompiler>out
13 is prime.
14 is composite.
41 is prime.
97 is prime.
101 is prime.

Assuming that we’re not interested in extraneous formalism, we can go ahead and think about the first part of the compiler—the lexical analyzer, or the tokenizer. The structure of a compiler is well-illustrated by the following diagram:

[Diagram: the phases of a compiler—source characters flow through the lexical analyzer into tokens, through the parser into an intermediate representation, and through the code generator into an executable.]

Before we attach semantic meaning to the language constructs, we have to get such details out of the way as skipping unnecessary whitespace, recognizing legal identifiers, separating symbols from keywords, and so on. This is the purpose of the lexical analyzer, which takes an input stream of characters and generates from it a stream of tokens, elements that can be processed by the parser. Sometimes the parser constructs a parse tree, an abstract syntax tree, or some other intermediate representation of the source code; at other times, the parser directly instructs the compiler back-end (or code generator) to synthesize the executable program.

Normally, you wouldn’t write the lexical analyzer by hand. Instead, you provide a tool such as flex with a list of regular expressions and rules, and obtain from it a working program capable of generating tokens. For example, the following regular expression recognizes all legal Jack identifiers:

[_A-Za-z][_A-Za-z0-9]*
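
As a quick sanity check, the same pattern can be exercised with .NET’s Regex class. This is a throwaway sketch (the class and method names here are mine, not part of the compiler); the pattern is anchored so the whole string must match:

```csharp
using System.Text.RegularExpressions;

static class IdentCheck
{
    // The identifier pattern from the text, anchored at both ends
    // so the entire input must be a legal identifier.
    static readonly Regex Ident = new Regex(@"^[_A-Za-z][_A-Za-z0-9]*$");

    public static bool IsLegalIdentifier(string s) => Ident.IsMatch(s);
}
```

For example, IsLegalIdentifier("_tmp1") is true, while IsLegalIdentifier("1tmp") and IsLegalIdentifier("") are false.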

However, for didactic reasons, we will roll our own lexical analyzer by hand. It’s not a very challenging task, either; dealing with comments and extraneous whitespace is probably the hardest part.
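
To see why that is the hardest part, here is roughly what skipping whitespace together with // and /* */ comments takes. This is a standalone sketch of my own (the post’s EatWhitespace helper presumably does something similar internally):

```csharp
using System;

static class Skipper
{
    // Returns the index of the first character that is neither whitespace
    // nor part of a // or /* */ comment, starting the scan at 'pos'.
    public static int SkipWhitespaceAndComments(string input, int pos)
    {
        while (pos < input.Length)
        {
            if (char.IsWhiteSpace(input[pos]))
            {
                pos++;
            }
            else if (pos + 1 < input.Length && input[pos] == '/' && input[pos + 1] == '/')
            {
                // Line comment: skip to the end of the line.
                while (pos < input.Length && input[pos] != '\n') pos++;
            }
            else if (pos + 1 < input.Length && input[pos] == '/' && input[pos + 1] == '*')
            {
                // Block comment: skip to the closing */.
                pos += 2;
                while (pos + 1 < input.Length &&
                       !(input[pos] == '*' && input[pos + 1] == '/'))
                    pos++;
                pos = Math.Min(pos + 2, input.Length);
            }
            else
            {
                break; // A real character; let the tokenizer see it.
            }
        }
        return pos;
    }
}
```

The subtlety is the interleaving: after a comment ends, you may land on more whitespace or another comment, which is why everything lives in one loop.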

The following is the primary method of our lexical analyzer. (The rest of its implementation was omitted for brevity.)

public void Advance()
{
    EatWhitespace();

    if (IsAtEnd)
    {
        _done = true;
        return;
    }
    char nextChar = NextChar();
    if (Syntax.IsSymbol(nextChar))
    {
        //This token is going to be a symbol. There are
        //three special look-ahead cases for '<=', '>=',
        //and '!='.
        if ((new[] { '<', '>', '!' }.Contains(nextChar))
            && LookAhead() == '=')
        {
            NextChar(); //Eat the '='
            _currentToken = new Token(
                TokenType.Symbol, nextChar + "=");
        }
        else
        {
            _currentToken = new Token(
                TokenType.Symbol, nextChar.ToString());
        }
    }
    else if (Syntax.IsNumber(nextChar))
    {
        //This token is going to be an integer constant.
        string intConst = nextChar.ToString();
        intConst += EatWhile(Syntax.IsNumber);

        int result;
        if (!int.TryParse(intConst, out result))
        {
            throw new CompilationException(
                "Int const must be in range [0,2147483648), " +
                "but got: " + intConst, _currentLine);
        }

        _currentToken = new Token(
            TokenType.IntConst, intConst);
    }
    else if (Syntax.IsCharOrdinalStart(nextChar))
    {
        char marker = NextChar();
        if (marker == '\\')
        {
            string code = EatWhile(Syntax.IsNumber);
            if (code.Length != 3)
            {
                throw new CompilationException(
                    "Expected: \\nnn where n are decimal digits",
                    _currentLine);
            }
            int value = int.Parse(code);
            if (value >= 256)
            {
                throw new CompilationException(
                    "Character ordinal is out of range [0,255]",
                    _currentLine);
            }
            _currentToken = new Token(
                TokenType.IntConst, value.ToString());
        }
        else
        {
            _currentToken = new Token(
                TokenType.IntConst, ((int)marker).ToString());
        }
        NextChar(); //Swallow the end of the character ordinal
    }
    else if (Syntax.IsStringConstantStart(nextChar))
    {
        //This token is going to be a string constant.
        string strConst = EatWhile(
            c => !Syntax.IsStringConstantStart(c));
        NextChar(); //Swallow the end of the string constant
        _currentToken = new Token(
            TokenType.StrConst, strConst);
    }
    else if (Syntax.IsStartOfKeywordOrIdent(nextChar))
    {
        string keywordOrIdent = nextChar.ToString();
        keywordOrIdent += EatWhile(
            Syntax.IsPartOfKeywordOrIdent);
        if (Syntax.IsKeyword(keywordOrIdent))
        {
            _currentToken = new Token(
                TokenType.Keyword, keywordOrIdent);
        }
        else
        {
            _currentToken = new Token(
                TokenType.Ident, keywordOrIdent);
        }
    }
    else
    {
        throw new CompilationException(
            "Unexpected character: " + nextChar, _currentLine);
    }
}
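
The omitted parts of the analyzer are mostly bookkeeping over the input stream. Here is one plausible shape for them, reconstructed as a sketch from the calls Advance() makes; the field names match the listing, but the bodies are my own guesses rather than the post’s actual code:

```csharp
using System;
using System.Text;

enum TokenType { Symbol, IntConst, StrConst, Keyword, Ident }

sealed class Token
{
    public TokenType Type { get; }
    public string Value { get; }

    public Token(TokenType type, string value)
    {
        Type = type;
        Value = value;
    }

    public override string ToString() => $"<{Type}, {Value}>";
}

// The stream-management half of the tokenizer; Advance() lives
// in the other half of this partial class.
partial class Tokenizer
{
    private readonly string _input;
    private int _position;
    private int _currentLine = 1;
    private bool _done;
    private Token _currentToken;

    public Tokenizer(string input) { _input = input; }

    public bool IsAtEnd => _position >= _input.Length;
    public bool Done => _done;
    public Token Current => _currentToken;

    // Consume and return the next character, tracking the line number
    // for error reporting.
    private char NextChar()
    {
        char c = _input[_position++];
        if (c == '\n') _currentLine++;
        return c;
    }

    // Peek at the next character without consuming it.
    private char LookAhead() => IsAtEnd ? '\0' : _input[_position];

    // Consume characters while the predicate holds and return them.
    private string EatWhile(Func<char, bool> predicate)
    {
        var sb = new StringBuilder();
        while (!IsAtEnd && predicate(LookAhead()))
            sb.Append(NextChar());
        return sb.ToString();
    }

    // Skip whitespace; comment skipping would also live here.
    private void EatWhitespace()
    {
        while (!IsAtEnd && char.IsWhiteSpace(LookAhead()))
            NextChar();
    }
}
```

Keeping these helpers separate from Advance() means the token-recognition logic never touches _position directly, which makes the big if/else chain above much easier to follow.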

There are five interesting cases here from which five different token types can be generated:

  1. A symbol [TokenType.Symbol], which may contain two characters—explaining the need for additional look-ahead with ‘<’, ‘>’, and ‘!’. Note that the additional look-ahead may fail if the symbol is placed at the end of the file, but this is not a legal language construct, anyway.
  2. A numeric constant [TokenType.IntConst]—we currently allow only integer constants, as the language doesn’t have floating-point support.
  3. A character ordinal constant such as ‘H’ or ‘\032’—these are translated to numeric constants as in #2.
  4. A literal string constant [TokenType.StrConst] such as “Hello World”—note that the double-quote character itself is not legal within a literal string constant. We leave it for now as a language limitation.
  5. A keyword or an identifier [TokenType.Keyword or TokenType.Ident], matching the previously shown regular expression.
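
The Syntax class that drives this classification isn’t shown in the post. A minimal sketch consistent with how Advance() uses it might look as follows; the symbol and keyword lists are my assumptions, taken from the standard Jack grammar plus the extensions mentioned above ('%', '!=', and character ordinals delimited by single quotes):

```csharp
using System.Linq;

static class Syntax
{
    // Standard Jack symbols plus the post's additions ('%', '!').
    static readonly char[] Symbols =
    {
        '{', '}', '(', ')', '[', ']', '.', ',', ';',
        '+', '-', '*', '/', '%', '&', '|', '<', '>', '=', '~', '!'
    };

    // The standard Jack keyword set.
    static readonly string[] Keywords =
    {
        "class", "constructor", "function", "method", "field", "static",
        "var", "int", "char", "boolean", "void", "true", "false", "null",
        "this", "let", "do", "if", "else", "while", "return"
    };

    public static bool IsSymbol(char c) => Symbols.Contains(c);
    public static bool IsNumber(char c) => c >= '0' && c <= '9';
    public static bool IsCharOrdinalStart(char c) => c == '\'';
    public static bool IsStringConstantStart(char c) => c == '"';
    public static bool IsStartOfKeywordOrIdent(char c) => c == '_' || char.IsLetter(c);
    public static bool IsPartOfKeywordOrIdent(char c) => c == '_' || char.IsLetterOrDigit(c);
    public static bool IsKeyword(string s) => Keywords.Contains(s);
}
```

Note how the start/part split mirrors the identifier regular expression shown earlier: the first character may not be a digit, but subsequent characters may.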

This lexical analyzer is rather “dumb”—it does not record identifier information anywhere, and it doesn’t provide access to anything but the current token. It turns out that we don’t need anything else for the current Jack syntax—formally speaking, it is almost an LL(1) language, i.e. most of its language constructs can be parsed with only one look-ahead token. The single LL(2) exception is subroutine calls within expressions, and we’ll craft a special case in the parser to work around this limitation.

For the “Hello World” program above, this lexical analyzer will produce the following sequence of tokens:

<keyword, class>  <ident, Main>  <symbol, {>
<keyword, function>  <keyword, void>  <ident, main>
<symbol, (>  <symbol, )>  <symbol, {>  <keyword, do>
<ident, System>  <symbol, .>  <ident, print>
<symbol, (>  <strconst, Hello World!>
<symbol, )>  <symbol, ;> …

Next time, some parsing.

