Code generation changes to support JIT compilation via LLJIT – JIT Compilation

Now, let’s take a brief look at some of the changes we have made within CodeGen.cpp to support our JIT-based calculator:

  1. As previously mentioned, the code generation class has two important methods: one to compile the user-defined function into LLVM IR and print the IR to the console, and another to prepare the calculation evaluation function, calc_expr_func, which contains a call to the original user-defined function for evaluation. This second function also prints the resulting IR to the user:
    void CodeGen::compileToIR(AST *Tree, Module *M,
                              StringMap<size_t> &JITtedFunctions) {
      ToIRVisitor ToIR(M, JITtedFunctions);
      ToIR.run(Tree);
      M->print(outs(), nullptr);
    }
    void CodeGen::prepareCalculationCallFunc(AST *FuncCall,
                                             Module *M, llvm::StringRef FnName,
                                             StringMap<size_t> &JITtedFunctions) {
      ToIRVisitor ToIR(M, JITtedFunctions);
      ToIR.genFuncEvaluationCall(FuncCall);
      M->print(outs(), nullptr);
    }
  2. As noted in the preceding source, these code generation functions define a ToIRVisitor instance, which takes our module and the JITtedFunctions map as constructor arguments upon initialization:
    class ToIRVisitor : public ASTVisitor {
      Module *M;
      IRBuilder<> Builder;
      StringMap<size_t> &JITtedFunctionsMap;
      // ...
    public:
      ToIRVisitor(Module *M,
                  StringMap<size_t> &JITtedFunctions)
          : M(M), Builder(M->getContext()),
            JITtedFunctionsMap(JITtedFunctions) {
  3. Ultimately, this information is used to either generate IR or evaluate the function for which the IR was previously generated. When generating the IR, the code generator expects to see a DefDecl node, which represents defining a new function. The function name, along with the number of arguments it is defined with, is stored within the function definitions map:
    virtual void visit(DefDecl &Node) override {
      llvm::StringRef FnName = Node.getFnName();
      llvm::SmallVector<llvm::StringRef, 8> FunctionVars = Node.getVars();
      JITtedFunctionsMap[FnName] = FunctionVars.size();
  4. Afterward, the actual function definition is created by the genUserDefinedFunction() call:
    Function *DefFunc = genUserDefinedFunction(FnName);
  5. Within genUserDefinedFunction(), the first step is to check if the function exists within the module. If it does not, we ensure that the function prototype exists within our map data structure. Then, we use the name and the argument count to construct a function that takes the user-defined number of integer arguments and returns a single integer value:
    Function *genUserDefinedFunction(llvm::StringRef Name) {
      if (Function *F = M->getFunction(Name))
        return F;
      Function *UserDefinedFunction = nullptr;
      auto FnNameToArgCount = JITtedFunctionsMap.find(Name);
      if (FnNameToArgCount != JITtedFunctionsMap.end()) {
        std::vector<Type *> IntArgs(FnNameToArgCount->second, Int32Ty);
        FunctionType *FuncType = FunctionType::get(Int32Ty, IntArgs, false);
        UserDefinedFunction =
            Function::Create(FuncType, GlobalValue::ExternalLinkage, Name, M);
      }
      return UserDefinedFunction;
    }
  6. After generating the user-defined function, a new basic block is created and attached to the function, and the builder's insertion point is set to it. Each function argument is also associated with a name that is defined by the user, so we set the names for all function arguments accordingly, as well as generate the mathematical operations that operate on the arguments within the function:
    BasicBlock *BB = BasicBlock::Create(M->getContext(), "entry", DefFunc);
    Builder.SetInsertPoint(BB);
    unsigned FIdx = 0;
    for (auto &FArg : DefFunc->args()) {
      nameMap[FunctionVars[FIdx]] = &FArg;
      FArg.setName(FunctionVars[FIdx++]);
    }
    Node.getExpr()->accept(this);
    };
  7. When evaluating the user-defined function, the AST that is expected in our example is called a FuncCallFromDef node. First, we define the evaluation function and name it calc_expr_func (taking in zero arguments and returning one result):
    virtual void visit(FuncCallFromDef &Node) override {
      llvm::StringRef CalcExprFunName = "calc_expr_func";
      FunctionType *CalcExprFunTy = FunctionType::get(Int32Ty, {}, false);
      Function *CalcExprFun = Function::Create(
          CalcExprFunTy, GlobalValue::ExternalLinkage, CalcExprFunName, M);
  8. Next, we create a new entry basic block within calc_expr_func and set the insertion point to it:
    BasicBlock *BB = BasicBlock::Create(M->getContext(), "entry", CalcExprFun);
    Builder.SetInsertPoint(BB);
  9. Similar to before, the user-defined function is retrieved by genUserDefinedFunction(), and we pass the numerical parameters of the function call into the original function that we have just regenerated:
    llvm::StringRef CalleeFnName = Node.getFnName();
    Function *CalleeFn = genUserDefinedFunction(CalleeFnName);
  10. Once we have the actual llvm::Function instance available, we utilize IRBuilder to create a call to the defined function, and we also return the result so that it is accessible when the result is printed to the user at the end:
    auto CalleeFnVars = Node.getArgs();
    llvm::SmallVector<Value *> IntParams;
    for (unsigned i = 0, end = CalleeFnVars.size(); i != end; ++i) {
      int ArgsToIntType;
      CalleeFnVars[i].getAsInteger(10, ArgsToIntType);
      Value *IntParam = ConstantInt::get(Int32Ty, ArgsToIntType, true);
      IntParams.push_back(IntParam);
    }
    Builder.CreateRet(
        Builder.CreateCall(CalleeFn, IntParams, "calc_expr_res"));
    };
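
To make this concrete, here is roughly what prepareCalculationCallFunc() would print for a hypothetical session in which the user first defined a one-parameter function f and then evaluated f(2). The exact IR depends on the user's expression, so treat this as an illustration only:

declare i32 @f(i32)

define i32 @calc_expr_func() {
entry:
  %calc_expr_res = call i32 @f(i32 2)
  ret i32 %calc_expr_res
}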

Integrating the LLJIT engine into the calculator – JIT Compilation-2

  1. We can then pass our newly constructed AST into our code generator to compile the IR for the function that the user has defined. The specifics of IR generation will be discussed afterward, but the function that compiles to IR needs to be aware of the module and our JITtedFunctions map. After generating the IR, we can add this information to our LLJIT instance by calling addIRModule() and wrapping our module and context in a ThreadSafeModule class, to prevent them from being accessed by other concurrent threads:
    CodeGenerator.compileToIR(Tree, M.get(), JITtedFunctions);
    ExitOnErr(JIT->addIRModule(
        ThreadSafeModule(std::move(M), std::move(Ctx))));
  2. If, instead, the user is calling a function with parameters, which is represented by the Token::ident token, we also need to parse and semantically check that the user input is valid before converting it into a valid AST. The parsing and checking here are slightly different compared to before, as they can include checks such as ensuring that the number of parameters the user has supplied to the function call matches the number of parameters the function was originally defined with:
    } else if (CalcTok == Token::ident) {
      outs() << "Attempting to evaluate expression:\n";
      Parser Parser(Lex);
      AST *Tree = Parser.parse();
      if (!Tree || Parser.hasError()) {
        llvm::errs() << "Syntax errors occurred\n";
        return 1;
      }
      Sema Semantic;
      if (Semantic.semantic(Tree, JITtedFunctions)) {
        llvm::errs() << "Semantic errors occurred\n";
        return 1;
      }
  3. Once a valid AST is constructed for a function call, FuncCallFromDef, we get the name of the function from the AST, and then the code generator prepares to generate the call to the function that was previously added to the LLJIT instance. Under the covers, the user-defined function is regenerated as an LLVM call within a separate function that is created to do the actual evaluation of the original function. This step requires the AST, the module, the function call name, and our map of function definitions:
    llvm::StringRef FuncCallName = Tree->getFnName();
    CodeGenerator.prepareCalculationCallFunc(Tree, M.get(), FuncCallName, JITtedFunctions);
  4. After the code generator has completed its work to regenerate the original function and to create a separate evaluation function, we must add this information to the LLJIT instance. We create a ResourceTracker instance to track the memory that is allocated to the functions that have been added to LLJIT, as well as another ThreadSafeModule instance wrapping the module and context. These two instances are then added to the JIT as an IR module:
    auto RT = JIT->getMainJITDylib().createResourceTracker();
    auto TSM = ThreadSafeModule(std::move(M), std::move(Ctx));
    ExitOnErr(JIT->addIRModule(RT, std::move(TSM)));
  5. The separate evaluation function is then queried for within our LLJIT instance through the lookup() method, by supplying the name of our evaluation function, calc_expr_func. If the query is successful, the address of the calc_expr_func function is cast to the appropriate type: a function that takes no arguments and returns a single integer. Once the function's address is acquired, we call the function to produce the result of the user-defined function with the parameters that were supplied, and then print the result to the console:
    auto CalcExprCall = ExitOnErr(JIT->lookup("calc_expr_func"));
    int (*UserFnCall)() = CalcExprCall.toPtr<int()>();
    outs() << "User defined function evaluated to: " << UserFnCall() << "\n";
  6. After the function call is completed, the memory that was previously associated with our functions is then freed by ResourceTracker:

ExitOnErr(RT->remove());
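
Putting these pieces together, a session with the calculator might look roughly like this. The prompt and the final message come from the code shown in these steps; the binary name, the function, its definition syntax, and the computed value are purely illustrative:

$ ./calc
JIT calc > def f(x) = x + 1
(the IR for f is printed here)
JIT calc > f(2)
Attempting to evaluate expression:
(the IR for calc_expr_func is printed here)
User defined function evaluated to: 3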

Integrating the LLJIT engine into the calculator – JIT Compilation-1

Firstly, let’s discuss how to set up the JIT engine in our interactive calculator. All of the implementation pertaining to the JIT engine exists within Calc.cpp, and this file has one main() loop for the execution of the program:

  1. We must include several header files, aside from the headers for our code generation, semantic analyzer, and parser implementations. The LLJIT.h header defines the LLJIT class and the core classes of the ORC API. Next, the InitLLVM.h header is needed for the basic initialization of the tool, and the TargetSelect.h header is needed for the initialization of the native target. Finally, we also include the C++ <iostream> header to allow for user input into our calculator application:

#include "CodeGen.h"
#include "Parser.h"
#include "Sema.h"
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/Support/InitLLVM.h"
#include "llvm/Support/TargetSelect.h"
#include <iostream>

  2. Next, we add the llvm and llvm::orc namespaces to the current scope:

using namespace llvm;
using namespace llvm::orc;

  3. Many of the calls from our LLJIT instance that we will be creating return an error type, Error. The ExitOnError class allows us to discard Error values that are returned by the calls from the LLJIT instance while logging to stderr and exiting the application. We declare a global ExitOnError variable as follows:

ExitOnError ExitOnErr;

  4. Then, we add the main() function, which initializes the tool and the native target:

int main(int argc, const char **argv) {
  InitLLVM X(argc, argv);
  InitializeNativeTarget();
  InitializeNativeTargetAsmPrinter();
  InitializeNativeTargetAsmParser();

  5. We use the LLJITBuilder class to create an LLJIT instance, wrapped in the previously declared ExitOnErr variable in case an error occurs. A possible source of error would be that the platform does not yet support JIT compilation:

auto JIT = ExitOnErr(LLJITBuilder().create());

  6. Next, we declare our JITtedFunctions map that keeps track of the function definitions, as we have previously described:

StringMap<size_t> JITtedFunctions;

  7. To facilitate an environment that waits for user input, we add a while() loop and allow the user to type in an expression, saving the line that the user typed in a string called calcExp:
    while (true) {
      outs() << "JIT calc > ";
      std::string calcExp;
      std::getline(std::cin, calcExp);
  8. Afterward, the LLVM context class is initialized, along with a new LLVM module. The module's data layout is also set accordingly, and we declare a code generator, which will be used to generate IR for the function that the user has defined on the command line:
    std::unique_ptr<LLVMContext> Ctx = std::make_unique<LLVMContext>();
    std::unique_ptr<Module> M =
        std::make_unique<Module>("JIT calc.expr", *Ctx);
    M->setDataLayout(JIT->getDataLayout());
    CodeGen CodeGenerator;
  9. We must interpret the line entered by the user to determine whether they are defining a new function or calling a previously defined function with arguments. A Lexer class is defined, taking in the line of input that the user has given. There are two main cases that the lexer cares about:
    Lexer Lex(calcExp);
    Token::TokenKind CalcTok = Lex.peek();
  10. The lexer can check the first token of the user input. If the user is defining a new function (represented by the def keyword, or the Token::KW_def token), then we parse it and check its semantics. If the parser or the semantic analyzer detects any issues with the user-defined function, errors are emitted accordingly, and the calculator program halts. If no errors are detected by either the parser or the semantic analyzer, we have a valid AST data structure, DefDecl:
    if (CalcTok == Token::KW_def) {
      Parser Parser(Lex);
      AST *Tree = Parser.parse();
      if (!Tree || Parser.hasError()) {
        llvm::errs() << "Syntax errors occurred\n";
        return 1;
      }
      Sema Semantic;
      if (Semantic.semantic(Tree, JITtedFunctions)) {
        llvm::errs() << "Semantic errors occurred\n";
        return 1;
      }

Creating a new TableGen tool – The TableGen Language-5

  1. The only missing part now is a way to call this implementation, for which you define a global function, EmitTokensAndKeywordFilter(). The emitSourceFileHeader() function declared in the llvm/TableGen/TableGenBackend.h header emits a comment at the top of the generated file:

void EmitTokensAndKeywordFilter(RecordKeeper &RK,
                                raw_ostream &OS) {
  emitSourceFileHeader("Token Kind and Keyword Filter "
                       "Implementation Fragment",
                       OS);
  TokenAndKeywordFilterEmitter(RK).run(OS);
}

With that, you finished the implementation of the source emitter in the TokenEmitter.cpp file. Overall, the coding is not too complicated.
The TableGenBackends.h header file only contains the declaration of the EmitTokensAndKeywordFilter() function. To avoid including other files, you use forward declarations for the raw_ostream and RecordKeeper classes:

#ifndef TABLEGENBACKENDS_H
#define TABLEGENBACKENDS_H

namespace llvm {
class raw_ostream;
class RecordKeeper;
} // namespace llvm

void EmitTokensAndKeywordFilter(llvm::RecordKeeper &RK,
                                llvm::raw_ostream &OS);

#endif

The missing part is the implementation of the driver. Its task is to parse the TableGen file and emit the records according to the command-line options. The implementation is in the TableGen.cpp file:

  1. As usual, the implementation begins with including the required headers. The most important one is llvm/TableGen/Main.h because this header declares the frontend of TableGen:

#include "TableGenBackends.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/PrettyStackTrace.h"
#include "llvm/Support/Signals.h"
#include "llvm/TableGen/Main.h"
#include "llvm/TableGen/Record.h"

  2. To simplify coding, the llvm namespace is imported:

using namespace llvm;

  3. The user can choose one action. The ActionType enumeration contains all possible actions:

enum ActionType {
PrintRecords,
DumpJSON,
GenTokens,
};

  4. A single command-line option object called Action is used. The user needs to specify the --gen-tokens option to emit the token filter you implemented. The other two options, --print-records and --dump-json, are standard options to dump the records that were read. Note that the object is in an anonymous namespace:

namespace {
cl::opt<ActionType> Action(
    cl::desc("Action to perform:"),
    cl::values(
        clEnumValN(PrintRecords, "print-records",
                   "Print all records to stdout (default)"),
        clEnumValN(DumpJSON, "dump-json",
                   "Dump all records as "
                   "machine-readable JSON"),
        clEnumValN(GenTokens, "gen-tokens",
                   "Generate token kinds and keyword "
                   "filter")));

  5. The Main() function performs the requested action based on the value of Action. Most importantly, your EmitTokensAndKeywordFilter() function is called if --gen-tokens was specified on the command line. After the end of the function, the anonymous namespace is closed:

bool Main(raw_ostream &OS, RecordKeeper &Records) {
  switch (Action) {
  case PrintRecords:
    OS << Records; // No argument, dump all contents
    break;
  case DumpJSON:
    EmitJSON(Records, OS);
    break;
  case GenTokens:
    EmitTokensAndKeywordFilter(Records, OS);
    break;
  }
  return false;
}
} // namespace

  6. And lastly, you define a main() function. After setting up the stack trace handler and parsing the command-line options, the TableGenMain() function is called to parse the TableGen file and create records. That function also calls your Main() function if there are no errors:

int main(int argc, char **argv) {
  sys::PrintStackTraceOnErrorSignal(argv[0]);
  PrettyStackTraceProgram X(argc, argv);
  cl::ParseCommandLineOptions(argc, argv);
  llvm_shutdown_obj Y;
  return TableGenMain(argv[0], &Main);
}

Your own TableGen tool is now implemented. After compiling, you can run it with the KeywordC.td sample input file as follows:

$ tinylang-tblgen --gen-tokens -o TokenFilter.inc KeywordC.td

The generated C++ source code is written to the TokenFilter.inc file.
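
As a usage sketch, a consumer of the generated file pulls in one fragment at a time by defining the corresponding guard macro before the include, following the same pattern as the GET_TOKEN_FLAGS example in the Sketching the source code to be generated section. The surrounding file into which this is pasted is hypothetical:

// Hypothetical consumer: GET_TOKEN_KIND_DEFINITION is the guard macro
// emitted by the backend for the definitions fragment.
#define GET_TOKEN_KIND_DEFINITION
#include "TokenFilter.inc"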

Creating a new TableGen tool – The TableGen Language-3

  1. Next, the function declarations are emitted. This is only a constant string, so nothing exciting happens. This finishes emitting the declarations:
    OS << "  const char *getTokenName(TokenKind Kind) "
          "LLVM_READNONE;\n"
       << "  const char *getPunctuatorSpelling(TokenKind "
          "Kind) LLVM_READNONE;\n"
       << "  const char *getKeywordSpelling(TokenKind "
          "Kind) "
          "LLVM_READNONE;\n"
       << "}\n"
       << "#endif\n";
  2. Now, let's turn to emitting the definitions. Again, this generated code is guarded by a macro called GET_TOKEN_KIND_DEFINITION. First, the token names are emitted into a TokNames array, and the getTokenName() function uses that array to retrieve the name (a reconstruction of the generated fragment is shown after this list). Please note that the quote symbol must be escaped as \" when used inside a string:
    OS << "#ifdef GET_TOKEN_KIND_DEFINITION\n";
    OS << "#undef GET_TOKEN_KIND_DEFINITION\n";
    OS << "static const char * const TokNames[] = {\n";
    for (Record *CC : Records.getAllDerivedDefinitions("Token")) {
      OS << "  \"" << CC->getValueAsString("Name")
         << "\",\n";
    }
    OS << "};\n\n";
    OS << "const char *tok::getTokenName(TokenKind Kind) "
          "{\n"
       << "  if (Kind <= tok::NUM_TOKENS)\n"
       << "    return TokNames[Kind];\n"
       << "  llvm_unreachable(\"unknown TokenKind\");\n"
       << "  return nullptr;\n"
       << "};\n\n";
  3. Next, the getPunctuatorSpelling() function is emitted. The only notable difference from the other parts is that the loop goes over all records derived from the Punctuator class. Also, a switch statement is generated instead of an array:
    OS << "const char "
          "*tok::getPunctuatorSpelling(TokenKind "
          "Kind) {\n"
       << "  switch (Kind) {\n";
    for (Record *CC : Records.getAllDerivedDefinitions("Punctuator")) {
      OS << "    case " << CC->getValueAsString("Name")
         << ": return \"" << CC->getValueAsString("Spelling")
         << "\";\n";
    }
    OS << "  default: break;\n"
       << "  }\n"
       << "  return nullptr;\n"
       << "};\n\n";
  4. And finally, the getKeywordSpelling() function is emitted. The coding is similar to emitting getPunctuatorSpelling(). This time, the loop goes over all records of the Keyword class, and the name is again prefixed with kw_:
    OS << "const char *tok::getKeywordSpelling(TokenKind "
          "Kind) {\n"
       << "  switch (Kind) {\n";
    for (Record *CC : Records.getAllDerivedDefinitions("Keyword")) {
      OS << "    case kw_" << CC->getValueAsString("Name")
         << ": return \"" << CC->getValueAsString("Name")
         << "\";\n";
    }
    OS << "  default: break;\n"
       << "  }\n"
       << "  return nullptr;\n"
       << "};\n\n";
    OS << "#endif\n";
    }
  5. The emitKeywordFilter() method is more complex than the previous methods since emitting the filter requires collecting some data from the records. The generated source code uses the std::lower_bound() function, thus implementing a binary search.
    Now, let's take a shortcut. There can be several records of the TokenFilter class defined in the TableGen file. For demonstration purposes, just emit at most one token filter method:
    std::vector<Record *> AllTokenFilter =
        Records.getAllDerivedDefinitionsIfDefined(
            "TokenFilter");
    if (AllTokenFilter.empty())
      return;
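
For reference, here is a reconstruction of the getTokenName() portion of the fragment emitted in step 2, assuming two hypothetical Token records named unknown and eof. The real output lists every record derived from Token and sits inside the GET_TOKEN_KIND_DEFINITION guard together with the two spelling functions:

// Reconstructed generated output (illustrative token names):
static const char * const TokNames[] = {
  "unknown",
  "eof",
};

const char *tok::getTokenName(TokenKind Kind) {
  if (Kind <= tok::NUM_TOKENS)
    return TokNames[Kind];
  llvm_unreachable("unknown TokenKind");
  return nullptr;
};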

Implementing a TableGen backend – The TableGen Language-1

Since parsing and creation of records are done through an LLVM library, you only need to care about the backend implementation, which consists mostly of generating C++ source code fragments based on the information in the records. First, you need to be clear about what source code to generate before you can put it into the backend.

Sketching the source code to be generated

The output of the TableGen tool is a single file containing C++ fragments. The fragments are guarded by macros. The goal is to replace the TokenKinds.def database file. Based on the information in the TableGen file, you can generate the following:

  1. The enumeration members used to define flags. The developer is free to name the type; however, it should be based on the unsigned type. If the generated file is named TokenKinds.inc, then the intended use is this:

enum Flags : unsigned {
#define GET_TOKEN_FLAGS
#include "TokenKinds.inc"
};

  2. The TokenKind enumeration, and the prototypes and definitions of the getTokenName(), getPunctuatorSpelling(), and getKeywordSpelling() functions. This code replaces the TokenKinds.def database file, most of the TokenKinds.h include file, and the TokenKinds.cpp source file.
  3. A new lookupKeyword() function that can be used instead of the current implementation based on the llvm::StringMap type. This is the function you want to optimize.

Knowing what you want to generate, you can now turn to implementing the backend.

Creating a new TableGen tool

A simple structure for your new tool is to have a driver that evaluates the command-line options and calls the generation functions, with the actual generator functions in a different file. Let's call the driver file TableGen.cpp and the file containing the generator TokenEmitter.cpp. You also need a TableGenBackends.h header file. Let's begin the implementation with the generation of the C++ code in the TokenEmitter.cpp file:

  1. As usual, the file begins with including the required headers. The most important one is llvm/TableGen/Record.h, which defines a Record class, used to hold records generated by parsing the .td file:

#include "TableGenBackends.h"
#include "llvm/Support/Format.h"
#include "llvm/TableGen/Record.h"
#include "llvm/TableGen/TableGenBackend.h"
#include <algorithm> // assumed: the original header name was lost in extraction

  2. To simplify coding, the llvm namespace is imported:

using namespace llvm;

  3. The TokenAndKeywordFilterEmitter class is responsible for generating the C++ source code. The emitFlagsFragment(), emitTokenKind(), and emitKeywordFilter() methods emit the source code, as described in the previous section, Sketching the source code to be generated. The only public method, run(), calls all the code-emitting methods. The records are held in an instance of RecordKeeper, which is passed as a parameter to the constructor. The class is inside an anonymous namespace:

namespace {
class TokenAndKeywordFilterEmitter {
  RecordKeeper &Records;

public:
  explicit TokenAndKeywordFilterEmitter(RecordKeeper &R)
      : Records(R) {}

  void run(raw_ostream &OS);

private:
  void emitFlagsFragment(raw_ostream &OS);
  void emitTokenKind(raw_ostream &OS);
  void emitKeywordFilter(raw_ostream &OS);
};
} // End anonymous namespace
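
The body of run() is not shown in the fragment above; a minimal sketch consistent with this description simply chains the three emitters (the actual implementation may add extra bookkeeping, such as timers):

void TokenAndKeywordFilterEmitter::run(raw_ostream &OS) {
  // Emit the three macro-guarded fragments in order.
  emitFlagsFragment(OS);
  emitTokenKind(OS);
  emitKeywordFilter(OS);
}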

Generating C++ code from a TableGen file – The TableGen Language

In the previous section, you defined records in the TableGen language. To make use of those records, you need to write your own TableGen backend that can produce C++ source code or do other things using the records as input.

In Chapter 3, Turning the Source File into an Abstract Syntax Tree, the implementation of the Lexer class uses a database file to define tokens and keywords. Various query functions make use of that database file. Besides that, the database file is used to implement a keyword filter. The keyword filter is a hash map, implemented using the llvm::StringMap class. Whenever an identifier is found, the keyword filter is called to find out if the identifier is actually a keyword. If you take a closer look at the implementation using the ppprofiler pass from Chapter 6, Advanced IR Generation, then you will see that this function is called quite often. Therefore, it may be useful to experiment with different implementations to make that functionality as fast as possible.

However, this is not as easy as it seems. For example, you can try to replace the lookup in the hash map with a binary search. This requires that the keywords in the database file are sorted. Currently, this seems to be the case, but during development, a new keyword might be added in the wrong place undetected. The only way to make sure that the keywords are in the right order is to add some code that checks the order at runtime.
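
For example, such a check could look like the following sketch, in which KeywordEntry and KeywordTable are hypothetical stand-ins for the table that the lexer builds from the database file:

#include <algorithm>
#include <iterator>
#include <string_view>

// Illustrative only: the real lexer derives its keyword table from
// the TokenKinds.def database file rather than hardcoding it.
struct KeywordEntry {
  std::string_view Name;
  unsigned Kind;
};

const KeywordEntry KeywordTable[] = {
    {"and", 1}, {"begin", 2}, {"end", 3}};

// Returns true if a binary search over the table is valid; call once
// at startup, for example, inside an assert().
bool isKeywordTableSorted() {
  return std::is_sorted(std::begin(KeywordTable), std::end(KeywordTable),
                        [](const KeywordEntry &L, const KeywordEntry &R) {
                          return L.Name < R.Name;
                        });
}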

You can speed up the standard binary search by changing the memory layout. For example, instead of sorting the keywords, you can use the Eytzinger layout, which enumerates the search tree in breadth-first order. This layout increases the cache locality of the data and therefore speeds up the search. Realistically, though, maintaining the keywords in breadth-first order by hand in the database file is not feasible.

Another popular approach for searching is the generation of minimal perfect hash functions. If you insert a new key into a dynamic hash table such as llvm::StringMap, then that key might be mapped to an already occupied slot. This is called a key collision. Key collisions are unavoidable, and many strategies have been developed to mitigate that problem. However, if you know all the keys, then you can construct hash functions without key collisions. Such hash functions are called perfect. If they do not require more slots than keys, then they are called minimal. Perfect hash functions can be generated efficiently – for example, with the gperf GNU tool.

In summary, there is some incentive to be able to generate a lookup function from keywords. So, let’s move the database file to TableGen!

TIP FOR DEBUGGING – The TableGen Language

To get a more detailed dump of the records, you can use the --print-detailed-records option. The output includes the line numbers of record and class definitions, and where record fields are initialized. This can be very helpful if you try to track down why a record field was assigned a certain value.
In general, the ADD and SUB instructions have a lot in common, but there is also a difference: addition is a commutative operation but subtraction is not. Let's capture that fact in the record, too. A small challenge is that TableGen only supports a limited set of data types. You already used string and int in the examples. The other available data types are bit, bits<n>, list<type>, and dag. The bit type represents a single bit; that is, 0 or 1. If you need a fixed number of bits, then you use the bits<n> type. For example, bits<5> is an integer type 5 bits wide. To define a list based on another type, you use the list<type> type. For example, list<int> is a list of integers, and list<Inst> is a list of records of the Inst class from the example. The dag type represents directed acyclic graph (DAG) nodes. This type is useful for defining patterns and operations and is used extensively in LLVM backends.
To represent a flag, a single bit is sufficient, so you can use one to mark an instruction as commutable. The majority of instructions are not commutable, so you can take advantage of default values:

class Inst<string mnemonic, int opcode, bit commutable = 0> {
  string Mnemonic = mnemonic;
  int Opcode = opcode;
  bit Commutable = commutable;
}
def ADD : Inst<"add", 0xA0, 1>;
def SUB : Inst<"sub", 0xB0>;

You should run llvm-tblgen to verify that the records are defined as expected.
There is no requirement for a class to have parameters. It is also possible to assign values later. For example, you can define that all instructions are not commutable:

class Inst<string mnemonic, int opcode> {
  string Mnemonic = mnemonic;
  int Opcode = opcode;
  bit Commutable = 0;
}
def SUB : Inst<"sub", 0xB0>;

Using a let statement, you can overwrite that value:

let Commutable = 1 in
def ADD : Inst<"add", 0xA0>;

Alternatively, you can open a record body to overwrite the value:

def ADD : Inst<"add", 0xA0> {
  let Commutable = 1;
}

Again, please use llvm-tblgen to verify that the Commutable flag is set to 1 in both cases.
Classes and records can be inherited from multiple classes, and it is always possible to add new fields or overwrite the value of existing fields. You can use inheritance to introduce a new CommutableInst class:

class Inst<string mnemonic, int opcode> {
  string Mnemonic = mnemonic;
  int Opcode = opcode;
  bit Commutable = 0;
}
class CommutableInst<string mnemonic, int opcode>
    : Inst<mnemonic, opcode> {
  let Commutable = 1;
}
def SUB : Inst<"sub", 0xB0>;
def ADD : CommutableInst<"add", 0xA0>;

The resulting records are always the same, but the language allows you to define records in different ways. Please note that, in the latter example, the Commutable flag may be superfluous: the code generator can query a record for the classes it is based on, and if that list contains the CommutableInst class, then it can set the flag internally.
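
A sketch of that alternative follows; isCommutable() is a made-up helper name, but Record::isSubClassOf() is a real query from llvm/TableGen/Record.h:

#include "llvm/TableGen/Record.h"

// Hypothetical backend helper: derive commutability from the class
// hierarchy instead of reading the Commutable field.
bool isCommutable(const llvm::Record *R) {
  return R->isSubClassOf("CommutableInst");
}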

Defining records and classes – The TableGen Language

Let’s define a simple record for an instruction:

def ADD {
  string Mnemonic = "add";
  int Opcode = 0xA0;
}

The def keyword signals that you define a record. It is followed by the name of the record. The record body is surrounded by curly braces, and the body consists of field definitions, similar to a structure in C++.
You can use the llvm-tblgen tool to see the generated records. Save the preceding source code in an inst.td file and run the following:

$ llvm-tblgen --print-records inst.td
------------- Classes -----------------
------------- Defs -----------------
def ADD {
  string Mnemonic = "add";
  int Opcode = 160;
}

This is not yet exciting; it only shows the defined record was parsed correctly.
Defining instructions using single records is not very comfortable. A modern CPU has hundreds of instructions, and with this amount of records, it is very easy to introduce typing errors in the field names. And if you decide to rename a field or add a new field, then the number of records to change becomes a challenge. Therefore, a blueprint is needed. In C++, classes have a similar purpose, and in TableGen, it is also called a class. Here is the definition of an Inst class and two records based on that class:

class Inst<string mnemonic, int opcode> {
  string Mnemonic = mnemonic;
  int Opcode = opcode;
}
def ADD : Inst<"add", 0xA0>;
def SUB : Inst<"sub", 0xB0>;

The syntax for classes is similar to that of records. The class keyword signals that a class is defined, followed by the name of the class. A class can have a parameter list. Here, the Inst class has two parameters, mnemonic and opcode, which are used to initialize the records’ fields. The values for those fields are given when the class is instantiated. The ADD and SUB records show two instantiations of the class. Again, let’s use llvm-tblgen to look at the records:

$ llvm-tblgen --print-records inst.td
------------- Classes -----------------
class Inst<string Inst:mnemonic = ?, int Inst:opcode = ?> {
  string Mnemonic = Inst:mnemonic;
  int Opcode = Inst:opcode;
}
------------- Defs -----------------
def ADD {  // Inst
  string Mnemonic = "add";
  int Opcode = 160;
}
def SUB {  // Inst
  string Mnemonic = "sub";
  int Opcode = 176;
}

Now, you have one class definition and two records. The name of the class used to define the records is shown as a comment. Please note that the arguments of the class have the default value ?, which indicates an uninitialized value.

Understanding the TableGen language – The TableGen Language

LLVM comes with its own domain-specific language (DSL) called TableGen. It is used to generate C++ code for a wide range of use cases, thus reducing the amount of code a developer has to produce. The TableGen language is not a full-fledged programming language. It is only used to define records, which is a fancy word for a collection of names and values. To understand why such a restricted language is useful, let’s examine two examples.

Typical data you need to define one machine instruction of a CPU is:

  • The mnemonic of the instruction
  • The bit pattern
  • The number and types of operands
  • Possible restrictions or side effects

It is easy to see that this data can be represented as a record. For example, a field named asmstring could hold the value of the mnemonic; say, "add". Also, a field named opcode could hold the binary representation of the instruction. Together, the record would describe an add instruction. Each LLVM backend describes the instruction set in this way.

Records are such a general concept that you can describe a wide variety of data with them. Another example is the definition of command-line options. A command-line option:

  • Has a name
  • May have an optional argument
  • Has a help text
  • May belong to a group of options

Again, this data can be easily seen as a record. Clang uses this approach for the command-line options of the Clang driver.

The TableGen language

In LLVM, the TableGen language is used for a variety of tasks. Large parts of a backend are written in the TableGen language; for example, the definition of a register file, all instructions with mnemonic and binary encoding, calling conventions, patterns for instruction selection, and scheduling models for instruction scheduling. Other uses in LLVM include the definition of intrinsic functions, the definition of attributes, and the definition of command-line options.

You’ll find the Programmer’s Reference at https://llvm.org/docs/TableGen/ProgRef.html and the Backend Developer’s Guide at https://llvm.org/docs/TableGen/BackGuide.html.

To achieve this flexibility, the parsing and the semantics of the TableGen language are implemented in a library. To generate C++ code from the records, you need to create a tool that takes the parsed records and generates C++ code from them. In LLVM, that tool is called llvm-tblgen, and in Clang, it is called clang-tblgen. Those tools contain the code generators required by the project. But they can also be used to learn more about the TableGen language, which is what we'll do in the next section.

Experimenting with the TableGen language

Very often, beginners feel overwhelmed by the TableGen language. But as soon as you start experimenting with the language, it becomes much easier.