Plugins, Kernels and Pre-Processor Magic

Mantaflow defines extension keywords that allow new plugins and kernels to be written quickly and efficiently, without tons of boilerplate code. There are only two of them: KERNEL() and PYTHON(), each always followed by parentheses containing zero or more parameters. Their usage is quite simple - the keyword is placed in front of a function or class definition. Here is an example:

KERNEL(bnd=1,idx) void myKernel(FlagGrid& flags) { ... }

During the build process, the pre-processor prep is run. It parses the source files and automagically generates glue code, in the spirit of e.g. Qt. In most cases, you should not notice that it exists at all - for editing and debugging purposes, you will always work with your source files. If you want to get an intuition of what the pre-processor is up to, compile the project with the cmake switch PREPDEBUG=ON and have a look at the generated files in build/pp. This switch will also redirect your editor to the generated files for debugging, so you can debug any issues with the pre-processor.

Plugins and Data Types

The easiest way to extend mantaflow is with custom plugins. Plugins are simply free C++ functions, preceded by the keyword PYTHON().

PYTHON() void myPlugin(Grid<Real>& grid, int number=42) { ... }

This will automatically generate glue code to bind and expose your function to Python scene files. Arguments are translated automatically; most basic data types, as well as any classes exposed to Python, are supported. In Python, arguments can be passed either by position or as named parameters (recommended). Specifying wrong parameters will cause a Python runtime error on the first run of your Python file. If you need more control over the argument parsing, you can access the hidden variable _args of type PbArgs, which is generated for every plugin function. To get a better intuition of how plugins work, have a look at some examples in the plugins directory.
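
For instance, a plugin using several of the commonly supported argument types could look like this (a hypothetical sketch - the function and parameter names are made up for illustration):

PYTHON() void myScalePlugin(Grid<Real>& grid, Real factor=1.0, Vec3 offset=Vec3(0.), Grid<Real>* mask=NULL) { ... }

In a scene file, it could then be called with named parameters, e.g. myScalePlugin(grid=density, factor=0.5).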

The other way of extending mantaflow is by providing new data types. For example, the Grid data types are exposed in this manner. This works by implementing a C++ class and adding the PYTHON() keyword in front of the class keyword, as well as in front of every method you want to expose. Your class has to inherit from PbClass (or from any class inheriting PbClass). Here is an example:

PYTHON() class MyClass : public PbClass {
public:
   PYTHON() MyClass(PbClass* parent) : PbClass(parent) {} // always pass parent to base class

   PYTHON() int getNumber(Grid<Real>& grid) {...} // available as a class method in scene file
   void myMethod() {} // no PYTHON keyword, so not available in scene file
};

Single inheritance, simple templating, etc. will work (see grid.h / grid.cpp for an advanced example), so you can use polymorphism when calling plugins both from C++ and from Python scene files. If you try to do something too fancy, the pre-processor will complain. If you encounter this type of issue with the pre-processor, please file a bug and we will take a look.

Kernels

The second keyword is KERNEL(). It is used to simplify writing operators on grids and particle systems that benefit from multicore parallelism. Let's look at a simple example:

KERNEL(bnd=1) void KnTest(FlagGrid& flags, MACGrid& vel) {
  if (flags.isFluid(i-1, j, k))
     vel(i,j,k) = Vec3(1,2,3);
}

A kernel is a C++ function. It is executed for all cells of the first grid in the argument list - in this case the flag grid. The cell coordinates are available as the integers i, j, k. In the single-core case, this is identical to three nested for-loops around the function body. For multicore operation, the calls are divided between cores using Intel Threading Building Blocks (TBB) or OpenMP, depending on your compilation switches. The KERNEL() keyword takes the following arguments:

ijk        i,j,k addressing mode. This is the default.
idx        Use a single variable idx instead of i,j,k to address the cells linearly (this is slightly more efficient).
bnd=n      Exclude the boundary (the outer n cells) from the looped range. For example, bnd=2 implies a range of (2,2,2) to (61,61,61) for a 64^3 grid.
points     Run the kernel on a particle system instead of a grid (with index idx).
single     Never generate multi-threaded code for this kernel.
reduce=OP  The kernel performs a map/reduce operation. This is necessary if you want to reduce values from the kernel, e.g. to compute the sum of all values in a grid. It requires a slightly different kernel syntax; a simple example can be found in commonkernels.h, and a rough sketch is given after this list.
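
As a rough sketch of such a reduce kernel (hypothetical names, loosely following the pattern used in commonkernels.h - check the existing kernels for the exact syntax of your version), a grid summation could look like this:

KERNEL(idx, reduce=+) returns(double sum=0.)
double KnSumGrid(const Grid<Real>& grid) {
   sum += grid[idx]; // per-thread partial sums are combined with the "+" operator
}

This sketch also illustrates the idx addressing mode: the grid is accessed linearly via grid[idx] instead of grid(i,j,k).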

You may have noticed the lack of overflow guards at the flags.isFluid(i-1,j,k) access in the code box above. The reason is the bnd=1 option in the kernel options, which guarantees that the expression will never go out of bounds. While this also means that the outermost cell layer is never processed, avoiding the boundary-guard if-statements reduces code length (and therefore sources of error) and also improves efficiency. As kernels that access the -1/+1 neighborhood are common, it is good practice to always fill the outermost cell layer with obstacle cells and use bnd kernels instead of boundary guards.
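
For comparison, without the bnd=1 option the same update would need an explicit guard for the i-1 access (an illustrative sketch only):

KERNEL() void KnTestGuarded(FlagGrid& flags, MACGrid& vel) {
  if (i >= 1 && flags.isFluid(i-1,j,k)) // manual guard against reading outside the grid
     vel(i,j,k) = Vec3(1,2,3);
}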

An important thing to keep in mind when working with kernels is that, as a rule of thumb, they should never modify data outside of the cell they are called for. I.e., the example above should only write to vel(i,j,k), but never to vel(i+1,j,k). The reason is that the kernel function can be invoked multiple times in parallel, and out of order, depending on your compiler settings. Thus, similar to a GPU shader, kernels should only perform gather operations, not data scatter. Otherwise, non-deterministic results or crashes will be the consequence. To summarize, all kernels should only write to (i,j,k) and to no other cells.

In addition, we also recommend not reading from and writing to the same grid (when reading from cells other than i,j,k). Again, due to parallelization, the order of the kernel invocations is not guaranteed, so reading from a grid while modifying it will give randomized results (some cells having old, some having new data values). As a rule of thumb: read from grid A, write to grid B.
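
A minimal sketch of this gather pattern (hypothetical kernel; only in-plane neighbors for brevity) - every cell of dst is written exactly once, and all neighbor reads go to a separate source grid:

KERNEL(bnd=1) void KnSmooth(Grid<Real>& dst, const Grid<Real>& src) {
  // gather: read neighbors from src, write only to the current cell of dst
  dst(i,j,k) = 0.25 * (src(i-1,j,k) + src(i+1,j,k) + src(i,j-1,k) + src(i,j+1,k));
}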

Grid & Pdata Operations

Another useful thing to know are the standard operations on grids. You can typically do a good amount of work with grids directly, without having to write C++ code. E.g., there are existing plugins with corresponding kernel functions for adding or multiplying values in grids. Below you can find the standard operations, where T stands for the data type of the grid:

Grid copyFrom(Grid a)
void add(T a)
void sub(T a)
void addScaled(Grid b, T factor)
void mult(T b)

The copyFrom operation is arguably the most important one. Note that assignments in Python with "=" won't copy any content, nor create new grids - they merely create another reference to the same grid object. Thus, copyFrom is crucial for transferring data between existing grids.

In addition, the following helpers can come in handy to set grids to constant values, or gather statistics about their content:

setConst(T s)
addConst(T s)
multConst(T s)
clamp(Real min, Real max)
Real getMaxAbsValue()
Real getMaxValue()
Real getMinValue()

Note that the same functions exist for the Pdata fields of particle systems. Lastly, you can use the functions printGrid and printPdata to print the content of grids and Pdata fields (or parts of them).