Thursday, April 14, 2022

Using Lisp libraries from other programming languages - now with sbcl

Have you ever wanted to call into your Lisp library from C? Have you ever written your nice scientific application in Lisp, only to be requested by people to rewrite it in Python, so they can use its functionality? Or, maybe you've written an RPC or pipes library to coordinate different programming languages, running things in different processes and passing messages around to simulate foreign function calls.

Calling into C from other languages is much easier in comparison. A dlopen() and a foreign function interface suffice to use library functionality exported with the C ABI, without spawning separate processes. Bindings written over such a C-compatible library for language X make it seem like the underlying functions really are written in X. For cross-language inter-operation, this is the most lightweight option available. Languages like Rust and C++ can use extern "C" { } blocks to wrap their own functions in the standard C ABI, exposing them in exactly this way to the foreign function interfaces of other programming languages.

For languages that include a sizable runtime (with, for example, automatic memory management, signal handling, etc.), such as Python or even JavaScript, having a way to expose functions callable through the C ABI is much less common. You quickly need to deal with issues of how objects are represented internally, how to do type translations, and what happens when objects in memory move for GC. Some Lisp implementations, like ECL (the E stands for Embeddable Common Lisp) and some commercial Lisps, do support this use case however: allowing Lisp libraries to essentially be packaged up as a shared library, linkable with ordinary object files, where Lisp code is callable with the normal C calling convention.

Some of you may already know you can use ECL for this purpose; ECL is a great implementation, but it does lack some functionality and features, which makes it unattractive for some.

If you prefer using SBCL, you can now join in on the cross-language programming frenzy too.

1.1. wait, really?

Those of you who have used foreign callbacks in libraries such as CFFI will already be somewhat familiar with how the Lisp-side interface works – you write your Lisp code normally and then define C convention callable function pointers in Lisp through a fairly comprehensive translation interface. With callbacks, you just pass these function pointers as parameters to your foreign functions. What's needed to make Lisp callable as a shared library on top of this interface is to iron out runtime management and linkage issues, e.g. how do we initialize the Lisp runtime and GC, and then how does the system linker actually find the C callable function pointers defined in Lisp so that you can call those functions normally as if they were written in C?

I'll explain how to use low-level machinery now exposed in SBCL and how it works, and wrap up by talking about a convenience library that creates a high level interface for creating bindings and manages some of the details.

1.2. example scenario: a calculator

Let's take a classic example to illustrate this new functionality: Suppose you want to write C code and call out to a library called calc. calc is written entirely in Common Lisp, full of classes and objects and methods defining ASTs for a simple symbolic calculator. It has functions like parse-expr, simplify, and pretty-print-expr, where everything works on Lisp objects.

(defclass expression ()
  ()
  (:documentation "An abstract expression."))
(defclass int-literal (expression)
  ((value :reader int-literal-value))
  (:documentation "An integer literal."))
(defclass sum-expression (expression)
  ((left-arg :reader sum-expression-left-arg)
   (right-arg :reader sum-expression-right-arg)))
(defun parse (string) ...)
(defun simplify (expr) ...)
(defun expression-to-string (expr) ...)

We'd like to expose these functions so that we can call these functions from other programming languages using the C ABI. So how does it work in the simplest case of using calc from a simple C program?

First, we need to define these function pointers in Lisp. The primitive SBCL now exposes for this is the macro sb-alien:define-alien-callable. You can use this macro in the following way:

(define-alien-callable calc-parse int ((source c-string) (result (* (* t))))
  (handler-case
      (progn
         ;; The following needs to use SBCL internal functions to
         ;; coerce a Lisp object into a raw pointer value. This is
         ;; unsafe and will be fixed in the next section.
         (setf (deref result) (sap-alien (int-sap (get-lisp-obj-address (parse source))) (* t)))
         0)
    (t (condition) (declare (ignore condition)) 1)))

(define-alien-callable calc-simplify int ((expr (* t)) (result (* (* t))))
  (handler-case
      (progn
         ;; The following needs to use SBCL internal functions to
         ;; coerce a raw pointer value into a Lisp object and
         ;; back. This is unsafe and will be fixed in the next
         ;; section.
         (setf (deref result)
               (sap-alien (int-sap (get-lisp-obj-address
                                    (simplify (%make-lisp-obj (sap-int (alien-sap expr))))))
                          (* t)))
         0)
    (t (condition) (declare (ignore condition)) 1)))

(define-alien-callable calc-expression-to-string int ((expr (* t)) (result (* c-string)))
  (handler-case
      (progn
         ;; The following needs to use SBCL internal functions to coerce a raw
         ;; pointer value into a Lisp object. This is unsafe and
         ;; will be fixed in the next section.
         (setf (deref result) (expression-to-string (%make-lisp-obj (sap-int (alien-sap expr)))))
         0)
    (t (condition) (declare (ignore condition)) 1)))

Notice that for example purposes we just translate any exceptional Lisp conditions into the usual C convention of returning an error code. The actual return value is passed back through the second argument, an out-pointer, in typical C fashion.

Now we've defined C callable function pointers associated with names. You can get the underlying function pointers with the function sb-alien:alien-callable-function, which is useful for passing these callable functions as callbacks to C. SBCL actually didn't even have an external interface to callbacks before this, and CFFI had been using an internal system interface which is now superseded by this interface.
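
For instance, fetching the entry point for the calc-parse callable defined above looks like this (an illustrative one-liner, not from the original post); the result is an alien value wrapping the C-callable function pointer, ready to be handed to foreign code expecting a callback:

(sb-alien:alien-callable-function 'calc-parse)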

However, we're not too interested in callbacks right now. We want to be able to call these functions from C! You can save the Lisp image containing new callable function pointer definitions like so:

(sb-ext:save-lisp-and-die "calc.core"
  :callable-exports '("calc_parse" "calc_simplify" "calc_expression_to_string" ...))

What the :callable-exports argument describes is the set of C symbols you want to initialize with the corresponding function pointers created with define-alien-callable. This image, when started, has only two jobs: finish starting up the Lisp system and initialize these callable exports with the proper function pointers. It then immediately passes control back to C.

Here's an example C file which uses these functions:

#include "libcalc.h"
int main () {
    initialize_lisp("libcalc.core");
    expr_type expr;
    if (calc_parse(source, &expr) != ERR_SUCCESS)
       die("unable to parse expression");
    char *result;
    expr_type simplified_expr;
    calc_simplify(expr, &simplified_expr);
    if (calc_expression_to_string(simplified_expr, &result) != ERR_SUCCESS)
      die("unable to print expression to string");
    printf("\n%s\n", result);
    return 0;
}

Notice that there is a symbol, initialize_lisp, that we can use to initialize the Lisp runtime. Its arguments are the same as the arguments you can normally pass to the main sbcl executable, so once the core name is specified and the runtime initializes the function pointers we are going to use, control is returned to the program.1

Once the Lisp runtime has been initialized, your function pointers are ready to use, and calling them from C works just like a callback would, with the same type translation machinery used.

1.3. what about those raw pointers? objects? GC???

Now comes the question of what to do with objects. There are actually several issues:

  1. How do we make sure that references living outside the Lisp heap keep the object alive?
  2. Solving that, how can we make sure objects in the Lisp heap don't stay alive forever and do end up getting garbage collected at some point?
  3. Even more conspicuously, how do we deal with objects getting moved by the GC?

In ECL, we don't really need to worry about this last aspect too much. Since ECL uses the Boehm garbage collector, objects in memory never move, so you could get away with just passing the object address as a void pointer into C and manipulating it opaquely in your non-Lisp code, as long as you have some way of telling Boehm where in your C program Lisp objects can reside.

Unfortunately, this is an unworkable scheme with SBCL. The primary garbage collector used in SBCL is generational and copying. Therefore, passing an object address to the outside world directly outside of the managed Lisp heap means that once that object moves, the pointer becomes invalid, and a segfault ensues. Luckily, the solution is not too complicated: A layer of indirection for opaque pointers is all that's needed to resolve the problem. This can be implemented in any number of ways, but arguably the simplest scheme is just to have a map associating integers (fixnums) to objects which cross the foreign function boundary. If the map exists in the Lisp heap, for example by using a global hash table stored in a global Lisp variable, then GCs simply update the map values like everything else, and because fixnums do not move, the keys to the map are therefore stable handles we can pass to the outside world. Since we usually only use object pointers opaquely, the indirection through the map can simply be handled on the Lisp side. This solution also solves problems 1) and 2) at the same time: the map lives in the Lisp heap, so it will keep any objects in its entries live. When we want to remove the reference to the object from the outside world, we simply remove the entry in the map. If that object is no longer referenced anywhere else, the garbage collector happily cleans it up.
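
Here is a minimal sketch of what such a handle layer might look like (the names and details are illustrative, not part of SBCL; the SAP coercion helpers are the same sb-sys/sb-alien operators used in the snippets above):

(defvar *handle-table* (make-hash-table))
(defvar *handle-counter* 0)

(defun make-handle (object)
  "Intern OBJECT in the handle table and return its fixnum key as an opaque pointer."
  (let ((handle (incf *handle-counter*)))
    (setf (gethash handle *handle-table*) object)
    (sap-alien (int-sap handle) (* t))))

(defun dereference-handle (pointer)
  "Return the Lisp object named by the opaque handle POINTER."
  (gethash (sap-int (alien-sap pointer)) *handle-table*))

(defun release-handle (pointer)
  "Drop the table entry so the GC is free to reclaim the object."
  (remhash (sap-int (alien-sap pointer)) *handle-table*))

;; Exposed to the foreign world as lisp_release_handle (see the C example below).
(define-alien-callable lisp-release-handle int ((handle (* t)))
  (release-handle handle)
  0)

A real implementation would also want to guard the table for thread safety and perhaps recycle released handles, but this is the whole idea.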

Let's illustrate this indirection by modifying the above example to be safe and also teach the C program how to clean up memory. Assuming we have Lisp functions make-handle and dereference-handle to add this layer of indirection, we would simply change our callable definitions like so, allowing us to remove unsafe raw pointer coercions. make-handle and dereference-handle deal with Lisp fixnums coerced into opaque pointer values, so we know that they are always safe to pass around.

(define-alien-callable calc-parse int ((source c-string) (result (* (* t))))
  (handler-case
      (progn
         (setf (deref result) (make-handle (parse source)))
         0)
    (t (condition) (declare (ignore condition)) 1)))

(define-alien-callable calc-simplify int ((expr (* t)) (result (* (* t))))
  (handler-case
      (progn
         (setf (deref result) (make-handle (simplify (dereference-handle expr))))
         0)
    (t (condition) (declare (ignore condition)) 1)))

(define-alien-callable calc-expression-to-string int ((expr (* t)) (result (* c-string)))
  (handler-case
      (progn
         (setf (deref result) (expression-to-string (dereference-handle expr)))
         0)
    (t (condition) (declare (ignore condition)) 1)))

Now let's modify our C example to include this type of handling using the exported function lisp_release_handle:

#include "libcalc.h"
int main () {
    initialize_lisp("libcalc.core");
    expr_type expr;
    if (calc_parse(source, &expr) != ERR_SUCCESS)
       die("unable to parse expression");
    char *result;
    expr_type simplified_expr;
    calc_simplify(expr, &simplified_expr);
    if (calc_expression_to_string(simplified_expr, &result) != ERR_SUCCESS)
      die("unable to print expression to string");
    printf("\n%s\n", result);
    lisp_release_handle(expr);
    lisp_release_handle(simplified_expr);
    return 0;
}

As you can see, we only ever need to make and dereference handles on the Lisp side of the callable definitions, while the foreign world only needs to pass these objects around opaquely and know how to "free" its references to them. Any time you feel like you need to peek inside the internals of a Lisp object from the foreign world, all you need to do is define an alien callable function exposing that portion of the object. That way, cross-language pointers can be kept totally opaque and safe through some indirection, like the handles illustrated above.
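
For instance, here is a sketch following the same conventions, exposing the value of an int-literal (via the int-literal-value reader from the calc example) through the opaque-handle interface:

(define-alien-callable calc-int-literal-value int ((expr (* t)) (result (* int)))
  (handler-case
      (progn
        (setf (deref result) (int-literal-value (dereference-handle expr)))
        0)
    (t (condition) (declare (ignore condition)) 1)))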

1.4. sharing a process

There are some more sophisticated intra-process related issues I haven't touched upon, including how the Lisp runtime interacts with the outside world in relation to signals, sessions, threading, or file descriptors such as standard input and out. In particular, users will need to be careful when using Lisp together with another managed runtime in the same process, such as a Python interpreter. However, since many of these issues are general problems anyone running multiple systems in one process has to deal with, I won't cover the myriad number of ways people use to share process stuff. However, POSIX signal handlers can all be set and removed from Lisp itself in SBCL, if the need arises. Some examples of ways Lisp implementations could use signals:

  • signals relating to handling keyboard interrupts to drop into the debugger. The debugger can of course be disabled.
  • some kind of mechanism for telling other threads a stop-the-world garbage collection is happening and they need to suspend. This can be implemented via signals, but it can be implemented by other means such as safepoints.
  • some kind of mechanism for actually starting a garbage collection, for example through a segfault on an unallocated page.

Different builds of SBCL on different operating systems require different signals or do threading in different ways, so consult the documentation for details.
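
As a rough illustration (not from the original post), installing and later removing a handler from Lisp might look like the following; sb-sys:enable-interrupt, sb-sys:default-interrupt and the sb-unix signal constants are assumed here and may differ across SBCL versions:

;; Install a SIGTERM handler so the Lisp side can announce shutdown
;; before the embedding process tears it down.
(sb-sys:enable-interrupt sb-unix:sigterm
                         (lambda (signal info context)
                           (declare (ignore signal info context))
                           (format *error-output* "~&lisp side shutting down~%")))

;; Hand SIGTERM handling back to the default disposition.
(sb-sys:default-interrupt sb-unix:sigterm)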

1.5. wrapping it up

While the functionality that SBCL exposes is fairly low-level, a high level library called sbcl-librarian is in development (used in, for example, the Quil language stack), which defines a declarative interface for generating bindings for types and errors, as well as producing the corresponding Lisp callable definitions. It also defines some API exports for important Lisp-side functionality such as the handles described above, as well as exposing the Lisp loader and debugger for loading and debugging Lisp code at runtime from the foreign world. Much of the interface is still subject to design and change, but hopefully things will stabilize soon with use.

Well, so now you have another implementation on the block for embedding Lisp in larger systems. Go try it out!

Footnotes:

1

Note that behind the scenes, the symbols calc_parse, calc_simplify, etc. are all global variables that happen to hold function pointers. They are not function symbols in linker lingo, but from the perspective of C, you can call them the same as if you had defined a normal function. Hence there's only one level of indirection to make the call on the C side, the same indirection you pay to call any other function from a shared library via the normal PLT/GOT mechanism, which is pretty much optimal for cross-language shared library linkage, if you ask me.

Saturday, February 29, 2020

Block compilation - "Fresh" in SBCL 2.0.2

I've just managed to land a feature I've been missing in SBCL for quite some time now called block compilation, enabling whole-program-like optimizations for a dynamic language. There was at least one person back in 2003 who requested such a feature on the sbcl-help mailing list, so it's definitely been a long time coming. My suspicion is that although many of you old-timers have at least heard of this feature, this is one of those old state-of-the-art features in dynamic language implementations that has been lost to the passage of time and forgotten by younger generations...

what is "block compilation" anyway?

Those of you using Lisp, or any dynamic language, know one thing: Function calls to global, top-level functions are expensive. Much more expensive than in a statically compiled language. They're slow because of the late-bound nature of top-level defined functions, allowing arbitrary redefinition while the program is running and forcing runtime checks on whether the function is being called with the right number or types of arguments. This type of call is known as a "full call" in Python (the compiler used in CMUCL and SBCL, not to be confused with the programming language), and its calling convention permits the most runtime flexibility.

But there is another type of call available to us: the local call. A local call is the type of call you would see between local functions inside a top-level function, say, a call to a function introduced via an anonymous LAMBDA, LABELS or FLET in Lisp, or internal defines in Scheme and Python. These calls are more 'static' in the sense that they are treated more like function calls in static languages, being compiled "together" and at the same time as the local functions they reference, allowing them to be optimized at compile time. For example, argument checking can be done at compile time because the number of arguments of the callee is known at compile time, unlike in the full call case where the function, and hence the number of arguments it takes, can change dynamically at runtime at any point. Additionally, the local call convention can allow for passing unboxed values like floats around, as they are put into unboxed registers never used in the full call convention, which must use boxed argument and return value registers.
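
To make the distinction concrete, here is a small illustration (standard Common Lisp, not from the original post):

;; Full call: HELPER is a late-bound global function, so the call in CALLER
;; goes through its external entry point, with runtime argument-count checking
;; and boxed argument/return registers.
(defun helper (x) (1+ x))
(defun caller (x) (helper x))

;; Local call: the compiler sees the definition and the call site together,
;; so argument checking happens at compile time and values can stay unboxed.
(defun caller-with-local-call (x)
  (flet ((helper (x) (1+ x)))
    (helper x)))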

Block compilation is simply the compilation mode that turns what would normally be full calls to top-level defined functions into local calls to said functions, by compiling all functions in a unit of code (e.g. a file) together in "block" or "batch" fashion, just as local functions are compiled together in a single top-level form. You can think of the effect of block compilation as transforming all the DEFUNs in a file into one large LABELS form. In other words, it is a tunable knob that controls how dynamically or statically the compiler treats function definitions, by determining whether function name resolution is early-bound or late-bound in a given block compiled unit of code.

We can achieve block compilation at file-level granularity in CMUCL and SBCL by specifying the :block-compile keyword to compile-file. Here's an example:

In foo.lisp:
(defun foo (x y)
  (print (bar x y))
  (bar x y))

(defun bar (x y)
  (+ x y))

(defun fact (n)
  (if (zerop n)
      1
      (* n (fact (1- n)))))

> (compile-file "foo.lisp" :block-compile t :entry-points nil)
> (load "foo.fasl")

> (sb-disassem:disassemble-code-component #'foo)

; Size: 210 bytes. Origin: #x52E63F90 (segment 1 of 4)        ; (XEP BAR)
; 3F90:       .ENTRY BAR(X Y)                                ; (SB-INT:SFUNCTION (T T) NUMBER)
; 3FA0:       8F4508           POP QWORD PTR [RBP+8]
; 3FA3:       4883F904         CMP RCX, 4
; 3FA7:       0F85B1000000     JNE L2
; 3FAD:       488D65D0         LEA RSP, [RBP-48]
; 3FB1:       4C8BC2           MOV R8, RDX
; 3FB4:       488BF7           MOV RSI, RDI
; 3FB7:       EB03             JMP L1
; 3FB9: L0:   8F4508           POP QWORD PTR [RBP+8]
; Origin #x52E63FBC (segment 2 of 4)                          ; BAR
; 3FBC: L1:   498B4510         MOV RAX, [R13+16]              ; thread.binding-stack-pointer
; 3FC0:       488945F8         MOV [RBP-8], RAX
; 3FC4:       4C8945D8         MOV [RBP-40], R8
; 3FC8:       488975D0         MOV [RBP-48], RSI
; 3FCC:       498BD0           MOV RDX, R8
; 3FCF:       488BFE           MOV RDI, RSI
; 3FD2:       E8B9CB29FF       CALL #x52100B90                ; GENERIC-+
; 3FD7:       488B75D0         MOV RSI, [RBP-48]
; 3FDB:       4C8B45D8         MOV R8, [RBP-40]
; 3FDF:       488BE5           MOV RSP, RBP
; 3FE2:       F8               CLC
; 3FE3:       5D               POP RBP
; 3FE4:       C3               RET
; Origin #x52E63FE5 (segment 3 of 4)                          ; (XEP FOO)
; 3FE5:       .SKIP 11
; 3FF0:       .ENTRY FOO(X Y)                                ; (SB-INT:SFUNCTION (T T) NUMBER)
; 4000:       8F4508           POP QWORD PTR [RBP+8]
; 4003:       4883F904         CMP RCX, 4
; 4007:       7557             JNE L3
; 4009:       488D65D0         LEA RSP, [RBP-48]
; 400D:       488955E8         MOV [RBP-24], RDX
; 4011:       48897DE0         MOV [RBP-32], RDI
; Origin #x52E64015 (segment 4 of 4)                          ; FOO
; 4015:       498B4510         MOV RAX, [R13+16]              ; thread.binding-stack-pointer
; 4019:       488945F0         MOV [RBP-16], RAX
; 401D:       4C8BCD           MOV R9, RBP
; 4020:       488D4424F0       LEA RAX, [RSP-16]
; 4025:       4883EC40         SUB RSP, 64
; 4029:       4C8B45E8         MOV R8, [RBP-24]
; 402D:       488B75E0         MOV RSI, [RBP-32]
; 4031:       4C8908           MOV [RAX], R9
; 4034:       488BE8           MOV RBP, RAX
; 4037:       E87DFFFFFF       CALL L0
; 403C:       4883EC10         SUB RSP, 16
; 4040:       B902000000       MOV ECX, 2
; 4045:       48892C24         MOV [RSP], RBP
; 4049:       488BEC           MOV RBP, RSP
; 404C:       E8F1E163FD       CALL #x504A2242                ; #<FDEFN PRINT>
; 4051:       4C8B45E8         MOV R8, [RBP-24]
; 4055:       488B75E0         MOV RSI, [RBP-32]
; 4059:       E95EFFFFFF       JMP L1
; 405E: L2:   CC10             INT3 16                        ; Invalid argument count trap
; 4060: L3:   CC10             INT3 16                        ; Invalid argument count trap

You can see that FOO and BAR are now compiled into the same component (with local calls), and both have valid external entry points. This improves locality of code quite a bit and still allows calling both FOO and BAR externally from the file (e.g. in the REPL). The only thing that has changed is that within the file foo.lisp, all calls to functions within that file shortcut going through the global fdefinition's external entry points which do all the slow argument checking and boxing. Even FACT is faster because the compiler can recognize the tail recursive local call and directly turn it into a loop. Without block-compilation, the user is licensed to, say, redefine FACT while it is running, which forces the compiler to make the self call into a normal full call to allow redefinition and full argument and return value processing.

But there is one more goody block compilation adds...

the :entry-points keyword

Notice we specified :entry-points nil above. That's telling the compiler to still create external entry points for every function in the file, since we'd like to be able to call them normally from outside the code component (i.e. the compiled compilation unit, here the entire file). Now, those of you who know C know there is a useful way to get the compiler to optimize file-local functions, for example automatically inlining them if they are only used once, while also ensuring the function is not visible externally from the file. This is the static keyword in C. The straightforward analogue when block compiling is the :entry-points keyword. Essentially, it makes the DEFUNs which are not entry points not have any external entry points, i.e. they are not visible to any functions outside the block compiled unit and so become subject to an assortment of optimizations. For example, if a function with no external entry point is never called in the block-compiled unit, it will just be deleted as dead code. Better yet, if a function is used exactly once, it will be removed and directly turned into a LET at the call site, essentially acting as inlining with no code size tradeoff; this is always an optimization.
So, for example, we have

> (compile-file "foo.lisp" :block-compile t :entry-points '(bar fact))

which removes FOO for being unused in the block compiled unit (the file). This is all documented very well in the CMUCL manual section Advanced Compiler Use and Efficiency Hints under "Block compilation". Unfortunately this section (among others) never made it over to SBCL's manual, though it is still 99% accurate for SBCL.

a brief history of block compilation

Now to explain why the word "Fresh" in the title of this post is in quotes. You may be surprised to hear that CMUCL, the progenitor of SBCL, has had this interface to block compilation described above since 1991. Indeed, the Python compiler, first started in 1985 by Rob MacLachlan, was designed with the explicit goal of being able to block compile arbitrary amounts of code at once in bulk in the manner described above, as a way to close the gap between dynamic and static language implementations. In fact, the top-level intermediate representation data structure in Python is the COMPONENT, which represents a connected component of the flow graph created by compiling multiple functions together. So, what happened? Why did SBCL not have this feature despite its compiler being designed around it?

the fork

When SBCL was first released and forked off from CMUCL in late 1999, the goal of the system was to make it sanely bootstrappable and more maintainable. Many CMUCL features became casualties of this fork in the interest of getting something working, such as the bytecode compiler, many extensions, Hemlock, numerous backends, and block compilation. Many of these features, such as the numerous CMUCL backends, were eventually restored, but block compilation was one of those features that was never brought back into the fold, with bitrotted remnants of the interface lying around in the compiler for decades. The processing of DECLAIM and PROCLAIM forms, which was a crucial part of the more fine-grained form of block compilation, was revamped entirely to make things more ANSI compatible. In fact, the proclamation level of block compilation in CMUCL has not made it back into SBCL even now for this reason, and it is still unclear whether it would be worth adding this form of elegant block compilation back into SBCL and whether it can be done in a clean, ANSI manner. Perhaps once this feature becomes more well known, people will find the finer granularity form of block compilation useful enough to request (or implement) it.

the revival

Reanimating this bitrotted zombie of a feature was surprisingly easy and difficult at the same time. Because the idea of block compilation is so deeply embedded in the structure of the compiler, most things internally worked right off the bat, sans a few assertions that had crept in assuming an invariant of one top-level function definition per code component, which is definitionally contrary to the idea of block compilation. The front-end interface to block compilation, in contrast, was completely blown away and destroyed, with fundamental macros like DEFUN having been rewritten and the intermediate representation namespace behaving more locally. It took some time to redesign the interface to block compilation to fit with this new framework, and my first attempt last month to land the change ended with a Quicklisp library dropping the compiler into an infinite loop. The cause was a leakage in the intermediate representation which I patched up this month. Now things seem robust enough that it doesn't cause regressions for normal compilation mode.


what's left?

Not all is perfect though, and there are still a few bugs lurking around. For example, block compiling and inlining currently do not interact very well, while the same is not true for CMUCL. There are also probably a few bugs lurking around with respect to ensuring the right policies are in effect and have consistent semantics with block compilation. In addition, as mentioned above, the form-by-form level granularity given by the CMUCL-style (declaim (start-block ...)) ... (declaim (end-block ...)) proclamations is still missing. In fact, the CMUCL compiler sprinkled a few of these block compilation declarations around, and it would be nice if SBCL could block compile some of its own code to ensure maximum efficiency. However, the basic apparatus works, and I hope that as more people rediscover this feature and try it on their own performance-oriented code bases, bug reports and feature requests around block compilation and whole program optimization will develop and things will start maturing very quickly!
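
For the curious, the CMUCL-style declaration-level interface mentioned above looks roughly like this (a sketch using CMUCL's EXT package, as documented in the CMUCL manual; this form is not currently available in SBCL):

(declaim (ext:start-block parse simplify))  ; PARSE and SIMPLIFY are the entry points

(defun parse (string)
  (read-from-string string))

(defun simplify (expr)
  expr)

;; Functions between START-BLOCK and END-BLOCK that are not named as entry
;; points behave like local functions of the block and can be optimized away.
(declaim (ext:end-block))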

Tuesday, January 14, 2020

SBCL20 in Vienna

Last month, I attended the SBCL20 workshop in Vienna. Many thanks to the organizers and sponsors for inviting me to give a talk about my RISC-V porting work to SBCL and allowing me to otherwise throw around some ideas in the air with a bunch of SBCLites.

This was my first Lisp conference. It was really nice being able to meet a lot of people who up until then had only been floating voices on the internet. Given the location, it's no surprise that most of the attendees were European, but what did surprise me was the actual turnout for the event, some attendees even having been at SBCL10. Like, it seems, many others, I was certainly not expecting around 25 people to attend. (Robert Smith had given a paltry estimate of about 5!)

On Sunday our gracious host, Phillip Marek, gave us a nice tour of some cool places around Vienna. I joined the group right as they reached the Donauturm, and we had lunch afterwards. We then moved to Karlsplatz, where Phillip hunted for daguerreotypes. Fortune smiled upon us that day, since it was a nice sunny 10°C in Vienna in the middle of December!

Then on Monday, we proceeded to start the workshop proper, at about 8:30 am. We were hosted by the Bundesrechenzentrum (the Austrian Federal Computing Center), and accordingly, after Christophe kicked off the workshop, some BRZ representatives talked about how the work they do combats things like tax fraud in Austria. We had a nice room with a lot of space, mixer-style tables and snacks in the back of the room. At first, the schedule was that Douglas Katzman was to go Monday morning and Robert Smith in the afternoon, with my talk scheduled for Tuesday morning. I ended up asking Robert if he would switch with me, as I was pretty anxious to get my talk over with that day... And thus we pressed forward into our first talk of the day, maybe at around 10:30 am.

SBCL & Unix

Doug Katzman talked about his work at Google getting SBCL to work with Unix better. For those of you who don't know, he's done a lot of work on SBCL over the past couple of years, not only adding a lot of new features to the GC and making it play better with applications which have alien parts to them, but also doing a tremendous amount of cleanup on the internals and helping SBCL become even more Sanely Bootstrappable. That's a topic for another time, and I hope Doug or Christophe will have the time to write up about the recent improvements to the process, since it really is quite interesting.

Anyway, what Doug talked about was his work on making SBCL more amenable to external debugging tools, such as gdb and external profilers. It seems like they interface with aliens a lot from Lisp at Google, so it's nice to have backtraces from alien tools understand Lisp. It turns out a lot of prerequisite work was needed to make SBCL play nice like this, including implementing a non-moving GC runtime, so that Lisp objects and especially Lisp code (which are normally dynamic space objects and move around just like everything else) can't evade the aliens and will always have known locations.

Now it's time for questions, and hacking around until the next talk! (oh, wait a second...) Christophe had encouraged us all to 'go forth and produce something' in the meantime, but I needed to add a few more examples to my slides and eat something before I gave my talk. We had some cold sandwiches of various types for the day, and people started working on various projects.

RISC-V porting talk, VOPs

Around 1:10 pm or so, I went up to the podium to get my laptop set up for the talk. The HDMI cable didn't work when plugged into my laptop directly, but curiously, fitting the HDMI cable through a USB3 converter and connecting that to my laptop made the projector work. Anyway, I got a laser pointer, which, now that I think back on it, I probably waved around way too much in a fairly distracting manner. The slides are now posted on the website if you're curious what I talked about. I ended up following them pretty closely and was unsure how much detail to get into, because I wasn't sure of the audience's familiarity with the SBCL internals, which porting a new backend is usually going to get pretty deep into.

There was general knowledge of the internal VOP facility in SBCL though, which is usually defined for a given backend to translate the generic, machine-independent low-level intermediate representation (IR2 in internals parlance, VMR (virtual machine representation) in "public" internals documentation parlance) into target-specific machine code. Lots of people want to write their own inline assembly and integrate it with high level Lisp functions (with register allocation done for them), usually so they can use some hardware feature SBCL doesn't expose at a high level directly. For example, SIMD instructions are popular with the number-crunching crowd who want to use them from SBCL. Well, VOPs are a nice way to do this, so that's why so many people knew about them. Except for VOP lifetimes. Lots of people were also confused about VOP lifetimes. The only reason I ended up understanding VOP lifetimes was because I had debugged too many backend issues where the register allocator destroyed a register I needed the value of. In fact, the first patches I got (from Phillip Mathias Schäfer) for SB-ROTATE-BYTE support on RISC-V had lifetime issues, which I fixed before merging. And, incidentally, right after my talk, Doug showed me a tool written by Alastair Bridgewater called voplife.el that visualizes VOP lifetimes and told me that he never writes VOPs without that tool. Well, that would've been nice to have! And then Christophe told me that of course the tool didn't exist when he was doing backend work.

Speaking of 'back in my day', in my slides I gave out (to use an Irish expression) about how long bootstrapping took with the emulator. Christophe proceeds to tell me about his experience porting to HPPA machines in the early 2000's where it took about a full day to wait for the system to bootstrap... It's easy to forget that Moore's law happens (happened?) sometimes.

Oh, and just so I remember for the future, I got some questions from Tobias Rittweiler about how I handled memory model issues. I basically said I didn't, because I was porting a new CPU, not an OS, since the Linux support routines handle almost all of those concerns. Then Doug asked me why Load-Immediate-64 on RISC-V was so complicated: couldn't I have just loaded a word from memory? To which I responded that it's not clear whether it's more expensive to load a word from memory versus materializing it with only register operations. This is a problem they solved in the RISC-V GCC backend, and last time I checked, the RISC-V backend for LLVM just punts and does the basic, unoptimized sequence. Then he asked me why I started with Cheney GC, which I deflected straight away to Christophe, who made the initial decision. He basically said, "it's easy to fit Cheney GC entirely in my head at once." Fair.

Monday lightning talks

After the talk we had some more time to work on stuff, like demos for the lightning talks. I didn't really do much besides talking to people about our internals though. Rui from 3e asked me about supporting tracing through local functions. One of my main interests is actually optimizing away local functions, so, I'm probably not the one who's going to implement it (unless you paid me to), but it seems fairly straightforward to port the support which was added to CMUCL after the fork.

Then we had our Monday afternoon lightning talks. I remember this just being a 'if you have something to say or demo come up and do it' kind of thing. Marco Heisig went up first to talk about SB-SIMD. I had helped him debug getting the VOPs installed into SBCL properly a little bit before his demo, and he showed us some cool support he's adding. He ended up sending a follow up email after the conference with a more formal proposal to integrate it into SBCL. I hope he has the time to move it forward and have it in-tree in some form or another.

Then james anderson, in what is a very memorable moment for me, went up for his 'demo' which ended up being a quite punctuated proclamation: 'We need concurrent GC!' Indeed, we do.

I'm already starting to forget the details of the remaining talks on Monday. Folks who were there, help me remember!

Dinner

We had an official SBCL20 dinner afterwards, and it was time for some Austrian food. I sat in front of Marco and next to Luís Oliveira and enjoyed some Viennese schnitzel. I asked for tap water and got something that looked like but was definitely not tap water...

SBCL & quantum computing

Tuesday morning was a similar drill. We had a new (smaller) room, and this time, we needed our passports for access. Tuesday was lightning talk day, but first, Robert Smith gave a talk about how they use SBCL for quantum computing at Rigetti. They have a state-of-the-art quantum compiler and a quantum simulator, but Robert first gave us a quick primer on some physics and math (including tensor products in a concrete way, which was a nice breath of fresh air after I had what seemed like endless classes characterizing them according to their universal property). His slides are online, check them out! He's also interested in making SBCL play nice with aliens, but in a different way than Doug is. For one thing, he's interested in making an ECL-like API for SBCL to expose their quantum compiler code compiled with SBCL as a traditional C API. What really struck me in his talk was their compiler's ability to propagate qubit fidelity so that the compiler sometimes 'miscompiles' to get a more 'accurate' answer. (Scare quotes because quantum is spooky.)

Also, they rewrote one of their Lisp applications into another language due to outside pressure, but the rewrite was slower. It's also cool to know that, according to him, most of Robert's team actually did not have much Lisp exposure before joining, giving a sense that the community is still kicking.

Tuesday lightning talks

We proceeded to hack on more stuff after the talk and had a hot meal for lunch this time. I actually started working on something this time. A conversation with Doug the previous day had me saying that we do loop invariant code motion and stuff, to which Doug said, "but we don't." So, I looked and he was right, although I was genuinely surprised because it is a transformation our compiler framework easily supports. We do do a lot of traditional optimizations, in addition to some state of the art dynamic language type inference (stuff all written in the late 80's!) since our intermediate representations are well suited for that sort of thing. In fact, SBCL's front-end intermediate representation is essentially CPS in a flow graph, which anticipates a lot of research in the area done in the 90's and 2000's, basically being locally equivalent to SSA and not falling into the trap of being bound to scope trees.

So I started working on loop invariant code motion, and while I didn't quite finish by the end of the conference, I did get a proof of concept afterwards that almost self builds and works alright. Though after discovering some ancient notes by the original implementer (Rob MacLachlan) on the issue, I've decided I took the wrong approach after all. (The issue is that I worked on the level of IR1 instead of IR2.) Oh well.

Meanwhile, we had lightning talks starting around 1:00 pm with a short break at around 2:45 pm, if I recall correctly. The full list of topics that day is on the website, in order. We descended into a bit of a wishlist game, with Phillip talking about where to move hosting for SBCL. (The options were: stay with SourceForge, move to GitHub, move to GitLab, or move to common-lisp.net hosted GitLab. It was honestly quite the controversy.) Then I talked briefly about the loop invariant code motion work I was doing, and then asked the audience who had heard of block compilation. I don't remember the exact number, but I think there were more who didn't know than who knew. After complaining a little about how SBCL doesn't have it even though CMUCL does, I made it one of my big wishlist items, since I think that the ability to do whole program optimization is pretty important for a high-performance compiler, especially for a dynamic language like Lisp where most of the dynamic facilities go unused once an application is up and running in production (usually). Well, I ended up (re)implementing it yesterday, so maybe people will learn about it again. I might write up about it sooner or later. Then Stelian Ionescu talked about his wishlist items (such as gutting out a lot of the 'useless' backends) and we opened it up to the floor.

Wrap-up

After the official end of the conference, most of the crew went across the street into a mall to eat and chat for the rest of the night. Doug ended up showing me some cross disassembler stuff after some prompting about its removal, while Luís did a great job getting relocatable-heaps working on Windows next to us, which he promptly got upstreamed after the workshop. Great to see that new projects were motivated and finished as a result of SBCL20. It was a fun time, and, as Zach Beane said, I'm hoping we organize and meet again soon!

