Saturday, February 29, 2020

Block compilation - "Fresh" in SBCL 2.0.2

I've just managed to land a feature I've been missing in SBCL for quite some time now called block compilation, enabling whole-program-like optimizations for a dynamic language. At least one person back in 2003 requested such a feature on the sbcl-help mailing list, so it's definitely been a long time coming. My suspicion is that although many of you old-timers have at least heard of this feature, it is one of those old state-of-the-art features of dynamic language implementations that has been lost to the passage of time and forgotten by younger generations...

what is "block compilation" anyway?

Those of you using Lisp, or any other dynamic language, know one thing: function calls to global, top-level functions are expensive. Much more expensive than in a statically compiled language. They're slow because of the late-bound nature of top-level defined functions, which allows arbitrary redefinition while the program is running and forces runtime checks on whether the function is being called with the right number or types of arguments. This type of call is known as a "full call" in Python (the compiler used in CMUCL and SBCL, not to be confused with the programming language), and its calling convention permits the most runtime flexibility.

But there is another type of call available to us: the local call. A local call is the type of call you would see between local functions inside a top-level function, say, a call to a function introduced via an anonymous LAMBDA, LABELS, or FLET in Lisp, or internal defines in Scheme and Python. These calls are more 'static' in the sense that they are treated more like function calls in static languages, being compiled "together" and at the same time as the local functions they reference, allowing them to be optimized at compile time. For example, argument checking can be done at compile time because the number of arguments of the callee is known at compile time, unlike in the full call case, where the function, and hence the number of arguments it takes, can change dynamically at runtime at any point. Additionally, the local call convention can allow for passing unboxed values like floats around, as they are put into unboxed registers never used in the full call convention, which must use boxed argument and return value registers.
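To make the distinction concrete, here is a sketch (SUM-TO and STEP-SUM are hypothetical names for illustration):

```lisp
;; A sketch contrasting the two kinds of calls. The compiler sees the
;; whole LABELS form at once, so the call to STEP-SUM below is a local
;; call: the argument count is checked at compile time, and the self
;; tail call compiles to a jump, i.e. a loop.
(defun sum-to (n)
  (labels ((step-sum (i acc)
             (if (> i n)
                 acc
                 (step-sum (1+ i) (+ acc i)))))
    (step-sum 1 0)))

;; By contrast, a call to the top-level SUM-TO itself from elsewhere is
;; a full call through its global fdefinition, with runtime argument
;; checking, because SUM-TO may be redefined at any moment.
```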

Block compilation is simply the compilation mode that turns what would normally be full calls to top-level defined functions into local calls to said functions, by compiling all functions in a unit of code (e.g. a file) together in "block" or "batch" fashion, just as local functions are compiled together in a single top-level form. You can think of the effect of block compilation as transforming all the DEFUNs in a file into one large LABELS form. It acts as a tunable knob for how dynamically or statically the compiler should treat function definitions, by controlling whether function name resolution is early-bound or late-bound in a given block-compiled unit of code.
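As a sketch of that mental model (SQUARE and SUM-OF-SQUARES are hypothetical names), block compiling a file containing these two definitions treats them, for call-optimization purposes, roughly like the single LABELS form in the comment:

```lisp
;; Two top-level definitions in one file...
(defun square (x)
  (* x x))

(defun sum-of-squares (a b)
  (+ (square a) (square b)))

;; ...which block compilation treats, for the calls between them,
;; roughly like one big LABELS form, so the SQUARE calls become
;; fast local calls:
;;
;; (labels ((square (x) (* x x))
;;          (sum-of-squares (a b) (+ (square a) (square b))))
;;   ...)
```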

We can achieve block compilation at file-level granularity in CMUCL and SBCL by passing the :block-compile keyword argument to compile-file. Here's an example:

In foo.lisp:
(defun foo (x y)
  (print (bar x y))
  (bar x y))

(defun bar (x y)
  (+ x y))

(defun fact (n &optional (acc 1))
  (if (zerop n)
      acc
      (fact (1- n) (* n acc))))

> (compile-file "foo.lisp" :block-compile t :entry-points nil)
> (load "foo.fasl")

> (sb-disassem:disassemble-code-component #'foo)

; Size: 210 bytes. Origin: #x52E63F90 (segment 1 of 4)        ; (XEP BAR)
; 3F90:       .ENTRY BAR(X Y)                              ; (SB-INT:SFUNCTION (T T) NUMBER)
; 3FA0:       8F4508           POP QWORD PTR [RBP+8]
; 3FA3:       4883F904         CMP RCX, 4
; 3FA7:       0F85B1000000     JNE L2
; 3FAD:       488D65D0         LEA RSP, [RBP-48]
; 3FB1:       4C8BC2           MOV R8, RDX
; 3FB4:       488BF7           MOV RSI, RDI
; 3FB7:       EB03             JMP L1
; 3FB9: L0:   8F4508           POP QWORD PTR [RBP+8]
; Origin #x52E63FBC (segment 2 of 4)                          ; BAR
; 3FBC: L1:   498B4510         MOV RAX, [R13+16]              ; thread.binding-stack-pointer
; 3FC0:       488945F8         MOV [RBP-8], RAX
; 3FC4:       4C8945D8         MOV [RBP-40], R8
; 3FC8:       488975D0         MOV [RBP-48], RSI
; 3FCC:       498BD0           MOV RDX, R8
; 3FCF:       488BFE           MOV RDI, RSI
; 3FD2:       E8B9CB29FF       CALL #x52100B90                ; GENERIC-+
; 3FD7:       488B75D0         MOV RSI, [RBP-48]
; 3FDB:       4C8B45D8         MOV R8, [RBP-40]
; 3FDF:       488BE5           MOV RSP, RBP
; 3FE2:       F8               CLC
; 3FE3:       5D               POP RBP
; 3FE4:       C3               RET
; Origin #x52E63FE5 (segment 3 of 4)                          ; (XEP FOO)
; 3FE5:       .SKIP 11
; 3FF0:       .ENTRY FOO(X Y)                              ; (SB-INT:SFUNCTION (T T) NUMBER)
; 4000:       8F4508           POP QWORD PTR [RBP+8]
; 4003:       4883F904         CMP RCX, 4
; 4007:       7557             JNE L3
; 4009:       488D65D0         LEA RSP, [RBP-48]
; 400D:       488955E8         MOV [RBP-24], RDX
; 4011:       48897DE0         MOV [RBP-32], RDI
; Origin #x52E64015 (segment 4 of 4)                          ; FOO
; 4015:       498B4510         MOV RAX, [R13+16]              ; thread.binding-stack-pointer
; 4019:       488945F0         MOV [RBP-16], RAX
; 401D:       4C8BCD           MOV R9, RBP
; 4020:       488D4424F0       LEA RAX, [RSP-16]
; 4025:       4883EC40         SUB RSP, 64
; 4029:       4C8B45E8         MOV R8, [RBP-24]
; 402D:       488B75E0         MOV RSI, [RBP-32]
; 4031:       4C8908           MOV [RAX], R9
; 4034:       488BE8           MOV RBP, RAX
; 4037:       E87DFFFFFF       CALL L0
; 403C:       4883EC10         SUB RSP, 16
; 4040:       B902000000       MOV ECX, 2
; 4045:       48892C24         MOV [RSP], RBP
; 4049:       488BEC           MOV RBP, RSP
; 404C:       E8F1E163FD       CALL #x504A2242                ; #<FDEFN PRINT>
; 4051:       4C8B45E8         MOV R8, [RBP-24]
; 4055:       488B75E0         MOV RSI, [RBP-32]
; 4059:       E95EFFFFFF       JMP L1
; 405E: L2:   CC10             INT3 16                        ; Invalid argument count trap
; 4060: L3:   CC10             INT3 16                        ; Invalid argument count trap

You can see that FOO and BAR are now compiled into the same component (with local calls), and both have valid external entry points. This improves code locality quite a bit and still allows calling both FOO and BAR externally from the file (e.g. in the REPL). The only thing that has changed is that within the file foo.lisp, all calls to functions within that file bypass the global fdefinitions' external entry points, which do all the slow argument checking and boxing. Even FACT is faster, because the compiler can recognize the tail recursive local call and directly turn it into a loop. Without block compilation, the user is licensed to, say, redefine FACT while it is running, which forces the compiler to make the self call a normal full call, to allow redefinition and full argument and return value processing.
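The semantic tradeoff can be seen by redefining BAR at the REPL after loading the block-compiled FASL. This is a sketch of the expected behavior, since FOO's internal call to BAR was resolved at compile time:

```lisp
;; After (compile-file "foo.lisp" :block-compile t :entry-points nil)
;; and (load "foo.fasl"):
(foo 1 2)          ; FOO's call to BAR is a local call; returns 3

(defun bar (x y)   ; redefine BAR at the REPL
  (- x y))

(bar 1 2)          ; external callers go through the new fdefinition
                   ; and see the new BAR; returns -1
(foo 1 2)          ; but FOO still local-calls the BAR it was block
                   ; compiled with, so it still returns 3
```

This is exactly the dynamism you trade away within a block-compiled unit in exchange for fast local calls.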
But there is one more goody block compilation adds...

the :entry-points keyword

Notice we specified :entry-points nil above. That tells the compiler to still create external entry points for every function in the file, since we'd like to be able to call them normally from outside the code component (i.e. the compiled compilation unit, here the entire file). Now, those of you who know C know there is a useful way to get the compiler to optimize file-local functions, for example automatically inlining them if they are used only once, while also enforcing that the function is not visible outside the file: the static keyword. The straightforward analogue when block compiling is the :entry-points keyword. Essentially, DEFUNs that are not named as entry points get no external entry points at all, i.e. they are not visible to any functions outside the block-compiled unit and so become subject to an assortment of optimizations. For example, if a function with no external entry point is never called in the block-compiled unit, it will simply be deleted as dead code. Better yet, if such a function is used exactly once, it will be removed and turned directly into a LET at the call site, essentially acting as inlining with no code-size tradeoff; it is always a win.
So, for example, we have
> (compile-file "foo.lisp" :block-compile t :entry-points '(bar fact))
which removes FOO for being unused in the block-compiled unit (the file). This is all documented very well in the CMUCL manual section Advanced Compiler Use and Efficiency Hints under "Block compilation". Unfortunately this section (among others) never made it over to SBCL's manual, though it is still 99% accurate for SBCL.
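To make the once-use let-conversion concrete, here is a sketch (HELPER and MAIN are hypothetical names; the commented form shows the effective result when the file is block compiled with :entry-points '(main)):

```lisp
;; HELPER is not an entry point and has exactly one call site...
(defun helper (x)
  (* x x))

(defun main (x)
  (+ (helper x) 1))

;; ...so under block compilation with :entry-points '(main), HELPER
;; can be let-converted into MAIN, producing roughly:
;;
;; (defun main (x)
;;   (+ (let ((x x)) (* x x)) 1))
;;
;; after which HELPER's own code is deleted entirely.
```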

a brief history of block compilation

Now to explain why the word "Fresh" in the title of this post is in quotes. You may be surprised to hear that CMUCL, the progenitor of SBCL, has had the interface to block compilation described above since 1991. Indeed, the Python compiler, first started in 1985 by Rob MacLachlan, was designed with the explicit goal of being able to block compile arbitrary amounts of code at once, in bulk, in the manner described above, as a way to close the gap between dynamic and static language implementations. In fact, the top-level intermediate representation data structure in Python is the COMPONENT, which represents a connected component of the flow graph created by compiling multiple functions together. So, what happened? Why did SBCL not have this feature despite its compiler being designed around it?

the fork

When SBCL was first released, forking off from CMUCL in late 1999, the goal of the system was to make it sanely bootstrappable and more maintainable. Many CMUCL features became casualties of this fork for the purpose of getting something working, such as the bytecode compiler, many extensions, Hemlock, numerous backends, and block compilation. Many of these features, such as the numerous CMUCL backends, were eventually restored, but block compilation was never brought back into the fold, with bitrotted remnants of its interface lying around in the compiler for decades. The processing of DECLAIM and PROCLAIM forms, which was a crucial part of the more fine-grained form of block compilation, was revamped entirely to make things more ANSI-compatible. In fact, the proclamation level of block compilation in CMUCL has not made it back into SBCL even now for this reason, and it is still unclear whether it would be worth adding this elegant form of block compilation back into SBCL, and whether it can be done in a clean, ANSI manner. Perhaps once this feature becomes more well known, people will find the finer-granularity form of block compilation useful enough to request (or implement) it.

the revival

Reanimating this bitrotted zombie of a feature was surprisingly easy and difficult at the same time. Because the idea of block compilation was so deeply embedded in the structure of the compiler, most things worked internally right off the bat, sans a few assertions that had crept in which assumed an invariant along the lines of one top-level function definition per code component, contrary to the very definition of block compilation. The front-end interface to block compilation, in contrast, had been completely blown away, with fundamental macros like DEFUN having been rewritten and the intermediate representation namespace behaving more locally. It took some time to redesign the interface to block compilation to fit this new framework, and my first attempt last month to land the change ended with a Quicklisp library dropping the compiler into an infinite loop. The cause was a leak in the intermediate representation, which I patched up this month. Now things seem robust enough not to cause regressions for the normal compilation mode.

what's left?

Not all is perfect though, and there are still a few bugs lurking around. For example, block compilation and inlining currently do not interact very well, while the same is not true of CMUCL. There are also probably a few bugs with respect to ensuring the right policies are in effect and have consistent semantics under block compilation. In addition, as mentioned above, the form-by-form granularity given by the CMUCL-style (declaim (start-block ...)) ... (declaim (end-block ...)) proclamations is still missing. In fact, the CMUCL compiler sprinkles a few of these block compilation declarations around its own sources, and it would be nice if SBCL could block compile some of its own code to ensure maximum efficiency. However, the basic apparatus works, and I hope that as more people rediscover this feature and try it on their own performance-oriented code bases, bug reports and feature requests around block compilation and whole-program optimization will develop and things will start maturing very quickly!

Tuesday, January 14, 2020

SBCL20 in Vienna

Last month, I attended the SBCL20 workshop in Vienna. Many thanks to the organizers and sponsors for inviting me to give a talk about my RISC-V porting work to SBCL and allowing me to otherwise throw around some ideas in the air with a bunch of SBCLites.

This was my first Lisp conference. It was really nice to meet a lot of people who up until then had only been floating voices on the internet. Given the location, it's no surprise that most of the attendees were European, but what did surprise me was the actual turnout for the event, with some even having attended SBCL10. Like many others, it seems, I was certainly not expecting around 25 people to attend. (Robert Smith had given a paltry estimate of about 5!)

On Sunday we had a nice tour around some cool places in Vienna, led by our gracious host, Phillip Marek. I got to the group right as they were at the Donauturm, and had lunch with them afterwards. We then moved to Karlsplatz, where Phillip hunted for daguerreotypes. Fortune smiled upon us that day, since it was a nice, sunny 10°C in Vienna in the middle of December!

Then on Monday, we proceeded to start the workshop proper, at about 8:30 am. We were hosted by the Bundesrechenzentrum (the Austrian Federal Computing Center), and accordingly, after Christophe kicked off the workshop, some BRZ representatives talked about how their work combats things like tax fraud in Austria. We had a nice room with a lot of space, mixer-style tables, and snacks in the back of the room. At first, the schedule had Douglas Katzman going Monday morning and Robert Smith in the afternoon, with my talk scheduled for Tuesday morning. I ended up asking Robert if he would switch with me, as I was pretty anxious to get my talk over with that day... And thus we pressed forward into our first talk of the day, at maybe around 10:30 am.

SBCL & Unix

Doug Katzman talked about his work at Google getting SBCL to work with Unix better. For those of you who don't know, he's done a lot of work on SBCL over the past couple of years, not only adding a lot of new features to the GC and making it play better with applications which have alien parts to them, but also doing a tremendous amount of cleanup on the internals and helping SBCL become even more Sanely Bootstrappable. That's a topic for another time, and I hope Doug or Christophe will have the time to write up about the recent improvements to the process, since it really is quite interesting.

Anyway, what Doug talked about was his work on making SBCL more amenable to external debugging tools, such as gdb and external profilers. It seems like they interface with aliens a lot from Lisp at Google, so it's nice to have backtraces from alien tools understand Lisp. It turns out a lot of prerequisite work was needed to make SBCL play nice like this, including implementing a non-moving GC runtime, so that Lisp objects and especially Lisp code (which are normally dynamic space objects and move around just like everything else) can't evade the aliens and will always have known locations.

Now it's time for questions, and hacking around until the next talk! (oh, wait a second...) Christophe had encouraged us all to 'go forth and produce something' in the meantime, but I needed to add a few more examples to my slides and eat something before I gave my talk. We had some cold sandwiches of various types that day, and people started working on various projects.

RISC-V porting talk, VOPs

Around 1:10 pm or so, I went up to the podium to get my laptop set up for the talk. The HDMI cable didn't work when plugged into my laptop directly, but, curiously, routing it through a USB3 converter and connecting that to my laptop made the projector work. Anyway, I got a laser pointer, which, now that I think back on it, I probably waved around way too much; it was likely fairly distracting. The slides are now posted on the website if you're curious what I talked about. I ended up following them pretty closely, unsure how much detail to get into since I didn't know the audience's familiarity with SBCL internals, which porting a new backend usually gets pretty deep into.

There was general knowledge of the internal VOP facility in SBCL though, which is usually defined for a given backend to translate the generic, machine-independent low-level intermediate representation (IR2 in internals parlance, VMR (virtual machine representation) in "public" internals documentation parlance) into target-specific machine code. Lots of people want to write their own inline assembly and integrate it with high-level Lisp functions (with register allocation done for them), usually so they can use some hardware feature SBCL doesn't expose at a high level directly. For example, number-crunching folks are keen to use SIMD instructions with SBCL. Well, VOPs are a nice way to do this, so that's why so many people knew about them. Except for VOP lifetimes: lots of people were confused about VOP lifetimes. The only reason I ended up understanding them was that I had debugged too many backend issues where the register allocator destroyed a register whose value I needed. In fact, the first patches I got (from Phillip Mathias Schäfer) for SB-ROTATE-BYTE support on RISC-V had lifetime issues, which I fixed before merging. And, incidentally, right after my talk, Doug showed me a tool written by Alastair Bridgewater called voplife.el that visualizes VOP lifetimes, and told me that he never writes VOPs without that tool. Well, that would've been nice to have! And then Christophe told me that of course the tool didn't exist when he was doing backend work.
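For a flavor of what this looks like, here is a rough sketch of a VOP in the style of the x86-64 backend. MY-FAST-ADD is a hypothetical function name, and the exact storage-class, type, and instruction names vary by backend and SBCL version, so treat this as illustrative only:

```lisp
;; A sketch: teach the compiler to compile calls to a hypothetical
;; MY-FAST-ADD on untagged machine words down to a register add.
(define-vop (my-fast-add)
  (:translate my-fast-add)          ; calls to MY-FAST-ADD may use this VOP
  (:policy :fast-safe)
  (:args (x :scs (unsigned-reg))    ; arguments arrive in untagged
         (y :scs (unsigned-reg)))   ; (unboxed) registers
  (:arg-types unsigned-num unsigned-num)
  (:results (r :scs (unsigned-reg)))
  (:result-types unsigned-num)
  (:generator 2                     ; rough cost for the optimizer
    (move r x)                      ; r := x (elided if same register)
    (inst add r y)))                ; r := r + y
```

Getting the lifetimes right means telling the register allocator exactly when each argument and result is live, which is where most first attempts (including the patches mentioned above) go wrong.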

Speaking of 'back in my day', in my slides I gave out (to use an Irish expression) about how long bootstrapping took with the emulator. Christophe proceeds to tell me about his experience porting to HPPA machines in the early 2000's where it took about a full day to wait for the system to bootstrap... It's easy to forget that Moore's law happens (happened?) sometimes.

Oh, and just so I remember for the future: I got some questions from Tobias Rittweiler about how I handled memory model issues. I basically said I didn't, because I was porting a new CPU, not an OS, and the Linux support routines handle almost all of those concerns. Then Doug asked me why Load-Immediate-64 on RISC-V was so complicated: couldn't I have just loaded a word from memory? To which I responded that it's not clear whether it's more expensive to load a word from memory or to materialize it with only register operations. This is a problem they solved in the RISC-V GCC backend, and last time I checked, the RISC-V backend for LLVM just punts and does the basic, unoptimized sequence. Then he asked me why I started with Cheney GC, which I deflected straight away to Christophe, who had made the initial decision. He basically said, "it's easy to fit Cheney GC entirely in my head at once." Fair.

Monday lightning talks

After the talk we had some more time to work on stuff, like demos for the lightning talks. I didn't really do much besides talking to people about our internals, though. Rui from 3e asked me about supporting tracing through local functions. One of my main interests is actually optimizing away local functions, so I'm probably not the one who's going to implement it (unless you paid me to), but it seems fairly straightforward to port the support that was added to CMUCL after the fork.

Then we had our Monday afternoon lightning talks. I remember this just being a 'if you have something to say or demo, come up and do it' kind of thing. Marco Heisig went up first to talk about SB-SIMD. Shortly before his demo, I had helped him debug getting the VOPs installed into SBCL properly, and he showed us some cool support he's adding. He ended up sending a follow-up email after the conference with a more formal proposal to integrate it into SBCL. I hope he has the time to move it forward and have it in-tree in some form or another.

Then james anderson, in what is a very memorable moment for me, went up for his 'demo' which ended up being a quite punctuated proclamation: 'We need concurrent GC!' Indeed, we do.

I'm already starting to forget the details of the remaining talks on Monday. Folks who were there, help me remember!


We had an official SBCL20 dinner afterwards, and it was time for some Austrian food. I sat in front of Marco and next to Luís Oliveira and enjoyed some Viennese schnitzel. I asked for tap water and got something that looked like but was definitely not tap water...

SBCL & quantum computing

Tuesday morning was a similar drill. We had a new (smaller) room, and this time we needed our passports for access. Tuesday was lightning talk day, but first, Robert Smith gave a talk about how they use SBCL for quantum computing at Rigetti. They have a state-of-the-art quantum compiler and a quantum simulator, but Robert first gave us a quick primer on some physics and math (including tensor products in a concrete way, which was a nice breath of fresh air after I had what seemed like endless classes characterizing them according to their universal property). His slides are online, check them out! He's also interested in making SBCL play nice with aliens, but in a different way than Doug is. For one thing, he's interested in making an ECL-like API for SBCL to expose their quantum compiler code, compiled with SBCL, as a traditional C API. What really struck me in his talk was their compiler's ability to propagate qubit fidelity, letting the compiler sometimes 'miscompile' to get a more 'accurate' answer. (Scare quotes because quantum is spooky.)

Also, they rewrote one of their Lisp applications in another language due to outside pressure, but the rewrite was slower. It's also cool to know that, according to Robert, most of his team did not have much Lisp exposure before joining, giving a sense that the community is still kicking.

Tuesday lightning talks

We proceeded to hack on more stuff after the talk and had a hot meal for lunch this time. I actually started working on something this time. A conversation with Doug the previous day had me saying that we do loop invariant code motion and such, to which Doug said, "but we don't." So I looked, and he was right, although I was genuinely surprised, because it is a transformation our compiler framework easily supports. We do do a lot of traditional optimizations, in addition to some state-of-the-art dynamic language type inference (all stuff written in the late '80s!), since our intermediate representations are well suited for that sort of thing. In fact, SBCL's front-end intermediate representation is essentially CPS in a flow graph, which anticipates a lot of research in the area done in the '90s and 2000s, being locally equivalent to SSA and not falling into the trap of being bound to scope trees.
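As an aside, loop invariant code motion hoists computations whose inputs do not change across iterations out of the loop. A sketch of the before and after, with hypothetical function names:

```lisp
;; (* scale scale) does not depend on I, so it is loop invariant...
(defun scale-sum (scale n)
  (let ((acc 0))
    (dotimes (i n acc)
      (incf acc (* i (* scale scale))))))

;; ...and the pass would hoist it, producing the equivalent of:
(defun scale-sum-hoisted (scale n)
  (let ((acc 0)
        (scale2 (* scale scale)))   ; computed once, before the loop
    (dotimes (i n acc)
      (incf acc (* i scale2)))))
```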

So I started working on loop invariant code motion, and while I didn't quite finish by the end of the conference, I did get a proof of concept afterwards that almost self builds and works alright. Though after discovering some ancient notes by the original implementer (Rob MacLachlan) on the issue, I've decided I took the wrong approach after all. (The issue is that I worked on the level of IR1 instead of IR2.) Oh well.

Meanwhile, we had lightning talks starting around 1:00 pm, with a short break at around 2:45 pm, if I recall correctly. The full list of topics that day is on the website, in order. We descended into a bit of a wishlist game, with Phillip talking about where to move hosting for SBCL. (The options were: stay with SourceForge, move to GitHub, move to GitLab, or move to hosted GitLab. It was honestly quite the controversy.) Then I talked briefly about the loop invariant code motion work I was doing, and then asked the audience who had heard of block compilation. I don't remember the exact number, but I think there were more who didn't know than who knew. After complaining a little about how SBCL doesn't have it even though CMUCL does, I made it one of my big wishlist items, since I think the ability to do whole program optimization is pretty important for a high-performance compiler, especially for a dynamic language like Lisp, where most of the dynamic facilities go unused once an application is up and running in production (usually). Well, I ended up (re)implementing it yesterday, so maybe people will learn about it again. I might write it up sooner or later. Then Stelian Ionescu talked about his wishlist items (such as gutting a lot of the 'useless' backends) and we opened it up to the floor.


After the official end of the conference, most of the crew went across the street into a mall to eat and chat for the rest of the night. Doug ended up showing me some cross disassembler stuff after some prompting about its removal, while Luís did a great job getting relocatable-heaps working on Windows next to us, which he promptly got upstreamed after the workshop. Great to see that new projects were motivated and finished as a result of SBCL20. It was a fun time, and, as Zach Beane said, I'm hoping we organize and meet again soon!
