I wrote this a while ago; it covers the basics that affect compilation performance and lists a few troubleshooting tips.
Thanks for sharing. I learned stuff.
Curious question, I’m not sure I’m reading this correctly: the hello world compiled in roughly 0.537 seconds, and you state that some of your projects (with tens of thousands of lines of code, and a lot more in dependencies) “don’t compile that much slower”. How long do those projects typically take to compile without the optimizations mentioned in the blog?
I tried this on the examples from the afmt library I recently shared, and it did better than 0.5 seconds: it took roughly 0.386 seconds, and it does a lot of fmt stuff. I’m on Linux kernel 6.17 (booting and running entirely from an external USB SSD) on a ROG Strix G18 G814JIR laptop.
Total Time - 196.462 ms - 100.00%
initialization - 2.690 ms - 1.36%
parse files - 14.797 ms - 7.53%
type check - 63.315 ms - 32.22%
LLVM API Code Gen (57 modules) - 71.689 ms - 36.48%
lld-link - 43.961 ms - 22.37%
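For anyone wondering where a breakdown like that comes from: the Odin compiler can print its own phase timings via a build flag. A minimal sketch (if I recall correctly the flag is `-show-timings`; check `odin build -help` on your version):

```shell
# Build the current package with an optimized profile and print
# per-phase compiler timings (parse, type check, codegen, link).
odin build . -o:speed -show-timings
```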
Those projects generally take about a second, unless something particularly dumb triggers a slow path in the compiler. I ran into that with sokol, where it generated 20k LOC of shader bytecode and took around 5 seconds to compile the project.
what’s “afmt”? a custom formatting library?
Yes, for quick ANSI formatting while still being able to use the fmt print functions as normal. I print out a lot of debug info for sanity checks, but since all of it gets deleted once I’m satisfied, I don’t want to spend much time organizing and sorting the output. So I like being able to bang out some colors to differentiate it: colors for things I’m changing, no colors for things I expect to stay the same but still want to monitor or reference.
ansi-printing-library-collection-afmt
So I’ll do something like:
afmt.printfln("%#w", "-f[red]", myvar1)
afmt.printfln("%#w", "-f[yellow]", myvar2)
afmt.printfln("%#w", "-f[green]", myvar3)
// or
_ = is_ok ? afmt.printfln("%#w", "-f[green]", myvar1) : afmt.printfln("%#w", "-f[red]", myvar1)
// ...etc
This is not too bad! But interestingly, some of my projects (with tens of thousands of lines of code, and a lot more in dependencies) don’t compile that much slower.
That’s too true. Odin compilation speeds are surprising. I was afraid my project was going to be very slow to compile. 5000 lines of code later, it was roughly the same. 10000 lines of code later, same story.
Another thing that the article doesn’t mention is that for -o:speed adding -use-separate-modules improves my build times by 8-9 seconds.
for optimized builds you really shouldn’t be using the -use-separate-modules flag since it prevents a lot of optimization due to opaque dependencies between the actual modules.
There’s a new Thin LTO mode which is quite interesting and allows many cross-module optimizations during linking
In your experience, how did LTO compare to normal compilation?
I see. I didn’t know that. Doesn’t seem to make a difference in terms of performance for me though.
For my project, here’s the compilation perf with LTO (note: I have an ancient CPU):
baseline -o:speed: 9.5s
-lto:thin: 5.8s
-lto:thin-files: 4.9s
As for the perf of the compiled binary, I see no tangible difference.
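For reference, the three configurations benchmarked above would be invoked roughly like this. This is a sketch based on the flag spellings named in this thread (`-lto:thin` and `-lto:thin-files` are reported by the commenter, not verified by me); double-check them against `odin build -help` for your compiler version:

```shell
# Baseline optimized build (9.5s reported above)
time odin build . -o:speed

# Thin LTO: cross-module optimization deferred to link time
time odin build . -o:speed -lto:thin

# Thin LTO variant using intermediate files
time odin build . -o:speed -lto:thin-files
```

Wrapping the build in `time` gives the wall-clock numbers being compared here.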