Is it true that Odin can't be as fast as Zig/Rust because all LLVM optimizations aren't possible?

A reddit comment from PlateEquivalent2910 says:

Bounds checking enabled by default. Even when it is disabled, Odin explicitly does not use all of the llvm’s optimizations because they either don’t fit Odin’s model, or rely on some sort of UB.

This is why in some cases straight looking code will not be optimized with Odin. Some of the optimization passes can be enabled with --aggressive but as far as I understand that is basically compiler giving up; program correctness is out the window.

In comparison, zig, rust, c, c++, does not disable those passes. You can get some UB or surprising behavior, but the language and the compiler works with the assumption that those optimization passes will happen, at some point. Which is why they will be faster than Odin in aggregate results.

Everyone always assumes plugging your language into llvm makes it insta C speed. That’s not the case.

Is there any truth to what they said?


TL;DR: the original comment is not true and is mostly just out of date.


I’ll break down this original Reddit comment to show it doesn’t really make much sense.

Bounds checking enabled by default.

Yes, and those checks have the possibility of being optimized away too.

Even when it is disabled, Odin explicitly does not use all of the llvm’s optimizations because they either don’t fit Odin’s model, or rely on some sort of UB.

This has NOTHING to do with bounds checking. It's a really weird non sequitur.

In previous versions of Odin, some of LLVM's optimization passes did have to be turned off, partly because of certain bugs in LLVM, and partly because some of the LLVM IR Odin generated was not handled well by those passes. You have to remember that LLVM is developed alongside Clang: it is primarily tuned to optimize C and C++ code, and to the specific shape of LLVM IR that Clang outputs. It doesn't always cope well with LLVM IR that wasn't generated by Clang.

With later versions of LLVM, Odin now enables all of the same optimizations and tries to generate LLVM IR closer to what LLVM expects.


As Odin has different semantics to other languages (e.g. C, Rust, Zig, etc.), it needs to do different kinds of optimization passes. This does not mean it cannot be as fast; that is just a complete misunderstanding of what an optimization is in the first place. It's like comparing oranges to carrots rather than oranges to other oranges. People are not comparing the same thing.

n.b. I’ve talked about it before as to Why I Hate Language Benchmarks, so I won’t get into it here. But people are rarely comparing things even in the same category, let alone similar things.

In many cases, certain Rust code can be compiled to much faster machine code than equivalent C code. When people talk about "fast" they are usually talking about the aggregate set of "optimizations", not specific optimization passes, nor whether you can perform the same sorts of optimizations as other languages. In Rust, for example, there can never be more than one mutable reference to something (due to its affine substructural type system), which means it can do aliasing-based optimizations by default, unlike C which requires restrict or Odin which requires #no_alias.

Rust has more rules about how things work enforced by the compiler, so its backend can make more guarantees and thus run more "optimization" passes. A language like C or Odin requires the user to do a lot more explicit work to tell the compiler which extra guarantees can be made.

However, due to how the majority of Rust code is written because of how strict its type system, ownership semantics, and lifetime semantics are: Rust code is not really faster than similar C or Odin code in practice. Things like Arc&lt;Mutex&lt;Box&lt;dyn T&gt;&gt;&gt; (that is a minor joke but I have seen it enough) do make code a heck of a lot slower, because people write Rust in an "idiomatic way" which is not going to run well on the computer. But Rust is not really aiming to be fast, per se, but rather memory safe, and it really does take that to the extreme (in a good way and a bad way too). Rust could nudge people to write better code that would run faster whilst still adhering to memory safety, but it doesn't currently (nor is it really a goal of Rust).


You might have noted that I am writing "optimization" in quotes a lot, and this is because I am trying to stress something which is lost on a lot of people. In order to do an optimization in the first place, you must have rules within which to optimize. This might sound really obvious, but there are a lot of people who have learnt that certain "optimizations" are done by "exploiting undefined behaviour". This is kind of like a Common Law legal practice misapplied to a domain it shouldn't be applied to—people think "well the spec doesn't say I cannot do this, therefore I will do this and call it an 'optimization'". In my personal view, this isn't an "optimization" whatsoever and is closer to Garbage In Garbage Out.

This is why, if you have ever used C for a long time, you cannot always trust the optimizer to do the right thing, and you have to check that it's doing what you assume to be correct. Yes, people might invoke "undefined behaviour", but why can the compilers not give better diagnostics about this, or even just do something which the user assumes to be more likely? Or better yet, why not just define that behaviour somewhere?!

With regards to Zig's "optimizations in the aggregate", Zig takes pretty much the opposite philosophy to Odin in terms of "optimizations". Its compiler assumes by default that it can make almost any assumption/guarantee, and in the process you have to tell it NOT to make them. It makes many more assumptions than the user may expect, and thus it can produce code which is not what the user expected. So Zig's compiler is full of "optimizations" that need to be gated, but it doesn't really have a proper proofing system, nor a strong enough type system that emits diagnostics before generating actually broken code. A good example of this is the aliasing assumptions it makes everywhere, for both function inputs and function returns: it assumes inputs and return locations don't alias, and this does lead to breakage in code people expect to work. Some people like this approach, but I personally do not.

My philosophy for Odin was that you should opt in to the generally unsafe "optimizations" (when possible) rather than having to opt out, especially when the language/compiler does not enforce stricter semantic analysis. This is why I get a bit annoyed by a lot of the talk around "optimizations": many people have loads of wrong conceptions about how any of it works, or even about what an optimization is in the first place.


I hope this answers the original comment enough.


I personally care more about compile time: the faster the compile time, the more iterations I can make to optimize my code and actually ship something for once. Having a +1% or +5% performance boost from compiler optimizations doesn't matter much on a medium/big project where you have bad code everywhere, and where every time you iterate on writing good code OR optimizing it you get -20ms or -40ms improvements.
I differentiate good code vs optimization because for me they are not the same thing: good code is where you understand what you are doing and move memory seamlessly around to do the stuff you want, while optimizing is coming up with clever tricks to shave some milliseconds off your game loop.


Learning Odin has helped me write much better rust code. I used to rely on Arc a lot.


Could you elaborate: what do you do in rust that you learned from programming in Odin? For me, it's that I more often give functions a &mut T instead of returning an owned T, in order to reduce allocations in hot loops.

I use fixed arrays much more now and just slice into them as needed. I use almost no traits or methods and just write plain old functions. I used to try to use iterator chains for everything but now I barely touch them. Overall I don’t think in terms of objects having behaviour, which rust initially made me do, and more about transforming data.
