According to the description in the docs it shouldn’t return any negative value ever.
So why does it not just return a uint?
Because Odin defaults to signed integers. And I am in the signed-by-default camp. I am not getting into this argument.
There is an entire “debate” over signed-by-default vs unsigned-by-default, and over the years I eventually became swayed to the signed-by-default side, as I think it has the fewest issues in practice.
A good example of this is to ask a person to write a for loop in reverse with unsigned integers, and do it correctly the first time.
I’d argue people asking this question usually cannot do it the first time.
I’ll write the example people usually come up with, to see if you can spot the TWO bugs:
for i := uint(n)-1; i >= 0; i -= 1 {
...
}
And yes, there are TWO bugs.
I do like a good puzzle.
I don’t think I’ve ever written a reverse for loop with unsigned integers before, but now that I’ve had a thoughtful look at this, I can see why.
Spoiler: The two bugs are: when n == 0, i becomes max(uint), causing the loop to run when it was expected not to; and when i == 0, doing i -= 1 wraps it around to max(uint) because of the wrapping rules, and thus the loop runs forever.
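For what it’s worth, here is a sketch of two ways to write the loop so that neither wrap can happen (assuming n is just an element count, and a signed int in the first version):

for i := n-1; i >= 0; i -= 1 { // signed index: n == 0 never enters the loop, and i >= 0 fails once i goes negative
	// ...
}

for i := uint(n); i > 0; i -= 1 { // unsigned index: count down from n to 1
	idx := i - 1 // the element actually being visited, so i itself never wraps
	// ...
}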
I noticed both bugs the first time and I’m not a great dev.
But I wanted to ask: making the default signed means losing half of the indexable space, so is this really just not a problem in practice?
Languages like C and Rust default to size_t and usize as their indexable integer types, but Odin defaults to essentially ssize_t. Through the years of maintaining and writing Odin, has the need for such a big number just never arisen? I’d guess such high addresses would be quite useful for OSes, drivers, or embedded systems where you’d need to poke at specific memory locations.
You can address any reachable memory with uintptr. int only restricts how far you can index from a base pointer (the index times the size of the element), and that by itself is quite a wide range.
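For example (a minimal sketch; the address and register name are completely made up), poking a specific memory location goes through uintptr and a pointer cast, with no int index involved:

MMIO_STATUS :: uintptr(0x4000_0000) // hypothetical memory-mapped register address

read_status :: proc() -> u32 {
	addr := MMIO_STATUS    // plain uintptr value
	reg  := cast(^u32)addr // uintptr -> typed pointer
	return reg^            // real MMIO code would also want a volatile load
}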
On a system with 64-bit integers, the lack of one bit is the difference between being able to index up to 9,223,372,036,854,775,807 elements versus 18,446,744,073,709,551,615, to put this into perspective.
I suspect nobody needs to index up to 9 quintillion individual bytes from one single base pointer, let alone 18 quintillion.
Even if you find yourself on a system where indexes are actually limited, you can still break your data structure into indexable chunks with shifted base pointers. In the end, the limiting factor is the size of the data type that you convert into a pointer to access memory, not how the indexes are used in calculation.
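A rough sketch of that chunking idea (the names and the chunk size are made up for illustration):

CHUNK_SIZE :: 1 << 30 // elements per chunk, chosen arbitrarily

Big_Array :: struct {
	chunks: [dynamic][^]u8, // each chunk gets its own base pointer
}

get :: proc(a: ^Big_Array, idx: u64) -> u8 {
	chunk  := int(idx / CHUNK_SIZE) // which base pointer to use
	offset := int(idx % CHUNK_SIZE) // index within that chunk, comfortably inside int range
	return a.chunks[chunk][offset]
}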
I’m gonna be honest, I didn’t notice the bugs. I just knew they had something to do with overflow, but only because bill said there was a bug. If I was casually reviewing a 1000+ LoC diff, I’d probably miss it.