What features of Odin do you dislike?

Sometimes it’s something like “lib/v0.4”

I think that would indicate an issue with how the codebase itself is structured.

1 Like

I’m sorry, but I begrudgingly dislike proc groups. They don’t work well with tooling, and they give worse error messages than just typing out the more specific procedure. I wanted to like them, but they have not treated me well. Frankly, one of the worst things about them is that the docs say proc groups should be preferred.

More words on proc groups here by Rickard Andersson (gonz) (and some more feedback from him about Odin, plus some unexpectedly passionate OSX bashing):

1 Like

They are kind of inherently bad because the compiler cannot know what you wanted from the possible options. That’s the problem with overloading in general. The thing is… overloaded.

1 Like

I totally get that! What I’m more interested in is how you and others use proc groups, whether they work well for you, and whether you think changing the docs might benefit users. If I’m just part of a minority of users, then I would absolutely support ignoring people like me, since it’s about making the language a joy for the greatest number of people, and I can simply decide not to use proc groups and be happy.

What works really well for me is using proc groups for procs that all take only a single parameter (for example, the delete proc group), because then the errors are inherently simple and obvious, just as they are inherently bad when you have many procs with a varying number of parameters.
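To illustrate the single-parameter case, here is a minimal sketch (the resolution comments reflect my understanding of how delete dispatches, not compiler output):

```odin
package example

main :: proc() {
	s := make([]int, 4)
	m := make(map[string]int)
	// With a single argument, the intended overload is
	// unambiguous from the argument's type alone.
	delete(s) // resolves to delete_slice
	delete(m) // resolves to delete_map
}
```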

So what do you think about changing or removing the “prefer make() over make_slice() etc.” docs? To me it’s a similar situation to using: don’t use it unless it helps you, and don’t overuse it.

As an aside: Odin is awesome, thank you for making it :smiley:

Isn’t one of the advantages of proc groups being explicit that you could actually specify an order? This is one thing I found surprising: the method of resolving which proc to use is never mentioned in the docs that I saw. Since the group is explicit, I assumed a proc group would just “try each in the defined order and take the first that can match the given arguments”, but instead there’s some kind of special weighting done to disambiguate. That seems like a level of opaqueness that wasn’t really necessary, unless there’s some need for it that I’m not seeing.

1 Like

Regarding the make thing, I honestly prefer just make in general. The problem Rickard brings up is most likely a bug/oversight in the compiler. The type argument should be more than clear enough when reading.

1 Like

So the weighting has to be done regardless because just picking the “first” might result in the other procedure NEVER being selected. So we try to figure out what the “best” procedure is with some basic weighting rules. The rules are relatively simple but long.

1 Like

That can still happen, though.

Basically:

foo_a :: proc() {}
foo_b :: proc(a := 0) {}
foo :: proc { foo_b, foo_a }
foo(1) // calls foo_b
foo()  // also calls foo_b -- with scoring, there's no way to call foo_a via the proc group

Using the order defined in the proc group would make the error more obvious, and give you more control over what it actually does (e.g. the above could be fixed by changing the order of the proc group).

You could maybe fix this exact case based on the scoring (using a default should lower the score, maybe?), but I still don’t see the need for weighting at all when you have an ordered list in code. You can get that wrong, yes–but at least it’s within your control then.

This is based on a real issue that happened in base. It had to be fixed by removing the default value, because there’s no control over how the proc group resolves. That’s probably fine in that case, but as it is now, you have to change your code to accommodate the proc group.

This should be a compile error as there is no way to disambiguate between foo_a and foo_b.

2 Likes

What needs to be supported officially? Shouldn’t it technically be possible to do it through the Android NDK, like C and C++ do?

We have a few unnecessary duplicate features for the sake of a couple characters, which goes against the first Odin principle—Simplicity:

  • do (including do if)
  • cond ? x : y, since the if and when variants exist, which match their uses (runtime/compile-time) elsewhere wonderfully
  • multiple ways to cast
  • @thing and @(thing)
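For example, the casting duplication looks like this (a quick sketch of the equivalent forms):

```odin
package example

main :: proc() {
	x: f32 = 123.5
	a := int(x)          // call-style conversion
	b := cast(int)x      // operator-style conversion; same result as above
	c := transmute(u32)x // bit-for-bit reinterpretation, a different operation
	_, _, _ = a, b, c
}
```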

Consistency in minimizing useless additional code (already done for unused imports and variables):

  • unnecessary casts/transmutes, e.g. int(an_int)
  • unused procedure parameters
  • place the errors for unused code after a panic call behind a flag, as we do with the rest

Sometimes you can’t follow code without knowing how the compiler works:

  • require parentheses if there’s any possibility of ambiguity in equations/expressions

Flags making things more strict rather than more lax means you have to both be aware of flags as well as remember to apply them yourself:

  • code quality can only go down by default
  • people have to now fix lax library code to use them alongside flags that force correctness, instead of library authors being heavily incentivized to be correct by default
  • you have to enable a bunch of flags to force correctness, adding that much more friction to do the right thing instead of a single -be-chill-and-enjoy-the-party flag while experimenting

Already mentioned by others, but the way we’ve normalized certain instances of implicitness causes implicit issues when being able to tell if we’ve fucked something up or not would be vastly preferred:

  • I have to know when something allocates since it always requires additional work/thinking (changing the allocator and/or having to deallocate it), so it only does harm to hide the allocator parameter from plain view
  • @(require_results): I have to check every procedure I call to ensure I’ve used all the return values, so my application doesn’t have the ability to magically blow up in my face. We already have the ability to assign to _, and outside of a few procs such as print, checking return values is necessary for more reliable software. There are hundreds of these in the codebase, and there should be far more; reversing it, so a flag is required when you want optional results, makes more sense

The usual motivational video to help get the spec written ^ _ ^ https://www.youtube.com/watch?v=ZXsQAXx_ao0

1 Like

You can just use Zig. All your points seem to be directly influenced by Zig if I am not mistaken.

Simplicity is somewhat subjective, and it doesn’t always mean only one way to do something. Another principle of Odin is to “enjoy programming”, which translates to being intuitive and not annoying the programmer by default. IIUC, Odin tries hard to optimize for intuitiveness.

  • @thing and @(thing)

This has an actual use. @(thing1, thing2, thing3=abc) vs @thing.

  • multiple ways to cast

Why are there two ways to do type conversions?

  • unnecessary casts/transmutes int(an_int)

Seems like a missing case. The cast(i32) case is caught by -vet.

I have to know when something allocates since it always requires additional work/thinking (changing the allocator and/or having to deallocate it), so it only does harm to hide the allocator parameter from plain view

Having an allocator parameter is just a guideline, even in something like Zig. You can easily bypass that and do your own thing. Also, a function can take an allocator and just not do allocations (misleading?). It is not easy to enforce such a thing without being a massive PITA. To really know about allocations, you need to use tools (e.g., a tracking allocator) and actually read/understand the code.

Also check Commentary on Friction in Language Design By Ginger Bill

2 Likes

You can just use Zig. All your points seem to be directly influenced by Zig if I am not mistaken.

I have only a surface-level understanding of Zig as it is these days, and wasn’t thinking about it in the slightest.
You should have seen the amount of complaints I spoke to Andrew about regarding Zig when I went to one of the first meetups!

This has an actual use. @(thing1, thing2, thing3=abc) vs @thing.

@(thing) also works—like I said, it’s only there to save a couple characters.

Why are there two ways to do type conversions?

Yes, I’ve read that; this thread is about features I dislike. I don’t agree with the argument that it’s nicer on longer expressions, nor that any perceived niceness justifies having a duplicate feature. I can read Go just fine without it.
I’m not under some delusion Bill will change a single thing by the way—I can’t even get him to add runtime.Kibibyte instead of solely using runtime.Kilobyte to mean 1024 bytes!

Seems like a missing case. cast(i32) case is caught by -vet

I’ve only encountered it with u32(var), so I assumed the rest didn’t work either. I’ve fixed this mistake in the standard library in the past. A bug has been logged.

Having an allocator parameter is just a guideline even in something like Zig. You can easily bypass that and do your own thing. Also, a function can take an allocator and just not do allocations(misleading?). It is not easy to enforce such a thing without being a massive PITA.

The point was that I’d change the standard library so allocations were visible, not to have some sort of compiler restriction. If it’s misleading in the code, it would be a bug (and even less likely with my unused parameter check), as with anything else.

To really know about allocations, you need to use tools (for e.g., a tracking allocator) and actually read/understand the code.

That’s the point—you can’t just read the code, you have to actually sit there and go “Hmm, does the functionality of this procedure mean it allocates?” instead of just skimming the code and seeing an allocator being passed in. It sucks even more when you’re refactoring and moving blocks of code around, if you weren’t vigilant about making them explicit yourself.

Also check Commentary on Friction in Language Design By Ginger Bill

I’ve seen every video with Bill in it (although I may not recall every point haha) =b

The point was I’d change the standard library so allocations were visible

A lot of std lib functions do take an allocator (defaulted to context.temp_allocator/context.allocator, of course), but perhaps not all. But yeah, I guess if you are relying on seeing an ‘allocator’ parameter to guess about allocations, the defaulted parameter won’t work for you.

1 Like

The only pet peeve is not being able to write comments like this:

// this is legal...
if condition
{
}

// this is illegal... :(
else
{
}

I don’t comment often, but sometimes adding a bit of context to each branch is very useful. So I’m forced to do this:

else
{
  // context...
}

which is weird.

1 Like

This is just an unfortunate consequence of the automatic semicolon insertion rules. Adding more edge cases might please some more people, but it makes the rules even worse, and even more likely to introduce possible bugs, e.g. an else without an if.

2 Likes

While I’d also like a file-level directive for require_results, it’s not accurate to claim the application can “magically blow up in my face”. You made a mistake in ignoring a return value and nothing magical is happening.

1 Like

…it’s not accurate to claim the application can “magically blow up in my face”.

Implicitness is quite literally what people mean when they say “magic”, and Odin even mentions how operator overloading is both magical and goes against explicitness—one of Odin’s design goals.
In this case you can’t see that blowing up is even a possibility unless you dive in and actually check each procedure signature manually. And once again, we’re losing code clarity just to save a couple characters.

You made a mistake in ignoring a return value

_ = procedure() not only stands out and is easily greppable, but shows there was thought and a purpose behind ignoring the result, while procedure() gives us absolutely zero information (and we now have to check it).
Don’t forget all of this is greatly amplified when working on a codebase with others.
Leading people towards the correct path to avoid mistakes in the first place is important.
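A small sketch of the difference (compute is a hypothetical procedure, used only to illustrate the point):

```odin
package example

@(require_results)
compute :: proc() -> int {
	return 42
}

main :: proc() {
	_ = compute() // deliberate and greppable: the result is discarded on purpose
	// compute() // without `_ =`, this line is a compile error thanks to @(require_results)
}
```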

2 Likes

@dozn I don’t understand your point. Magic “is used to describe code that handles complex tasks while hiding that complexity to present a simple interface” (from your link). That’s not what is happening here. You’re calling procedures that clearly return values and you’re not checking them. Nothing is being hidden from you.

1 Like

Magic “is used to describe code that handles complex tasks while hiding that complexity to present a simple interface” (from your link). That’s not what is happening here

A procedure with a return value is more complex than one without.
The interface through which you use a procedure appears like so: procedure().
When reading procedure(), the fact that it has a return value is hidden from you, making it appear simpler than it really is.

It’s the exact same thing with operator overloading, where something (possibly) more complex than it appears by reading the code is happening and is hidden from you, which Odin already admits (correctly) is implicit/magical.

You’re calling procedures that clearly return values and you’re not checking them.

Not being able to see that it returns values while reading the code is on the opposite side of the spectrum as “clear”.

Imagine you’re reading a brand new codebase. How can you tell, without digging into every single procedure signature, that they don’t all return something important?

It’s incredibly easy to forget to add @(require_results) to every procedure, which is why implicitness by default is incorrect.
If you fuck up and forget to add @(optional_results) (or whatever you want to call it), it’s immediately obvious the first time you try to ignore the results implicitly.

I’d much rather hear if it interferes with another feature I haven’t thought of, than the peanut gallery’s opinion on the matter.

Nothing is being hidden from you.

If you’re unable to comprehend what information’s hidden between reading _ = procedure() and procedure(), I’m afraid we’re experiencing separate realities and are unable to help each other any further on the subject.