What features of Odin do you dislike?

If you go back to the times when C compilers didn’t even have a warning for truncation, you’d know the pain of chasing missing bits in your bitmask, only to realize that an unsigned long long got passed through a function that took an unsigned int, throwing away all the top bits and resulting in a week of furious debugging over whether the hardware was borked or not. :slightly_smiling_face:

All hail explicit casting… and even more love for explicit endianness. :heart:
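For comparison, here is a minimal Odin sketch of that situation (names made up for illustration): the implicit narrowing that old C compilers allowed silently is a compile error in Odin, so the truncation has to be spelled out at the call site.

```odin
package example

main :: proc() {
	big: u64 = 0xDEAD_BEEF_CAFE

	// small: u32 = big      // compile error: no implicit conversion in Odin
	small := u32(big)        // truncation is explicit and visible where it happens

	_ = small
}
```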


I do not need a time machine; I am normally programming in languages that do not require explicit casting, and so are most people. I do not think it is a good argument to bring up an anecdote about a bug, because you can do that with anything, including, obviously, manual memory management. These things are tradeoffs, and I think they should be justified by an argument of net benefit, rather than potential benefit in some (possibly rare) cases.

True, but it’s also a faulty conclusion to say that “most people are used to not having type conversions, so it’s fine”. Unfortunately, the majority of software today is written in either Python or JavaScript, which don’t even have types (on the surface, at least). And now both have typed variants because of all the mess untyped programming languages come with… which, if you think about it, is the extreme of “auto casting”.

And beyond anecdotes, there’s an entire class of CVEs called “type confusion”. It’s not entirely about number casting, but in a lot of cases these CVEs are triggered by truncation, mismatches in signed/unsigned assignment, etc.

It’s a common enough issue in C/C++ that if you want a language in the same niche, you add explicit casting to at least partially guard against it.


Not a language feature, but I dislike that printf-like format string literals are not validated by the compiler.

I personally screw them up often enough that I’ve started hacking on the compiler to try to add it.

Part of the reason they are not validated is because as a user, you can even override that behaviour at runtime. We could add some intrinsics which give warnings for such things, but it’s not going to be perfect.

Also, prefer %v by default, if you even need to do printf-style printing at all.
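A small sketch of that advice, with hypothetical values: %v picks a sensible representation for any type, and fmt.println avoids the format string entirely.

```odin
package example

import "core:fmt"

main :: proc() {
	v := [3]f32{1, 2, 3}

	fmt.printf("%v\n", v) // %v formats any value sensibly
	fmt.println(v)        // or skip the format string entirely
}
```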


Well it’s not really a feature cause it ain’t there…

But I really dislike that Odin does not have a build system.
I can hear Bill’s voice in my head saying “Turns out you don’t really need a build system…”. I just disagree on that.

Even the example repository has .bat and .sh files, so now that is your build system.
This often has me writing batch files and hoping they will work, because I’m currently not on Windows (or the other way around).

Also, I just compiled and got something like ld: open() failed, errno=2 for .... because a folder in my build directory didn’t exist. Now I need to look up how to make sure folders exist, and so on.

For example, this is a line from the example repository:

for /f "delims=" %%i in ('odin.exe root') do set "ODIN_ROOT=%%i".

bah…
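For what it’s worth, creating the missing folder up front can be done from Odin itself. A minimal sketch, assuming core:os’s make_directory (the exact signature and error type vary by Odin version and platform, and it only creates a single directory level):

```odin
package example

import "core:os"

main :: proc() {
	// Create the output folder before invoking the compiler/linker.
	// An already-existing directory is typically reported as an error,
	// which is usually fine to ignore for a build step.
	if err := os.make_directory("build"); err != nil {
		// handle or ignore, depending on your needs
	}
}
```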


I’m not against build systems, but rather Odin is trying to solve the 99% case.

And I have no issue with people needing an external build system if it is necessary. But is the argument that you want the “build system” language to be Odin too, rather than batch/shell/Python/Lua/etc.? If so, sure, but what do you actually need in order to do that in Odin? You probably have all the tools you need already.

And the for /f "delims=" %%i in ('odin.exe root') do set "ODIN_ROOT=%%i" example is more to do with batch being a crap language than “Odin needing a build system”.

da: [dynamic]string
append(&da, "This works!")
hm: map[string]int
hm["This also works!"] = 1

I realize Odin tries to make the zero-value useful, but this behavior is so unexpected (to me) that I’d rather it be an exception to the rule. Building up one of these structures in one spot of the code, then “moving” them to another place (assign to new struct, then nilling the original) is a pattern that will come naturally for people working in other languages; they will get bitten by this.

Personally, I don’t see any benefit to using these things without explicit initialization. I’d bet that if this behavior were toggled by the dynamic-literals flag, you’d have fewer surprised people (but at that point the flag isn’t well named, etc.).
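To make the bite concrete, here is a minimal sketch of the “move then nil” pattern described above (illustrative names only):

```odin
package example

import "core:fmt"

main :: proc() {
	a: [dynamic]int
	append(&a, 1)

	b := a  // copies the header; b now refers to the same backing memory
	a = nil // the "move" pattern familiar from other languages

	// No error here: appending to a nil dynamic array silently
	// allocates a fresh one instead of failing like a moved-from value.
	append(&a, 2)

	fmt.println(len(a), len(b)) // 1 1

	delete(a)
	delete(b)
}
```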

Fair. I have to say I haven’t put any time into seeing how I can tailor it to my needs, because it’s hardly worth spending time on. I am working on a project where at some point it might be worth spending a bit more time on it. Python might indeed be a good candidate.

I mean, you could just take the nob.h approach and compile Odin with Odin. That’s what I do in my project, whose build needs outgrew simple shell scripts because I needed to create a meta program that uses the core:odin/ast package. The added bonus was that the engine, game library, and editor library are now also built in parallel, which sped up the whole process. Here’s the repo. I’m only posting this to give you an idea/inspiration. You shouldn’t use this in your own project, because the library isn’t battle-tested (for one, I assume that the user of this library won’t free the temporary allocator in their build “script”; I mean, why would anyone?).
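The simplest form of that approach is just an Odin program that shells out to the compiler. A minimal sketch, assuming core:c/libc’s system and an entirely hypothetical source directory and output name:

```odin
package build

import "core:c/libc"
import "core:fmt"

main :: proc() {
	// Hypothetical target; substitute your own package path and flags.
	if libc.system("odin build src -out:game") != 0 {
		fmt.eprintln("build failed")
	}
}
```

Run it with something like odin run build.odin -file; from there, platform checks, directory creation, and parallel builds are all just ordinary Odin code.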


I don’t like that some procedures in the standard library use context allocators implicitly, e.g. log.info calls fmt.tprintf, which implicitly uses context.temp_allocator. There isn’t even an allocator := context.allocator in the signature, so #vet explicit-allocators won’t detect it.

Regarding fmt.tprint* specifically, their name itself encodes the context.temp_allocator bit in them. That’s what the prefix of t is for.

As for others like log.info, I agree it is unclear that they use fmt.tprint* internally, but part of that is because they also log, and the logger is also stored on the context. So there is not much you can really do unless you remove the point of the context too. And if you don’t override context.temp_allocator, everything is “freed” automatically anyway, so no memory leaks.
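For reference, the usual pattern is to reclaim temporary allocations wholesale at a convenient boundary (a frame, a request, a loop iteration); a small sketch:

```odin
package example

import "core:fmt"

main :: proc() {
	for frame in 0..<3 {
		s := fmt.tprintf("frame %d", frame) // allocated with context.temp_allocator
		fmt.println(s)

		// Reclaim every temp allocation made this iteration in one call.
		free_all(context.temp_allocator)
	}
}
```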


I had a problem with exactly this. I used -default-to-panic-allocator with #vet explicit-allocators, hoping to track all allocations manually, but got a panic from the log package. So it feels like the current log package doesn’t suit the panic-allocator approach well… Is it possible to have allocator := context.temp_allocator in the logging proc signatures in the future? :pleading_face:


I like the flags package for many reasons, especially the flags.parse procedure.

I’m not fond of the flags.write_usage procedure, in that the printed format is not modifiable and for my taste uses too much white space and extra characters. I’d prefer if there was a procedure that returns a structure that contains all the validated data without the extra formatting. One that does all the same things write_usage does (sort, validate, determine required, min, max, range, types, usage description, flag names, etc), but does not print. This way I could create my own uniform usage printing format from an expected structure that could be reusable across projects.

As a side note, if this is considered as a request, I’d like it if the sort had options like: ([sort only position], [sort position and required], [sort position and required and alphanumeric], etc), where the ignored category of sort-ables are left in their relative position.

Something like []Arg_Tag, where the length of the array is the number of fields in the Args :: struct.

Arg_Tag :: struct {
	style:    enum {Odin, Unix},
	name:     string, // from subtag args:"name=some_name" || field name
	stype:    string, // string derived from nested type info and named types, (?indistinct?) etc.
	rtype:    ^runtime.Type_Info, // for validation procs ??
	usage:    string, // from subtags || empty
	file:     string, // from subtags || default || empty if type is not a file
	perms:    string, // from subtags || default || empty if type is not a file
	min:      int, // from subtags || -1, and/or determined by combinations of range, etc.
	max:      int, // from subtags || -1, and/or determined by combinations of range, etc.
	pos:      int, // from subtags || -1
	manifold: int, // from subtags || -1 -> .Unix only
	hidden:   bool, // from subtags, true if present
	required: bool, // from subtags, true if present
	overflow: bool, // from subtags, true if present
}

This could have the added benefit that both flags.parse and flags.write_usage (and other procedures) in core:flags could validate from the same structure, without either having to reinvent the wheel on its own, as they appear to do now.


Just a minor thing:
I don’t like that I cannot use the ..array syntax on non-variadic procedures, even if it could “technically” work, e.g.

draw_stuff :: proc(x, y, z: f32) {
	// do something
}

vector := [3]f32{1, 2, 3}
draw_stuff(..vector) // not allowed

It’s because .. just states “treat this slice as the variadic parameters”, meaning you have to slice it first e.g. draw_stuff(..vector[:]).
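A runnable sketch of that answer, using a hypothetical variadic procedure: the spread works once the parameter is variadic and the fixed array has been sliced.

```odin
package example

import "core:fmt"

sum :: proc(values: ..f32) -> (total: f32) {
	for v in values {
		total += v
	}
	return
}

main :: proc() {
	vector := [3]f32{1, 2, 3}

	// A fixed array is not a slice, so slice it before spreading:
	fmt.println(sum(..vector[:])) // spreads the slice into the variadic parameter
}
```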