Looking for Feedback on Approach

I have created a library that I'm starting to use in all my projects, and which I really enjoy using. Before I move forward with more projects that may use it, I've realized I need to make a potentially code-breaking change in the library. I would very much appreciate some opinions to help me decide.

Please bear with me as I try to frame the situation and what I believe are the only two options.

I have the following structures, which are not going to change:

Large_Struct :: struct {
	item01: [3]u8,
	// has about 30 elements
	item30: [3]u8
}

ID :: enum {
	ONE,
	TWO,
	// Could go up to 50 or more
}

Option 1
Now for the part I think needs to change. To access one of potentially 50 or more definitions of this Large_Struct, I have the following. The concern is that the static global array may grow to upwards of 50 or more definitions, becoming a mega-array of Large_Structs.

// Could go up to 50 or more
Definitions :: [ID]Large_Struct {
	.ONE = {
		item01 = {},
		item30 = {},
	},
	.TWO = {
		item01 = {},
		item30 = {},
	}
}

Option 2
Would it be better to instead define each definition in its own procedure, and then reference those by ID? Would this save on memory allocations when using the library? Are we talking only read-only memory, or are there other considerations? Or would I just be trading one approach for another and gaining nothing?

// Each ID instead defined in its own procedure that returns the Large_Struct
id_one :: proc() -> Large_Struct { return {} }
id_two :: proc() -> Large_Struct { return {} }

// Get Large_Struct based on ID
get_definition :: proc(id: ID) -> (ls: Large_Struct) {
	switch id {
	case .ONE: ls = id_one()
	case .TWO: ls = id_two()
	}
	return
}

Firstly, I bet you want to do the following:

@(rodata)
Definitions := [ID]Large_Struct { 
    ...

Otherwise, it depends on what you are trying to do. If each instance is the same, I'd go for the look-up-table (LUT) approach (i.e. Option 1). Option 2 only makes sense if you need to do other kinds of logic.

When in doubt, prefer a LUT.
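
For concreteness, here is a minimal sketch of what that full @(rodata) LUT could look like, reusing the ID and Large_Struct from the original post; the field values are placeholder assumptions:

@(rodata)
Definitions := [ID]Large_Struct {
	.ONE = {
		item01 = {1, 2, 3}, // placeholder values
		item30 = {},
	},
	.TWO = {
		item01 = {4, 5, 6}, // placeholder values
		item30 = {},
	},
}

// Lookup is a plain index into read-only data; nothing is allocated at runtime.
get_definition :: proc(id: ID) -> Large_Struct {
	return Definitions[id]
}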


Can I offer a mix of the two as a third option? I've used this approach in the beginnings of an opcode lookup table, and while quite verbose, it can be packaged in its own file.

You essentially create an array of function pointers at startup so that ‘heavy’ lifting is done upfront and then index into the array to grab the one you need when you need it.

Large_Struct :: struct {
	item01: [3]u8,
	// has about 30 elements
	item30: [3]u8,
}

// enums default to int; backing with u8 gives you 256 values in 1 byte instead of 8 bytes
ID :: enum u8 {  
	ONE,
	TWO,
}

Large_Struct_Handler :: proc() -> Large_Struct // the signature can take parameters (e.g. proc(something: Thing)) as long as the procedures you assign to it match

id_one :: proc() -> Large_Struct {return {}}
id_two :: proc() -> Large_Struct {return {}}

large_struct_db: [ID]Large_Struct_Handler

init_large_struct_db :: proc() {
	// Run this once at startup.
	// If you use ols, it will raise errors if you miss an enum entry.
	large_struct_db[.ONE] = id_one
	large_struct_db[.TWO] = id_two
}

large_struct_handler := large_struct_db[.ONE]
my_struct := large_struct_handler() // this is also how you would call it with arguments, e.g. large_struct_handler(thing = something)
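
And a rough sketch of how this third option might be driven end to end; the main procedure here is just an assumed entry point for illustration, since the last two lines above need to run inside a procedure after the table has been initialized:

main :: proc() {
	init_large_struct_db()           // build the handler table once, up front

	handler := large_struct_db[.TWO] // index by ID when a definition is needed
	my_struct := handler()           // call the stored procedure to get the Large_Struct
	_ = my_struct
}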