Table-driven test output

Is there a way to label testcases in table-driven tests, so that when one of them fails, the output shows which testcase it was? Something like Go’s t.Run("testcase name", func(t *testing.T) { ... }).

Let’s say I have this test:

@(test)
bulk_string_test :: proc(t: ^testing.T) {
    Testcase :: struct {
        input: string,
        res: string,
        err: bool,
    }

    testcases := []Testcase{
        { input = "5\r\nhello\r\n", res = "hello", err = false },
        { input = "0\r\n\r\n",      res = "",      err = false },
        { input = "5\rhello\r\n",   res = "",      err = true },
        { input = "4\r\nhello\r\n", res = "",      err = true },
    }

    for tt in testcases {
        free_all(context.allocator)

        r := string_to_stream(tt.input, context.allocator)
        res, err := read_bulk_string(r, context.allocator)
        if tt.err {
            testing.expect(t, err != nil)
            continue
        }

        testing.expect_value(t, err, nil)
        testing.expect_value(t, res, tt.res)

        w: bytes.Buffer
        bytes.buffer_init_allocator(&w, 0, 1024, context.allocator)
        testing.expect_value(t, bytes.buffer_to_string(&w), tt.input)
    }
}

And it fails with this message.

[INFO ] --- [2025-07-10 21:16:10] Starting test runner with 1 thread. Set with -define:ODIN_TEST_THREADS=n.
[INFO ] --- [2025-07-10 21:16:10] The random seed sent to every test is: 93478856182770. Set with -define:ODIN_TEST_RANDOM_SEED=n.
[INFO ] --- [2025-07-10 21:16:10] Memory tracking is enabled. Tests will log their memory usage if there's an issue.
[INFO ] --- [2025-07-10 21:16:10] < Final Mem/ Total Mem> <  Peak Mem> (#Free/Alloc) :: [package.test_name]
[ERROR] --- [2025-07-10 21:16:10] [main.odin:138:bulk_string_test()] expected bytes.buffer_to_string(&w) to be 5
hello
, got
[WARN ] --- [2025-07-10 21:16:10] <       70B/   1.20KiB> <   1.07KiB> (    0/    9) :: main.bulk_string_test
        +++ leak        56B @ 0x128420038 [main.odin:143:string_to_stream()]
        +++ leak        10B @ 0x128420078 [main.odin:144:string_to_stream()]
        +++ leak         4B @ 0x12842008A [main.odin:87:read_bulk_string()]
main  [|                       ]         1 :: [package done] (1 failed)

Finished 1 test in 635µs. The test failed.
 - main.bulk_string_test        expected bytes.buffer_to_string(&w) to be 5
hello
, got

Currently I need to look at the error message and guess which testcase the output corresponds to. But in some cases that’s not possible.

I can also see a workaround: include a description in the msg argument of every testing.expect. But that won’t work for testing.expect_value, because it doesn’t have a msg argument…

Another workaround would be to add log.info("running testcase", i) at the beginning of the loop:

[INFO ] --- [2025-07-10 21:40:33] Starting test runner with 1 thread. Set with -define:ODIN_TEST_THREADS=n.
[INFO ] --- [2025-07-10 21:40:33] The random seed sent to every test is: 93513973629725. Set with -define:ODIN_TEST_RANDOM_SEED=n.
[INFO ] --- [2025-07-10 21:40:33] Memory tracking is enabled. Tests will log their memory usage if there's an issue.
[INFO ] --- [2025-07-10 21:40:33] < Final Mem/ Total Mem> <  Peak Mem> (#Free/Alloc) :: [package.test_name]
[INFO ] --- [2025-07-10 21:40:33] [main.odin:128:bulk_string_test()] running testcase: 0
[INFO ] --- [2025-07-10 21:40:33] [main.odin:128:bulk_string_test()] running testcase: 1
[INFO ] --- [2025-07-10 21:40:33] [main.odin:128:bulk_string_test()] running testcase: 2
[INFO ] --- [2025-07-10 21:40:33] [main.odin:128:bulk_string_test()] running testcase: 3
[INFO ] --- [2025-07-10 21:40:33] [main.odin:128:bulk_string_test()] running testcase: 4
[INFO ] --- [2025-07-10 21:40:33] [main.odin:128:bulk_string_test()] running testcase: 5
[WARN ] --- [2025-07-10 21:40:33] <       56B/   2.37KiB> <   1.07KiB> (    0/   15) :: main.bulk_string_test
        +++ leak        56B @ 0x140008038 [main.odin:149:string_to_stream()]
main  [|                       ]         1 :: [package done]

Finished 1 test in 8.49ms. The test was successful.

Logging the testcase index separately is ill-advised due to the parallel nature of the test runner; log lines from concurrent tests interleave, so couple error messages with their source instead.

Use testing.expectf, or wrap it.
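
For example, testing.expectf takes a format string and arguments, so the testcase index and input can go straight into the failure message. A sketch adapting the loop from the question (string_to_stream and read_bulk_string are the original poster’s procedures):

@(test)
bulk_string_test :: proc(t: ^testing.T) {
    Testcase :: struct {
        input: string,
        res: string,
        err: bool,
    }

    testcases := []Testcase{
        { input = "5\r\nhello\r\n", res = "hello", err = false },
        { input = "5\rhello\r\n",   res = "",      err = true },
    }

    // `for tt, i in testcases` also yields the index, which labels each failure.
    for tt, i in testcases {
        free_all(context.allocator)

        r := string_to_stream(tt.input, context.allocator)
        res, err := read_bulk_string(r, context.allocator)

        if tt.err {
            testing.expectf(t, err != nil, "testcase %d (%q): expected an error, got nil", i, tt.input)
            continue
        }

        testing.expectf(t, err == nil, "testcase %d (%q): unexpected error: %v", i, tt.input, err)
        testing.expectf(t, res == tt.res, "testcase %d (%q): expected %q, got %q", i, tt.input, tt.res, res)
    }
}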

We use this pattern in several places throughout the test suite, but it’s simple enough that it doesn’t need a generic implementation. You can make your own, tailored to your needs.

This is the pattern: iterate over the sub-cases in the test, and when one fails, emit an error log message or fail by calling one of the expect* procedures, then return. Any error or fatal log message causes the test to fail too, not just expect*.
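
A minimal sketch of that pattern, assuming the same Testcase struct and procedures as in the question; an error-level log message marks the test failed, and the early return stops at the first failing sub-case so later output can’t be confused with it:

import "core:log"
import "core:testing"

@(test)
bulk_string_pattern_test :: proc(t: ^testing.T) {
    Testcase :: struct {
        input: string,
        res: string,
        err: bool,
    }

    testcases := []Testcase{
        { input = "5\r\nhello\r\n", res = "hello", err = false },
        { input = "4\r\nhello\r\n", res = "",      err = true },
    }

    for tt, i in testcases {
        r := string_to_stream(tt.input, context.allocator)
        res, err := read_bulk_string(r, context.allocator)

        if (err != nil) != tt.err {
            // An error log fails the test; return so no further sub-cases run.
            log.errorf("testcase %d (%q): err = %v, expected err: %v", i, tt.input, err, tt.err)
            return
        }
        if !tt.err && res != tt.res {
            log.errorf("testcase %d (%q): got %q, expected %q", i, tt.input, res, tt.res)
            return
        }
    }
}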
