This is a weird one. I have to apologize for not being able to provide an example that does not include code from a foreign C library; I was unable to create a more generic example that reproduces this issue.
Quick Info:
- OpenSSL client/server application
- Odin version `dev-2025-08` for both client and server
- built using `odin run . --debug` for both client and server, so nothing fancy
- client: Windows 10 amd64
- server: Debian 12 (bookworm) arm64 (Pi5)
The Issue
Whenever the server receives a message from the client, the first four bytes of the server's buffer are always zero, with the rest of the buffer matching expectations. If the client sends the message “Hello from the otherside.” as the byte stream [72, 101, 108, 108, 111, 32, 102, 114, 111, 109, 32, 116, 104, 101, 32, 111, 116, 104, 101, 114, 115, 105, 100, 101, 46], the server will receive the message “^@^@^@^@o from the otherside.” as the byte stream [0, 0, 0, 0, 111, 32, 102, 114, 111, 109, 32, 116, 104, 101, 32, 111, 116, 104, 101, 114, 115, 105, 100, 101, 46]. The first four bytes are always zero regardless of the number of calls to the receive procedure and the time delta between procedure calls.
It is important to note that this is only an issue on the server side. A client receiving messages from the server using the method in `example_01` does not see any missing/corrupted data.
OpenSSL Details:
The client and server both use the function `SSL_read_ex` to receive buffers. Here is some additional context for the binding used in the examples below:
// c declaration
__owur int SSL_read_ex(SSL *ssl, void *buf, size_t num, size_t *readbytes);
// odin proc binding
ssl_read_ex :: proc(ssl: SSL, buf: [^]byte, num: c.size_t, read: ^c.size_t) -> c.int ---
Ruling out OpenSSL:
I validated the client buffer is staged correctly using the RAD Debugger:
- the buffer address is correctly staged for the proc
- the internal memory for the OpenSSL structures contain the valid data prior to transmission from the client
- no issue with the client using the technique from `example_01` to receive buffers from the server
Validation of the server memory was a little more tricky for me:
- I am sshing into the Pi, and the Linux CLI debugging tools are tricky for me to navigate in that environment (if anyone has any remote-debug workflow suggestions, please share; lldb and gdb via the CLI are not especially great for me…)
- things appear to be in good standing
- 95% sure it’s not OpenSSL “server” side
Attempts Using Fixed Arrays as the Backing Buffer:
First I tried taking a slice of the fixed array and getting the multi-pointer. This works fine on the client machine, but not on the server.
// example_01
buffer: [1024]byte
read: c.size_t
ossl.ssl_read_ex(ssl_handle, raw_data(buffer[:]), len(buffer), &read)
fmt.printf("client_message: %v\n", buffer[:read])
// client_message: [0, 0, 0, 0, 111, 32, 102, 114, 111, 109, 32, 116, 104, 101, 32, 111, 116, 104, 101, 114, 115, 105, 100, 101, 46]
Next I tried being a bit more explicit, in case my understanding of `raw_data()` was inaccurate. This still produced the same results, which was not entirely surprising to me.
// example_02
buffer: [1024]byte
read: c.size_t
ossl.ssl_read_ex(ssl_handle, transmute([^]byte) &buffer[0], len(buffer), &read)
fmt.printf("client_message: %v\n", buffer[:read])
// client_message: [0, 0, 0, 0, 111, 32, 102, 114, 111, 109, 32, 116, 104, 101, 32, 111, 116, 104, 101, 114, 115, 105, 100, 101, 46]
In a brute-force attempt, I offset the buffer by four bytes, and everything now works as expected. I tried this thinking that maybe on some platforms fixed arrays might have some protected/padded memory at the front of the buffer. Looking through the documentation and implementation, I could not find anything to support this idea, but the expected behavior was achieved.
// example_03
buffer: [1024]byte
buffer_ptr := raw_data(buffer[4:])
buffer_len := c.size_t(len(buffer) - 4)
read: c.size_t
ossl.ssl_read_ex(ssl_handle, buffer_ptr, buffer_len, &read)
fmt.printf("client_message: %v\n", buffer_ptr[:read])
// client_message: [72, 101, 108, 108, 111, 32, 102, 114, 111, 109, 32, 116, 104, 101, 32, 111, 116, 104, 101, 114, 115, 105, 100, 101, 46]
The Curious Case:
// example_04
buffer: [1024]byte
buffer_ptr := raw_data(buffer[4:])
read: c.size_t
ossl.ssl_read_ex(ssl_handle, raw_data(buffer[:]), len(buffer), &read)
fmt.printf("client_message: %v\n", buffer[:read])
// client_message: [72, 101, 108, 108, 111, 32, 102, 114, 111, 109, 32, 116, 104, 101, 32, 111, 116, 104, 101, 114, 115, 105, 100, 101, 46]
Something strange then occurred while commenting/uncommenting various versions of the examples. After this seemingly arbitrary slicing of the initial buffer, `example_01` now behaves as expected. The multi-pointer `buffer_ptr` is never used/referenced aside from its initialization, so I’m not sure why it would change the behavior of `buffer`; I would have thought the compiler might even optimize that line away. Remove the line initializing `buffer_ptr`, and the first four bytes return to always being zero.
Closing Thoughts:
I am left with the following possibilities for why this behavior occurs:
- I have a fundamental misunderstanding of how `raw_data()` and multi-pointers interact with fixed arrays
- This is a misuse of fixed arrays and another technique should be used
- This is some combination of issues with Debian, OpenSSL, and/or arm64
- This is a weird bug/edge case for Odin
- Something else entirely different is occurring