Just wondering what the best practice is regarding I²C register maps in C, or rather, what other people often use/prefer.
Up to this point, I have usually done lots of defines: one for every register and one for all the bits, masks, shifts, etc. However, lately I've seen some drivers use (possibly packed) structs instead of defines. I think these were Linux kernel modules.
Anyway, they would do something like:
struct i2c_sensor_fuu_registers {
    uint8_t  id;
    uint16_t big_register;
    uint8_t  another_register;
    /* ... */
} __attribute__((packed));
Then they'd use offsetof (or a macro around it) to get the I²C register address and sizeof for the number of bytes to read.
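If I understood those drivers correctly, it ends up looking roughly like the sketch below (the i2c_read() helper and its signature are my own invention, just to show the idea):

#include <stddef.h>   /* offsetof */
#include <stdint.h>

struct i2c_sensor_fuu_registers {
    uint8_t  id;
    uint16_t big_register;
    uint8_t  another_register;
} __attribute__((packed));

/* register address = offset of the member in the packed struct */
#define FUU_REG(member)      offsetof(struct i2c_sensor_fuu_registers, member)
/* transfer length = size of the member */
#define FUU_REG_SIZE(member) sizeof(((struct i2c_sensor_fuu_registers *)0)->member)

/* hypothetical bus helper, assumed here for illustration only */
int i2c_read(uint8_t dev_addr, uint8_t reg, uint8_t *buf, size_t len);

static int read_big_register(uint8_t dev_addr, uint8_t *buf)
{
    /* neither the offset nor the length is a magic number */
    return i2c_read(dev_addr, FUU_REG(big_register),
                    buf, FUU_REG_SIZE(big_register));
}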
I find that both approaches have their merit:
struct approach:
- (+) Register offsets are all logically contained inside a struct instead of having to spell each register out in a define.
- (+) Entry sizes are explicitly stated using a data type of appropriate size.
- (-) This doesn't account for bit fields, which are widely used.
- (-) This doesn't account for register maps that aren't byte-mapped (e.g. the LM75), where one reads 2 bytes from offset n+0x00, yet n+0x01 is another register, not the high/low byte of register n+0x00.
- (-) This doesn't account for large gaps in the address space (e.g. registers at 0x00, 0x01, 0x80, 0xAA, nothing in between) and (I think?) relies on compiler optimization to get rid of the struct; a sketch of what such a gap forces follows after this list.
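To make that last point concrete, a map with registers at 0x00, 0x01, 0x80 and 0xAA would force dummy filler members just to keep the offsets right (register names invented):

struct i2c_sensor_bar_registers {
    uint8_t id;                 /* 0x00 */
    uint8_t config;             /* 0x01 */
    uint8_t reserved0[0x7E];    /* 0x02..0x7F, nothing there */
    uint8_t status;             /* 0x80 */
    uint8_t reserved1[0x29];    /* 0x81..0xA9, nothing there */
    uint8_t data;               /* 0xAA */
} __attribute__((packed));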
define approach:
- (+) Each register, along with its bits, is usually defined in one block (roughly the style sketched after this list), which makes finding the right symbol easy as long as the naming convention holds.
- (+) Transparent/unaware of address space gaps.
- (-) Each register has to be defined individually, even when there are no gaps.
- (-) Because defines tend to be global, the names are usually very long, somewhat littering the source code with dozens of long symbol names.
- (-) Sizes of data to read are usually either hard-coded magic numbers or (end - start + 1) style computations with possibly long symbol names.
- (o) Transparent/unaware of data size vs. address in map.
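For comparison, my define style for the same imaginary fuu sensor ends up looking roughly like this (all names invented):

#define FUU_REG_ID                  0x00
#define FUU_REG_BIG                 0x01        /* 16 bits, spans 0x01..0x02 */
#define FUU_REG_ANOTHER             0x03

#define FUU_REG_ANOTHER_ENABLE      (1 << 0)
#define FUU_REG_ANOTHER_MODE_SHIFT  1
#define FUU_REG_ANOTHER_MODE_MASK   (0x3 << FUU_REG_ANOTHER_MODE_SHIFT)

/* read length is a magic number or an (end - start + 1) computation */
#define FUU_REG_BIG_LEN             2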
Basically, I'm looking for a smarter way to handle these cases. I often find myself typing lots of agonizingly long symbol names for every register, every bit, and possibly masks and shifts as well (the latter two depending on the data type), only to end up using just a few of them (but hating to have to define missing symbols later, which is why I type them all in one session). Even then, the sizes of reads/writes are mostly magic numbers, and understanding even the most basic interaction usually requires reading the datasheet and the source code side by side.
I wonder how other people handle these kinds of situations? I found some examples online where people also arduously typed out every single register, bit, etc. in a big header, but nothing quite definitive... Neither of the two options above seems too smart at this point :(