Integer literals like `1` in C code are of type `int`, provided the value fits in an `int`; `int` is the same type as `signed int`. Adding the suffix `u` or `U` (they are equivalent) to a literal makes it an `unsigned int`, which prevents various unexpected bugs and strange behavior.
One example of such a bug: on a 16-bit machine where `int` is 16 bits, this expression produces a negative value:

    long x = 30000 + 30000;

Both `30000` literals are `int`, and since both operands are `int`, the addition is carried out in `int`. A 16-bit signed `int` can only hold values up to 32767, so the addition overflows (which is undefined behavior for signed types). Instead of the expected 60000, `x` ends up with a strange negative value.
The code

    long x = 30000u + 30000u;

however behaves as expected: both operands are `unsigned int`, and 60000 fits comfortably in a 16-bit `unsigned int`, whose maximum value is 65535.