Consider this program:

#include <stdio.h>

int main()
{
    float f = 11.22;
    double d = 44.55;
    int i, j;

    i = f;  // implicit conversion (truncation) from float to int
    j = d;  // implicit conversion (truncation) from double to int

    printf("i = %d, j = %d, f = %d, d = %d", i, j, f, d);
    // This prints the following:
    // i = 11, j = 44, f = -536870912, d = 1076261027
    return 0;
}
Can someone explain why the conversion from double/float to int works correctly in the assignments, but not when attempted through printf with %d?
This program was compiled with gcc 4.1.2 on a 32-bit Linux machine.
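For contrast, here is a minimal variant (my own sketch, not part of the original program) that uses matching conversion specifiers. %f expects a double, and a float argument is promoted to double when passed to a variadic function such as printf, so both floating-point values print as expected:

#include <stdio.h>

int main(void)
{
    float f = 11.22f;
    double d = 44.55;
    int i = (int)f;   /* explicit conversion; truncates toward zero */
    int j = (int)d;

    /* %d reads an int, %f reads a double; the float f is promoted
       to double before printf ever sees it, so %f is correct here. */
    printf("i = %d, j = %d, f = %f, d = %f\n", i, j, f, d);
    return 0;
}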
EDIT:
Zach's answer seems logical: printf uses the format specifiers to work out what to pop off the stack. However, consider this follow-up question:
#include <stdio.h>

int main()
{
    char c = 'd'; // sizeof c is 1, but the character literal 'd'
                  // has type int, i.e. sizeof(int), in ANSI C

    printf("lit = %c, lit = %d , c = %c, c = %d", 'd', 'd', c, c);
    // This prints: lit = d, lit = 100 , c = d, c = 100
    // How does printf pop the right number of bytes here, even though
    // the sizes implied by the format specifiers don't match the sizes
    // of the passed arguments (char is 1 byte, the character literal is 4)?
    return 0;
}
How does this work?
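As a rough illustration of the default argument promotions (a hypothetical sketch, not from the original post): any char or short argument passed to a variadic function arrives at the callee as an int, which is why it must be read back with va_arg(ap, int) rather than va_arg(ap, char):

#include <stdarg.h>
#include <stdio.h>

/* Toy variadic function: each char argument has already been
   promoted to int by the caller, so we retrieve it as an int. */
static void print_chars(int count, ...)
{
    va_list ap;
    int n;

    va_start(ap, count);
    for (n = 0; n < count; n++) {
        int ch = va_arg(ap, int);   /* not va_arg(ap, char) */
        printf("%c (%d)\n", ch, ch);
    }
    va_end(ap);
}

int main(void)
{
    char c = 'd';
    print_chars(2, 'd', c);   /* both arguments reach print_chars as int */
    return 0;
}

Under this view, %c and %d in the original program both end up reading an int-sized argument, which is why the mismatch between sizeof(char) and sizeof(int) causes no problem.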