I just answered this question, which asked why iterating up to 10 billion in a for loop takes so much longer (the OP actually aborted it after 10 minutes) than iterating up to 1 billion:
for (i = 0; i < 10000000000; i++)
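For context, a minimal compilable version of that loop might look like the sketch below (the declaration of i was not shown in the question, so this assumes, as the answers did, a plain 32-bit int):

```c
#include <stdio.h>

int main(void)
{
    long long sum = 0;

    /* Assumed declaration: the question only showed the for line.
       With a 32-bit int, i can never reach 10000000000. */
    int i;

    for (i = 0; i < 10000000000; i++)
        sum++;

    /* Never reached on typical implementations: the loop effectively
       runs forever (and incrementing i past INT_MAX is, strictly
       speaking, undefined behaviour). */
    printf("%lld\n", sum);
    return 0;
}
```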
Now my answer, like that of many others, was the obvious one: the iteration variable is 32-bit, can therefore never reach 10 billion, and the loop consequently becomes infinite.
But even though I recognized the problem, I still wonder what is really going on inside the compiler.
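Just to make the magnitude explicit, here is a quick check (my own illustration, not part of the original question) of how far a 32-bit int is from 10 billion:

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* On the usual 32-bit-int platforms INT_MAX is 2147483647,
       roughly a fifth of 10000000000, so the condition
       i < 10000000000 can never become false for an int i. */
    printf("INT_MAX     = %d\n", INT_MAX);
    printf("fits in int? %s\n", 10000000000 > INT_MAX ? "no" : "yes");
    return 0;
}
```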
Since the literal is not suffixed with an L, it should IMHO be of type int, too, and therefore 32-bit. Due to overflow it would then be a normal int value within the reachable range. To actually recognize that the bound cannot be reached by an int, the compiler has to know that it is 10 billion and hence treat it as a constant wider than 32 bits.

Does such a literal automatically get promoted to a fitting (or at least implementation-defined) type (at least 64-bit, in this case), even without an L suffix, and is this standard behaviour? Or is something different going on behind the scenes, such as undefined behaviour due to overflow (is integer overflow actually UB)? Quotes from the Standard would be welcome, if there are any relevant ones.
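For what it's worth, an empirical check like the following (a C11 _Generic sketch of my own, which of course only shows what one particular implementation does, not what the Standard mandates) can reveal which type the unsuffixed literal actually ends up with:

```c
#include <stdio.h>

/* Map the type of an expression to a printable name (C11 _Generic). */
#define TYPE_NAME(x) _Generic((x),                    \
    int:                "int",                        \
    long:               "long",                       \
    long long:          "long long",                  \
    unsigned long long: "unsigned long long",         \
    default:            "something else")

int main(void)
{
    printf("1000000000  -> %s, %zu bytes\n",
           TYPE_NAME(1000000000),  sizeof 1000000000);
    printf("10000000000 -> %s, %zu bytes\n",
           TYPE_NAME(10000000000), sizeof 10000000000);
    return 0;
}
```

On an LP64 system (e.g. 64-bit Linux) this typically prints int / 4 bytes for the first constant and long / 8 bytes for the second, but the Standard-level rule behind that is exactly what I would like spelled out.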
Although the original question is about C, I would also appreciate C++ answers, if they differ.