Description
The Orcc simulator and the C backend disagree about compositions of arithmetic operators on 32-bit and 64-bit integers. The simulator is correct; the C backend is not. For example:
my_action: action ==>
var
    uint(size=32) a,
    uint(size=32) b,
    uint(size=64) c,
    uint(size=32) d
do
    a := 1073741824;
    b := 8;
    c := a * b;
    d := c / 8;
    println("d: " + d);
end
The range of 32-bit integers is -2147483648..2147483647, so the number 1073741824 in variable a is within the 32-bit boundary. Multiplying it by 8 gives 8589934592, which does not fit within the 32-bit range but does fit in the 64-bit range of -9223372036854775808..9223372036854775807. Dividing 8589934592 by 8 to compute the value for d gives 1073741824, again within the 32-bit range. Moreover, the CAL type declaration says that d is a 32-bit integer, so the CAL type declarations are correct for all four integers.
When this action is fired by the simulator, the Java-based CAL interpreter prints the expected output:
d: 1073741824
However, the C version of this CAL action prints:
d: 0
The problem stems from the fact that the C backend of the Orcc compiler does not insert type casts when values move between 32-bit and 64-bit integers. The corresponding C is:
static void my_action() {
    u32 a;
    u32 b;
    u64 c;
    u32 d;
    a = 1073741824;
    b = 8;
    c = a * b;
    d = c / 8;
    printf("d: %u\n", d);
}
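The reason this generated C prints 0 is C's arithmetic conversion rules: because both operands are u32, the multiplication a * b is performed in 32-bit arithmetic and wraps modulo 2^32 to 0 before the result is widened and stored in the 64-bit variable c. A minimal standalone reproduction of this behaviour, written with <stdint.h> types (the u32/u64 typedefs are assumed here to be plain 32-bit and 64-bit unsigned integers, on a typical platform where int is 32 bits wide):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t a = 1073741824u;
    uint32_t b = 8u;
    uint64_t c = a * b;                            /* 32-bit multiply wraps to 0 before widening */
    uint32_t d = (uint32_t) (c / 8);
    printf("c: %llu\n", (unsigned long long) c);   /* prints c: 0 */
    printf("d: %u\n", d);                          /* prints d: 0 */
    return 0;
}

Note that casting the assignment's result, e.g. c = (u64) (a * b), would not help either, because the wraparound has already happened inside the 32-bit multiplication; the cast has to reach one of the operands.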
If the assignments to c and d are written with casts, we recover the expected output:
c = (u64) a * b;
d = (u32) (c / 8);
Recompiling and running the modified C prints:
d: 1073741824
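For completeness, here is a self-contained version of the fixed action that can be compiled outside the generated code (assuming, as above, that u32/u64 correspond to uint32_t/uint64_t):

#include <stdint.h>
#include <stdio.h>

typedef uint32_t u32;   /* assumed equivalents of the backend's typedefs */
typedef uint64_t u64;

static void my_action(void) {
    u32 a = 1073741824u;
    u32 b = 8u;
    u64 c = (u64) a * b;     /* operand cast: the multiply is done in 64 bits */
    u32 d = (u32) (c / 8);   /* result cast: narrows back to 32 bits */
    printf("d: %u\n", d);    /* prints d: 1073741824 */
}

int main(void) {
    my_action();
    return 0;
}

Strictly speaking, only the (u64) cast on the operand is required; the (u32) cast merely makes the narrowing explicit, since C already truncates a u64 value when it is assigned to a u32 variable.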
Perhaps, then, the C backend should track when:
- arithmetic operators on 32-bit values are computing a 64-bit value. Checking ranges at compile time might be impossible if the 32-bit operands are not compile-time constants. So instead, we determine when operators on 32-bit integers produce a 64-bit integer using the user's type declarations; this check would be possible on the CAL type declarations above. When this check finds a 32-bit -> 64-bit operator, the C backend casts one of the operands with (u64), as in the fix above, so that the operation itself is performed in 64-bit arithmetic.
- arithmetic operators on 64-bit values are computing a 32-bit value. Again we rely on the CAL type declarations, i.e. one of the operands is a 64-bit value and the assigned variable is declared as 32-bit. When this check finds a 64-bit -> 32-bit operator, the C backend casts the result of the operator with (u32). Both cases are illustrated in the sketch after this list.
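As a rough sketch (not actual backend output; the variable names and typedefs below are made up for illustration) of the C these two rules would emit for variables declared in CAL as uint(size=32) x, y, r32 and uint(size=64) r64:

#include <stdint.h>

typedef uint32_t u32;   /* assumed stand-ins for the backend's typedefs */
typedef uint64_t u64;

static void emitted_sketch(u32 x, u32 y) {
    u64 r64;
    u32 r32;

    /* Rule 1: 32-bit operands assigned to a 64-bit variable -> cast an
       operand to u64 so the operation runs in 64-bit arithmetic. */
    r64 = (u64) x * y;

    /* Rule 2: a 64-bit operand assigned to a 32-bit variable -> cast the
       result of the operation to u32. */
    r32 = (u32) (r64 / 8);

    (void) r32;   /* the sketch only shows the casts; the value is unused */
}

int main(void) {
    emitted_sketch(1073741824u, 8u);
    return 0;
}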
This might relate to @endrix's explicit casting idea in #113. The question is this: does Orcc's C backend need more information from the CAL syntax, e.g. in the form of explicit casting, or should the C backend work a bit harder to inject (u32) and (u64) casts when necessary?
Thoughts?