Hi, thanks so much for the paper and the implementation. I'm particularly interested in how you populate the lookup table for the Fast-Ray transform. As I understand it, the LUT stores a mapping from voxel coordinates to image-feature coordinates and is precomputed before training/inference.
I'm having difficulty understanding how you compute the transformation matrix that maps voxel coordinates to image-feature coordinates.
I see a transformation matrix called "sensor2ego" in nuScenes; is the inverse of this transform what's used? Any insight would be greatly appreciated, thanks.