This paper shows how to close the sim-to-real gap for event cameras. Networks trained on event streams from simulators systematically underperform on real data because the simulator's contrast threshold and scene dynamics do not match what real sensors produce. By carefully matching simulator statistics to the target use case, and by releasing a new High Quality Frames (HQF) dataset of well-exposed ground-truth frames, the paper delivers a 20–40% improvement in video reconstruction quality and up to 15% on optical flow, with no change to the network architecture. [ECCV 2020 paper]
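The contrast-threshold mismatch can be illustrated with a toy event-generation sketch. This is not the paper's simulator (which, like ESIM, interpolates precise event timestamps); `simulate_events` and the test signal below are hypothetical, showing only how the chosen threshold changes the event statistics a network is trained on:

```python
import numpy as np

def simulate_events(log_intensity, threshold):
    """Toy event generator: emit an event whenever the log intensity
    has changed by at least `threshold` since the last event."""
    events = []
    ref = log_intensity[0]
    for t, value in enumerate(log_intensity[1:], start=1):
        while value - ref >= threshold:   # positive (ON) events
            ref += threshold
            events.append((t, +1))
        while ref - value >= threshold:   # negative (OFF) events
            ref -= threshold
            events.append((t, -1))
    return events

# The same brightness ramp yields very different event rates
# under mismatched contrast thresholds.
signal = np.linspace(0.0, 1.0, 100)
low_ct = simulate_events(signal, threshold=0.1)   # many events
high_ct = simulate_events(signal, threshold=0.5)  # few events
```

If the simulator's threshold does not match the real sensor's, the network sees a systematically denser or sparser event stream at training time than at test time, which is one source of the gap the paper addresses.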
