This paper shows how to train a rotated-object detector without ever having rotated bounding-box annotations. The key insight is that axis-aligned annotations from one dataset can be combined with per-pixel segmentation masks from another to bootstrap an oriented detection signal, with the rotation knowledge transferred into the final detector through a combination loss. This removes the need for expensive rotated annotation on every new domain where orientation matters, such as aerial imagery or industrial inspection.
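To make the idea concrete, here is a minimal sketch of how a per-pixel segmentation mask can yield an oriented box, which is the kind of signal the paper bootstraps from mask annotations. The function name and the PCA-based fitting are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def mask_to_rotated_box(mask):
    """Fit an oriented box (cx, cy, w, h, theta) to a binary mask via PCA.

    Illustrative sketch only: the principal axis of the mask pixels gives
    the box orientation; pixel extents along that axis give width/height.
    """
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(np.float64)
    center = pts.mean(axis=0)
    centered = pts - center
    # Principal axes of the pixel cloud: eigenvectors of the 2x2 covariance.
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]   # dominant direction
    theta = np.arctan2(major[1], major[0])   # box angle in radians
    # Rotate points into the box frame and measure extents there.
    rot = np.array([[np.cos(-theta), -np.sin(-theta)],
                    [np.sin(-theta),  np.cos(-theta)]])
    local = centered @ rot.T
    w = local[:, 0].max() - local[:, 0].min()
    h = local[:, 1].max() - local[:, 1].min()
    return center[0], center[1], w, h, theta
```

Oriented boxes produced this way could serve as pseudo-labels for training a rotated detector, while axis-aligned annotations on the target domain supervise location and extent; the combination loss the paper describes would tie the two signals together.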
