This paper shows how to make gradient-based meta-learners adapt faster by actively controlling the conditioning of their inner-loop optimisation problem. By recasting meta-learning as a non-linear least-squares problem, the method can place a loss on the condition number of the local curvature of the adaptation landscape, enforcing a well-conditioned parameter space at meta-train time. The result is substantially faster adaptation in the first few inner-loop steps, which opens the door to dynamically choosing the number of steps at inference time based on task difficulty.
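To make the idea concrete, here is a minimal sketch (not the paper's implementation) of what a condition-number penalty on a least-squares adaptation objective could look like. In the non-linear least-squares view the task loss is ||r(θ)||², whose Gauss-Newton curvature is JᵀJ with J = ∂r/∂θ, so the condition number can be read off the singular values of J. The model, the helper names (`residuals`, `condition_penalty`), and the penalty weight `lambda_cond` are all illustrative assumptions:

```python
# Illustrative sketch: penalising the conditioning of a least-squares
# task loss during (meta-)training. Not the paper's actual code.
import torch

def residuals(params: torch.Tensor, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Per-example residuals r(theta) of a tiny linear model, so the
    task loss is the least-squares objective ||r(theta)||^2."""
    return x @ params - y

def condition_penalty(params, x, y, eps=1e-8):
    # Gauss-Newton curvature of ||r||^2 is J^T J, where J = dr/dtheta.
    J = torch.autograd.functional.jacobian(
        lambda p: residuals(p, x, y), params, create_graph=True
    )
    s = torch.linalg.svdvals(J)  # singular values of J (differentiable)
    # cond(J^T J) = (s_max / s_min)^2; penalise its log for stability.
    return 2.0 * (torch.log(s.max()) - torch.log(s.min() + eps))

# Toy usage: one task, one penalty evaluation.
torch.manual_seed(0)
x, true_w = torch.randn(32, 4), torch.randn(4)
y = x @ true_w + 0.01 * torch.randn(32)
theta = torch.randn(4, requires_grad=True)

task_loss = residuals(theta, x, y).pow(2).mean()
lambda_cond = 0.1  # assumed penalty weight, purely for illustration
meta_loss = task_loss + lambda_cond * condition_penalty(theta, x, y)
meta_loss.backward()  # gradients flow through the conditioning penalty
```

In a full meta-learning setup this penalty would be evaluated at the meta-parameters across tasks and added to the outer-loop objective; the sketch only shows the core mechanics of differentiating through a condition-number term.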
