ACE: Ally Complementary Experts for Solving Long-Tailed Recognition in One-Shot


[ICCV 2021][ORAL] ACE Github [multi-expert] [one-stage] [no pre-training]

Table of Contents

  • ACE: Ally Complementary Experts for Solving Long-Tailed Recognition in One-Shot
    • Motivation
    • Method
    • Summary

Motivation

This paper also targets the seesaw problem in long-tailed recognition: it aims to improve tail accuracy without sacrificing head accuracy, raising both head and tail accuracy simultaneously in a single stage (one-stage training).

Method

[Figure 1: ACE overall architecture]

  • Multiple experts are trained on sub-sets of different sizes; tail data is assigned to more experts.
  • The loss consists of the usual individual classification loss, similar to RIDE, plus a complementary loss.
    • Individual CE loss:
      [Figure 2: per-expert cross-entropy loss]

    • Complementary loss L_com:

        For images an expert has never seen, its predicted scores should be as low as possible; to avoid interfering with the other experts, its contribution on unseen images is suppressed.
        "minimizes the logits of non-target categories for Ei so as to put down their effect."

[Figure 3: complementary loss L_com]

  • Overall loss: [Figure 4: total loss combining the individual CE loss and L_com]
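The two loss terms above can be sketched as follows. This is a minimal illustration, not the official ACE code: the squared-logit penalty used for L_com is an assumed stand-in for "minimizing the logits of non-target categories", and the names `ace_expert_loss`, `split_mask`, and `lam` are hypothetical.

```python
import numpy as np

def ace_expert_loss(logits, targets, split_mask, lam=1.0):
    """Sketch of one expert's ACE loss (assumed form).

    logits:     (B, C) raw scores from this expert
    targets:    (B,)   integer ground-truth labels
    split_mask: (C,)   bool, True for categories in this expert's sub-set
    lam:        weight of the complementary term (hypothetical)
    """
    in_split = split_mask[targets]            # samples this expert is responsible for
    idx = np.flatnonzero(split_mask)          # global ids of its categories

    # individual CE loss, computed only over the expert's own categories
    ce = 0.0
    if in_split.any():
        sub = logits[in_split][:, split_mask]            # restrict logits to the sub-set
        local = np.searchsorted(idx, targets[in_split])  # remap labels to local indices
        z = sub - sub.max(axis=1, keepdims=True)         # stable softmax cross-entropy
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        ce = -logp[np.arange(len(local)), local].mean()

    # complementary loss: push down logits on categories outside the split,
    # so the expert stays quiet where it has never seen data
    l_com = np.mean(logits[:, ~split_mask] ** 2) if (~split_mask).any() else 0.0

    return ce + lam * l_com
```

With a 3-category toy example where the expert owns categories {0, 1}, only the in-split sample contributes to the CE term, while every sample's logits on category 2 are penalized.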

Summary

The sub-sets differ in size.
In a single stage, accuracy improves on all splits (all, many, mid, few), although many-shot accuracy is still lower than RIDE's.
The seesaw problem is again tackled with multiple experts, but the loss differs from RIDE: here the loss suppresses each expert's effect on categories it has not seen, so instead of letting an expert guess blindly outside its domain, it is simply muted.
