Three Fine-Grained Survey Papers

2021. 3. 16. 17:12 · Deep Learning

  1. A Survey of Recent Advances in CNN-Based Fine-Grained Visual Categorization
  2. A Systematic Evaluation: Fine-Grained CNN vs. Traditional CNN Classifiers
  3. Deep Learning for Fine-Grained Image Analysis: A Survey

The third paper.

  • Characteristics (challenges) of fine-grained recognition
    • High inter-class similarity and large intra-class variability
  • Dataset overview
    • Roughly in order of popularity:
      • CUB200-2011 > Stanford Cars > FGVC Aircraft > Oxford Flowers > Stanford Dogs > ...
  • Two ways to tackle the problem
    • Strongly-supervised
      • Besides the category label, each image also comes with extra localization annotations (a bounding box or part locations).
    • Weakly-supervised
      • Only the category label is available.
      • Three directions:
        • Part-based approaches
          • Build architectures that try to localize the discriminative parts (a rough sketch follows this list).
        • End-to-end approaches
          • Architecture modifications
          • Loss design
        • With external information
          • Leverage web images, knowledge graphs, or text.
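A minimal sketch of the weakly-supervised part-based idea, written as a generic PyTorch module. The ResNet-50 backbone, the number of parts, and the attention-map formulation are my own assumptions for illustration, not the design of any specific paper: a backbone produces a feature map, a 1x1 convolution predicts K attention maps, and each map pools the features into one part descriptor for the classifier.

```python
import torch
import torch.nn as nn
import torchvision

class AttentionPartPooling(nn.Module):
    """Generic weakly-supervised part discovery via K learned attention maps (illustrative sketch)."""

    def __init__(self, num_classes, num_parts=4):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)  # in practice, start from ImageNet weights
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # -> B x 2048 x H x W
        self.attention = nn.Conv2d(2048, num_parts, kernel_size=1)      # one attention map per part
        self.classifier = nn.Linear(2048 * num_parts, num_classes)

    def forward(self, x):
        feat = self.features(x)                                         # B x 2048 x H x W
        attn = torch.softmax(self.attention(feat).flatten(2), dim=-1)   # B x K x HW, sums to 1 per part
        feat = feat.flatten(2)                                          # B x 2048 x HW
        # Each part descriptor is an attention-weighted average of the feature map.
        parts = torch.einsum("bkn,bcn->bkc", attn, feat)                # B x K x 2048
        return self.classifier(parts.flatten(1))                        # fine-grained class logits

model = AttentionPartPooling(num_classes=200)        # e.g. 200 bird classes in CUB200-2011
logits = model(torch.randn(2, 3, 224, 224))          # -> torch.Size([2, 200])
```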

End-to-End in detail

  1. Architecture
    1. The main theme is high-order mixing of CNN features (see the sketch after this list).
    2. Hierarchical Bilinear Pooling (HBP, ECCV 2018)
    3. Deep Bilinear Transformation (DBT, NeurIPS 2019)
  2. Loss
    1. GCE loss (AAAI 2020): focuses on the top-k confusing negative classes
    2. Hinge loss (API-Net, AAAI 2020): score-ranking regularization (a rough sketch also follows below)
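A minimal sketch of the "high-order mixing" idea that the architecture line builds on: plain bilinear pooling, i.e. classifying the spatially averaged outer product of channel features. HBP and DBT are refinements of this; the sketch below is only the generic version, with channel and class counts chosen for illustration.

```python
import torch
import torch.nn as nn

class BilinearPooling(nn.Module):
    """Generic bilinear pooling: classify the outer product of channel features, pooled over space."""

    def __init__(self, channels, num_classes):
        super().__init__()
        self.classifier = nn.Linear(channels * channels, num_classes)

    def forward(self, feat):                                  # feat: B x C x H x W from any CNN backbone
        b, c, h, w = feat.shape
        feat = feat.flatten(2)                                # B x C x HW
        # Second-order (outer-product) statistics, averaged over all spatial positions.
        gram = torch.bmm(feat, feat.transpose(1, 2)) / (h * w)  # B x C x C
        gram = gram.flatten(1)
        # Signed square root + L2 normalization, commonly applied to bilinear features.
        gram = torch.sign(gram) * torch.sqrt(gram.abs() + 1e-8)
        gram = nn.functional.normalize(gram)
        return self.classifier(gram)

head = BilinearPooling(channels=512, num_classes=200)
logits = head(torch.randn(2, 512, 7, 7))                      # -> torch.Size([2, 200])
```

And a rough illustration of hinge-based score ranking on the loss side: penalize a sample whenever its true-class logit does not beat the most confusing negative class by a margin. This is a generic formulation for intuition only, not the exact GCE or API-Net loss.

```python
import torch

def score_ranking_hinge(logits, targets, margin=0.5):
    """Hinge penalty when the true-class score fails to beat the hardest negative by `margin`."""
    true_score = logits.gather(1, targets.unsqueeze(1)).squeeze(1)   # B
    neg_logits = logits.clone()
    neg_logits.scatter_(1, targets.unsqueeze(1), float("-inf"))      # mask out the true class
    hardest_neg = neg_logits.max(dim=1).values                       # most confusing negative class
    return torch.clamp(margin - (true_score - hardest_neg), min=0).mean()

loss = score_ranking_hinge(torch.randn(4, 200), torch.randint(0, 200, (4,)))
```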

Second paper: Fine-Grained CNN vs. Traditional CNN

  • Conclusion
    • Traditional CNNs win.
      • DenseNet-161 performs best.
      • Traditional CNNs benefit from ImageNet pretraining (fine-tuning sketch below).
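A minimal sketch of the recipe this conclusion favors: an ImageNet-pretrained DenseNet-161 from torchvision with its classifier head swapped for the fine-grained label set. The class count and hyperparameters below are placeholders, not the settings from the paper.

```python
import torch
import torch.nn as nn
import torchvision

# ImageNet-pretrained DenseNet-161; the pretraining is where the traditional CNN gains its edge.
model = torchvision.models.densenet161(weights="IMAGENET1K_V1")

# Replace the 1000-way ImageNet head with a fine-grained head (e.g. 200 classes for CUB200-2011).
model.classifier = nn.Linear(model.classifier.in_features, 200)

# Fine-tune the whole network with a small learning rate (values here are placeholders).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()

images, labels = torch.randn(2, 3, 224, 224), torch.randint(0, 200, (2,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```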

Third paper: it also covers several tasks beyond image classification.
