<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>HyperML</title>
    <link>https://hyperml.tistory.com/</link>
    <description>Machine learning blog</description>
    <language>ko</language>
    <pubDate>Sun, 12 Apr 2026 11:27:15 +0900</pubDate>
    <generator>TISTORY</generator>
    <ttl>100</ttl>
    <managingEditor>곰돌이만세</managingEditor>
    <item>
      <title>Helpful videos</title>
      <link>https://hyperml.tistory.com/31</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;- Resilience (NVIDIA)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;[Jensen Huang] &lt;a href=&quot;https://www.youtube.com/shorts/X6giT3YRT6U&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://www.youtube.com/shorts/X6giT3YRT6U&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;- Core values (Apple)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;[Steve Jobs] Think different &lt;a href=&quot;https://www.youtube.com/watch?v=EWSA7Lykvt4&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://www.youtube.com/watch?v=EWSA7Lykvt4&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;- Stress (Amazon)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;[Jeff Bezos] &lt;a href=&quot;https://www.youtube.com/shorts/WvNVRnAgFt0&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://www.youtube.com/shorts/WvNVRnAgFt0&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;- Core values (Amazon, from 4:10)&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=l9X8X-Ixbo8&amp;amp;t=1230s&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://www.youtube.com/watch?v=l9X8X-Ixbo8&amp;amp;t=1230s&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Tech leader comments</category>
      <author>곰돌이만세</author>
      <guid isPermaLink="true">https://hyperml.tistory.com/31</guid>
      <comments>https://hyperml.tistory.com/31#entry31comment</comments>
      <pubDate>Sun, 6 Oct 2024 22:05:00 +0900</pubDate>
    </item>
    <item>
      <title>An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (ViT)</title>
      <link>https://hyperml.tistory.com/25</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;The Transformer was originally designed, around the attention mechanism, for training language models.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Thanks to its simple structure, weak inductive bias, and large weight capacity, it can be scaled into huge models whose performance does not saturate even on huge datasets,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;and because the self-supervised pre-training -&amp;gt; fine-tuning pipeline of language models is easy to run, &lt;span style=&quot;letter-spacing: 0px;&quot;&gt;it enabled giant models such as BERT and GPT and their use across many tasks, becoming the de-facto standard in natural language processing.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;letter-spacing: 0px;&quot;&gt;Everyone could foresee that the Transformer would be applied to vision tasks, and there were many attempts, &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;letter-spacing: 0px;&quot;&gt;but only with ViT, backed by Google's massive data, was SOTA-level performance demonstrated on vision classification.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;letter-spacing: 0px;&quot;&gt;This review looks at how the paper works that out.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;letter-spacing: 0px;&quot;&gt;&amp;nbsp;Note that the paper's format (especially the experiments) follows BiT: Big Transfer, which Google published earlier; the author lists partially overlap. That work scaled ResNet models up, poured in massive data, and analyzed the results. In a way, ViT feels like BiT with a Transformer sprinkled on top.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;span style=&quot;letter-spacing: 0px;&quot;&gt;Introduction&lt;/span&gt;&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;letter-spacing: 0px;&quot;&gt;&amp;nbsp;As is widely known, self-attention-based architectures from the Transformer were common in language models, while CNN-based architectures remained mainstream in vision tasks. To apply the Transformer to vision, the image is split into patches (non-overlapping).&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;ImageNet, long the standard pre-training set for vision tasks, is only a mid-sized dataset by Transformer standards. The experiments report results using ImageNet (1.3M images) as well as Google's internal JFT-300M dataset (300M images; a 14M random sample was also used).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As a result, the best model reached 88.55% accuracy on the ImageNet evaluation, plus 90.72% on ImageNet-ReaL, 94.55% on CIFAR-100, and 77.63% on VTAB.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Method&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;679&quot; data-origin-height=&quot;375&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/YVz4l/btq9NyucLlz/IxkfySxhs7nTajFB7SuXzK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/YVz4l/btq9NyucLlz/IxkfySxhs7nTajFB7SuXzK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/YVz4l/btq9NyucLlz/IxkfySxhs7nTajFB7SuXzK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FYVz4l%2Fbtq9NyucLlz%2FIxkfySxhs7nTajFB7SuXzK%2Fimg.png&quot; data-origin-width=&quot;679&quot; data-origin-height=&quot;375&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;The architecture is, as one might expect, quite simple. The authors say they followed the standard Transformer as closely as possible, processing each image through the steps below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;1. Split the image into a fixed number of patches N (patch resolution P, channels C)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2. Embed each patch&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;3. Linearly project the embedded patches, add the position embedding to each, and concatenate the class embedding (same as the original Transformer)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;4. Process these embeddings in the Transformer encoder and feed the resulting feature vector to an MLP to perform the classification task&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
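&lt;p data-ke-size=&quot;size16&quot;&gt;The four steps above can be sketched in NumPy. This is a minimal illustration only: the projection E, the [class] token, and the position embeddings below are random stand-ins for parameters that ViT actually learns.&lt;/p&gt;

```python
import numpy as np

def patchify(image, P):
    # (H, W, C) image -&gt; (N, P*P*C) flattened non-overlapping patches
    H, W, C = image.shape
    assert H % P == 0 and W % P == 0
    patches = image.reshape(H // P, P, W // P, P, C)
    return patches.transpose(0, 2, 1, 3, 4).reshape(-1, P * P * C)

rng = np.random.default_rng(0)
image = rng.normal(size=(224, 224, 3))
P, D = 16, 768                       # ViT-Base patch size and embedding dim
patches = patchify(image, P)         # (196, 768): (224/16)**2 = 196 patches
E = rng.normal(size=(P * P * 3, D))  # stands in for the learned linear projection
tokens = patches @ E                 # (196, D) patch embeddings
cls = rng.normal(size=(1, D))        # stands in for the learned [class] token
tokens = np.concatenate([cls, tokens], axis=0)  # (197, D)
pos = rng.normal(size=(197, D))      # stands in for learned position embeddings
tokens = tokens + pos                # this is what enters the encoder
```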
&lt;p data-ke-size=&quot;size16&quot;&gt;Written out as equations this becomes the figure below, though I am not sure that adds much.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;819&quot; data-origin-height=&quot;176&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/BcRjR/btq9Nulz1S5/r8A1O5FxskbYpHr9KbmmA0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/BcRjR/btq9Nulz1S5/r8A1O5FxskbYpHr9KbmmA0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/BcRjR/btq9Nulz1S5/r8A1O5FxskbYpHr9KbmmA0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FBcRjR%2Fbtq9Nulz1S5%2Fr8A1O5FxskbYpHr9KbmmA0%2Fimg.png&quot; data-origin-width=&quot;819&quot; data-origin-height=&quot;176&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;A quick look at the Transformer&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For those unfamiliar with the Transformer, here is a brief explanation, assuming you know RNNs&amp;nbsp;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;(reference: &lt;a href=&quot;https://wikidocs.net/31379&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://wikidocs.net/31379&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;302&quot; data-origin-height=&quot;178&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/qNLSH/btq9Vkg02Gf/8bw14kzkFlIg4dyGkrqKn0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/qNLSH/btq9Vkg02Gf/8bw14kzkFlIg4dyGkrqKn0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/qNLSH/btq9Vkg02Gf/8bw14kzkFlIg4dyGkrqKn0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FqNLSH%2Fbtq9Vkg02Gf%2F8bw14kzkFlIg4dyGkrqKn0%2Fimg.png&quot; data-origin-width=&quot;302&quot; data-origin-height=&quot;178&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Imagine an RNN consuming word tokens and carrying out a translation process in many-to-many fashion.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The figure above is a simplified picture of that process.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;ViT, however, only needs to extract features from the above, so it uses the encoder alone&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;270&quot; data-origin-height=&quot;289&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/ZGUdU/btq9VjCoQt8/vgUgPsMQTZqkuMi3jOKQWK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/ZGUdU/btq9VjCoQt8/vgUgPsMQTZqkuMi3jOKQWK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/ZGUdU/btq9VjCoQt8/vgUgPsMQTZqkuMi3jOKQWK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FZGUdU%2Fbtq9VjCoQt8%2FvgUgPsMQTZqkuMi3jOKQWK%2Fimg.png&quot; data-origin-width=&quot;270&quot; data-origin-height=&quot;289&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Processing flows from bottom to top; the embedding layer at the bottom receives words like those in the example below&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;284&quot; data-origin-height=&quot;324&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dcx0pk/btq9RfOhUx3/bCzGW6pY9D9SWn1jSHKly1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dcx0pk/btq9RfOhUx3/bCzGW6pY9D9SWn1jSHKly1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dcx0pk/btq9RfOhUx3/bCzGW6pY9D9SWn1jSHKly1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fdcx0pk%2Fbtq9RfOhUx3%2FbCzGW6pY9D9SWn1jSHKly1%2Fimg.png&quot; data-origin-width=&quot;284&quot; data-origin-height=&quot;324&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;What makes the Transformer unusual compared with earlier models is exactly this: rather than relying on a fixed set of learned weights alone, it learns weights based on the similarity between the words of the input sentence (in both position and meaning).&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The Transformer measures similarity by comparing the embedded words of the same sentence against one another, one word at a time&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;In the example above, &quot;it&quot; on the right is matched against every word on the left to compute a similarity, and training pushes the highest similarity toward the most related word.&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Each word embedding is therefore multiplied by the key/query/value weight matrices, and these matrices (shared across the words) are what the Transformer actually learns.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this example the query is &quot;it&quot; and the keys are the words of the same sentence (key and query point at the same words anyway, merely swapping roles). A softmax is then computed over the words of the sentence and multiplied by the values to obtain the final result for the sentence (the attention value)&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;308&quot; data-origin-height=&quot;315&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/MF63g/btq9RfndbgA/LpmfM9gW3ZPHj7OqJeIKXk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/MF63g/btq9RfndbgA/LpmfM9gW3ZPHj7OqJeIKXk/img.png&quot; data-alt=&quot;Each word embedding is multiplied by the Wq, Wk, Wv weights&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/MF63g/btq9RfndbgA/LpmfM9gW3ZPHj7OqJeIKXk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FMF63g%2Fbtq9RfndbgA%2FLpmfM9gW3ZPHj7OqJeIKXk%2Fimg.png&quot; data-origin-width=&quot;308&quot; data-origin-height=&quot;315&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;Each word embedding is multiplied by the Wq, Wk, Wv weights&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;663&quot; data-origin-height=&quot;333&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/s95AM/btq9PTLsVUi/P6agsQ6m7MA3sl8f3wZBwk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/s95AM/btq9PTLsVUi/P6agsQ6m7MA3sl8f3wZBwk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/s95AM/btq9PTLsVUi/P6agsQ6m7MA3sl8f3wZBwk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fs95AM%2Fbtq9PTLsVUi%2FP6agsQ6m7MA3sl8f3wZBwk%2Fimg.png&quot; data-origin-width=&quot;663&quot; data-origin-height=&quot;333&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The figure directly above summarizes this process.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As an equation it is written as follows.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;325&quot; data-origin-height=&quot;76&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bfATsJ/btq9Vkg1k1P/AeOoh9N1Sz9koIcGo0SOf0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bfATsJ/btq9Vkg1k1P/AeOoh9N1Sz9koIcGo0SOf0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bfATsJ/btq9Vkg1k1P/AeOoh9N1Sz9koIcGo0SOf0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbfATsJ%2Fbtq9Vkg1k1P%2FAeOoh9N1Sz9koIcGo0SOf0%2Fimg.png&quot; data-origin-width=&quot;325&quot; data-origin-height=&quot;76&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
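&lt;p data-ke-size=&quot;size16&quot;&gt;The equation above (scaled dot-product attention) can be sketched in NumPy as follows. Shapes are illustrative, single-head only, and the weight matrices are random stand-ins for learned parameters.&lt;/p&gt;

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, Wq, Wk, Wv):
    # X: (n_tokens, d_model); Wq/Wk/Wv project tokens to queries/keys/values
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # pairwise similarities, scaled
    return softmax(scores) @ V               # weighted sum of values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 64))                 # 5 tokens, d_model = 64
Wq, Wk, Wv = (rng.normal(size=(64, 64)) for _ in range(3))
out = attention(X, Wq, Wk, Wv)               # (5, 64): one feature per token
```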
&lt;p data-ke-size=&quot;size16&quot;&gt;That was a long explanation, but the point is that the Transformer's output is, in the end, just another feature&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The figure above mentions &quot;multi-head&quot;: if the input word embedding size is 512 and there are 8 heads, the embedding is split into chunks of size 64, the same computation above runs on each chunk, the results are concatenated, and a final projection restores the original output dimension&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Splitting this way lets the model look at the embedding from several perspectives, which reportedly improves performance. (Since each head's dimension shrinks as the head count grows, the total weight size stays roughly the same as single-head attention.)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
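&lt;p data-ke-size=&quot;size16&quot;&gt;The multi-head split/concat mechanics described above can be sketched in NumPy. The per-head attention itself is omitted; only the reshaping and the final output projection (a random stand-in for a learned matrix) are shown.&lt;/p&gt;

```python
import numpy as np

def split_heads(X, n_heads):
    # (n_tokens, d_model) -&gt; (n_heads, n_tokens, d_model // n_heads)
    n, d = X.shape
    return X.reshape(n, n_heads, d // n_heads).transpose(1, 0, 2)

rng = np.random.default_rng(0)
d_model, n_heads = 512, 8            # per-head dim = 64, as in the example above
X = rng.normal(size=(10, d_model))   # 10 token embeddings
heads = split_heads(X, n_heads)      # attention would run per head on these slices
merged = heads.transpose(1, 0, 2).reshape(10, d_model)  # concat the heads back
Wo = rng.normal(size=(d_model, d_model))                # stand-in output projection
out = merged @ Wo                    # restore the original output dimension
```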
&lt;p data-ke-size=&quot;size16&quot;&gt;---------------------------------------------------------------------------------------------------------------------------&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In ViT, each P x P patch effectively becomes a word. Off the top of my head, though, compared with natural language, which has a word dictionary, image patches have no such vocabulary, so the number of possible cases should be far larger.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Inductive Bias&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;Reference: &lt;a href=&quot;https://velog.io/@euisuk-chung/Inductive-Bias%EB%9E%80#:~:text=Inductive%20Bias%20(%EA%B7%80%EB%82%A9%ED%8E%B8%ED%96%A5),-%EA%B7%B8%EB%A0%87%EB%8B%A4%EB%A9%B4%20%EC%9D%B4%EB%B2%88%20%ED%8F%AC%EC%8A%A4%ED%8C%85&amp;amp;text=Models%20are%20spurious%20%3A%20%EB%8D%B0%EC%9D%B4%ED%84%B0%20%EB%B3%B8%EC%97%B0,biases)%EB%A5%BC%20%ED%95%99%EC%8A%B5%ED%95%98%EA%B2%8C%20%EB%90%A9%EB%8B%88%EB%8B%A4.&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;What is Inductive Bias?&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1626572899054&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;[머신러닝/딥러닝] Inductive Bias란?&quot; data-og-description=&quot;Inductive Bias란 무엇일까요? 최근 논문들을 보면 그냥 Bias도 아니고 inductive Bias라는 말이 자주 나오는 것을 확인할 수 있는데요! 오늘은 해당 개념에 대해 정리해보는 시간을 가지려고 합니다.&quot; data-og-host=&quot;velog.io&quot; data-og-source-url=&quot;https://velog.io/@euisuk-chung/Inductive-Bias%EB%9E%80#:~:text=Inductive%20Bias%20(%EA%B7%80%EB%82%A9%ED%8E%B8%ED%96%A5),-%EA%B7%B8%EB%A0%87%EB%8B%A4%EB%A9%B4%20%EC%9D%B4%EB%B2%88%20%ED%8F%AC%EC%8A%A4%ED%8C%85&amp;amp;text=Models%20are%20spurious%20%3A%20%EB%8D%B0%EC%9D%B4%ED%84%B0%20%EB%B3%B8%EC%97%B0,biases)%EB%A5%BC%20%ED%95%99%EC%8A%B5%ED%95%98%EA%B2%8C%20%EB%90%A9%EB%8B%88%EB%8B%A4.&quot; data-og-url=&quot;https://velog.io/@euisuk-chung/Inductive-Bias란&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/FSWYX/hyKVb9nwjs/qZ1ZNzJvwjlsdUYIF1UKA1/img.png?width=1395&amp;amp;height=370&amp;amp;face=0_0_1395_370,https://scrap.kakaocdn.net/dn/c2FEiM/hyKU67598w/bWkzqRAKSH3Kasq1xXxoq0/img.png?width=1395&amp;amp;height=370&amp;amp;face=0_0_1395_370,https://scrap.kakaocdn.net/dn/6RYrJ/hyKVgv5mqP/W7jIPaj0Gg3TYmuo1XMiIk/img.png?width=1327&amp;amp;height=1023&amp;amp;face=0_0_1327_1023&quot;&gt;&lt;a href=&quot;https://velog.io/@euisuk-chung/Inductive-Bias%EB%9E%80#:~:text=Inductive%20Bias%20(%EA%B7%80%EB%82%A9%ED%8E%B8%ED%96%A5),-%EA%B7%B8%EB%A0%87%EB%8B%A4%EB%A9%B4%20%EC%9D%B4%EB%B2%88%20%ED%8F%AC%EC%8A%A4%ED%8C%85&amp;amp;text=Models%20are%20spurious%20%3A%20%EB%8D%B0%EC%9D%B4%ED%84%B0%20%EB%B3%B8%EC%97%B0,biases)%EB%A5%BC%20%ED%95%99%EC%8A%B5%ED%95%98%EA%B2%8C%20%EB%90%A9%EB%8B%88%EB%8B%A4.&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; 
data-source-url=&quot;https://velog.io/@euisuk-chung/Inductive-Bias%EB%9E%80#:~:text=Inductive%20Bias%20(%EA%B7%80%EB%82%A9%ED%8E%B8%ED%96%A5),-%EA%B7%B8%EB%A0%87%EB%8B%A4%EB%A9%B4%20%EC%9D%B4%EB%B2%88%20%ED%8F%AC%EC%8A%A4%ED%8C%85&amp;amp;text=Models%20are%20spurious%20%3A%20%EB%8D%B0%EC%9D%B4%ED%84%B0%20%EB%B3%B8%EC%97%B0,biases)%EB%A5%BC%20%ED%95%99%EC%8A%B5%ED%95%98%EA%B2%8C%20%EB%90%A9%EB%8B%88%EB%8B%A4.&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/FSWYX/hyKVb9nwjs/qZ1ZNzJvwjlsdUYIF1UKA1/img.png?width=1395&amp;amp;height=370&amp;amp;face=0_0_1395_370,https://scrap.kakaocdn.net/dn/c2FEiM/hyKU67598w/bWkzqRAKSH3Kasq1xXxoq0/img.png?width=1395&amp;amp;height=370&amp;amp;face=0_0_1395_370,https://scrap.kakaocdn.net/dn/6RYrJ/hyKVgv5mqP/W7jIPaj0Gg3TYmuo1XMiIk/img.png?width=1327&amp;amp;height=1023&amp;amp;face=0_0_1327_1023');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;[ML/DL] What is Inductive Bias?&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;What is inductive bias? Recent papers keep using not just &quot;bias&quot; but specifically &quot;inductive bias&quot;; this post organizes the concept.&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;velog.io&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;blockquote data-ke-style=&quot;style1&quot;&gt;&lt;span style=&quot;font-family: 'Noto Serif KR';&quot;&gt;&quot;Borrowing the definition from Wikipedia, inductive bias means the additional assumptions a learner uses in order to make accurate predictions in situations it never encountered during training.&quot;&lt;/span&gt;&lt;/blockquote&gt;
&lt;blockquote data-ke-style=&quot;style1&quot;&gt;&lt;span style=&quot;font-family: 'Noto Serif KR';&quot;&gt;&quot;Rather than this rigid definition, let me give a more figurative example. Picture machine learning/deep learning like this: given input and output data, we search a bag for a function that fits the data. Inductive bias is inversely proportional to the size of that bag (and proportional to the strength of the assumptions). An MLP (Multi-Linear Perceptron), which can represent almost any function, is an enormous bag; a CNN (Convolutional Neural-Net) is a smaller bag than the former.&quot;&lt;/span&gt;&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;By this definition, a CNN carries more assumptions than an MLP (2-D structure, local processing) and thus a stronger inductive bias. MLPs and Transformers are hard to compare on this axis, and the paper says nothing about such a comparison. The only inductive bias it attributes to the Transformer is essentially the splitting of the image into patches for input.&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;(In the quote above, MLP should correctly read &quot;multilayer perceptron&quot;.)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Experiments&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;Since the architecture itself is simple, a well-known design with a light touch of extra processing plus enormous data, the paper devotes far more space to experiments than to underlying principles.&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;568&quot; data-origin-height=&quot;149&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cIgKLg/btq9O65nhDY/RhvkIKvr0cpbykKXdgTfm1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cIgKLg/btq9O65nhDY/RhvkIKvr0cpbykKXdgTfm1/img.png&quot; data-alt=&quot;ViT models used in the experiments&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cIgKLg/btq9O65nhDY/RhvkIKvr0cpbykKXdgTfm1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcIgKLg%2Fbtq9O65nhDY%2FRhvkIKvr0cpbykKXdgTfm1%2Fimg.png&quot; data-origin-width=&quot;568&quot; data-origin-height=&quot;149&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;ViT models used in the experiments&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&amp;nbsp;datasets&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;- ImageNet (1k classes, 1.3M images)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;- ImageNet-ReaL (same images as above, with cleaned-up ReaL labels)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;- ImageNet-21k (21k classes, 14M images)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;- JFT (18k classes, 303M images)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;- CIFAR10/100, Oxford-IIIT Pets&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;- Oxford Flowers-102&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&amp;nbsp;models&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;- BiT (ResNet baseline, with BN replaced by GN)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;- Noisy Student (EfficientNet-B7 based; teacher-student self-training with growing model size)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&amp;nbsp;The experiments&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;Because a completely new mechanism is being applied to existing tasks, there is plenty to experiment on, and some of the appendix results are also meaningful, so I analyze them together.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;887&quot; data-origin-height=&quot;279&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/PB7nK/btq9TENXLxa/ZgD1bSoNgURDJGv0qdKX1K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/PB7nK/btq9TENXLxa/ZgD1bSoNgURDJGv0qdKX1K/img.png&quot; data-alt=&quot;Comparison with SOTA; the bottom row is the pre-training compute cost&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/PB7nK/btq9TENXLxa/ZgD1bSoNgURDJGv0qdKX1K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FPB7nK%2Fbtq9TENXLxa%2FZgD1bSoNgURDJGv0qdKX1K%2Fimg.png&quot; data-origin-width=&quot;887&quot; data-origin-height=&quot;279&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;Comparison with SOTA; the bottom row is the pre-training compute cost&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;ViT-H/14 set SOTA on nearly every dataset. In particular, in pre-training cost efficiency, ViT is dramatically better than the previous BiT-L or Noisy Student.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;877&quot; data-origin-height=&quot;314&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/PM472/btq9PTkD1n4/K5lkd1WT1x4i8joVb57ZP1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/PM472/btq9PTkD1n4/K5lkd1WT1x4i8joVb57ZP1/img.png&quot; data-alt=&quot;ImageNet accuracy of BiT and ViT as the pre-training dataset size grows&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/PM472/btq9PTkD1n4/K5lkd1WT1x4i8joVb57ZP1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FPM472%2Fbtq9PTkD1n4%2FK5lkd1WT1x4i8joVb57ZP1%2Fimg.png&quot; data-origin-width=&quot;877&quot; data-origin-height=&quot;314&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;ImageNet accuracy of BiT and ViT as the pre-training dataset size grows&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;(left) ImageNet evaluation accuracy (right) ImageNet few-shot evaluation (x-axis: number of pre-training samples)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;This experiment checks whether accuracy saturates as the dataset size grows&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Both plots show that, compared with the CNN-based BiT, performance saturates less (or not at all) as more training data is poured in. Within ViT, L (Large) clearly beats B (Base), and smaller patch sizes (hence more patches) do better. Looking at the right-hand chart, the authors say they see few-shot transfer learning as the direction going forward. (I have never seen the JFT images, but I suspect they are not that different from ImageNet, so I am not sure how much there is to celebrate...)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Interestingly, when data is scarce the CNN models perform better, and the same holds for the authors' hybrid model, which handles the front-end embedding with a CNN. (next figure)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;883&quot; data-origin-height=&quot;516&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/btMDwa/btq9XcpINir/kxPbUkYYq9GgATXfN3MjW1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/btMDwa/btq9XcpINir/kxPbUkYYq9GgATXfN3MjW1/img.png&quot; data-alt=&quot;ImageNet evaluation accuracy for each pre-training dataset&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/btMDwa/btq9XcpINir/kxPbUkYYq9GgATXfN3MjW1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbtMDwa%2Fbtq9XcpINir%2FkxPbUkYYq9GgATXfN3MjW1%2Fimg.png&quot; data-origin-width=&quot;883&quot; data-origin-height=&quot;516&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;ImageNet evaluation accuracy for each pre-training dataset&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The larger the patch size (16/32/14; the patch's height/width in pixels), the fewer total patches there are and the less computation is needed, but accuracy drops.&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Scaling Study&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;823&quot; data-origin-height=&quot;357&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/s8Xag/btq9Vkhglyr/S2qCjrp96DNbGA6OPPcUxk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/s8Xag/btq9Vkhglyr/S2qCjrp96DNbGA6OPPcUxk/img.png&quot; data-alt=&quot;An experiment comparing several models&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/s8Xag/btq9Vkhglyr/S2qCjrp96DNbGA6OPPcUxk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fs8Xag%2Fbtq9Vkhglyr%2FS2qCjrp96DNbGA6OPPcUxk%2Fimg.png&quot; data-origin-width=&quot;823&quot; data-origin-height=&quot;357&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;An experiment comparing several models&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;891&quot; data-origin-height=&quot;429&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bC680E/btq9R4F0zCh/kKvEo0K6qcb58PeDNSchk1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bC680E/btq9R4F0zCh/kKvEo0K6qcb58PeDNSchk1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bC680E/btq9R4F0zCh/kKvEo0K6qcb58PeDNSchk1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbC680E%2Fbtq9R4F0zCh%2FkKvEo0K6qcb58PeDNSchk1%2Fimg.png&quot; data-origin-width=&quot;891&quot; data-origin-height=&quot;429&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;- Networks used in the experiments&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp; R50x1, R50x2, R101x1, R152x1, R152x2 (pretrained 7 epochs)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp; R152x2, R200x3 (pretrained 14 epochs)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp; ViT-B/32, B/16, L/32, L/16 (pretrained 7 epochs)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp; L/16, H/14 (pretrained 14 epochs)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp; hybrids (here the trailing 32, 16, etc. are not patch resolutions; they indicate how much downsampling was applied)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Overall, ResNet (BiT) performs worse: at equal compute, ViT beats BiT. The key comparison here is between ViT and the hybrid ViT, where the hybrid appears to reach similar accuracy with less compute. Although the authors take the stance that CNNs are unnecessary, they still report results for the hybrid approach. Note that the gap shrinks as the x-axis (compute) grows.&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&amp;nbsp;&lt;/h4&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Self-Supervision&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;I agree with the authors that the impressive performance of transformers on NLP tasks comes not only from their excellent scalability but also from the fact that they enable large-scale self-supervised pre-training. Following BERT, the authors performed masked patch prediction and reached 79.9% on ImageNet with ViT-B/16, though this still trails supervised pre-training by 4%.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For masked patch prediction the authors corrupt 50% of the embedded patches: 80% are replaced with the [mask] embedding, 10% with a random other patch embedding, and 10% are left unchanged. The model was trained on JFT for 14 epochs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;They then tried three prediction targets:&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;1) predicting the mean 3-bit color of each patch&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2) predicting a 4x4 downsized version of the patch in 3-bit color (a mini version of the 16x16 patch)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;3) regressing the full patch with an L2 loss&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Surprisingly, all of them worked well, with only 3) falling slightly short; however, no numeric results are given.&lt;/p&gt;
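The corruption scheme described above (50% of patches corrupted; of those, 80% replaced by the [mask] embedding, 10% by a random other patch, 10% kept as-is) can be sketched in a few lines. This is an illustrative NumPy sketch, not the authors' code; the array shapes and the `mask_emb` argument are my assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_patches(patch_emb, mask_emb, corrupt_prob=0.5):
    """patch_emb: (num_patches, dim) patch embeddings; mask_emb: (dim,) [mask] embedding."""
    out = patch_emb.copy()
    n = patch_emb.shape[0]
    corrupt = rng.random(n) < corrupt_prob            # pick ~50% of patches to corrupt
    for i in np.flatnonzero(corrupt):
        r = rng.random()
        if r < 0.8:                                   # 80%: replace with [mask] embedding
            out[i] = mask_emb
        elif r < 0.9:                                 # 10%: replace with a random other patch
            out[i] = patch_emb[rng.integers(n)]
        # remaining 10%: leave the patch unchanged
    return out, corrupt                               # `corrupt` marks the prediction targets
```

The boolean mask returned alongside the corrupted embeddings marks which patches the model would be asked to predict.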
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;The authors emphasize that, unlike previous transformer-based image recognition methods, they introduce no extra image-specific inductive bias. Simply taking patches and processing them exactly as in an NLP task already yields remarkable performance and a model that works well when pre-trained. Challenges remain, however: the approach still has to carry over to tasks such as detection and segmentation, and self-supervised learning still shows a gap to the large-scale supervised approach.&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;In fact, Google recently published MLP-Mixer, a paper that rivals ViT.&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Its architecture is nearly identical to ViT's but uses MLPs, making it simple, cheaper to compute, and strong in accuracy. It still needs a large-scale pre-training dataset, though, so it is worth reading as a continuation of the BiT -&amp;gt; ViT line. For a more efficient Transformer-based network, DeiT is worth a look.&lt;/p&gt;
      <category>논문읽기</category>
      <category>Transformer #classification #SOTA #Google</category>
      <category>vit</category>
      <author>곰돌이만세</author>
      <guid isPermaLink="true">https://hyperml.tistory.com/25</guid>
      <comments>https://hyperml.tistory.com/25#entry25comment</comments>
      <pubDate>Sun, 18 Jul 2021 10:56:53 +0900</pubDate>
    </item>
    <item>
      <title>RandAugment: Practical automated data augmentation with a reduced search space (google brain)</title>
      <link>https://hyperml.tistory.com/24</link>
<description>&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/sNvhX/btqEtNWqPXw/abkLKKepsK10P6Huv4vR91/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/sNvhX/btqEtNWqPXw/abkLKKepsK10P6Huv4vR91/img.png&quot; data-alt=&quot;Examples of augmentation operations and their visualized effects&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/sNvhX/btqEtNWqPXw/abkLKKepsK10P6Huv4vR91/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FsNvhX%2FbtqEtNWqPXw%2FabkLKKepsK10P6Huv4vR91%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;Examples of augmentation operations and their visualized effects&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Introduction&lt;/h2&gt;
&lt;p&gt;- Data augmentation has traditionally required manual work by experts to design a policy.&lt;/p&gt;
&lt;p&gt;- The learnable augmentation-policy techniques introduced so far have improved accuracy, model robustness, and overall performance.&lt;/p&gt;
&lt;p&gt;- NAS (Neural Architecture Search)-based optimization raised predictive performance further, but its complexity and enormous computational demands made it unattractive.&lt;/p&gt;
&lt;p&gt;- Hence more efficient augmentation-policy search methods were proposed, such as AutoAugment (18.05, Google Brain) and Fast AutoAugment (19.05, Kakao Brain).&lt;/p&gt;
&lt;p&gt;- Even so, training these models remained costly and complex.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;- To state the conclusion first: the best data augmentation depends on an optimal magnitude determined by the model size and the training-set size.&amp;nbsp;&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;&amp;nbsp;Put simply, this paper shows that for large datasets or large models, the optimum can be found easily by tuning a single normalized magnitude shared across the augmentation operations (rotate, shear, and so on), regardless of which operation is applied.&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;AA vs FAA in GPU computation hours&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; width=&quot;533&quot; height=&quot;NaN&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/lSJdC/btqEtOVmZkm/wB7KGvWij66kFOy92fjP61/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/lSJdC/btqEtOVmZkm/wB7KGvWij66kFOy92fjP61/img.png&quot; data-alt=&quot;GPU x hours for AutoAugment and Fast AutoAugment&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/lSJdC/btqEtOVmZkm/wB7KGvWij66kFOy92fjP61/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FlSJdC%2FbtqEtOVmZkm%2FwB7KGvWij66kFOy92fjP61%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; width=&quot;533&quot; height=&quot;NaN&quot; data-ke-mobilestyle=&quot;widthContent&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;GPU x hours for AutoAugment and Fast AutoAugment&amp;nbsp;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;The most advanced and efficient learned-augmentation methods to date are AA and FAA, but their cost is still far from practical. AA optimizes the operations with reinforcement learning (choosing the operation type, its strength, and how often it is applied), while FAA uses Bayesian optimization.&lt;/p&gt;
&lt;p&gt;(5000 GPU x hours means 5000 hours of compute on one GPU, or one hour on 5000 GPUs.)&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Search Space&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/c41KrZ/btqEvwZ2FAd/amY8zCEAKCc85k7N72hiC0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/c41KrZ/btqEvwZ2FAd/amY8zCEAKCc85k7N72hiC0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/c41KrZ/btqEvwZ2FAd/amY8zCEAKCc85k7N72hiC0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fc41KrZ%2FbtqEvwZ2FAd%2FamY8zCEAKCc85k7N72hiC0%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;AA and FAA were introduced above; PBA is Population Based Augmentation (ICML19, 19.05), which optimizes the augmentation operations with a genetic algorithm.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Why not use the gradient-descent-style methods everyone is familiar with? Because some of the operations introduced below are non-differentiable. That is why the optimization falls back on more heuristic approaches such as reinforcement learning or genetic algorithms.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Related Works&lt;/h2&gt;
&lt;p&gt;The overview above was brief; here is a slightly more detailed look at the earlier papers.&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;AutoAugment&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/ODupI/btqEvigD6AC/ybuC9cFQCagnPVgcAkHGFK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/ODupI/btqEvigD6AC/ybuC9cFQCagnPVgcAkHGFK/img.png&quot; data-alt=&quot;AutoAugment, Google Brain&amp;amp;amp;nbsp;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/ODupI/btqEvigD6AC/ybuC9cFQCagnPVgcAkHGFK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FODupI%2FbtqEvigD6AC%2FybuC9cFQCagnPVgcAkHGFK%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;AutoAugment, Google Brain&amp;nbsp;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;- A simple strategy using an RNN controller and reinforcement learning. It optimizes three things: operation type, probability, and magnitude.&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Population Based Augmentation&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cVlGhv/btqEuYJJir1/p5RJkCJIgKUiUBypk36KHK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cVlGhv/btqEuYJJir1/p5RJkCJIgKUiUBypk36KHK/img.png&quot; data-alt=&quot;PBA, Berkeley Univ.&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cVlGhv/btqEuYJJir1/p5RJkCJIgKUiUBypk36KHK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcVlGhv%2FbtqEuYJJir1%2Fp5RJkCJIgKUiUBypk36KHK%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;PBA, Berkeley Univ.&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;- At every generation, 16 child models are created and evaluated, and the survivors are passed on to the next generation.&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Fast AutoAugment&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; width=&quot;682&quot; height=&quot;NaN&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/1oJPw/btqEtFLbvi6/obsVNUs5kggkFEBMJf3fj0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/1oJPw/btqEtFLbvi6/obsVNUs5kggkFEBMJf3fj0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/1oJPw/btqEtFLbvi6/obsVNUs5kggkFEBMJf3fj0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F1oJPw%2FbtqEtFLbvi6%2FobsVNUs5kggkFEBMJf3fj0%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; width=&quot;682&quot; height=&quot;NaN&quot; data-ke-mobilestyle=&quot;widthContent&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Method&lt;/h2&gt;
&lt;p&gt;- Applying the method is very simple: write code like the snippet below and run a grid search.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/coD1Ri/btqEtFRUEA0/74gCZiaaEsr6HvhRJ90yyk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/coD1Ri/btqEtFRUEA0/74gCZiaaEsr6HvhRJ90yyk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/coD1Ri/btqEtFRUEA0/74gCZiaaEsr6HvhRJ90yyk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcoD1Ri%2FbtqEtFRUEA0%2F74gCZiaaEsr6HvhRJ90yyk%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;- It is simply a function that selects operations at random: each of the 14 operations is called with uniform probability 1/14.&lt;/p&gt;
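That selection logic reduces to a few lines of Python. The sketch below follows the spirit of the paper's published pseudocode; the operation names are placeholders standing in for actual image transforms, which are assumed to be implemented elsewhere.

```python
import random

# The 14 candidate operations (placeholder names; the real transform
# implementations are assumed to live elsewhere).
TRANSFORMS = [
    "Identity", "AutoContrast", "Equalize", "Rotate", "Solarize",
    "Color", "Posterize", "Contrast", "Brightness", "Sharpness",
    "ShearX", "ShearY", "TranslateX", "TranslateY",
]

def randaugment(n, m):
    """Sample n operations uniformly at random (each with probability 1/14)
    and pair every one of them with the single global magnitude m."""
    return [(random.choice(TRANSFORMS), m) for _ in range(n)]
```

Note that unlike AA/FAA/PBA, nothing per-operation is learned here: the only knobs left are N and M.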
&lt;p&gt;- The candidate operations are shown in the figure below. Notably, the commonly used flip is absent.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/djxkOD/btqEvxEF3gN/JOnna7BWosEMzTKkPLMzKK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/djxkOD/btqEvxEF3gN/JOnna7BWosEMzTKkPLMzKK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/djxkOD/btqEvxEF3gN/JOnna7BWosEMzTKkPLMzKK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdjxkOD%2FbtqEvxEF3gN%2FJOnna7BWosEMzTKkPLMzKK%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;- The space of possible operation sequences is K^N, where N is the number of times an operation is applied.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;- Compared with AA, FAA, and PBA, which had very large search spaces because they also had to decide the probability and magnitude of every operation, this is a drastic simplification.&lt;/p&gt;
&lt;p&gt;- To see how the magnitude is normalized, let us briefly look at the operations from the authors' earlier paper, AA.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b3Zs6W/btqEu6HzTds/gqhsIMnkMKayYBk4uXt5N0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b3Zs6W/btqEu6HzTds/gqhsIMnkMKayYBk4uXt5N0/img.png&quot; data-alt=&quot;Operations and their magnitude ranges in AutoAugment&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b3Zs6W/btqEu6HzTds/gqhsIMnkMKayYBk4uXt5N0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb3Zs6W%2FbtqEu6HzTds%2FgqhsIMnkMKayYBk4uXt5N0%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;Operations and their magnitude ranges in AutoAugment&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;- The rightmost column is the normalized magnitude range.&lt;/p&gt;
&lt;p&gt;- The earlier paper discretized this range into the integers 0~10, whereas RA (RandAugment) widens the magnitude range to 1~30.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;- One might object: &quot;During the training or iteration that the existing learned-augment methods use to optimize their operation parameters, could the magnitude of each individual operation not drift?&quot;&lt;/p&gt;
&lt;p&gt;- The graph below (taken from PBA) shows that it does not.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/btGSfo/btqEvinspkm/lmsNi1Lun42CZko1EwLC80/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/btGSfo/btqEvinspkm/lmsNi1Lun42CZko1EwLC80/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/btGSfo/btqEvinspkm/lmsNi1Lun42CZko1EwLC80/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbtGSfo%2FbtqEvinspkm%2FlmsNi1Lun42CZko1EwLC80%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;- The magnitude ratios of the individual operations change little while training progresses, so the authors reasoned that the per-operation magnitudes fixed at the start in RA need not be varied.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;- They also ran experiments on which strategy works best for setting that fixed initial magnitude.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b7Gk7k/btqEuz4vdJc/uhKoesouD3uX53lQuZgei0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b7Gk7k/btqEuz4vdJc/uhKoesouD3uX53lQuZgei0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b7Gk7k/btqEuz4vdJc/uhKoesouD3uX53lQuZgei0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb7Gk7k%2FbtqEuz4vdJc%2FuhKoesouD3uX53lQuZgei0%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;- Four strategies were tested, and they show almost no difference.&lt;/p&gt;
&lt;p&gt;- Constant Magnitude, the cheapest one, was therefore chosen.&lt;/p&gt;
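For reference, the four magnitude-setting strategies compared here can be sketched as below. The schedule names follow the paper's description (random, constant, linearly increasing, and random with an increasing upper bound); the exact parameterization is my assumption, intended only to show why the constant schedule is the cheapest to tune.

```python
import random

def magnitude(schedule, m_max, step, total_steps):
    """Return the magnitude to use at a given training step under one of the
    four schedules. Only 'constant' has no dependence on the training step."""
    frac = step / total_steps
    if schedule == "constant":
        return m_max                                  # one fixed value, one knob
    if schedule == "random":
        return random.uniform(0, m_max)               # resampled every call
    if schedule == "linear":
        return m_max * frac                           # grows linearly with training
    if schedule == "random_increasing":
        return random.uniform(0, m_max * frac)        # random with a growing upper bound
    raise ValueError(schedule)
```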
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Experiments&lt;/h2&gt;
&lt;p&gt;- In the earlier AA, FAA, and similar methods, a proxy task was essential for applying the learned augmentation to a dataset.&lt;/p&gt;
&lt;p&gt;- A proxy task means running the policy optimization on a small dataset, built as a subset of the large dataset that is the actual training target.&lt;/p&gt;
&lt;p&gt;- The underlying assumption is that a policy chosen on the small dataset is also effective on the large one.&lt;/p&gt;
&lt;p&gt;- The authors ran experiments to check whether that actually holds (varying model size and dataset size).&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bLPWck/btqEuXRAPBc/ziuaSKUmB59VcLnQ5ZKIuk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bLPWck/btqEuXRAPBc/ziuaSKUmB59VcLnQ5ZKIuk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bLPWck/btqEuXRAPBc/ziuaSKUmB59VcLnQ5ZKIuk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbLPWck%2FbtqEuXRAPBc%2FziuaSKUmB59VcLnQ5ZKIuk%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;(a) Larger networks achieve higher accuracy, and their optimal magnitude settles at larger values.&lt;/p&gt;
&lt;p&gt;(b) For a closer look, the region where the optimal magnitude lands can be tracked against network size via the widening parameter of WRN (Wide ResNet).&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;color: #333333;&quot;&gt;(c) The larger the training set, the higher the accuracy obtained with a large magnitude setting.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;color: #333333;&quot;&gt;(d) The optimal magnitude increases monotonically with training-set size.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;color: #333333;&quot;&gt;- These findings would be hard to guess without running the experiments: countering a larger dataset or a larger network with a single larger fixed magnitude, rather than with a magnitude range, is not intuitive.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;- One might still wonder whether managing the magnitude of each operation individually would be better.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/JTRVG/btqEu61QHa6/Rnkqx7OFqPdYwd9N8gDgMK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/JTRVG/btqEu61QHa6/Rnkqx7OFqPdYwd9N8gDgMK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/JTRVG/btqEu61QHa6/Rnkqx7OFqPdYwd9N8gDgMK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FJTRVG%2FbtqEu61QHa6%2FRnkqx7OFqPdYwd9N8gDgMK%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;- The graph above shows that it would not: varying only the magnitudes of the rotate and translate (shift) operations among the 14 barely changes the overall accuracy.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Test Results&lt;/h2&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;CIFAR &amp;amp; SVHN&lt;/h4&gt;
&lt;table style=&quot;border-collapse: collapse; width: 93.3655%;&quot; border=&quot;1&quot; width=&quot;853&quot;&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style=&quot;width: 24.9729%;&quot; width=&quot;213&quot; height=&quot;39&quot;&gt;
&lt;p&gt;&lt;span&gt;Dataset&lt;/span&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;td style=&quot;width: 24.9729%;&quot; width=&quot;213&quot;&gt;
&lt;p&gt;&lt;span&gt;Network&lt;/span&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;td style=&quot;width: 24.9729%;&quot; width=&quot;213&quot;&gt;
&lt;p&gt;&lt;span&gt;optimized N&lt;/span&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;td style=&quot;width: 24.9729%;&quot; width=&quot;213&quot;&gt;
&lt;p&gt;&lt;span&gt;optimized M&lt;/span&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;width: 24.9729%;&quot; width=&quot;213&quot; height=&quot;39&quot;&gt;
&lt;p&gt;&lt;span&gt;CIFAR-10&lt;/span&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;td style=&quot;width: 24.9729%;&quot; width=&quot;213&quot;&gt;&amp;nbsp;&lt;/td&gt;
&lt;td style=&quot;width: 24.9729%;&quot; width=&quot;213&quot;&gt;
&lt;p&gt;&lt;span&gt;1&lt;/span&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;td style=&quot;width: 24.9729%;&quot; width=&quot;213&quot;&gt;
&lt;p&gt;&lt;span&gt;5&lt;/span&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;width: 24.9729%;&quot; width=&quot;213&quot; height=&quot;39&quot;&gt;
&lt;p&gt;&lt;span&gt;CIFAR-100&lt;/span&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;td style=&quot;width: 24.9729%;&quot; width=&quot;213&quot;&gt;
&lt;p&gt;&lt;span&gt;WRN-28-2&lt;/span&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;td style=&quot;width: 24.9729%;&quot; width=&quot;213&quot;&gt;
&lt;p&gt;&lt;span&gt;1&lt;/span&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;td style=&quot;width: 24.9729%;&quot; width=&quot;213&quot;&gt;
&lt;p&gt;&lt;span&gt;2&lt;/span&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;width: 24.9729%;&quot; width=&quot;213&quot; height=&quot;39&quot;&gt;
&lt;p&gt;&lt;span&gt;CIFAR-100&lt;/span&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;td style=&quot;width: 24.9729%;&quot; width=&quot;213&quot;&gt;
&lt;p&gt;&lt;span&gt;WRN-28-10&lt;/span&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;td style=&quot;width: 24.9729%;&quot; width=&quot;213&quot;&gt;
&lt;p&gt;&lt;span&gt;2&lt;/span&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;td style=&quot;width: 24.9729%;&quot; width=&quot;213&quot;&gt;
&lt;p&gt;&lt;span&gt;14&lt;/span&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;N = {1, 2}&lt;/p&gt;
&lt;p&gt;M = {2, 6, 10, 14}&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;N decides whether the operations are applied once or twice, and&lt;/p&gt;
&lt;p&gt;M decides how the magnitude is set.&lt;/p&gt;
&lt;p&gt;- Applying this to CIFAR-10 and CIFAR-100, the optimal parameters found in the 2 x 4 = 8 training runs above are recorded in the table.&lt;/p&gt;
&lt;p&gt;- The search space is stated as 10^2, but with a coarser grid the optimum can in fact be found in fewer runs.&lt;/p&gt;
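The grid search described above is trivially small; a minimal sketch, assuming a hypothetical `train_and_eval(n, m)` helper that trains a model with RandAugment(N=n, M=m) and returns validation accuracy, could look like:

```python
def grid_search(train_and_eval, n_values=(1, 2), m_values=(2, 6, 10, 14)):
    """Exhaustively try every (N, M) pair — here 2 x 4 = 8 training runs —
    and keep the one with the best validation accuracy."""
    best = None
    for n in n_values:
        for m in m_values:
            acc = train_and_eval(n, m)
            if best is None or acc > best[0]:
                best = (acc, n, m)
    return best  # (best accuracy, optimal N, optimal M)
```

With a coarser grid over M, even fewer runs are needed, which is the point made above.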
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/wFPBl/btqEt6nYFZJ/AS8GDynfZDgD1sGkWS8os0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/wFPBl/btqEt6nYFZJ/AS8GDynfZDgD1sGkWS8os0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/wFPBl/btqEt6nYFZJ/AS8GDynfZDgD1sGkWS8os0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FwFPBl%2FbtqEt6nYFZJ%2FAS8GDynfZDgD1sGkWS8os0%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;- The table compares performance against the other methods: despite the much simpler search rule, the results are on par or better.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;ImageNet &amp;amp; COCO&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cN3YGr/btqEuAWEigV/zHbWgXrNFHOTHSEjbAz4Ak/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cN3YGr/btqEuAWEigV/zHbWgXrNFHOTHSEjbAz4Ak/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cN3YGr/btqEuAWEigV/zHbWgXrNFHOTHSEjbAz4Ak/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcN3YGr%2FbtqEuAWEigV%2FzHbWgXrNFHOTHSEjbAz4Ak%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;- FAA has no ImageNet results at all. That may be the difference between Google and Kakao: without Google-scale GPU power, an ImageNet or COCO experiment might never finish.&lt;/p&gt;
&lt;p&gt;- RA again shows the best performance. CutOut was also tried on ImageNet but reportedly failed to improve accuracy.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/PGuRT/btqEtNoFIc3/brvKKxI24OJpnd4EAyFF3k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/PGuRT/btqEtNoFIc3/brvKKxI24OJpnd4EAyFF3k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/PGuRT/btqEtNoFIc3/brvKKxI24OJpnd4EAyFF3k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FPGuRT%2FbtqEtNoFIc3%2FbrvKKxI24OJpnd4EAyFF3k%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;- On COCO, AA performs better, but in terms of search-space efficiency there is an unbridgeable gap between the two.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;- Next, how much each operation contributes to, or detracts from, accuracy.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bFimY8/btqEttRGuFg/eCjeYwTL9owu730L382IXK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bFimY8/btqEttRGuFg/eCjeYwTL9owu730L382IXK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bFimY8/btqEttRGuFg/eCjeYwTL9owu730L382IXK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbFimY8%2FbtqEttRGuFg%2FeCjeYwTL9owu730L382IXK%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/80Oso/btqEvwseE2o/UrKTvTYzdxbjJ4WEkFv8q0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/80Oso/btqEvwseE2o/UrKTvTYzdxbjJ4WEkFv8q0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/80Oso/btqEvwseE2o/UrKTvTYzdxbjJ4WEkFv8q0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F80Oso%2FbtqEvwseE2o%2FUrKTvTYzdxbjJ4WEkFv8q0%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; data-ke-mobilestyle=&quot;widthContent&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;- The graph makes it clear at a glance, and the table below shows the contribution of all the other operations.&lt;/p&gt;
&lt;p&gt;- Rotation helps the most, and posterize hurts the most.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;- That concludes this look at RandAugment, published by Google Brain.&lt;/p&gt;
&lt;p&gt;- The Google Brain team says it will also apply this work to other ML domains (3D perception, speech recognition, and so on), which is worth looking forward to.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Thank you for reading.&lt;/p&gt;
      <category>논문읽기</category>
      <author>곰돌이만세</author>
      <guid isPermaLink="true">https://hyperml.tistory.com/24</guid>
      <comments>https://hyperml.tistory.com/24#entry24comment</comments>
      <pubDate>Fri, 29 May 2020 05:37:55 +0900</pubDate>
    </item>
    <item>
      <title>[ICCV17]Unified Deep Supervised Domain Adaptation and Generalization</title>
      <link>https://hyperml.tistory.com/22</link>
      <description>&lt;h2 data-ke-size=&quot;size26&quot;&gt;What is domain adaptation?&lt;/h2&gt;
&lt;p&gt;Typically, people take a model trained on ImageNet or the COCO dataset, try it on the task at hand, and then move on to transfer learning and similar steps to raise performance.&lt;/p&gt;
&lt;p&gt;Open training datasets such as ImageNet are easy to obtain, but they cannot cover every task in the world. If you were building a system to monitor construction-site CCTV, for example, running an object detector trained on ImageNet or COCO person as-is would give dismal detection results.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;This is where the need for domain adaptation arose.&lt;/p&gt;
&lt;p&gt;The idea is to retrain an already similarly trained model, with little effort and without catastrophic forgetting, so that it works well on the domain you actually care about.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;A widely used approach trains on a source dataset and tests on a target dataset, while keeping the pretrained network from telling the two apart.&lt;/p&gt;
&lt;p&gt;DA has been tried in supervised, unsupervised, and semi-supervised settings, but the supervised approach (SDA) is known to outperform the unsupervised one (UDA).&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Related Works&lt;/h2&gt;
&lt;p&gt;(1) Mapping the distribution between source and target (UDA)&lt;/p&gt;
&lt;p&gt;(2) Finding a shared latent space between the source and target distributions (UDA, SDA)&lt;/p&gt;
&lt;p&gt;(3) Regularizing a classifier trained on the source distribution so that it works well on the target distribution (SDA)&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;The core of this paper&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-filename=&quot;스크린샷 2019-12-23 오후 8.24.33.png&quot; data-origin-width=&quot;1524&quot; data-origin-height=&quot;476&quot; width=&quot;902&quot; height=&quot;282&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/OdLmX/btqAErwaBaJ/AKfg9B44GWrqbio4E0ju6K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/OdLmX/btqAErwaBaJ/AKfg9B44GWrqbio4E0ju6K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/OdLmX/btqAErwaBaJ/AKfg9B44GWrqbio4E0ju6K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FOdLmX%2FbtqAErwaBaJ%2FAKfg9B44GWrqbio4E0ju6K%2Fimg.png&quot; data-filename=&quot;스크린샷 2019-12-23 오후 8.24.33.png&quot; data-origin-width=&quot;1524&quot; data-origin-height=&quot;476&quot; width=&quot;902&quot; height=&quot;282&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;A Siamese network with shared weights is used to train the feature extractor&lt;/li&gt;
&lt;li&gt;The network is split into g (feature extractor) and h (classifier), and the classifier is trained only on the source dataset&lt;/li&gt;
&lt;li&gt;Performance is improved by a contrastive loss that minimizes the difference between samples that share a class but come from different domains.&amp;nbsp;&lt;br /&gt;Conversely, samples of different classes, regardless of domain, are penalized with a margin so that their difference is maximized.&lt;/li&gt;
&lt;li&gt;To optimize despite a scarce target dataset, n (from the source dataset) x m (from the target dataset) pairs of training samples are randomly sampled for the loss computation&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Losses&lt;/h2&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Total&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-filename=&quot;스크린샷 2019-12-23 오후 10.20.03.png&quot; data-origin-width=&quot;748&quot; data-origin-height=&quot;82&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/q5DZh/btqAEsotWSv/lH9OvFZ5JH8KnmRv30Qo7k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/q5DZh/btqAEsotWSv/lH9OvFZ5JH8KnmRv30Qo7k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/q5DZh/btqAEsotWSv/lH9OvFZ5JH8KnmRv30Qo7k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fq5DZh%2FbtqAEsotWSv%2FlH9OvFZ5JH8KnmRv30Qo7k%2Fimg.png&quot; data-filename=&quot;스크린샷 2019-12-23 오후 10.20.03.png&quot; data-origin-width=&quot;748&quot; data-origin-height=&quot;82&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;The network is written as two parts: g, which extracts the embedding, and h, which classifies.&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&amp;nbsp;&lt;/h4&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Classification loss&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-filename=&quot;스크린샷 2019-12-23 오후 10.19.41.png&quot; data-origin-width=&quot;440&quot; data-origin-height=&quot;78&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/GBOPi/btqAGDCAzz4/vqyLTb8ZkaXwwkBSYNeGWK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/GBOPi/btqAGDCAzz4/vqyLTb8ZkaXwwkBSYNeGWK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/GBOPi/btqAGDCAzz4/vqyLTb8ZkaXwwkBSYNeGWK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FGBOPi%2FbtqAGDCAzz4%2FvqyLTb8ZkaXwwkBSYNeGWK%2Fimg.png&quot; data-filename=&quot;스크린샷 2019-12-23 오후 10.19.41.png&quot; data-origin-width=&quot;440&quot; data-origin-height=&quot;78&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;The classification loss is computed only with the ground truth of the source dataset.&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Semantic Alignment loss&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-filename=&quot;스크린샷 2019-12-23 오후 10.23.14.png&quot; data-origin-width=&quot;644&quot; data-origin-height=&quot;146&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bCVztl/btqADHGmwHB/r7KlMMirEJ6yT0auJnN2p1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bCVztl/btqADHGmwHB/r7KlMMirEJ6yT0auJnN2p1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bCVztl/btqADHGmwHB/r7KlMMirEJ6yT0auJnN2p1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbCVztl%2FbtqADHGmwHB%2Fr7KlMMirEJ6yT0auJnN2p1%2Fimg.png&quot; data-filename=&quot;스크린샷 2019-12-23 오후 10.23.14.png&quot; data-origin-width=&quot;644&quot; data-origin-height=&quot;146&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;g is the feature extractor. Since one goal of DA is to make the source and target datasets indistinguishable, the distance between the two distributions, computed class-wise, is used as a loss.&lt;/p&gt;
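&lt;p&gt;As a rough sketch of the idea (the function name and the 1/2-squared-distance form are illustrative assumptions, not code from the paper), the semantic alignment term can be read as the mean squared distance between same-class embeddings drawn from the two domains:&lt;/p&gt;

```python
def semantic_alignment_loss(pairs):
    """Sketch of the semantic alignment term: pull together embeddings
    of SAME-class samples drawn from different domains.
    pairs: list of (g_source, g_target) embedding vectors."""
    total = 0.0
    for g_s, g_t in pairs:
        # squared Euclidean distance between the two embeddings
        total += 0.5 * sum((a - b) ** 2 for a, b in zip(g_s, g_t))
    return total / len(pairs)
```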
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Separation loss&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-filename=&quot;스크린샷 2019-12-23 오후 10.24.28.png&quot; data-origin-width=&quot;672&quot; data-origin-height=&quot;128&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/TfHlo/btqAHp4T8Se/f5trCNrNb0tUVabumKfxG1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/TfHlo/btqAHp4T8Se/f5trCNrNb0tUVabumKfxG1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/TfHlo/btqAHp4T8Se/f5trCNrNb0tUVabumKfxG1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FTfHlo%2FbtqAHp4T8Se%2Ff5trCNrNb0tUVabumKfxG1%2Fimg.png&quot; data-filename=&quot;스크린샷 2019-12-23 오후 10.24.28.png&quot; data-origin-width=&quot;672&quot; data-origin-height=&quot;128&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;A penalty is imposed when the classes differ. k is a metric that measures the distance between the source and target distributions.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-filename=&quot;스크린샷 2019-12-23 오후 10.38.44.png&quot; data-origin-width=&quot;1994&quot; data-origin-height=&quot;642&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dch5gN/btqAHHK17pS/iLliJzUCwsJ2lmWLRzPP80/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dch5gN/btqAHHK17pS/iLliJzUCwsJ2lmWLRzPP80/img.png&quot; data-alt=&quot;MNIST-USPS dataset visualized by t-SNE (좌) DA를 수행하지 않은 상태 (중) 2d embedding만하고 DA를 수행하지 않은 상태 (우) DA를 수행한 상태&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dch5gN/btqAHHK17pS/iLliJzUCwsJ2lmWLRzPP80/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fdch5gN%2FbtqAHHK17pS%2FiLliJzUCwsJ2lmWLRzPP80%2Fimg.png&quot; data-filename=&quot;스크린샷 2019-12-23 오후 10.38.44.png&quot; data-origin-width=&quot;1994&quot; data-origin-height=&quot;642&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;MNIST-USPS dataset visualized by t-SNE: (left) without DA (middle) with only a 2D embedding, still without DA (right) with DA&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;(left) Same-class samples from the two datasets are not clustered together but scattered apart (a 0 from USPS and a 0 from MNIST sit far from each other)&lt;/p&gt;
&lt;p&gt;(middle) Same-class samples from different domains are still separated (trained on MNIST only, without DA)&lt;/p&gt;
&lt;p&gt;(right) Same-class samples from different domains are clustered together.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;How is it trained with scarce data?&lt;/h2&gt;
&lt;p&gt;In supervised DA, the amount of data is usually limited.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/oskve/btqAErXsAmQ/e9YXjDpL9HK29yjvV09QHk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/oskve/btqAErXsAmQ/e9YXjDpL9HK29yjvV09QHk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/oskve/btqAErXsAmQ/e9YXjDpL9HK29yjvV09QHk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Foskve%2FbtqAErXsAmQ%2Fe9YXjDpL9HK29yjvV09QHk%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;k: a popular metric that measures the similarity between Xsa and Xtb (s: source, t: target, a: class a, b: class b)&lt;/p&gt;
&lt;p&gt;There are too few samples to turn the X source and X target distributions directly into a loss, so the distributions are approximated by exploiting the fact that pairing source and target samples produces far more combinations.&lt;/p&gt;
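&lt;p&gt;A minimal sketch of that pairing trick (the names and interface are illustrative): with n source and m target samples, the cartesian product yields n x m candidate pairs, from which a random subset feeds the loss:&lt;/p&gt;

```python
import itertools
import random

def sample_pairs(source, target, k, seed=0):
    """Sketch of the n x m pairing: approximate the distribution distance
    from k random source-target pairs instead of the raw samples."""
    pairs = list(itertools.product(source, target))  # n * m candidate pairs
    rng = random.Random(seed)                        # fixed seed for repeatability
    return rng.sample(pairs, min(k, len(pairs)))
```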
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/qjplv/btqAHoydeNb/gTlR6qqTNmR2pBdxDr2zN0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/qjplv/btqAHoydeNb/gTlR6qqTNmR2pBdxDr2zN0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/qjplv/btqAHoydeNb/gTlR6qqTNmR2pBdxDr2zN0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fqjplv%2FbtqAHoydeNb%2FgTlR6qqTNmR2pBdxDr2zN0%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;m: margin&lt;/p&gt;
&lt;p&gt;Instead of feeding the raw difference between samples into the loss, it measures how far the distance falls short of the margin, so even when the loss reaches zero the classes still stay at least the margin apart.&lt;/p&gt;
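&lt;p&gt;The margin acts like a hinge, which a small sketch makes concrete (the names and the squared-hinge form are illustrative assumptions): different-class pairs already farther apart than the margin contribute zero, while closer pairs are pushed outward:&lt;/p&gt;

```python
import math

def separation_loss(pairs, margin=1.0):
    """Sketch of the separation term for DIFFERENT-class pairs:
    penalize only while the embedding distance is inside the margin."""
    total = 0.0
    for g_s, g_t in pairs:
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(g_s, g_t)))
        total += 0.5 * max(0.0, margin - d) ** 2  # hinge: zero beyond the margin
    return total / len(pairs)
```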
&lt;h2&gt;Experiment&lt;/h2&gt;
&lt;p style=&quot;font-size: 1.25em;&quot;&gt;First experiment&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Office dataset
&lt;ul style=&quot;list-style-type: disc;&quot;&gt;
&lt;li&gt;3 domains (Amazon, Webcam, DSLR)&lt;/li&gt;
&lt;li&gt;Each shares the same 31 classes; 75MB&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/m6Ryu/btqAFnNNoXU/AaHhcy9aHB84FOab4vTXaK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/m6Ryu/btqAFnNNoXU/AaHhcy9aHB84FOab4vTXaK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/m6Ryu/btqAFnNNoXU/AaHhcy9aHB84FOab4vTXaK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fm6Ryu%2FbtqAFnNNoXU%2FAaHhcy9aHB84FOab4vTXaK%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;A: Amazon, W: Webcam, D: DSLR domain&lt;/li&gt;
&lt;li&gt;lower bound: base model without DA&lt;/li&gt;
&lt;li&gt;splits: 5 (train-test)&lt;/li&gt;
&lt;li&gt;g: VGG16 with two fc layers of sizes 1024 and 128, ImageNet-pretrained&lt;/li&gt;
&lt;/ul&gt;
&lt;p style=&quot;font-size: 1.25em;&quot;&gt;Second experiment&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li style=&quot;font-size: 1.12em;&quot;&gt;
&lt;p data-ke-size=&quot;size14&quot;&gt;Of the 15 classes, only 10 labeled samples are included in training, while testing is performed on all 15&lt;/p&gt;
&lt;/li&gt;
&lt;li style=&quot;font-size: 1.12em;&quot;&gt;
&lt;p data-ke-size=&quot;size14&quot;&gt;The remaining 10 classes are tested in the same way&lt;/p&gt;
&lt;/li&gt;
&lt;li style=&quot;font-size: 1.12em;&quot;&gt;
&lt;p data-ke-size=&quot;size14&quot;&gt;The experimental protocol introduced in [60]&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/qupt8/btqAErb8EY6/akbjnevdQ46Dsge1CyK7W0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/qupt8/btqAErb8EY6/akbjnevdQ46Dsge1CyK7W0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/qupt8/btqAErb8EY6/akbjnevdQ46Dsge1CyK7W0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fqupt8%2FbtqAErb8EY6%2FakbjnevdQ46Dsge1CyK7W0%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;font-size: 1.25em;&quot;&gt;Third experiment&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li style=&quot;font-size: 1.25em;&quot;&gt;
&lt;p data-ke-size=&quot;size14&quot;&gt;DA is tested with SURF and DeCaF-fc6 features as input, using only 10 of the 31 classes; the paper does not explain why only 10 were chosen&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cb3gTS/btqADISTfIa/uuAI0Uoxtd1JIra2pV0vrK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cb3gTS/btqADISTfIa/uuAI0Uoxtd1JIra2pV0vrK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cb3gTS/btqADISTfIa/uuAI0Uoxtd1JIra2pV0vrK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fcb3gTS%2FbtqADISTfIa%2FuuAI0Uoxtd1JIra2pV0vrK%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;MNIST-USPS dataset test&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bOHtfm/btqAICxmo1a/IhVBHG6eK9qgJK37I438v0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bOHtfm/btqAICxmo1a/IhVBHG6eK9qgJK37I438v0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bOHtfm/btqAICxmo1a/IhVBHG6eK9qgJK37I438v0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbOHtfm%2FbtqAICxmo1a%2FIhVBHG6eK9qgJK37I438v0%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Both datasets consist of 0-9 handwritten digit images&lt;/li&gt;
&lt;li&gt;In CCSA-n, n is the number of samples per class&lt;/li&gt;
&lt;li&gt;2000 images are randomly sampled from MNIST and 1800 images from USPS&lt;/li&gt;
&lt;li&gt;Even a single training sample yields over 78% accuracy, and 8 samples exceed 90%. Would it work this well on real images, not just a simple dataset like MNIST?&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Ablation Study&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bQ0cFH/btqAHJRpznP/kgCpMrXEhmkT1Wh9y0GXhk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bQ0cFH/btqAHJRpznP/kgCpMrXEhmkT1Wh9y0GXhk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bQ0cFH/btqAHJRpznP/kgCpMrXEhmkT1Wh9y0GXhk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbQ0cFH%2FbtqAHJRpznP%2FkgCpMrXEhmkT1Wh9y0GXhk%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;CS: the loss consists only of the classification loss and the separation loss&lt;/li&gt;
&lt;li&gt;CSA: the loss consists only of the classification loss and the semantic alignment loss&lt;/li&gt;
&lt;li&gt;CCSA: all losses applied&lt;/li&gt;
&lt;li&gt;fine-tune: only the classifier is fine-tuned&lt;/li&gt;
&lt;li&gt;Applied individually, the separation loss and the semantic alignment loss show similar performance, perhaps because both are contrastive in nature&lt;/li&gt;
&lt;li&gt;SA pulls same-class samples together, while S pushes different-class samples apart&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&amp;nbsp;&lt;/h2&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Domain Generalization&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/blsEQS/btqAGdRH9dO/RuEtPyVVwk8KjhCACMpND0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/blsEQS/btqAGdRH9dO/RuEtPyVVwk8KjhCACMpND0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/blsEQS/btqAGdRH9dO/RuEtPyVVwk8KjhCACMpND0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FblsEQS%2FbtqAGdRH9dO%2FRuEtPyVVwk8KjhCACMpND0%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;The goal is not to adapt the source domains to one specific target domain, but to map the embedding features themselves into an invariant space.&lt;/li&gt;
&lt;li&gt;To that end, unordered sample pairs u, v from the source domains are optimized with the SA loss and the S loss.&lt;/li&gt;
&lt;li&gt;The extractor g and classifier h then behave invariantly for any mapped sample.&lt;/li&gt;
&lt;li&gt;Not all pairs are computed; samples are randomly selected.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Links&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;a style=&quot;letter-spacing: 0px;&quot; href=&quot;https://github.com/samotiian/CCSA&quot;&gt;https://github.com/samotiian/CCSA&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Domain Adaptation</category>
      <author>곰돌이만세</author>
      <guid isPermaLink="true">https://hyperml.tistory.com/22</guid>
      <comments>https://hyperml.tistory.com/22#entry22comment</comments>
      <pubDate>Mon, 23 Dec 2019 22:44:59 +0900</pubDate>
    </item>
    <item>
      <title>Domain Adaptation</title>
      <link>https://hyperml.tistory.com/21</link>
      <description>&lt;p&gt;Surveys:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://arxiv.org/pdf/1802.03601.pdf&quot;&gt;https://arxiv.org/pdf/1802.03601.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://arxiv.org/pdf/1702.05374.pdf&quot;&gt;https://arxiv.org/pdf/1702.05374.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Awesome DA:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://porter.io/github.com/artix41/awesome-transfer-learning&quot;&gt;https://porter.io/github.com/artix41/awesome-transfer-learning&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1577018295355&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-og-type=&quot;website&quot; data-og-title=&quot;Porter.io&quot; data-og-description=&quot;Porter.io&quot; data-og-host=&quot;porter.io&quot; data-og-source-url=&quot;https://porter.io/github.com/artix41/awesome-transfer-learning&quot; data-og-url=&quot;https://porter.io/github.com/artix41/awesome-transfer-learning&quot; data-og-image=&quot;&quot;&gt;&lt;a href=&quot;https://porter.io/github.com/artix41/awesome-transfer-learning&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://porter.io/github.com/artix41/awesome-transfer-learning&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url();&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot;&gt;Porter.io&lt;/p&gt;
&lt;p class=&quot;og-desc&quot;&gt;Porter.io&lt;/p&gt;
&lt;p class=&quot;og-host&quot;&gt;porter.io&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;SDA(Supervised Domain Adaptation):&lt;/p&gt;
&lt;p&gt;&lt;b&gt;CCSA&lt;/b&gt;&lt;span&gt;:&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;a href=&quot;https://arxiv.org/abs/1709.10190&quot;&gt;Unified Deep Supervised Domain Adaptation and Generalization&lt;/a&gt;&lt;span&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;(2017)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;FADA&lt;/b&gt;:&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;https://arxiv.org/abs/1711.02536&quot;&gt;Few-Shot Adversarial Domain Adaptation&lt;/a&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;(2017)&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Augmented-Cyc&lt;/b&gt;:&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;https://arxiv.org/abs/1807.00374&quot;&gt;Augmented Cyclic Adversarial Learning for Domain Adaptation&lt;/a&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;(2018)&lt;/p&gt;</description>
      <category>Domain Adaptation</category>
      <author>곰돌이만세</author>
      <guid isPermaLink="true">https://hyperml.tistory.com/21</guid>
      <comments>https://hyperml.tistory.com/21#entry21comment</comments>
      <pubDate>Sun, 22 Dec 2019 21:51:31 +0900</pubDate>
    </item>
    <item>
      <title>(CenterNet) Objects as Points</title>
      <link>https://hyperml.tistory.com/20</link>
      <description>&lt;p&gt;There are two different CenterNets&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;One is this paper, &lt;a href=&quot;https://arxiv.org/abs/1904.07850&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Objects as Points&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The other is &lt;a href=&quot;https://arxiv.org/abs/1904.08189&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;CenterNet: Keypoint Triplets for Object Detection&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Both were posted to arxiv.org in April 2019.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/chKJKp/btqAnwi5OD7/OUDSk0lkejViuwWeW6k4Uk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/chKJKp/btqAnwi5OD7/OUDSk0lkejViuwWeW6k4Uk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/chKJKp/btqAnwi5OD7/OUDSk0lkejViuwWeW6k4Uk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FchKJKp%2FbtqAnwi5OD7%2FOUDSk0lkejViuwWeW6k4Uk%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;Like the previously covered CornerNet, a growing number of papers perform object detection with keypoint heatmaps, and this paper sits on that same trend.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;What sets CenterNet apart?&lt;/h2&gt;
&lt;p&gt;- It performs object detection without any anchor boxes, deciding detections from the heatmap of the point placed at the object's center&lt;/p&gt;
&lt;p&gt;- From the features at that center point it can regress not only the detection but also object size, dimension, 3D extent, orientation, pose, and more&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dx4EIw/btqAlQXo3kr/0o3GFLN9GiQP698XdaCjUK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dx4EIw/btqAlQXo3kr/0o3GFLN9GiQP698XdaCjUK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dx4EIw/btqAlQXo3kr/0o3GFLN9GiQP698XdaCjUK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fdx4EIw%2FbtqAlQXo3kr%2F0o3GFLN9GiQP698XdaCjUK%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;- No NMS is required&lt;/p&gt;
&lt;p&gt;- High speed (though the speed varies considerably with the backbone)&lt;/p&gt;
&lt;p&gt;&amp;nbsp;142 fps @ ResNet-18 (28.1% COCO AP),&lt;/p&gt;
&lt;p&gt;&amp;nbsp;52 fps @ DLA-34 (37.4% COCO AP),&lt;/p&gt;
&lt;p&gt;&amp;nbsp;1.4 fps @ Hourglass-104 (45.1% COCO AP)&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/btJ7Vg/btqAnwi6lSy/OM7Ehgi608J0zwDg1RvX81/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/btJ7Vg/btqAnwi6lSy/OM7Ehgi608J0zwDg1RvX81/img.png&quot; data-alt=&quot;Speed-accuracy trade-off, FasterRCNN은 물론 YOLO보다 더 바깥쪽을 커버하고 있다.&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/btJ7Vg/btqAnwi6lSy/OM7Ehgi608J0zwDg1RvX81/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbtJ7Vg%2FbtqAnwi6lSy%2FOM7Ehgi608J0zwDg1RvX81%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;Speed-accuracy trade-off: it covers ground beyond not only Faster R-CNN but also YOLO.&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;- It needs just 1 keypoint, fewer than other keypoint-based object detectors (CornerNet uses 2, ExtremeNet 5)&lt;/p&gt;
&lt;p&gt;&amp;nbsp;(those detectors must run a grouping step to determine which keypoints belong to the same bounding box)&lt;/p&gt;
&lt;p&gt;&amp;nbsp;(grouping is an issue for bottom-up keypoint-based 2D pose estimation as well, since it takes a lot of computation)&lt;/p&gt;
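&lt;p&gt;How detection works without NMS can be sketched like this (a plain-Python stand-in for the 3x3 max filtering the paper uses to extract peaks; the function name and threshold value are illustrative): a pixel counts as a detection only if it is the maximum of its 3x3 neighborhood on the center heatmap:&lt;/p&gt;

```python
def heatmap_peaks(hm, thresh=0.3):
    """Sketch: detections are local maxima of the center heatmap,
    so a 3x3 max filter replaces NMS. hm is a 2D list of scores."""
    H, W = len(hm), len(hm[0])
    peaks = []
    for i in range(H):
        for j in range(W):
            v = hm[i][j]
            if v < thresh:
                continue
            neighborhood = [hm[a][b]
                            for a in range(max(0, i - 1), min(H, i + 2))
                            for b in range(max(0, j - 1), min(W, j + 2))]
            if v == max(neighborhood):   # keep only 3x3 local maxima
                peaks.append((i, j, v))
    return peaks
```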
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Which architectures were used?&lt;/h2&gt;
&lt;p&gt;- Stacked Hourglass Network, ResNet, and DLA (Deep Layer Aggregation) were used in the experiments.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/CvIgP/btqAkSA8obl/ON08dZrBYczCn0a98weQh1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/CvIgP/btqAkSA8obl/ON08dZrBYczCn0a98weQh1/img.png&quot; data-alt=&quot;Speed/accuracy trade off for different networks on COCO val. N.A.: no test augmentation, F: flip testing, MS: multi-scale augmentation&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/CvIgP/btqAkSA8obl/ON08dZrBYczCn0a98weQh1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FCvIgP%2FbtqAkSA8obl%2FON08dZrBYczCn0a98weQh1%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;Speed/accuracy trade off for different networks on COCO val. N.A.: no test augmentation, F: flip testing, MS: multi-scale augmentation&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;- Perhaps because the similar CornerNet architecture already exists, the paper body contains no figure of the network itself, only brief performance numbers for the architectures above. The supplement gives the more concrete figure below&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/uJ8hp/btqAjVrxad7/lRUuzGMrVmZK4Vh8KDtyGk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/uJ8hp/btqAjVrxad7/lRUuzGMrVmZK4Vh8KDtyGk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/uJ8hp/btqAjVrxad7/lRUuzGMrVmZK4Vh8KDtyGk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FuJ8hp%2FbtqAjVrxad7%2FlRUuzGMrVmZK4Vh8KDtyGk%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;(a) is the Hourglass architecture, a backbone originally used for single-person pose estimation. Thanks to its characteristic stack of repeated encoder-decoder modules, the features are refined with each pass, so keypoint locations become progressively more precise.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;(b) is a ResNet encoder-decoder.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;(c) and (d) are DLA-34 architectures; (d) adds a slight modification that improves performance.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Loss&lt;/h2&gt;
&lt;p&gt;The overall loss is as follows.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/o1tjd/btqAnvS3djH/7CAg80iz5kOr0BiI828CV1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/o1tjd/btqAnvS3djH/7CAg80iz5kOr0BiI828CV1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/o1tjd/btqAnvS3djH/7CAg80iz5kOr0BiI828CV1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fo1tjd%2FbtqAnvS3djH%2F7CAg80iz5kOr0BiI828CV1%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;Lk is a focal loss computed on the keypoint heatmap, where each ground-truth center point is splatted with a Gaussian kernel,&lt;/p&gt;
&lt;p&gt;Lsize is an L1 loss between the predicted bounding-box size (width, height) and the ground-truth box,&lt;/p&gt;
&lt;p&gt;and Loff is a loss that compensates for the positional error &lt;span style=&quot;color: #333333;&quot;&gt;(discretization error)&lt;/span&gt; introduced when the feature map is downsampled and then mapped back to the original resolution.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Taking them one at a time,&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/lE5YV/btqAo7Row7l/csywiMKAplvnRTlKtmL6f1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/lE5YV/btqAo7Row7l/csywiMKAplvnRTlKtmL6f1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/lE5YV/btqAo7Row7l/csywiMKAplvnRTlKtmL6f1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FlE5YV%2FbtqAo7Row7l%2FcsywiMKAplvnRTlKtmL6f1%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Y : the ground-truth center-point heatmap, splatted with a Gaussian kernel&lt;/p&gt;
&lt;p&gt;Yhat : &lt;span style=&quot;color: #333333;&quot;&gt;the predicted center-point heatmap value&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
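The penalty-reduced focal loss above can be sketched roughly as follows. This is a NumPy toy version under my reading of the formula, not the paper's actual code; the function and argument names are my own.

```python
import numpy as np

def center_focal_loss(y_hat, y, alpha=2, beta=4, eps=1e-12):
    """Pixel-wise penalty-reduced focal loss over the heatmap.

    y     : ground-truth heatmap with Gaussian-splatted centers (values in [0, 1])
    y_hat : predicted heatmap (values in (0, 1))
    The normalizer N is the number of peak pixels where y == 1.
    """
    pos = (y == 1)
    neg = ~pos
    pos_loss = ((1 - y_hat[pos]) ** alpha) * np.log(y_hat[pos] + eps)
    # the (1 - y)^beta factor down-weights pixels near a center,
    # i.e. inside the Gaussian bump around each ground-truth point
    neg_loss = ((1 - y[neg]) ** beta) * (y_hat[neg] ** alpha) * np.log(1 - y_hat[neg] + eps)
    n = max(pos.sum(), 1)
    return -(pos_loss.sum() + neg_loss.sum()) / n
```

A near-perfect prediction drives the loss toward zero, while a flat 0.5 heatmap is penalized heavily.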
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/runN8/btqApIxN3yi/zk0lxuR2wmDD5JrzJKUKt0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/runN8/btqApIxN3yi/zk0lxuR2wmDD5JrzJKUKt0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/runN8/btqApIxN3yi/zk0lxuR2wmDD5JrzJKUKt0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FrunN8%2FbtqApIxN3yi%2Fzk0lxuR2wmDD5JrzJKUKt0%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;R is the output stride (=4), p is a ground-truth keypoint, and p(tilde) is its low-resolution equivalent.&lt;/p&gt;
&lt;p&gt;Ohat is the predicted local offset.&lt;/p&gt;
&lt;p&gt;It is essentially an L1 loss.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/k4tj1/btqArSsH3G5/D4s31KrddsbNRe2cONqTxK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/k4tj1/btqArSsH3G5/D4s31KrddsbNRe2cONqTxK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/k4tj1/btqArSsH3G5/D4s31KrddsbNRe2cONqTxK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fk4tj1%2FbtqArSsH3G5%2FD4s31KrddsbNRe2cONqTxK%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-filename=&quot;스크린샷 2019-12-23 오후 8.11.29.png&quot; data-origin-width=&quot;1642&quot; data-origin-height=&quot;280&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/3JX5K/btqADYA3Is6/6lXg3JkbRJzTyDioK0W0C1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/3JX5K/btqADYA3Is6/6lXg3JkbRJzTyDioK0W0C1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/3JX5K/btqADYA3Is6/6lXg3JkbRJzTyDioK0W0C1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F3JX5K%2FbtqADYA3Is6%2F6lXg3JkbRJzTyDioK0W0C1%2Fimg.png&quot; data-filename=&quot;스크린샷 2019-12-23 오후 8.11.29.png&quot; data-origin-width=&quot;1642&quot; data-origin-height=&quot;280&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
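The size and offset terms are both plain L1 losses, which can be sketched as below. The function names are mine; in the paper the total loss weights these terms with lambda_size = 0.1 and lambda_off = 1.

```python
import numpy as np

def l1_size_loss(pred_wh, gt_wh):
    # average L1 distance over the N annotated objects' (w, h) pairs
    return np.abs(pred_wh - gt_wh).sum() / len(gt_wh)

def l1_offset_loss(pred_off, gt_points, stride=4):
    # the target offset is p/R - floor(p/R): the sub-pixel error
    # introduced by mapping image coordinates onto the stride-R feature map
    low_res = gt_points / stride
    target = low_res - np.floor(low_res)
    return np.abs(pred_off - target).sum() / len(gt_points)
```

For example, a ground-truth point at (13, 22) with stride 4 lands at (3.25, 5.5) on the feature map, so the offset target is (0.25, 0.5).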
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Objects as points&lt;/h2&gt;
&lt;p&gt;From the predicted object points (and box sizes), the bounding boxes and their confidences (the heatmap value at each peak) are obtained.&lt;/p&gt;
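This peak extraction is what replaces NMS here: a point counts as a detection if it is the maximum of its 3x3 neighborhood, and the top-scoring peaks are kept. A minimal NumPy sketch of that idea, with names of my own choosing:

```python
import numpy as np

def extract_peaks(heatmap, k=100):
    """Keep points whose value equals the maximum of their 3x3 neighborhood
    (a max-pool-style NMS), then take the top-k by score."""
    h, w = heatmap.shape
    padded = np.pad(heatmap, 1, constant_values=-np.inf)
    # 3x3 neighborhood maximum at every pixel, built from 9 shifted views
    neigh = np.max(
        [padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)],
        axis=0,
    )
    ys, xs = np.where(heatmap == neigh)
    order = np.argsort(heatmap[ys, xs])[::-1][:k]
    return list(zip(ys[order], xs[order], heatmap[ys, xs][order]))
```

Each returned triple is (row, col, confidence); the box is then drawn around that point using the regressed size.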
&lt;p&gt;By attaching different heads, the network can perform additional tasks.&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&amp;nbsp;3D detection&lt;/h4&gt;
&lt;p&gt;&amp;nbsp; The keypoint estimator's output can be extended to predict a 3D box.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;The depth of each point is predicted and then transformed as follows.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/QKwvy/btqAt50t1sv/HUq0JJrqB2Z9KCIUc8z5U1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/QKwvy/btqAt50t1sv/HUq0JJrqB2Z9KCIUc8z5U1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/QKwvy/btqAt50t1sv/HUq0JJrqB2Z9KCIUc8z5U1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FQKwvy%2FbtqAt50t1sv%2FHUq0JJrqB2Z9KCIUc8z5U1%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;The resulting depth is trained with an L1 loss.&lt;/p&gt;
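If I read the transformation above correctly, the raw network output d_hat is mapped to a positive depth via an inverse sigmoid, d = 1/sigma(d_hat) - 1. A small sketch of that decoding, assuming this formula:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_depth(d_hat):
    # maps the unconstrained raw output to a positive depth in meters:
    # large d_hat means close, very negative d_hat means far away
    return 1.0 / sigmoid(d_hat) - 1.0
```

At d_hat = 0 this gives a depth of exactly 1 meter, and the output grows without bound as d_hat decreases.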
&lt;p&gt;&amp;nbsp;The 3D dimensions (W, H, D) are three scalars, learned by a separate head, also with an L1 loss.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;Orientation follows Mousavian et al., who represented it with two bins.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp; Eight scalars are used, four per bin: within each bin, two are used for a softmax classification and the remaining two regress the angle relative to that bin.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;The individual losses are given below.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/2Pqxk/btqAuMTHKNt/P5iyDlUn2uuR2PUJ0R2j7k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/2Pqxk/btqAuMTHKNt/P5iyDlUn2uuR2PUJ0R2j7k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/2Pqxk/btqAuMTHKNt/P5iyDlUn2uuR2PUJ0R2j7k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F2Pqxk%2FbtqAuMTHKNt%2FP5iyDlUn2uuR2PUJ0R2j7k%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bgrtis/btqAsmvCs6g/V0nzu5TCo69TXVkBXdi4RK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bgrtis/btqAsmvCs6g/V0nzu5TCo69TXVkBXdi4RK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bgrtis/btqAsmvCs6g/V0nzu5TCo69TXVkBXdi4RK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbgrtis%2FbtqAsmvCs6g%2FV0nzu5TCo69TXVkBXdi4RK%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;gamma is the object's height, width, and length (in meters).&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/uSWJw/btqAtCxtB3J/kxDuXk0yd6GJU3dtQ1X8P1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/uSWJw/btqAtCxtB3J/kxDuXk0yd6GJU3dtQ1X8P1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/uSWJw/btqAtCxtB3J/kxDuXk0yd6GJU3dtQ1X8P1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FuSWJw%2FbtqAtCxtB3J%2FkxDuXk0yd6GJU3dtQ1X8P1%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bdwPv5/btqAt7RyrSH/372Ec0ClBcmm9SLVcho7Y1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bdwPv5/btqAt7RyrSH/372Ec0ClBcmm9SLVcho7Y1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bdwPv5/btqAt7RyrSH/372Ec0ClBcmm9SLVcho7Y1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbdwPv5%2FbtqAt7RyrSH%2F372Ec0ClBcmm9SLVcho7Y1%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bBBn3L/btqAtCEeUvN/AyD1luqf3U2ypLZH6WKlj0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bBBn3L/btqAtCEeUvN/AyD1luqf3U2ypLZH6WKlj0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bBBn3L/btqAtCEeUvN/AyD1luqf3U2ypLZH6WKlj0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbBBn3L%2FbtqAtCEeUvN%2FAyD1luqf3U2ypLZH6WKlj0%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;mi is the bin center, bi is the softmax classification, and ai holds the sin and cos of the in-bin offset.&lt;/p&gt;
&lt;p&gt;The angle within each bin is encoded as theta hat above.&lt;/p&gt;
&lt;p&gt;j is the index of the bin with the larger classification score.&lt;/p&gt;
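Putting this together, decoding an orientation could look roughly like the sketch below. The exact layout of the 8 scalars and which softmax entry means "the angle falls in this bin" are my assumptions, not spelled out above.

```python
import numpy as np

def decode_orientation(out, bin_centers=(-np.pi / 2, np.pi / 2)):
    """out: 8 scalars = [cls0_a, cls0_b, sin0, cos0, cls1_a, cls1_b, sin1, cos1].
    Pick the bin whose classification score is larger, then add the
    regressed in-bin offset (recovered with atan2) to that bin's center."""
    out = np.asarray(out, dtype=float)
    b0, b1 = out[:4], out[4:]
    # softmax score for "the angle falls in this bin" (index 1 of each pair)
    score0 = np.exp(b0[1]) / np.exp(b0[:2]).sum()
    score1 = np.exp(b1[1]) / np.exp(b1[:2]).sum()
    j = 0 if score0 >= score1 else 1
    sin_o, cos_o = (b0, b1)[j][2], (b0, b1)[j][3]
    return bin_centers[j] + np.arctan2(sin_o, cos_o)
```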
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-filename=&quot;스크린샷 2019-12-23 오후 8.11.00.png&quot; data-origin-width=&quot;1646&quot; data-origin-height=&quot;406&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b9wu25/btqAFn7Y7g1/tpByhgh2FYlaBIOR5fz24k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b9wu25/btqAFn7Y7g1/tpByhgh2FYlaBIOR5fz24k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b9wu25/btqAFn7Y7g1/tpByhgh2FYlaBIOR5fz24k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb9wu25%2FbtqAFn7Y7g1%2FtpByhgh2FYlaBIOR5fz24k%2Fimg.png&quot; data-filename=&quot;스크린샷 2019-12-23 오후 8.11.00.png&quot; data-origin-width=&quot;1646&quot; data-origin-height=&quot;406&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&amp;nbsp;Human Pose Estimation&lt;/h4&gt;
&lt;p&gt;The task is to locate the 17 (=k) joints defined by the COCO keypoint annotations.&lt;/p&gt;
&lt;p&gt;(Each keypoint has an x and y, so the head outputs k * 2 values.) The paper regresses the keypoints directly with an L1 loss, excluding occluded keypoints from training.&lt;/p&gt;
&lt;p&gt;To refine the results, the bottom-up style of processing, one of the two common families of pose estimation approaches, is used (PAF, Stacked Hourglass, PersonLab).&lt;/p&gt;
&lt;p&gt;To predict keypoints more reliably, the center offset is used as a grouping cue: each detected keypoint is assigned to a person based on its offset from the object center.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-filename=&quot;스크린샷 2019-12-23 오후 8.12.01.png&quot; data-origin-width=&quot;772&quot; data-origin-height=&quot;410&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Ak2wb/btqADYA3LF0/drIzfR7WeQr9MYuLoFN2CK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Ak2wb/btqADYA3LF0/drIzfR7WeQr9MYuLoFN2CK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Ak2wb/btqADYA3LF0/drIzfR7WeQr9MYuLoFN2CK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FAk2wb%2FbtqADYA3LF0%2FdrIzfR7WeQr9MYuLoFN2CK%2Fimg.png&quot; data-filename=&quot;스크린샷 2019-12-23 오후 8.12.01.png&quot; data-origin-width=&quot;772&quot; data-origin-height=&quot;410&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://hyperml.tistory.com/1&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;CMU-Pose is the PAF method previously posted on this blog.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Across these configurations it does not reach SOTA, but it shows that the network also extracts respectable features for human pose.&lt;/p&gt;
&lt;p&gt;That said, since the Hourglass architecture was &lt;a href=&quot;https://arxiv.org/abs/1603.06937&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;published long ago specifically for single-person pose&lt;/a&gt;, it is somewhat disappointing that this 2019 paper performs pose estimation with that same backbone.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-filename=&quot;스크린샷 2019-12-23 오후 8.10.35.png&quot; data-origin-width=&quot;1648&quot; data-origin-height=&quot;556&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cYOAo9/btqAD0r5Bwv/XarFYRNMreKoNSJLLoeVXk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cYOAo9/btqAD0r5Bwv/XarFYRNMreKoNSJLLoeVXk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cYOAo9/btqAD0r5Bwv/XarFYRNMreKoNSJLLoeVXk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcYOAo9%2FbtqAD0r5Bwv%2FXarFYRNMreKoNSJLLoeVXk%2Fimg.png&quot; data-filename=&quot;스크린샷 2019-12-23 오후 8.10.35.png&quot; data-origin-width=&quot;1648&quot; data-origin-height=&quot;556&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Related links&lt;/p&gt;
&lt;p&gt;github : &lt;a href=&quot;https://github.com/xingyizhou/CenterNet&quot;&gt;https://github.com/xingyizhou/CenterNet&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Object Detection</category>
      <category>centernet</category>
      <category>hourglass</category>
      <category>objectdetector</category>
      <author>곰돌이만세</author>
      <guid isPermaLink="true">https://hyperml.tistory.com/20</guid>
      <comments>https://hyperml.tistory.com/20#entry20comment</comments>
      <pubDate>Wed, 11 Dec 2019 00:38:05 +0900</pubDate>
    </item>
    <item>
      <title>[ECCV18]CornerNet: Detecting Object as Paired Keypoints</title>
      <link>https://hyperml.tistory.com/19</link>
      <description>&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/vVfCA/btqAlYNsPTO/wHigXv5G4p8Yy9kIZSqI21/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/vVfCA/btqAlYNsPTO/wHigXv5G4p8Yy9kIZSqI21/img.png&quot; data-alt=&quot;뭔가 이상하다고 생각했다면 당신이 옳다. error가 있는 것만 모여있는 그림이니까...ㅡ,.ㅡ&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/vVfCA/btqAlYNsPTO/wHigXv5G4p8Yy9kIZSqI21/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FvVfCA%2FbtqAlYNsPTO%2FwHigXv5G4p8Yy9kIZSqI21%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;뭔가 이상하다고 생각했다면 당신이 옳다. error가 있는 것만 모여있는 그림이니까...ㅡ,.ㅡ&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;CornerNet is an object detector that introduced a new approach within the 1-stage family.&lt;/p&gt;
&lt;p&gt;It directly detects the top-left and bottom-right points that define a box as keypoints, much like finding joint keypoints from heatmaps in 2D human pose estimation.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bBEPBP/btqAjnHVbex/ZzBeKexCMktyMPJFMjJkMK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bBEPBP/btqAjnHVbex/ZzBeKexCMktyMPJFMjJkMK/img.png&quot; data-alt=&quot;https://github.com/hoya012/deep_learning_object_detection&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bBEPBP/btqAjnHVbex/ZzBeKexCMktyMPJFMjJkMK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbBEPBP%2FbtqAjnHVbex%2FZzBeKexCMktyMPJFMjJkMK%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;https://github.com/hoya012/deep_learning_object_detection&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;How does it differ from existing 1-stage detectors?&lt;/h2&gt;
&lt;p&gt;1. Bounding boxes are found as paired corner points, and each point's embedding is used to match the pair. No set of anchor boxes (often as many as 100k) is needed.&lt;/p&gt;
&lt;p&gt;2. Corner pooling: scan toward the left-most and top-most positions and take the intersection of the maxima.&lt;/p&gt;
&lt;p&gt;3. Among 1-stage detectors it is one of the best, and it holds up well even against 2-stage detectors (see the table below).&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cJpE4F/btqAjn2fqHX/DjSafwvH0uop6KcA7cRI30/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cJpE4F/btqAjn2fqHX/DjSafwvH0uop6KcA7cRI30/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cJpE4F/btqAjn2fqHX/DjSafwvH0uop6KcA7cRI30/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcJpE4F%2FbtqAjn2fqHX%2FDjSafwvH0uop6KcA7cRI30%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;What architecture does it use?&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/biM7Ol/btqAlQIUjzL/6hacObCIm9H3fjbKkML4wk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/biM7Ol/btqAlQIUjzL/6hacObCIm9H3fjbKkML4wk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/biM7Ol/btqAlQIUjzL/6hacObCIm9H3fjbKkML4wk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbiM7Ol%2FbtqAlQIUjzL%2F6hacObCIm9H3fjbKkML4wk%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;The Hourglass architecture was introduced by Newell et al. in &quot;&lt;a href=&quot;https://arxiv.org/pdf/1603.06937.pdf&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Stacked Hourglass Networks for Human Pose Estimation&lt;/a&gt;&quot;; Jia Deng, one of the authors of this paper, is also a co-author of that work.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Features refined by the repeated hourglass-shaped network, which looks much like an autoencoder, are then used to locate the top-left and bottom-right corners.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;The authors tried other networks as backbones as well, but chose the hourglass because it performed best.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bCnlwo/btqAjM1JTq7/I6nT9H0d2uWGKLQWRsUIfK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bCnlwo/btqAjM1JTq7/I6nT9H0d2uWGKLQWRsUIfK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bCnlwo/btqAjM1JTq7/I6nT9H0d2uWGKLQWRsUIfK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbCnlwo%2FbtqAjM1JTq7%2FI6nT9H0d2uWGKLQWRsUIfK%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;The heatmaps produced by the hourglass are generated per class, separately for top-left and bottom-right corners.&lt;/p&gt;
&lt;p&gt;An embedding is then computed for each point in the heatmaps, and pairs are formed by measuring the similarity of these embeddings.&lt;/p&gt;
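This similarity is trained with "pull" and "push" terms over 1-D embeddings: pull the two corners of one object toward their mean, and push the means of different objects apart. A toy NumPy sketch under that reading (the function name and the delta = 1 separation threshold are assumptions on my part):

```python
import numpy as np

def pull_push_losses(e_tl, e_br, delta=1.0):
    """e_tl[k], e_br[k]: 1-D embeddings of the k-th object's two corners.
    'Pull' drags each corner pair toward its mean; 'push' keeps the means
    of different objects at least `delta` apart."""
    e_tl, e_br = np.asarray(e_tl, float), np.asarray(e_br, float)
    n = len(e_tl)
    centers = (e_tl + e_br) / 2.0
    pull = (((e_tl - centers) ** 2) + ((e_br - centers) ** 2)).sum() / n
    push = 0.0
    for k in range(n):
        for j in range(n):
            if j != k:
                # hinge: no penalty once two objects are delta apart
                push += max(0.0, delta - abs(centers[k] - centers[j]))
    push /= max(n * (n - 1), 1)
    return pull, push
```

Two objects whose corner embeddings already agree and sit far apart incur zero loss; two objects with nearby means are pushed until their gap reaches delta.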
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dY5prg/btqAmsnfEvr/fDUBos1EAkeM6FbUjYO5Wk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dY5prg/btqAmsnfEvr/fDUBos1EAkeM6FbUjYO5Wk/img.png&quot; data-alt=&quot;top-left corner pooling layer 개념&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dY5prg/btqAmsnfEvr/fDUBos1EAkeM6FbUjYO5Wk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdY5prg%2FbtqAmsnfEvr%2FfDUBos1EAkeM6FbUjYO5Wk%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;top-left corner pooling layer 개념&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;Corner pooling works as shown above: sweeping the feature map from right to left, each position is filled with the maximum value seen so far from the point where it was found; the same is done from bottom to top, and the two maps are added elementwise.&lt;/p&gt;
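The sweep just described is simply a running maximum in each direction. A minimal NumPy sketch of top-left corner pooling (the function name is my own):

```python
import numpy as np

def top_left_corner_pool(f):
    """Top-left corner pooling on a 2D feature map: every position takes the
    maximum of everything to its right (same row) and everything below it
    (same column); the two maps are summed elementwise."""
    # right-to-left running maximum along each row
    horiz = np.maximum.accumulate(f[:, ::-1], axis=1)[:, ::-1]
    # bottom-to-top running maximum along each column
    vert = np.maximum.accumulate(f[::-1, :], axis=0)[::-1, :]
    return horiz + vert
```

The bottom-right variant mirrors this, accumulating left-to-right and top-to-bottom instead.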
&lt;p&gt;This is needed because corner points usually sit at the edge of an object, where local evidence is scarce, so the detector has to start from wherever something actually exists. That said, as Jaewon Lee, who presented this paper at the PR12 YouTube paper-reading group, pointed out, it may struggle when several persons appear at similar positions, as in CCTV footage.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/IqWaM/btqAjaPyDdz/gQiFX4B9t20GDfnkyEXXZK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/IqWaM/btqAjaPyDdz/gQiFX4B9t20GDfnkyEXXZK/img.png&quot; data-alt=&quot;Taken from Jaewon Lee's PR12 video&amp;amp;amp;nbsp;https://www.youtube.com/watch?v=6OYmOtivQY8&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/IqWaM/btqAjaPyDdz/gQiFX4B9t20GDfnkyEXXZK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FIqWaM%2FbtqAjaPyDdz%2FgQiFX4B9t20GDfnkyEXXZK%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;Taken from Jaewon Lee's PR12 video&amp;nbsp;https://www.youtube.com/watch?v=6OYmOtivQY8&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;color: #333333;&quot;&gt;The example above clearly shows why corner points lack local evidence.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bVN6oc/btqAlY7MR3F/SkINbjXCLcQaDk6m02gmKk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bVN6oc/btqAlY7MR3F/SkINbjXCLcQaDk6m02gmKk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bVN6oc/btqAlY7MR3F/SkINbjXCLcQaDk6m02gmKk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbVN6oc%2FbtqAlY7MR3F%2FSkINbjXCLcQaDk6m02gmKk%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;A modified residual block is used: corner pooling is inserted in the first layer, and the module outputs heatmaps, embeddings, and offsets. The offsets are needed to correct pixel coordinates exactly, since the original image is downsampled and then upsampled.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cbWDMA/btqAmtGwdC0/ZjkSNrBSBfd2MqdwfnIOV1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cbWDMA/btqAmtGwdC0/ZjkSNrBSBfd2MqdwfnIOV1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cbWDMA/btqAmtGwdC0/ZjkSNrBSBfd2MqdwfnIOV1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcbWDMA%2FbtqAmtGwdC0%2FZjkSNrBSBfd2MqdwfnIOV1%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size14&quot;&gt;Heatmaps find the corners and classes, and embeddings match them into pairs. A very simple structure.&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;What losses does it use?&lt;/h2&gt;
&lt;p&gt;As befits an object detector, it uses several losses. Listed all at once, they are as follows.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bp7BBL/btqAlY01zFD/FZiUa6wPfs7qb67H0UQHUk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bp7BBL/btqAlY01zFD/FZiUa6wPfs7qb67H0UQHUk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bp7BBL/btqAlY01zFD/FZiUa6wPfs7qb67H0UQHUk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbp7BBL%2FbtqAlY01zFD%2FFZiUa6wPfs7qb67H0UQHUk%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; width=&quot;544&quot; height=&quot;72&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bAh3dR/btqAjWQAs1C/3wrpdG0TcHPJ1TK9aSvcC0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bAh3dR/btqAjWQAs1C/3wrpdG0TcHPJ1TK9aSvcC0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bAh3dR/btqAjWQAs1C/3wrpdG0TcHPJ1TK9aSvcC0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbAh3dR%2FbtqAjWQAs1C%2F3wrpdG0TcHPJ1TK9aSvcC0%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; width=&quot;544&quot; height=&quot;72&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dsQ4uB/btqAjM1KuvT/RF0Ls1ibLTPpgGWg1IjOT0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dsQ4uB/btqAjM1KuvT/RF0Ls1ibLTPpgGWg1IjOT0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dsQ4uB/btqAjM1KuvT/RF0Ls1ibLTPpgGWg1IjOT0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdsQ4uB%2FbtqAjM1KuvT%2FRF0Ls1ibLTPpgGWg1IjOT0%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/coNcBL/btqAjn8YwLT/giqtSgk4j3MGDW0eRrVj60/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/coNcBL/btqAjn8YwLT/giqtSgk4j3MGDW0eRrVj60/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/coNcBL/btqAjn8YwLT/giqtSgk4j3MGDW0eRrVj60/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcoNcBL%2FbtqAjn8YwLT%2FgiqtSgk4j3MGDW0eRrVj60%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bzjz7p/btqAlPJZ5MP/SPN8N3pETzVL1SVgI1dGXk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bzjz7p/btqAlPJZ5MP/SPN8N3pETzVL1SVgI1dGXk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bzjz7p/btqAlPJZ5MP/SPN8N3pETzVL1SVgI1dGXk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbzjz7p%2FbtqAlPJZ5MP%2FSPN8N3pETzVL1SVgI1dGXk%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;1.&amp;nbsp;Ldet: a variant of &lt;a href=&quot;https://arxiv.org/pdf/1708.02002.pdf&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;focal loss&lt;/a&gt;, applied to the heatmaps. p is the predicted score at a given location and y is the ground truth; N is the number of objects in the image, and alpha (2) and beta (4) are hyperparameters that control how much each point contributes to the loss.&lt;/p&gt;
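&lt;p&gt;As a reading aid, here is a minimal NumPy sketch of this focal-loss variant. It is not the authors' code: the corner pooling and heatmap plumbing are omitted, and eps is an assumed numerical-stability constant.&lt;/p&gt;

```python
import numpy as np

def detection_loss(p, y, alpha=2, beta=4, eps=1e-12):
    """Sketch of Ldet, CornerNet's focal-loss variant.

    p: predicted heatmap scores in (0, 1)
    y: ground truth with Gaussian-reduced penalties; y == 1 at true corners
    alpha, beta: hyperparameters from the paper (2 and 4)
    """
    p, y = np.asarray(p, float), np.asarray(y, float)
    pos = y == 1
    n = max(int(pos.sum()), 1)  # N: number of objects (corner peaks)
    # At true corners: down-weight already-confident predictions.
    pos_term = (1 - p[pos]) ** alpha * np.log(p[pos] + eps)
    # Elsewhere: (1 - y)^beta softens the penalty near true corners.
    neg_term = (1 - y[~pos]) ** beta * p[~pos] ** alpha * np.log(1 - p[~pos] + eps)
    return -(pos_term.sum() + neg_term.sum()) / n
```

&lt;p&gt;A confident score at a true corner contributes almost nothing, while a confident false positive far from any corner is penalized heavily.&lt;/p&gt;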
&lt;p&gt;2. SmoothL1Loss: applied to the offsets. This is the loss used for bounding-box regression in Faster R-CNN's RPN, and here it likewise regresses box locations. The authors report trying various alternatives, with SmoothL1Loss working best.&lt;/p&gt;
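&lt;p&gt;For reference, a plain NumPy version of smooth L1 (a generic sketch, not the authors' implementation; beta=1 is the usual default):&lt;/p&gt;

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 (Huber-like) loss, as used for offset regression.

    Quadratic for small residuals (|x| < beta), linear beyond, so large
    outliers do not dominate the gradient the way plain L2 would.
    """
    diff = np.abs(np.asarray(pred, float) - np.asarray(target, float))
    loss = np.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)
    return loss.mean()
```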
&lt;p&gt;3. push/pull loss: applied to the embeddings. Similar in form to the embeddings of Newell et al. in &quot;&lt;a href=&quot;https://arxiv.org/pdf/1611.05424.pdf&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Associative embedding: End-to-end learning for joint detection and grouping&lt;/a&gt;&quot;. This kind of contrastive-style loss is often used in face recognition: it pulls the embeddings of the same object together and pushes the embeddings of different objects apart by a margin.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;etk: top-left embedding, ebk: bottom-right embedding, ek: the mean of etk and ebk&lt;/p&gt;
&lt;p&gt;&amp;nbsp;ej: the mean embedding of a different object than k&lt;/p&gt;
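&lt;p&gt;With the definitions above, the pull and push terms can be sketched as follows (scalar embeddings as in the chart below; the margin delta=1 and function name are assumptions for illustration):&lt;/p&gt;

```python
import numpy as np

def pull_push_loss(e_tl, e_br, delta=1.0):
    """Sketch of the pull/push embedding losses over scalar embeddings.

    e_tl[k], e_br[k]: top-left / bottom-right corner embeddings of object k.
    Pull draws each object's two corner embeddings toward their mean e_k;
    push separates the means of different objects by a margin delta.
    """
    e_tl, e_br = np.asarray(e_tl, float), np.asarray(e_br, float)
    n = len(e_tl)
    e_k = (e_tl + e_br) / 2  # ek: mean of the two corner embeddings
    pull = ((e_tl - e_k) ** 2 + (e_br - e_k) ** 2).sum() / n
    push = 0.0
    for k in range(n):
        for j in range(n):
            if j != k:  # ej: the mean embedding of a different object
                push += max(0.0, delta - abs(e_k[k] - e_k[j]))
    push /= max(n * (n - 1), 1)
    return pull, push
```

&lt;p&gt;Well-separated objects incur no push loss; mismatched corner pairs of the same object incur pull loss.&lt;/p&gt;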
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/H0QRt/btqAiWRwO6r/mLkey0ksg6gjbRBi8ozmh1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/H0QRt/btqAiWRwO6r/mLkey0ksg6gjbRBi8ozmh1/img.png&quot; data-alt=&quot;Also borrowed from the PR12 talk by Jaewon Lee. An excellent explanation.&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/H0QRt/btqAiWRwO6r/mLkey0ksg6gjbRBi8ozmh1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FH0QRt%2FbtqAiWRwO6r%2FmLkey0ksg6gjbRBi8ozmh1%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;Also borrowed from the PR12 talk by Jaewon Lee. An excellent explanation.&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size14&quot;&gt;&lt;span style=&quot;color: #333333;&quot;&gt;Looking at the chart on the right, the embedding is a scalar rather than a vector. Well, whatever works...&lt;/span&gt;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;How are accuracy and speed?&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/ujcUK/btqAjaPzdiC/vSwNNsV4CQty00FGgpgK9K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/ujcUK/btqAjaPzdiC/vSwNNsV4CQty00FGgpgK9K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/ujcUK/btqAjaPzdiC/vSwNNsV4CQty00FGgpgK9K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FujcUK%2FbtqAjaPzdiC%2FvSwNNsV4CQty00FGgpgK9K%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;In short: best among one-stage detectors, slightly better than Mask R-CNN, and slightly behind Cascade R-CNN.&lt;/p&gt;
&lt;p&gt;Speed: 244 ms per image on a Titan X.&lt;/p&gt;
&lt;p&gt;So while its AP is clearly excellent among one-stage detectors, its speed feels slower than that of two-stage detectors.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bI7LRL/btqAkwYr9Pv/BQ7mGffIRD6L7DU4Ku0kYK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bI7LRL/btqAkwYr9Pv/BQ7mGffIRD6L7DU4Ku0kYK/img.png&quot; data-alt=&quot;Qualitative results (MS COCO)&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bI7LRL/btqAkwYr9Pv/BQ7mGffIRD6L7DU4Ku0kYK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbI7LRL%2FbtqAkwYr9Pv%2FBQ7mGffIRD6L7DU4Ku0kYK%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;Qualitative results (MS COCO)&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Reference links:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=6OYmOtivQY8&quot;&gt;[pr12] https://www.youtube.com/watch?v=6OYmOtivQY8&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://arxiv.org/pdf/1808.01244.pdf&quot;&gt;[paper] https://arxiv.org/pdf/1808.01244.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=aJnvTT1-spc&quot;&gt;[ECCV18 oral] https://www.youtube.com/watch?v=aJnvTT1-spc&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://heilaw.github.io/&quot;&gt;[author] https://heilaw.github.io/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Object Detection</category>
      <category>cornernet #simple #next-up-centernet #the-staying-power-of-hourglass</category>
      <author>곰돌이만세</author>
      <guid isPermaLink="true">https://hyperml.tistory.com/19</guid>
      <comments>https://hyperml.tistory.com/19#entry19comment</comments>
      <pubDate>Tue, 10 Dec 2019 00:36:58 +0900</pubDate>
    </item>
    <item>
      <title>Deep Double Descent</title>
      <link>https://hyperml.tistory.com/18</link>
      <description>&lt;p&gt;원문 : &lt;a href=&quot;https://openai.com/blog/deep-double-descent/?fbclid=IwAR2kjb-SCR2wEWWlIKk3lnzVh9y_VYIInryB-DH7gBIcApi4xfdKRllnlx8&quot;&gt;https://openai.com/blog/deep-double-descent/?fbclid=IwAR2kjb-SCR2wEWWlIKk3lnzVh9y_VYIInryB-DH7gBIcApi4xfdKRllnlx8&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1575818864574&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-og-type=&quot;article&quot; data-og-title=&quot;Deep Double Descent&quot; data-og-description=&quot;We show that the double descent phenomenon occurs in CNNs, ResNets, and transformers: performance first improves, then gets worse, and then improves again with increasing model size, data size, or training time. This effect is often avoided through careful&quot; data-og-host=&quot;openai.com&quot; data-og-source-url=&quot;https://openai.com/blog/deep-double-descent/?fbclid=IwAR2kjb-SCR2wEWWlIKk3lnzVh9y_VYIInryB-DH7gBIcApi4xfdKRllnlx8&quot; data-og-url=&quot;https://openai.com/blog/deep-double-descent/&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/br52nO/hyD2ZmJAVy/F1BTmckqAc4u4sgPbOkKRK/img.png?width=2270&amp;amp;height=1130&amp;amp;face=0_0_2270_1130,https://scrap.kakaocdn.net/dn/2RVMH/hyD22wZEiQ/tbP75Dq0vMjfRcc12ADeGk/img.png?width=2270&amp;amp;height=1130&amp;amp;face=0_0_2270_1130,https://scrap.kakaocdn.net/dn/5LYsO/hyD2Y2qQ9o/IgLc1dSXtkCrgVX2o9GkoK/img.png?width=2400&amp;amp;height=1460&amp;amp;face=0_0_2400_1460&quot;&gt;&lt;a href=&quot;https://openai.com/blog/deep-double-descent/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://openai.com/blog/deep-double-descent/?fbclid=IwAR2kjb-SCR2wEWWlIKk3lnzVh9y_VYIInryB-DH7gBIcApi4xfdKRllnlx8&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/br52nO/hyD2ZmJAVy/F1BTmckqAc4u4sgPbOkKRK/img.png?width=2270&amp;amp;height=1130&amp;amp;face=0_0_2270_1130,https://scrap.kakaocdn.net/dn/2RVMH/hyD22wZEiQ/tbP75Dq0vMjfRcc12ADeGk/img.png?width=2270&amp;amp;height=1130&amp;amp;face=0_0_2270_1130,https://scrap.kakaocdn.net/dn/5LYsO/hyD2Y2qQ9o/IgLc1dSXtkCrgVX2o9GkoK/img.png?width=2400&amp;amp;height=1460&amp;amp;face=0_0_2400_1460');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot;&gt;Deep Double Descent&lt;/p&gt;
&lt;p class=&quot;og-desc&quot;&gt;We show that the double descent phenomenon occurs in CNNs, ResNets, and transformers: performance first improves, then gets worse, and then improves again with increasing model size, data size, or training time. This effect is often avoided through careful&lt;/p&gt;
&lt;p class=&quot;og-host&quot;&gt;openai.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;A few days ago several posts about this phenomenon showed up in the community, so I took a quick look at the content.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;The post covers the titular Deep Double Descent, a phenomenon reported in CNN, ResNet, and Transformer networks trained without explicit regularization.&lt;/p&gt;
&lt;p&gt;Unfortunately the cause has not yet been identified; only a handful of related factors have been confirmed.&lt;/p&gt;
&lt;p&gt;Since my understanding of the machine learning behind what the post explains is not sufficient, I mostly just translate and leave out my own opinions.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. There is a regime where bigger models perform worse&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cksA2A/btqAhVdDadm/yenJvlwKEHObZMeWbBMUGk/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cksA2A/btqAhVdDadm/yenJvlwKEHObZMeWbBMUGk/img.jpg&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cksA2A/btqAhVdDadm/yenJvlwKEHObZMeWbBMUGk/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcksA2A%2FbtqAhVdDadm%2FyenJvlwKEHObZMeWbBMUGk%2Fimg.jpg&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;The chart above shows that as the width of the ResNet model grows, there is a regime (the critical regime) where test error increases, in contrast to the train-error trend over the same interval.&lt;/p&gt;
&lt;p&gt;The interpolation threshold and the test-error peak were observed to shift depending on the optimization algorithm (e.g. SGD), the number of training samples, and label noise.&lt;/p&gt;
&lt;p&gt;In particular, deliberately adding label noise amplified the phenomenon and made it easy to observe.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. There is a regime where adding more data hurts&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/sXDNj/btqAjWoEWhT/Buis3PAWHbRch0t7wp13K1/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/sXDNj/btqAjWoEWhT/Buis3PAWHbRch0t7wp13K1/img.jpg&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/sXDNj/btqAjWoEWhT/Buis3PAWHbRch0t7wp13K1/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FsXDNj%2FbtqAjWoEWhT%2FBuis3PAWHbRch0t7wp13K1%2Fimg.jpg&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;The chart above is from training a Transformer without label noise; there is a regime where the 18k training set yields a higher test cross-entropy loss.&lt;/p&gt;
&lt;p&gt;It is true that feeding in more training data calls for a larger model, and doing so shifts the interpolation threshold (and the test peak) from section 1 to the right.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. There is a regime where training longer reverses overfitting&lt;/h2&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/R6ihO/btqAhVxVMyS/EXNiA5lmc7cSimt78agh7k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/R6ihO/btqAhVxVMyS/EXNiA5lmc7cSimt78agh7k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/R6ihO/btqAhVxVMyS/EXNiA5lmc7cSimt78agh7k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FR6ihO%2FbtqAhVxVMyS%2FEXNiA5lmc7cSimt78agh7k%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;There are two factors (epochs and width parameter) and the quantity of interest is the error (shown as color), so the chart needs careful reading.&lt;/p&gt;
&lt;p&gt;The chart on the right is the one to watch: at a width parameter of roughly 7 or more, moving upward as epochs progress, the test error, unlike the train error, goes dark purple and then turns light purple again.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;In other words, as epochs progress the test error does not decrease monotonically: there is an interval where it rises and then falls again (&lt;span&gt;epoch-wise double&amp;nbsp;descent&lt;/span&gt;).&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Wrap-up&lt;/h2&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;In general, the test-error peak is said to appear systematically when the model can just barely fit the training set.&lt;/p&gt;
&lt;p&gt;For models in the regime where this phenomenon appears, there is effectively only one model that fits the training set, and even a little label noise destroys its global structure. That is, no model both interpolates the training set well and performs well on the test set. (Still a hard statement to digest.)&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;However, when a model with far more parameters is used, there are many models that can fit the training set well. Moreover, the implicit bias of SGD, for reasons not yet understood, still trains such models well in this regime.&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>논문읽기</category>
      <category>DeepDoubleDescent #OpenAI #this-is-hard</category>
      <author>곰돌이만세</author>
      <guid isPermaLink="true">https://hyperml.tistory.com/18</guid>
      <comments>https://hyperml.tistory.com/18#entry18comment</comments>
      <pubDate>Mon, 9 Dec 2019 01:08:59 +0900</pubDate>
    </item>
    <item>
      <title>Object Detection</title>
      <link>https://hyperml.tistory.com/17</link>
      <description>&lt;p&gt;&quot;Recent Advances in Deep Learning for Object Detection&quot;&lt;/p&gt;
&lt;p&gt;Source: &lt;a href=&quot;https://arxiv.org/pdf/1908.03673v1.pdf&quot;&gt;https://arxiv.org/pdf/1908.03673v1.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;table style=&quot;border-collapse: collapse; width: 99.7672%; height: 222px;&quot; border=&quot;1&quot;&gt;
&lt;tbody&gt;
&lt;tr style=&quot;height: 20px;&quot;&gt;
&lt;td style=&quot;width: 25.1163%; height: 20px;&quot;&gt;Title&lt;/td&gt;
&lt;td style=&quot;width: 24.8837%; height: 20px;&quot;&gt;Link&lt;/td&gt;
&lt;td style=&quot;width: 25%; height: 20px;&quot;&gt;Published (YY.MM)&lt;/td&gt;
&lt;td style=&quot;width: 25%; height: 20px;&quot;&gt;Related links&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 40px;&quot;&gt;
&lt;td style=&quot;width: 25.1163%; height: 40px;&quot;&gt;FCOS: Fully Convolutional One-Stage Object Detection&lt;/td&gt;
&lt;td style=&quot;width: 24.8837%; height: 40px;&quot;&gt;&lt;a href=&quot;https://arxiv.org/pdf/1904.01355.pdf&quot;&gt;https://arxiv.org/pdf/1904.01355.pdf&lt;/a&gt;&lt;/td&gt;
&lt;td style=&quot;width: 25%; height: 40px;&quot;&gt;19.04&lt;/td&gt;
&lt;td style=&quot;width: 25%; height: 40px;&quot;&gt;&lt;a href=&quot;https://github.com/tianzhi0549/FCOS&quot;&gt;https://github.com/tianzhi0549/FCOS&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 40px;&quot;&gt;
&lt;td style=&quot;width: 25.1163%; height: 40px;&quot;&gt;CenterNet: Keypoint Triplets for Object Detection&lt;/td&gt;
&lt;td style=&quot;width: 24.8837%; height: 40px;&quot;&gt;&lt;a href=&quot;https://arxiv.org/pdf/1904.08189.pdf&quot;&gt;https://arxiv.org/pdf/1904.08189.pdf&lt;/a&gt;&lt;/td&gt;
&lt;td style=&quot;width: 25%; height: 40px;&quot;&gt;19.04&lt;/td&gt;
&lt;td style=&quot;width: 25%; height: 40px;&quot;&gt;&lt;a href=&quot;https://github.com/xingyizhou/CenterNet&quot;&gt;https://github.com/xingyizhou/CenterNet&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 20px;&quot;&gt;
&lt;td style=&quot;width: 25.1163%; height: 20px;&quot;&gt;NAS-FPN: Learning Scalable Feature Pyramid Architecture for Object Detection&lt;/td&gt;
&lt;td style=&quot;width: 24.8837%; height: 20px;&quot;&gt;&lt;a href=&quot;https://arxiv.org/abs/1904.07392&quot;&gt;https://arxiv.org/abs/1904.07392&lt;/a&gt;&lt;/td&gt;
&lt;td style=&quot;width: 25%; height: 20px;&quot;&gt;19.04&lt;/td&gt;
&lt;td style=&quot;width: 25%; height: 20px;&quot;&gt;&lt;a href=&quot;https://github.com/TuSimple/simpledet/tree/master/models/NASFPN&quot;&gt;https://github.com/TuSimple/simpledet/tree/master/models/NASFPN&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 20px;&quot;&gt;
&lt;td style=&quot;width: 25.1163%; height: 20px;&quot;&gt;&lt;a href=&quot;https://arxiv.org/pdf/1911.09070.pdf&quot;&gt;EfficientDet: Scalable and Efficient Object Detection &lt;/a&gt;&lt;/td&gt;
&lt;td style=&quot;width: 24.8837%; height: 20px;&quot;&gt;&lt;a href=&quot;https://arxiv.org/pdf/1911.09070.pdf&quot;&gt;https://arxiv.org/pdf/1911.09070.pdf&lt;/a&gt;&lt;/td&gt;
&lt;td style=&quot;width: 25%; height: 20px;&quot;&gt;19.11&lt;/td&gt;
&lt;td style=&quot;width: 25%; height: 20px;&quot;&gt;&lt;a href=&quot;https://hoya012.github.io/blog/EfficientDet-Review/&quot;&gt;https://hoya012.github.io/blog/EfficientDet-Review/&lt;/a&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/xuannianz/EfficientDet&quot;&gt;https://github.com/xuannianz/EfficientDet&lt;/a&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;width: 25.1163%;&quot;&gt;Cascade R-CNN: High Quality Object Detection and Instance Segmentation&lt;/td&gt;
&lt;td style=&quot;width: 24.8837%;&quot;&gt;&lt;a href=&quot;https://arxiv.org/pdf/1906.09756.pdf&quot;&gt;https://arxiv.org/pdf/1906.09756.pdf&lt;/a&gt;&lt;/td&gt;
&lt;td style=&quot;width: 25%;&quot;&gt;19.06&lt;/td&gt;
&lt;td style=&quot;width: 25%;&quot;&gt;&lt;a href=&quot;https://github.com/zhaoweicai/Detectron-Cascade-RCNN&quot;&gt;https://github.com/zhaoweicai/Detectron-Cascade-RCNN&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;width: 25.1163%;&quot;&gt;M2Det: A Single-Shot Object Detector based on Multi-Level Feature Pyramid Network&lt;/td&gt;
&lt;td style=&quot;width: 24.8837%;&quot;&gt;&lt;a href=&quot;https://arxiv.org/pdf/1811.04533.pdf&quot;&gt;https://arxiv.org/pdf/1811.04533.pdf&lt;/a&gt;&lt;/td&gt;
&lt;td style=&quot;width: 25%;&quot;&gt;18.11&lt;/td&gt;
&lt;td style=&quot;width: 25%;&quot;&gt;&lt;a href=&quot;https://github.com/qijiezhao/M2Det&quot;&gt;https://github.com/qijiezhao/M2Det&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b9eKP2/btqAgUy16Tk/iThWKsBGFWhi3fXqajkK11/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b9eKP2/btqAgUy16Tk/iThWKsBGFWhi3fXqajkK11/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b9eKP2/btqAgUy16Tk/iThWKsBGFWhi3fXqajkK11/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb9eKP2%2FbtqAgUy16Tk%2FiThWKsBGFWhi3fXqajkK11%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/9ZyLM/btqAhdrCyZC/P1B7d4ktR4k66syLQAhAU1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/9ZyLM/btqAhdrCyZC/P1B7d4ktR4k66syLQAhAU1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/9ZyLM/btqAhdrCyZC/P1B7d4ktR4k66syLQAhAU1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F9ZyLM%2FbtqAhdrCyZC%2FP1B7d4ktR4k66syLQAhAU1%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/d8ZjpH/btqAjod72MS/8XYLQo0CLxFGFwUjLQ7wN1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/d8ZjpH/btqAjod72MS/8XYLQo0CLxFGFwUjLQ7wN1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/d8ZjpH/btqAjod72MS/8XYLQo0CLxFGFwUjLQ7wN1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fd8ZjpH%2FbtqAjod72MS%2F8XYLQo0CLxFGFwUjLQ7wN1%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bB4BGV/btqAiWCnxZe/sygsThg1BT1SM86gygtoSK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bB4BGV/btqAiWCnxZe/sygsThg1BT1SM86gygtoSK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bB4BGV/btqAiWCnxZe/sygsThg1BT1SM86gygtoSK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbB4BGV%2FbtqAiWCnxZe%2FsygsThg1BT1SM86gygtoSK%2Fimg.png&quot; data-origin-width=&quot;0&quot; data-origin-height=&quot;0&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://hoya012.github.io/blog/Tutorials-of-Object-Detection-Using-Deep-Learning-how-to-measure-performance-of-object-detection/&quot;&gt;https://hoya012.github.io/blog/Tutorials-of-Object-Detection-Using-Deep-Learning-how-to-measure-performance-of-object-detection/&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1577378305323&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-og-type=&quot;article&quot; data-og-title=&quot;&amp;ldquo;Tutorials of Object Detection using Deep Learning [4] How to measure performance of object detection&amp;rdquo;&quot; data-og-description=&quot;Deep Learning을 이용한 Object detection Tutorial - [4] How to measure performance of object detection&quot; data-og-host=&quot;hoya012.github.io&quot; data-og-source-url=&quot;https://hoya012.github.io/blog/Tutorials-of-Object-Detection-Using-Deep-Learning-how-to-measure-performance-of-object-detection/&quot; data-og-url=&quot;https://hoya012.github.io//blog/Tutorials-of-Object-Detection-Using-Deep-Learning-how-to-measure-performance-of-object-detection/&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/unGCN/hyEjHzucN9/YveWq1KbnTUmz3WZlzIwAK/img.jpg?width=1500&amp;amp;height=649&amp;amp;face=0_0_1500_649,https://scrap.kakaocdn.net/dn/evSBCU/hyEjxcz7Ts/YlCSZPuKMWsIAnfV1pxa8K/img.png?width=835&amp;amp;height=500&amp;amp;face=0_0_835_500,https://scrap.kakaocdn.net/dn/bcsPNp/hyEhvtUXyP/0dvW2sQM34nlNMh9c21dg1/img.png?width=855&amp;amp;height=481&amp;amp;face=0_0_855_481&quot;&gt;&lt;a href=&quot;https://hoya012.github.io//blog/Tutorials-of-Object-Detection-Using-Deep-Learning-how-to-measure-performance-of-object-detection/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://hoya012.github.io/blog/Tutorials-of-Object-Detection-Using-Deep-Learning-how-to-measure-performance-of-object-detection/&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/unGCN/hyEjHzucN9/YveWq1KbnTUmz3WZlzIwAK/img.jpg?width=1500&amp;amp;height=649&amp;amp;face=0_0_1500_649,https://scrap.kakaocdn.net/dn/evSBCU/hyEjxcz7Ts/YlCSZPuKMWsIAnfV1pxa8K/img.png?width=835&amp;amp;height=500&amp;amp;face=0_0_835_500,https://scrap.kakaocdn.net/dn/bcsPNp/hyEhvtUXyP/0dvW2sQM34nlNMh9c21dg1/img.png?width=855&amp;amp;height=481&amp;amp;face=0_0_855_481');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot;&gt;&amp;ldquo;Tutorials of Object Detection using Deep Learning [4] How to measure performance of object detection&amp;rdquo;&lt;/p&gt;
&lt;p class=&quot;og-desc&quot;&gt;Deep Learning을 이용한 Object detection Tutorial - [4] How to measure performance of object detection&lt;/p&gt;
&lt;p class=&quot;og-host&quot;&gt;hoya012.github.io&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Object Detection</category>
      <category>reference-list #reading-them-one-by-one #there-are-so-many #where-is-the-exact-paper-im-looking-for</category>
      <author>곰돌이만세</author>
      <guid isPermaLink="true">https://hyperml.tistory.com/17</guid>
      <comments>https://hyperml.tistory.com/17#entry17comment</comments>
      <pubDate>Sat, 7 Dec 2019 01:12:25 +0900</pubDate>
    </item>
    <item>
      <title>Pose estimation 정리 링크</title>
      <link>https://hyperml.tistory.com/8</link>
      <description>&lt;p&gt;&lt;a href=&quot;https://heartbeat.fritz.ai/a-2019-guide-to-human-pose-estimation-c10b79b64b73&quot;&gt;https://heartbeat.fritz.ai/a-2019-guide-to-human-pose-estimation-c10b79b64b73&lt;/a&gt;&lt;/p&gt;</description>
      <category>논문읽기</category>
      <author>곰돌이만세</author>
      <guid isPermaLink="true">https://hyperml.tistory.com/8</guid>
      <comments>https://hyperml.tistory.com/8#entry8comment</comments>
      <pubDate>Tue, 22 Oct 2019 01:10:42 +0900</pubDate>
    </item>
  </channel>
</rss>