Big Ben

A half-baked coding enthusiast


This chapter covers a special use case of object detection: face recognition. Face recognition takes an input photo (or captures one on the spot), matches it against a database, and translates it into identity information. A face recognition system is usually built in two steps:

  • Face verification
  • Face recognition

The former takes a photo plus an ID and decides whether they match. The latter takes only a photo and matches it against the gallery of photos already in the database. Once the former is accurate enough, it can be applied to solve the latter.

One Shot Learning

The first problem face recognition must solve is One Shot Learning. Take a company's door access system: each employee typically uploads only a single photo, yet the system must recognize that employee correctly. Even with every employee's photo uploaded, the sample set remains small. How to train effectively on such a small sample set is exactly the problem One Shot Learning addresses. The Siamese network introduced below is one solution to it.

Siamese Network

A Siamese network is a network like the one in the figure below. It contains 2 or more identical branches, each of which maps its input to a final activation vector. The similarity of these two vectors is then compared to measure the similarity of the two images.


The difference between the two branches' outputs is given by:

$$d(x^{(i)}, x^{(j)}) = \begin{Vmatrix} f(x^{(i)}) - f(x^{(j)}) \end{Vmatrix}_2^2$$

If the output layer uses a sigmoid activation, the prediction is:

$$\hat y = \sigma\left( \sum_{k=1}^{128} w_k \begin{vmatrix} f(x^{(i)})_k - f(x^{(j)})_k \end{vmatrix} + b \right)$$

The above describes how, once all the weights are trained, a Siamese network is used to predict whether two images show the same person. How, then, is a Siamese network trained? Training requires two things: the model's loss function J, and a suitable training set.
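As a minimal sketch of the prediction step, assuming 128-dimensional NumPy embeddings (the function names are mine), the distance and the sigmoid-based verification score above can be computed as:

```python
import numpy as np

def embedding_distance(f_i, f_j):
    """Squared L2 distance d(x_i, x_j) between two embedding vectors."""
    diff = f_i - f_j
    return float(np.dot(diff, diff))

def verification_score(f_i, f_j, w, b):
    """Sigmoid over a weighted sum of element-wise absolute differences."""
    z = np.dot(w, np.abs(f_i - f_j)) + b
    return 1.0 / (1.0 + np.exp(-z))
```

A score above 0.5 would be read as "same person"; identical embeddings give distance 0 and, with a zero bias, a score of exactly 0.5.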

Triplet Loss

To train a Siamese network, the input samples can be chosen like this:


There are 3 images in total, split into two pairs; the training objective is for the difference of the left pair to be smaller than the difference of the right pair. Allowing for random noise and adding a margin, the loss function is:

$$\mathscr L(A,P,N) = \max\left(\begin{Vmatrix} f(A) - f(P)\end{Vmatrix}^2 - \begin{Vmatrix} f(A) - f(N)\end{Vmatrix}^2 + \alpha,\ 0\right)$$

$$J = \sum_{i=1}^m \mathscr L(A^{(i)}, P^{(i)}, N^{(i)})$$

  • $\alpha$ is the margin
  • A stands for Anchor, P for Positive, N for Negative
  • m is the number of samples in the mini-batch

With 10k pictures of 1k people, a great many such A/P/N triplets can be generated. When generating them, it is best not to pick at random: a randomly chosen A/N pair will most likely differ a lot to begin with, so the margin condition is satisfied trivially (the loss is already zero) and the triplet is useless for training. When building triplets, and especially when picking the A/N pair, prefer images that look similar, so that training remains effective.
Running forward and backward propagation on the loss J, gradient descent then iterates to the final parameters of the Siamese network.
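The triplet loss for a single (A, P, N) triplet can be sketched as follows, assuming NumPy embedding vectors (the function name is mine):

```python
import numpy as np

def triplet_loss(f_a, f_p, f_n, alpha=0.2):
    """max(||f(A)-f(P)||^2 - ||f(A)-f(N)||^2 + alpha, 0)."""
    d_ap = np.sum((f_a - f_p) ** 2)  # anchor-positive squared distance
    d_an = np.sum((f_a - f_n) ** 2)  # anchor-negative squared distance
    return float(max(d_ap - d_an + alpha, 0.0))
```

An easy triplet (N far from A) drives the loss straight to zero, which is exactly why hard A/N pairs matter for effective training.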

References

  • Siamese Network: Taigman et al., 2014. DeepFace: Closing the gap to human-level performance.
  • Triplet Loss: Schroff et al., 2015. FaceNet: A unified embedding for face recognition and clustering.
  • Siamese Network & Triplet Loss: https://towardsdatascience.com/siamese-network-triplet-loss-b4ca82c1aec8



This chapter introduces Object Detection, currently one of the most popular topics in Computer Vision. The course proceeds from the basics to an in-depth look at its culmination, the YOLO algorithm.

Model Definition


The input is an image; it passes through a CNN, which outputs a prediction label. The label is typically defined as:

$$y = \begin{bmatrix} P_c \\ b_x \\ b_y \\ b_h \\ b_w \\ c_1 \\ c_2 \\ c_3 \end{bmatrix}$$

  • $P_c$: whether there is a target to detect
  • $b_x, b_y, b_h, b_w$: the bounding box
  • $c_1, c_2, c_3$: the three classes, e.g. car, pedestrian, motorcycle

The loss function (least squares):

$$\mathscr L (\hat y, y) = \begin{cases} \sum_i (\hat y_i - y_i)^2 & y_1 = 1 \\ (\hat y_1 - y_1)^2 & y_1 = 0 \end{cases}$$
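The case split above can be sketched like this, assuming y is a NumPy vector laid out as described, with $P_c$ at position 0 (the function name is mine):

```python
import numpy as np

def detection_loss(y_hat, y):
    """Squared error over all components when an object is present (P_c = 1),
    and over the P_c component alone when it is absent (P_c = 0)."""
    if y[0] == 1:
        return float(np.sum((y_hat - y) ** 2))
    return float((y_hat[0] - y[0]) ** 2)
```

When no object is present, only the $P_c$ prediction is penalized; the bounding box and class entries are "don't care".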

Detection Methods

Sliding-Window Detection


As the figure shows, the method first crops every corner of the image with the smallest window and runs each crop through the CNN to see whether any cell scores a hit; if not, it switches to a slightly larger window, then a larger one still, and so on. This is the most obvious approach, but its drawback is the enormous amount of computation: it is extremely inefficient.

Convolutional Implementation of Sliding Windows

In essence this still samples the image with windows and outputs whether each window contains a detection target. But plain sliding windows run like a serial algorithm, while the convolutional implementation is like a parallel one: it shares a great deal of computation between windows and is far more efficient.
Below, the sliding-window network is transformed step by step until a convolutional version detects the results for all windows in a single pass.

1. Replace the FC layers with 1x1 convolutions (Network in Network)


Suppose the 14x14 window is applied to a 14x14 image; the final result is a 1x1x4 volume, representing the scores of the 4 classes.

2. The situation after enlarging the image


Growing the image by 2 pixels in both height and width gives a 16x16 image. Keeping the 14x14 window, four windows (a 2x2 grid, with the stride of 2 inherited from the max-pool layer) are needed to cover this 16x16 image. Running it through the CNN above yields a 2x2x4 volume, each cell of which holds the result for one window. This is exactly the convolutional implementation of sliding windows.

What about an even larger image?


Likewise, a 28x28 image is covered by an 8x8 grid of windows, and the final volume is 8x8x4, holding the detection results of all those windows.
The convolutional implementation outputs the detection results for every crop of the image in a single pass, which is far more efficient than running the sliding windows one by one.
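The window counts above follow from simple arithmetic: with a 14x14 window and an effective stride of 2 (inherited from the max-pool layer), the number of window positions per dimension is (n − 14)/2 + 1. A small sketch (function name mine):

```python
def sliding_window_grid(image_size, window=14, stride=2):
    """Number of window positions per dimension on a square image."""
    return (image_size - window) // stride + 1
```

For a 16x16 image this gives 2 positions per dimension (4 windows in total), and for a 14x14 image a single window.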

YOLO Algorithm

YOLO = You Only Look Once
It is widely acknowledged as a highly efficient object detection algorithm. The paper itself is also said to be dense and rather hard to read.

IOU

IOU = Intersection Over Union, the ratio of the intersection to the union of two boxes. See the figure below:


The blue box is the target location predicted by the algorithm and the red box the actual target location. The yellow shading is the intersection, the green shading the union. Then:

$$IOU = \cfrac{\text{yellow area}}{\text{green area}}$$

The size of the IOU measures the quality of a detection. Typically the IOU must reach 0.5 or 0.6 and above for a detection to count as valid.
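A minimal IoU computation for axis-aligned boxes, assuming boxes are given as (x1, y1, x2, y2) corner coordinates (the representation and function name are my choice):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes score 1.0, disjoint boxes 0.0, and partial overlaps fall in between.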

Non-max Suppression Algorithm

Non-maximum suppression (非极大值抑制).
When there are detection results like the following:


In theory each object belongs to exactly one box. In practice, however, every box around an object may produce a valid prediction. Non-max suppression filters out the lower-quality predictions and keeps only one valid prediction per object. The procedure is as follows:


After filtering with non-max suppression, the final result is:


Only one valid detection is kept per object.
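Greedy non-max suppression can be sketched as follows, assuming each detection is a (score, box) pair with boxes as (x1, y1, x2, y2) corners (a hypothetical interface; real detectors typically run this per class):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
             + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) - inter)
    return inter / union if union > 0 else 0.0

def non_max_suppression(detections, iou_threshold=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it too much, repeat."""
    remaining = sorted(detections, key=lambda d: d[0], reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)
        kept.append(best)
        remaining = [d for d in remaining if iou(d[1], best[1]) <= iou_threshold]
    return kept
```

Two boxes covering the same object collapse to the single higher-scoring one, while boxes on different objects survive untouched.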

Anchor Box

The problem anchor boxes solve is shown in the figure below: two objects of different shapes whose center points coincide, i.e. one grid cell contains two objects. An ordinary y label has no way to express this.


The usual y label is:

$$y = \begin{bmatrix} P_c \\ b_x \\ b_y \\ b_h \\ b_w \\ c_1 \\ c_2 \\ c_3 \end{bmatrix}$$

The solution is to define anchor boxes; for an image like this one, two anchor boxes are defined. If even more objects might overlap, more anchor boxes are needed. y is then extended to:

$$y = \begin{bmatrix} P_c \\ b_x \\ b_y \\ b_h \\ b_w \\ c_1 \\ c_2 \\ c_3 \\ P_c \\ b_x \\ b_y \\ b_h \\ b_w \\ c_1 \\ c_2 \\ c_3 \end{bmatrix}$$

The upper half of y detects the object matching Anchor Box 1, the lower half the object matching Anchor Box 2.

The YOLO Algorithm

First divide the training image into a grid and fill in a label y for each cell.

  • Each grid cell's top-left corner has coordinates (0,0) and its bottom-right corner (1,1), so $b_x, b_y \in (0,1)$, while $b_h, b_w$ are the true object size as a proportion of the cell. The bounding box of the car on the right above might therefore be 0.4x0.9, and that of the car on the left 0.5x0.6.
  • $P_c = 1$ if the cell contains a target object, otherwise $P_c = 0$
  • If several objects might overlap in one cell, corresponding anchor boxes must be defined

What follows is a classic CNN, which finally yields the prediction $\hat y$ for y.


The final output image might look like this:


Non-max suppression is then applied to filter out the final predictions.

References

YOLO: Redmon et al., 2015. You Only Look Once: Unified, real-time object detection



We finally arrive at real deep learning. This chapter introduces several classic deep convolutional networks, including:

  • Classic Networks:
    • LeNet-5
    • AlexNet
    • VGG
  • ResNet
  • Inception

The classic networks are early neural networks built from convolutional layers; they achieved good results in their day and pushed the deep learning community forward. Their composition also teaches us how to build deep convolutional networks ourselves. ResNet and Inception are research results of recent years; thanks to further advances in deep learning theory and in compute, their structures are far more complex than those of the classic networks. We can study them, and try out these existing networks in real production settings.
Every network has an accompanying paper, listed in the References section.

Classic Networks

These classic networks are all essentially based on the classic convolutional network (LeNet); the networks simply grow ever larger, with ever more parameters.

LeNet-5

The previous chapter used this network as its example when introducing convolutional networks. Its basic structure is shown below:

  • Two convolutional layers, each followed by an average pooling layer
  • Two fully connected layers at the end feed the output directly
  • A Softmax layer can be added before the output for multi-class classification
  • The network has roughly 60K parameters

Being an early network, it is not very deep; the convolutional layers use no padding, so the image shrinks layer by layer, and it uses average pooling, which is uncommon in later networks.
Nonetheless, it is a true classic among convolutional neural networks.

AlexNet

Its network structure is shown below:


Its composition is largely the same as LeNet's. The differences are that it uses max pooling, far more neural connections, an added Softmax output layer, and padding to some extent. The parameter count is around 60M, a thousand times that of LeNet.

When this paper was written, GPU technology was still immature, and the paper spends many pages on how to split the network across multiple GPUs for computation; with today's GPUs this is no longer important.
The paper also mentions a concept called Local Response Normalization; it too is rarely used and need not be understood.

VGG-16

The network structure is shown below:


The structure has become more complex still, and the parameter count reaches 138M.

ResNet

ResNet, the residual network, is so named because it adds to a classic network many residual blocks like the one below:




$z^{[l+1]} = W^{[l+1]}a^{[l]} + b^{[l+1]}$
$a^{[l+1]} = g(z^{[l+1]})$
$z^{[l+2]} = W^{[l+2]}a^{[l+1]} + b^{[l+2]} + a^{[l]}$ <------- the $a^{[l]}$ here is the residual (shortcut) term
$a^{[l+2]} = g(z^{[l+2]})$

The overall network structure looks roughly like this:


Such a network is composed of 5 residual blocks.
Residual blocks were introduced to alleviate the vulnerability of classic networks (or, as this paper calls them, plain networks, i.e. networks without residual blocks) at great depth: the problems brought by exploding and vanishing gradients.


In theory, the larger and deeper a machine learning network, the higher its accuracy should be. But because of initialization issues, or any other random error that creeps in, too many layers can cause gradients to vanish or explode, so the actual performance falls short of the theory.
Once residual connections are introduced to counter vanishing and exploding gradients, better results usually follow:

Why do residual networks work? (Note: I don't fully understand this yet either; recording it for now.)

Attaching a residual block to the end of such a large neural network gives:

$$a^{[l+2]} = g(W^{[l+2]}a^{[l+1]} + b^{[l+2]} + a^{[l]})$$

With weight decay (L2 regularization), $W \rightarrow 0, b \rightarrow 0$; when $W = 0, b = 0$, we get $a^{[l+2]} = g(a^{[l]})$, and if the activation function is ReLU then $a^{[l+2]} = a^{[l]}$. The two newly added layers therefore do not hurt the overall network's performance: at worst they learn the identity. This shows that accumulating depth costs a residual network little.
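The identity argument can be checked numerically with a tiny residual block (a NumPy sketch under the assumptions above; the names are mine): when the added layers' weights and biases are zero and the input activations are non-negative, the block reduces to the identity.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(a_l, W1, b1, W2, b2):
    """a^{[l+2]} = g(W^{[l+2]} g(W^{[l+1]} a^{[l]} + b^{[l+1]}) + b^{[l+2]} + a^{[l]})."""
    a_mid = relu(W1 @ a_l + b1)          # first added layer
    return relu(W2 @ a_mid + b2 + a_l)   # shortcut adds a^{[l]} before the activation
```

With zero weights, the output equals the (non-negative) input exactly, so weight decay pushing W and b toward zero cannot make the deeper network worse.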

Inception

Network in network (1x1 convolution)


Shown here is a single filter. Unlike an ordinary convolution, this is not merely $(6\times6\times32) \ast (1\times1\times32)$; rather $a_{i,j}^{[l+1]} = g(W^{[l+1]}a_{i,j}^{[l]} + b^{[l+1]})$, where $a_{i,j}^{[l+1]}$ and $a_{i,j}^{[l]}$ are both $1\times32$ vectors. With many such filters, this amounts to a small 2-layer (1 hidden layer) neural network applied at each position, hence the name "network in network".
The benefit of a 1x1 convolution, or network in network, is that it can shrink the number of channels of the input image, thereby reducing the amount of computation.
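A 1x1 convolution is exactly a small dense layer applied at every pixel position. A NumPy sketch (the shapes and names are my choice): input activations of shape (H, W, C_in), weights of shape (C_out, C_in).

```python
import numpy as np

def one_by_one_conv(a, W, b):
    """Apply g(W x + b) to the C_in-vector at every pixel; output is (H, W, C_out)."""
    z = np.einsum('hwc,oc->hwo', a, W) + b
    return np.maximum(z, 0.0)  # ReLU activation
```

On a 6x6x32 input with 16 filters this yields a 6x6x16 volume, shrinking the channel count from 32 to 16.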

Using Network in Network to reduce computation


The computation for this single mapping is 28x28x192x5x5x32 = 120M multiplications.


With a 1x1 convolution inserted as an intermediate step, the computation is 28x28x192x1x1x16 + 28x28x16x5x5x32 = 12.4M,
a reduction to roughly a tenth, which is considerable. And thanks to the network-in-network layer, shrinking the channel count does not hurt the final model's performance.
The 1x1 convolution in the middle is also called the network's bottleneck, an apt image.
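The two cost figures can be reproduced by counting multiplications: the size of the output volume times the size of each filter (a sketch; the helper name is mine):

```python
def conv_cost(h, w, out_channels, f, in_channels):
    """Multiplications for a same-size convolution: each of the h*w*out_channels
    output values costs f*f*in_channels multiplications."""
    return h * w * out_channels * f * f * in_channels

direct = conv_cost(28, 28, 32, 5, 192)                                     # 5x5 conv directly
bottleneck = conv_cost(28, 28, 16, 1, 192) + conv_cost(28, 28, 32, 5, 16)  # via 1x1 conv
```

`direct` comes to about 120M and `bottleneck` to about 12.4M, the roughly tenfold saving quoted above.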

Inception (GoogLeNet)


This is a complete Inception network, containing many Inception modules like the one below:


The Inception network with side branches


Each branch has its own Softmax output layer, so the branches can also emit predictions; this ensures that all of the network's hidden units and intermediate layers take part in feature computation. According to Andrew, these branches act as regularization and can effectively reduce the network's risk of overfitting.

References

  • LeNet-5: LeCun et al., 1998. Gradient-based learning applied to document recognition
  • AlexNet: Krizhevsky et al., 2012. ImageNet classification with deep convolutional neural networks
  • VGG-16: Simonyan & Zisserman, 2015. Very deep convolutional networks for large-scale image recognition
  • ResNet: He et al., 2015. Deep residual learning for image recognition
  • Network in network: Lin et al., 2013. Network in network
  • Inception: Szegedy et al., 2014. Going deeper with convolutions


Finally, CNN: the convolutional neural network. As the name suggests, a neural network that incorporates convolutional layers is a CNN.

Definition of Convolution

The mathematical definition of convolution can be found on its wiki page. Roughly:

$$h(x) = (f \ast g)(x) = \int_{-\infty}^\infty f(\tau)g(x-\tau)d\tau$$

The convolution used in machine learning differs slightly: the kernel g(x) is not flipped; it simply slides over the source data, multiplying and accumulating. The point of flipping in the true convolution is that it makes the operation associative, i.e. $(A \ast B) \ast C = A \ast (B \ast C)$.

Note: what machine learning calls convolution is actually cross-correlation.
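The note above can be made concrete. Below is a minimal numpy sketch of "valid" cross-correlation (the kernel slides over the input without flipping), applied with the vertical-edge filter from the text to a hypothetical half-bright, half-dark image:

```python
import numpy as np

def cross_correlate2d(x, k):
    """'Valid' cross-correlation, i.e. what deep learning frameworks call
    convolution: the kernel k is NOT flipped; it slides over x, multiplying
    element-wise and summing."""
    n, f = x.shape[0], k.shape[0]
    out = np.zeros((n - f + 1, n - f + 1))
    for i in range(n - f + 1):
        for j in range(n - f + 1):
            out[i, j] = np.sum(x[i:i+f, j:j+f] * k)
    return out

# The vertical-edge-detecting 3x3 filter from the text
k_vertical = np.array([[1, 0, -1],
                       [1, 0, -1],
                       [1, 0, -1]])

# 6x6 image: bright left half, dark right half (illustrative)
x = np.hstack([np.ones((6, 3)) * 10, np.zeros((6, 3))])
print(cross_correlate2d(x, k_vertical))
```

The 4x4 output lights up only in the two middle columns, exactly where the bright-to-dark edge sits.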

Convolution Parameters

The Convolution Kernel

The 3x3 matrix used in the convolution above is the convolution kernel, also called a filter; the names all refer to the same thing.
In image processing, convolution is commonly used for edge detection. The 3x3 filter above, for example, detects vertical edges; it can also be transposed to detect horizontal edges.
How are the kernel's values chosen? In machine learning they are learnable parameters: like W and b in a standard neural network, they are obtained through training. The hyperparameter we do have to fix is how many kernels to use.
For example, a 6x6x3 input (an image's RGB channels) convolved with two 3x3x3 kernels gives a 4x4x2 output:

$$n_H \times n_W \times n_C \rightarrow \left(\frac{n_H-f}{s}+1\right) \times \left(\frac{n_W-f}{s} + 1\right) \times n_C'$$

  • $n_C$: channel number
  • $n_C'$: channel number of the next layer's input; for this layer it equals the filter number
  • f is the kernel size and s the stride, covered below

Padding

The problem padding solves: without it, the output matrix keeps shrinking, because when $f \gt 1$, $\cfrac{n-f}{s} + 1$ is always smaller than n. With padding, the figure becomes $\cfrac{n-f+2p}{s} + 1$, so the padding size p can be adjusted to control the output matrix's size.

Stride

Stride is the step size. The examples above use a stride of 1, but any other value works. With stride s and padding p, the output matrix has size:

$$\left\lfloor \cfrac{n-f+2p}{s} + 1 \right\rfloor \times \left\lfloor \cfrac{n-f+2p}{s} + 1 \right\rfloor \times n_C'$$
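The output-size formula can be wrapped in a tiny helper (a sketch; padding and stride default to 0 and 1):

```python
import math

def conv_output_size(n, f, p=0, s=1):
    """Output spatial size of a convolution (or pooling) layer:
    floor((n - f + 2p) / s) + 1."""
    return math.floor((n - f + 2 * p) / s) + 1

print(conv_output_size(6, 3))        # 6x6 input, 3x3 filter -> 4
print(conv_output_size(28, 5, p=2))  # "same" padding p=(f-1)/2 keeps 28
print(conv_output_size(28, 2, s=2))  # 2x2 pooling with stride 2 -> 14
```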

Convolutional Neural Networks

A complete convolutional neural network usually consists of three kinds of layers:

  • Convolution (CONV)
  • Pooling (POOL)
  • Fully connected (FC)

Fully connected here is just the classic fully connected neural network.

A Convolutional Layer

The figure below shows the rough shape of a single convolutional layer:

  • The input is a 6x6x3 matrix.
  • Two kernels plus a non-linear activation produce a 4x4x2 output.
  • b is the bias; the activation is ReLU.

The parameters of such a layer are:

  • $f^{[l]}$: filter size
  • $p^{[l]}$: padding size
  • $s^{[l]}$: stride
  • $n_C^{[l]}$: number of filters in layer $l$
  • Input: $n_H^{[l-1]} \times n_W^{[l-1]} \times n_C^{[l-1]}$
  • Output: $n_H^{[l]} \times n_W^{[l]} \times n_C^{[l]}$
  • Each filter has shape: $f^{[l]} \times f^{[l]} \times n_C^{[l-1]}$
  • After activations: $a^{[l]} = n_H^{[l]} \times n_W^{[l]} \times n_C^{[l]}$, with mini-batch: $A^{[l]} = m \times n_H^{[l]} \times n_W^{[l]} \times n_C^{[l]}$
  • Weights: $f^{[l]} \times f^{[l]} \times n_C^{[l-1]} \times n_C^{[l]}$
  • Bias: $n_C^{[l]}$

If the input is a 64x64 image and 10 3x3 kernels are used, how many model parameters are needed?
Answer: for an RGB (3-channel) input, (3x3x3+1)*10 = 280, independent of the input image size. In other words, a convolutional model trained on small images can equally be applied to large images.
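The count can be checked with a one-liner (assuming an RGB input, so each filter spans 3 channels):

```python
def conv_layer_params(f, n_c_prev, n_filters):
    """Learnable parameters in one conv layer:
    each filter has f*f*n_c_prev weights plus 1 bias."""
    return (f * f * n_c_prev + 1) * n_filters

# 10 filters of 3x3 over a 3-channel input:
print(conv_layer_params(3, 3, 10))  # 280, regardless of image size
```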

Pooling

There are generally two pooling strategies:

  • Max pooling
  • Average pooling

The former is used more often.
The two operations are illustrated by the figures:
Max pooling

Average pooling

Here f=2 means pooling over a 2x2 block, and s=2 means shifting by 2 to reach the next block. Both f and s are hyperparameters; a pooling layer therefore has no learnable parameters, only hyperparameters.
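A minimal numpy sketch of max pooling with f=2, s=2, applied to a small hypothetical feature map:

```python
import numpy as np

def max_pool(x, f=2, s=2):
    """Max pooling with window f and stride s; no learnable parameters."""
    n = x.shape[0]
    out_n = (n - f) // s + 1
    out = np.zeros((out_n, out_n))
    for i in range(out_n):
        for j in range(out_n):
            out[i, j] = x[i*s:i*s+f, j*s:j*s+f].max()
    return out

x = np.array([[1, 3, 2, 1],
              [4, 6, 5, 2],
              [7, 8, 9, 0],
              [2, 1, 3, 4]])
print(max_pool(x))  # [[6. 5.]
                    #  [8. 9.]]
```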

Why have a pooling layer at all? There is plenty of discussion online; one excerpt:

Essentially, it is a trick that trims the feature-map data volume while preserving as much spatial and feature information as possible. The aim is that, by compressing and condensing the feature maps, the input fed to later hidden layers becomes smaller and computation gets more efficient. The invariance of a CNN, at its core, is created by the convolution itself.

My own understanding, for a few reasons (possibly wrong, corrections welcome):

  • As the convolution slides and accumulates, the windows overlap, so the data is redundant and can be trimmed.
  • Pooling reduces sensitivity to small translations: max pooling keeps only the maximum of a small region, so a slight shift barely changes the output.
  • It reduces dimensionality, cutting downstream computation and the risk of overfitting, at the cost of a higher risk of underfitting.

The Complete Convolutional Neural Network

A complete convolutional neural network looks roughly like this:

CONV-POOL-CONV-POOL-FC-FC-Softmax

  • Each convolutional layer is followed by a pooling layer; two of each in total.
  • After the convolutional layers, the output is flattened into a column vector that serves as input to the following network.
  • FC3 and FC4 are two standard fully connected layers.
  • The last layer is a Softmax output layer.
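The layer shapes in such a CONV-POOL-CONV-POOL-FC stack can be traced with the output-size formula. A sketch under assumed hyperparameters (32x32x3 input, 5x5 convolutions, 2x2 pooling; the filter counts are made up for illustration):

```python
import math

def out_size(n, f, p=0, s=1):
    # floor((n - f + 2p) / s) + 1
    return math.floor((n - f + 2 * p) / s) + 1

# Hypothetical CONV-POOL-CONV-POOL-FC-FC-Softmax dimensions
n, c = 32, 3                      # 32x32 RGB input (assumed)
n = out_size(n, f=5); c = 8       # CONV1: eight 5x5 filters  -> 28x28x8
n = out_size(n, f=2, s=2)         # POOL1: 2x2, stride 2      -> 14x14x8
n = out_size(n, f=5); c = 16      # CONV2: sixteen 5x5 filters -> 10x10x16
n = out_size(n, f=2, s=2)         # POOL2: 2x2, stride 2      -> 5x5x16
flat = n * n * c                  # flatten into a column vector
print(flat)                       # 400 units feed the first FC layer
```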

All the parameters of such a network are listed in the following table:



Transfer learning

Transfer learning means taking the model for task A and, with no change other than replacing the output layer, using it directly as the model for task B, or continuing to train on top of the task A weights to turn it into a task B model.
Transfer learning arises from constraints on sample data. For example, task A is a cat detector and task B is judging bone age from X-ray images. Task A has samples by the million; task B's samples come from hospital patients and are very limited. Yet their low-level modules should be the same or similar: both need edge detection, pixel-level analysis, and so on. Given this similarity of low-level needs, the network model can be shared, which is where transfer learning comes from.

When transfer learning makes sense?
Task A and B have the same input X.
You have a lot more data for Task A than Task B.
Low level features from A could be helpful for learning B.
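A minimal numpy sketch of the idea, under toy assumptions: the "pretrained" task-A layer is stood in for by a fixed random projection (in a real setting W1, b1 would come from training on the large task-A dataset), and only a new logistic output layer is trained on a tiny synthetic task-B dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" task-A feature layer (hypothetical stand-in), kept frozen
W1 = rng.normal(size=(8, 4))           # maps 4 inputs -> 8 hidden features
b1 = np.zeros(8)

def features(X):
    return np.maximum(X @ W1.T + b1, 0)   # frozen ReLU layer

# New task-B output layer, trained from scratch on a small dataset
X = rng.normal(size=(32, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy task-B labels
w2, b2 = np.zeros(8), 0.0

def loss():
    p = 1 / (1 + np.exp(-(features(X) @ w2 + b2)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

before = loss()
for _ in range(200):                       # gradient descent on w2, b2 only
    p = 1 / (1 + np.exp(-(features(X) @ w2 + b2)))
    g = p - y
    w2 -= 0.1 * features(X).T @ g / len(y)
    b2 -= 0.1 * g.mean()
print(before > loss())   # True: task-B loss decreased without touching W1
```

Only the output layer's parameters move; the frozen layer plays the role of the shared low-level features.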

Multi-task learning

Multi-task learning means completing several tasks in a single round of learning, for example recognizing several kinds of objects in one image.
Say the image requires recognizing pedestrians, traffic lights, cars, and stop signs; then $y \in R^4$ and the loss function is written:

$$J = \frac{1}{m} \Sigma_{i=1}^m \Sigma_{j=1}^4 L(\hat y_j^{(i)}, y_j^{(i)})$$
$$L(\hat y_j^{(i)}, y_j^{(i)}) = -y_j^{(i)}\log\hat y_j^{(i)} - (1-y_j^{(i)})\log(1-\hat y_j^{(i)})$$

  • A standard loss function has no $\Sigma_{j=1}^4$ term.
  • If some labels $y^{(i)}$ in the sample data are incomplete (the question-mark entries), the formula above still applies; the $\Sigma_{j=1}^4$ simply accumulates over the labeled entries only.
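The label-masking trick can be sketched in numpy, marking the "?" entries as NaN (an assumed encoding) and summing only over labeled tasks:

```python
import numpy as np

def multitask_loss(y_hat, y):
    """Sum of per-task binary cross-entropies, skipping unlabeled
    entries (marked here as np.nan, the '?' entries in the text)."""
    mask = ~np.isnan(y)
    l = -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
    return np.where(mask, l, 0.0).sum(axis=1).mean()

y_hat = np.array([[0.9, 0.2, 0.8, 0.1],
                  [0.3, 0.6, 0.4, 0.7]])
y = np.array([[1.0, 0.0, np.nan, 0.0],   # third task unlabeled
              [0.0, 1.0, 0.0, 1.0]])
print(multitask_loss(y_hat, y))
```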

When multi-task learning makes sense?

  • Training on a set of tasks that could benefit from having shared low-level features.
  • Usually: Amount of data you have for each task is quite similar.
  • Can train a big enough neural network to do well on all the tasks.

End-to-end learning

Translated as end-to-end learning. What does it mean? Consider the speech recognition example:
Existing speech recognition systems usually take the pipeline route: from audio to transcript through several components, each with plenty of papers and projects behind it. That is not an end-to-end learning solution. End-to-end learning is the other route: training a single model that maps the audio directly to the transcript.
Why doesn't the current solution use end-to-end learning? Because doing so requires a huge number of samples pairing all kinds of audio clips with their transcripts. Ideally, given enough such effective samples, end-to-end learning would be viable.

Put that way, end-to-end learning may not sound very reliable, but for simple learning objectives it is still the standard choice. Its pros and cons:

Pros and Cons
Pros

  • Let the data speak. In the end it comes down to whether you have enough relevant samples, i.e. whether they cover the prediction complexity you are aiming for.
  • Less hand-designing of components needed. No manual splitting of the system or pipeline design required.

Cons

  • May need large amount of data. The drawback is naturally the need for lots of data.
  • Excludes potentially useful hand-designed components. You also give up hand-designed components that might have helped.


I think this is better rendered as 错误分析 (mistake analysis) than 误差分析 (numerical-error analysis). Two questions are the focus:

  • What if the dataset itself contains errors?
  • What if, because of limitations of the sample data, the training set and dev/test sets have different distributions? Does data mismatch appear, and if it does, what then?

The Error Analysis Method

Here Andrew offers a methodology. Whenever error analysis is needed, for instance when the current model's results are unsatisfactory, when you suspect the sample data has problems, or when the data mismatch situation discussed later may be present, the following method helps uncover where the problems or errors lie:

  • Take roughly 100 samples.
  • Among these 100 samples, count the erroneous ones and tabulate them by category.
  • Samples predicted correctly should be inspected too: a sample may be mislabeled while a model defect cancels the mistake, two wrongs making a right, so the correct result is accidental. Pick those samples out as well.
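The tabulation step can be as simple as a counter over hand-assigned error categories (the categories below are hypothetical):

```python
from collections import Counter

# Illustrative tally over ~100 sampled dev-set mistakes,
# each hand-labeled with an error category:
mistakes = ["blurry", "dog", "blurry", "filter", "dog", "blurry",
            "incorrectly labeled", "dog"]
print(Counter(mistakes).most_common())
# The dominant categories tell you where fixing effort pays off most.
```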

What if the samples are wrong?

In training set

Usually, if the errors are merely random, there is no need to treat them specially, because machine learning is good at absorbing the effect of random label errors. Systematic errors, such as labeling every small white dog as a cat, do need correcting. That is usually unlikely; if it happens, and the data volume is large, it only means we have used untrustworthy data.

In dev/test set

If mislabeled data is found in the dev/test set, it should still be corrected, because dev/test data is not used for training iterations but for judging which model is better. Errors in that data distort the evaluation and hence the direction in which the model is revised, so label errors in the dev/test set are harmful and should be corrected when necessary. To judge whether correction is worthwhile, use the error analysis method of the previous section again, with one extra column, Incorrectly labeled. If that column shows that mislabeled data accounts for a large enough percentage of the misjudgments, it is time to spend some effort fixing those dev/test samples.
Some guidelines:

  • Apply same process to your dev and test sets to make sure they continue to come from the same distribution. Keep the dev and test distributions identical, or the model evaluations will contradict each other.
  • Consider examining examples your algorithm got right as well as ones it got wrong. Account for the two-wrongs-make-a-right cases.
  • Train and dev/test data may now come from slightly different distributions. Because the training set is huge and insensitive to random errors, it is usually left unadjusted; correcting only the dev and test sets can then introduce data mismatch. If you suspect data mismatch has occurred, use the method above to get some insight.

Build your first system quickly, then iterate.

When Training and Dev/Test Distributions Differ

Why does this happen?

Andrew gives two vivid examples: the cat detector and the rear-view mirror. Taking the cat detector for these notes: training uses web data, 200,000 clear images of well-posed cats, while the deployed app must predict on 10,000 user photos that are blurry and oddly posed. Mixing all of them and splitting randomly is not appropriate, because the dev/test sets must reflect the real goal, the blurry user photos, yet under a random split less than 5% of the dev/test data would be the data we actually care about; the project's target would drift. The better practice is to split the user data in two: one half makes up the dev/test sets entirely, and the other half joins the training set. The benefit is an accurate target; the cost is a possible data mismatch problem.

How to detect data mismatch

The method is to split one more set out of the training set, called the training-dev set:

| Human Level            | 4%  |                                  |
| :--------------------- | :-: | :------------------------------- |
|                        |     | avoidable bias                   |
| Training set error     | 7%  |                                  |
|                        |     | variance                         |
| Training-dev set error | 10% |                                  |
|                        |     | data mismatch                    |
| Dev error              | 12% |                                  |
|                        |     | degree of overfitting to dev set |
| Test error             | 12% |                                  |

When the gap between dev error and training-dev error is too large, a data mismatch problem can be assumed.
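The ladder can be read mechanically. A toy sketch that just reports the largest gap (a simplification; in practice each gap is judged on its own):

```python
def diagnose(human, train, train_dev, dev):
    """Report which gap in the error ladder is largest."""
    gaps = {
        "avoidable bias": train - human,
        "variance": train_dev - train,
        "data mismatch": dev - train_dev,
    }
    return max(gaps, key=gaps.get)

print(diagnose(human=0.04, train=0.05, train_dev=0.06, dev=0.12))
# prints "data mismatch": the dev vs training-dev gap dominates
```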

Test error and dev error should normally agree; if they differ too much, the model is overfitting the dev set as well, and enlarging the dev set should be considered.

What to do about data mismatch?

When data mismatch occurs:

  • Carry out manual error analysis to try to understand the difference between training and dev/test sets.
  • Make training data more similar; or collect more data similar to dev/test sets.

For the second point, data synthesis can be used, that is, the data augmentation mentioned before. Data synthesis works, but be careful: synthesized data usually simulates only a small slice of the real world, and the model may over-tune to that slice. So remember to apply the variance tactics to locate and resolve the overfitting this causes.
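A toy sketch of data synthesis in numpy, assuming image-like arrays; the flip-plus-noise recipe is illustrative only, standing in for simulating e.g. blurry user photos:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Tiny data-synthesis sketch (illustrative): random horizontal
    flip plus additive noise to mimic lower-quality captures."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                     # horizontal flip
    noise = rng.normal(0, 0.05, img.shape)     # mild sensor-like noise
    return np.clip(img + noise, 0.0, 1.0)

img = rng.random((8, 8))
aug = augment(img)
print(aug.shape)   # (8, 8): same shape, perturbed content
```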



Deep Learning (7) - Machine Learning Strategy

Machine Learning Strategy is about how to adjust your learning approach to reach better accuracy. Andrew notes that the two weeks of this course (Structuring Machine Learning Projects) cover practical lessons from real machine learning projects, the kind of material rarely taught in university courses.

Orthogonalization

Orthogonalization is analogous to decoupling in programming: each tuning knob should correct the model's accuracy under one condition without disturbing the others. There are generally four such strategies:

When a supervised learning system is designed, these are the four assumptions that need to be true and orthogonal.

  1. Fit training set well in cost function
    • If it doesn't fit well, using a bigger neural network or switching to a better optimization algorithm might help.
  2. Fit development set well on cost function
    • If it doesn't fit well, regularization or using a bigger training set might help.
  3. Fit test set well on cost function
    • If it doesn't fit well, using a bigger development set might help.
  4. Performs well in real world
    • If it doesn't perform well, either the dev/test set is not set correctly or the cost function is not evaluating the right thing.

Andrew also points out that early stopping, a regularization method mentioned earlier, is not an orthogonal tuning strategy: it tries to address points 1 and 2 at the same time.

Setting up your goal

A machine learning project typically cycles through the loop Idea -> Code -> Experiment -> Idea, iterating toward the final optimized model. The Experiment step is essentially testing: whether its results look good or bad decides how to adjust the model next, so evaluating those results well matters a great deal. The techniques in this chapter are all about evaluating test results.

Single real number evaluation metric

If a test has two competing numeric metrics, trade-offs become hard. That slows down the loop above and can even hurt the final model's accuracy, which is why a single real number evaluation metric is proposed. For example:

Precision: Of all the images we predicted y=1, what fraction actually have cats?
Recall: Of all the images that actually have cats, what fraction did we correctly identify?

Both metrics are of course the higher the better, so how do we rank Classifier A against Classifier B?
An earlier post, Machine Learning (2) - Neural Network, mentions the F1-score in its section on skewed data. Here the F1-score combines precision and recall into a single standard:
F1-score $= \frac{2}{\frac{1}{P} + \frac{1}{R}}$
This is also known as the harmonic mean.

Taking the "mean" of precision and recall is only one example; in general you have to work out for yourself how to produce a single numeric metric that is easy to evaluate.
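As a quick sketch of the formula above (the classifier numbers are made up for illustration), the harmonic mean can be computed and used to rank two classifiers:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall: 2 / (1/P + 1/R)."""
    if precision == 0 or recall == 0:
        return 0.0
    return 2 / (1 / precision + 1 / recall)

# Hypothetical classifiers: A has P=0.95, R=0.90; B has P=0.98, R=0.85.
f1_a = f1_score(0.95, 0.90)
f1_b = f1_score(0.98, 0.85)
best = "A" if f1_a > f1_b else "B"
```

With these made-up numbers A wins: its slightly lower precision is outweighed by better recall, exactly the kind of trade-off a single metric settles automatically.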

Satisficing and Optimizing

Metrics for judging experiment results can usually be divided into two kinds: satisficing and optimizing.
A satisficing metric only needs to reach a threshold; improving it further barely affects the verdict. For example, a running time of 100 ms is good enough, and the difference between 80 ms and 90 ms brings no real improvement.
An optimizing metric is a numeric one, such as accuracy, where every small improvement affects the verdict.
As a concrete example, when comparing several classifiers, running time is the satisficing metric and accuracy the optimizing one.
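A sketch of this selection rule, with made-up classifier numbers: discard any model that violates the satisficing constraint, then maximize the optimizing metric among the rest.

```python
# Hypothetical candidates: (name, accuracy in %, running time in ms).
candidates = [
    ("A", 90, 80),
    ("B", 92, 95),
    ("C", 95, 1500),  # best accuracy, but fails the satisficing constraint
]

def pick_model(models, max_ms=100):
    """Satisficing: running time must be <= max_ms.
    Optimizing: among feasible models, maximize accuracy."""
    feasible = [m for m in models if m[2] <= max_ms]
    return max(feasible, key=lambda m: m[1])

best = pick_model(candidates)
```

C is rejected outright despite its accuracy; B wins because it is the most accurate model that satisfies the 100 ms constraint.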

Train/dev/test distributions

In a machine learning project, the sample data usually needs to be split rather than used entirely for training. The previous section discussed evaluating experiment results, and that evaluation happens on the dev or test set, so a well-chosen dev/test set is crucial for model iteration and project progress.

  • Training set: iterate to obtain the model parameters (W, b, $\beta$, $\gamma \dots$)
  • Development (dev) set: within each iteration, validate the training result so the model can be adjusted for the next round (hyperparameter tuning, whether to use regularization, changes to the network structure, and so on)
  • Test set: after all iterations finish, validate the correctness of the final model.

How to split?

When there is not much sample data (thousands to tens of thousands of examples), the classic split is roughly 60% train / 20% dev / 20% test. When samples are plentiful, say millions, the dev and test sets only need to reach a few thousand examples each; more does not improve the evaluation, and the remaining samples are better placed in the training set.
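The two regimes can be sketched as follows (the 60/20/20 ratio and the few-thousand dev/test sizes follow the text; the helper name and threshold are my own):

```python
import numpy as np

def split_dataset(X, big_threshold=1_000_000, seed=0):
    """Shuffle, then split: 60/20/20 for smaller datasets,
    fixed few-thousand dev/test sets for very large ones."""
    rng = np.random.default_rng(seed)
    X = X[rng.permutation(len(X))]  # shuffle so all splits share one distribution
    if len(X) < big_threshold:
        n_train = int(0.6 * len(X))
        n_dev = int(0.2 * len(X))
    else:
        n_dev = 10_000                # a few thousand dev/test examples suffice
        n_train = len(X) - 2 * n_dev  # everything else goes to training
    return X[:n_train], X[n_train:n_train + n_dev], X[n_train + n_dev:]

train, dev, test = split_dataset(np.arange(1000))
```

The up-front shuffle also enforces the "same distribution" guideline below: dev and test are random draws from the same pool as training.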

Guidelines

Same distribution

When splitting the sample data, the training, dev, and test sets must all come from the same distribution. For example, taking data from the first four countries as the dev set and the last four as the test set is wrong; the correct approach is to shuffle all the countries' data together and randomly draw dev and test sets of sufficient size.

Reflect the final application scenario

The dev/test set must reflect the scenario in which the final model will be used.
Take a cat detector trained and tested on high-resolution web images: if users mostly upload low-quality photos, recognition accuracy is bound to disappoint, because the dev/test set does not match the real application. The fix is either to collect more real user data, or to use data augmentation to add noise artificially, turning sharp images into blurry ones. In short, improve the training and dev/test sets, then train again.

What if certain errors are completely unacceptable?

Suppose the cat detector misclassifies pornographic images as cats; that is unacceptable no matter how accurate the model is overall. The solution is to modify the metric:
Error: $\frac{1}{\Sigma_i w^{(i)}} \Sigma_{i=1}^{m_{dev}} w^{(i)} I\{y_{pred}^{(i)} \neq y^{(i)}\}$
$w^{(i)} = 1$ if $X^{(i)}$ is non-porn, $w^{(i)} = 10$ if $X^{(i)}$ is porn.

The final optimization direction

In the lecture's figure, a blue dashed line marks human-level performance and a green dashed line the theoretical limit, the Bayes Optimal Error: the best performance achievable by any means. The learning curve improves at a decent rate until it reaches human level, after which it flattens out and further gains become hard.

Why use human-level performance as the watershed?

Because humans are already very good at natural perception tasks. While the model is below human-level performance, you have various tools to improve it; once the model reaches or exceeds human level, those tools stop working, because they themselves are built from human perception.

What is Avoidable Bias?

Avoidable Bias = Training Error − Bayes Error

  • Human-level error is usually used to approximate the Bayes optimal error.
  • If | human-level error − training error | $\ge$ | training error − development error |, focus on bias (avoidable bias) reduction techniques
  • If | human-level error − training error | $\le$ | training error − development error |, focus on variance reduction techniques
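The decision rule in the two bullets can be sketched as follows (the function name is mine; human-level error stands in for the Bayes error):

```python
def next_focus(human_err, train_err, dev_err):
    """Compare avoidable bias (training vs. human-level error, the Bayes proxy)
    against variance (dev vs. training error) and report the bigger gap."""
    avoidable_bias = abs(human_err - train_err)
    variance = abs(train_err - dev_err)
    return "bias" if avoidable_bias >= variance else "variance"
```

For example, with human 1%, training 8%, dev 10%, the 7-point avoidable bias dominates the 2-point variance, so bias reduction (bigger network, better optimizer) should come first.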

What to do after surpassing human-level performance?

The answer: there is no way, or at least no good way. Once the model surpasses human-level performance, all the usual tools for improving it may stop working, so even a single step forward is hard; Andrew gives no clear direction in the course either. We normally approximate the Bayes optimal error with human-level performance, but once the model beats the accuracy of a group of human judges, that value can no longer serve as the approximation.

Summary



Deep Learning (6) - Softmax regression

I had heard of softmax even before taking this course. It is said that the last layer of today's deep learning models is a softmax layer, used for multi-class classification.

Activation Function:
$t = e^{Z^{[L]}}$
$a^{[L]} = \frac{t}{\Sigma_i t_i}$
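A minimal NumPy sketch of this activation (subtracting the maximum before exponentiating is a standard numerical-stability trick, not part of the formula itself):

```python
import numpy as np

def softmax(z):
    """t = e^z (shifted by max(z) for stability), a = t / sum(t)."""
    t = np.exp(z - np.max(z))
    return t / np.sum(t)

a = softmax(np.array([5.0, 2.0, -1.0, 3.0]))
```

The outputs are all positive and sum to 1, so they can be read as class probabilities, with the largest logit getting the largest probability.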



Deep Learning (5) - Batch Normalization

When building a machine learning model, the input X is usually normalized first:
$\mu = \frac{1}{m} \Sigma_i x^{(i)}$
$X = X - \mu$
$\sigma^2 = \frac{1}{m} \Sigma_i (x^{(i)} - \mu)^2$
$X = \frac{X}{\sigma}$

The benefit of normalization is faster convergence. After normalization the cost contours change from ellipses into circles, so wherever the starting point falls, gradient descent converges to the optimum at the center. On unnormalized, elliptical contours, a little random noise can push the direction off course and make the result diverge, so the learning rate has to be chosen very carefully.
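The four steps above, as a NumPy sketch (per-feature normalization along axis 0; the function name is mine):

```python
import numpy as np

def normalize(X):
    """Zero-mean, unit-variance per feature: X = (X - mu) / sigma."""
    mu = X.mean(axis=0)            # mu = (1/m) * sum_i x^(i)
    X = X - mu                     # center
    sigma2 = (X ** 2).mean(axis=0) # sigma^2 = (1/m) * sum_i (x^(i) - mu)^2
    return X / np.sqrt(sigma2)     # scale to unit variance

Xn = normalize(np.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]]))
```

Note how the second feature, originally 100x larger than the first, ends up on the same scale: that rescaling is what rounds out the cost contours.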

Implementation

What Batch Norm does is apply the same normalization to the intermediate variable $Z^{[l]}$ of every layer of the neural network:

  • $\mu = \frac{1}{m} \Sigma_i Z^{(i)}$
  • $\sigma^2 = \frac{1}{m} \Sigma_i (Z^{(i)} - \mu)^2$
  • $Z_{norm}^{(i)} = \frac{Z^{(i)} - \mu}{\sqrt{\sigma^2 + \epsilon}}$
  • $\tilde{Z}^{(i)} = \gamma Z_{norm}^{(i)} + \beta$
  • The parameters $\gamma$ and $\beta$ control the mean and variance of Z; like W and b, they are learnable parameters solved for by the model.

Why have $\gamma$ and $\beta$?
If everything were normalized to mean 0 and variance 1, then with a sigmoid-like activation the outputs of each node would all concentrate in the central, linear region of the curve. Each node would degenerate into a linear activation, every node would become linear, and the whole network would degenerate into logistic regression. To preserve nonlinearity and diversity, $\gamma$ and $\beta$ adjust the distribution of each hidden value.

Batch Norm in Neural Network

for t = 1 … num of Mini-batches
   compute forward prop on $X^{\{t\}}$
       In each hidden layer, use BN to replace $Z^{[l]}$ with $\tilde{Z}^{[l]}$
   Use backprop to compute $dW^{[l]}, d\beta^{[l]}, d\gamma^{[l]}$
   Update params
       $W^{[l]} := W^{[l]} - \alpha \cdot dW^{[l]}$
       $\beta^{[l]} := \beta^{[l]} - \alpha \cdot d\beta^{[l]}$
       $\gamma^{[l]} := \gamma^{[l]} - \alpha \cdot d\gamma^{[l]}$
Works with momentum, RMSprop, and Adam.
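The parameter update inside this loop is ordinary gradient descent applied to W, $\beta$, and $\gamma$ alike; a minimal sketch with scalar "parameters" for brevity (the dict layout is my own):

```python
def update_params(params, grads, alpha):
    """One gradient-descent step, p := p - alpha * dp, for W, beta, gamma.
    db is intentionally absent: under Batch Norm, b is absorbed by beta."""
    return {k: params[k] - alpha * grads["d" + k] for k in params}

params = {"W": 1.0, "beta": 0.5, "gamma": 1.0}
grads = {"dW": 0.2, "dbeta": -0.1, "dgamma": 0.05}
params = update_params(params, grads, alpha=0.1)
```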

Note that $db^{[l]}$ is omitted here: b is a constant, independent of the input, so during normalization it is absorbed into the mean of the hidden values and hence into $\beta$.

Batch Norm at test time

At test time, Batch Norm statistics are not computed on the test set itself, because the test set's distribution may differ from the training set's. But forward propagation for prediction relies on hidden-unit parameters that were iterated under Batch Norm, so the compromise is to track the training set's $\mu$ and $\sigma^2$ with an exponentially weighted average and use those values at test time. Concretely:

  • For each mini batch, record $\mu^{\{i\}[l]}$ and $\sigma^{2\{i\}[l]}$ in every layer
  • Use an exponentially weighted average across mini batches to update $\mu$ and $\sigma^2$
  • When training ends, keep the final $\mu$ and $\sigma^2$ and use them on the test set
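The running statistics can be tracked as below (the momentum value 0.9 and the function name are my own choices; the same update applies to $\mu$ and $\sigma^2$ alike):

```python
def update_running_stat(running, batch_stat, momentum=0.9):
    """Exponentially weighted average across mini-batches:
    running := momentum * running + (1 - momentum) * batch_stat."""
    return momentum * running + (1 - momentum) * batch_stat

mu_run = 0.0
for mu_batch in [1.0, 1.0, 1.0]:  # pretend each mini-batch has mean 1.0
    mu_run = update_running_stat(mu_run, mu_batch)
```

After many mini-batches the running value converges toward the true training-set statistic, and that final value is the one frozen for test-time prediction.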


Deep Learning (4) - Hyperparameters

Machine learning models usually have hyperparameters, such as the learning rate ($\alpha$) and the number of network layers. In contrast, the parameters the model tunes through learning, such as W and b, are called learnable parameters. Hyperparameters affect the speed and quality of gradient descent's convergence, and even whether it converges at all, so they usually need continual adjustment to find values that suit the model.

Tuning Process

After the various optimization algorithms (Momentum, RMSprop, ADAM) are introduced, there are even more kinds of hyperparameters:

  • Learning rate: $\alpha$
  • Momentum: $\beta$
  • ADAM: $\beta_1, \beta_2, \epsilon$
  • Number of layers
  • Number of hidden units
  • Learning rate decay algorithm
  • mini-batch size

For tuning these parameters, Andrew gives priorities:

Among all these hyperparameters:

  • The learning rate is the most important and should be tuned first; without a suitable learning rate the algorithm may diverge
  • Second priority (the orange boxes in the slide): Momentum $\beta$, Number of hidden units, mini-batch size
  • Then (the purple boxes): Number of layers, choice of learning rate decay algorithm
  • ADAM's parameters usually need no tuning; the classic values often work well: $\beta_1 = 0.9, \beta_2 = 0.999, \epsilon = 10^{-8}$

Try random values, Don’t use grid search

Why not grid search? Because hyperparameters differ in importance. In a grid of 5 values of $\alpha$ against 5 values of $\epsilon$, if $\epsilon$ turns out to have no effect on the result, only 5 of the 25 runs were actually useful. With random search, every run uses a different $\alpha$ and a different $\epsilon$, so all 25 runs are effective. Random search is therefore more efficient than grid search.
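A sketch of drawing random hyperparameter settings (the ranges and the parameter set are made up for illustration):

```python
import random

def sample_hyperparams(n, seed=0):
    """Random search: each trial draws every hyperparameter independently,
    so n trials yield n distinct values of each, unlike an n-by-n grid."""
    rng = random.Random(seed)
    return [
        {"alpha": 10 ** rng.uniform(-4, 0),       # log-uniform learning rate
         "hidden_units": rng.randrange(50, 200)}  # uniform integer range
        for _ in range(n)
    ]

trials = sample_hyperparams(25)
```

With 25 grid points, $\alpha$ would take only 5 distinct values; here all 25 trials probe a different $\alpha$, which is the efficiency argument above.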

Coarse to fine

Going from coarse granularity to fine is an obvious strategy: once testing shows that a few points give good results, shrink the region around them and search more points at higher resolution.

Using an appropriate scale to pick hyperparameter

The point here is that in some cases the hyperparameter's search range should be uniform on an exponential scale. Take $\beta$ in Momentum: when tuning the range 0.9 to 0.999, you are really tuning $1-\beta \in [10^{-3}, 10^{-1}]$, so sample $r \in [-3, -1]$ uniformly and set $\beta = 1 - 10^r$.
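The sampling rule above, sketched in Python (the function name is mine):

```python
import random

def sample_beta(seed=None):
    """Sample momentum beta in [0.9, 0.999], uniform on a log scale of 1-beta:
    r ~ U[-3, -1], beta = 1 - 10**r."""
    rng = random.Random(seed)
    r = rng.uniform(-3, -1)
    return 1 - 10 ** r

betas = [sample_beta(seed) for seed in range(100)]
```

Sampled this way, roughly as many draws fall in [0.99, 0.999] as in [0.9, 0.99], whereas uniform sampling of $\beta$ itself would almost never explore the sensitive region near 0.999.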
