Source Code Analysis of the EuclideanLoss Layer in Caffe

The EuclideanLoss layer in Caffe computes the L2 loss (i.e., the sum-of-squares loss). Its loss function is:

E = \frac{1}{2N} \sum\limits_{n=1}^N \left| \left| \hat{y}_n - y_n \right| \right|_2^2

where N is the first dimension of the input blob (i.e., bottom[0]->num()), and \hat{y}_n, y_n may be vectors or scalars; the former are the predictions and the latter are the labels (the target values).
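When \hat{y}_n and y_n are D-dimensional vectors (D = channels × height × width of the input blob), the squared L2 norm simply sums over their components, so the same loss can be written element-wise as:

E = \frac{1}{2N} \sum\limits_{n=1}^N \sum\limits_{d=1}^D \left( \hat{y}_{n,d} - y_{n,d} \right)^2

Note that the normalization divides only by the batch size N, not by the number of components D; this matches the division by bottom[0]->num() in the code below.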

Let us first look at what the layer's header file declares:

#ifndef CAFFE_EUCLIDEAN_LOSS_LAYER_HPP_
#define CAFFE_EUCLIDEAN_LOSS_LAYER_HPP_

#include <vector>

#include "caffe/blob.hpp"
#include "caffe/layer.hpp"
#include "caffe/proto/caffe.pb.h"

#include "caffe/layers/loss_layer.hpp"

namespace caffe {

/**
 * @brief Computes the Euclidean (L2) loss @f$
 *          E = \frac{1}{2N} \sum\limits_{n=1}^N \left| \left| \hat{y}_n - y_n
 *        \right| \right|_2^2 @f$ for real-valued regression tasks.
 *
 * @param bottom input Blob vector (length 2)
 *   -# @f$ (N \times C \times H \times W) @f$
 *      the predictions @f$ \hat{y} \in [-\infty, +\infty]@f$
 *   -# @f$ (N \times C \times H \times W) @f$
 *      the targets @f$ y \in [-\infty, +\infty]@f$
 * @param top output Blob vector (length 1)
 *   -# @f$ (1 \times 1 \times 1 \times 1) @f$
 *      the computed Euclidean loss: @f$ E =
 *          \frac{1}{2n} \sum\limits_{n=1}^N \left| \left| \hat{y}_n - y_n
 *        \right| \right|_2^2 @f$
 *
 * This can be used for least-squares regression tasks.  An InnerProductLayer
 * input to a EuclideanLossLayer exactly formulates a linear least squares
 * regression problem. With non-zero weight decay the problem becomes one of
 * ridge regression -- see src/caffe/test/test_sgd_solver.cpp for a concrete
 * example wherein we check that the gradients computed for a Net with exactly
 * this structure match hand-computed gradient formulas for ridge regression.
 *
 * (Note: Caffe, and SGD in general, is certainly \b not the best way to solve
 * linear least squares problems! We use it only as an instructive example.)
 */
template <typename Dtype>
class EuclideanLossLayer : public LossLayer<Dtype> {
 public:
  explicit EuclideanLossLayer(const LayerParameter& param)
      : LossLayer<Dtype>(param), diff_() {}
  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);

  virtual inline const char* type() const { return "EuclideanLoss"; }
  /**
   * Unlike most loss layers, in the EuclideanLossLayer we can backpropagate
   * to both inputs -- override to return true and always allow force_backward.
   */
  // Unlike most loss layers, both of the EuclideanLoss layer's input blobs can be backpropagated to
  virtual inline bool AllowForceBackward(const int bottom_index) const {
    return true;
  }

 protected:
  /// @copydoc EuclideanLossLayer
  // Forward pass
  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);
  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);

  /**
   * @brief Computes the Euclidean error gradient w.r.t. the inputs.
   *
   * Unlike other children of LossLayer, EuclideanLossLayer \b can compute
   * gradients with respect to the label inputs bottom[1] (but still only will
   * if propagate_down[1] is set, due to being produced by learnable parameters
   * or if force_backward is set). In fact, this layer is "commutative" -- the
   * result is the same regardless of the order of the two bottoms.
   * In other words: unlike the other loss functions derived from LossLayer, EuclideanLossLayer
   * can compute the gradient with respect to the label input bottom[1] (i.e., when
   * propagate_down[1] is set, or when force_backward is set).
   * In fact this layer is "commutative": the backward result is the same regardless of the
   * order of the two bottoms (bottom[0] and bottom[1]).
   * @param top output Blob vector (length 1), providing the error gradient with
   *      respect to the outputs
   *   -# @f$ (1 \times 1 \times 1 \times 1) @f$
   *      This Blob's diff will simply contain the loss_weight* @f$ \lambda @f$,
   *      as @f$ \lambda @f$ is the coefficient of this layer's output
   *      @f$\ell_i@f$ in the overall Net loss
   *      @f$ E = \lambda_i \ell_i + \mbox{other loss terms}@f$; hence
   *      @f$ \frac{\partial E}{\partial \ell_i} = \lambda_i @f$.
   *      (*Assuming that this top Blob is not used as a bottom (input) by any
   *      other layer of the Net.)
   * @param propagate_down see Layer::Backward.
   * @param bottom input Blob vector (length 2)
   *   -# @f$ (N \times C \times H \times W) @f$
   *      the predictions @f$\hat{y}@f$; Backward fills their diff with
   *      gradients @f$
   *        \frac{\partial E}{\partial \hat{y}} =
   *            \frac{1}{n} \sum\limits_{n=1}^N (\hat{y}_n - y_n)
   *      @f$ if propagate_down[0]
   *   -# @f$ (N \times C \times H \times W) @f$
   *      the targets @f$y@f$; Backward fills their diff with gradients
   *      @f$ \frac{\partial E}{\partial y} =
   *          \frac{1}{n} \sum\limits_{n=1}^N (y_n - \hat{y}_n)
   *      @f$ if propagate_down[1]
   */
  // Backward pass
  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);
  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);

  Blob<Dtype> diff_;  // stores \hat{y} - y (the difference between the predictions and the labels)
};

}  // namespace caffe

#endif  // CAFFE_EUCLIDEAN_LOSS_LAYER_HPP_

Now let us look at the implementation of the functions declared above (only the CPU version; the GPU version is similar, see euclidean_loss_layer.cu for details):

#include <vector>

#include "caffe/layers/euclidean_loss_layer.hpp"
#include "caffe/util/math_functions.hpp"

namespace caffe {

template <typename Dtype>
void EuclideanLossLayer<Dtype>::Reshape(
  const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
  LossLayer<Dtype>::Reshape(bottom, top); // call LossLayer's Reshape (it checks that bottom[0] and bottom[1] have the same num)
  CHECK_EQ(bottom[0]->count(1), bottom[1]->count(1))
      << "Inputs must have the same dimension."; //即bottom[0]和bottom[1]的channel×height×width需要相同
  diff_.ReshapeLike(*bottom[0]);
}

template <typename Dtype>
void EuclideanLossLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) {
  int count = bottom[0]->count();
  caffe_sub(
      count,
      bottom[0]->cpu_data(),
      bottom[1]->cpu_data(),
      diff_.mutable_cpu_data()); // caffe_sub performs element-wise subtraction
  Dtype dot = caffe_cpu_dot(count, diff_.cpu_data(), diff_.cpu_data()); // caffe_cpu_dot computes the inner product
  Dtype loss = dot / bottom[0]->num() / Dtype(2); // compute the L2 loss
  top[0]->mutable_cpu_data()[0] = loss;
}

template <typename Dtype>
void EuclideanLossLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {
  for (int i = 0; i < 2; ++i) {
    if (propagate_down[i]) {
      const Dtype sign = (i == 0) ? 1 : -1;  // this is how both input blobs can be backpropagated to (whichever one is chosen, the backward result is the same)
      const Dtype alpha = sign * top[0]->cpu_diff()[0] / bottom[i]->num(); // top[0]->cpu_diff()[0] actually stores this layer's loss weight (loss_weight)
      // caffe_cpu_axpby computes b = alpha*a + beta*b (a and b are vectors)
      caffe_cpu_axpby(
          bottom[i]->count(),              // count
          alpha,                              // alpha
          diff_.cpu_data(),                   // a
          Dtype(0),                           // beta
          bottom[i]->mutable_cpu_diff());  // b
    }
  }
}

#ifdef CPU_ONLY
STUB_GPU(EuclideanLossLayer);
#endif

INSTANTIATE_CLASS(EuclideanLossLayer);
REGISTER_LAYER_CLASS(EuclideanLoss);

}  // namespace caffe

From the Reshape() function above we can see that the layer's two input blobs must have the same dimensions.

The Forward_cpu() function implements the loss function given above (since the loss is a scalar, the output blob top[0] holds a single element; this is arranged in the LossLayer class's Reshape function).
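For context, here is a minimal prototxt sketch of how this layer is usually hooked up (the blob names "pred" and "label" are made up for illustration):

layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "pred"     # predictions, e.g. the output of an InnerProduct layer
  bottom: "label"    # targets
  top: "loss"
  loss_weight: 1     # optional (1 is the default for loss layers); this value ends up in top[0]->cpu_diff()[0] at setup, see SetLossWeights() below
}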

The Backward_cpu() function computes the gradients given by the following formulas:

If propagate_down[0] = true: \frac{\partial E}{\partial \hat{y}_n} = \frac{1}{N}\left(\hat{y}_n - y_n\right)

If propagate_down[1] = true: \frac{\partial E}{\partial y_n} = \frac{1}{N}\left(y_n - \hat{y}_n\right)

Note: when \hat{y}_n and y_n are vectors, these are vector derivatives (the gradient of the squared norm with respect to each vector).
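To make the link between these formulas and the caffe_cpu_axpby call in Backward_cpu() explicit, here is a rough element-wise sketch (an illustration, not the actual Caffe code) of what that call computes:

// Sketch of the element-wise effect of
//   caffe_cpu_axpby(count, alpha, diff_.cpu_data(), Dtype(0), bottom[i]->mutable_cpu_diff());
// Because beta = 0, any previous contents of bottom_diff are simply overwritten.
template <typename Dtype>
void euclidean_backward_sketch(int count, Dtype alpha,
                               const Dtype* diff, Dtype* bottom_diff) {
  for (int j = 0; j < count; ++j) {
    bottom_diff[j] = alpha * diff[j];  // alpha = sign * loss_weight / N
  }
}

For bottom[0] the sign is +1, giving (loss_weight/N)(\hat{y} - y); for bottom[1] the sign is -1, giving (loss_weight/N)(y - \hat{y}), which matches the two formulas above.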

Another point worth noting: saying that both input blobs can be backpropagated to really means that the layer allows the predictions and the labels to be supplied in either order, i.e., y_n may be the prediction vector and \hat{y}_n the label vector, but in that case propagate_down[1] = true or force_backward must be set.
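As a sketch of how this might be specified (the field values here are illustrative), propagate_down can be given once per bottom in the layer definition, so a net whose bottoms are swapped could backpropagate only into its second bottom:

layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "label"          # intentionally used as the first bottom here
  bottom: "pred"
  top: "loss"
  propagate_down: false    # no gradient for bottom[0] ("label")
  propagate_down: true     # gradient flows into bottom[1] ("pred")
}

Alternatively, force_backward: true can be set at the net level.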

One more thing to note: the output blob (top) of a loss layer's Forward/Backward is a scalar holding a single value, yet if you look at the base class LossLayer that all these loss layers inherit from, you will notice something odd (the following code is taken from loss_layer.cpp):

template <typename Dtype>
void LossLayer<Dtype>::Reshape(
    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
  CHECK_EQ(bottom[0]->num(), bottom[1]->num())
      << "The data and label should have the same number.";
  vector<int> loss_shape(0);  // Loss layers output a scalar; 0 axes.
  top[0]->Reshape(loss_shape);
}

That is, the code above gives the top[0] blob a shape with zero axes, yet the loss layers that inherit from this class (so their top[0] is shaped exactly by the Reshape above) still index element 0 of it, e.g. the statement top[0]->mutable_cpu_data()[0] = loss; in the EuclideanLoss layer's Forward_cpu. Ordinarily such indexing would be out of range, but in Caffe it does not fail; the reason lies in how Blob allocates its memory, as we will see below.

For example, run the following code:

#include <vector>
#include <iostream>
#include <caffe/blob.hpp>
#include <caffe/util/io.hpp>
using namespace caffe;
using namespace std;
int main()
{
   Blob<float> a;
   cout<<"Size:="<<a.shape_string()<<endl;
   a.Reshape(1,2,3,4);
   cout<<"Size:="<<a.shape_string()<<endl;
   float* p=a.mutable_cpu_data();
   for(int i=0;i<a.count();i++)
       p[i]=i;
   for(int u=0;u<a.num();u++)
      for(int v=0;v<a.channels();v++)
         for(int w=0;w<a.height();w++)
            for(int x=0;x<a.width();x++)
               cout<<"a["<<u<<"]["<<v<<"]["<<w<<"]["<<x<<"]="<<a.data_at(u,v,w,x)<<endl;

   //mimic the loss layer: reshape top[0] to a shape with 0 axes
   vector<int> shape(0);
   vector<Blob<float>*> top;
   top.push_back(&a);
   top[0]->Reshape(shape);
   top[0]->mutable_cpu_data()[0] = 10;
   cout << "loss:" << top[0]->cpu_data()[0] << endl;
   return 0;
}

The program's output is shown below: top[0]->cpu_data()[0] is indeed the 10 we assigned, and the size of top[0] is 1.

Size:=(0)
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0827 14:35:05.329982  6920 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
Size:=1 2 3 4 (24)
a[0][0][0][0]=0
a[0][0][0][1]=1
a[0][0][0][2]=2
a[0][0][0][3]=3
a[0][0][1][0]=4
a[0][0][1][1]=5
a[0][0][1][2]=6
a[0][0][1][3]=7
a[0][0][2][0]=8
a[0][0][2][1]=9
a[0][0][2][2]=10
a[0][0][2][3]=11
a[0][1][0][0]=12
a[0][1][0][1]=13
a[0][1][0][2]=14
a[0][1][0][3]=15
a[0][1][1][0]=16
a[0][1][1][1]=17
a[0][1][1][2]=18
a[0][1][1][3]=19
a[0][1][2][0]=20
a[0][1][2][1]=21
a[0][1][2][2]=22
a[0][1][2][3]=23
Size:=(1)
loss:10

If you comment out the line top[0]->mutable_cpu_data()[0] = 10;, the program prints loss:0; and if you change

cout << "loss:" << top[0]->cpu_data()[0] << endl;

to

cout << "loss:" << top[0]->cpu_data()[1] << endl;

the output is loss:1.

The values read back are simply whatever was already in the blob's memory: index 1 still holds the 1 written by the earlier p[i]=i loop, because shrinking the shape neither reallocates nor clears the underlying memory, and reading past the logical size is not detected as an error. By contrast, top[0]->cpu_diff()[n] prints 0 for any n (try it if you like), since the diff memory has never been written and Caffe zero-fills newly allocated SyncedMemory. In any case the size of top[0] really is 1; out-of-range indexing just goes unreported, which is worth keeping in mind. As for why the size is 1, we need to look at the Reshape() function in blob.cpp:

template <typename Dtype>
void Blob<Dtype>::Reshape(const vector<int>& shape) {
  CHECK_LE(shape.size(), kMaxBlobAxes);
  count_ = 1; // this is the key: count_ starts at 1, so with a 0-axis shape the blob's count stays 1
  shape_.resize(shape.size());
  if (!shape_data_ || shape_data_->size() < shape.size() * sizeof(int)) {
    shape_data_.reset(new SyncedMemory(shape.size() * sizeof(int)));
  }
  int* shape_data = static_cast<int*>(shape_data_->mutable_cpu_data());
  for (int i = 0; i < shape.size(); ++i) {
    CHECK_GE(shape[i], 0);
    if (count_ != 0) {
      CHECK_LE(shape[i], INT_MAX / count_) << "blob size exceeds INT_MAX";
    }
    count_ *= shape[i];
    shape_[i] = shape[i];
    shape_data[i] = shape[i];
  }
  if (count_ > capacity_) {
    capacity_ = count_;
    data_.reset(new SyncedMemory(capacity_ * sizeof(Dtype)));
    diff_.reset(new SyncedMemory(capacity_ * sizeof(Dtype)));
  }
}

So why does top[0]->cpu_diff()[n] read 0 for any n in the test above, yet the loss layer still multiplies by this value? For example, recall this line from the EuclideanLoss layer's Backward_cpu():

const Dtype alpha = sign * top[0]->cpu_diff()[0] / bottom[i]->num(); // top[0]->cpu_diff()[0] actually stores this layer's loss weight (loss_weight)

alpha is scaled by top[0]->cpu_diff()[0] because, when a loss layer is set up, its top[0]->cpu_diff()[0] is assigned the loss weight (loss_weight). Only loss layers receive this treatment; other layers' top diffs are left untouched. To trace where this happens, first look at the SetUp() function in layer.hpp:

 /**
   * @brief Implements common layer setup functionality.
   *
   * @param bottom the preshaped input blobs
   * @param top
   *     the allocated but unshaped output blobs, to be shaped by Reshape
   *
   * Checks that the number of bottom and top blobs is correct.
   * Calls LayerSetUp to do special layer setup for individual layer types,
   * followed by Reshape to set up sizes of top blobs and internal buffers.
   * Sets up the loss weight multiplier blobs for any non-zero loss weights.
   * This method may not be overridden.
   */
  void SetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
    InitMutex();
    CheckBlobCounts(bottom, top);
    LayerSetUp(bottom, top);
    Reshape(bottom, top);
    SetLossWeights(top);  // this is where the loss weights are set
  }

Then look at the SetLossWeights() function:

 /**
   * Called by SetUp to initialize the weights associated with any top blobs in
   * the loss function. Store non-zero loss weights in the diff blob.
   */
  inline void SetLossWeights(const vector<Blob<Dtype>*>& top) {
    const int num_loss_weights = layer_param_.loss_weight_size();
    if (num_loss_weights) {
      CHECK_EQ(top.size(), num_loss_weights) << "loss_weight must be "
          "unspecified or specified once per top blob.";
      for (int top_id = 0; top_id < top.size(); ++top_id) {
        const Dtype loss_weight = layer_param_.loss_weight(top_id);
        if (loss_weight == Dtype(0)) { continue; }
        this->set_loss(top_id, loss_weight);
        const int count = top[top_id]->count();
        // the key lines below: caffe_set fills the top blob's diff with the loss weight
        Dtype* loss_multiplier = top[top_id]->mutable_cpu_diff();
        caffe_set(count, loss_weight, loss_multiplier);
      }
    }
  }
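Putting the pieces together: SetLossWeights() stores the loss weight \lambda (loss_weight) in top[0]->cpu_diff()[0], so for the layer loss E defined at the top of this post, the gradient that Backward_cpu() actually writes into bottom[0]->diff is

\lambda \cdot \frac{\partial E}{\partial \hat{y}_n} = \frac{\lambda}{N}\left(\hat{y}_n - y_n\right)

which is exactly alpha * diff_ with alpha = sign * top[0]->cpu_diff()[0] / bottom[i]->num() and sign = +1.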

With that, everything should be clear.

If you wish to repost this article, please credit this blog!

Reposted from blog.csdn.net/qq_21368481/article/details/81950538