Preface

This article works through the formulas in the paper Notes on Convolutional Neural Networks, following the derivation at http://blog.csdn.net/lu597203933/article/details/46575871 and borrowing code from https://github.com/BigPeng/JavaCNN.

CNN

Each layer of a CNN outputs multiple feature maps, and each feature map consists of multiple neurons: if a feature map has shape m*n, it contains m*n neurons.

Convolution Layer

Convolution Computation

Let the current layer l be a convolution layer and the next layer l+1 a subsampling layer.
The output feature map j of convolution layer l is:

X_j^l=f\left(\sum_{i\in M_j}X_i^{l-1}\ast k_{ij}^l +b_j^l\right)

where \ast is the convolution operator and M_j is the set of input maps feeding map j.
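To make the formula concrete, here is a minimal, self-contained sketch (the class and method names are illustrative, not JavaCNN code). Each input map is convolved in valid mode with its kernel (kernel rotated 180 degrees, as in the convnValid code later in this article), the results and the bias are summed, and the sigmoid f defined below is applied:

import java.util.Arrays;

/**
 * Illustrative sketch of the convolution-layer forward pass; not JavaCNN code.
 */
public class ConvForwardSketch {
    // valid-mode convolution: the kernel is rotated 180 degrees, then slid over the input
    static double[][] convnValid(double[][] x, double[][] k) {
        int km = k.length, kn = k[0].length;
        int m = x.length - km + 1, n = x[0].length - kn + 1;
        double[][] out = new double[m][n];
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                for (int a = 0; a < km; a++)
                    for (int b = 0; b < kn; b++)
                        out[i][j] += x[i + a][j + b] * k[km - 1 - a][kn - 1 - b];
        return out;
    }

    // X_j^l = f( sum_i X_i^{l-1} * k_ij^l + b_j^l ), with f the sigmoid
    static double[][] convForward(double[][][] inputMaps, double[][][] kernels, double bias) {
        double[][] sum = convnValid(inputMaps[0], kernels[0]);
        for (int idx = 1; idx < inputMaps.length; idx++) {
            double[][] c = convnValid(inputMaps[idx], kernels[idx]);
            for (int i = 0; i < sum.length; i++)
                for (int j = 0; j < sum[0].length; j++)
                    sum[i][j] += c[i][j];
        }
        for (int i = 0; i < sum.length; i++)
            for (int j = 0; j < sum[0].length; j++)
                sum[i][j] = 1.0 / (1.0 + Math.exp(-(sum[i][j] + bias)));
        return sum;
    }

    public static void main(String[] args) {
        double[][] x = {{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}, {13, 14, 15, 16}};
        double[][] k = {{0.3, 0.4}, {0.5, 0.6}};
        System.out.println(Arrays.deepToString(convForward(new double[][][]{x}, new double[][][]{k}, 0.0)));
    }
}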

Residual Computation

Let the current layer l be a convolution layer and the next layer l+1 a subsampling layer.
The residual of the j-th feature map of layer l is:

\delta_j^l = \beta_j^{l+1}\left(f^{'}(\mu_j^l)\circ up(\delta_j^{l+1})\right) \tag{1}

where

f(x)=\frac{1}{1+e^{-x}} \tag{2}

and its derivative is

f^{'}(x)=f(x)*(1-f(x)) \tag{3}
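A quick numerical sanity check of formula (3) (a throwaway class, names illustrative): the analytic derivative f(x)(1-f(x)) should match a central finite difference of f:

public class SigmoidCheck {
    static double f(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static void main(String[] args) {
        double x = 1.0, h = 1e-6;
        double analytic = f(x) * (1 - f(x));              // formula (3)
        double numeric = (f(x + h) - f(x - h)) / (2 * h); // finite difference
        System.out.println(analytic + " vs " + numeric);  // both about 0.19661
    }
}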

Before the derivation, a quick look at the subsample step, which is simple. Assume the sampling layer averages the convolution layer's output. If the convolution layer's output feature map (f(\mu_j^l)) is

[[1, 2, 3, 4],
 [5, 6, 7, 8],
 [9, 10, 11, 12],
 [13, 14, 15, 16]]

then the result of the subsample is:

[[3.5, 5.5],
 [11.5, 13.5]]

(both matrices as printed by the code below). The subsample procedure:

import java.util.Arrays;

/**
 * Created by keliz on 7/7/16.
 */
public class test
{
    /**
     * Size of a convolution kernel or a sampling scale; width and height may differ.
     */
    public static class Size
    {
        public final int x;
        public final int y;

        public Size(int x, int y)
        {
            this.x = x;
            this.y = y;
        }
    }

    /**
     * Shrink a matrix by averaging each scale.x * scale.y block.
     *
     * @param matrix
     * @param scale
     * @return
     */
    public static double[][] scaleMatrix(final double[][] matrix, final Size scale)
    {
        int m = matrix.length;
        int n = matrix[0].length;
        final int sm = m / scale.x;
        final int sn = n / scale.y;
        final double[][] outMatrix = new double[sm][sn];
        if (sm * scale.x != m || sn * scale.y != n)
            throw new RuntimeException("scale does not evenly divide matrix");
        final int size = scale.x * scale.y;
        for (int i = 0; i < sm; i++)
        {
            for (int j = 0; j < sn; j++)
            {
                double sum = 0.0;
                for (int si = i * scale.x; si < (i + 1) * scale.x; si++)
                {
                    for (int sj = j * scale.y; sj < (j + 1) * scale.y; sj++)
                    {
                        sum += matrix[si][sj];
                    }
                }
                outMatrix[i][j] = sum / size;
            }
        }
        return outMatrix;
    }

    public static void main(String args[])
    {
        int row = 4;
        int column = 4;
        int k = 0;
        double[][] matrix = new double[row][column];
        Size s = new Size(2, 2);
        for (int i = 0; i < row; ++i)
            for (int j = 0; j < column; ++j)
                matrix[i][j] = ++k;
        double[][] result = scaleMatrix(matrix, s);
        System.out.println(Arrays.deepToString(matrix).replaceAll("],", "]," + System.getProperty("line.separator")));
        System.out.println(Arrays.deepToString(result).replaceAll("],", "]," + System.getProperty("line.separator")));
    }
}

Here 3.5=(1+2+5+6)/(2*2) and 5.5=(3+4+7+8)/(2*2).
It follows that the nodes (neurons) of the convolution layer's output feature map with values 1, 2, 5, and 6 are connected to the subsample-layer node with value 3.5, and the nodes with values 3, 4, 7, and 8 are connected to the subsample-layer node with value 5.5. From the conclusion derived in the BP-algorithm chapter:

The residual of the j-th node of the convolution layer equals the sum, over all subsampling-layer nodes connected to it, of each connection weight times the corresponding residual, multiplied by the derivative at that convolution-layer node.

This statement is easiest to follow with the formula at hand.
Suppose the residual \delta_j^{l+1} of the subsampling layer corresponding to this convolution layer is

[[0.5, 0.6],
 [0.7, 0.8]]

(the matrix used in the kronecker code below).

By formula (1), the residual of node 1 (whose subsample node carries residual 0.5) is

\beta_j^{l+1}(f^{'}(1) * \delta_j^{l+1}(3.5)) \tag{4}

Because this computes the residual of a single neuron, \circ is replaced by *; the operator \circ denotes the element-wise (Hadamard) product of matrices. Node (neuron) 1 corresponds to the subsample-layer node with value 3.5, so by formula (3) the residual of node 1 is

f(1)*(1-f(1))* \delta_j^{l+1}(3.5) = \frac{1}{1+e^{-1}}*\frac{e^{-1}}{1+e^{-1}}*0.5
Likewise, for node 2 the residual is

f(2)*(1-f(2))* \delta_j^{l+1}(3.5)

for node 5,

f(5)*(1-f(5))* \delta_j^{l+1}(3.5)

and for node 6,

f(6)*(1-f(6))* \delta_j^{l+1}(3.5)

Because the subsample-layer residual corresponding to node 3 is 0.6, the residual of node 3 is

f(3)*(1-f(3))* \delta_j^{l+1}(5.5) = \frac{1}{1+e^{-3}}*\frac{e^{-3}}{1+e^{-3}}*0.6
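These per-node residuals are easy to verify numerically (a throwaway class, names illustrative; \beta is left out, as this article does by default):

public class ResidualCheck {
    static double f(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    static double fPrime(double x) { return f(x) * (1 - f(x)); } // formula (3)

    public static void main(String[] args) {
        // node 1 maps to the subsample node with residual 0.5
        System.out.println(fPrime(1) * 0.5); // about 0.0983
        // node 3 maps to the subsample node with residual 0.6
        System.out.println(fPrime(3) * 0.6); // about 0.0271
    }
}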
Formula (1) implements this computation with a trick: take the Kronecker product of the subsampling layer's residual \delta_j^{l+1} with an all-ones matrix whose shape is the sampling scale (here 2*2, which also happens to be the shape of \delta_j^{l+1}):

[[0.5, 0.6], [0.7, 0.8]] ⊗ [[1, 1], [1, 1]]

The result is

[[0.5, 0.5, 0.6, 0.6],
 [0.5, 0.5, 0.6, 0.6],
 [0.7, 0.7, 0.8, 0.8],
 [0.7, 0.7, 0.8, 0.8]]

The computation:

import java.util.Arrays;

/**
 * Created by keliz on 7/7/16.
 */
public class kronecker {
    /**
     * Size of a convolution kernel or a sampling scale; width and height may differ.
     */
    public static class Size {
        public final int x;
        public final int y;

        public Size(int x, int y) {
            this.x = x;
            this.y = y;
        }
    }

    /**
     * Kronecker product: expand the matrix by repeating each element over a scale.x * scale.y block.
     *
     * @param matrix
     * @param scale
     * @return
     */
    public static double[][] kronecker(final double[][] matrix, final Size scale) {
        final int m = matrix.length;
        int n = matrix[0].length;
        final double[][] outMatrix = new double[m * scale.x][n * scale.y];
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) {
                for (int ki = i * scale.x; ki < (i + 1) * scale.x; ki++) {
                    for (int kj = j * scale.y; kj < (j + 1) * scale.y; kj++) {
                        outMatrix[ki][kj] = matrix[i][j];
                    }
                }
            }
        }
        return outMatrix;
    }

    public static void main(String args[]) {
        int row = 2;
        int column = 2;
        double k = 0.5;
        double[][] matrix = new double[row][column];
        Size s = new Size(2, 2);
        for (int i = 0; i < row; ++i)
            for (int j = 0; j < column; ++j) {
                matrix[i][j] = k;
                k += 0.1;
            }
        System.out.println(Arrays.deepToString(matrix).replaceAll("],", "]," + System.getProperty("line.separator")));
        double[][] result = kronecker(matrix, s);
        System.out.println(Arrays.deepToString(result).replaceAll("],", "]," + System.getProperty("line.separator")));
    }
}

Multiplying the matrix f^{'}(\mu_j^l) element-wise with this Kronecker-product result, and then multiplying by \beta_j^{l+1}, yields the convolution layer's residual.
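Putting the pieces together, here is a minimal sketch of formula (1) under the stated assumptions (ConvDeltaSketch, up, and convDelta are illustrative names; up() repeats each element over a scale block, like the kronecker code above):

public class ConvDeltaSketch {
    static double f(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // up(): Kronecker product with an all-ones sx*sy matrix
    static double[][] up(double[][] delta, int sx, int sy) {
        double[][] out = new double[delta.length * sx][delta[0].length * sy];
        for (int i = 0; i < out.length; i++)
            for (int j = 0; j < out[0].length; j++)
                out[i][j] = delta[i / sx][j / sy];
        return out;
    }

    // formula (1): delta_j^l = beta * ( f'(u) o up(delta_j^{l+1}) ), with f'(u) = f(u)(1-f(u))
    static double[][] convDelta(double[][] u, double[][] nextDelta, int sx, int sy, double beta) {
        double[][] upd = up(nextDelta, sx, sy);
        double[][] out = new double[u.length][u[0].length];
        for (int i = 0; i < out.length; i++)
            for (int j = 0; j < out[0].length; j++)
                out[i][j] = beta * f(u[i][j]) * (1 - f(u[i][j])) * upd[i][j];
        return out;
    }

    public static void main(String[] args) {
        double[][] u = {{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}, {13, 14, 15, 16}};
        double[][] nextDelta = {{0.5, 0.6}, {0.7, 0.8}};
        // entry (0,0) reproduces f'(1) * 0.5, about 0.0983
        System.out.println(convDelta(u, nextDelta, 2, 2, 1.0)[0][0]);
    }
}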

Subsampling Layer

Sampling Computation

Assume, as before, that the sampling layer averages the convolution layer's output; the 4*4 feature map (f(\mu_j^l)) shown earlier subsamples to [[3.5, 5.5], [11.5, 13.5]].
The formula is:

X_j^l=f(\beta_j^l down(X_j^{l-1})+b_j^l)

down(X_j^{l-1}) pools the pixel values of X_j^{l-1} over each 2*2 block (the code sums each block and divides by its size, i.e. mean pooling).
The formulas for both the convolution and the subsampling layer contain the \beta and b parameters, but this article leaves them out by default (\beta=1, b=0).
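If \beta and b were kept, the sampling computation would look like this minimal sketch (SubForwardSketch and subForward are illustrative names; the argument already holds down(X_j^{l-1}), e.g. the output of scaleMatrix):

public class SubForwardSketch {
    // X_j^l = f( beta * down(X_j^{l-1}) + b ), with f the sigmoid
    static double[][] subForward(double[][] downsampled, double beta, double b) {
        double[][] out = new double[downsampled.length][downsampled[0].length];
        for (int i = 0; i < out.length; i++)
            for (int j = 0; j < out[0].length; j++)
                out[i][j] = 1.0 / (1.0 + Math.exp(-(beta * downsampled[i][j] + b)));
        return out;
    }

    public static void main(String[] args) {
        double[][] down = {{3.5, 5.5}, {11.5, 13.5}};
        // beta = 1 and b = 0, the defaults this article assumes
        System.out.println(java.util.Arrays.deepToString(subForward(down, 1.0, 0.0)));
    }
}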
The subsample procedure itself is the same scaleMatrix code listed in the convolution-layer section; again 3.5=(1+2+5+6)/(2*2) and 5.5=(3+4+7+8)/(2*2).

Residual Computation

Let the current layer l be a subsampling layer and the next layer l+1 a convolution layer.
The residual of the j-th feature map of layer l is:

\delta_j^l = f^{'}(\mu_j^l)\circ conv2(\delta_j^{l+1}, rot180(k_j^{l+1}), full) \tag{5}

Let the subsampling layer's output feature map (f(\mu_j^l)) be

[[1, 2, 3, 4],
 [5, 6, 7, 8],
 [9, 10, 11, 12],
 [13, 14, 15, 16]]

and the corresponding convolution layer's kernel (k_j^{l+1}) be

[[0.3, 0.4],
 [0.5, 0.6]]

Then the convolution layer's output feature map (as printed by the code below) is

[[5.4, 7.2, 9.0],
 [12.6, 14.4, 16.2],
 [19.8, 21.6, 23.4]]

Assume the convolution layer's delta (\delta_j^{l+1}) is

[[0.1, 0.2, 0.3],
 [0.4, 0.5, 0.6],
 [0.7, 0.8, 0.9]]

The delta corresponds one-to-one with the feature map (they have the same shape).
The computation:

import java.util.Arrays;

/**
 * Created by keliz on 7/7/16.
 */
public class conv {
    /**
     * Copy a matrix.
     *
     * @param matrix
     * @return
     */
    public static double[][] cloneMatrix(final double[][] matrix) {
        final int m = matrix.length;
        int n = matrix[0].length;
        final double[][] outMatrix = new double[m][n];
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) {
                outMatrix[i][j] = matrix[i][j];
            }
        }
        return outMatrix;
    }

    /**
     * Rotate a matrix by 180 degrees. Works on a copy; the original matrix is not modified.
     *
     * @param matrix
     */
    public static double[][] rot180(double[][] matrix) {
        matrix = cloneMatrix(matrix);
        int m = matrix.length;
        int n = matrix[0].length;
        // mirror the columns
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n / 2; j++) {
                double tmp = matrix[i][j];
                matrix[i][j] = matrix[i][n - 1 - j];
                matrix[i][n - 1 - j] = tmp;
            }
        }
        // mirror the rows
        for (int j = 0; j < n; j++) {
            for (int i = 0; i < m / 2; i++) {
                double tmp = matrix[i][j];
                matrix[i][j] = matrix[m - 1 - i][j];
                matrix[m - 1 - i][j] = tmp;
            }
        }
        return matrix;
    }

    /**
     * Valid-mode convolution.
     *
     * @param matrix
     * @param kernel
     * @return
     */
    public static double[][] convnValid(final double[][] matrix, double[][] kernel) {
        kernel = rot180(kernel);
        int m = matrix.length;
        int n = matrix[0].length;
        final int km = kernel.length;
        final int kn = kernel[0].length;
        // number of columns to convolve
        int kns = n - kn + 1;
        // number of rows to convolve
        final int kms = m - km + 1;
        // result matrix
        final double[][] outMatrix = new double[kms][kns];
        for (int i = 0; i < kms; i++) {
            for (int j = 0; j < kns; j++) {
                double sum = 0.0;
                for (int ki = 0; ki < km; ki++) {
                    for (int kj = 0; kj < kn; kj++)
                        sum += matrix[i + ki][j + kj] * kernel[ki][kj];
                }
                outMatrix[i][j] = sum;
            }
        }
        return outMatrix;
    }

    public static void main(String args[]) {
        int subSampleLayerRow = 4;
        int subSampleLayerColumn = 4;
        int subSampleLayerK = 0;
        double[][] subSampleLayer = new double[subSampleLayerRow][subSampleLayerColumn];
        for (int i = 0; i < subSampleLayerRow; ++i)
            for (int j = 0; j < subSampleLayerColumn; ++j)
                subSampleLayer[i][j] = ++subSampleLayerK;
        int kernelRow = 2;
        int kernelColumn = 2;
        double kernelK = 0.3;
        double[][] kernelMatrix = new double[kernelRow][kernelColumn];
        for (int i = 0; i < kernelRow; ++i)
            for (int j = 0; j < kernelColumn; ++j) {
                kernelMatrix[i][j] = kernelK;
                kernelK += 0.1;
            }
        System.out.println(Arrays.deepToString(kernelMatrix).replaceAll("],", "]," + System.getProperty("line.separator")));
        double[][] result = convnValid(subSampleLayer, kernelMatrix);
        System.out.println(Arrays.deepToString(result).replaceAll("],", "]," + System.getProperty("line.separator")));
    }
}

Note: when computing the convolution, the kernel must first be rotated 180 degrees.
From the computation we can see that neuron (0,0) of the convolution layer's output feature map, with value 5.4, is generated from the subsampling layer's output entries (0,0) with value 1, (0,1) with value 2, (1,0) with value 5, and (1,1) with value 6; that is, entry (0,0) of the subsampling layer's output is connected to the convolution layer's \delta_{00} and to no other node of the convolution layer's \delta.

Similarly, neuron (0,1) of the convolution layer's output feature map, with value 7.2, is generated from the subsampling layer's entries (0,1) with value 2, (0,2) with value 3, (1,1) with value 6, and (1,2) with value 7, so entry (0,1) of the subsampling layer's output is connected to the convolution layer's \delta_{01}. But from the description above we also know that \delta_{00} is generated from entry (0,1) of the subsampling layer's output, so entry (0,1) is connected to both \delta_{00} and \delta_{01} of the convolution layer, and to no other node of \delta.

Again, neuron (1,0) of the convolution layer's output feature map, with value 12.6, is generated from the subsampling layer's entries (1,0) with value 5, (1,1) with value 6, (2,0) with value 9, and (2,1) with value 10.
Neuron (1,1), with value 14.4, is generated from entries (1,1) with value 6, (1,2) with value 7, (2,1) with value 10, and (2,2) with value 11.
Turning this around, these cases show that entry (1,1) of the subsampling layer's output, with value 6, is connected to the convolution layer's \delta_{00}, \delta_{01}, \delta_{10}, and \delta_{11}.
From the BP-algorithm derivation in the previous chapter:

The residual of the j-th node of the subsampling layer equals the sum, over all convolution-layer nodes connected to it, of each connection weight times the corresponding residual, multiplied by the derivative at that subsampling-layer node.

Since subsampling-layer node (0,0), with value 1, is connected only to the convolution layer's \delta_{00}, its residual is

\delta(0,0)=f^{'}(1)*kernel(1,1)*\delta_{00} \tag{6}

\delta(0,0)=f^{'}(1)*0.6*0.1 \tag{7}

Subsampling-layer node (0,1), with value 2, is connected to the convolution layer's \delta_{00} and \delta_{01}, so its residual is

\delta(0,1)=f^{'}(2)*(kernel(1,0)*\delta_{00}+kernel(1,1)*\delta_{01}) \tag{8}

In the forward pass, the convolution output behind \delta_{00} is computed from node (0,1) times kernel(1,0), together with other nodes, and likewise the output behind \delta_{01} is computed from node (0,1) times kernel(1,1); that is why formula (8) pairs kernel(1,0) with \delta_{00} and kernel(1,1) with \delta_{01}. Working through it once by hand makes the process clear.

Subsampling-layer node (1,1), with value 6, is connected to the convolution layer's \delta_{00}, \delta_{01}, \delta_{10}, and \delta_{11}, so its residual is

\delta(1,1)=f^{'}(6)*(kernel(0,0)*\delta_{00}+kernel(0,1)*\delta_{01}+kernel(1,0)*\delta_{10}+kernel(1,1)*\delta_{11}) \tag{9}

Continuing in the same way gives all the residuals of the subsampling layer:
\delta(0,0)=f^{'}(1)*kernel(1,1)*\delta_{00} \tag{10}
\delta(0,1)=f^{'}(2)*(kernel(1,0)*\delta_{00}+kernel(1,1)*\delta_{01}) \tag{11}
\delta(0,2)=f^{'}(3)*(kernel(1,0)*\delta_{01}+kernel(1,1)*\delta_{02}) \tag{12}
\delta(0,3)=f^{'}(4)*(kernel(1,0)*\delta_{02}) \tag{13}
\delta(1,0)=f^{'}(5)*(kernel(1,1)*\delta_{10}+kernel(0,1)*\delta_{00}) \tag{14}
\delta(1,1)=f^{'}(6)*(kernel(0,0)*\delta_{00}+kernel(0,1)*\delta_{01}+kernel(1,0)*\delta_{10}+kernel(1,1)*\delta_{11}) \tag{15}
\delta(1,2)=f^{'}(7)*(kernel(0,0)*\delta_{01}+kernel(0,1)*\delta_{02}+kernel(1,0)*\delta_{11}+kernel(1,1)*\delta_{12}) \tag{16}
\delta(1,3)=f^{'}(8)*(kernel(0,0)*\delta_{02}+kernel(1,0)*\delta_{12}) \tag{17}
\delta(2,0)=f^{'}(9)*(kernel(0,1)*\delta_{10}+kernel(1,1)*\delta_{20}) \tag{18}
\delta(2,1)=f^{'}(10)*(kernel(0,0)*\delta_{10}+kernel(0,1)*\delta_{11}+kernel(1,0)*\delta_{20}+kernel(1,1)*\delta_{21}) \tag{19}
\delta(2,2)=f^{'}(11)*(kernel(0,0)*\delta_{11}+kernel(0,1)*\delta_{12}+kernel(1,0)*\delta_{21}+kernel(1,1)*\delta_{22}) \tag{20}
\delta(2,3)=f^{'}(12)*(kernel(0,0)*\delta_{12}+kernel(1,0)*\delta_{22}) \tag{21}
\delta(3,0)=f^{'}(13)*(kernel(0,1)*\delta_{20}) \tag{22}
\delta(3,1)=f^{'}(14)*(kernel(0,0)*\delta_{20}+kernel(0,1)*\delta_{21}) \tag{23}
\delta(3,2)=f^{'}(15)*(kernel(0,0)*\delta_{21}+kernel(0,1)*\delta_{22}) \tag{24}
\delta(3,3)=f^{'}(16)*(kernel(0,0)*\delta_{22}) \tag{25}

This whole residual computation is what formula (5) describes. As with the convolution layer's residual, there is a small trick: convolve the convolution layer's residual matrix in full mode with the kernel rotated by 180 degrees. In MATLAB, conv2 rotates the kernel 180 degrees internally before convolving, while this residual computation must use the un-rotated kernel; rotating the kernel 180 degrees in advance makes the two rotations cancel. The walk-through below shows why no net rotation takes place.

Let \delta_j^{l+1} have shape (dRowSize, dColumSize)
and k_j^{l+1} have shape (kRowSize, kColumSize).
Full-mode conv extends \delta_j^{l+1} to shape ((dRowSize + 2 * (kRowSize - 1)), (dColumSize + 2 * (kColumSize - 1))); that is, the convolution layer's delta (\delta_j^{l+1})

[[0.1, 0.2, 0.3],
 [0.4, 0.5, 0.6],
 [0.7, 0.8, 0.9]]

is extended with a zero border to

[[0, 0,   0,   0,   0],
 [0, 0.1, 0.2, 0.3, 0],
 [0, 0.4, 0.5, 0.6, 0],
 [0, 0.7, 0.8, 0.9, 0],
 [0, 0,   0,   0,   0]]

Now convolving the 180-degree-rotated kernel with this extended \delta_j^{l+1}, and then multiplying element-wise by the corresponding derivatives f^{'}(\mu_j^l), yields the subsampling layer's residual, in agreement with formulas (10)-(25).

import java.util.Arrays;

/**
 * Created by keliz on 7/7/16.
 */
public class convFull {
    /**
     * Full-mode convolution.
     *
     * @param matrix
     * @param kernel
     * @return
     */
    public static double[][] convnFull(double[][] matrix, final double[][] kernel) {
        int m = matrix.length;
        int n = matrix[0].length;
        final int km = kernel.length;
        final int kn = kernel[0].length;
        // extend the matrix with a zero border of (km - 1) rows and (kn - 1) columns on each side
        final double[][] extendMatrix = new double[m + 2 * (km - 1)][n + 2 * (kn - 1)];
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++)
                extendMatrix[i + km - 1][j + kn - 1] = matrix[i][j];
        }
        return convnValid(extendMatrix, kernel);
    }

    /**
     * Valid-mode convolution.
     *
     * @param matrix
     * @param kernel
     * @return
     */
    public static double[][] convnValid(final double[][] matrix, double[][] kernel) {
        kernel = rot180(kernel);
        int m = matrix.length;
        int n = matrix[0].length;
        final int km = kernel.length;
        final int kn = kernel[0].length;
        // number of columns to convolve
        int kns = n - kn + 1;
        // number of rows to convolve
        final int kms = m - km + 1;
        // result matrix
        final double[][] outMatrix = new double[kms][kns];
        for (int i = 0; i < kms; i++) {
            for (int j = 0; j < kns; j++) {
                double sum = 0.0;
                for (int ki = 0; ki < km; ki++) {
                    for (int kj = 0; kj < kn; kj++)
                        sum += matrix[i + ki][j + kj] * kernel[ki][kj];
                }
                outMatrix[i][j] = sum;
            }
        }
        return outMatrix;
    }

    /**
     * Copy a matrix.
     *
     * @param matrix
     * @return
     */
    public static double[][] cloneMatrix(final double[][] matrix) {
        final int m = matrix.length;
        int n = matrix[0].length;
        final double[][] outMatrix = new double[m][n];
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) {
                outMatrix[i][j] = matrix[i][j];
            }
        }
        return outMatrix;
    }

    /**
     * Rotate a matrix by 180 degrees. Works on a copy; the original matrix is not modified.
     *
     * @param matrix
     */
    public static double[][] rot180(double[][] matrix) {
        matrix = cloneMatrix(matrix);
        int m = matrix.length;
        int n = matrix[0].length;
        // mirror the columns
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n / 2; j++) {
                double tmp = matrix[i][j];
                matrix[i][j] = matrix[i][n - 1 - j];
                matrix[i][n - 1 - j] = tmp;
            }
        }
        // mirror the rows
        for (int j = 0; j < n; j++) {
            for (int i = 0; i < m / 2; i++) {
                double tmp = matrix[i][j];
                matrix[i][j] = matrix[m - 1 - i][j];
                matrix[m - 1 - i][j] = tmp;
            }
        }
        return matrix;
    }

    public static void main(String args[]) {
        int deltaRow = 3;
        int deltaColum = 3;
        double initDelta = 0.1;
        double[][] delta = new double[deltaRow][deltaColum];
        for (int i = 0; i < deltaRow; ++i)
            for (int j = 0; j < deltaColum; ++j) {
                delta[i][j] = initDelta;
                initDelta += 0.1;
            }
        int kernelRow = 2;
        int kernelColum = 2;
        double initKernel = 0.3;
        double[][] kernel = new double[kernelRow][kernelColum];
        for (int i = 0; i < kernelRow; ++i)
            for (int j = 0; j < kernelColum; ++j) {
                kernel[i][j] = initKernel;
                initKernel += 0.1;
            }
        // pre-rotate the kernel as formula (5) requires; convnValid rotates it back
        // internally, so the net effect matches equations (10)-(25)
        double[][] result = convnFull(delta, rot180(kernel));
        System.out.println(Arrays.deepToString(result).replaceAll("],", "]," + System.getProperty("line.separator")));
    }
}
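As a check, multiplying the full-mode result element-wise by f^{'}(\mu_j^l) reproduces equation (10) at entry (0,0). This hypothetical SubDeltaSketch class assumes it is compiled next to the convFull class above so it can reuse its public methods:

public class SubDeltaSketch {
    static double f(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static void main(String[] args) {
        double[][] delta = {{0.1, 0.2, 0.3}, {0.4, 0.5, 0.6}, {0.7, 0.8, 0.9}};
        double[][] kernel = {{0.3, 0.4}, {0.5, 0.6}};
        // formula (5): conv2(delta, rot180(kernel), full)
        double[][] full = convFull.convnFull(delta, convFull.rot180(kernel));
        double[][] subDelta = new double[4][4];
        int u = 0; // the subsampling-layer outputs are 1..16
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++) {
                ++u;
                subDelta[i][j] = f(u) * (1 - f(u)) * full[i][j];
            }
        // equation (10): delta(0,0) = f'(1) * kernel(1,1) * delta_00
        System.out.println(subDelta[0][0] + " vs " + f(1) * (1 - f(1)) * 0.6 * 0.1);
    }
}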

References

JavaCNN (https://github.com/BigPeng/JavaCNN)
CNN formula derivation (http://blog.csdn.net/lu597203933/article/details/46575871)
Notes on Convolutional Neural Networks
Chinese translation of Notes on Convolutional Neural Networks
