The original cost function:

$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \left( y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \right) \tag{1}$$

The cost function with L2 regularization added:

$$J_{regularized} = \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \left( y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \right)}_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l \sum\limits_k \sum\limits_j W_{k,j}^{[l]2}}_\text{L2 regularization cost} \tag{2}$$
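As an illustration, here is a minimal NumPy sketch of how equation (2) could be computed. The function name `compute_cost_with_regularization`, the `parameters` dict layout (`"W1"`, `"b1"`, ..., `"WL"`, `"bL"`), and the argument name `lambd` are assumptions made for this example, not something fixed by the notes above.

```python
import numpy as np

def compute_cost_with_regularization(AL, Y, parameters, lambd):
    """Cross-entropy cost plus the L2 penalty of equation (2).

    AL: output activations a[L], shape (1, m); Y: labels, shape (1, m);
    parameters: dict holding "W1", "b1", ..., "WL", "bL";
    lambd: the regularization strength lambda.
    """
    m = Y.shape[1]

    # Cross-entropy part: -1/m * sum(y*log(a) + (1-y)*log(1-a))
    cross_entropy_cost = -np.sum(Y * np.log(AL) + (1 - Y) * np.log(1 - AL)) / m

    # L2 part: (1/m) * (lambda/2) * sum over all layers of sum(W[l] ** 2)
    num_layers = len(parameters) // 2
    l2_cost = sum(np.sum(np.square(parameters["W" + str(l)]))
                  for l in range(1, num_layers + 1))
    l2_cost *= lambd / (2 * m)

    return cross_entropy_cost + l2_cost
```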
$$\frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{3}$$

This formula can be used to verify whether the gradient has been computed correctly.
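Below is a minimal sketch of using the central-difference formula (3) to check a backprop gradient. `gradient_check`, `J`, `theta`, `grad_analytic`, and `epsilon` are illustrative names; the relative-difference diagnostic at the end is a common convention, where values around 1e-7 or smaller usually suggest the analytic gradient is correct.

```python
import numpy as np

def gradient_check(J, theta, grad_analytic, epsilon=1e-7):
    """Compare an analytic gradient with the central-difference estimate.

    J: cost function taking a 1-D parameter vector and returning a scalar;
    theta: 1-D parameter vector (float dtype);
    grad_analytic: gradient from backprop, same shape as theta.
    """
    grad_approx = np.zeros_like(theta)
    for i in range(theta.size):
        theta_plus, theta_minus = theta.copy(), theta.copy()
        theta_plus[i] += epsilon
        theta_minus[i] -= epsilon
        # Central difference: (J(theta + eps) - J(theta - eps)) / (2 * eps)
        grad_approx[i] = (J(theta_plus) - J(theta_minus)) / (2 * epsilon)

    # Relative difference between the two gradients
    numerator = np.linalg.norm(grad_analytic - grad_approx)
    denominator = np.linalg.norm(grad_analytic) + np.linalg.norm(grad_approx)
    return numerator / denominator
```

For a quick sanity test, with `J = lambda t: t[0] ** 2 + 3 * t[1]`, `theta = np.array([1.0, 2.0])`, and `grad_analytic = np.array([2.0, 3.0])`, the returned value should be far below 1e-7.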
Formula derivation (very interesting!): https://blog.csdn.net/oBrightLamp/article/details/84333111, which includes a derivation of gradient computation for neural networks:
The trace of an n×n square matrix A is defined as the sum of the elements on its main diagonal; the trace operation is usually written as tr A, so we have

$$\operatorname{tr} A = \sum_{i=1}^{n} A_{ii}$$
Appendix (a brief sketch of the standard definitions follows this list):
1. Derivative of a vector with respect to a vector
2. Derivative of a scalar with respect to a vector
3. Derivative of a vector with respect to a scalar
4. Formulas that may come in handy
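As a quick reference for items 1–4 above, here is a sketch of the standard definitions and a few commonly used identities. The symbols ($\mathbf{x} \in \mathbb{R}^n$, $\mathbf{y} \in \mathbb{R}^m$, scalar $f$, scalar $t$, matrices $A$, $B$, vector $\mathbf{a}$) are introduced only for this sketch, and layout conventions (numerator vs. denominator) vary between references.

$$\frac{\partial \mathbf{y}}{\partial \mathbf{x}} = \begin{bmatrix} \frac{\partial y_1}{\partial x_1} & \cdots & \frac{\partial y_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial y_m}{\partial x_1} & \cdots & \frac{\partial y_m}{\partial x_n} \end{bmatrix}, \qquad \frac{\partial f}{\partial \mathbf{x}} = \begin{bmatrix} \frac{\partial f}{\partial x_1} & \cdots & \frac{\partial f}{\partial x_n} \end{bmatrix}^{\top}, \qquad \frac{\partial \mathbf{y}}{\partial t} = \begin{bmatrix} \frac{\partial y_1}{\partial t} & \cdots & \frac{\partial y_m}{\partial t} \end{bmatrix}^{\top}$$

Commonly used identities (in denominator layout):

$$\frac{\partial\, \mathbf{a}^{\top}\mathbf{x}}{\partial \mathbf{x}} = \mathbf{a}, \qquad \frac{\partial\, \mathbf{x}^{\top} A \mathbf{x}}{\partial \mathbf{x}} = \left(A + A^{\top}\right)\mathbf{x}, \qquad \frac{\partial\, \operatorname{tr}(AB)}{\partial A} = B^{\top}$$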
Reference: https://www.cnblogs.com/crackpotisback/p/5545708.html (a comprehensive collection of derivative formulas)