An implementation of a two-layer fully connected neural network in NumPy, covering the network itself, the backpropagation of the gradients, and the weight update step:
# -*- coding: utf-8 -*-
import numpy as np

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)

# Randomly initialize weights
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.dot(w1)
    h_relu = np.maximum(h, 0)
    y_pred = h_relu.dot(w2)

    # Compute and print loss
    loss = np.square(y_pred - y).sum()
    print(t, loss)

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.T.dot(grad_y_pred)
    grad_h_relu = grad_y_pred.dot(w2.T)
    grad_h = grad_h_relu.copy()
    grad_h[h < 0] = 0
    grad_w1 = x.T.dot(grad_h)

    # Update weights
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
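A quick way to verify hand-derived gradients like these is a finite-difference check. The sketch below is not part of the original post; the tiny dimensions and the helper loss_fn are introduced only for illustration. It compares the analytic gradient of the loss with respect to one entry of w2 against a central-difference estimate:

import numpy as np

np.random.seed(0)
N, D_in, H, D_out = 4, 5, 3, 2          # tiny sizes to keep the check fast
x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)

def loss_fn(w1, w2):
    # Same forward pass and loss as in the training loop above
    h = x.dot(w1)
    h_relu = np.maximum(h, 0)
    y_pred = h_relu.dot(w2)
    return np.square(y_pred - y).sum()

# Analytic gradient of the loss with respect to w2 (same formula as above)
h = x.dot(w1)
h_relu = np.maximum(h, 0)
y_pred = h_relu.dot(w2)
grad_w2 = h_relu.T.dot(2.0 * (y_pred - y))

# Central-difference estimate for a single entry of w2
eps = 1e-6
w2_plus = w2.copy();  w2_plus[0, 0] += eps
w2_minus = w2.copy(); w2_minus[0, 0] -= eps
numeric = (loss_fn(w1, w2_plus) - loss_fn(w1, w2_minus)) / (2 * eps)

print(grad_w2[0, 0], numeric)   # the two values should agree closely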
The core code of the backward pass is as follows (the shapes in the comments assume the batch and layer sizes defined above):
h = x.dot(w1)
h_relu = np.maximum(h, 0)
y_pred = h_relu.dot(w2)
loss = np.square(y_pred - y).sum()
grad_y_pred = 2.0 * (y_pred - y)        # 64 x 10
grad_w2 = h_relu.T.dot(grad_y_pred)     # 100 x 10
grad_h_relu = grad_y_pred.dot(w2.T)     # 64 x 100
grad_h = grad_h_relu.copy()             # 64 x 100
grad_h[h < 0] = 0                       # 64 x 100
grad_w1 = x.T.dot(grad_h)               # 1000 x 100
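Each of these lines follows from the chain rule applied to the forward computation L = sum((y_pred - y)^2), with y_pred = h_relu w2, h_relu = max(h, 0), h = x w1. Written out (this derivation is not in the original post, but it matches the code line by line):

\[
\frac{\partial L}{\partial y_{\text{pred}}} = 2\,(y_{\text{pred}} - y), \qquad
\frac{\partial L}{\partial w_2} = h_{\text{relu}}^{\top}\,\frac{\partial L}{\partial y_{\text{pred}}}, \qquad
\frac{\partial L}{\partial h_{\text{relu}}} = \frac{\partial L}{\partial y_{\text{pred}}}\,w_2^{\top},
\]
\[
\frac{\partial L}{\partial h} = \frac{\partial L}{\partial h_{\text{relu}}} \odot \mathbf{1}[h \ge 0], \qquad
\frac{\partial L}{\partial w_1} = x^{\top}\,\frac{\partial L}{\partial h}.
\]

The indicator \(\mathbf{1}[h \ge 0]\) is exactly what the line grad_h[h < 0] = 0 implements.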
Question: how is the derivative of ReLU implemented?
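The answer is already implicit in the code above: the derivative of ReLU is 1 where its input is positive and 0 where it is negative (at exactly 0 the derivative is undefined, and either convention works in practice), so the backward pass simply zeroes the upstream gradient wherever the pre-activation h is negative. A minimal standalone sketch, with toy values chosen here only for illustration:

import numpy as np

h = np.array([[-1.0, 0.0, 2.0]])          # pre-activation values
grad_h_relu = np.array([[0.5, 0.5, 0.5]]) # upstream gradient

# Variant 1: in-place masking, as in the code above (lets h == 0 pass through)
grad_h = grad_h_relu.copy()
grad_h[h < 0] = 0

# Variant 2: multiply by an explicit 0/1 mask (zeroes h == 0 as well)
grad_h_alt = grad_h_relu * (h > 0)

print(grad_h)      # [[0.  0.5 0.5]]
print(grad_h_alt)  # [[0.  0.  0.5]]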