python - Backpropagation in Andrew Trask's snippet

I'm trying to figure out just one line in the snippet below, which I got from here:

import numpy as np

# Input: four 3-feature samples; the third column acts as a bias input.
X = np.array([ [0,0,1],[0,1,1],[1,0,1],[1,1,1] ])
# Targets: XOR of the first two input columns.
y = np.array([[0,1,1,0]]).T
alpha,hidden_dim = (0.5,4)
# Weights initialized uniformly in [-1, 1)
synapse_0 = 2*np.random.random((3,hidden_dim)) - 1
synapse_1 = 2*np.random.random((hidden_dim,1)) - 1
for j in range(60000):
    # Forward pass: two sigmoid layers
    layer_1 = 1/(1+np.exp(-(np.dot(X,synapse_0))))
    layer_2 = 1/(1+np.exp(-(np.dot(layer_1,synapse_1))))
    # Backward pass: error deltas, then gradient-descent weight updates
    layer_2_delta = (layer_2 - y)*(layer_2*(1-layer_2))
    layer_1_delta = layer_2_delta.dot(synapse_1.T) * (layer_1 * (1-layer_1))
    synapse_1 -= (alpha * layer_1.T.dot(layer_2_delta))
    synapse_0 -= (alpha * X.T.dot(layer_1_delta))
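
For reference, after the loop finishes, layer_2 holds the network's output for every training sample, so a quick check (a minimal sketch, assuming the code above has just run) would be:

# layer_2 should now approximate y = [[0], [1], [1], [0]]
print(np.round(layer_2, 2))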

The line I cannot figure out is:

layer_1_delta = layer_2_delta.dot(synapse_1.T) * (layer_1 * (1-layer_1))

Specifically, why are we taking the dot product with synapse_1 instead of layer_1?

By using synapse_1 in the delta calculation, the partial derivative is taken with respect to the weights rather than the layer_1 output, which is what we actually want, right?

I think this is what layer_1_delta should actually be:

layer_1_delta = layer_1.T.dot(layer_2_delta) * (layer_1 * (1-layer_1))
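
As a sanity check on the shapes involved, here is a quick comparison (a minimal sketch; sizes taken from the snippet, with n = 4 samples and h = hidden_dim = 4):

import numpy as np

n, in_dim, h = 4, 3, 4
X = np.random.random((n, in_dim))
y = np.random.random((n, 1))
synapse_0 = 2*np.random.random((in_dim, h)) - 1
synapse_1 = 2*np.random.random((h, 1)) - 1
layer_1 = 1/(1+np.exp(-np.dot(X, synapse_0)))        # (n, h)
layer_2 = 1/(1+np.exp(-np.dot(layer_1, synapse_1)))  # (n, 1)
layer_2_delta = (layer_2 - y)*(layer_2*(1-layer_2))  # (n, 1)

# The delta propagated back to layer_1 needs one entry per hidden unit per
# sample, i.e. the same shape as layer_1 itself:
assert layer_2_delta.dot(synapse_1.T).shape == layer_1.shape    # (n, h)

# layer_1.T.dot(layer_2_delta) instead has the shape of synapse_1 -- it is the
# weight gradient that the snippet already uses in the synapse_1 update line:
assert layer_1.T.dot(layer_2_delta).shape == synapse_1.shape    # (h, 1)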
Question from: https://stackoverflow.com/questions/65864250/backpropagation-in-andrew-trasks-snippet

1 Answer

Waiting for answers
