In deep learning, training a network is the process of updating its parameters. One situation to watch for: when the input data has not been normalized, the forward pass can saturate to a hard one-hot output such as [0, 0, 0, 1, 0, 0, 0, 0] while the true label is [1, 0, 0, 0, 0, 0, 0, 0]. The output is then no longer a meaningful probability distribution but a degenerate estimate: the cross-entropy term log(0) evaluates to -inf, backpropagation drives the weights and biases toward infinity, the values overflow, and the loss shows up as NaN.
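A quick numpy sketch of this failure mode (the 8-class vectors are the hypothetical ones from above, not MNIST outputs):

```python
import numpy as np

# The softmax output has collapsed to a hard one-hot vector,
# but the true class is a different one.
pred = np.array([0., 0., 0., 1., 0., 0., 0., 0.])   # forward-pass output
label = np.array([1., 0., 0., 0., 0., 0., 0., 0.])  # ground-truth label

with np.errstate(divide='ignore', invalid='ignore'):
    # log(0) -> -inf, and 0 * -inf -> nan, so the sum is nan
    cross_entropy = -np.sum(label * np.log(pred))

print(cross_entropy)  # nan
```

Once the loss is NaN, every gradient derived from it is NaN as well, so the whole parameter set is corrupted in a single update step.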
Solutions:
1. Normalize the input data, e.g. divide image pixel values by 255 to map them into [0, 1];
2. For networks with many layers, apply batch normalization in each layer;
3. Initialize the weights with tf.truncated_normal([3, 3, 1, 64], mean=0, stddev=0.01) (note the shape comes first in this signature), i.e. zero mean and a small standard deviation;
4. Use tanh as the activation function;
5. Reduce the learning rate lr.
Example:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('data', one_hot=True)

def add_layer(input_data, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    Biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.add(tf.matmul(input_data, Weights), Biases)
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    # return outputs  # , Weights
    return {'outdata': outputs, 'w': Weights}

def get_accuracy(t_y):
    # global l1
    # accu = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(l1['outdata'], 1), tf.argmax(t_y, 1)), dtype=tf.float32))
    global prediction
    accu = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(prediction['outdata'], 1), tf.argmax(t_y, 1)), dtype=tf.float32))
    return accu

X = tf.placeholder(tf.float32, [None, 784])
Y = tf.placeholder(tf.float32, [None, 10])

# l1 = add_layer(X, 784, 10, tf.nn.softmax)
# cross_entropy = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(l1['outdata']), reduction_indices=[1]))
# l1 = add_layer(X, 784, 1024, tf.nn.relu)
l1 = add_layer(X, 784, 1024, None)
prediction = add_layer(l1['outdata'], 1024, 10, tf.nn.softmax)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(prediction['outdata']), reduction_indices=[1]))

optimizer = tf.train.GradientDescentOptimizer(0.000001)
train = optimizer.minimize(cross_entropy)

newW = tf.Variable(tf.random_normal([1024, 10]))
newOut = tf.matmul(l1['outdata'], newW)
newSoftMax = tf.nn.softmax(newOut)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    # print(sess.run(l1_Weights))
    for i in range(2):
        X_train, y_train = mnist.train.next_batch(1)
        X_train = X_train / 255  # normalize the input
        # print(sess.run(l1['w'], feed_dict={X: X_train}))
        # print(sess.run(prediction['w'], feed_dict={X: X_train, Y: y_train}))
        # print(sess.run(l1['outdata'], feed_dict={X: X_train, Y: y_train}).shape)
        print(sess.run(prediction['outdata'], feed_dict={X: X_train, Y: y_train}))
        print(sess.run(newOut, feed_dict={X: X_train}))
        print(sess.run(newSoftMax, feed_dict={X: X_train}))
        print(y_train)
        # print(sess.run(l1['outdata'], feed_dict={X: X_train}))
        sess.run(train, feed_dict={X: X_train, Y: y_train})
        if i % 100 == 0:
            # print(sess.run(cross_entropy, feed_dict={X: X_train, Y: y_train}))
            accuracy = get_accuracy(mnist.test.labels)
            print(sess.run(accuracy, feed_dict={X: mnist.test.images}))
        # if i % 100 == 0:
        #     print(sess.run(prediction, feed_dict={X: X_train}))
        #     print(sess.run(cross_entropy, feed_dict={X: X_train, Y: y_train}))
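The listing above computes tf.log(softmax(...)) by hand, which is exactly where log(0) can appear. TensorFlow 1.x already ships a numerically stable alternative, tf.nn.softmax_cross_entropy_with_logits, which fuses the two steps via the log-sum-exp trick. A numpy sketch of what that fusion does (the extreme logit value is made up to stress the computation):

```python
import numpy as np

def stable_cross_entropy(logits, labels):
    """Cross-entropy computed directly from raw logits via log-softmax,
    so log(0) is never evaluated."""
    shifted = logits - logits.max(axis=1, keepdims=True)  # log-sum-exp trick
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -np.sum(labels * log_probs, axis=1)

# Even with an extreme logit the loss stays finite instead of becoming nan:
logits = np.array([[1000.0, 0.0, 0.0]])
labels = np.array([[0.0, 1.0, 0.0]])
print(stable_cross_entropy(logits, labels))  # [1000.]
```

Naively exponentiating a logit of 1000 would overflow to inf and the loss would be nan; subtracting the row maximum first keeps every intermediate value finite.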