0x00 The Overfitting Problem
The root cause of underfitting is too few feature dimensions and an overly simple model: the fitted function cannot capture the training set, so the error is large. Overfitting, by contrast, stems from too many feature dimensions, an overly complex model with too many parameters, too little training data, or too much noise: the fitted function predicts the training set almost perfectly, but its predictions degrade badly on a new test set.
The root causes of overfitting can be attributed to the following:
- Too little data, so the sample cannot describe the true distribution of the problem. Consider coin flipping: if ten consecutive flips all come up heads, a machine learning from that data cannot recover the true statistics of coin flipping. Conversely, as the amount of data grows toward infinity, the overfitting problem disappears.
- Bias between observed values and true values. Take a face recognition system: different people are naturally photographed against different backgrounds. If every face in your training set happens to share the same photo background, that background becomes an error introduced during sample selection. Faces photographed against the same background may still be recognized correctly, but switching to a different background can make the error much larger.
- Noisy data. If there is too much noise, an overfitted model will try to cover the noise points as well, making the training error extremely small but producing a large error on the test set (a minimal sketch of this effect follows this list).
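To see these causes in action outside of neural networks, here is a minimal NumPy sketch (my own illustration, not from the original post): a degree-9 polynomial has enough parameters to pass exactly through ten noisy samples of a sine curve, so its training error collapses while its test error explodes. The curve, sample sizes, and noise level are arbitrary choices.

```python
import numpy as np

np.random.seed(0)

# Too little data plus noise: ten noisy samples of sin(2*pi*x).
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + np.random.normal(0, 0.2, 10)

# Held-out points from the true, noise-free curve.
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # least-squares polynomial fit
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print('degree %d: train MSE %.6f, test MSE %.6f' % (degree, train_mse, test_mse))
```

The degree-9 fit interpolates all ten training points (ten coefficients, ten points), which is exactly the "perfect on the training set, poor on new data" behavior described above.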
An example of overfitting, adapted from the MNIST code shown earlier:
```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("data", one_hot=True)

x = tf.placeholder(tf.float32, shape=[None, 784])
y = tf.placeholder(tf.float32, shape=[None, 10])

# Hidden layer 1: 784 -> 1000, tanh activation
weight_1 = tf.Variable(tf.truncated_normal([784, 1000], stddev=0.1))
bias_1 = tf.Variable(tf.zeros([1000]) + 0.1)
prediction_1 = tf.nn.tanh(tf.matmul(x, weight_1) + bias_1)

# Hidden layer 2: 1000 -> 2000, tanh activation
weight_2 = tf.Variable(tf.truncated_normal([1000, 2000], stddev=0.1))
bias_2 = tf.Variable(tf.zeros([2000]) + 0.1)
prediction_2 = tf.nn.tanh(tf.matmul(prediction_1, weight_2) + bias_2)

# Output layer: 2000 -> 10, raw logits for the softmax cross-entropy below
weight_3 = tf.Variable(tf.truncated_normal([2000, 10], stddev=0.1))
bias_3 = tf.Variable(tf.zeros([10]) + 0.1)
prediction = tf.matmul(prediction_2, weight_3) + bias_3

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=prediction))
train = tf.train.GradientDescentOptimizer(0.2).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch_size = 100
    train_batch_num = mnist.train.num_examples // batch_size
    # Build the accuracy ops once, outside the loop, so the graph does not grow.
    correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    for epoch in range(30):
        for i in range(train_batch_num):
            images, labels = mnist.train.next_batch(batch_size)
            sess.run(train, feed_dict={x: images, y: labels})
        print('========== Iter ' + str(epoch) + ' ==========')
        train_acc = 0
        for i in range(train_batch_num):
            images, labels = mnist.train.next_batch(batch_size)
            train_acc = train_acc + sess.run(accuracy, feed_dict={x: images, y: labels})
        print('Train set accuracy: ' + str(train_acc / train_batch_num * 100), "%")
        test_acc = 0
        test_batch_num = mnist.test.num_examples // batch_size
        for i in range(test_batch_num):
            images, labels = mnist.test.next_batch(batch_size)
            test_acc = test_acc + sess.run(accuracy, feed_dict={x: images, y: labels})
        print('Test set accuracy: ' + str(test_acc / test_batch_num * 100), "%")
```
In the example above we added two hidden layers, the first with 1,000 neurons and the second with 2,000. During training we find that by the 7th iteration the training set accuracy already reaches 100%, while the test set accuracy hovers around 98% — an overfitted state.
To make the overfitting even more pronounced, we swap the two sets: train the network on the test set and evaluate it on the training set. Since the MNIST test set (10,000 images) is much smaller than the training set (55,000 images), the network now has far less data from which to learn the true distribution. The code after swapping is as follows:
```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("data", one_hot=True)

x = tf.placeholder(tf.float32, shape=[None, 784])
y = tf.placeholder(tf.float32, shape=[None, 10])

weight_1 = tf.Variable(tf.truncated_normal([784, 1000], stddev=0.1))
bias_1 = tf.Variable(tf.zeros([1000]) + 0.1)
prediction_1 = tf.nn.tanh(tf.matmul(x, weight_1) + bias_1)

weight_2 = tf.Variable(tf.truncated_normal([1000, 2000], stddev=0.1))
bias_2 = tf.Variable(tf.zeros([2000]) + 0.1)
prediction_2 = tf.nn.tanh(tf.matmul(prediction_1, weight_2) + bias_2)

weight_3 = tf.Variable(tf.truncated_normal([2000, 10], stddev=0.1))
bias_3 = tf.Variable(tf.zeros([10]) + 0.1)
prediction = tf.matmul(prediction_2, weight_3) + bias_3

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=prediction))
train = tf.train.GradientDescentOptimizer(0.2).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch_size = 100
    train_batch_num = mnist.train.num_examples // batch_size
    test_batch_num = mnist.test.num_examples // batch_size
    correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    for epoch in range(30):
        # Train on the (much smaller) test set.
        for i in range(test_batch_num):
            images, labels = mnist.test.next_batch(batch_size)
            sess.run(train, feed_dict={x: images, y: labels})
        print('========== Iter ' + str(epoch) + ' ==========')
        # The original training set now acts as the held-out data.
        train_acc = 0
        for i in range(train_batch_num):
            images, labels = mnist.train.next_batch(batch_size)
            train_acc = train_acc + sess.run(accuracy, feed_dict={x: images, y: labels})
        print('Train set accuracy: ' + str(train_acc / train_batch_num * 100), "%")
        test_acc = 0
        for i in range(test_batch_num):
            images, labels = mnist.test.next_batch(batch_size)
            test_acc = test_acc + sess.run(accuracy, feed_dict={x: images, y: labels})
        print('Test set accuracy: ' + str(test_acc / test_batch_num * 100), "%")
```
Running this code, the test set (used here for training) reaches 100% accuracy by the 6th iteration, while at the end of training the training set (used here for evaluation) only reaches about 94% — again clearly overfitted.
Ways to reduce overfitting:
- Get more data, or apply data augmentation
- Regularization methods (see the sketch after this list)
- Dropout
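As a concrete illustration of the regularization bullet, here is one possible way to add L2 weight decay to the model used above. This is my own sketch, not code from the original post; the coefficient `lambda_l2 = 5e-4` is an arbitrary starting value. It replaces the `loss` and `train` definitions in the earlier snippets:

```python
# L2 weight decay: penalize large weights so the network prefers smoother
# functions. Reuses weight_1/2/3, y, and prediction from the model above;
# lambda_l2 is a hypothetical hyperparameter chosen for illustration.
lambda_l2 = 5e-4
l2_penalty = lambda_l2 * (tf.nn.l2_loss(weight_1) +
                          tf.nn.l2_loss(weight_2) +
                          tf.nn.l2_loss(weight_3))
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=prediction)) + l2_penalty
train = tf.train.GradientDescentOptimizer(0.2).minimize(loss)
```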
0x01 Tackling Overfitting with Dropout
Dropout reduces overfitting by temporarily removing the contribution of a randomly chosen subset of neurons from the network during training.

[Figure: a fully connected network before and after Dropout is applied]

The Dropout procedure:
1. First, randomly delete a portion of the hidden neurons, leaving the input and output neurons untouched.
2. Forward-propagate the input x through the modified network, then back-propagate the resulting loss through the same modified network. Once a mini-batch of training samples has been processed, update the weights w and biases b of the neurons that were not deleted using gradient descent.
3. Restore the deleted neurons. At this point the neurons that were kept have updated parameters, while the parameters of the deleted neurons are unchanged.
4. Randomly select a new half-sized subset of the hidden-layer neurons to delete.
5. Repeat steps 2-4.
When deleting neurons, Dropout draws from a Bernoulli distribution: given a single parameter p, it generates a random vector containing only 0s and 1s, where each entry is 1 with probability p.
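Here is a minimal NumPy sketch of this masking step (my own illustration, not from the original post). It follows the "inverted dropout" convention, which tf.nn.dropout also uses: each activation survives with probability keep_prob, and survivors are scaled by 1/keep_prob so the expected activation is unchanged and no rescaling is needed at test time.

```python
import numpy as np

def dropout_forward(activations, keep_prob):
    # Bernoulli vector: each entry is 1 with probability keep_prob, else 0.
    mask = np.random.binomial(1, keep_prob, size=activations.shape)
    # Inverted dropout: scale the survivors by 1/keep_prob so that the
    # expected value of every activation stays the same.
    return activations * mask / keep_prob

a = np.ones((3, 4))
print(dropout_forward(a, keep_prob=0.8))  # ~80% of entries are 1.25, the rest 0
```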
Applying Dropout to the overfitting code above gives:
```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("data", one_hot=True)

x = tf.placeholder(tf.float32, shape=[None, 784])
y = tf.placeholder(tf.float32, shape=[None, 10])
# Fraction of activations to drop: fed as 0.2 during training and 0.0
# during evaluation, so dropout is disabled at test time.
drop_rate = tf.placeholder(tf.float32)

weight_1 = tf.Variable(tf.truncated_normal([784, 1000], stddev=0.1))
bias_1 = tf.Variable(tf.zeros([1000]) + 0.1)
prediction_1 = tf.nn.tanh(tf.matmul(x, weight_1) + bias_1)
dropped_1 = tf.nn.dropout(prediction_1, rate=drop_rate)

weight_2 = tf.Variable(tf.truncated_normal([1000, 2000], stddev=0.1))
bias_2 = tf.Variable(tf.zeros([2000]) + 0.1)
prediction_2 = tf.nn.tanh(tf.matmul(dropped_1, weight_2) + bias_2)
dropped_2 = tf.nn.dropout(prediction_2, rate=drop_rate)

weight_3 = tf.Variable(tf.truncated_normal([2000, 10], stddev=0.1))
bias_3 = tf.Variable(tf.zeros([10]) + 0.1)
# The output layer is built on the dropped activations of layer 2.
prediction = tf.matmul(dropped_2, weight_3) + bias_3

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=prediction))
train = tf.train.GradientDescentOptimizer(0.2).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch_size = 100
    train_batch_num = mnist.train.num_examples // batch_size
    test_batch_num = mnist.test.num_examples // batch_size
    correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    for epoch in range(30):
        for i in range(train_batch_num):
            images, labels = mnist.train.next_batch(batch_size)
            sess.run(train, feed_dict={x: images, y: labels, drop_rate: 0.2})
        print('========== Iter ' + str(epoch) + ' ==========')
        train_acc = 0
        for i in range(train_batch_num):
            images, labels = mnist.train.next_batch(batch_size)
            train_acc = train_acc + sess.run(accuracy,
                                             feed_dict={x: images, y: labels, drop_rate: 0.0})
        print('Train set accuracy: ' + str(train_acc / train_batch_num * 100), "%")
        test_acc = 0
        for i in range(test_batch_num):
            images, labels = mnist.test.next_batch(batch_size)
            test_acc = test_acc + sess.run(accuracy,
                                           feed_dict={x: images, y: labels, drop_rate: 0.0})
        print('Test set accuracy: ' + str(test_acc / test_batch_num * 100), "%")
```
With dropout enabled, the model converges more slowly. The overfitted run described in section 0x00 ended with 100% accuracy on the data used for training but only 94% on the held-out data, a gap of 6 percentage points. Running the code above, we end up with a training set accuracy of 99.44% and a test set accuracy of 97.51%, so the gap shrinks considerably, to about 2 points.