Error not decreasing in a 3 layer deep CNN using TensorFlow

I'm trying to train a CNN to play an online game by feeding it images of the game along with the corresponding keyboard input.



By playing the game for a while and collecting data, I gathered 342 images of size 110x42. I'm feeding these images into the network like so:


import tensorflow as tf

def convolutional_neural_network(x):
    weights = {'W_conv1': tf.Variable(tf.random_normal([3, 3, 1, 16])),
               'W_conv2': tf.Variable(tf.random_normal([5, 5, 16, 32])),
               'W_conv3': tf.Variable(tf.random_normal([5, 5, 32, 64])),
               'W_conv4': tf.Variable(tf.random_normal([5, 5, 64, 64])),
               'W_fc': tf.Variable(tf.random_normal([7 * 3 * 64, 1024])),
               'out': tf.Variable(tf.random_normal([1024, n_classes]))}

    biases = {'b_conv1': tf.Variable(tf.random_normal([16])),
              'b_conv2': tf.Variable(tf.random_normal([32])),
              'b_conv3': tf.Variable(tf.random_normal([64])),
              'b_conv4': tf.Variable(tf.random_normal([64])),
              'b_fc': tf.Variable(tf.random_normal([1024])),
              'out': tf.Variable(tf.random_normal([n_classes]))}

    x = tf.reshape(x, shape=[-1, 110, 42, 1])

    conv1 = tf.nn.relu(conv2d(x, weights['W_conv1']) + biases['b_conv1'])
    conv1 = maxpool2d(conv1)

    conv2 = tf.nn.relu(conv2d(conv1, weights['W_conv2']) + biases['b_conv2'])
    conv2 = maxpool2d(conv2)

    conv3 = tf.nn.relu(conv2d(conv2, weights['W_conv3']) + biases['b_conv3'])
    conv3 = maxpool2d(conv3)

    conv4 = tf.nn.relu(conv2d(conv3, weights['W_conv4']) + biases['b_conv4'])
    conv4 = maxpool2d(conv4)

    # conv4 is the last pooled layer (110x42 shrinks to 7x3 after four 2x2 pools),
    # so flatten conv4, not conv3, to match the 7 * 3 * 64 fully connected weights
    fc = tf.reshape(conv4, [-1, 7 * 3 * 64])
    fc = tf.nn.relu(tf.matmul(fc, weights['W_fc']) + biases['b_fc'])
    fc = tf.nn.dropout(fc, keep_rate)

    # sigmoid squashes the raw output scores into [0, 1]
    output = tf.sigmoid(tf.add(tf.matmul(fc, weights['out']), biases['out'], name='pred'))

    return output


def train_neural_network(x):
    prediction = convolutional_neural_network(x)
    cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=prediction, labels=y))
    optimizer = tf.train.AdamOptimizer(learning_rate=1).minimize(cost)

    hm_epochs = 6
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        for epoch in range(hm_epochs):
            epoch_loss = 0
            for epoch_x, epoch_y, i in dataset.create_batches():
                epoch_x = epoch_x.reshape(-1, 4620)  # 110 * 42 = 4620 pixels per image
                _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
                epoch_loss += c

            print('Epoch', epoch, 'completed out of', hm_epochs, 'loss:', epoch_loss)

        # treat each output independently: predicted "pressed" iff sigmoid > 0.5
        correct = tf.equal(tf.greater(prediction, 0.5), tf.equal(y, 1.0))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval(feed_dict={x: dataset.dataset['test']['x_test'],
                                                    y: dataset.dataset['test']['y_test']}))


train_neural_network(x)
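(conv2d and maxpool2d aren't shown above; given that the feature map has to come out at 7x3 after four rounds of pooling on a 110x42 input, they'd have to be the standard stride-1 / 2x2-pool wrappers, something like:)

def conv2d(x, W):
    # stride-1 convolution, zero-padded so the spatial size is preserved
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def maxpool2d(x):
    # 2x2 max pooling with stride 2, halving each spatial dimension (rounded up)
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')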



The loss gets stuck at a particular value and just floats up and down around it by a small amount.



I've tried increasing/decreasing the learning rate, improving the quality of the images, changing the batch size... and nothing seems to make the network stable.



Do you guys know what I'm doing wrong?



Thanks in advance!




1 Answer



My first thought, which might be wrong, is: how complex is this game, and are 342 samples really enough to capture that complexity?



Also, which error are you talking about? Your training error? If so, you should also set aside an evaluation dataset.
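If you don't have one yet, a simple hold-out split is enough to tell whether the network is generalizing or just memorizing the 342 frames. A minimal sketch (the array names and the 4-key label shape here are my assumptions, not from your code):

import numpy as np
from sklearn.model_selection import train_test_split

# Illustrative stand-ins for your data: 342 frames of 110x42 = 4620 pixels,
# with one binary label per tracked key (4 keys assumed here).
images = np.random.rand(342, 4620).astype(np.float32)
labels = np.random.randint(0, 2, size=(342, 4)).astype(np.float32)

x_train, x_val, y_train, y_val = train_test_split(
    images, labels, test_size=0.2, random_state=42)

# Train only on (x_train, y_train); after each epoch, run the cost op on the
# held-out data without the optimizer to get a validation loss:
# val_loss = sess.run(cost, feed_dict={x: x_val, y: y_val})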



Also, how much did you play around with the learning rate? Which range of values?
Have you tried Adam's default learning rate? 1 seems rather high.
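For reference, Adam's default learning rate in TensorFlow is 0.001, three orders of magnitude below 1, and learning rates are usually swept on a log scale. A minimal sketch of what I mean (reusing the cost op from your code):

# Adam's default step size in TF 1.x; learning_rate=1 is ~1000x larger and
# commonly makes the loss oscillate around a plateau instead of decreasing.
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)

# When tuning, try a log-scale sweep rather than small linear nudges:
# for lr in (1e-2, 1e-3, 1e-4, 1e-5):
#     optimizer = tf.train.AdamOptimizer(learning_rate=lr).minimize(cost)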





