# A Tensorflow Exercise

A previous post in this series implemented the Walk Forward Loop on top of Microsoft's CNTK. There was interest in a Google Tensorflow implementation – Tensorflow seems to be the more popular framework in this domain – so I decided to port what I had already done to Tensorflow.

The full source code is here. It will not work without modifications – it needs data, and some of my modules – but these are pretty easy to fix.

tensorflow_fit_predict is where the work is done. The first point of interest is how we use Tensorflow’s computation graph:

```
with tf.Graph().as_default():
    input = tf.placeholder(tf.float32, [None, nfeatures])
    label = tf.placeholder(tf.float32, [None, nlabels])
```

The above code creates a new graph. This is a slight deviation from how Tensorflow is usually used (take a look at pretty much any example), but there is a good reason for it: tensorflow_fit_predict is called in a loop, and if we reuse the default graph on each iteration, the old nodes are never deleted – they stay around, and we run out of memory fairly quickly. I found this out the hard way – the code still contains some of the debug logging I used to investigate the issue.
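The leak can be illustrated without Tensorflow at all: the underlying pattern is a shared mutable "default" container that every call appends to and nothing ever empties. The sketch below (plain Python, hypothetical names) contrasts that with a per-call container, which mirrors what wrapping each iteration in `with tf.Graph().as_default():` achieves – the per-call graph becomes garbage as soon as the call returns.

```python
# A shared default container leaks across calls: nodes from every
# previous iteration stay reachable, so memory grows without bound.
default_graph = []  # stands in for Tensorflow's process-wide default graph

def fit_predict_shared(nnodes):
    # Adds "nodes" to the shared graph; nothing is ever removed.
    default_graph.extend(object() for _ in range(nnodes))
    return len(default_graph)

def fit_predict_fresh(nnodes):
    # A new graph per call, like `with tf.Graph().as_default():`.
    graph = []
    graph.extend(object() for _ in range(nnodes))
    return len(graph)  # graph is freed once the call returns

for _ in range(3):
    shared = fit_predict_shared(10)
    fresh = fit_predict_fresh(10)

print(shared)  # 30: nodes accumulated across all three iterations
print(fresh)   # 10: each iteration starts from a clean graph
```

The same reasoning explains why the walk-forward loop kept growing in memory until each iteration was given its own graph.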

Next comes the deep neural network:

```
input = tf.placeholder(tf.float32, [None, nfeatures])
label = tf.placeholder(tf.float32, [None, nlabels])

# Convolution layer: 32 filters of width 3 over the 1-D feature vector
nconv1 = 32
cw1 = tf.Variable(tf.random_normal([1, 3, 1, nconv1]))
cb1 = tf.Variable(tf.random_normal([nconv1]))
conv_input = tf.reshape(input, shape=[-1, 1, nfeatures, 1])
cl1 = tf.nn.relu(tf.nn.conv2d(conv_input, cw1, strides=[1, 1, 1, 1], padding='SAME') + cb1)
mp1 = tf.nn.max_pool(cl1, ksize=[1, 1, 2, 1], strides=[1, 1, 2, 1], padding='SAME')

# First fully-connected layer; 378*32 is the flattened size of the
# max-pool output (nfeatures/2 positions times 32 filters)
nhidden1 = 128
w1 = tf.Variable(tf.random_normal([378*32, nhidden1]))
b1 = tf.Variable(tf.random_normal([nhidden1]))
fc_input = tf.reshape(mp1, [-1, w1.get_shape().as_list()[0]])

# Second fully-connected layer
nhidden2 = 128
w2 = tf.Variable(tf.random_normal([nhidden1, nhidden2]))
b2 = tf.Variable(tf.random_normal([nhidden2]))

# Output layer mapping to the label dimension
w3 = tf.Variable(tf.random_normal([nhidden2, nlabels]))
b3 = tf.Variable(tf.random_normal([nlabels]))