tensorflow - Using states as predictors in a Recurrent Neural Network
I'm using an LSTM (Long Short-Term Memory) network in TensorFlow with a linear layer as the final layer, feeding the concatenation of the input and the LSTM output into that final layer. I would like to add the states of the LSTM as predictors as well.

The difficulty is that tf.nn.dynamic_rnn() only produces the last state as output, so I've resorted to using a loop: running tf.nn.dynamic_rnn() for one time step, outputting the state, running one more time step, and so on. However, I'm still getting errors with this code. Here it is. The input is a 30x238x3 tensor of [observations/batches, timesteps, predictors]:
    testinput = tf.placeholder(tf.float64, [None, 238, 3])
    weightin = tf.placeholder(tf.float64, [None])
    target = tf.placeholder(tf.float64, [1, None, 238])
    mask = tf.placeholder(tf.float64, [1, None, 238])
    lin_weight = tf.Variable(numpy.concatenate((numpy.reshape(numpy.array([0.12504494, 0.326449906, -0.192413488]), (1, 3, 1)),
                                                numpy.random.normal(0, 1/((3000000000.0)**(1/2.0)), [1, 3*neurons, 1])),
                                               axis=1), dtype=tf.float64)  # [0.12504494, 0.326449906, -0.192413488]
    bias = tf.Variable(1.76535047076, dtype=tf.float64)  # 1.76535047076
    lstm = tf.contrib.rnn.BasicLSTMCell(neurons)
    state = lstm.zero_state(1, tf.float64)
    out1 = [0.0 for each2 in range(238)]
    a = [0.0 for each2 in range(238)]
    b = [0.0 for each2 in range(238)]
    for k in range(238):
        out1[k], state = tf.nn.dynamic_rnn(lstm, testinput, initial_state=state, sequence_length=[1], dtype=tf.float64)
        (a[k], b[k]) = state
    print(out1)
    out1 = tf.reshape(numpy.array(out1), [-1, 238, 4])
    a = tf.reshape(numpy.array(a), [-1, 238, 4])
    b = tf.reshape(numpy.array(b), [-1, 238, 4])
    lineinput = tf.concat([testinput, out1, a, b], 2)
    output = tf.squeeze(tf.tensordot(lin_weight, lineinput, [[1], [2]]) + bias, [0])
    sqerror = tf.square(tf.subtract(output, target))
    masklayer = tf.multiply(sqerror, mask)
    useweight = tf.tensordot(masklayer, weightin, [[1], [0]])
    meansquared = tf.reduce_sum(useweight)
    #meansquared = tf.reduce_mean(tf.tensordot(tf.multiply(tf.square(tf.subtract(output, target)), mask), weightin, [[1], [0]]))
    optimizer = tf.train.AdamOptimizer()
    minimize1 = optimizer.minimize(meansquared)
    init_op = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init_op)
    print(sess.run(a, {testinput: [xtesting[0, :, :]]}))

    batch_size = 5
    no_of_batches = int(30/batch_size)
    run = 0
    maxerror = 10000000000
    flag = True
    for each in range(10000):
        if flag:
            ptr = 0
            for j in range(no_of_batches):
                inp, out, win, maskin = xtesting[ptr:ptr+batch_size, :, :], ytesting[:, ptr:ptr+batch_size, :], weights2[ptr:ptr+batch_size], bmask[:, ptr:ptr+batch_size, :]
                ptr += batch_size
                sess.run(minimize1, {testinput: inp, target: out, weightin: win, mask: maskin})
            validerror = sess.run(meansquared, {testinput: xtesting, target: ytesting, weightin: weights2, mask: bmaskvalidate})
            print(sess.run(meansquared, {testinput: xtesting, target: ytesting, weightin: weights2, mask: bmask}))
            print(validerror)
            print(each)
            if validerror < maxerror:
                run = each
                maxerror = validerror
            if each > run + 25:
                flag = False
    print(sess.run(output, {testinput: [xtesting[0, :, :]]}))
    print(sess.run(meansquared, {testinput: [xtesting[0, :, :]], target: [ytesting[:, 0, :]], weightin: [weights2[0]], mask: [bmask2[:, 0, :]]})/24.0)
The error is: TypeError: Expected binary or unicode string, got <tf.Tensor 'rnn/transpose:0' shape=(?, 238, 4) dtype=float64>. The error is caused by the line out1 = tf.reshape(numpy.array(out1), [-1, 238, 4]).
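A likely reading of this error (my interpretation, not confirmed in the question): numpy.array() is being called on a Python list of symbolic tf.Tensor objects, which NumPy cannot convert into a numeric array, hence the "Expected binary or unicode string" TypeError. In TF 1.x, a list of tensors is normally combined inside the graph with tf.stack() or tf.concat() instead. The underlying list-then-stack pattern can be sketched in plain NumPy; the shapes below are illustrative, chosen to mirror the 238 timesteps and 4 units visible in the error message:

```python
import numpy as np

# Collect one output per timestep in a Python list, then stack the
# list into a single array. This is the same pattern tf.stack()
# applies to a list of symbolic tensors inside the TF graph, where
# numpy.array() cannot be used.
timesteps, neurons = 238, 4
outputs = [np.zeros((1, neurons)) for _ in range(timesteps)]

# Stacking along axis 1 yields shape (1, 238, 4), matching the
# [-1, 238, 4] layout the reshape in the question is aiming for.
stacked = np.stack(outputs, axis=1)
print(stacked.shape)  # (1, 238, 4)
```

In the TF 1.x graph itself, the equivalent step would be tf.stack(out1, axis=1) in place of numpy.array(out1), keeping every element symbolic until sess.run().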