python - TensorFlow ReLU normalizes strangely
In my opinion, the rectified linear unit is supposed to execute the following function:
relu(x) = max(x, 0)
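For reference, here is a minimal sketch of that definition in plain NumPy (my own illustration, not taken from TensorFlow):

import numpy as np

def relu(x):
    # Element-wise max(x, 0): negative entries become 0,
    # everything else passes through unchanged.
    return np.maximum(x, 0)

print(relu(np.array([-1.5, 0.0, 2.5])))  # prints [0.  0.  2.5]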
However, this does not seem to be the case with tf.nn.relu:
import tensorflow as tf
import numpy as np

rand_large = np.random.randn(10, 3) * 100
x = tf.placeholder(tf.float32, [10, 3])
sess = tf.Session()
sess.run(tf.nn.relu(x), feed_dict={x: rand_large})
The random matrix looks like this:
>>> rand_large
array([[  21.94064161,  -82.16632876,   16.25152777],
       [  55.54897693,  -93.15235155,  118.99166126],
       [ -13.36452239,   39.36508285,   65.42844521],
       [-193.34041145,  -97.08632376,   99.22162259],
       [  87.02924619,    2.04134891,  -27.29975745],
       [-181.11406687,   43.55952393,   42.29312993],
       [ -29.81242188,   93.5764354 , -165.62711447],
       [  17.78380711, -171.30536766, -197.20709038],
       [ 105.94903623,   34.07995616,   -7.27568839],
       [-100.59533697, -189.88957685,   -7.52421816]])
And the output of the relu function looks like this:
>>> sess.run(tf.nn.relu(x), feed_dict={x: rand_large})
array([[ 1. ,  0.5,  0.5],
       [ 0.5,  0.5,  0.5],
       [ 0.5,  0.5,  0.5],
       [ 0.5,  0.5,  0.5],
       [ 0.5,  0.5,  0.5],
       [ 0.5,  0.5,  0.5],
       [ 0.5,  0.5,  0.5],
       [ 0.5,  0.5,  0.5],
       [ 0.5,  0.5,  0.5],
       [ 0.5,  0.5,  0.5]], dtype=float32)
So, if I see this correctly, tf.nn.relu performs some sort of normalization, right? And if yes, why isn't this mentioned in the docs?
Okay, I found out the whole issue was that my TensorFlow installation seemed to be corrupt. On another machine, I got the expected results. Thank you all for the helpful comments.
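In case anyone else sees similarly nonsensical outputs: one possible sanity check (a sketch of my own, using the same TF 1.x API as the question) is to confirm which build is loaded and run a trivial graph; garbage here points at the installation rather than your code:

import tensorflow as tf

# Confirm which TensorFlow build is actually being imported.
print(tf.__version__)

# Run a trivial graph; anything other than [1. 2.] suggests a broken install.
sess = tf.Session()
print(sess.run(tf.constant([1.0, 2.0])))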
tf.nn.relu does not normalize the data. For example, if you run
import tensorflow as tf
import numpy as np

x = tf.placeholder(tf.float32, [2, 3])
relu_x = tf.nn.relu(x)
sess = tf.Session()
mat = np.array([[-1, 2, 3], [2, -5, 1]])
sess.run(relu_x, feed_dict={x: mat})
the result is
array([[ 0.,  2.,  3.],
       [ 2.,  0.,  1.]], dtype=float32)
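which is exactly element-wise max(x, 0). As a further check, here is a small sketch (again assuming the TF 1.x API used above) comparing tf.nn.relu against np.maximum on random data; the two should agree exactly, with no scaling or normalization:

import numpy as np
import tensorflow as tf

mat = (np.random.randn(4, 3) * 100).astype(np.float32)
x = tf.placeholder(tf.float32, [4, 3])
sess = tf.Session()

tf_out = sess.run(tf.nn.relu(x), feed_dict={x: mat})
np_out = np.maximum(mat, 0)

# tf.nn.relu is element-wise max(x, 0); the outputs match exactly.
print(np.array_equal(tf_out, np_out))  # True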