The difference between tf.shape() and tensor.get_shape()
https://stackoverflow.com/questions/36966316/how-to-get-the-dimensions-of-a-tensor-in-tensorflow-at-graph-construction-time
I see most people confused about tf.shape(tensor) and tensor.get_shape(). Let's make it clear:
tf.shape
is used for the dynamic shape. If your tensor's shape is changeable, use it. An example: an input is an image with a changeable width and height, and we want to resize it to half of its size. Then we can write something like:
new_height = tf.shape(image)[0] / 2
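To see what that run-time computation does, here is a minimal NumPy analogy (not TensorFlow itself; the array dimensions are chosen arbitrarily for illustration). With a concrete array, reading the shape at run time and halving it looks like this:

```python
import numpy as np

# NumPy analogy: a TF image placeholder may have unknown height/width at
# graph-construction time, and tf.shape(image)[0] reads the actual height
# at run time. A concrete array lets us mimic that run-time computation.
image = np.zeros((480, 640, 3))   # stand-in for a fed-in image

new_height = image.shape[0] // 2  # what tf.shape(image)[0] / 2 computes at run time
new_width = image.shape[1] // 2

print(new_height, new_width)      # 240 320
```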
tensor.get_shape
is used for fixed shapes, which means the tensor's shape can be deduced at graph-construction time.
Conclusion:
tf.shape can be used almost anywhere, but tensor.get_shape only for shapes that can be deduced from the graph.

def shape(tensor):
    s = tensor.get_shape()
    return tuple([s[i].value for i in range(len(s))])
Example:
batch_size, num_feats = shape(logits)
https://stackoverflow.com/questions/43563609/how-tf-transpose-works-in-tensorflow
Looking at the numpy.transpose documentation, we find that transpose takes the argument
axes : list of ints, optional
By default, reverse the dimensions, otherwise permute the axes according to the values given.
So the default call to transpose translates into np.transpose(a, axes=[1,0]) for the 2D case, or np.transpose(a, axes=[2,1,0]) for the 3D case.
The operation you want here is one that leaves the "depth" dimension unchanged. Therefore, in the axes argument, the depth axis, which is the 0th axis, needs to stay unchanged. Axes 1 and 2 (where 1 is the vertical axis) need to change positions. So you change the axes order from the initial [0,1,2] to [0,2,1] ([stays the same, changes with other, changes with other]).
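The two calls above can be checked quickly in NumPy (the array contents and shapes here are arbitrary, picked just for illustration):

```python
import numpy as np

# A small "depth, vertical, horizontal" array: 2 slices of shape (3, 4).
a = np.arange(24).reshape(2, 3, 4)

# Default transpose reverses all dimensions: (2, 3, 4) -> (4, 3, 2).
print(np.transpose(a).shape)         # (4, 3, 2)

# Keep the depth axis (0) fixed and swap the other two: (2, 3, 4) -> (2, 4, 3).
b = np.transpose(a, axes=[0, 2, 1])
print(b.shape)                       # (2, 4, 3)

# Each depth slice is now the 2D transpose of the original slice.
print(np.array_equal(b[0], a[0].T))  # True
```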
In TensorFlow, they have for some reason renamed axes to perm. The argument from above stays the same.
images
Concerning images, they differ from the arrays in the question. Images normally have their x and y stored in the first two dimensions and the channel in the last, i.e. [y,x,channel].
In order to "transpose" an image in the sense of a 2D transposition, where the horizontal and vertical axes are exchanged, you would need to use np.transpose(a, axes=[1,0,2]) (the channel stays the same, x and y are exchanged).
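A minimal sketch with a dummy image array (the 480x640x3 dimensions are an arbitrary choice for illustration):

```python
import numpy as np

# A dummy [y, x, channel] image: 480 rows, 640 columns, 3 channels.
img = np.zeros((480, 640, 3))

# Swap the vertical and horizontal axes while leaving the channels in place.
flipped = np.transpose(img, axes=[1, 0, 2])

print(flipped.shape)  # (640, 480, 3)
```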
https://stackoverflow.com/questions/36966316/how-to-get-the-dimensions-of-a-tensor-in-tensorflow-at-graph-construction-time
Let's make it simple as hell. If you want a single number for the number of dimensions, like 2, 3, 4, etc., then just use tf.rank(). But if you want the exact shape of the tensor, then use tensor.get_shape().
with tf.Session() as sess:
    arr = tf.random_normal(shape=(10, 32, 32, 128))
    a = tf.random_gamma(shape=(3, 3, 1), alpha=0.1)
    print(sess.run([tf.rank(arr), tf.rank(a)]))
    print(arr.get_shape(), ", ", a.get_shape())

# for tf.rank()
[4, 3]
# for tensor.get_shape()
Output: (10, 32, 32, 128) , (3, 3, 1)
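Since the session code above needs TensorFlow 1.x installed, the same rank-versus-shape distinction can be sketched in NumPy, where ndim plays the role of tf.rank() and shape the role of tensor.get_shape() (the array dimensions mirror the example above):

```python
import numpy as np

# ndim is a single number (the "rank"); shape is the exact tuple of dimensions.
arr = np.zeros((10, 32, 32, 128))
a = np.zeros((3, 3, 1))

print(arr.ndim, a.ndim)    # 4 3
print(arr.shape, a.shape)  # (10, 32, 32, 128) (3, 3, 1)
```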