4-1 Structural Operations of the Tensor

Tensor operations include structural operations and mathematical operations.

Structural operations include tensor creation, indexing and slicing, dimension transformation, combining & splitting, etc.

Mathematical operations include scalar operations, vector operations, and matrix operations. We will also introduce the broadcasting mechanism of tensor operations.

This section covers the structural operations of tensors.

1. Creating Tensor

Tensor creation is similar to array creation in numpy.

  import tensorflow as tf
  import numpy as np

  a = tf.constant([1,2,3],dtype = tf.float32)
  tf.print(a)

  [1 2 3]

  b = tf.range(1,10,delta = 2)
  tf.print(b)

  [1 3 5 7 9]

  c = tf.linspace(0.0,2*3.14,100)
  tf.print(c)

  [0 0.0634343475 0.126868695 ... 6.15313148 6.21656609 6.28]

  d = tf.zeros([3,3])
  tf.print(d)

  [[0 0 0]
   [0 0 0]
   [0 0 0]]

  a = tf.ones([3,3])
  b = tf.zeros_like(a,dtype= tf.float32)
  tf.print(a)
  tf.print(b)

  [[1 1 1]
   [1 1 1]
   [1 1 1]]
  [[0 0 0]
   [0 0 0]
   [0 0 0]]

  b = tf.fill([3,2],5)
  tf.print(b)

  [[5 5]
   [5 5]
   [5 5]]

  # Random numbers with uniform distribution
  tf.random.set_seed(1.0)
  a = tf.random.uniform([5],minval=0,maxval=10)
  tf.print(a)

  [1.65130854 9.01481247 6.30974197 4.34546089 2.9193902]

  # Random numbers with normal distribution
  b = tf.random.normal([3,3],mean=0.0,stddev=1.0)
  tf.print(b)

  [[0.403087884 -1.0880208 -0.0630953535]
   [1.33655667 0.711760104 -0.489286453]
   [-0.764221311 -1.03724861 -1.25193381]]

  # Random numbers with normal distribution, truncated within 2 standard deviations
  c = tf.random.truncated_normal((5,5), mean=0.0, stddev=1.0, dtype=tf.float32)
  tf.print(c)

  [[-0.457012236 -0.406867266 0.728577733 -0.892977774 -0.369404584]
   [0.323488563 1.19383323 0.888299048 1.25985599 -1.95951891]
   [-0.202244401 0.294496894 -0.468728036 1.29494202 1.48142183]
   [0.0810953453 1.63843894 0.556645 0.977199793 -1.17777884]
   [1.67368948 0.0647980496 -0.705142677 -0.281972528 0.126546144]]

  # Special matrices
  I = tf.eye(3,3) # Identity matrix
  tf.print(I)
  tf.print(" ")
  t = tf.linalg.diag([1,2,3]) # Diagonal matrix
  tf.print(t)

  [[1 0 0]
   [0 1 0]
   [0 0 1]]

  [[1 0 0]
   [0 2 0]
   [0 0 3]]
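Since tensor creation mirrors array creation in numpy, tensors also interoperate with numpy arrays directly. A minimal sketch (not from the original text): tf.constant accepts an ndarray, and the numpy() method converts back.

```python
import numpy as np
import tensorflow as tf

# A tensor can be created directly from a numpy array;
# the dtype is inferred from the array (float64 here).
arr = np.array([[1.0, 2.0], [3.0, 4.0]])
t = tf.constant(arr)

# The numpy() method converts the tensor back to an ndarray.
back = t.numpy()

print(t.dtype)                    # <dtype: 'float64'>
print(np.array_equal(arr, back))  # True
```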

2. Indexing and Slicing

The indexing and slicing of tensors works the same way as in numpy; slicing supports default parameters and the ellipsis.

Tensors of type tf.Variable support modifying the values of certain elements through indexing and slicing.

For referencing a contiguous portion of a tensor, tf.slice is recommended.

For irregular slicing shapes, on the other hand, tf.gather, tf.gather_nd, and tf.boolean_mask are recommended.

The method tf.boolean_mask is the most powerful: it can function as both tf.gather and tf.gather_nd, and it also supports boolean indexing.

To create a new tensor by modifying certain elements of an existing tensor, tf.where and tf.scatter_nd can be used.

  tf.random.set_seed(3)
  t = tf.random.uniform([5,5],minval=0,maxval=10,dtype=tf.int32)
  tf.print(t)

  [[4 7 4 2 9]
   [9 1 2 4 7]
   [7 2 7 4 0]
   [9 6 9 7 2]
   [3 7 0 0 3]]

  # Row 0
  tf.print(t[0])

  [4 7 4 2 9]

  # Last row
  tf.print(t[-1])

  [3 7 0 0 3]

  # Row 1 Column 3
  tf.print(t[1,3])
  tf.print(t[1][3])

  4
  4

  # From row 1 to row 3
  tf.print(t[1:4,:])
  tf.print(tf.slice(t,[1,0],[3,5])) # tf.slice(input,begin_vector,size_vector)

  [[9 1 2 4 7]
   [7 2 7 4 0]
   [9 6 9 7 2]]
  [[9 1 2 4 7]
   [7 2 7 4 0]
   [9 6 9 7 2]]

  # From row 1 to the last row, and from column 0 to the last one with an increment of 2
  tf.print(t[1:4,:4:2])

  [[9 2]
   [7 7]
   [9 9]]

  # Variable supports modifying elements through indexing and slicing
  x = tf.Variable([[1,2],[3,4]],dtype = tf.float32)
  x[1,:].assign(tf.constant([0.0,0.0]))
  tf.print(x)

  [[1 2]
   [0 0]]

  a = tf.random.uniform([3,3,3],minval=0,maxval=10,dtype=tf.int32)
  tf.print(a)

  [[[7 3 9]
    [9 0 7]
    [9 6 7]]
   [[1 3 3]
    [0 8 1]
    [3 1 0]]
   [[4 0 6]
    [6 2 2]
    [7 9 5]]]

  # Ellipsis represents multiple colons
  tf.print(a[...,1])

  [[3 0 6]
   [3 8 1]
   [0 2 9]]

The examples above are regular slicing; for irregular slicing, tf.gather, tf.gather_nd, and tf.boolean_mask can be used.

Here is an example of student grade records. There are 4 classes, 10 students in each class, and 7 courses for each student, which can be represented as a tensor of shape 4×10×7.

  scores = tf.random.uniform((4,10,7),minval=0,maxval=100,dtype=tf.int32)
  tf.print(scores)

  [[[52 82 66 ... 17 86 14]
    [8 36 94 ... 13 78 41]
    [77 53 51 ... 22 91 56]
    ...
    [11 19 26 ... 89 86 68]
    [60 72 0 ... 11 26 15]
    [24 99 38 ... 97 44 74]]
   [[79 73 73 ... 35 3 81]
    [83 36 31 ... 75 38 85]
    [54 26 67 ... 60 68 98]
    ...
    [20 5 18 ... 32 45 3]
    [72 52 81 ... 88 41 20]
    [0 21 89 ... 53 10 90]]
   [[52 80 22 ... 29 25 60]
    [78 71 54 ... 43 98 81]
    [21 66 53 ... 97 75 77]
    ...
    [6 74 3 ... 53 65 43]
    [98 36 72 ... 33 36 81]
    [61 78 70 ... 7 59 21]]
   [[56 57 45 ... 23 15 3]
    [35 8 82 ... 11 59 97]
    [44 6 99 ... 81 60 27]
    ...
    [76 26 35 ... 51 8 17]
    [33 52 53 ... 78 37 31]
    [71 27 44 ... 0 52 16]]]

  # Extract all the grades of the 0th, 5th and 9th students in each class.
  p = tf.gather(scores,[0,5,9],axis=1)
  tf.print(p)

  [[[52 82 66 ... 17 86 14]
    [24 80 70 ... 72 63 96]
    [24 99 38 ... 97 44 74]]
   [[79 73 73 ... 35 3 81]
    [46 10 94 ... 23 18 92]
    [0 21 89 ... 53 10 90]]
   [[52 80 22 ... 29 25 60]
    [19 12 23 ... 87 86 25]
    [61 78 70 ... 7 59 21]]
   [[56 57 45 ... 23 15 3]
    [6 41 79 ... 97 43 13]
    [71 27 44 ... 0 52 16]]]

  # Extract the grades of the 1st, 3rd and 6th courses of the 0th, 5th and 9th students in each class.
  q = tf.gather(tf.gather(scores,[0,5,9],axis=1),[1,3,6],axis=2)
  tf.print(q)

  [[[82 55 14]
    [80 46 96]
    [99 58 74]]
   [[73 48 81]
    [10 38 92]
    [21 86 90]]
   [[80 57 60]
    [12 34 25]
    [78 71 21]]
   [[57 75 3]
    [41 47 13]
    [27 96 16]]]

  # Extract all the grades of the 0th student in the 0th class, the 4th student in the 2nd class, and the 6th student in the 3rd class.
  # The length of the parameter indices equals the number of samples, and each element of indices is the coordinate of one sample.
  s = tf.gather_nd(scores,indices = [(0,0),(2,4),(3,6)])
  s

  <tf.Tensor: shape=(3, 7), dtype=int32, numpy=
  array([[52, 82, 66, 55, 17, 86, 14],
         [99, 94, 46, 70, 1, 63, 41],
         [46, 83, 70, 80, 90, 85, 17]], dtype=int32)>

The functions of tf.gather and tf.gather_nd shown above can also be achieved through tf.boolean_mask.

  # Extract all the grades of the 0th, 5th and 9th students in each class.
  p = tf.boolean_mask(scores,[True,False,False,False,False,
                              True,False,False,False,True],axis=1)
  tf.print(p)

  [[[52 82 66 ... 17 86 14]
    [24 80 70 ... 72 63 96]
    [24 99 38 ... 97 44 74]]
   [[79 73 73 ... 35 3 81]
    [46 10 94 ... 23 18 92]
    [0 21 89 ... 53 10 90]]
   [[52 80 22 ... 29 25 60]
    [19 12 23 ... 87 86 25]
    [61 78 70 ... 7 59 21]]
   [[56 57 45 ... 23 15 3]
    [6 41 79 ... 97 43 13]
    [71 27 44 ... 0 52 16]]]

  # Extract all the grades of the 0th student in the 0th class, the 4th student in the 2nd class, and the 6th student in the 3rd class.
  s = tf.boolean_mask(scores,
      [[True,False,False,False,False,False,False,False,False,False],
       [False,False,False,False,False,False,False,False,False,False],
       [False,False,False,False,True,False,False,False,False,False],
       [False,False,False,False,False,False,True,False,False,False]])
  tf.print(s)

  [[52 82 66 ... 17 86 14]
   [99 94 46 ... 1 63 41]
   [46 83 70 ... 90 85 17]]

  # Boolean indexing using tf.boolean_mask
  # Find all elements that are less than 0 in the matrix
  c = tf.constant([[-1,1,-1],[2,2,-2],[3,-3,3]],dtype=tf.float32)
  tf.print(c,"\n")
  tf.print(tf.boolean_mask(c,c<0),"\n")
  tf.print(c[c<0]) # This is syntactic sugar of boolean_mask for boolean indexing.

  [[-1 1 -1]
   [2 2 -2]
   [3 -3 3]]

  [-1 -1 -2 -3]
  [-1 -1 -2 -3]

The methods shown above can extract part of the elements of a tensor, but cannot create a new tensor by modifying those elements.

The methods tf.where and tf.scatter_nd should be used for this purpose.

tf.where is the tensor version of if; when called with a single argument, it also finds the coordinates of all elements that satisfy a certain condition.

tf.scatter_nd works in the opposite way to tf.gather_nd: the latter collects elements according to given coordinates, while the former inserts values at given positions in an all-zero tensor of a known shape.

  # Find elements that are less than 0, create a new tensor by replacing these elements with np.nan.
  # tf.where is similar to np.where, which is the "if" for the tensors
  c = tf.constant([[-1,1,-1],[2,2,-2],[3,-3,3]],dtype=tf.float32)
  d = tf.where(c<0,tf.fill(c.shape,np.nan),c)
  d

  <tf.Tensor: shape=(3, 3), dtype=float32, numpy=
  array([[nan, 1., nan],
         [ 2., 2., nan],
         [ 3., nan, 3.]], dtype=float32)>

  # The method where returns all the coordinates that satisfy the condition if there is only one argument
  indices = tf.where(c<0)
  indices

  <tf.Tensor: shape=(4, 2), dtype=int64, numpy=
  array([[0, 0],
         [0, 2],
         [1, 2],
         [2, 1]])>

  # Create a new tensor by replacing the values of the two elements located at [0,0] and [2,1] with 0.
  d = c - tf.scatter_nd([[0,0],[2,1]],[c[0,0],c[2,1]],c.shape)
  d

  <tf.Tensor: shape=(3, 3), dtype=float32, numpy=
  array([[ 0., 1., -1.],
         [ 2., 2., -2.],
         [ 3., 0., 3.]], dtype=float32)>

  # The method scatter_nd functions inversely to gather_nd
  # This method can be used to insert values at given positions in an all-zero tensor with a known shape.
  indices = tf.where(c<0)
  tf.scatter_nd(indices,tf.gather_nd(c,indices),c.shape)

  <tf.Tensor: shape=(3, 3), dtype=float32, numpy=
  array([[-1., 0., -1.],
         [ 0., 0., -2.],
         [ 0., -3., 0.]], dtype=float32)>
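Beyond tf.scatter_nd, TensorFlow 2 also provides tf.tensor_scatter_nd_update, not covered in the text above, which creates the modified tensor in a single call instead of subtracting a scattered tensor. A brief sketch:

```python
import tensorflow as tf

c = tf.constant([[-1.0, 1.0, -1.0],
                 [2.0, 2.0, -2.0],
                 [3.0, -3.0, 3.0]])

# Overwrite the elements at [0,0] and [2,1] with 0 in a new tensor;
# c itself is left unchanged.
d = tf.tensor_scatter_nd_update(c, indices=[[0, 0], [2, 1]], updates=[0.0, 0.0])
tf.print(d)
# [[0 1 -1]
#  [2 2 -2]
#  [3 0 3]]
```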

3. Dimension Transform

The functions related to dimension transformation include tf.reshape, tf.squeeze, tf.expand_dims, and tf.transpose.

tf.reshape is used to alter the shape of the tensor.

tf.squeeze is used to reduce the number of dimensions.

tf.expand_dims is used to increase the number of dimensions.

tf.transpose is used to exchange the order of the dimensions.

tf.reshape changes the shape of a tensor without changing the order of the elements stored in memory, so this operation is extremely fast and reversible.

  a = tf.random.uniform(shape=[1,3,3,2],
                        minval=0,maxval=255,dtype=tf.int32)
  tf.print(a.shape)
  tf.print(a)

  TensorShape([1, 3, 3, 2])
  [[[[135 178]
     [26 116]
     [29 224]]
    [[179 219]
     [153 209]
     [111 215]]
    [[39 7]
     [138 129]
     [59 205]]]]

  # Reshape into (3,6)
  b = tf.reshape(a,[3,6])
  tf.print(b.shape)
  tf.print(b)

  TensorShape([3, 6])
  [[135 178 26 116 29 224]
   [179 219 153 209 111 215]
   [39 7 138 129 59 205]]

  # Reshape back to (1,3,3,2)
  c = tf.reshape(b,[1,3,3,2])
  tf.print(c)

  [[[[135 178]
     [26 116]
     [29 224]]
    [[179 219]
     [153 209]
     [111 215]]
    [[39 7]
     [138 129]
     [59 205]]]]

When a dimension contains only one element, tf.squeeze eliminates this dimension.

Like tf.reshape, it does not change the order of the elements stored in memory.

The elements of a tensor are stored linearly; usually, adjacent elements in the same dimension occupy adjacent physical addresses.

  s = tf.squeeze(a)
  tf.print(s.shape)
  tf.print(s)

  TensorShape([3, 3, 2])
  [[[135 178]
    [26 116]
    [29 224]]
   [[179 219]
    [153 209]
    [111 215]]
   [[39 7]
    [138 129]
    [59 205]]]

  d = tf.expand_dims(s,axis=0) # Insert an extra dimension with length 1 at the 0th dim
  d

  <tf.Tensor: shape=(1, 3, 3, 2), dtype=int32, numpy=
  array([[[[135, 178],
           [ 26, 116],
           [ 29, 224]],
          [[179, 219],
           [153, 209],
           [111, 215]],
          [[ 39, 7],
           [138, 129],
           [ 59, 205]]]], dtype=int32)>
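As a quick check of the point above that tf.squeeze and tf.expand_dims, like tf.reshape, preserve the linear order of the stored elements, one can flatten the tensors and compare (a small sketch, not from the original text):

```python
import tensorflow as tf

a = tf.constant([[[1, 2], [3, 4]]])   # shape (1, 2, 2)
s = tf.squeeze(a)                     # shape (2, 2)
d = tf.expand_dims(s, axis=0)         # shape (1, 2, 2)

# Flattening all three tensors yields the same element sequence,
# showing that only the shape metadata changed.
tf.print(tf.reshape(a, [-1]))  # [1 2 3 4]
tf.print(tf.reshape(s, [-1]))  # [1 2 3 4]
tf.print(tf.reshape(d, [-1]))  # [1 2 3 4]
```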

tf.transpose swaps the dimensions of a tensor; unlike tf.reshape, it changes the order of the elements in memory.

tf.transpose is commonly used to convert between image storage formats.

  # Batch,Height,Width,Channel
  a = tf.random.uniform(shape=[100,600,600,4],minval=0,maxval=255,dtype=tf.int32)
  tf.print(a.shape)
  # Transform to the order Channel,Height,Width,Batch
  s = tf.transpose(a,perm=[3,1,2,0])
  tf.print(s.shape)

  TensorShape([100, 600, 600, 4])
  TensorShape([4, 600, 600, 100])
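The change in memory order under tf.transpose can be seen by flattening a small matrix before and after transposing (a minimal sketch, not from the original text):

```python
import tensorflow as tf

m = tf.constant([[1, 2, 3],
                 [4, 5, 6]])
t = tf.transpose(m)  # shape (3, 2)

# Flattening shows the element order has changed, unlike with tf.reshape.
tf.print(tf.reshape(m, [-1]))  # [1 2 3 4 5 6]
tf.print(tf.reshape(t, [-1]))  # [1 4 2 5 3 6]
```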

4. Combining and Splitting

We can use tf.concat and tf.stack to combine multiple tensors, and tf.split to split one tensor into multiple ones; these are similar to their numpy counterparts.

tf.concat differs slightly from tf.stack: tf.concat concatenates along an existing dimension and does not increase the number of dimensions, while tf.stack stacks along a new dimension and increases the number of dimensions by one.

  a = tf.constant([[1.0,2.0],[3.0,4.0]])
  b = tf.constant([[5.0,6.0],[7.0,8.0]])
  c = tf.constant([[9.0,10.0],[11.0,12.0]])
  tf.concat([a,b,c],axis = 0)

  <tf.Tensor: shape=(6, 2), dtype=float32, numpy=
  array([[ 1., 2.],
         [ 3., 4.],
         [ 5., 6.],
         [ 7., 8.],
         [ 9., 10.],
         [11., 12.]], dtype=float32)>

  tf.concat([a,b,c],axis = 1)

  <tf.Tensor: shape=(2, 6), dtype=float32, numpy=
  array([[ 1., 2., 5., 6., 9., 10.],
         [ 3., 4., 7., 8., 11., 12.]], dtype=float32)>

  tf.stack([a,b,c])

  <tf.Tensor: shape=(3, 2, 2), dtype=float32, numpy=
  array([[[ 1., 2.],
          [ 3., 4.]],
         [[ 5., 6.],
          [ 7., 8.]],
         [[ 9., 10.],
          [11., 12.]]], dtype=float32)>

  tf.stack([a,b,c],axis=1)

  <tf.Tensor: shape=(2, 3, 2), dtype=float32, numpy=
  array([[[ 1., 2.],
          [ 5., 6.],
          [ 9., 10.]],
         [[ 3., 4.],
          [ 7., 8.],
          [11., 12.]]], dtype=float32)>

  a = tf.constant([[1.0,2.0],[3.0,4.0]])
  b = tf.constant([[5.0,6.0],[7.0,8.0]])
  c = tf.constant([[9.0,10.0],[11.0,12.0]])
  c = tf.concat([a,b,c],axis = 0)

tf.split is the inverse of tf.concat. It allows even splitting given the number of portions, or splitting given the size of each portion.

  # tf.split(value,num_or_size_splits,axis)
  tf.split(c,3,axis = 0) # Even splitting with a given number of portions

  [<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
   array([[1., 2.],
          [3., 4.]], dtype=float32)>,
   <tf.Tensor: shape=(2, 2), dtype=float32, numpy=
   array([[5., 6.],
          [7., 8.]], dtype=float32)>,
   <tf.Tensor: shape=(2, 2), dtype=float32, numpy=
   array([[ 9., 10.],
          [11., 12.]], dtype=float32)>]

  tf.split(c,[2,2,2],axis = 0) # Splitting with a given size for each portion

  [<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
   array([[1., 2.],
          [3., 4.]], dtype=float32)>,
   <tf.Tensor: shape=(2, 2), dtype=float32, numpy=
   array([[5., 6.],
          [7., 8.]], dtype=float32)>,
   <tf.Tensor: shape=(2, 2), dtype=float32, numpy=
   array([[ 9., 10.],
          [11., 12.]], dtype=float32)>]
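The size-based split above still happens to be even; a genuinely uneven split with the same API might look like this (a small sketch, not from the original text):

```python
import tensorflow as tf

c = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0],
                 [7.0, 8.0], [9.0, 10.0], [11.0, 12.0]])

# Split the 6 rows into portions of size 1, 2 and 3 along axis 0.
parts = tf.split(c, [1, 2, 3], axis=0)
for p in parts:
    tf.print(p.shape)
# TensorShape([1, 2])
# TensorShape([2, 2])
# TensorShape([3, 2])
```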

Please leave comments in the WeChat official account "Python与算法之美" (Elegance of Python and Algorithms) if you want to discuss the content with the author. The author will try his best to reply given the limited time available.

You are also welcome to join the group chat with the other readers by replying 加群 (join group) in the WeChat official account.
