(11) TensorFlow Padding and Tiling

tech · 2022-08-09

Padding and Tiling

Overview of the functions covered in this post:

- Padding: `tf.pad(x, paddings)`
- Tiling (copying): `tf.tile(x, multiples)`
- Clipping: `tf.maximum(x, a)`, `tf.minimum(x, a)`

Padding

`tf.pad(x, paddings)` takes `paddings` as a nested list containing one `[left_padding, right_padding]` pair per dimension, e.g. `[[1,2], [2,3], [2,1]]`. Example:

```python
import tensorflow as tf

x = tf.random.normal([2, 3, 2])
print(x)

y1 = tf.pad(x, [[1, 2], [0, 0], [0, 0]])  # pad dim 0: 1 before, 2 after -> shape (5, 3, 2)
print(y1)
y2 = tf.pad(x, [[0, 0], [2, 3], [0, 0]])  # pad dim 1: 2 before, 3 after -> shape (2, 8, 2)
print(y2)
y3 = tf.pad(x, [[0, 0], [0, 0], [2, 1]])  # pad dim 2: 2 before, 1 after -> shape (2, 3, 5)
print(y3)
y4 = tf.pad(x, [[1, 2], [2, 3], [2, 1]])  # pad all three dims at once -> shape (5, 8, 5)
print(y4)
```

out (abridged):

```
tf.Tensor(
[[[ 0.1766518  -0.5856068 ]
  [-0.19644707  1.9648746 ]
  [ 0.24688447  1.1661692 ]]

 [[ 1.2609143   0.30764177]
  [ 0.02750975 -0.72527266]
  [-2.2883105   0.10192525]]], shape=(2, 3, 2), dtype=float32)
```

The padded results (elided here for brevity) have shapes `(5, 3, 2)`, `(2, 8, 2)`, `(2, 3, 5)`, and `(5, 8, 5)` respectively; every padded position is filled with zeros.
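A common use of `tf.pad` is enlarging image tensors before a convolution. As a minimal sketch (the 28×28 "images" here are just random data used for illustration), padding a batch of 28×28 single-channel images to 32×32:

```python
import tensorflow as tf

# A batch of four 28x28 single-channel "images" (random data for illustration).
images = tf.random.normal([4, 28, 28, 1])

# Pad 2 pixels on each side of the height and width dimensions only,
# leaving the batch and channel dimensions untouched.
padded = tf.pad(images, [[0, 0], [2, 2], [2, 2], [0, 0]])

print(padded.shape)  # (4, 32, 32, 1)
```

Only the two middle dimensions get a nonzero `[before, after]` pair, so the batch size and channel count are preserved.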

Tiling (copying)

`tf.tile(x, multiples)` takes `multiples` as a list giving the number of copies along each dimension. Example:

```python
import tensorflow as tf

x = tf.random.normal([2, 3, 2])
print(x)
# Copy dim 0 twice, dim 1 three times, dim 2 twice -> shape (4, 9, 4)
print(tf.tile(x, [2, 3, 2]))
```

out (abridged):

```
tf.Tensor(
[[[ 1.0897092  -0.3775045 ]
  [ 0.23899518 -0.39701027]
  [-0.99930507  0.92922664]]

 [[ 0.3259821   1.4621626 ]
  [ 0.7076961  -0.62487763]
  [ 2.5603561   0.6227924 ]]], shape=(2, 3, 2), dtype=float32)
```

The tiled result (elided here for brevity) has shape `(4, 9, 4)`: each dimension of the original data is repeated the requested number of times.
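Note that `tf.tile` physically copies the data. When the copies are only needed as operands of an elementwise operation, broadcasting usually achieves the same result without materializing them by hand. A small sketch comparing the two (the bias vector is a made-up example):

```python
import tensorflow as tf

b = tf.constant([1., 2., 3.])  # a "bias" vector, shape (3,)

# Explicit copy: replicate b into a (2, 3) matrix with tf.tile.
tiled = tf.tile(tf.expand_dims(b, axis=0), [2, 1])

# Implicit copy: tf.broadcast_to yields the same values without manual tiling.
broadcast = tf.broadcast_to(b, [2, 3])

print(bool(tf.reduce_all(tiled == broadcast)))  # True
```

For arithmetic such as `x + b`, TensorFlow broadcasts automatically, so explicit tiling is rarely required there.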

Clipping

`tf.maximum(x, a)` clips values from below (lower bound `a`); `tf.minimum(x, a)` clips values from above (upper bound `a`).

```python
import tensorflow as tf

x = tf.range(9)
a = tf.maximum(x, 3)  # lower-bound clipping
b = tf.minimum(x, 7)  # upper-bound clipping
print(a, '\n', b)
```

out:

```
tf.Tensor([3 3 3 3 4 5 6 7 8], shape=(9,), dtype=int32) 
 tf.Tensor([0 1 2 3 4 5 6 7 7], shape=(9,), dtype=int32)
```

Based on `tf.maximum`, we can implement ReLU:

```python
import tensorflow as tf

def relu(x):
    return tf.maximum(x, 0.)
```
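A quick check of this function on a few sample values (redefined here so the snippet is self-contained; the input values are chosen only for illustration):

```python
import tensorflow as tf

def relu(x):
    return tf.maximum(x, 0.)

out = relu(tf.constant([-2., -0.5, 0., 1.5, 3.]))
print(out)  # negatives become 0, non-negatives pass through unchanged
```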

Clipping with both lower and upper bounds

```python
import tensorflow as tf

x = tf.range(11)
a = tf.minimum(tf.maximum(x, 2), 7)  # clip values to the range [2, 7]
print(a)
```

out:

```
tf.Tensor([2 2 2 3 4 5 6 7 7 7 7], shape=(11,), dtype=int32)
```
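TensorFlow also provides `tf.clip_by_value`, which performs the same two-sided clipping in a single call:

```python
import tensorflow as tf

x = tf.range(11)
a = tf.clip_by_value(x, 2, 7)  # equivalent to tf.minimum(tf.maximum(x, 2), 7)
print(a)  # tf.Tensor([2 2 2 3 4 5 6 7 7 7 7], shape=(11,), dtype=int32)
```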