Originally published at: https://zhuanlan.zhihu.com/p/138731311

Linear algebra courses cover two-dimensional matrix multiplication, but tf.matmul can also handle higher-dimensional tensors. For example:

import tensorflow as tf
import numpy as np
a = tf.random.uniform([2, 1, 2, 3])
b = tf.random.uniform([1, 3, 3, 2])
c = tf.matmul(a, b)

What is c?

Let's state the conclusion up front: no matter how many dimensions the tensors have, tf.matmul first multiplies the matrices formed by the last two axes, then repeats that multiplication across the remaining axes.

A multi-dimensional tf.matmul(a, b) has two shape requirements:

1. The size of a along axis=-1 must equal the size of b along axis=-2. In the example above, that is the trailing 3 of a's shape [2, 1, 2, 3] and the 3 at axis=-2 of b's shape [1, 3, 3, 2].

2. For every remaining axis (all axes except -1 and -2), the sizes of a and b must either be equal or one of them must be 1.
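These two rules can be sketched as a small shape-inference helper. The function name matmul_output_shape is hypothetical (it is not a TensorFlow API), and np.matmul follows the same batching rule as tf.matmul, so NumPy can cross-check the result:

```python
import numpy as np

def matmul_output_shape(shape_a, shape_b):
    """Hypothetical helper: infer the output shape of a batched matmul
    under the two rules above (inputs of equal rank assumed)."""
    # Rule 1: the inner dimensions must agree.
    assert shape_a[-1] == shape_b[-2], "axis=-1 of a must equal axis=-2 of b"
    batch = []
    # Rule 2: each batch dimension must be equal, or one of them must be 1.
    for da, db in zip(shape_a[:-2], shape_b[:-2]):
        assert da == db or da == 1 or db == 1, "incompatible batch dimensions"
        batch.append(max(da, db))
    return tuple(batch) + (shape_a[-2], shape_b[-1])

# Cross-check against NumPy's batched matmul on the example shapes above:
a = np.ones((2, 1, 2, 3))
b = np.ones((1, 3, 3, 2))
assert matmul_output_shape(a.shape, b.shape) == np.matmul(a, b).shape  # (2, 3, 2, 2)
```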

For example, multiplying a tensor of shape [3, 2, 3] with a tensor of shape [3, 3, 2] via tf.matmul:

In [84]: import tensorflow as tf
    ...: import numpy as np
    ...: a = tf.random.uniform([3, 2, 3])
    ...: b = tf.random.uniform([3, 3, 2])
    ...: c = tf.matmul(a, b)
    ...: c.shape
Out[84]: TensorShape([3, 2, 2])

In [87]: tf.matmul(a[0], b[0])
Out[87]:
<tf.Tensor: id=374, shape=(2, 2), dtype=float32, numpy=
array([[1.4506222 , 1.323427  ],
       [0.28268352, 0.2917934 ]], dtype=float32)>

In [88]: tf.matmul(a[1], b[1])
Out[88]:
<tf.Tensor: id=383, shape=(2, 2), dtype=float32, numpy=
array([[1.0278544 , 0.4219831 ],
       [0.865297  , 0.87740964]], dtype=float32)>

In [89]: c
Out[89]:
<tf.Tensor: id=365, shape=(3, 2, 2), dtype=float32, numpy=
array([[[1.4506222 , 1.323427  ],
        [0.28268352, 0.2917934 ]],
       [[1.0278544 , 0.4219831 ],
        [0.865297  , 0.8774096 ]],
       [[0.5752927 , 0.13066964],
        [0.5343988 , 0.2741483 ]]], dtype=float32)>

As the output shows, tf.matmul on a [3, 2, 3] tensor and a [3, 3, 2] tensor can be understood as two steps:

Step 1: over axes 1 and 2, perform the ordinary two-dimensional multiplication of a [2, 3] matrix with a [3, 2] matrix, producing a [2, 2] result.

Step 2: along axis 0, pair the i-th slice of a with the i-th slice of b and apply step 1 to each pair, yielding an output of shape [3, 2, 2].
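The two steps above can be sketched in NumPy, whose np.matmul uses the same batching rule as tf.matmul (random values here, so the numbers differ from the transcript):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((3, 2, 3))
b = rng.random((3, 3, 2))

# The batched product in one call:
c = np.matmul(a, b)                               # shape (3, 2, 2)

# Steps 1 and 2 by hand: a [2, 3] x [3, 2] product for each index i on axis 0.
manual = np.stack([a[i] @ b[i] for i in range(3)])

assert c.shape == (3, 2, 2)
assert np.allclose(c, manual)
```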

If the axis=0 sizes of a and b do not match, tf.matmul raises an error:

In [95]: import tensorflow as tf
    ...: import numpy as np
    ...: a = tf.random.uniform([2, 2, 3])
    ...: b = tf.random.uniform([3, 3, 2])
    ...: c = tf.matmul(a, b)
    ...: c.shape
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-95-462c4976a35a> in <module>
----> 5 c = tf.matmul(a, b)
(library frames omitted)
InvalidArgumentError: In[0] and In[1] must have compatible batch dimensions: [2,2,3] vs. [3,3,2] [Op:BatchMatMulV2] name: MatMul/

But when one of the axis=0 sizes of a and b is 1, there is no error:

In [90]: import tensorflow as tf
    ...: import numpy as np
    ...: a = tf.random.uniform([1, 2, 3])
    ...: b = tf.random.uniform([3, 3, 2])
    ...: c = tf.matmul(a, b)
    ...: c.shape
Out[90]: TensorShape([3, 2, 2])

In [91]: c
Out[91]:
<tf.Tensor: id=398, shape=(3, 2, 2), dtype=float32, numpy=
array([[[0.59542704, 0.60751694],
        [0.19115494, 0.36344892]],
       [[1.0542538 , 0.75257593],
        [0.26940605, 0.24408351]],
       [[1.1716111 , 0.4058628 ],
        [0.09086016, 0.28043625]]], dtype=float32)>

In [92]: tf.matmul(a[0], b[0])
Out[92]:
<tf.Tensor: id=407, shape=(2, 2), dtype=float32, numpy=
array([[0.59542704, 0.60751694],
       [0.19115494, 0.36344892]], dtype=float32)>

In [93]: tf.matmul(a[0], b[1])
Out[93]:
<tf.Tensor: id=416, shape=(2, 2), dtype=float32, numpy=
array([[1.0542538 , 0.7525759 ],
       [0.26940605, 0.2440835 ]], dtype=float32)>

In [94]: tf.matmul(a[0], b[2])
Out[94]:
<tf.Tensor: id=425, shape=(2, 2), dtype=float32, numpy=
array([[1.1716112 , 0.4058628 ],
       [0.09086016, 0.28043625]], dtype=float32)>

The same rule applies: multiply the last two axes first, then assemble the results. The only difference is that a's axis=0 size is 1, so a's single slice is paired with every axis=0 slice of b. (The code and its output make this clearest.)
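This broadcasting behaviour can be sketched in NumPy, whose np.matmul batches the same way: a's batch size is 1, so its single [2, 3] matrix is paired with every slice of b.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((1, 2, 3))   # batch of 1
b = rng.random((3, 3, 2))   # batch of 3

c = np.matmul(a, b)                                # shape (3, 2, 2)

# The lone a[0] is reused against each slice of b:
manual = np.stack([a[0] @ b[i] for i in range(3)])

assert np.allclose(c, manual)
```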

So for three dimensions we can conclude:

First multiply the matrices formed by the last two axes, then repeat that multiplication across the remaining axis.

A three-dimensional tf.matmul(a, b) has two shape requirements:

1. The size of a along axis=2 must equal the size of b along axis=1.

2. The axis=0 sizes of a and b must either be equal or one of them must be 1.

Now consider higher dimensions, for example the four-dimensional case.

In [96]: import tensorflow as tf
    ...: import numpy as np
    ...: a = tf.random.uniform([2, 1, 2, 3])
    ...: b = tf.random.uniform([2, 3, 3, 2])
    ...: c = tf.matmul(a, b)
    ...: c.shape
Out[96]: TensorShape([2, 3, 2, 2])

In [97]: c
Out[97]:
<tf.Tensor: id=454, shape=(2, 3, 2, 2), dtype=float32, numpy=
array([[[[1.0685383 , 1.9015994 ],
         [1.1457413 , 1.5246255 ]],
        [[0.953201  , 1.5544493 ],
         [0.7639411 , 1.4360913 ]],
        [[0.67427766, 0.49847895],
         [0.499685  , 0.39281937]]],
       [[[0.42752475, 0.7453967 ],
         [0.3735991 , 0.74812794]],
        [[0.54442215, 0.6510606 ],
         [0.6632798 , 0.38497943]],
        [[0.3459217 , 0.96300673],
         [0.45035997, 0.90772474]]]], dtype=float32)>

In [98]: tf.matmul(a[0], b[0])
Out[98]:
<tf.Tensor: id=463, shape=(3, 2, 2), dtype=float32, numpy=
array([[[1.0685383 , 1.9015994 ],
        [1.1457413 , 1.5246255 ]],
       [[0.953201  , 1.5544493 ],
        [0.7639411 , 1.4360913 ]],
       [[0.67427766, 0.49847895],
        [0.499685  , 0.39281937]]], dtype=float32)>

In [99]: tf.matmul(a[1], b[1])
Out[99]:
<tf.Tensor: id=472, shape=(3, 2, 2), dtype=float32, numpy=
array([[[0.42752475, 0.7453967 ],
        [0.3735991 , 0.74812794]],
       [[0.54442215, 0.6510606 ],
        [0.6632798 , 0.38497943]],
       [[0.3459217 , 0.96300673],
        [0.45035997, 0.90772474]]], dtype=float32)>

This is consistent with the three-dimensional case: tf.matmul is applied level by level, and everything reduces to two-dimensional matrix multiplications over the last two axes.
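The four-dimensional case can likewise be unrolled in NumPy (np.matmul batches the same way): every pair of batch indices reduces to one two-dimensional multiplication of the last two axes, with a's size-1 axis broadcast against b's.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((2, 1, 2, 3))   # axis=1 has size 1 and will broadcast
b = rng.random((2, 3, 3, 2))

c = np.matmul(a, b)                                # shape (2, 3, 2, 2)

# Unroll both batch axes by hand; a[i, 0] is reused across j:
manual = np.stack([
    np.stack([a[i, 0] @ b[i, j] for j in range(3)])
    for i in range(2)
])

assert c.shape == (2, 3, 2, 2)
assert np.allclose(c, manual)
```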

Likewise, when one of the sizes at a given batch axis is 1, broadcasting still applies:

In [100]: import tensorflow as tf
     ...: import numpy as np
     ...: a = tf.random.uniform([2, 1, 2, 3])
     ...: b = tf.random.uniform([1, 3, 3, 2])
     ...: c = tf.matmul(a, b)
     ...: c.shape
Out[100]: TensorShape([2, 3, 2, 2])

No need to belabor the point.

Final conclusion: no matter how many dimensions the tensors have, tf.matmul first multiplies the matrices formed by the last two axes, then repeats that multiplication across the remaining axes.

A multi-dimensional tf.matmul(a, b) has two shape requirements:

1. The size of a along axis=-1 must equal the size of b along axis=-2.

2. For every remaining axis (all axes except -1 and -2), the sizes of a and b must either be equal or one of them must be 1.

Here are a few additional examples where a and b have different numbers of dimensions, to build intuition:

In [105]: import tensorflow as tf
     ...: import numpy as np
     ...: a = tf.random.uniform([2, 1, 2, 3])
     ...: b = tf.random.uniform([1, 3, 2])
     ...: c = tf.matmul(a, b)
     ...: c.shape
Out[105]: TensorShape([2, 1, 2, 2])

In [106]: a = tf.random.uniform([2, 1, 2, 3])
     ...: b = tf.random.uniform([7, 3, 2])
     ...: c = tf.matmul(a, b)
     ...: c.shape
Out[106]: TensorShape([2, 7, 2, 2])

In [107]: a = tf.random.uniform([2, 1, 2, 3])
     ...: b = tf.random.uniform([7, 9, 3, 2])
     ...: c = tf.matmul(a, b)
     ...: c.shape
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-107-ff6e40117cf7> in <module>
----> 5 c = tf.matmul(a, b)
(library frames omitted)
InvalidArgumentError: In[0] and In[1] must have compatible batch dimensions: [2,1,2,3] vs. [7,9,3,2] [Op:BatchMatMulV2] name: MatMul/

tf.matmul also works when a and b have different numbers of dimensions; the rule is to align the shapes from the right, as if 1s were prepended to the shorter shape.
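The "align from the right" rule can be verified in NumPy, whose np.matmul aligns shapes the same way: a rank-3 b behaves exactly as if a leading 1 were prepended to its shape.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((2, 1, 2, 3))
b = rng.random((7, 3, 2))        # treated like shape (1, 7, 3, 2)

c = np.matmul(a, b)

# Batch dims align from the right: (2, 1) vs (1, 7) broadcasts to (2, 7).
assert c.shape == (2, 7, 2, 2)
# Prepending the 1 explicitly gives the identical result:
assert np.allclose(c, np.matmul(a, b[np.newaxis]))
```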

Finally, consider the multi-dimensional tf.matmul(a, b, transpose_b=True) case:

In [111]: import tensorflow as tf
     ...: import numpy as np
     ...: a = tf.random.uniform([2, 1, 2, 3])
     ...: b = tf.random.uniform([2, 1, 2, 3])
     ...: c = tf.matmul(a, b, transpose_b=True)
     ...: c.shape
Out[111]: TensorShape([2, 1, 2, 2])

In [112]: a = tf.random.uniform([2, 1, 2, 3])
     ...: b = tf.random.uniform([1, 5, 2, 3])
     ...: c = tf.matmul(a, b, transpose_b=True)
     ...: c.shape
Out[112]: TensorShape([2, 5, 2, 2])

transpose_b only transposes the last two axes of b, so that the two-dimensional matrix multiplications line up.
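In NumPy terms, this is equivalent to swapping the last two axes of b before an ordinary batched matmul, a reasonable way to see that only axes -1 and -2 are affected:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((2, 1, 2, 3))
b = rng.random((1, 5, 2, 3))

# Swap only the last two axes of b, then multiply as usual.
c = np.matmul(a, np.swapaxes(b, -1, -2))

# Batch dims (2, 1) vs (1, 5) broadcast to (2, 5); inner dims are 3 vs 3.
assert c.shape == (2, 5, 2, 2)
```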


If you found this useful, please give it a like. Thanks!
