tf.matmul - Matrix Multiplication

https://github.com/tensorflow/docs/tree/r1.4/site/en/api_docs/api_docs/python/tf
site/en/api_docs/api_docs/python/tf/matmul.md

matmul(
    a,
    b,
    transpose_a=False,
    transpose_b=False,
    adjoint_a=False,
    adjoint_b=False,
    a_is_sparse=False,
    b_is_sparse=False,
    name=None
)

Defined in tensorflow/python/ops/math_ops.py.
See the guide: Math > Matrix Math Functions

Multiplies matrix a by matrix b, producing a * b.

The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication arguments, and any further outer dimensions match.
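For instance (a minimal shape sketch; the zero-filled tensors are purely illustrative):

import numpy as np
import tensorflow as tf

# Outer (batch) dimensions must match; the inner two dimensions follow
# the usual (m, k) x (k, n) -> (m, n) matrix-multiplication rule.
a = tf.constant(np.zeros((5, 2, 3), dtype=np.float32))
b = tf.constant(np.zeros((5, 3, 4), dtype=np.float32))
c = tf.matmul(a, b)
print(c.shape)  # (5, 2, 4)

# Incompatible inner dimensions (or mismatched outer dimensions) raise
# a ValueError at graph-construction time.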

Both matrices must be of the same type. The supported types are: float16, float32, float64, int32, complex64, complex128.

Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting one of the corresponding flags to True. These are False by default.
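For instance (a minimal sketch in the same TF 1.x style as the examples below; the tensor names are illustrative), transpose_a is equivalent to transposing a yourself, and adjoint_b additionally conjugates a complex operand:

import numpy as np
import tensorflow as tf

a = tf.constant(np.arange(1, 7, dtype=np.float32), shape=[3, 2])
b = tf.constant(np.arange(1, 7, dtype=np.float32), shape=[3, 2])

# transpose_a=True multiplies a^T * b without materializing the transpose:
c1 = tf.matmul(a, b, transpose_a=True)
c2 = tf.matmul(tf.transpose(a), b)

# adjoint_b=True multiplies by the conjugate transpose of `b`:
z = tf.constant([[1 + 2j, 3 - 1j]], dtype=tf.complex64)
zz = tf.matmul(z, z, adjoint_b=True)  # z * z^H, shape [1, 1]

with tf.Session() as sess:
    print(np.allclose(*sess.run([c1, c2])))  # True
    print(sess.run(zz))                      # [[15.+0.j]]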

If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding a_is_sparse or b_is_sparse flag to True. These are False by default. This optimization is only available for plain matrices (rank-2 tensors) with datatypes bfloat16 or float32.
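A minimal sketch of the sparse hint (the matrix sizes and zero pattern here are made up for illustration):

import numpy as np
import tensorflow as tf

# A plain rank-2 float32 matrix that happens to be mostly zeros.
a_np = np.zeros((1024, 1024), dtype=np.float32)
a_np[0, 0] = 1.0
a = tf.constant(a_np)
b = tf.constant(np.ones((1024, 1024), dtype=np.float32))

# The flag is only a hint: the numerical result is unchanged, but
# TensorFlow may select a kernel that skips the zero entries.
c = tf.matmul(a, b, a_is_sparse=True)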


For example:

# 2-D tensor `a`
# [[1, 2, 3],
#  [4, 5, 6]]
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])

# 2-D tensor `b`
# [[ 7,  8],
#  [ 9, 10],
#  [11, 12]]
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])

# `a` * `b`
# [[ 58,  64],
#  [139, 154]]
c = tf.matmul(a, b)

# 3-D tensor `a`
# [[[ 1,  2,  3],
#   [ 4,  5,  6]],
#  [[ 7,  8,  9],
#   [10, 11, 12]]]
a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3])

# 3-D tensor `b`
# [[[13, 14],
#   [15, 16],
#   [17, 18]],
#  [[19, 20],
#   [21, 22],
#   [23, 24]]]
b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2])

# `a` * `b`
# [[[ 94, 100],
#   [229, 244]],
#  [[508, 532],
#   [697, 730]]]
c = tf.matmul(a, b)

# Since python >= 3.5 the @ operator is supported (see PEP 465).
# In TensorFlow, it simply calls the `tf.matmul()` function, so the
# following lines are equivalent:
d = a @ b @ [[10.], [11.]]
d = tf.matmul(tf.matmul(a, b), [[10.], [11.]])

1. Args

  • a: Tensor of type float16, float32, float64, int32, complex64, complex128 and rank > 1.
  • b: Tensor with same type and rank as a.
  • transpose_a: If True, a is transposed before multiplication.
  • transpose_b: If True, b is transposed before multiplication.
  • adjoint_a: If True, a is conjugated and transposed before multiplication.
  • adjoint_b: If True, b is conjugated and transposed before multiplication.
  • a_is_sparse: If True, a is treated as a sparse matrix.
  • b_is_sparse: If True, b is treated as a sparse matrix.
  • name: Name for the operation (optional).

2. Returns

A Tensor of the same type as a and b where each inner-most matrix is the product of the corresponding matrices in a and b, e.g. if all transpose or adjoint attributes are False:

output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j]),
for all indices i, j.

  • Note: This is matrix product, not element-wise product; the sketch below makes the distinction concrete.
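A hedged sketch contrasting the two products and checking the sum_k formula with NumPy (the variable names are illustrative):

import numpy as np
import tensorflow as tf

a = tf.constant([[1., 2.], [3., 4.]])
b = tf.constant([[5., 6.], [7., 8.]])

matrix_product = tf.matmul(a, b)         # output[i, j] = sum_k a[i, k] * b[k, j]
elementwise_product = tf.multiply(a, b)  # output[i, j] = a[i, j] * b[i, j]

with tf.Session() as sess:
    mm, ew, a_np, b_np = sess.run([matrix_product, elementwise_product, a, b])
    print(mm)  # [[19. 22.]
               #  [43. 50.]]
    print(ew)  # [[ 5. 12.]
               #  [21. 32.]]
    # Verify output[i, j] = sum_k (a[i, k] * b[k, j]) by hand:
    print(np.allclose(mm, np.einsum('ik,kj->ij', a_np, b_np)))  # True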

3. Raises

  • ValueError: If transpose_a and adjoint_a, or transpose_b and adjoint_b are both set to True.
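A small sketch of the failure mode (the exact error message may vary across versions):

import tensorflow as tf

a = tf.constant([[1., 2.], [3., 4.]])
b = tf.constant([[5., 6.], [7., 8.]])

try:
    c = tf.matmul(a, b, transpose_a=True, adjoint_a=True)
except ValueError as e:
    # Only one of transpose_a and adjoint_a may be True.
    print("ValueError:", e)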

4. Example

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from __future__ import absolute_import
from __future__ import print_function
from __future__ import division

import os
import sys
import numpy as np
import tensorflow as tf

sys.path.append(os.path.dirname(os.path.abspath(__file__)))
current_directory = os.path.dirname(os.path.abspath(__file__))

print(16 * "++--")
print("current_directory:", current_directory)
print(16 * "++--")

# 2-D tensor `a`
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
# => [[1 2 3]
#     [4 5 6]]

# 2-D tensor `b`
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])
# => [[ 7  8]
#     [ 9 10]
#     [11 12]]

c = tf.matmul(a, b)
# => [[ 58  64]
#     [139 154]]

with tf.Session() as sess:
    input_a = sess.run(a)
    print("input_a.shape:", input_a.shape)
    print("input_a:\n", input_a)
    print('\n')

    input_b = sess.run(b)
    print("input_b.shape:", input_b.shape)
    print("input_b:\n", input_b)
    print('\n')

    output_c = sess.run(c)
    print("output_c.shape:", output_c.shape)
    print("output_c:\n", output_c)
    print('\n')
/usr/bin/python2.7 /home/strong/tensorflow_work/R2CNN_Faster-RCNN_Tensorflow/yongqiang.py
++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--
current_directory: /home/strong/tensorflow_work/R2CNN_Faster-RCNN_Tensorflow
++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--
2019-08-21 20:31:03.554301: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-08-21 20:31:03.621830: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-21 20:31:03.622083: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:01:00.0
totalMemory: 7.92GiB freeMemory: 7.31GiB
2019-08-21 20:31:03.622093: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
input_a.shape: (2, 3)
input_a:
 [[1 2 3]
 [4 5 6]]

input_b.shape: (3, 2)
input_b:
 [[ 7  8]
 [ 9 10]
 [11 12]]

output_c.shape: (2, 2)
output_c:
 [[ 58  64]
 [139 154]]

Process finished with exit code 0

5. Example

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from __future__ import absolute_import
from __future__ import print_function
from __future__ import division

import os
import sys
import numpy as np
import tensorflow as tf

sys.path.append(os.path.dirname(os.path.abspath(__file__)))
current_directory = os.path.dirname(os.path.abspath(__file__))

print(16 * "++--")
print("current_directory:", current_directory)
print(16 * "++--")

# 3-D tensor `a`
a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3])
# => [[[ 1  2  3]
#      [ 4  5  6]]
#     [[ 7  8  9]
#      [10 11 12]]]

# The two 2-D slices of `a`:
a0 = tf.constant(np.arange(1, 7, dtype=np.int32), shape=[2, 3])
# => [[1 2 3]
#     [4 5 6]]
a1 = tf.constant(np.arange(7, 13, dtype=np.int32), shape=[2, 3])
# => [[ 7  8  9]
#     [10 11 12]]

# 3-D tensor `b`
b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2])
# => [[[13 14]
#      [15 16]
#      [17 18]]
#     [[19 20]
#      [21 22]
#      [23 24]]]

# The two 2-D slices of `b`:
b0 = tf.constant(np.arange(13, 19, dtype=np.int32), shape=[3, 2])
# => [[13 14]
#     [15 16]
#     [17 18]]
b1 = tf.constant(np.arange(19, 25, dtype=np.int32), shape=[3, 2])
# => [[19 20]
#     [21 22]
#     [23 24]]

# All pairwise slice products, plus the batched product.
a0b0 = tf.matmul(a0, b0)
a0b1 = tf.matmul(a0, b1)
a1b0 = tf.matmul(a1, b0)
a1b1 = tf.matmul(a1, b1)
c = tf.matmul(a, b)
# => [[[ 94 100]
#      [229 244]]
#     [[508 532]
#      [697 730]]]

with tf.Session() as sess:
    for name, tensor in [("input_a", a), ("input_b", b), ("output_c", c),
                         ("input_a0", a0), ("input_a1", a1),
                         ("input_b0", b0), ("input_b1", b1),
                         ("output_a0b0", a0b0), ("output_a0b1", a0b1),
                         ("output_a1b0", a1b0), ("output_a1b1", a1b1)]:
        value = sess.run(tensor)
        print(name + ".shape:", value.shape)
        print(name + ":\n", value)
        print('\n')

    # The batched product `c` stacks the slice products a0b0 and a1b1.
    print("output_a0b0 + a1b1:\n")
    print(sess.run(a0b0))
    print(sess.run(a1b1))
/usr/bin/python2.7 /home/strong/tensorflow_work/R2CNN_Faster-RCNN_Tensorflow/yongqiang.py
++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--
current_directory: /home/strong/tensorflow_work/R2CNN_Faster-RCNN_Tensorflow
++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--
2019-08-21 20:57:43.726875: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-08-21 20:57:43.792803: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-21 20:57:43.793048: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:01:00.0
totalMemory: 7.92GiB freeMemory: 7.31GiB
2019-08-21 20:57:43.793059: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
input_a.shape: (2, 2, 3)
input_a:
 [[[ 1  2  3]
  [ 4  5  6]]

 [[ 7  8  9]
  [10 11 12]]]

input_b.shape: (2, 3, 2)
input_b:
 [[[13 14]
  [15 16]
  [17 18]]

 [[19 20]
  [21 22]
  [23 24]]]

output_c.shape: (2, 2, 2)
output_c:
 [[[ 94 100]
  [229 244]]

 [[508 532]
  [697 730]]]

input_a0.shape: (2, 3)
input_a0:
 [[1 2 3]
 [4 5 6]]

input_a1.shape: (2, 3)
input_a1:
 [[ 7  8  9]
 [10 11 12]]

input_b0.shape: (3, 2)
input_b0:
 [[13 14]
 [15 16]
 [17 18]]

input_b1.shape: (3, 2)
input_b1:
 [[19 20]
 [21 22]
 [23 24]]

output_a0b0.shape: (2, 2)
output_a0b0:
 [[ 94 100]
 [229 244]]

output_a0b1.shape: (2, 2)
output_a0b1:
 [[130 136]
 [319 334]]

output_a1b0.shape: (2, 2)
output_a1b0:
 [[364 388]
 [499 532]]

output_a1b1.shape: (2, 2)
output_a1b1:
 [[508 532]
 [697 730]]

output_a0b0 + a1b1:

[[ 94 100]
 [229 244]]
[[508 532]
 [697 730]]

Process finished with exit code 0
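The run confirms that output_c is exactly the slice products a0b0 and a1b1 stacked along the batch axis. The same check in plain NumPy (a short sketch, independent of the TensorFlow example above) reads:

import numpy as np

a = np.arange(1, 13, dtype=np.int32).reshape(2, 2, 3)
b = np.arange(13, 25, dtype=np.int32).reshape(2, 3, 2)

# Batched matmul == the per-slice products stacked along axis 0.
batched = np.matmul(a, b)
stacked = np.stack([a[0] @ b[0], a[1] @ b[1]])
print(np.array_equal(batched, stacked))  # True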
