# Eigen Tensors {#eigen_tensors}
Tensors are multidimensional arrays of elements. Elements are typically scalars,
but more complex types such as strings are also supported.
[TOC]
## Tensor Classes
You can manipulate a tensor with one of the following classes. They are all in
the namespace `Eigen`.
### Class `Tensor<data_type, rank>`
This is the class to use to create a tensor and allocate memory for it. The
class is templatized with the tensor datatype, such as float or int, and the
tensor rank. The rank is the number of dimensions, for example rank 2 is a
matrix.
Tensors of this class are resizable. For example, if you assign a tensor of a
different size to a `Tensor`, the destination tensor is resized to match its new value.
#### Constructor `Tensor<data_type, rank>(size0, size1, ...)`
Constructor for a Tensor. The constructor must be passed `rank` integers
indicating the sizes of the instance along each of the `rank`
dimensions.
```cpp
// Create a tensor of rank 3 of sizes 2, 3, 4.  This tensor owns
// memory to hold 24 floating point values (24 = 2 x 3 x 4).
Tensor<float, 3> t_3d(2, 3, 4);

// Resize t_3d by assigning a tensor of different sizes, but same rank.
t_3d = Tensor<float, 3>(3, 4, 3);
```
#### Constructor `Tensor<data_type, rank>(size_array)`
Constructor where the sizes are specified as an array of values rather than an
explicit list of parameters. The array type to use is
`Eigen::array<Eigen::Index>`. The array can be constructed automatically
from an initializer list.
```cpp
// Create a tensor of strings of rank 2 with sizes 5, 7.
Tensor<std::string, 2> t_2d({5, 7});
```
### Class `TensorFixedSize<data_type, Sizes<size0, size1, ...>>`
Class to use for tensors of fixed size, where the sizes are known at compile
time. Fixed-size tensors can provide very fast computations because all their
dimensions are known to the compiler. Fixed-size tensors are not resizable.
If the total number of elements in a fixed-size tensor is small enough, the
tensor data is held on the stack and does not cause heap allocation and deallocation.
```cpp
// Create a 4 x 3 tensor of floats.
TensorFixedSize<float, Sizes<4, 3>> t_4x3;
```
### Class `TensorMap<Tensor<data_type, rank>>`
This is the class to use to create a tensor on top of memory allocated and
owned by another part of your code. It lets you view any piece of allocated
memory as a Tensor. Instances of this class do not own the memory where the
data are stored.
A TensorMap is not resizable because it does not own the memory where its data
are stored.
#### Constructor `TensorMap<Tensor<data_type, rank>>(data, size0, size1, ...)`
Constructor for a TensorMap. The constructor must be passed a pointer to the
storage for the data, and `rank` size attributes. The storage has to be
large enough to hold all the data.
```cpp
// Map a tensor of ints on top of stack-allocated storage.
int storage[128];  // 2 x 4 x 2 x 8 = 128
TensorMap<Tensor<int, 4>> t_4d(storage, 2, 4, 2, 8);

// The same storage can be viewed as a different tensor.
// You can also pass the sizes as an array.
TensorMap<Tensor<int, 2>> t_2d(storage, 16, 8);

// You can also map fixed-size tensors.  Here we get a 1d view of
// the 2d fixed-size tensor.
TensorFixedSize<float, Sizes<4, 3>> t_4x3;
TensorMap<Tensor<float, 1>> t_12(t_4x3.data(), 12);
```
### Class `TensorRef`
See Assigning to a TensorRef below.
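As a brief preview (a minimal sketch; the full discussion is in the section referenced above), a `TensorRef` wraps a tensor expression and computes only the coefficients you actually access, instead of materializing the whole result:

```cpp
// Wrap an expression in a TensorRef.  The expression is not
// evaluated here: coefficients are computed on the fly, only
// when they are accessed through the ref.
TensorRef<Tensor<float, 3>> ref = t1 + t2;
float v = ref(0, 1, 0);  // evaluates just this coefficient
```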
## Accessing Tensor Elements
#### `<data_type> tensor(index0, index1...)`
Return the element at position `(index0, index1...)` in tensor
`tensor`. You must pass as many parameters as the rank of `tensor`.
The expression can be used as an l-value to set the value of the element at the
specified position. The value returned is of the datatype of the tensor.
```cpp
// Set the value of the element at position (0, 1, 0);
Tensor<float, 3> t_3d(2, 3, 4);
t_3d(0, 1, 0) = 12.0f;

// Initialize all elements to random values.
for (int i = 0; i < 2; ++i) {
  for (int j = 0; j < 3; ++j) {
    for (int k = 0; k < 4; ++k) {
      t_3d(i, j, k) = ...some random value...;
    }
  }
}

// Print elements of a tensor.
for (int i = 0; i < 2; ++i) {
  LOG(INFO) << t_3d(i, 0, 0);
}
```
## TensorLayout
The tensor library supports two layouts: `ColMajor` (the default) and
`RowMajor`. Only the default column-major layout is currently fully
supported, and it is therefore not recommended to attempt to use the row-major
layout at the moment.
The layout of a tensor is optionally specified as part of its type. If not
explicitly specified, column major is assumed.
```cpp
Tensor<float, 3, ColMajor> col_major;  // equivalent to Tensor<float, 3>
TensorMap<Tensor<float, 3, RowMajor> > row_major(data, ...);
```
All the arguments to an expression must use the same layout. Attempting to mix
different layouts will result in a compilation error.
It is possible to change the layout of a tensor or an expression using the
`swap_layout()` method. Note that this will also reverse the order of the
dimensions.
```cpp
Tensor<float, 2, ColMajor> col_major(2, 4);
Tensor<float, 2, RowMajor> row_major(2, 4);

Tensor<float, 2> col_major_result = col_major;  // ok, layouts match
Tensor<float, 2> col_major_result = row_major;  // will not compile

// Simple layout swap
col_major_result = row_major.swap_layout();
eigen_assert(col_major_result.dimension(0) == 4);
eigen_assert(col_major_result.dimension(1) == 2);

// Swap the layout and preserve the order of the dimensions
array<int, 2> shuffle(1, 0);
col_major_result = row_major.swap_layout().shuffle(shuffle);
eigen_assert(col_major_result.dimension(0) == 2);
eigen_assert(col_major_result.dimension(1) == 4);
```
## Tensor Operations
The Eigen Tensor library provides a vast set of operations on Tensors:
numerical operations such as addition and multiplication, geometric operations
such as slicing and shuffling, etc. These operations are available as methods
of the Tensor classes, and in some cases as operator overloads. For example,
the following code computes the elementwise addition of two tensors:
```cpp
Tensor<float, 3> t1(2, 3, 4);
// ...set some values in t1...
Tensor<float, 3> t2(2, 3, 4);
// ...set some values in t2...

// Set t3 to the element wise sum of t1 and t2
Tensor<float, 3> t3 = t1 + t2;
```
While the code above looks easy enough, it is important to understand that the
expression `t1 + t2` is not actually adding the values of the tensors. The
expression instead constructs a "tensor operator" object of the class
`TensorCwiseBinaryOp<scalar_sum>`, which has references to the tensors
`t1` and `t2`. This is a small C++ object that knows how to add
`t1` and `t2`. It is only when the value of the expression is assigned
to the tensor `t3` that the addition is actually performed. Technically,
this happens through the overloading of `operator=()` in the Tensor class.
This mechanism for computing tensor expressions allows for lazy evaluation and
optimizations, which make the tensor library very fast.
Of course, the tensor operators do nest, and the expression `t1 + t2 * 0.3f`
is actually represented with the (approximate) tree of operators:
```cpp
TensorCwiseBinaryOp<scalar_sum>(t1, TensorCwiseUnaryOp<scalar_mul>(t2, 0.3f))
```
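When such a nested expression is finally assigned to a `Tensor`, the whole tree is evaluated in one pass, so no temporary tensor needs to be materialized for the intermediate result of `t2 * 0.3f`:

```cpp
// The nested operator tree is evaluated only here, during the
// assignment, in a single pass over the elements.
Tensor<float, 3> result = t1 + t2 * 0.3f;
```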
### Tensor Operations and C++ "auto"
Because Tensor operations create tensor operators, the C++ `auto` keyword
does not have its intuitive meaning. Consider these two lines of code:
```cpp
Tensor<float, 3> t3 = t1 + t2;
auto t4 = t1 + t2;
```
In the first line we allocate the tensor `t3` and it will contain the
result of the addition of `t1` and `t2`. In the second line, `t4`
is actually the tree of tensor operators that will compute the addition of
`t1` and `t2`. In fact, `t4` is *not* a tensor and you cannot get
the values of its elements:
```cpp
Tensor<float, 3> t3 = t1 + t2;
cout << t3(0, 0, 0);  // OK, prints the value of t1(0, 0, 0) + t2(0, 0, 0)

auto t4 = t1 + t2;
cout << t4(0, 0, 0);  // Compilation error!
```
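To obtain actual values from an expression such as `t4`, assign it to a `Tensor`: as described above, the assignment operator is what triggers the evaluation. A minimal sketch:

```cpp
// Assigning the operator tree to a Tensor forces the evaluation;
// the result can then be read element by element.
Tensor<float, 3> t5 = t4;
cout << t5(0, 0, 0);  // OK
```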