PyTorch makes it easy to train networks, but when deploying with libtorch there is not much material on the C++ tensor-manipulation API, so this document collects and organizes the common operations.
Contents
1、Tensor creation
1.1、std::vector to tensor
1.2、torch::empty()
1.3、torch::ones()
1.4、torch::zeros()
1.5、torch::rand()
2、Getting tensor properties
2.1、tensor.sizes()/numel()
2.2、Getting tensor values
2.3、Printing tensor values
3、Dimension operations
3.1、torch::squeeze() / unsqueeze()
3.2、torch::stack()
3.3、torch::cat()
3.4、permute
3.5、torch::meshgrid
4、Slicing operations
4.1、torch::select()
4.2、torch::index_select()
4.3、torch::slice()
4.4、Tensor::transpose()
4.5、Tensor::expand_as()
5、Arithmetic operations
5.1、Arithmetic operators
5.2、Logical operations
5.3、clamp
6、Statistical operations
6.1、torch::max
6.2、torch::nonzero
6.3、torch::sort
6.4、argmax/argmin
6.5、torch::where
7、Tensor type conversion
7.1、toType: tensor dtype conversion
1、Tensor creation
1.1、std::vector to tensor
std::vector<float> arrys;
arrys.push_back(1.2);
torch::Tensor t_arrys = torch::tensor(arrys);
std::cout << "t_arrys:" << t_arrys << std::endl;
=>
t_arrys: 1.2000
[ CPUFloatType{1} ]
1.2、torch::empty()
Allocates a block of memory without initializing it; reading the tensor yields whatever values were already in memory.
torch::Tensor a = torch::empty({1, 4});
std::cout << a << std::endl;
=>
1e-42 *
1.1673 0.0000 -6876249034576749395968.0000 0.6236
[ CPUFloatType{1,4} ]
1.3、torch::ones()
torch::Tensor b = torch::ones({2, 4});
std::cout << b << std::endl;
=>
1 1 1 1
1 1 1 1
[ CPUFloatType{2,4} ]
1.4、torch::zeros()
// common pattern: a zero-initialized Long tensor matching scores' length and device
torch::Tensor scores = torch::rand({5}); // stand-in so the snippet runs; normally a model output
torch::Tensor keep = torch::zeros({scores.size(0)}).to(torch::kLong).to(scores.device());
1.5、torch::rand()
torch::rand({2,3}); // uniform random values in [0, 1); for mean 0 / variance 1 use torch::randn
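A minimal sketch contrasting the two (variable names are illustrative):
torch::Tensor u = torch::rand({2, 3});  // uniform on [0, 1)
torch::Tensor n = torch::randn({2, 3}); // standard normal: mean 0, variance 1
std::cout << u << std::endl;
std::cout << n << std::endl;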
2、Getting tensor properties
2.1、tensor.sizes()/numel()
sizes(): returns the tensor's shape as a c10::IntArrayRef; individual dimensions can be read with the [] operator. int len = tsize.size(); // number of dimensions.
numel(): returns the number of elements in the tensor.
torch::Tensor a = torch::rand({2, 4});
c10::IntArrayRef tsize = a.sizes();
int len = tsize.size(); // number of dimensions: 2
std::cout << "tsize:" << tsize << std::endl;
=>
tsize:[2, 4]
int h = tsize[0];
int w = tsize[1];
std::cout << "h = " << h << ", w = " << w << std::endl;
=>
h = 2, w = 4
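numel() is not exercised above; a minimal sketch, continuing with the same a:
int64_t n = a.numel(); // 2 * 4 = 8 elements
std::cout << "numel = " << n << std::endl;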
2.2、Getting tensor values
Two ways to read a single element as a C++ value:
torch::Tensor a = torch::rand({2,3});
1) auto bbb = a[1][1].item().toFloat();
2) auto bbb = a[1][1].item<float>();
torch::Tensor tb = torch::rand({1});
std::cout << "tb = " << tb << std::endl;
=>
tb = 0.2414
[ CPUFloatType{1} ]
float fb = tb.item<float>();
std::cout << "float fb = " << fb << std::endl;
=>
float fb = 0.2414
Type correspondence:
tensor    | c++
FloatType | float
LongType  | int64_t
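A small sketch of reading each type with the matching item<T>() call:
torch::Tensor tf = torch::rand({1});                  // CPUFloatType
torch::Tensor tl = torch::ones({1}).to(torch::kLong); // CPULongType
float f = tf.item<float>();
int64_t l = tl.item<int64_t>();
std::cout << f << " " << l << std::endl;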
2.3、Printing tensor values
torch::Tensor a = torch::rand({2, 4});
std::cout << a << std::endl;
c10::IntArrayRef tsize = a.sizes();
std::cout << "tsize:"<< tsize << std::endl;
int h = tsize[0];
int w = tsize[1];
std::cout << "a[0][0] = " << a[0][0] << std::endl;
std::cout << "print arry:" << std::endl;
for (int i = 0; i < h; ++i)
{
for (int j = 0; j < w; ++j)
{
float val = a[i][j].item<float>();
std::cout << val << " ";
}
std::cout << "\n";
}
=>
tsize:[2, 4]
a[0][0] = 0.834826
[ CPUFloatType{} ]
print array:
0.834826 0.526461 0.163823 0.273542
0.755133 0.747542 0.0238906 0.662743
3、Dimension operations
3.1、torch::squeeze() / unsqueeze()
inline Tensor Tensor::squeeze() const // no argument: removes every dimension of size 1
inline Tensor Tensor::squeeze(int64_t dim) const // with argument: removes only the given dimension (if its size is 1)
inline Tensor Tensor::unsqueeze(int64_t dim) // inserts a new dimension of size 1
// torch::unsqueeze(a, 0); // insert at dimension 0
torch::Tensor a = torch::rand({2, 4});
std::cout << " a = "<< a << std::endl;
std::cout << " unsqueeze(a, 0): " << torch::unsqueeze(a, 0) << std::endl; //0维度扩展
=>
a = 0.6582 0.9560 0.0335 0.6024
0.5889 0.4425 0.4054 0.1707
[ CPUFloatType{2,4} ]
unsqueeze(a, 0): (1,.,.) =
0.6582 0.9560 0.0335 0.6024
0.5889 0.4425 0.4054 0.1707
[ CPUFloatType{1,2,4} ]
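squeeze itself is not demonstrated above; a minimal sketch continuing from the same a:
torch::Tensor c = torch::unsqueeze(a, 0); // shape {1,2,4}
torch::Tensor d = c.squeeze();            // {2,4}: every size-1 dim removed
torch::Tensor e = c.squeeze(0);           // {2,4}: only dim 0 removed
std::cout << d.sizes() << " " << e.sizes() << std::endl;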
3.2、torch::stack()
Stacks tensors along a new dimension.
static inline Tensor stack(TensorList tensors, int64_t dim)
torch::Tensor a = torch::rand({1, 4});
torch::Tensor b = torch::rand({1, 4});
std::cout << " a = "<< a << std::endl;
std::cout << " b = " << b << std::endl;
torch::Tensor c = torch::stack({a, b}, 0);
std::cout << " c = " << c << std::endl;
torch::Tensor d = torch::stack({a, b}, 1);
std::cout << " d = " << d << std::endl;
=>
a = 0.4581 0.5502 0.2302 0.3650
[ CPUFloatType{1,4} ]
b = 0.6733 0.7400 0.5701 0.5953
[ CPUFloatType{1,4} ]
c = (1,.,.) =
0.4581 0.5502 0.2302 0.3650
(2,.,.) =
0.6733 0.7400 0.5701 0.5953
[ CPUFloatType{2,1,4} ]
d = (1,.,.) =
0.4581 0.5502 0.2302 0.3650
0.6733 0.7400 0.5701 0.5953
[ CPUFloatType{1,2,4} ]
3.3、torch::cat()
Concatenates tensors along an existing dimension; the total number of dimensions does not change.
In the example below, a and b are one-dimensional, so they can only be concatenated along dim=0; dim=1 would throw, because cat cannot create new dimensions. dim selects the dimension along which to concatenate.
// a and b reconstructed from the printed output below, so the snippet is self-contained
torch::Tensor a = torch::tensor({21.0000f, 29.6985f, 14.8492f, 30.7409f, 43.4741f, 21.7371f});
torch::Tensor b = torch::tensor({21.0000f, 14.8492f, 29.6985f, 30.7409f, 21.7371f, 43.4741f});
std::cout << a << std::endl;
std::cout << b << std::endl;
torch::Tensor c = torch::cat({a, b}, 0);
std::cout << c << std::endl;
=>
21.0000
29.6985
14.8492
30.7409
43.4741
21.7371
[ CPUFloatType{6} ]
21.0000
14.8492
29.6985
30.7409
21.7371
43.4741
[ CPUFloatType{6} ]
21.0000
29.6985
14.8492
30.7409
43.4741
21.7371
21.0000
14.8492
29.6985
30.7409
21.7371
43.4741
[ CPUFloatType{12} ]
3.4、permute
Reorders dimensions; suited to multi-dimensional data and more flexible than transpose.
torch::Tensor x = torch::randn({ 2,3,4 });
std::cout << "x.sizes() = " << x.sizes() << std::endl;
torch::Tensor x_p = x.permute({ 1,0,2 }); // old dim 1 becomes dim 0, old dim 0 becomes dim 1, dim 2 stays
std::cout << "x_p.sizes() = " << x_p.sizes() << std::endl;
=>
x.sizes() = [2, 3, 4]
x_p.sizes() = [3, 2, 4]
3.5、torch::meshgrid
Creates coordinate-grid matrices. Note: args[0] is the y-coordinate grid and args[1] the x-coordinate grid; the elements here have type CUDALongType.
torch::Device device(torch::kCUDA);
torch::Tensor shift_x = torch::arange(0, 3, torch::device(device));
torch::Tensor shift_y = torch::arange(0, 3, torch::device(device));
std::cout << shift_x << std::endl;
std::cout << shift_y << std::endl;
std::vector<torch::Tensor> args = torch::meshgrid({shift_y, shift_x});
std::cout << args[0] << std::endl;
std::cout << args[1] << std::endl;
Result:
0
1
2
[ CUDALongType{3} ]
0
1
2
[ CUDALongType{3} ]
0 0 0
1 1 1
2 2 2
[ CUDALongType{3,3} ]
0 1 2
0 1 2
0 1 2
[ CUDALongType{3,3} ]
4、Slicing operations
4.1、torch::select()
Picks the row or column at a given index along a given dimension.
Prototype: inline Tensor Tensor::select(int64_t dim, int64_t index) const
The first argument is the dimension: for a 2-D matrix, 0 picks a row and 1 picks a column. The second argument is the index along that dimension. The returned tensor shares memory with the original, which avoids an unnecessary copy. The difference between select and index_select: index_select can take several indices while select takes exactly one, and index_select does not share memory.
torch::Tensor a = torch::rand({2, 3});
torch::Tensor b = a.select(1, 2);
std::cout << a << std::endl;
std::cout << b << std::endl; // b is column 2 of a, viewing the same memory
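A small sketch of that memory sharing, continuing from the snippet above: writing through the view changes a.
b[0] = 0.0;                  // modify the view returned by select
std::cout << a << std::endl; // column 2 of a now contains the 0 written through b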
4.2、torch::index_select()
Picks several rows or columns along a given dimension.
static inline Tensor index_select(const Tensor & self, int64_t dim, const Tensor & index);
The first argument is the tensor to index, the second selects the dimension (0 = by rows, 1 = by columns), and the third is a tensor holding the indices. In the example below, rows 1 and 0 of a are taken, in that order. The experiment also shows that no memory is shared.
// test code
torch::Tensor a = torch::rand({2, 4});
std::cout << " a = " << a << std::endl;
std::vector<int64_t> indices;
indices.push_back(1);
indices.push_back(0);
torch::Tensor tindices = torch::tensor(indices);
torch::Tensor b = torch::index_select(a, 0, tindices);
std::cout << " b = " << b << std::endl;
b[0][0] = 0.0; // modify b, then check whether a changed
std::cout << " a = " << a << std::endl;
// output:
// random a
a = 0.3419 0.4963 0.5489 0.3725
0.4221 0.4832 0.0587 0.3596
[ CPUFloatType{2,4} ]
// rows of a in the order 1, 0
b = 0.4221 0.4832 0.0587 0.3596
0.3419 0.4963 0.5489 0.3725
[ CPUFloatType{2,4} ]
// after modifying b, a is unchanged!
a = 0.3419 0.4963 0.5489 0.3725
0.4221 0.4832 0.0587 0.3596
[ CPUFloatType{2,4} ]
4.3、torch::slice()
Takes an ROI view of the tensor's data; nothing is copied, memory is shared.
Prototype: inline Tensor Tensor::slice(int64_t dim, int64_t start, int64_t end, int64_t step)
dim=0 slices rows, dim=1 slices columns; higher dimensions are supported. The range runs from start up to, but not including, end.
torch::Tensor a = torch::rand({2, 4});
std::cout << " a = "<< a << std::endl;
torch::Tensor b = a.slice(1, 0, 2); // take the first two columns of a
std::cout << " b = " << b << std::endl;
b[0][0] = 0.0;
std::cout << " a = " << a << std::endl;
=>
a = 0.2436 0.1376 0.1686 0.5232
0.8919 0.4098 0.8384 0.5210
[ CPUFloatType{2,4} ]
b = 0.2436 0.1376
0.8919 0.4098
[ CPUFloatType{2,2} ]
a = 0.0000 0.1376 0.1686 0.5232
0.8919 0.4098 0.8384 0.5210
[ CPUFloatType{2,4} ]
4.4、Tensor::transpose()
Transpose.
inline Tensor Tensor::transpose(int64_t dim0, int64_t dim1) const
torch::Tensor a = torch::rand({2,3});
std::cout << a << std::endl;
torch::Tensor b = a.transpose(0, 1); // swap dims 0 and 1: b has shape {3,2}
std::cout << b << std::endl;
4.5、Tensor::expand_as()
Expands a tensor to another tensor's shape.
inline Tensor Tensor::expand_as(const Tensor & other) const
torch::Tensor a = torch::rand({2,3});
// torch::Tensor b = torch::ones({2,2}); // would fail: {2,2} cannot broadcast to {2,3}
torch::Tensor b = torch::ones({2,1});
torch::Tensor c = b.expand_as(a); // b's single column is repeated to give shape {2,3}
std::cout << a << std::endl;
std::cout << b << std::endl;
std::cout << c << std::endl;
5、Arithmetic operations
5.1、Arithmetic operators
As in the pytorch documentation, the arithmetic operators are overloaded:
Operator | Function
+        | add
-        | sub
*        | mul
/        | div
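A minimal sketch showing the operator forms next to their function equivalents (variable names illustrative):
torch::Tensor a = torch::rand({2, 3});
torch::Tensor b = torch::rand({2, 3});
torch::Tensor s = a + b; // same as torch::add(a, b)
torch::Tensor d = a - b; // same as torch::sub(a, b)
torch::Tensor p = a * b; // element-wise, same as torch::mul(a, b)
torch::Tensor q = a / b; // element-wise, same as torch::div(a, b)
std::cout << s << d << p << q << std::endl;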
5.2、Logical operations
1) > (greater-than)
Compares every element of the tensor and returns a bool tensor of the same shape.
torch::Tensor a = torch::randn({3,4});
cout << ( a > 0.1) << endl;
=>
tensor a:
-0.4388 -0.8234 0.3935 0.0000
0.0121 1.0354 0.0000 1.5286
0.1590 2.7148 -0.0737 -0.5168
[ CPUFloatType{3,4} ]
(a > 0.1):
0 0 1 0
0 1 0 1
1 1 0 0
[ CPUBoolType{3,4} ]
5.3、clamp
Clamps values into [min, max]: anything below min becomes min, anything above max becomes max.
inline Tensor Tensor::clamp(c10::optional<Scalar> min, c10::optional<Scalar> max) const
torch::Tensor a = torch::rand({2,3});
a[0][0] = 20;
a[0][1] = 21;
a[0][2] = 22;
a[1][0] = 23;
a[1][1] = 24;
std::cout << a << std::endl;
std::cout << a.clamp(21, 23) << std::endl; // illustrative bounds: 20 -> 21, 24 -> 23, values in between unchanged
In real projects you usually extract values from the tensor, and sometimes only one side needs limiting, e.g. only min:
xx1 = xx1.clamp(x1[i].item().toFloat(), INT_MAX*1.0);
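clamp_min/clamp_max, or passing c10::nullopt for the unused bound, express one-sided clamping more directly; a sketch with an assumed float tensor xx1:
torch::Tensor xx1 = torch::rand({4});
torch::Tensor lo = xx1.clamp_min(0.5);           // lower bound only
torch::Tensor hi = xx1.clamp(c10::nullopt, 0.5); // upper bound only
std::cout << lo << hi << std::endl;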
6、Statistical operations
6.1、torch::max
Computes, along a given dimension, the maximum values together with the indices of those maxima.
torch::Tensor a = torch::randn({3,4});
cout << a << endl;
std::tuple<torch::Tensor, torch::Tensor> max_classes = torch::max(a, 1);
auto max_1 = std::get<0>(max_classes);     // per-row maximum values
auto max_index = std::get<1>(max_classes); // their column indices
// convert single elements to standard C++ types
float fmax_1 = max_1[0].item<float>();
int64_t imax_index = max_index[0].item<int64_t>();
cout << max_1 << endl;
cout << max_index << endl;
=>
tensor a:
-0.4388 -0.8234  0.3935  0.0000
 0.0121  1.0354  0.0000  1.5286
 0.1590  2.7148 -0.0737 -0.5168
[ CPUFloatType{3,4} ]
max_1, the maximum values:
 0.3935
 1.5286
 2.7148
[ CPUFloatType{3} ]
max_index, the indices of the maxima:
 2
 3
 1
[ CPULongType{3} ]
6.2、torch::nonzero
Returns the indices of the nonzero elements of the tensor.
torch::Tensor a = torch::randn({3,4});
a[0][3] = 0; // zero out two elements (positions match the output below)
a[1][2] = 0;
cout << " a = " << a << endl;
auto b = torch::nonzero(a);
cout << " b = " << b << endl;
=>
a =
-0.4388 -0.8234 0.3935 0.0000
0.0121 1.0354 0.0000 1.5286
0.1590 2.7148 -0.0737 -0.5168
[ CPUFloatType{3,4} ]
b =
0 0
0 1
0 2
1 0
1 1
1 3
2 0
2 1
2 2
2 3
[ CPULongType{10,2} ]
6.3、torch::sort
static inline std::tuple<Tensor, Tensor> sort(const Tensor & self, int64_t dim, bool descending);
dim selects the dimension to sort along (0 = down the rows, 1 = across the columns); descending=false sorts ascending, true sorts descending. Returns a tuple: the first element holds the sorted values, the second the indices those values had in the original tensor.
torch::Tensor scores = torch::rand({10});
std::tuple<torch::Tensor, torch::Tensor> sort_ret = torch::sort(scores.unsqueeze(1), 0, true); // descending
torch::Tensor v = std::get<0>(sort_ret).squeeze(1).to(scores.device());
torch::Tensor idx = std::get<1>(sort_ret).squeeze(1).to(scores.device());
std::cout << v << std::endl;
std::cout << idx << std::endl;
6.4、argmax/argmin
static inline Tensor argmax(const Tensor & self, c10::optional<int64_t> dim, bool keepdim);
static inline Tensor argmin(const Tensor & self, c10::optional<int64_t> dim, bool keepdim);
torch::Tensor a = torch::rand({2,3});
auto b = torch::argmax(a, 0); // for each column, the row index of the maximum; shape {3}
std::cout << a << std::endl;
std::cout << b << std::endl;
6.5、torch::where
static inline Tensor where(const Tensor & condition, const Tensor & self, const Tensor & other);
static inline std::vector<Tensor> where(const Tensor & condition);
torch::Tensor d = torch::where(a>0.5, b, c);
Explanation: at every position where a exceeds 0.5, d takes b's value at that position; everywhere else it takes c's value.
torch::Tensor a = torch::rand({2,3});
torch::Tensor b = torch::ones({2,3});
torch::Tensor c = torch::zeros({2,3});
torch::Tensor d = torch::where(a>0.5, b, c);
std::cout << a << std::endl;
std::cout << d << std::endl;
Another example, using the single-argument overload, which returns the coordinates at which the condition holds:
torch::Tensor a = torch::rand({2,3});
std::vector<torch::Tensor> b = torch::where(a>0.5);
std::cout << a << std::endl;
std::cout << b[0] << std::endl; // row coordinates
std::cout << b[1] << std::endl; // column coordinates
7、Tensor type conversion
7.1、toType: tensor dtype conversion
cx = cx.toType(torch::kFloat);
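A minimal sketch of the conversion (cx here is an assumed Long tensor):
torch::Tensor cx = torch::ones({2, 2}, torch::kLong); // CPULongType
torch::Tensor cf = cx.toType(torch::kFloat);          // CPUFloatType
std::cout << cf << std::endl;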
References:
1. libtorch 常用api函数示例(史上最全、最详细) - 无左无右 - 博客园