Get Started with PyTorch with These 5 Basic Functions


PyTorch Fundamentals

Function 1 — torch.device()

PyTorch, an open-source library developed by Facebook, is very popular among data scientists. One of the main reasons behind its rise is its built-in GPU support for developers.


The torch.device enables you to specify the device type responsible for loading a tensor into memory. The function expects a string argument specifying the device type.


You can even append an ordinal, such as the device index, or leave it unspecified and let PyTorch use the currently available device.


Example 1.2

In example 1.1, we select the device type that we want to store our tensor on at runtime. Note that we have specified our device type as cuda and also attached the ordinal in the same string, separated by a ‘:’.


Example 1.2 achieves the same result by passing in separate arguments for device type and index.

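The code for examples 1.1 and 1.2 appears as notebook screenshots in the original post; a minimal sketch of both forms might look like this:

```python
import torch

# Example 1.1: device type and ordinal in a single string, separated by ':'.
dev1 = torch.device("cuda:0")

# Example 1.2: the same device, passing type and index as separate arguments.
dev2 = torch.device("cuda", 0)

# Or leave the choice to runtime: use cuda when available, cpu otherwise.
dev3 = torch.device("cuda" if torch.cuda.is_available() else "cpu")

print(dev1, dev2, dev3)
```

Both forms produce equal torch.device objects with type "cuda" and index 0.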

Defining the device type while creating a tensor

The expected device types in torch.device() are cpu, cuda, mkldnn, opengl, opencl, ideep, hip, and msnpu. The device type must be one of these for the method to be used correctly.


Let’s take a look at what happens when we try specifying gpu as the device type.


Example 1.3 (a) — Runtime Error on creating a “gpu” type device instance

The specified device type must also be available on the machine running your notebook; otherwise you will get an error, as shown in example 1.3 (b).


In 1.3 (b), we defined a tensor using the cuda device type, which requires an NVIDIA GPU. Since our machine does not have a GPU available, the kernel threw a runtime error.


Example 1.3 (b) — Runtime Error on specifying a non-existing device type
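A sketch of the two failure cases above (the exact exception type for the missing-GPU case depends on how your PyTorch build was compiled, so the second except clause is kept broad):

```python
import torch

# Example 1.3 (a): "gpu" is not among the expected device types,
# so torch.device raises a RuntimeError.
try:
    torch.device("gpu")
    gpu_device_error = None
except RuntimeError as err:
    gpu_device_error = err
print(gpu_device_error)

# Example 1.3 (b): creating a tensor on a cuda device fails when no
# NVIDIA GPU is available (the error class varies by build).
if not torch.cuda.is_available():
    try:
        torch.ones(2, 2, device="cuda")
    except Exception as err:
        print(err)
```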

Function 2 — torch.view()

The torch.view() method changes the view of a tensor (be it a vector, matrix, or scalar) to the required shape. The transformed tensor has different representational dimensions but shares the same underlying data and data type.


Let’s jump into an example —


Define a sample tensor x and transform its dimensions to store in another tensor z.


Example 2.1

In example 2.1, we transformed a 4 x 5 matrix into a single row vector. The default row-major order prints the elements of the rows contiguously, so every element in the transformed view appears in the same order as it does in the original tensor.


Besides, the new tensor must have a shape that holds the same number of elements as the original tensor. You cannot store a 4 x 5 tensor in a 5 x 3 view.

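The screenshots for these examples are not reproduced here; a small sketch, assuming a 4 x 5 tensor holding the values 0 to 19:

```python
import torch

# Example 2.1: a 4 x 5 matrix whose 20 elements are 0..19.
x = torch.arange(20).view(4, 5)

# View the same data as a single row vector; row-major order keeps the
# elements in the same sequence as in the original tensor.
z = x.view(1, 20)
print(z)

# The new shape must hold the same number of elements: 5 x 3 has only
# 15 slots for 20 elements, so this raises a RuntimeError.
try:
    x.view(5, 3)
    shape_error = None
except RuntimeError as err:
    shape_error = err
print(shape_error)
```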

In example 2.2, we use -1 to denote a dimension, and PyTorch automatically infers the unknown dimension from the other dimensions. The transformed tensor must be compatible in size and stride with the original tensor.


Example 2.3 — incorrect shape

Note that only one dimension is inferable at a time. Using -1 for more than one dimension will only introduce ambiguity and a runtime error!


PyTorch only allows -1 as an acceptable value for at most one dimension. If more than one dimension is passed as -1, it will throw a runtime error.

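A sketch of the inferred-dimension behavior and the ambiguous case, again assuming a 4 x 5 input tensor:

```python
import torch

x = torch.arange(20).view(4, 5)

# Example 2.2: -1 marks the dimension PyTorch should infer.
a = x.view(-1, 10)   # inferred as 2 x 10
b = x.view(2, -1)    # also 2 x 10
print(a.shape, b.shape)

# Example 2.3: more than one -1 is ambiguous and raises a RuntimeError.
try:
    x.view(-1, -1)
    ambiguous_error = None
except RuntimeError as err:
    ambiguous_error = err
print(ambiguous_error)
```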

Function 3 — torch.set_printoptions()

Many times, you would like to print the contents of a tensor before performing certain tasks. For this purpose, you may need to change the display representation when you print a tensor in your notebook.


Using set_printoptions, you can adjust properties like the precision level, line width, result threshold, etc.


For our example, we will take a tensor representing a 20 x 30 matrix. Rendering such a large matrix in a notebook is rarely necessary; a common use case for printing a tensor is to peek at its first few and last few rows.


Example 3.1 (a) — Printing with the default print options

We will make use of the threshold, edgeitems, and linewidth properties to change the visual representation of the tensor to our liking. We can also alter the number of digits displayed after the decimal point using the precision property.

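A sketch of these print options (the specific values here are illustrative, not necessarily the ones used in the original screenshots):

```python
import torch

x = torch.rand(20, 30)   # a 20 x 30 matrix of random values

# Show 2 digits after the decimal point, summarize once more than 10
# elements would be printed, keep 2 items at each edge, and wrap lines
# at 75 characters.
torch.set_printoptions(precision=2, threshold=10, edgeitems=2, linewidth=75)
summary = str(x)
print(summary)

# Restore the defaults so later cells are unaffected.
torch.set_printoptions(profile="default")
```

Because the tensor has 600 elements, well above the threshold, the printed form abbreviates the middle rows and columns with ellipses.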

Example 3.1 (b) — Updated print option to display fewer rows


Example 3.2

Function 4 — Tensor.backward()

Tensors are used to simplify the common tasks required in a Machine Learning pipeline. To perform gradient descent, a popular loss-minimization technique, we need to compute the gradients (recall: derivatives) of the loss function with respect to its inputs.


PyTorch simplifies this with the backward() method, which computes and stores the gradients on each call. Note: PyTorch computes the gradients for a tensor only if its requires_grad property is set to True.


Example 4.1

We will use the linear equation y = mx + c and find the partial derivatives of y with respect to each variable in the equation.


Example 4.1 (a)

After calling the y.backward() method, we can access the computed gradients through the .grad property of the tensor x.


Example 4.1 (b)

As we did not set the requires_grad option to True for m, we will not get a result when calling the .grad property of m; it returns None.

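A minimal sketch of example 4.1, assuming scalar tensors for m, x, and c (the original screenshots are not reproduced here):

```python
import torch

# y = m*x + c: only x asks for gradients.
x = torch.tensor(3.0, requires_grad=True)
m = torch.tensor(2.0)   # requires_grad defaults to False
c = torch.tensor(5.0)

y = m * x + c
y.backward()            # computes dy/dx and stores it in x.grad

print(x.grad)           # dy/dx = m, i.e. tensor(2.)
print(m.grad)           # None: m does not track gradients
```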

Calling y.backward() again does not perform second-order differentiation: PyTorch stores gradients cumulatively, so each call adds the same first-order gradients to .grad (and backpropagating through the same graph twice requires retain_graph=True).

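A sketch of the accumulation behavior:

```python
import torch

x = torch.tensor(3.0, requires_grad=True)
y = 2.0 * x + 5.0

# retain_graph=True keeps the graph alive so backward() can run again.
y.backward(retain_graph=True)
first = x.grad.item()    # dy/dx = 2.0

y.backward()             # the same gradient is added again
second = x.grad.item()   # 4.0: accumulated, not a second derivative

print(first, second)
x.grad.zero_()           # reset the accumulated gradient when needed
```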

Function 5 — torch.linspace()

The linspace() method returns a 1-dimensional tensor containing a range of numbers. In contrast to the rand() function, where the numbers are generated randomly, linspace() returns the members of an arithmetic progression.


The common difference between consecutive members is determined by the steps property and the range (end - start): it equals (end - start) / (steps - 1).


Example 5.1

The output tensor contains 50 equally spaced numbers in the range 1-10. In this example the dtype is int, so decimal places are not stored.

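A sketch of example 5.1, using the default float dtype rather than the int dtype of the original screenshot, so the fractional spacing is visible:

```python
import torch

# 50 equally spaced numbers from 1 to 10; the default dtype is float32.
t = torch.linspace(1, 10, steps=50)
print(t[:3])

# The common difference is (end - start) / (steps - 1) = 9 / 49.
step = (t[1] - t[0]).item()
print(step)

# `out` stores the result in an existing tensor of matching dtype.
out = torch.empty(50)
torch.linspace(1, 10, steps=50, out=out)
```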

The out property can also be used to specify a tensor in which to store the results of the method.

Note that when creating a tensor using the linspace() method, the dtype value must match the output tensor’s defined dtype.


Example 5.3 — the dtype property does not match

Closing Thoughts

This article covered some of the basic methods available in the PyTorch API to get you started. Since much of the design is borrowed from the NumPy library to build on the existing understanding and experience of Python developers, the API is easy to get started with.


The next step after reading through these functions is to browse the official documentation. As PyTorch is a deep-learning library, you are highly encouraged to learn ML fundamentals before you start using these functions.


That’s it, you made it to the end of my very first blog in the PyTorch series!


If you liked this article and would like to read more from me in the future, you can follow me here and on LinkedIn and Twitter. Also, please drop your suggestions in the comments for more functions I could cover on this page.


Photo by Trent Erwin on Unsplash

Follow me on LinkedIn and Twitter for content on Data Science, Machine Learning, and Data Structures.

Translated from: https://towardsdatascience.com/get-started-with-pytorch-with-these-5-basic-functions-33ae428bab97

