GPT-3: Creative Potential of NLP

Last February, OpenAI published the results of training its unsupervised language model GPT-2. Trained on 40 GB of text (around 8 million websites), it was able to predict which words come next. GPT-2, a transformer-based language model applying self-attention, allowed us to generate very convincing and coherent texts. The quality was so good that the main model with 1.5 billion parameters was initially not publicly accessible, to prevent uncontrolled fake news. Luckily, the complete model was later published and could even be used with Colab Notebooks.

This year OpenAI strikes back with its new language model GPT-3, with 175 billion parameters (read also: the GPT-3 paper). Unnecessary spoiler: it's incredibly good.

There are already some profound articles on TDS examining the features and the paper of GPT-3:

But how does it look in action?

OpenAI is building an API, currently accessible via a waiting list:

Fortunately, I got access and could experiment with GPT-3 directly. Here are some of my initial outcomes.

Interface, Settings, Presets.

Screenshot: beta.openai.com // by: Merzmensch

The AI Playground interface looks simple, but it bears the power within. First of all, there is a settings dialog, which lets you configure the text length, the temperature (from low/boring via standard to chaotic/creative), and other features.
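
A rough intuition for the temperature setting: the model samples the next word from a probability distribution, and temperature rescales that distribution before sampling. The sketch below only illustrates this mechanism; it is not the Playground's actual internals:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Rescale logits by temperature, then apply softmax.
    Low temperature sharpens the distribution (predictable/boring),
    high temperature flattens it (chaotic/creative)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Three candidate next words with raw scores:
logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, temperature=0.2)
hot = softmax_with_temperature(logits, temperature=2.0)
print(cold[0] > hot[0])  # True
```

At temperature 0.2 the top candidate dominates almost completely; at 2.0 the probabilities flatten toward uniform, which is where the "chaotic/creative" behavior comes from.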

Screenshot: beta.openai.com // by: Merzmensch

You can also define where the generated text has to start and to stop; these are some of the control functions that have a direct impact on the textual results.
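
Conceptually, a stop sequence is simple post-processing: the output is cut off as soon as any of the configured stop strings appears. A minimal sketch (the helper name is illustrative, not part of the API):

```python
def truncate_at_stop(text, stop_sequences):
    """Cut generated text at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

# Generation rambles on into a new question; the stop sequence trims it:
out = truncate_at_stop("Answer: 42\nQ: next question", ["\nQ:", "###"])
print(out)  # "Answer: 42"
```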

The simple interface also provides some GPT-3 presets. The amazing thing about transformer-driven GPT models is, among other things, their ability to recognize a specific style, text character, or structure. If you begin with lists, GPT-3 continues generating lists. If your prompt has a Q&A structure, it is kept up coherently. If you ask for a poem, it writes a poem.
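
This structure-following behavior is what makes few-shot prompting work: you show a couple of examples in the desired format and let the model continue the pattern. A hypothetical helper for assembling such a Q&A prompt might look like this:

```python
def build_qa_prompt(examples, question):
    """Assemble a few-shot Q&A prompt: worked examples first,
    then the new question with an open 'A:' for the model to complete."""
    lines = []
    for q, a in examples:
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
    lines.append(f"Q: {question}")
    lines.append("A:")
    return "\n".join(lines)

prompt = build_qa_prompt(
    [("What is the capital of France?", "Paris"),
     ("Who wrote Faust?", "Goethe")],
    "What is the boiling point of water?")
print(prompt)
```

The model recognizes the alternating Q/A pattern and completes the dangling "A:" line in the same style.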

You can create your own presets or use the existing ones, which are:

Chat.

Screenshot: beta.openai.com // by: Merzmensch

A typical setting for a chatbot: you ask, the AI answers. It's also possible to change the "characters" or the setting. As you can see, the chat situation was accomplished perfectly (even if my third question, as the Human, was kind of unfair).

Screenshot: beta.openai.com // by: Merzmensch

To demonstrate the contextual impact, let's change the AI character from "helpful" and "very friendly" to "brutal, stupid and very unfriendly". You will see how the whole dialogue is influenced:

Screenshot: beta.openai.com // by: Merzmensch

I think we have re-invented Marvin the Paranoid Android.

Q&A

Screenshot: beta.openai.com // by: Merzmensch

This preset consists of a clear dual structure: question and answer. It needs some training before it starts to answer the questions (and gets the rules), but then it works perfectly. I asked some random questions from various areas, and here you go:

Screenshot: beta.openai.com // by: Merzmensch

I’d say, perfect!

Parsing unstructured data

Screenshot: beta.openai.com // by: Merzmensch

This one is fascinating and shows a good comprehension of unstructured text: extracting structured data from the full text.
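
In practice, such a preset prompts the model to emit its extractions in a fixed layout (for example, a pipe-separated table), which the caller then parses back into records. A minimal sketch of the parsing side, assuming that output format:

```python
def parse_table(raw):
    """Parse pipe-separated model output into a list of dicts,
    using the first row as the header."""
    rows = [line.split("|") for line in raw.strip().splitlines()]
    header = [h.strip() for h in rows[0]]
    return [
        {key: cell.strip() for key, cell in zip(header, row)}
        for row in rows[1:]
    ]

# Example output a model might produce for an extraction prompt:
raw = """Company | Founded | Founder
OpenAI | 2015 | Sam Altman and others
DeepMind | 2010 | Demis Hassabis and others"""
records = parse_table(raw)
print(records[0]["Founded"])  # "2015"
```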

Summarizing for a 2nd grader

This preset shows another level of comprehension, including the rephrasing of difficult concepts and sentences in clear words.

I tried Wittgenstein:

Screenshot: beta.openai.com // by: Merzmensch

The simple proverb can be paraphrased convincingly:

Screenshot: beta.openai.com // by: Merzmensch

Or look at this pretty clear transition of Sigmund Freud's time-distancing concept:

Screenshot: beta.openai.com // by: Merzmensch

As you see, compression of text and its coherent “translation” is one of the strengths of GPT-3.

What about languages?

GPT-2 was already a great language model when it came to English. You could generate amazing texts, especially with 1.5 billion parameters. I used GPT-2 for the screenplay of this short movie, and its absurdity could rather be understood as standing in the good tradition of David Lynch and Beckett:

The dialogues were logical, even if spontaneous. But that was English. If you tried inputs in other languages, you faced the barrier of understanding. GPT-2 tried to imitate languages, but you needed to fine-tune it on a text corpus in a specific language to get good results.

GPT-3 is different.

Its processing of other languages is phenomenal.

I tried German, Russian, and Japanese.

German.

It was rather my daughter who tried to let GPT-3 write a fairy tale. She began with "Eine Katze mit Flügeln ging im Park spazieren" ("A cat with wings took a walk in a park").

Full text.

The story that emerged was astonishingly well written, with irony, vivid characters, and some leitmotifs. This is not just a collection of topoi or connected sentences. This is… a story!

Russian.

The full text is here.

I once trained GPT-2 on Pushkin's poetry and got some interesting neologisms, but it was a grammatical mess. Here I input some lines of a Pushkin poem, and the result I got was… interesting. It had no rhymes, but a stylistically intense power. It was not Pushkin's style, though. But it came almost without any mistakes or weird grammar. And… it works as poetry (especially if you are ready to interpret it).

Japanese.

Here.

This was something special. I entered just a random sentence:

这很特别。 我只输入了一个随机句子:

今日は楽しい一日になりますように!と言いました。// "May today be a fun day!", I said.

And the result was a small story about prayer, happiness, wisdom, and financial investment, in well-written Japanese (neutral politeness form, like the input).

It does mean: GPT-3 is ready for multilingual text processing.

Various experiments (and alerting signals).

ShakespAIre and writing poems

My first try was, of course, to have it write a Shakespearean sonnet. So the prompt was just:

here is a poem by Shakespeare

The result was this:

Screenshot: beta.openai.com // by: Merzmensch

Perfect iambic verse, great style, nice rhymes… if not for one thing:

The first two lines are actually from Alexander Pope's The Rape of the Lock. And here we have a reason to be cautious: GPT-3 produces unique and unrepeatable texts, but it can reuse whole quotes from the existing texts it was trained on.

Re-examining the results is inevitable if you want to guarantee the uniqueness of a text.

I wonder if there could be a "Projection"-like feature as in StyleGAN2, just the other way around: where StyleGAN2 compares an image with the latent space, GPT-3 would compare the generated text with the dataset it was trained on, to prevent accidental plagiarism.
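
Until such a feature exists, a crude safeguard is to scan a generation for long verbatim word sequences shared with a reference corpus. A minimal sketch (a real check would need the actual training corpus, which is not publicly available):

```python
def ngrams(words, n):
    """All contiguous n-word sequences in a word list."""
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(generated, corpus_texts, n=6):
    """True if the generated text shares any n-word sequence
    verbatim with one of the reference texts."""
    gen = ngrams(generated.lower().split(), n)
    for text in corpus_texts:
        if gen & ngrams(text.lower().split(), n):
            return True
    return False

# Opening couplet of Pope's The Rape of the Lock as the reference:
pope = ("what dire offence from amorous causes springs "
        "what mighty contests rise from trivial things")
print(verbatim_overlap(
    "What dire offence from amorous causes springs, I sang.",
    [pope], n=5))  # True
```

Longer n reduces false positives (common phrases), while shorter n catches more borrowings; six words is a common rule-of-thumb threshold for "verbatim" reuse.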

But the thing is: GPT-3 can write poems on demand, in particular styles.

Here is another example:

Essays

As I hadn't yet gotten access at the time, I asked a friend to let GPT-3 write an essay on Kurt Schwitters, a German artist and Dadaist:

The outcome is: GPT-3 already has rich knowledge that can be recollected. It is not always reliable (you have to fine-tune it to get a perfect match of meaning), but it's still very close to the discourse.

Coding with GPT-3

Another mind-blowing possibility is using GPT-3 in quite different cases than just text generation:

You can get support with CSS:

And calling it General Intelligence is already a thing:

Summary.

We are still at the beginning, but the experiments with GPT-3 made by the AI community show its power, potential, and impact. We just have to use it with reason and good intention. But that’s the human factor. Which is not always the best one.

For more wonderful text experiments, I highly recommend you read Gwern:

Let the journey continue!

Translated from: https://towardsdatascience.com/gpt-3-creative-potential-of-nlp-d5ccae16c1ab
