CVPR2020 ScrabbleGAN: Semi-Supervised Varying Length Handwritten Text Generation

Abstract: Optical character recognition (OCR) system performance has improved significantly in the deep learning era. This is especially true for handwritten text recognition (HTR), where each author has a unique style, unlike printed text, where the variation is smaller by design. That said, deep learning based HTR is limited, as in every other task, by the number of training examples. Gathering data is a challenging and costly task, and even more so the labeling task that follows, on which we focus here. One possible approach to reduce the burden of data annotation is semi-supervised learning. Semi-supervised methods use, in addition to labeled data, some unlabeled samples to improve performance compared to fully supervised ones. Consequently, such methods may adapt to unseen images during test time.

We present ScrabbleGAN, a semi-supervised approach to synthesize handwritten text images that are versatile both in style and lexicon. ScrabbleGAN relies on a novel generative model which can generate images of words with an arbitrary length. We show how to operate our approach in a semi-supervised manner, enjoying the aforementioned benefits such as a performance boost over state-of-the-art supervised HTR. Furthermore, our generator can manipulate the resulting text style. This allows us to change, for instance, whether the text is cursive, or how thin the pen stroke is.

Paper link: https://openaccess.thecvf.com/content_CVPR_2020/papers/Fogel_ScrabbleGAN_Semi-Supervised_Varying_Length_Handwritten_Text_Generation_CVPR_2020_paper.pdf

Handwritten text recognition (HTR), a sub-field of deep-learning-based optical character recognition (OCR), is held back mainly by the shortage of training examples: collecting samples is difficult in itself, and labeling them afterwards is just as costly. Training in a semi-supervised fashion can cut down the amount of annotation needed. This paper proposes ScrabbleGAN, a semi-supervised generative adversarial network (GAN) for synthesizing handwritten text.


Figure: ScrabbleGAN architecture overview

The structure of ScrabbleGAN is roughly as shown in the figure above: besides the usual discriminator D, a handwritten-text recognizer R is added. As illustrated, for each character of the text to be generated, the corresponding filter is taken from a filter bank. Each character filter is multiplied by the noise vector z, which controls the style of the text, and the filters of adjacent characters overlap spatially. This makes generation very flexible: every character can take a flexible size and shape, and the model can learn the dependencies between neighboring characters. The discriminator decides whether an image is generated or real, while the localized text recognizer R reads individual characters in the generated image. R deliberately avoids the recurrent models common in text recognition and relies on convolutions only; in other words, recognition of a character cannot depend on the characters around it, which stops the recognizer from guessing based on such priors (otherwise even an illegibly drawn character could still be "read" correctly), and so pushes the generator toward legible output.
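To make the filter-bank idea concrete, here is a minimal PyTorch-style sketch of such a generator. It is not the authors' implementation (their code had not been released at the time of writing): the class name, the noise dimension and the three-layer up-sampler are illustrative choices; only the per-character filter-times-z mechanism and the width-wise concatenation follow the description above. Reading 8192 as 512 × 4 × 4 (channels × height × width of the per-character feature patch) is my interpretation, not something stated here.

```python
import torch
import torch.nn as nn

class CharFilterBankGenerator(nn.Module):
    """Sketch of a ScrabbleGAN-style generator: one learned filter per
    character, multiplied by the style noise z, concatenated along the width,
    then up-sampled by a CNN into a word image whose width grows with length."""

    def __init__(self, num_chars=80, z_dim=128, feat_ch=512, feat_hw=4):
        super().__init__()
        self.feat_ch, self.feat_hw = feat_ch, feat_hw
        # Filter bank: for each character, a matrix mapping z to a
        # feat_ch * feat_hw * feat_hw (= 512 * 4 * 4 = 8192) feature vector.
        self.filters = nn.Parameter(
            torch.randn(num_chars, z_dim, feat_ch * feat_hw * feat_hw) * 0.02)
        # Up-sampling CNN; its receptive field spans more than one character
        # slot, so adjacent characters can blend (e.g. cursive connections).
        self.upsample = nn.Sequential(
            nn.ConvTranspose2d(feat_ch, 256, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 1, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, char_ids, z):
        # char_ids: (word_len,) character indices; z: (z_dim,) style vector.
        maps = []
        for idx in char_ids:
            feat = z @ self.filters[idx]                       # (8192,)
            maps.append(feat.view(1, self.feat_ch, self.feat_hw, self.feat_hw))
        word_feat = torch.cat(maps, dim=3)   # width grows with word length
        return self.upsample(word_feat)      # (1, 1, 32, 32 * word_len)

# Render a 5-character "word" with one random style vector.
gen = CharFilterBankGenerator()
img = gen(torch.tensor([3, 7, 11, 0, 5]), torch.randn(128))
print(img.shape)  # torch.Size([1, 1, 32, 160])
```

Because the same z is used for every character while each character has its own filter, the word shares one style but the content is controlled character by character, which is exactly what allows arbitrary-length, arbitrary-lexicon generation.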



The training loss therefore has two parts: an adversarial loss from the discriminator and a recognition loss from R. Since these two components do not have the same magnitude, the paper does not simply weight them with a fixed coefficient; instead, it uses the following rule to compute the final update gradient:
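The equations themselves did not survive into this note; the following is a rough LaTeX reconstruction from the description above, and the exact normalization should be checked against the paper. The overall objective is the sum of the two terms, and before each update the gradient of the recognition term is rescaled so that its spread matches that of the adversarial gradient:

\[
\ell = \ell_D + \ell_R, \qquad
\nabla \ell_R \;\leftarrow\; \alpha \,\frac{\sigma(\nabla \ell_D)}{\sigma(\nabla \ell_R)}\, \nabla \ell_R ,
\]

where \(\sigma(\cdot)\) denotes the standard deviation of the gradient entries and \(\alpha\) is a hyperparameter controlling how strongly the generator is pushed toward legible text relative to fooling the discriminator.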

In the paper, ScrabbleGAN is applied to training a concrete handwriting recognition system in two main ways. The first is straightforward data augmentation: data generated by ScrabbleGAN is added to the existing training set. The second is transfer learning: starting from the labeled IAM dataset, and without using any CVL labels, ScrabbleGAN is trained on CVL to generate a dataset in the CVL style and lexicon; this synthetic data is used to enlarge the IAM training set, and training the recognizer on the enlarged set successfully improves performance. The augmentation step amounts to nothing more than mixing the two data sources, as the toy sketch below illustrates.
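A minimal sketch of that mixing, with random tensors standing in for the real labeled images and the ScrabbleGAN outputs (all names, shapes and sizes here are illustrative, not taken from the paper):

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

# Toy stand-ins: `real_labeled` plays the role of the original labeled set
# (e.g. IAM), `gan_generated` the ScrabbleGAN images synthesized in the
# target style, labeled by the text they were generated from.
real_labeled = TensorDataset(torch.rand(100, 1, 32, 128),
                             torch.randint(0, 80, (100, 10)))
gan_generated = TensorDataset(torch.rand(300, 1, 32, 128),
                              torch.randint(0, 80, (300, 10)))

# Augmentation: train the recognizer on the union of the two sets.
loader = DataLoader(ConcatDataset([real_labeled, gan_generated]),
                    batch_size=32, shuffle=True)
for images, labels in loader:
    pass  # feed each batch to the HTR training step as usual
```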

While reading the paper I found several points that remain unclear; since the implementation had not been released, I leave them here for discussion:

1. The filter bank is used to generate the different characters. How is this filter bank obtained, or how is it learned?
2. Why is the filter for each character exactly 8192 long?
3. How are the handwriting style and the lexicon of a dataset separated? How can the IAM style be combined with the CVL lexicon?
4. How can this GAN be trained without supervision, given that it contains a recognizer R? If R is pre-trained, then labels from that domain have in fact been used; if it is not pre-trained, its recognition accuracy should be poor, so how could it supervise the quality of the GAN's output?
