1、Paper basic information
Authors: Junho Cho, Sangdoo Yun, Kyoungmu Lee, and Jin Young Choi
Venue: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Year: 2017
Volume: July 2017
Pages: 1058-1066
2、Understanding of the paper
Abstract
First of all, the paper points out the aim of image recolorization.
Image recolorization enhances the visual perception of an image for design and artistic purposes.
Then, it introduces PaletteNet.
In this work, we present a deep neural network, referred to as PaletteNet, which recolors an image according to a given target color palette that is useful to express the color concept of an image.
PaletteNet takes two inputs: a source image to be recolored and a target palette.
PaletteNet is then designed to change the color concept of a source image so that the palette of the output image is close to the target palette.
To train PaletteNet, the proposed multi-task loss is composed of Euclidean loss and adversarial loss.
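The abstract only names the two loss terms. Below is a minimal sketch of how such a multi-task loss could be combined, assuming a PyTorch setup; the `multi_task_loss` helper, the discriminator interface, and the weighting factor `lambda_adv` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def multi_task_loss(recolored, target, disc_logits, lambda_adv=0.1):
    """Sketch of a multi-task recoloring loss (illustrative only).

    recolored:   generator output image, shape (N, C, H, W)
    target:      ground-truth recolored image, same shape
    disc_logits: discriminator scores for the recolored image
    lambda_adv:  weight balancing the two terms (hypothetical value)
    """
    # Euclidean (L2) loss pulls the output toward the ground-truth colors.
    euclidean = F.mse_loss(recolored, target)
    # Adversarial loss rewards outputs the discriminator judges as real.
    adversarial = F.binary_cross_entropy_with_logits(
        disc_logits, torch.ones_like(disc_logits))
    return euclidean + lambda_adv * adversarial
```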
Finally, it describes the experimental results, comparing PaletteNet to existing approaches.
The experimental results show that the proposed method outperforms the existing recolorization methods.
Human experts using commercial software take on average 18 minutes to recolor an image, while PaletteNet automatically produces plausible recolored results in less than a second.
1. Introduction
First, the introduction gives background on color palettes.
Color is an essential element of humans’ visual perceptions of their daily lives. Beautiful color harmony in artworks or movies fulfills our desires for color. Thus, designers and artists must put effort into building basic color concepts into their works. A sophisticated selection of color gives a sense of stability, unity, and identity to works.
In general, designers express a color concept through a color palette. The color palette of an image represents the color concept of an image with six colors ordered as shown in Figure 1. The corresponding color palette that contains distinctive color concept is subjective, and the number of palettes is uncountable. Typical designers would carefully select a color concept by the palette prior to the work.
Furthermore, recoloring an image with a target color palette is preferred for images to maintain uniformity and identity among artworks. Thus, the recolorization problem occupies a critical position in enhancing the visual understanding of viewers.

Researchers have been tackling the recolorization problem with various approaches and purposes. Kuhn et al. [9] proposed a practical way to enhance visibility for the colorblind (dichromat) by exaggerating color contrast. However, it ignored the color concept and lacked aesthetics. Casaca et al. [2] proposed a colorization algorithm that requires segmentation masks and the user's hints for the colors of some pixels. Even though the colorization based on the color hints reflected the desired color for each pixel, the algorithms were far from automatic colorization.
To reflect the intended color concept, the color palette-based methods [5, 3] have been proposed. Greenfield et al. [5] proposed a color association method using palettes. It extracted the color palettes of the source and target images and recolored the source image by associating the palettes in the color space. Chang et al. [3] proposed a color transferring algorithm using the relationship between the palettes of the source and target images. This approach helped users to have elaborate control over the intended color concept. However, it is questionable how well the color transform function [5, 3] in the palette space could be utilized for content-aware recolorization. For example, flowers look more complicated than the sky. Accordingly, the recoloring of flowers necessitates more effort than the recoloring of the sky. Each of the objects has different color characteristics, and the simple palette-matching recolorization neglects them. Moreover, performing color transformation globally on images might not be appropriate. For example, we might want the red tulip and the red bird in an image to be recolored separately to a yellow tulip and a green bird. Thus, it is natural to deploy a deep neural network that has strength in understanding the contents (tulips, bird, etc.) of the source image.
Then, the paper begins to describe its own work.
In this paper, we propose a deep learning architecture for the content-aware image recoloring based on the given target palette. The proposed deep architecture requires two inputs, which are a source image and a target palette.
As described in Figure 2, the output image is a recolored version of the source image with respect to the target palette.
Six of the most representative colors.
In our paper, the color palette contains six of the most representative colors in an artwork. Six is minimal and still representative enough to express analogous, monochromatic, triad, complementary, or compound combinations of colors. Although the spatial dimension of the palette is small, we assume the amount of information in the palette is sufficient to express a specific color concept.
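As a concrete illustration of this representation, the sketch below encodes a six-color palette as a flat vector that a network could take as a conditioning input. The `palette_to_tensor` helper, the normalization, and the example RGB values are made-up assumptions, not data or code from the paper.

```python
import torch

def palette_to_tensor(palette_rgb):
    """Flatten a 6-color palette into an 18-dim conditioning vector.

    palette_rgb: list of six (R, G, B) tuples in 0-255, kept in palette
                 order (hypothetical representation; the paper may use a
                 different color space or normalization).
    """
    flat = [c / 255.0 for color in palette_rgb for c in color]
    return torch.tensor(flat, dtype=torch.float32)  # shape (18,)

# Example: an analogous warm palette (illustrative values only).
example_palette = [(250, 235, 215), (244, 196, 160), (233, 150, 122),
                   (205, 92, 92), (139, 69, 19), (80, 40, 20)]
print(palette_to_tensor(example_palette).shape)  # torch.Size([18])
```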
An encoder-decoder network and a multi-task loss function.
To obtain a realistic recolored image with the given palette, we propose an encoder-decoder network and a multi-task loss function composed of Euclidean loss and adversarial loss. To gather image-palette pairs to train the proposed network, we scraped the Design-seeds website [1] and created a dataset.
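To make the two-input design concrete, here is a minimal sketch of how an encoder-decoder recoloring network might consume a source image and a target palette. The layer sizes, the broadcasting of the palette over spatial positions, and the `SimpleRecolorNet` class are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SimpleRecolorNet(nn.Module):
    """Toy encoder-decoder conditioned on a target palette (illustrative only)."""

    def __init__(self, palette_dim=18, feat=64):
        super().__init__()
        # Encoder: compress the source image into a content feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat * 2, 4, stride=2, padding=1), nn.ReLU())
        # Decoder: reconstruct an image from content features + palette.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat * 2 + palette_dim, feat, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(feat, 3, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, image, palette):
        content = self.encoder(image)                 # (N, 2*feat, H/4, W/4)
        n, _, h, w = content.shape
        # Broadcast the palette vector over every spatial position.
        pal = palette.view(n, -1, 1, 1).expand(n, palette.shape[1], h, w)
        return self.decoder(torch.cat([content, pal], dim=1))

# Usage sketch:
# net = SimpleRecolorNet()
# out = net(torch.randn(1, 3, 128, 128), torch.rand(1, 18))
```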
Propose a color augmentation method to expand the dataset.
Since different color versions of an image do not usually exist, we propose a color augmentation method to expand the dataset for training the deep neural network. The proposed network is trained in an end-to-end and data-driven way. In the experiments, we show that our model outperforms the existing recolorization model and produces plausible results within a second, while a human expert takes 18 minutes on average.
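The notes do not spell out how the augmentation works. A plausible sketch, assuming the augmentation is a global hue shift applied consistently to an image and its palette (so each shift yields a new matched image-palette pair), is given below; the `hue_shift_pair` helper and the HSV-based formulation are assumptions, not the paper's exact procedure.

```python
import numpy as np
from skimage import color

def hue_shift_pair(image_rgb, palette_rgb, shift_deg):
    """Shift the hue of an image and its palette by the same angle.

    image_rgb:   float array in [0, 1], shape (H, W, 3)
    palette_rgb: float array in [0, 1], shape (6, 3)
    shift_deg:   hue shift in degrees

    Rotating hue consistently keeps the pair matched while changing the
    color concept, giving an extra training pair per shift.
    """
    def shift(rgb):
        hsv = color.rgb2hsv(rgb)
        hsv[..., 0] = (hsv[..., 0] + shift_deg / 360.0) % 1.0  # wrap hue
        return color.hsv2rgb(hsv)

    # Treat the palette as a tiny 1x6 image so the same routine applies.
    return shift(image_rgb), shift(palette_rgb[np.newaxis])[0]
```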