SigNet: Detecting Signature Similarity Using Machine Learning / Deep Learning
My grandfather was an expert in handwriting analysis. He spent all his life analyzing documents for the CBI (Central Bureau of Investigation) and other organizations. His unique way of analyzing documents using a magnifying glass and different tools required huge amounts of time and patience to analyze a single document. This was back when computers were not fast enough. I remember vividly that he photocopied the same document multiple times and arranged the copies on the table to gain a closer look at the handwriting style.
Handwriting analysis involves a comprehensive comparative analysis between a questioned document and the known handwriting of a suspected writer. Specific habits, characteristics, and individualities of both the questioned document and the known specimen are examined for similarities and differences.
As this problem consists of detecting and analyzing patterns, machine learning is a great fit for solving it.
Why and How?
Why: My grandfather's unique way of analyzing documents using a magnifying glass and different tools required huge amounts of time and patience to analyze a single document. This was back when computers were not fast enough. I remember vividly that he photocopied the same document multiple times and arranged the copies on the table to gain a closer look at the handwriting style. While I agree that we cannot replace that job with an A.I. with 100% accuracy, we can certainly build a system capable of aiding human beings.
How: To build our signature similarity network, we will utilize the wonders of deep learning. We will go through three approaches to extract the similarity between our handwritten signatures. For our initial data, we will use the HandWritten Signatures dataset from Kaggle.
Requirements
For this project we will require:
- Python 3.8: The programming language
- TensorFlow 2: The deep learning library
- NumPy: Linear algebra
- Matplotlib: Plotting images
- Scikit-Learn: General machine learning library
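The snippets that follow assume the imports below and a DATA_DIR pointing at the extracted Kaggle dataset (note that the SSIM implementation used later, structural_similarity, comes from scikit-image rather than scikit-learn):

import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from sklearn.model_selection import train_test_split
from skimage.metrics import structural_similarity  # SSIM lives in scikit-image

DATA_DIR = "data"  # path to the extracted HandWritten Signatures dataset (adjust to your setup)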
The Dataset
The dataset contains real and forged signatures of 30 people. Each person has 5 genuine and 5 forged signatures.
For loading the data, I have created a simple load_data() that iterates through the dataset folders and extracts real and forged signatures with labels of 1 and 0 respectively.
In addition to this, I have also created a dictionary of tuples consisting of images and labels. (To be used later in the project).
def load_data(DATA_DIR=DATA_DIR, test_size=0.2, verbose=True, load_grayscale=True):
    """
    Loads the signature images and splits them into train/validation/test sets.
    Arguments:
        DATA_DIR: str
        test_size: float
    Returns:
        (features, labels, features_forged, features_real, features_dict,
         x_train, x_test, y_train, y_test, x_val, y_val)
    """
    features = []
    features_forged = []
    features_real = []
    features_dict = {}
    labels = []  # forged: 0 and real: 1
    mode = "rgb"
    if load_grayscale:
        mode = "grayscale"
    for folder in os.listdir(DATA_DIR):
        if folder == '.DS_Store' or folder == '.ipynb_checkpoints':
            continue
        print("Searching folder {}".format(folder))
        # forged images
        for sub in os.listdir(DATA_DIR + "/" + folder + "/forge"):
            f = DATA_DIR + "/" + folder + "/forge/" + sub
            img = load_img(f, color_mode=mode, target_size=(150, 150))
            features.append(img_to_array(img))
            features_dict[sub] = (img, 0)
            features_forged.append(img)
            if verbose:
                print("Adding {} with label 0".format(f))
            labels.append(0)  # forged
        # real images
        for sub in os.listdir(DATA_DIR + "/" + folder + "/real"):
            f = DATA_DIR + "/" + folder + "/real/" + sub
            img = load_img(f, color_mode=mode, target_size=(150, 150))
            features.append(img_to_array(img))
            features_dict[sub] = (img, 1)
            features_real.append(img)
            if verbose:
                print("Adding {} with label 1".format(f))
            labels.append(1)  # real
    features = np.array(features)
    labels = np.array(labels)
    x_train, x_test, y_train, y_test = train_test_split(features, labels, test_size=test_size, random_state=42)
    x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.25, random_state=42)
    print("Generated data.")
    return features, labels, features_forged, features_real, features_dict, x_train, x_test, y_train, y_test, x_val, y_val


def convert_label_to_text(label=0):
    """
    Convert a numeric label into text.
    Arguments:
        label: int
    Returns:
        str: The mapping
    """
    return "Forged" if label == 0 else "Real"


features, labels, features_forged, features_real, features_dict, x_train, x_test, y_train, y_test, x_val, y_val = load_data(verbose=False, load_grayscale=False)
Visualization of the Data
The images are loaded with a target size of (150, 150); in RGB mode this gives arrays of shape (150, 150, 3).
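As a quick visual check, we can display one forged and one real signature from the lists returned by load_data():

# Show the first forged and the first real signature side by side
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].imshow(features_forged[0])
axes[0].set_title("Forged (label 0)")
axes[1].imshow(features_real[0])
axes[1].set_title("Real (label 1)")
plt.show()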
Approach #1: Similarity in images (signatures) using MSE and SSIM
For this approach, we will compute the similarity between images using MSE (Mean Squared Error) or SSIM (Structural Similarity). The formulas are pretty straightforward, and fortunately scikit-image provides an implementation of SSIM (structural_similarity).
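For reference, the MSE between two m x n images A and B is the average of the squared pixel differences:

MSE(A, B) = (1 / (m * n)) * sum over all pixels (i, j) of (A[i, j] - B[i, j])^2

SSIM, in contrast, compares local patterns of luminance, contrast, and structure between the two images and produces a score bounded between -1 and 1.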
def mse(A, B):
    """
    Computes Mean Squared Error between two images (A and B).
    Arguments:
        A: numpy array
        B: numpy array
    Returns:
        err: float
    """
    # sum of squared pixel differences: sigma((a - b)^2)
    err = np.sum((A - B) ** 2)
    # average over the total number of elements (rows * columns)
    err /= float(A.shape[0] * A.shape[1])
    return err


def ssim(A, B):
    """
    Computes SSIM between two images.
    Note: for RGB arrays, pass channel_axis=-1 (or multichannel=True in older scikit-image).
    Arguments:
        A: numpy array
        B: numpy array
    Returns:
        score: float
    """
    return structural_similarity(A, B)
Now let us take two images from the same person; one of them is real and the other is a fake.
As you can see, MSE does not have a fixed bound, whereas SSIM has a fixed bound between -1 and 1. A lower MSE indicates more similar images, whereas a higher SSIM indicates more similar images.
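As a usage sketch (the folder and file names here are only illustrative; adjust them to the actual dataset layout), comparing a real and a forged signature in grayscale looks like this:

# Load one real and one forged signature as 2-D grayscale arrays (illustrative paths)
real = img_to_array(load_img(DATA_DIR + "/001/real/00100001.png",
                             color_mode="grayscale", target_size=(150, 150)))[:, :, 0]
fake = img_to_array(load_img(DATA_DIR + "/001/forge/02100001.png",
                             color_mode="grayscale", target_size=(150, 150)))[:, :, 0]

print("MSE: ", mse(real, fake))                                   # unbounded; lower = more similar
print("SSIM:", ssim(real.astype("uint8"), fake.astype("uint8")))  # in [-1, 1]; higher = more similar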
Approach #2: Building a classifier using CNNs that can detect forged or real signatures
With this approach, we will try to come up with a classifier (using CNNs) to detect forged or real signatures. As CNNs are known to detect intricate features in images, we will experiment with this classifier.
We are bound to encounter overfitting, as we do not have enough data. We will probably use image augmentation to generate more training data.
On training our model, we do encounter overfitting, and even after applying techniques to overcome the problem, the model did not improve.
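A minimal sketch of such a baseline classifier could look like the following (the exact architecture and hyperparameters are illustrative):

# A minimal baseline CNN for forged (0) vs real (1) classification -- illustrative architecture
cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
cnn.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
h1 = cnn.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=5)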
Approach #2.1: Transfer Learning using Inception
To improve our model we will use transfer learning and fine-tune the model for this particular problem.
The InceptionV3 Model
For this approach, we will load pre-trained weights and add a classification head at the top to cater to this problem.
# loading Inception with pre-trained weights (no classification head)
model2 = tf.keras.applications.InceptionV3(include_top=False, input_shape=(150,150,3))

# freezing layers
for layer in model2.layers:
    layer.trainable = False

# getting the mixed7 layer and adding a classification head on top
l = model2.get_layer("mixed7")
x = tf.keras.layers.Flatten()(l.output)
x = tf.keras.layers.Dense(1024, activation='relu')(x)
x = tf.keras.layers.Dropout(.5)(x)
x = tf.keras.layers.Dense(1, activation='sigmoid')(x)

net = tf.keras.Model(model2.input, x)
net.compile(optimizer='adam', loss=tf.keras.losses.binary_crossentropy, metrics=['acc'])
h2 = net.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=5)
These two approaches show that if we use transfer learning, we get much better results than with a plain CNN model. Keep in mind that these approaches do not learn a similarity function; they focus on classifying whether a signature is forged or real.
There are still many ways we can improve our model; one is by augmenting the data.
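A sketch of what that augmentation could look like, feeding randomly shifted and rotated variants of the signatures into the Inception-based model above (the transform ranges are illustrative):

# Illustrative augmentation setup -- the transform ranges are assumptions
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=10,       # small random rotations
    width_shift_range=0.1,   # small horizontal shifts
    height_shift_range=0.1,  # small vertical shifts
    zoom_range=0.1)
augmented_flow = datagen.flow(x_train, y_train, batch_size=32)
h3 = net.fit(augmented_flow, validation_data=(x_val, y_val), epochs=5)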
Approach #3: Siamese networks for image similarity
With our third approach, we will try to learn the similarity function. We will use something called Siamese networks (due to the nature of our data, i.e. few training examples).
In this approach, we will use Siamese networks to learn the similarity function. Siamese means 'twins', and the biggest difference from normal NNs is that these networks try to learn a similarity function instead of trying to classify (fit a decision boundary).
- We first create a common feature vector for our images. We pass two images (positive and negative), use a contrastive loss function (a distance metric such as the L1 distance), and in the end squash the output between 0 and 1 (sigmoid) to get the final result. One possible shared encoder is sketched below, followed by the network itself.
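The network code below relies on a feature_vector() encoder. A minimal sketch of such a shared encoder could look like this (the layer choices are illustrative):

# A shared CNN encoder producing the common feature vector for both inputs.
# The architecture here is illustrative, not a prescribed encoder.
base_encoder = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu')
])

def feature_vector(inp):
    # The same encoder (shared weights) is applied to both inputs of the siamese pair
    return base_encoder(inp)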
# creating the siamese network
im_a = tf.keras.layers.Input(shape=(150,150,3))
im_b = tf.keras.layers.Input(shape=(150,150,3))

# shared feature extractor applied to both inputs
encoded_a = feature_vector(im_a)
encoded_b = feature_vector(im_b)

combined = tf.keras.layers.concatenate([encoded_a, encoded_b])
combined = tf.keras.layers.BatchNormalization()(combined)
combined = tf.keras.layers.Dense(4, activation='linear')(combined)
combined = tf.keras.layers.BatchNormalization()(combined)
combined = tf.keras.layers.Activation('relu')(combined)
combined = tf.keras.layers.Dense(1, activation='sigmoid')(combined)

sm = tf.keras.Model(inputs=[im_a, im_b], outputs=[combined])
# loss/metric choice is assumed here; MAE serves as the L1-distance metric discussed below
sm.compile(optimizer='adam', loss='binary_crossentropy', metrics=['mae'])
sm.summary()
Dataset Generation
To generate the required dataset, we will try two approaches. First, we will generate data on the basis of labels. If two images have the same label (1 or 0), then they are similar. We will generate data in pairs in the form (im_a, im_b, label). Second, we will generate data on the basis of a person's number. According to the dataset, 02104021.png represents the signature produced by person 21 (i.e real).
Data Generation Approach #1
Here we are assuming similarity on the basis of labels: if two images have the same label (i.e. 1 or 0), then they are similar.
def generate_data_first_approach(features, labels, test_size=0.25):
    """
    Generate data in pairs according to labels.
    Arguments:
        features: numpy array
        labels: numpy array
    """
    im_a = []  # images a
    im_b = []  # images b
    pair_labels = []
    for i in range(0, len(features) - 1):
        j = i + 1
        if labels[i] == labels[j]:
            im_a.append(features[i])
            im_b.append(features[j])
            pair_labels.append(1)  # similar
        else:
            im_a.append(features[i])
            im_b.append(features[j])
            pair_labels.append(0)  # not similar
    pairs = np.stack([im_a, im_b], axis=1)
    pair_labels = np.array(pair_labels)
    x_train, x_test, y_train, y_test = train_test_split(pairs, pair_labels, test_size=test_size, random_state=42)
    x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.25, random_state=42)
    return x_train, y_train, x_test, y_test, x_val, y_val, pairs, pair_labels


x_train, y_train, x_test, y_test, x_val, y_val, pairs, pair_labels = generate_data_first_approach(features, labels)

# show a sample pair and its label
plt.imshow(pairs[:, 0][0] / 255.)
plt.show()
plt.imshow(pairs[:, 1][0] / 255.)
plt.show()
print("Label: ", pair_labels[0])
Training the Network with Dataset Generation #1
Now we will train the network. Due to computational limitations, we only train the model for a single epoch.
# x_train[:, 0] -> first image of each pair, x_train[:, 1] -> second image of each pair (each of shape (150, 150, 3))
sm.fit([x_train[:, 0], x_train[:, 1]], y_train, validation_data=([x_val[:, 0], x_val[:, 1]], y_val), epochs=1)
The metric calculates the L1 distance (MAE) between y_hat and y.
Due to computational limitations, we only train it for one epoch.
This represents a very simple siamese network capable of learning the similarity function.
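As a quick sanity check, we can evaluate the trained network on the held-out test pairs and score a single pair (this assumes the MAE metric from the compile call above):

# Evaluate on the held-out test pairs
loss, mae = sm.evaluate([x_test[:, 0], x_test[:, 1]], y_test)
print("Test loss: {:.4f}, test MAE: {:.4f}".format(loss, mae))

# Predicted similarity for one pair: close to 1 = similar, close to 0 = dissimilar
pred = sm.predict([x_test[:1, 0], x_test[:1, 1]])
print("Predicted similarity:", pred[0][0], "actual label:", y_test[0])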
Data Generation Approach #2
In this approach, we set up a dataset where each signature is paired with every other signature from the same person number. The inputs and the outputs must be the same size.
def generate_data(person_number="001"):
    """
    Collect the file names of real and forged signatures for one person.
    """
    x = list(features_dict.keys())
    im_r = []
    im_f = []
    labels = []  # represents 1 if signature is real else 0
    for i in x:
        if i.startswith(person_number):
            if i.endswith("{}.png".format(person_number)):
                im_r.append(i)
                labels.append(1)
            else:
                im_f.append(i)
                labels.append(0)
    return im_r, im_f, labels


def generate_dataset_approach_two(size=100, test_size=0.25):
    """
    Generate data using the second approach.
    Remember input and output must be the same size!
    Arguments:
        size: the target size (length of the array)
        test_size: float
    Returns:
        x_train, y_train, x_test, y_test, x_val, y_val, pairs, ls
    """
    im_r = []
    im_f = []
    ls = []
    ids = ["001", "002", "003", "004", "005", "006", "007", "008", "009", "010",
           "011", "012", "013", "014", "015", "016", "017", "018", "019", "020",
           "021", "022", "023", "024", "025", "026", "027", "028", "029", "030"]
    for i in ids:
        imr, imf, labels = generate_data(i)
        # similar batch: pair every real signature with every real signature of this person
        for i in imr:
            for j in imr:
                im_r.append(img_to_array(features_dict[i][0]))
                im_f.append(img_to_array(features_dict[j][0]))
                ls.append(1)  # they are similar
        # not similar batch: pair every forged signature with every forged signature
        for k in imf:
            for l in imf:
                im_r.append(img_to_array(features_dict[k][0]))
                im_f.append(img_to_array(features_dict[l][0]))
                ls.append(0)  # they are not similar
    print(len(im_r), len(im_f))
    pairs = np.stack([im_r, im_f], axis=1)
    ls = np.array(ls)
    x_train, x_test, y_train, y_test = train_test_split(pairs, ls, test_size=test_size, random_state=42)
    x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.25, random_state=42)
    return x_train, y_train, x_test, y_test, x_val, y_val, pairs, ls


x_train, y_train, x_test, y_test, x_val, y_val, pairs, ls = generate_dataset_approach_two()

# show a sample pair and its label
plt.imshow(x_train[:, 0][0] / 255.)
plt.show()
plt.imshow(x_train[:, 1][0] / 255.)
plt.show()
print("Label: ", y_train[0])
Training the Network with Dataset Generation #2
The biggest difference between dataset generation #1 and #2 is the way the inputs are arranged. In dataset #1 we pair signatures according to their labels, but in #2 we pair signatures from the same person throughout.
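Assuming the same siamese model sm is reused, training on these person-based pairs mirrors the earlier call:

# Train the same siamese network on the person-based pairs (one epoch, as before)
sm.fit([x_train[:, 0], x_train[:, 1]], y_train,
       validation_data=([x_val[:, 0], x_val[:, 1]], y_val), epochs=1)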
Conclusion
To conclude, we present a plausible method to detect forged signatures using Siamese networks, and most importantly we show how we can train a Siamese network with only a few training examples. We also see how we can easily achieve great results using transfer learning.
Translated from: https://medium.com/swlh/signet-detecting-signature-similarity-using-machine-learning-deep-learning-is-this-the-end-of-1a6bdc76b04b