This series of articles explains Python OpenCV image processing. The early articles cover image fundamentals and basic OpenCV usage, the middle part covers image processing algorithms such as sharpening operators, image enhancement, and image segmentation, and the later part combines deep learning to study image recognition and image classification. I hope these articles are helpful to you; if there are any shortcomings, please bear with me.
The previous article built a CNN model with Keras to recognize handwritten Arabic character images, a classic image classification piece. This article explains in detail how to build a Faster R-CNN model with PyTorch for wheat head detection, drawing mainly on models from Kaggle contributors and from Liu (刘兄), whom I recommend following. It is a classic object recognition write-up; I hope you enjoy it.
In this second stage we move into Python image recognition, focusing on object detection, image recognition, and deep-learning-based image classification; nearly 50 articles will be shared. Thank you for your continued support, and I will keep working hard!
This material was researched, written, and summarized by the author and is published as a paid column to earn a little formula money for my baby; thank you for your support. If you have questions, feel free to message me. I only hope you can learn something from this series. Code download address (if you like it, please remember to star):
Image recognition:
Image processing:
To install PyTorch, go to the official website, select the options matching your environment, and run the automatically generated install command.
Choose the version that matches your setup; the screenshot shows the options I selected.
Install commands:
Also install the extension package albumentations.
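For reference, a typical pip-based installation looks like the two commands below; treat them only as a sketch, since the exact torch/torchvision command (CPU build vs. a specific CUDA build) should be taken from the selector on pytorch.org.

pip install torch torchvision
pip install albumentations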
The dataset comes from Kaggle's Global Wheat Detection competition, whose prompt is: "Can you help identify wheat heads using image analysis?"
Competition description:
Open up your pantry and you will likely find several wheat products; in fact, your breakfast toast or cereal may well depend on this common grain. Its popularity as a food and crop makes wheat widely studied. To obtain large and accurate data about wheat fields worldwide, plant scientists use images of "wheat heads", the spikes atop the plant that contain the grain. These images are used to estimate the density and size of wheat heads in different varieties. Farmers can use the data to assess crop health and maturity when making management decisions in their fields.
However, accurately detecting wheat heads in outdoor field images is visually challenging. Dense wheat plants frequently overlap, and wind can blur the photographs; both make it hard to identify individual heads. In addition, appearance varies with maturity, color, genotype, and head orientation. Finally, because wheat is grown worldwide, different varieties, planting densities, patterns, and field conditions must be considered. Models developed for wheat detection need to generalize across growing environments. Current detection methods involve one-stage and two-stage detectors (YOLOv3 and Faster R-CNN), but even when trained on large datasets, a bias toward the training region remains.
The Global Wheat Head Dataset is led by nine research institutes from seven countries, including the University of Tokyo. Many institutions have since joined the effort toward accurate wheat head detection, including the Global Institute for Food Security, DigitAg, Kubota, and Hiphen. In this competition you detect wheat heads from outdoor images of wheat plants, using wheat datasets from around the globe, and focus on a generalized solution for estimating the number and size of wheat heads. To better measure performance on unseen genotypes, environments, and observation conditions, the training set covers multiple regions: more than 3,000 images from Europe (France, UK, Switzerland) and North America (Canada). The test data includes about 1,000 images from Australia, Japan, and China.
Wheat is a staple food worldwide, which is why this competition must account for different growing conditions. Models developed for wheat phenotyping need to generalize across environments. If successful, researchers can accurately estimate the density and size of wheat heads of different varieties; with improved detection, farmers can better assess their crops, ultimately bringing grain, toast, and other favorite dishes to your table. For more details on data acquisition and processing, please visit:
What should we expect the data format to be?
The data consists of images of wheat fields, with a bounding box for each identified wheat head; not all images contain wheat heads or bounding boxes. The images were recorded at many locations around the world.
What are we predicting?
You are attempting to predict a bounding box around each wheat head in an image. If there are no wheat heads, you must predict no bounding boxes.
The dataset contains four files.
The dataset is shown in the figure below:
The image folders contain the wheat images, named by their IDs.
train.csv contains five columns: image_id, width, height, bbox, and source.
The distribution of training annotations across the different wheat sources is shown below:
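Both the columns and this distribution can be checked quickly with pandas. The snippet below is only a sketch (the sample bbox value is illustrative, and value_counts counts annotation rows rather than distinct images):

import pandas as pd

df = pd.read_csv('data/train.csv')     # path assumed to match the layout used later
print(df.columns.tolist())             # ['image_id', 'width', 'height', 'bbox', 'source']
print(df['bbox'].iloc[0])              # e.g. "[834.0, 222.0, 56.0, 36.0]" -> [xmin, ymin, width, height]
print(df['source'].value_counts())     # number of annotated boxes per source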
The overall workflow of the wheat prediction task is shown in the figure below:
The evaluation metric is shown below; I recommend reading the description on the Kaggle competition page.
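In brief, submissions are scored by the mean average precision over IoU thresholds from 0.5 to 0.75 in steps of 0.05, where precision at each threshold is computed from matched true positives, false positives, and false negatives; see the competition's Evaluation page for the exact definition. The helper below is only a minimal sketch of the underlying IoU computation, and the [xmin, ymin, xmax, ymax] box format is an assumption made for illustration:

def box_iou(a, b):
    """Intersection-over-Union of two boxes given as [xmin, ymin, xmax, ymax]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(box_iou([0, 0, 50, 50], [10, 10, 30, 30]))   # nested box -> 400 / 2500 = 0.16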
The submission format requires a space-delimited set of bounding boxes, for example:
The file should contain a header and have the following format; each row of your submission should contain all bounding boxes for the given image.
image_id,PredictionString
ce4833752,1.0 0 0 50 50
adcfa13da,1.0 0 0 50 50
6ca7b2650,
1da9078c1,0.3 0 0 50 50 0.5 10 10 30 30
7640b4963,0.5 0 0 50 50
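To produce this format from model outputs, each detection is written as "confidence xmin ymin width height" and all detections of an image are joined with spaces. The helper below is a minimal sketch of that conversion; the function and variable names are illustrative, not part of the competition code:

def format_prediction_string(boxes, scores):
    """boxes: [[xmin, ymin, w, h], ...] in pixels; scores: matching confidences."""
    parts = []
    for score, box in zip(scores, boxes):
        parts.append("{:.4f} {} {} {} {}".format(score, int(box[0]), int(box[1]), int(box[2]), int(box[3])))
    return " ".join(parts)

print(format_prediction_string([[0, 0, 50, 50]], [1.0]))   # -> "1.0000 0 0 50 50"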
Below we reproduce the Faster R-CNN model, following the code of Kaggle contributor Peter.
The model's architecture is shown in the figure below; it should be familiar to most readers, and I recommend using it and studying the principles behind it in depth.
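The training code below uses torchvision's ready-made fasterrcnn_resnet50_fpn, but since FasterRCNN and AnchorGenerator are also imported, here is a minimal sketch (following the torchvision detection tutorial; the MobileNetV2 backbone is chosen purely for illustration and is not what this article trains) of how the same API can assemble a custom detector:

import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# Any feature extractor can serve as a backbone as long as it reports its channel count
backbone = torchvision.models.mobilenet_v2(pretrained=True).features
backbone.out_channels = 1280

# One set of anchor sizes / aspect ratios per feature map (a single map here)
anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=['0'],
                                                output_size=7,
                                                sampling_ratio=2)

# num_classes=2: wheat + background, matching the training code below
model = FasterRCNN(backbone, num_classes=2,
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)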
The code for reading the wheat dataset is as follows:
# -*- coding: utf-8 -*-
"""
Created on Fri Oct 29 13:42:38 2021
@author: xiuzhang
"""
import os
import re
import cv2
import pandas as pd
import numpy as np
from PIL import Image
import albumentations as A
from matplotlib import pyplot as plt
from albumentations.pytorch.transforms import ToTensorV2
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torch.utils.data import DataLoader, Dataset
from torch.utils.data.sampler import SequentialSampler
from dataset import WheatDataset
#-----------------------------------------------------------------------------
# Step 1: function definitions
#-----------------------------------------------------------------------------
# Extract the four bbox coordinates from the string stored in train.csv
def expand_bbox(x):
    r = np.array(re.findall("([0-9]+[.]?[0-9]*)", x))
    if len(r) == 0:
        r = [-1, -1, -1, -1]
    return r

# Training image augmentation (Albumentations)
def get_train_transform():
    return A.Compose([
        A.Flip(0.5),
        ToTensorV2(p=1.0)
    ], bbox_params={'format': 'pascal_voc', 'label_fields': ['labels']})

# Validation image augmentation
def get_valid_transform():
    return A.Compose([
        ToTensorV2(p=1.0)
    ], bbox_params={'format': 'pascal_voc', 'label_fields': ['labels']})

# Keep a variable number of boxes per image by returning tuples of lists
def collate_fn(batch):
    return tuple(zip(*batch))
#-----------------------------------------------------------------------------
# Step 2: define variables and read the data
#-----------------------------------------------------------------------------
DIR_INPUT = 'data'
DIR_TRAIN = f'{DIR_INPUT}/train'
DIR_TEST = f'{DIR_INPUT}/test'

train_df = pd.read_csv(f'{DIR_INPUT}/train.csv')
print(train_df.shape)

train_df['x'] = -1
train_df['y'] = -1
train_df['w'] = -1
train_df['h'] = -1

# Parse the four bbox coordinates out of the 'bbox' string column
train_df[['x', 'y', 'w', 'h']] = np.stack(train_df['bbox'].apply(lambda x: expand_bbox(x)))
train_df.drop(columns=['bbox'], inplace=True)
train_df['x'] = train_df['x'].astype(float)
train_df['y'] = train_df['y'].astype(float)
train_df['w'] = train_df['w'].astype(float)
train_df['h'] = train_df['h'].astype(float)

# Get the image ids and split them into train/validation sets
image_ids = train_df['image_id'].unique()
valid_ids = image_ids[-665:]
train_ids = image_ids[:-665]
valid_df = train_df[train_df['image_id'].isin(valid_ids)]
train_df = train_df[train_df['image_id'].isin(train_ids)]
print(valid_df.shape, train_df.shape)
print(train_df.head())
The output is shown below: the image IDs and data are extracted and split into train (training) and valid (validation) sets.
The code of the dataset.py file used above is as follows:
# -*- coding: utf-8 -*-
"""
Created on Fri Oct 29 13:42:38 2021
@author: xiuzhang
"""
import numpy as np
import cv2
import torch
from torch.utils.data import Dataset
class WheatDataset(Dataset):

    def __init__(self, dataframe, image_dir, transforms=None):
        super().__init__()
        self.image_ids = dataframe['image_id'].unique()
        self.df = dataframe
        self.image_dir = image_dir
        self.transforms = transforms

    def __getitem__(self, index: int):
        image_id = self.image_ids[index]
        records = self.df[self.df['image_id'] == image_id]

        image = cv2.imread(f'{self.image_dir}/{image_id}.jpg', cv2.IMREAD_COLOR)
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB).astype(np.float32)
        image /= 255.0

        # Convert [x, y, w, h] to [xmin, ymin, xmax, ymax]
        boxes = records[['x', 'y', 'w', 'h']].values
        boxes[:, 2] = boxes[:, 0] + boxes[:, 2]
        boxes[:, 3] = boxes[:, 1] + boxes[:, 3]

        area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
        area = torch.as_tensor(area, dtype=torch.float32)

        # there is only one class
        labels = torch.ones((records.shape[0],), dtype=torch.int64)

        # suppose all instances are not crowd
        iscrowd = torch.zeros((records.shape[0],), dtype=torch.int64)

        target = {}
        target['boxes'] = boxes
        target['labels'] = labels
        # target['masks'] = None
        target['image_id'] = torch.tensor([index])
        target['area'] = area
        target['iscrowd'] = iscrowd

        if self.transforms:
            sample = {
                'image': image,
                'bboxes': target['boxes'],
                'labels': labels
            }
            sample = self.transforms(**sample)
            image = sample['image']
            target['boxes'] = torch.stack(tuple(map(torch.tensor, zip(*sample['bboxes'])))).permute(1, 0)

        return image, target, image_id

    def __len__(self) -> int:
        return self.image_ids.shape[0]
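As a quick check of what the dataset returns, one sample can be fetched directly. This is only a sketch that reuses train_df, DIR_TRAIN, and get_valid_transform defined in the previous script:

ds = WheatDataset(train_df, DIR_TRAIN, get_valid_transform())
image, target, image_id = ds[0]
print(image.shape)      # e.g. torch.Size([3, 1024, 1024]) for the 1024x1024 images
print(target.keys())    # dict_keys(['boxes', 'labels', 'image_id', 'area', 'iscrowd'])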
Next, we perform a simple visualization of the wheat images; the code is as follows:
# -*- coding: utf-8 -*-
"""
Created on Fri Oct 29 13:42:38 2021
@author: xiuzhang
"""
import os
import re
import cv2
import pandas as pd
import numpy as np
from PIL import Image
import albumentations as A
from matplotlib import pyplot as plt
from albumentations.pytorch.transforms import ToTensorV2
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torch.utils.data import DataLoader, Dataset
from torch.utils.data.sampler import SequentialSampler
from dataset import WheatDataset
#-----------------------------------------------------------------------------
# Step 1: function definitions
#-----------------------------------------------------------------------------
# Extract the four bbox coordinates from the string stored in train.csv
def expand_bbox(x):
    r = np.array(re.findall("([0-9]+[.]?[0-9]*)", x))
    if len(r) == 0:
        r = [-1, -1, -1, -1]
    return r

# Training image augmentation (Albumentations)
def get_train_transform():
    return A.Compose([
        A.Flip(0.5),
        ToTensorV2(p=1.0)
    ], bbox_params={'format': 'pascal_voc', 'label_fields': ['labels']})

# Validation image augmentation
def get_valid_transform():
    return A.Compose([
        ToTensorV2(p=1.0)
    ], bbox_params={'format': 'pascal_voc', 'label_fields': ['labels']})

# Keep a variable number of boxes per image by returning tuples of lists
def collate_fn(batch):
    return tuple(zip(*batch))
#-----------------------------------------------------------------------------
# Step 2: define variables and read the data
#-----------------------------------------------------------------------------
DIR_INPUT = 'data'
DIR_TRAIN = f'{DIR_INPUT}/train'
DIR_TEST = f'{DIR_INPUT}/test'

train_df = pd.read_csv(f'{DIR_INPUT}/train.csv')
print(train_df.shape)

train_df['x'] = -1
train_df['y'] = -1
train_df['w'] = -1
train_df['h'] = -1

# Parse the four bbox coordinates out of the 'bbox' string column
train_df[['x', 'y', 'w', 'h']] = np.stack(train_df['bbox'].apply(lambda x: expand_bbox(x)))
train_df.drop(columns=['bbox'], inplace=True)
train_df['x'] = train_df['x'].astype(float)
train_df['y'] = train_df['y'].astype(float)
train_df['w'] = train_df['w'].astype(float)
train_df['h'] = train_df['h'].astype(float)

# Get the image ids and split them into train/validation sets
image_ids = train_df['image_id'].unique()
valid_ids = image_ids[-665:]
train_ids = image_ids[:-665]
valid_df = train_df[train_df['image_id'].isin(valid_ids)]
train_df = train_df[train_df['image_id'].isin(train_ids)]
print(valid_df.shape, train_df.shape)
print(train_df.head())
#-----------------------------------------------------------------------------
# Step 3: load the data
#-----------------------------------------------------------------------------
train_dataset = WheatDataset(train_df, DIR_TRAIN, get_train_transform())
valid_dataset = WheatDataset(valid_df, DIR_TRAIN, get_valid_transform())
train_data_loader = DataLoader(
train_dataset,
batch_size=2,
shuffle=False,
num_workers=0,
collate_fn=collate_fn
)
valid_data_loader = DataLoader(
valid_dataset,
batch_size=2,
shuffle=False,
num_workers=0,
collate_fn=collate_fn
)
#-----------------------------------------------------------------------------
# Step 4: data visualization
#-----------------------------------------------------------------------------
# Fetch one batch of training data and move it to the device
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
images, targets, image_ids = next(iter(train_data_loader))
images = list(image.to(device) for image in images)
targets = [{k: v.to(device) for k, v in t.items()} for t in targets]

boxes = targets[0]['boxes'].cpu().numpy().astype(np.int32)
sample = images[0].permute(1, 2, 0).cpu().numpy()
fig, ax = plt.subplots(1, 1, figsize=(10, 8))

# Draw the annotated wheat boxes on the sample image
for box in boxes:
    cv2.rectangle(sample,
                  (box[0], box[1]),
                  (box[2], box[3]),
                  (255, 0, 0), 3)
    ax.text(box[0],
            box[1] - 2,
            '{:s}'.format('wheat'),
            bbox=dict(facecolor='blue', alpha=0.5),
            fontsize=12,
            color='white')
ax.set_axis_off()
ax.imshow(sample)
plt.show()
The output is shown below. Using the boxes defined in train.csv, we drew red "wheat" rectangles to mark each wheat head. For the final test set, we want the model to predict these boxes automatically, so that wheat regions and counts can be identified effectively.
Warning:
Next we build the Faster R-CNN model, a classic object detection model. Its core code is as follows:
#-----------------------------------------------------------------------------
# Step 5: model construction
#-----------------------------------------------------------------------------
num_classes = 2      # 1 class (wheat) + background
lr_scheduler = None
num_epochs = 1
itr = 1

# Running average of the training loss
class Averager:
    def __init__(self):
        self.current_total = 0.0
        self.iterations = 0.0

    def send(self, value):
        self.current_total += value
        self.iterations += 1

    @property
    def value(self):
        if self.iterations == 0:
            return 0
        else:
            return 1.0 * self.current_total / self.iterations

    def reset(self):
        self.current_total = 0.0
        self.iterations = 0.0
# Load a model pre-trained on COCO
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

# Number of input features of the box classifier
in_features = model.roi_heads.box_predictor.cls_score.in_features

# Replace the pre-trained head with a new one (wheat + background)
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Optimizer settings
model.to(device)
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005)
#lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)

loss_hist = Averager()
print("Start training....")
# Training loop
for epoch in range(num_epochs):
    loss_hist.reset()

    for images, targets, image_ids in train_data_loader:
        images = list(image.to(device) for image in images)
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        for t in targets:
            t['boxes'] = t['boxes'].float()

        # In training mode the model returns a dict of losses
        loss_dict = model(images, targets)
        losses = sum(loss for loss in loss_dict.values())
        loss_value = losses.item()
        loss_hist.send(loss_value)
        print("loss is :", loss_value)

        optimizer.zero_grad()
        losses.backward()
        optimizer.step()

        if itr % 50 == 0:
            print(f"Iteration #{itr}/{len(train_data_loader)} loss: {loss_value}")
        itr += 1

    # Update the learning rate
    if lr_scheduler is not None:
        lr_scheduler.step()

    print(f"Epoch #{epoch} loss: {loss_hist.value}")

torch.save(model.state_dict(), 'fasterrcnn_resnet50_fpn.pth')
print("Next Test....")
The training process runs as shown below:
Downloading: "https://download.pytorch.org/models/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth" to C:\Users\xxx/.cache\torch\hub\checkpoints\fasterrcnn_resnet50_fpn_coco-258fb6c6.pth
100%|██████████| 160M/160M [04:06<00:00, 679KB/s]
The final complete code, with model prediction added, is as follows:
# -*- coding: utf-8 -*-
"""
Created on Fri Oct 29 13:42:38 2021
@author: xiuzhang
"""
import os
import re
import cv2
import pandas as pd
import numpy as np
from PIL import Image
import albumentations as A
from matplotlib import pyplot as plt
from albumentations.pytorch.transforms import ToTensorV2
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torch.utils.data import DataLoader, Dataset
from torch.utils.data.sampler import SequentialSampler
from dataset import WheatDataset
#-----------------------------------------------------------------------------
# Step 1: function definitions
#-----------------------------------------------------------------------------
# Extract the four bbox coordinates from the string stored in train.csv
def expand_bbox(x):
    r = np.array(re.findall("([0-9]+[.]?[0-9]*)", x))
    if len(r) == 0:
        r = [-1, -1, -1, -1]
    return r

# Training image augmentation (Albumentations)
def get_train_transform():
    return A.Compose([
        A.Flip(0.5),
        ToTensorV2(p=1.0)
    ], bbox_params={'format': 'pascal_voc', 'label_fields': ['labels']})

# Validation image augmentation
def get_valid_transform():
    return A.Compose([
        ToTensorV2(p=1.0)
    ], bbox_params={'format': 'pascal_voc', 'label_fields': ['labels']})

# Keep a variable number of boxes per image by returning tuples of lists
def collate_fn(batch):
    return tuple(zip(*batch))
#-----------------------------------------------------------------------------
# Step 2: define variables and read the data
#-----------------------------------------------------------------------------
DIR_INPUT = 'data'
DIR_TRAIN = f'{DIR_INPUT}/train'
DIR_TEST = f'{DIR_INPUT}/test'

train_df = pd.read_csv(f'{DIR_INPUT}/train.csv')
print(train_df.shape)

train_df['x'] = -1
train_df['y'] = -1
train_df['w'] = -1
train_df['h'] = -1

# Parse the four bbox coordinates out of the 'bbox' string column
train_df[['x', 'y', 'w', 'h']] = np.stack(train_df['bbox'].apply(lambda x: expand_bbox(x)))
train_df.drop(columns=['bbox'], inplace=True)
train_df['x'] = train_df['x'].astype(float)
train_df['y'] = train_df['y'].astype(float)
train_df['w'] = train_df['w'].astype(float)
train_df['h'] = train_df['h'].astype(float)

# Get the image ids and split them into train/validation sets
image_ids = train_df['image_id'].unique()
valid_ids = image_ids[-665:]
train_ids = image_ids[:-665]
valid_df = train_df[train_df['image_id'].isin(valid_ids)]
train_df = train_df[train_df['image_id'].isin(train_ids)]
print(valid_df.shape, train_df.shape)
print(train_df.head())
#-----------------------------------------------------------------------------
# Step 3: load the data
#-----------------------------------------------------------------------------
train_dataset = WheatDataset(train_df, DIR_TRAIN, get_train_transform())
valid_dataset = WheatDataset(valid_df, DIR_TRAIN, get_valid_transform())
train_data_loader = DataLoader(
train_dataset,
batch_size=2,
shuffle=False,
num_workers=0,
collate_fn=collate_fn
)
valid_data_loader = DataLoader(
valid_dataset,
batch_size=2,
shuffle=False,
num_workers=0,
collate_fn=collate_fn
)
#-----------------------------------------------------------------------------
# Step 4: data visualization
#-----------------------------------------------------------------------------
# Fetch one batch of training data and move it to the device
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
images, targets, image_ids = next(iter(train_data_loader))
images = list(image.to(device) for image in images)
targets = [{k: v.to(device) for k, v in t.items()} for t in targets]

boxes = targets[0]['boxes'].cpu().numpy().astype(np.int32)
sample = images[0].permute(1, 2, 0).cpu().numpy()
fig, ax = plt.subplots(1, 1, figsize=(10, 8))

# Draw the annotated wheat boxes on the sample image
for box in boxes:
    cv2.rectangle(sample,
                  (box[0], box[1]),
                  (box[2], box[3]),
                  (255, 0, 0), 3)
    ax.text(box[0],
            box[1] - 2,
            '{:s}'.format('wheat'),
            bbox=dict(facecolor='blue', alpha=0.5),
            fontsize=12,
            color='white')
ax.set_axis_off()
ax.imshow(sample)
plt.show()
#-----------------------------------------------------------------------------
# Step 5: model construction
#-----------------------------------------------------------------------------
num_classes = 2      # 1 class (wheat) + background
lr_scheduler = None
num_epochs = 1
itr = 1

# Running average of the training loss
class Averager:
    def __init__(self):
        self.current_total = 0.0
        self.iterations = 0.0

    def send(self, value):
        self.current_total += value
        self.iterations += 1

    @property
    def value(self):
        if self.iterations == 0:
            return 0
        else:
            return 1.0 * self.current_total / self.iterations

    def reset(self):
        self.current_total = 0.0
        self.iterations = 0.0
# Load a model pre-trained on COCO
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

# Number of input features of the box classifier
in_features = model.roi_heads.box_predictor.cls_score.in_features

# Replace the pre-trained head with a new one (wheat + background)
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Optimizer settings
model.to(device)
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005)
#lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)

loss_hist = Averager()
print("Start training....")
# Training loop
for epoch in range(num_epochs):
    loss_hist.reset()

    for images, targets, image_ids in train_data_loader:
        images = list(image.to(device) for image in images)
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        for t in targets:
            t['boxes'] = t['boxes'].float()

        # In training mode the model returns a dict of losses
        loss_dict = model(images, targets)
        losses = sum(loss for loss in loss_dict.values())
        loss_value = losses.item()
        loss_hist.send(loss_value)
        #print("loss is :", loss_value)

        optimizer.zero_grad()
        losses.backward()
        optimizer.step()

        if itr % 50 == 0:
            print(f"Iteration #{itr}/{len(train_data_loader)} loss: {loss_value}")
        itr += 1

    # Update the learning rate
    if lr_scheduler is not None:
        lr_scheduler.step()

    print(f"Epoch #{epoch} loss: {loss_hist.value}")

torch.save(model.state_dict(), 'fasterrcnn_resnet50_fpn.pth')
print("Next Test....")
#-----------------------------------------------------------------------------
# Step 6: model testing
#-----------------------------------------------------------------------------
images, targets, image_ids = next(iter(valid_data_loader))
images = list(img.to(device) for img in images)
targets = [{k: v.to(device) for k, v in t.items()} for t in targets]

boxes = targets[0]['boxes'].cpu().numpy().astype(np.int32)
sample = images[0].permute(1, 2, 0).cpu().numpy()

# Run the trained model in evaluation mode and move predictions to the CPU
model.eval()
cpu_device = torch.device("cpu")
outputs = model(images)
outputs = [{k: v.to(cpu_device) for k, v in t.items()} for t in outputs]

# Draw the ground-truth boxes of the first validation image
fig, ax = plt.subplots(1, 1, figsize=(16, 8))
for box in boxes:
    cv2.rectangle(sample,
                  (box[0], box[1]),
                  (box[2], box[3]),
                  (220, 0, 0), 3)
ax.set_axis_off()
ax.imshow(sample)
plt.show()
The model's training output is shown below; you can see the loss for each iteration. I recommend running this experiment in a well-equipped object detection environment.
The images of the test or validation set are then recognized, as shown below:
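For standalone inference after training, the saved weights can be reloaded as sketched below. This is not part of the original script: it reuses device and the images list from the test step above, and the 0.5 score threshold is an arbitrary choice.

# Rebuild the same architecture, load the saved weights, and filter low-confidence boxes
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=False)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, 2)
model.load_state_dict(torch.load('fasterrcnn_resnet50_fpn.pth', map_location=device))
model.to(device).eval()

with torch.no_grad():
    outputs = model(images)                      # images: list of 3xHxW tensors in [0, 1]

boxes = outputs[0]['boxes'].cpu().numpy()
scores = outputs[0]['scores'].cpu().numpy()
keep = scores >= 0.5                             # keep detections above the threshold
print(boxes[keep].astype(int), scores[keep])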
This brings the article to a close; I hope it has been helpful. Detailed comparative experiments and algorithm evaluation are left to the reader, and later articles will cover them in more depth.
Code and dataset download address:
"Start of Winter – Little Luo's Feelings"
Early winter has arrived. Yellowed ginkgo leaves drift down in the cold wind, a faint lingering fragrance comes and goes along Guanshan Avenue, and the chill makes passers-by pull their coats tighter. As time flows on, I have made it through the first year of my life. Although I cannot yet express myself in words, I have long grown familiar with this wonderful world and felt my family's love for me; of course, I occasionally run into troubles too, like the cold I caught these past two days. Whenever I feel unwell I cry out or get fussy, but luckily Mom and Grandma always pick me up right away and rock me in their arms until I calm down. Rocking and rocking, when I see Mom's loving eyes I always break into a smile, that silly grin that appears only in her arms, and then drift off into sweet dreams. Though I am still small, I can somehow sense how much Mom loves that smile, like the first cup of milk tea at the start of winter, flowing warmly into her heart.
"Ding-dong, ding..." I woke to a WeChat video call from far away. I couldn't wait to grab the phone but had no idea how to answer it, so Mom helped me press the green button. Seeing that half-familiar face with glasses and a stubbly beard, I called out "Baba" (that is how it sounds). Family affection is a rose forever in bloom; no matter how much time passes, its fragrance is never forgotten. On the call, Dad said that in his WeChat feed the north had welcomed its first snow, EDG had won the championship, and the trees and streets had quietly put on their white winter clothes overnight, so we should remember to keep warm. Mom replied that the wind in Guiyang had been especially bitter these two days, that little Luoluo's cold was almost better, and that Dad should remember to dress warmly too. Listening to the two of them chat about everyday things, I rolled and romped beside them, as if to prove that I am the most important member of the family. "Mom's sour-soup fish is ready, we're going to eat; you eat early too. Don't worry about Luoluo, take care of yourself." And with the steam rising from the old family pot, the call ended.
Perhaps, at one year old, I do not yet know what any of this means, but I know that watching Dad on the screen from Mom's arms makes me happy, as sweet as eating watermelon. When I am a little older, my primary school essay might include a line like: "This is the taste of home, the tender thread people trace through their lives, just like the joys and sorrows simmering in that pot of sour-soup fish over the stove. It is because of this taste that Mom fell in love with Dad, that Dad pursued Mom, and that they both love the most adorable me."
Perhaps my innocent little world will not remember any of this, but Dad and Mom will always remember my every day. Love is in the present; love is in the moment. Little Luoluo wishes everyone a happy Start of Winter.
(By: Eastmount, 2021-11-08, written at night in Wuhan, http://blog.csdn.net/eastmount/)
Thanks to the experts who shared their work; the references are as follows: