Reading video frames from an RTSP camera and exporting them as JPG images

Goal: implement an interface that reads video frames from a Hikvision camera over RTSP and exports them as JPG images.

Imports:

numpy, cv2, PIL, io, base64

Key APIs:

  1. cv2.VideoCapture cap
  2. PIL.Image img
  3. io.BytesIO

Workflow

  1. Connect to the camera
    url = "rtsp://username:password@ip:port/stream-path"
    cap = cv2.VideoCapture(url)
    Pay attention to the stream path; on my Hikvision camera it is
    "/Streaming/Channels/<channel>0<quality-index>".
    With channel 1 and quality index 1, the path is
    "/Streaming/Channels/101"

  2. Read a video frame
    ret, frame = cap.read()

  3. Convert the frame to an image
    outbuff = io.BytesIO()
    img = Image.fromarray(np.uint8(frame))
    img.save(outbuff, format='JPEG')

  4. Get the image data
    outbuff.getbuffer()
    s = str(base64.b64encode(outbuff.getbuffer()), encoding='utf-8')
    While we are at it, wrap it as a base64 data URI: 'data:image/jpeg;base64,{img}'.format(img=s)
    (A combined sketch of these four steps follows the list.)
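Putting the four steps together, here is a minimal end-to-end sketch. The URL below is a placeholder; substitute your own credentials, IP, port, and channel path.

    import base64
    import io

    import cv2
    import numpy as np
    from PIL import Image

    # placeholder URL: replace user, password, IP, port and channel with your own
    url = 'rtsp://user:password@192.168.1.64:554/Streaming/Channels/101'
    cap = cv2.VideoCapture(url)                  # step 1: open the RTSP stream

    ret, frame = cap.read()                      # step 2: grab one frame (BGR ndarray)
    if ret:
        outbuff = io.BytesIO()
        img = Image.fromarray(np.uint8(frame))   # step 3: ndarray -> PIL image
        img.save(outbuff, format='JPEG')         # encode as JPEG into the buffer
        s = str(base64.b64encode(outbuff.getbuffer()), encoding='utf-8')
        datauri = 'data:image/jpeg;base64,{img}'.format(img=s)   # step 4: data URI
        print(datauri[:80])
        # note: colors will look swapped (BGR vs RGB); see Issue 1 below

    cap.release()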

Full code:

    def urlformat(self, kwargs):
        return 'rtsp://admin:ZjXf1234567890@{srcip}:{srcport}/Streaming/Channels/{srccard}'.format(**kwargs)

    def _process(self, kwargs):
        print(kwargs)
        framecount = int(kwargs.get('framecount'))
        url = self.urlformat(kwargs)
        if self.url != url:
            # the target stream changed: release the old capture and open a new one
            if self.cap:
                self.cap.release()
            self.cap = cv2.VideoCapture(url)
            self.url = url
        cap = self.cap
        ret, frame = cap.read()
        while ret and framecount:
            outbuff = io.BytesIO()
            img = Image.fromarray(np.uint8(frame))
            img.save(outbuff, format='JPEG')
            s = str(base64.b64encode(outbuff.getbuffer()), encoding='utf-8')

            yield 'data:image/jpeg;base64,{img}'.format(img=s)
            ret, frame = cap.read()
            framecount -= 1

As a bonus, the generator reads a specified number of consecutive frames;
framecount is used as the frame counter.
Usage example (note that _process takes a single dict and must be called on an object that provides self.url and self.cap; see the wrapper sketch below):

    for imgbs64 in grabber._process(
            dict(srcip="localhost", srcport="554", srccard="101", framecount=50)):
        print(imgbs64[:64])
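The original post does not show the class that holds urlformat() and _process(), so the object used above is an assumption. A minimal hypothetical wrapper (RtspGrabber is my own name) could look like this:

    import base64
    import io

    import cv2
    import numpy as np
    from PIL import Image


    class RtspGrabber:
        """Hypothetical holder for the urlformat/_process methods shown above."""

        def __init__(self):
            self.url = None   # last RTSP url that was opened
            self.cap = None   # cached cv2.VideoCapture instance

        # paste urlformat() and _process() from the full code here


    grabber = RtspGrabber()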

Issue 1: the exported images have the wrong colors

OpenCV returns frames in BGR channel order while PIL expects RGB, so convert the frame before building the image. Use cv2's color-conversion function with the proper conversion code (cv2.COLOR_BGR2RGB, not the cv2.IMREAD_ANYCOLOR imread flag):
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
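In _process, the conversion belongs right before Image.fromarray. A sketch of the corrected loop body:

        while ret and framecount:
            outbuff = io.BytesIO()
            # convert BGR (OpenCV) to RGB (PIL) so the JPEG colors come out right
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            img = Image.fromarray(np.uint8(frame))
            img.save(outbuff, format='JPEG')
            s = str(base64.b64encode(outbuff.getbuffer()), encoding='utf-8')

            yield 'data:image/jpeg;base64,{img}'.format(img=s)
            ret, frame = cap.read()
            framecount -= 1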
