(Most blog posts out there have the sender or the receiver double as the server, which is just silly...)
The framework is Django, and the video sender is written in Python (so it's easy to combine with ROS); for the client display, plain HTML will have to do for now, since writing a Qt or Swift app is too much work...
ROS2 does ship with DDS for communication, but I've never used it and couldn't figure out how to make it work over a WAN, so I chose to reinvent the wheel and set up a Django server.
For two-way communication I picked WebSocket; the annoying part is that Django itself doesn't speak WebSocket, so I implement it through Channels.
You should first run the official Channels tutorial demo: https://channels.readthedocs.io/en/latest/tutorial/part_1.html
I only worked through the first three tutorials, and my code is modified from theirs, so I won't explain everything from scratch (maybe later when I have time);
also, that demo implements text transport (a chat room), so with a few small tweaks it can carry commands and status messages as well. Handy, right?
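For example, reusing the chat socket to carry robot commands could look like the sketch below. This is hypothetical: it assumes the tutorial's unmodified ChatConsumer (which expects a JSON body with a "message" key), the `websockets` package, and a room named wms; `make_command` and the parameter names are my own inventions.

```python
import asyncio
import json

def make_command(name, **params):
    # The tutorial's ChatConsumer does json.loads(text_data)["message"],
    # so we pack the command dict inside the "message" field.
    return json.dumps({"message": json.dumps({"cmd": name, **params})})

async def send_command():
    import websockets  # pip install websockets

    async with websockets.connect("ws://127.0.0.1:8000/ws/chat/wms/") as ws:
        await ws.send(make_command("move", linear=0.2, angular=0.0))
        reply = await ws.recv()  # the room group echoes the message back
        print(reply)

if __name__ == "__main__":
    asyncio.run(send_command())
```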
From here on I assume you've finished the demo; my Python version is 3.8 throughout.
First, create another app, which I name video:
python manage.py startapp video
and register it in mysite/settings.py:
INSTALLED_APPS = [
    'channels',
    'chat',
    'video',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]
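One thing worth repeating from the tutorial: the `group_add`/`group_send` calls in the consumers below only work if a channel layer is configured. If you finished tutorial part 2 you already have this; for reference, a minimal dev setup in mysite/settings.py looks like the following (the in-memory layer is fine for a single-process dev server; use channels_redis in production):

```python
# mysite/settings.py (excerpt) — required for channel_layer.group_* calls
ASGI_APPLICATION = 'mysite.asgi.application'

CHANNEL_LAYERS = {
    'default': {
        # In-memory layer: dev/testing only, not shared across processes
        'BACKEND': 'channels.layers.InMemoryChannelLayer',
    },
}
```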
Then add routes in mysite/urls.py and video/urls.py:
# mysite/urls.py
from django.contrib import admin
from django.urls import include, path

urlpatterns = [
    path('chat/', include('chat.urls')),
    path('video/', include('video.urls')),
    path('admin/', admin.site.urls),
]
# video/urls.py
from django.urls import path

from . import views

urlpatterns = [
    path('', views.index, name='index'),
    path('<str:v_name>/', views.v_name, name='v_name'),
]
Edit video/views.py:
# video/views.py
from django.shortcuts import render

# Create your views here.
def index(request):
    return render(request, 'video/index.html')

def v_name(request, v_name):
    return render(request, 'video/video.html', {
        'v_name': v_name
    })
Add video/consumers.py:
# video/consumers.py
import json

from channels.generic.websocket import AsyncWebsocketConsumer

class VideoConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        self.room_name = self.scope['url_route']['kwargs']['v_name']
        self.room_group_name = 'video_%s' % self.room_name
        # print(self.room_name)

        # Join room group
        await self.channel_layer.group_add(
            self.room_group_name,
            self.channel_name
        )
        await self.accept()

    async def disconnect(self, close_code):
        # Leave room group
        await self.channel_layer.group_discard(
            self.room_group_name,
            self.channel_name
        )

    # Receive message from WebSocket
    async def receive(self, text_data):
        # Send message to room group
        await self.channel_layer.group_send(
            self.room_group_name,
            {
                'type': 'video_message',
                'message': text_data,
            }
        )

    # Receive message from room group
    async def video_message(self, event):
        message = event['message']
        # Send message to WebSocket
        await self.send(text_data=json.dumps({
            'message': message
        }))
We'll come back to this file in a moment; as you can probably tell, it's basically copied from chat/consumers.py.
Next, create routing.py in the mysite app:
# mysite/routing.py
from django.urls import re_path

import chat.consumers
import video.consumers

websocket_urlpatterns = [
    re_path(r'ws/chat/(?P<room_name>\w+)/$', chat.consumers.ChatConsumer.as_asgi()),
    re_path(r'ws/video/(?P<v_name>\w+)/$', video.consumers.VideoConsumer.as_asgi()),
]
and modify asgi.py:
# mysite/asgi.py
import os

from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from django.core.asgi import get_asgi_application

# import chat.routing
# import video.routing
from . import routing

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')

application = ProtocolTypeRouter({
    "http": get_asgi_application(),
    "websocket": AuthMiddlewareStack(
        URLRouter(
            routing.websocket_urlpatterns
        )
    ),
})
This is done so that chat and video can work at the same time (transmitting video and commands/status in parallel).
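As a quick sanity check that both routes coexist, a client can hold the chat and video sockets open concurrently. This is a sketch, not part of the project: it assumes the `websockets` package and a room/stream both named wms, and `ws_url`/`pump` are my own names.

```python
import asyncio

def ws_url(host, kind, name):
    # Build a URL matching the patterns in mysite/routing.py
    return f"ws://{host}/ws/{kind}/{name}/"

async def pump(url):
    import websockets  # pip install websockets

    async with websockets.connect(url) as ws:
        async for raw in ws:
            print(url, "->", len(raw), "bytes")

async def run_both(host="127.0.0.1:8000"):
    # chat carries commands/status, video carries frames, concurrently
    await asyncio.gather(
        pump(ws_url(host, "chat", "wms")),
        pump(ws_url(host, "video", "wms")),
    )

if __name__ == "__main__":
    asyncio.run(run_both())
```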
Here I'll only explain the code I changed in video/consumers.py; everything else works the same way as chat.
Look at the receive() function:
async def receive(self, text_data):
    await self.channel_layer.group_send(
        self.room_group_name,
        {
            'type': 'video_message',
            'message': text_data,
        }
    )
Since I split the video into individual frames and encode each frame as base64 for transmission, the value received here is text_data (text, not binary).
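Concretely, each frame is just JPEG bytes wrapped in a base64 data URI, so both directions come down to a few lines of stdlib code (the helper names below are mine):

```python
import base64

PREFIX = "data:image/jpg;base64,"

def frame_to_data_uri(jpg_bytes):
    # Sender side: raw JPEG bytes -> text payload for websocket.send()
    return PREFIX + base64.b64encode(jpg_bytes).decode()

def data_uri_to_frame(uri):
    # Receiver side: strip the prefix and recover the JPEG bytes
    assert uri.startswith(PREFIX)
    return base64.b64decode(uri[len(PREFIX):])
```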
send_video.py:
# send_video.py
import asyncio
import base64

import cv2
import numpy as np
import websockets

capture = cv2.VideoCapture(0)
if not capture.isOpened():
    print('quit')
    quit()
ret, frame = capture.read()
encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 95]

# Send camera frames to the server in real time
async def send_video(websocket):
    global ret, frame
    while True:
        # non-blocking sleep so the event loop keeps running
        await asyncio.sleep(0.1)
        result, imgencode = cv2.imencode('.jpg', frame, encode_param)
        data = np.array(imgencode)
        img = data.tobytes()
        # transmit as base64 text
        img = base64.b64encode(img).decode()
        await websocket.send("data:image/jpg;base64," + img)
        ret, frame = capture.read()

async def main_logic():
    async with websockets.connect('ws://127.0.0.1:8000/ws/video/wms/') as websocket:
        await send_video(websocket)

asyncio.run(main_logic())
This reads the camera with cv2, encodes each frame, and sends it to the Django backend; note that I hard-code the connection to ws://127.0.0.1:8000/ws/video/wms/ to make debugging easier.
video.html:
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Video</title>
</head>
<body>
    <div>
        <h1>Video</h1>
        <div>
            <div>
                <img id="resImg" src="" />
            </div>
            <script src="http://apps.bdimg.com/libs/jquery/2.1.1/jquery.min.js"></script>
            <script>
                const ws = new WebSocket(
                    'ws://'
                    + window.location.host
                    + '/ws/video/'
                    + 'wms'
                    + '/'
                );
                ws.onmessage = function(evt) {
                    const v_data = JSON.parse(evt.data);
                    $("#resImg").attr("src", v_data.message);
                    //console.log("Received Message: " + v_data.message);
                    // ws.close();
                };
                ws.onclose = function(evt) {
                    console.log("Connection closed.");
                };
            </script>
        </div>
    </div>
</body>
</html>
The receiver simply parses the JSON, base64-decodes the jpg image, and displays it; later I may implement this in Swift (I'm not good at Qt and my attempt ran too slowly).
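If you'd rather skip the browser entirely, a Python receiver is only a little more code. A sketch, assuming the `websockets` and `opencv-python` packages and the same hard-coded stream URL; `decode_message` and `show_stream` are my own names:

```python
import asyncio
import base64
import json

def decode_message(raw):
    # The consumer wraps each frame as {"message": "data:image/jpg;base64,..."}
    uri = json.loads(raw)["message"]
    return base64.b64decode(uri.partition(",")[2])

async def show_stream(url="ws://127.0.0.1:8000/ws/video/wms/"):
    import cv2          # pip install opencv-python
    import numpy as np
    import websockets   # pip install websockets

    async with websockets.connect(url) as ws:
        async for raw in ws:
            buf = np.frombuffer(decode_message(raw), dtype=np.uint8)
            frame = cv2.imdecode(buf, cv2.IMREAD_COLOR)
            cv2.imshow("video", frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break

if __name__ == "__main__":
    asyncio.run(show_stream())
```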
Run inside the Django project:
python manage.py runserver
then run the sender, send_video.py:
python send_video.py
and finally open video.html by visiting 127.0.0.1:8000/video/wms in the browser:
YADAZE.
Local testing is done; all that's left is deployment.
It's past four in the morning again...
Project deployment: Django Channels with nginx + uwsgi + daphne