Using a Web Worker to optimize camera-video background blur built with tensorflow.js and bodyPix

Background

I added a background-blur feature to a WebRTC project, implemented with tensorflow.js and Google's ready-made bodyPix model. In practice there were two problems: first, the frame rate is low (still unsolved); second, after switching to another tab the blur becomes extremely laggy, almost frozen. Research showed that Chrome throttles the performance of hidden tabs, and the fix is to move the work into a Web Worker.
It took a week of stepping on pitfalls, but the optimization finally worked inside the Web Worker.

Pitfalls

1. At first I blamed requestAnimationFrame for the stall, but switching to setTimeout or setInterval, or simply looping the blur directly, changed nothing. The real bottleneck turned out to be the segmentPerson() call: after a tab switch, a single call took several seconds.
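To pin the stall on a specific call rather than on the scheduling method around it, a generic timing wrapper helps. This is a minimal sketch; `timeCall` is an illustrative name, not part of bodyPix or the project code:

```typescript
// Illustrative helper: time any promise-returning call. Wrapping the
// net.segmentPerson(...) call with this is how a multi-second stall can be
// separated from requestAnimationFrame/setTimeout scheduling issues.
async function timeCall<T>(label: string, fn: () => Promise<T>): Promise<{ result: T; ms: number }> {
    const start = performance.now();
    const result = await fn();
    const ms = performance.now() - start;
    console.log(`${label}: ${ms.toFixed(1)} ms`);
    return { result, ms };
}
```

Hypothetical usage: `const { result: segmentation } = await timeCall('segmentPerson', () => net.segmentPerson(video, config));`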

2. Since segmentPerson() is the expensive call, it had to be moved into a worker. But segmentPerson() normally takes the original video or canvas element, and DOM elements cannot be used in a worker. Digging through the source showed that bodyPix also accepts an OffscreenCanvas or an ImageData. OffscreenCanvas is a canvas designed for use inside Web Workers (an "off-screen canvas"), while ImageData is the interface describing the underlying pixel data of a region of a canvas element, obtainable directly via canvas.getContext('2d').getImageData(). I implemented both approaches and settled on the ImageData one.

This is the relevant signature from the bodyPix source:

segmentPerson(input: BodyPixInput, config?: PersonInferenceConfig): Promise<SemanticPersonSegmentation>;
export declare type ImageType = HTMLImageElement | HTMLCanvasElement | HTMLVideoElement | OffscreenCanvas;
export declare type BodyPixInput = ImageData | ImageType | tf.Tensor3D;

3. Both the OffscreenCanvas approach and the ImageData approach need a new canvas onto which the video frames are drawn in real time, and that canvas's width and height must match the video's, otherwise the returned segmentation is wrong. Before I set them, everything in the frame was blurred, myself included, and it took a long while to trace the problem back to the width/height mismatch.
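The sizing rule above can be captured in two small helpers. This is a sketch: the names are mine, and the video/ImageData shapes are reduced to the fields that matter:

```typescript
// The intermediate canvas must use the video's intrinsic dimensions
// (videoWidth/videoHeight), or the segmentation mask will not align with
// the frame and everything ends up blurred.
interface Size { width: number; height: number; }

function syncedSize(video: { videoWidth: number; videoHeight: number }): Size {
    return { width: video.videoWidth, height: video.videoHeight };
}

// Each frame posted to the worker as ImageData carries width * height * 4
// bytes (one RGBA quadruple per pixel), so a 640x480 frame is ~1.2 MB
// structured-cloned per call.
function imageDataByteLength({ width, height }: Size): number {
    return width * height * 4;
}
```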

WebWorker

1. Create my.worker.ts
2. Move bodyPix.load() from the main-thread code into the worker; the main thread just receives the segmentation values
3. Listen for the ImageData sent from the main thread, call net.segmentPerson(), and post the result back

import * as tfjs from '@tensorflow/tfjs';
import * as bodyPix from '@tensorflow-models/body-pix';
import BodyPix from './service/BodyPix';

const webWorker: Worker = self as any; 
let body = null;

webWorker.addEventListener('message', async (event) => { 

    const { action, data } = event.data;
    switch(action) {
        case 'init':
            body = new BodyPix();
            await body.loadAndPredict();
            webWorker.postMessage({inited: true});
            break;
        case 'imageData':
            body.net.segmentPerson(data.imageData, BodyPix.option.config).then((segmentation) => {
                requestAnimationFrame(() => {
                    webWorker.postMessage({segmentation});
                })
            })
            break;
    }
});
export default null as any;
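The messages flowing between the two sides can be described with a discriminated union. This is a sketch matching the code above; the ImageData and segmentation shapes are structural simplifications, not the exact library types:

```typescript
// Messages the main thread sends to the worker.
type ToWorker =
    | { action: 'init'; data: null }
    | { action: 'imageData'; data: { imageData: { width: number; height: number; data: Uint8ClampedArray } } };

// Messages the worker posts back.
type FromWorker =
    | { inited: true }
    | { segmentation: { width: number; height: number; data: Uint8Array } };

// Narrowing on `action` keeps the switch in the worker type-safe.
function describe(msg: ToWorker): string {
    return msg.action === 'init' ? 'load the model' : 'segment one frame';
}
```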

Main-thread code

Excerpted code:

async blurBackground (canvas: HTMLCanvasElement, video: HTMLVideoElement) {
    // The rendering canvas and the canvas used to get the segmentation must match
    // the video's width/height, otherwise the segmentation bodyPix returns is wrong
    const [ width, height ] = [ video.videoWidth, video.videoHeight ];
    video.width = width;
    canvas.width = width;
    video.height = height;
    canvas.height = height;
    this.workerCanvas = document.createElement('canvas');
    this.workerCanvas.width = video.width;
    this.workerCanvas.height = video.height;
    this.bluring = true;
    this.blurInWorker(video, canvas);
}

async drawImageData (newCanvas: HTMLCanvasElement, video: HTMLVideoElement) {
    const ctx = newCanvas.getContext('2d');
    ctx.drawImage(video, 0, 0, newCanvas.width, newCanvas.height);
    const imageData = ctx.getImageData(0, 0, newCanvas.width, newCanvas.height);
    this.worker.postMessage({ action: 'imageData', data: {imageData} });
}
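postMessage structured-clones the ImageData's pixel buffer on every frame. If the main thread does not reuse that ImageData afterwards, the copy can be avoided by listing the underlying ArrayBuffer as transferable. A sketch of building the postMessage arguments (the helper name and the FrameData stand-in are mine):

```typescript
// Structural stand-in for the DOM ImageData type.
interface FrameData { width: number; height: number; data: Uint8ClampedArray; }

// Build postMessage arguments so the pixel buffer is moved, not copied.
// Caution: transferring detaches the buffer on the sending side, so the
// ImageData must not be read again after posting.
function frameMessage(imageData: FrameData): { message: object; transfer: ArrayBuffer[] } {
    return {
        message: { action: 'imageData', data: { imageData } },
        transfer: [imageData.data.buffer as ArrayBuffer],
    };
}
```

Hypothetical usage in drawImageData: `const { message, transfer } = frameMessage(imageData); this.worker.postMessage(message, transfer);`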

async blurInWorker (video: HTMLVideoElement, canvas: HTMLCanvasElement) {
    this.worker = new myWorker('');
    this.worker.addEventListener('message', (event) => {
        if(event.data.inited) {
            this.drawImageData(this.workerCanvas, video);
        } else if(event.data.segmentation) {
            bodyPix.drawBokehEffect(
                canvas, video, event.data.segmentation, BodyPix.option.backgroundBlurAmount,
                BodyPix.option.edgeBlurAmount, BodyPix.option.flipHorizontal);
            this.bluring && this.drawImageData(this.workerCanvas, video);
        }
    })
    this.worker.postMessage({action: 'init', data: null});
}

async unBlurBackground (canvas: HTMLCanvasElement, video: HTMLVideoElement) {
    this.bluring = false;
    this.worker.terminate();
    this.worker = null;
    canvas?.getContext('2d')?.clearRect(0, 0, canvas.width, canvas.height);
    this.workerCanvas?.getContext('2d')?.clearRect(0, 0, this.workerCanvas.width, this.workerCanvas.height);
    this.workerCanvas = null;
}
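The loop formed by drawImageData and the worker's replies stops purely through the bluring flag: each segmentation reply triggers the next frame only while the flag is still set. The control flow can be sketched in isolation (illustrative class, detached from the DOM and the worker):

```typescript
// Mirrors the `this.bluring && this.drawImageData(...)` pattern: once the
// flag is cleared by unBlurBackground, replies stop triggering new frames.
class FrameLoop {
    private bluring = false;
    frames = 0;

    start() { this.bluring = true; }
    stop() { this.bluring = false; }

    // Called once per segmentation reply; returns whether the next
    // ImageData would be posted to the worker.
    onSegmentation(): boolean {
        if (!this.bluring) return false;
        this.frames += 1;
        return true;
    }
}
```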

OffscreenCanvas implementation

// In the worker
let offscreen = null;
let context = null;

        case 'offscreen':
            offscreen = new OffscreenCanvas(data.width, data.height);
            context = offscreen.getContext('2d');
            break;
        case 'imageBitmap':
            context.drawImage(data.imageBitmap, 0, 0);
            body.net.segmentPerson(offscreen, BodyPix.option.config).then((segmentation) => {
                requestAnimationFrame(() => {
                    webWorker.postMessage({segmentation});
                })
            });
            break;

// On the main thread
const [track] = video.srcObject.getVideoTracks();
const imageCapture = new ImageCapture(track);
imageCapture.grabFrame().then(imageBitmap => {
    this.worker.postMessage({ action: 'imageBitmap', data: { imageBitmap } });
});

The bodyPix frame-rate problem is still under investigation...

References

bodyPix on GitHub: https://github.com/tensorflow...
Background blur demo: https://segmentfault.com/a/11...
Other bodyPix usage: https://segmentfault.com/a/11...
Optimizing bodyPix with a Web Worker: https://segmentfault.com/a/11...
Web Worker usage: https://www.ruanyifeng.com/bl...
