Player Plugin Implementation Series: DirectShow

The DirectShow documentation is fairly detailed. What we actually implement here is a DirectShow source filter. The DirectShow SDK ships with sample code (under Extras\DirectShow\Samples\C++\DirectShow\); our project started as a copy of Filters\PushSource, which we then modified.

The main changes are as follows:

1. Registering our filter in setup.cpp

The original PushSource registers three filters, each with a single pin. We change this to register one filter with two pins, one for video and one for audio:

const AMOVIESETUP_MEDIATYPE sudOpPinTypes[] =
{
    {
        &MEDIATYPE_Video,       // Major type
        &MEDIASUBTYPE_NULL      // Minor type
    },
    {
        &MEDIATYPE_Audio,       // Major type
        &MEDIASUBTYPE_NULL      // Minor type
    }
};

const AMOVIESETUP_PIN sudMylibPin[] =
{
    {
        L"Output",          // Obsolete, not used.
        FALSE,              // Is this pin rendered?
        TRUE,               // Is it an output pin?
        FALSE,              // Can the filter create zero instances?
        TRUE,               // Does the filter create multiple instances?
        &CLSID_NULL,        // Obsolete.
        NULL,               // Obsolete.
        1,                  // Number of media types.
        &sudOpPinTypes[0]   // Pointer to media types.
    },
    {
        L"Output",          // Obsolete, not used.
        FALSE,              // Is this pin rendered?
        TRUE,               // Is it an output pin?
        FALSE,              // Can the filter create zero instances?
        TRUE,               // Does the filter create multiple instances?
        &CLSID_NULL,        // Obsolete.
        NULL,               // Obsolete.
        1,                  // Number of media types.
        &sudOpPinTypes[1]   // Pointer to media types.
    }
};

const AMOVIESETUP_FILTER sudMylibSource =
{
    &CLSID_MylibSource,     // Filter CLSID
    g_wszMylib,             // String name
    MERIT_DO_NOT_USE,       // Filter merit
    2,                      // Number of pins
    sudMylibPin             // Pin details
};

// List of class IDs and creator functions for the class factory. This
// provides the link between the OLE entry point in the DLL and an object
// being created. The class factory will call the static CreateInstance.
CFactoryTemplate g_Templates[] =
{
    {
        g_wszMylib,                     // Name
        &CLSID_MylibSource,             // CLSID
        CMylibSource::CreateInstance,   // Method to create an instance
        NULL,                           // Initialization function
        &sudMylibSource                 // Set-up information (for filters)
    },
};
int g_cTemplates = sizeof(g_Templates) / sizeof(g_Templates[0]);

Note that the original CFactoryTemplate g_Templates[3] must be changed to CFactoryTemplate g_Templates[]. I forgot this at first, and registering the filter with the system kept failing; only after stepping through the registration code and seeing that the CFactoryTemplate entries being supplied were invalid did I find the bug.
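
For context, registration itself goes through the standard baseclasses entry points, unchanged from the PushSource sample. AMovieDllRegisterServer2 walks g_Templates/g_cTemplates, which is why a mismatched array size makes registration fail:

STDAPI DllRegisterServer()
{
    return AMovieDllRegisterServer2(TRUE);
}

STDAPI DllUnregisterServer()
{
    return AMovieDllRegisterServer2(FALSE);
}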

2. Implementing the SourceFilter

Our SourceFilter needs to open a URL, something the sample code does not cover. The documentation shows that the filter must expose an IFileSourceFilter interface; only then will the graph builder pass the URL to it.

STDMETHODIMP CMylibSource::Load(LPCOLESTR pszFileName, const AM_MEDIA_TYPE *pmt)
{
    Mylib_Open(pszFileName);
    return S_OK;
}

STDMETHODIMP CMylibSource::GetCurFile(LPOLESTR *ppszFileName, AM_MEDIA_TYPE *pmt)
{
    return E_FAIL;
}

STDMETHODIMP CMylibSource::NonDelegatingQueryInterface(REFIID riid, void **ppv)
{
    // Do we have this interface?
    if (riid == IID_IFileSourceFilter) {
        return GetInterface((IFileSourceFilter *) this, ppv);
    } else {
        return CSource::NonDelegatingQueryInterface(riid, ppv);
    }
}

IFileSourceFilter has two methods, Load and GetCurFile. Load simply calls our own open routine, and GetCurFile just returns a failure code. We then hand out the new interface in NonDelegatingQueryInterface when it is queried.

After adding IFileSourceFilter to the class's inheritance list, the compiler complains that some IUnknown methods are not implemented; adding "DECLARE_IUNKNOWN" to the class definition in the header solves that.
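
Putting the pieces together, the filter declaration ends up looking roughly like this. This is a sketch: the base-class constructor call follows the PushSource sample, and everything else about CMylibSource is assumed.

class CMylibSource : public CSource, public IFileSourceFilter
{
public:
    DECLARE_IUNKNOWN    // supplies the IUnknown plumbing for the extra interface

    static CUnknown * WINAPI CreateInstance(LPUNKNOWN pUnk, HRESULT *phr);

    // IFileSourceFilter
    STDMETHODIMP Load(LPCOLESTR pszFileName, const AM_MEDIA_TYPE *pmt);
    STDMETHODIMP GetCurFile(LPOLESTR *ppszFileName, AM_MEDIA_TYPE *pmt);

    // Hand out IFileSourceFilter in addition to what CSource already exposes.
    STDMETHODIMP NonDelegatingQueryInterface(REFIID riid, void **ppv);

private:
    CMylibSource(LPUNKNOWN pUnk, HRESULT *phr)
        : CSource(NAME("MylibSource"), pUnk, CLSID_MylibSource) {}
};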

3. Implementing the pins

Because our media SDK library handles audio and video through one unified interface, a single pin class is enough; at run time two instances are created, one for video and one for audio.
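
Where the two instances get created is up to us; one natural place is after Mylib_Open has reported the streams, e.g. at the end of Load(). The stream-index constructor argument below is our own assumption about CMylibPin:

// CSourceStream's constructor calls CSource::AddPin, so "new" alone is enough
// to attach each pin to the filter.
HRESULT hr = S_OK;
new CMylibPin(&hr, this, L"Video", 0 /* stream index */);
new CMylibPin(&hr, this, L"Audio", 1 /* stream index */);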

If seeking is not needed, it is enough to implement GetMediaType, DecideBufferSize, and FillBuffer. Seeking support is more involved, so let's look at the non-seeking case first, starting with GetMediaType:

HRESULT CMylibPin::GetMediaType(CMediaType *pMediaType)
{
    CAutoLock cAutoLock(m_pFilter->pStateLock());
    CheckPointer(pMediaType, E_POINTER);
    return FillMediaType(pMediaType);
}

GetMediaType has to fill in pMediaType with the DirectShow description of our media's encoding. The helper FillMediaType calls FillAAC1 or FillAVC1 depending on whether the current stream is audio or video. AAC and AVC are the codec formats; this is how they are described to DirectShow:

static HRESULT FillAVC1(CMediaType *pMediaType, Mylib_StreamInfoEx const * m_info)
{
    // Extra bytes hold the SPS/PPS data parsed out of the avcC record below;
    // maybe sizeof(MPEG2VIDEOINFO) + m_info->format_size - 7 - 4
    MPEG2VIDEOINFO * pmvi = (MPEG2VIDEOINFO *)pMediaType->AllocFormatBuffer(
        sizeof(MPEG2VIDEOINFO) + m_info->format_size - 11);
    if (pmvi == 0)
        return E_OUTOFMEMORY;
    ZeroMemory(pmvi, pMediaType->cbFormat);

    pMediaType->SetSubtype(&MEDIASUBTYPE_AVC1);
    pMediaType->SetFormatType(&FORMAT_MPEG2Video);
    pMediaType->SetTemporalCompression(TRUE);
    pMediaType->SetVariableSize();

    VIDEOINFOHEADER2 * pvi = &pmvi->hdr;
    SetRectEmpty(&(pvi->rcSource));
    SetRectEmpty(&(pvi->rcTarget));
    pvi->AvgTimePerFrame = UNITS / m_info->video_format.frame_rate;

    BITMAPINFOHEADER * bmi = &pvi->bmiHeader;
    bmi->biSize = sizeof(BITMAPINFOHEADER);
    bmi->biWidth = m_info->video_format.width;
    bmi->biHeight = m_info->video_format.height;
    bmi->biBitCount = 0;
    bmi->biPlanes = 1;
    bmi->biCompression = 0x31435641; // FOURCC 'AVC1'

    // format_buffer holds an AVCDecoderConfigurationRecord (avcC); copy the
    // length-prefixed SPS and PPS sets into dwSequenceHeader.
    //pmvi->dwStartTimeCode = 0;
    pmvi->cbSequenceHeader = m_info->format_size - 7;
    BYTE * s = (BYTE *)pmvi->dwSequenceHeader;
    My_uchar const * p = m_info->format_buffer;
    //My_uchar const * e = p + stream_info.format_size;
    My_uchar Version = *p++;
    My_uchar Profile = *p++;
    My_uchar Profile_Compatibility = *p++;
    My_uchar Level = *p++;
    My_uchar Nalu_Length = 1 + ((*p++) & 3);    // NALU length-field size in bytes
    size_t n = (*p++) & 31;                     // number of SPS entries
    My_uchar const * q = p;
    for (size_t i = 0; i < n; ++i) {
        size_t l = (*p++);
        l = (l << 8) + (*p++);
        p += l;
    }
    memcpy(s, q, p - q);
    s += p - q;
    n = (*p++) & 31;                            // number of PPS entries
    q = p;
    for (size_t i = 0; i < n; ++i) {
        size_t l = (*p++);
        l = (l << 8) + (*p++);
        p += l;
    }
    memcpy(s, q, p - q);
    s += p - q;

    pmvi->dwProfile = Profile;
    pmvi->dwLevel = Level;
    pmvi->dwFlags = Nalu_Length;
    return S_OK;
}
static HRESULT FillAAC1(CMediaType *pMediaType, Mylib_StreamInfoEx const * m_info)
{
    WAVEFORMATEX * wf = (WAVEFORMATEX *)pMediaType->AllocFormatBuffer(
        sizeof(WAVEFORMATEX) + m_info->format_size);
    if (wf == 0)
        return E_OUTOFMEMORY;
    ZeroMemory(wf, pMediaType->cbFormat);

    pMediaType->SetSubtype(&MEDIASUBTYPE_RAW_AAC1);
    pMediaType->SetFormatType(&FORMAT_WaveFormatEx);
    pMediaType->SetTemporalCompression(TRUE);
    pMediaType->SetVariableSize();

    // cbSize is the number of extra bytes appended after the WAVEFORMATEX header
    // (the AAC AudioSpecificConfig), not the size of the header itself.
    wf->cbSize = (WORD)m_info->format_size;
    wf->nChannels = m_info->audio_format.channel_count;
    wf->nSamplesPerSec = m_info->audio_format.sample_rate;
    wf->wBitsPerSample = m_info->audio_format.sample_size;
    wf->wFormatTag = WAVE_FORMAT_RAW_AAC1;
    memcpy(wf + 1, m_info->format_buffer, m_info->format_size);
    return S_OK;
}

DecideBufferSize is straightforward: we use a buffer size of 2 MB per sample for video and 2 KB for audio, with 20 buffers in both cases.

HRESULT CMylibPin::DecideBufferSize(IMemAllocator *pAlloc, ALLOCATOR_PROPERTIES *pRequest)
{
    HRESULT hr;
    CAutoLock cAutoLock(m_pFilter->pStateLock());
    CheckPointer(pAlloc, E_POINTER);
    CheckPointer(pRequest, E_POINTER);

    // Ensure a minimum number of buffers
    if (pRequest->cBuffers == 0)
    {
        pRequest->cBuffers = 20;
    }
    if (m_info->type == mylib_video)
        pRequest->cbBuffer = 2 * 1024 * 1024;
    else
        pRequest->cbBuffer = 2 * 1024;

    ALLOCATOR_PROPERTIES Actual;
    hr = pAlloc->SetProperties(pRequest, &Actual);
    if (FAILED(hr))
    {
        return hr;
    }
    // Is this allocator unsuitable?
    if (Actual.cbBuffer < pRequest->cbBuffer)
    {
        return E_FAIL;
    }
    return S_OK;
}

FillBuffer is slightly trickier. Our media SDK library outputs audio and video interleaved, but each pin's FillBuffer pulls only its own kind of data, so we add a sample cache: when, say, a video frame is needed but an audio frame is read, the audio frame is cached until a video frame turns up, and the next time the audio pin asks for data it can be served straight from the cache. Only FillBuffer is shown here, not the cache code itself (a sketch of the cache idea follows the function).

HRESULT CMylibPin::FillBuffer(IMediaSample *pSample)
{
    CheckPointer(pSample, E_POINTER);
    CAutoLock cAutoLockShared(&m_cSharedState);

    // Pull the next sample for this stream from the shared cache.
    Mylib_Sample sample;
    sample.stream_index = m_nIndex;
    HRESULT ret = m_SampleCache->ReadSample(sample, &m_bCancel);
    if (ret == S_OK) {
        BYTE *pData;
        long cbData;
        pSample->GetPointer(&pData);
        cbData = pSample->GetSize();
        if (cbData > sample.buffer_length)
            cbData = sample.buffer_length;
        memcpy(pData, sample.buffer, cbData);
        pSample->SetActualDataLength(cbData);

        // Deliver times relative to m_rtStart; clamp it down if a sample arrives earlier.
        if (sample.start_time * UINTS_MICROSECOND < m_rtStart)
            m_rtStart = sample.start_time * UINTS_MICROSECOND;
        REFERENCE_TIME rtStart = sample.start_time * UINTS_MICROSECOND - m_rtStart;
        REFERENCE_TIME rtStop  = rtStart + m_nSampleDuration;
        pSample->SetTime(&rtStart, &rtStop);
        pSample->SetSyncPoint(sample.is_sync);
        pSample->SetDiscontinuity(m_bDiscontinuty);
        m_bDiscontinuty = FALSE;
    }
    return ret;
}

4. Supporting seeking

To support seeking, the pin (note: the pin, not the source filter) needs to also inherit from CSourceSeeking and implement the following three virtual functions:

virtual HRESULT ChangeStart();
virtual HRESULT ChangeStop() {return S_OK;};
virtual HRESULT ChangeRate() {return E_FAIL;};
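
The pin declaration then looks roughly like this. It is a sketch: the CSourceSeeking constructor is given the same lock that FillBuffer uses (assumed to be m_cSharedState), the stream-index argument is our own, and other members are omitted.

class CMylibPin : public CSourceStream, public CSourceSeeking
{
public:
    CMylibPin(HRESULT *phr, CSource *pFilter, LPCWSTR pName, int nIndex)
        : CSourceStream(NAME("CMylibPin"), phr, pFilter, pName)
        , CSourceSeeking(NAME("CMylibPin"), (IPin *)this, phr, &m_cSharedState)
        , m_nIndex(nIndex)
    {
    }

    STDMETHODIMP NonDelegatingQueryInterface(REFIID riid, void **ppv);

    // CSourceSeeking overrides
    HRESULT ChangeStart();
    HRESULT ChangeStop()  { return S_OK; }
    HRESULT ChangeRate()  { return E_FAIL; }

private:
    CCritSec m_cSharedState;    // also guards FillBuffer, as shown above
    int m_nIndex;               // which stream this pin delivers
};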

Don't forget to expose the new interface as well:

STDMETHODIMP CMylibPin::NonDelegatingQueryInterface(REFIID riid, void **ppv)
{
    if (riid == IID_IMediaSeeking && m_bSeekable)
    {
        return CSourceSeeking::NonDelegatingQueryInterface(riid, ppv);
    }
    return CSourceStream::NonDelegatingQueryInterface(riid, ppv);
}

ChangeRate changes the playback rate, which we do not support, so it simply returns an error. ChangeStop changes the stop position, which is of little use for normal viewing, so returning S_OK is fine. As DirectShow requires, after the start position changes we must flush the audio/video data already queued downstream (every filter in the graph gets notified), stop pushing, and then restart from the new position:

HRESULT CMylibPin::ChangeStart()
{
    if (ThreadExists())
    {
        OutputDebugString(_T("UpdateFromSeek Cancel\r\n"));
        m_bCancel = TRUE;
        DeliverBeginFlush();
        // Shut down the thread and stop pushing data.
        Stop();
        m_SampleCache->Seek(m_rtStart);
        m_bDiscontinuty = TRUE;
        OutputDebugString(_T("UpdateFromSeek Resume\r\n"));
        m_bCancel = FALSE;
        DeliverEndFlush();
        // Restart the thread and start pushing data again.
        Pause();
    }
    return S_OK;
}

5. Testing

With the code written, the build script already registers the filter with the system automatically. One thing remains: the system has to know that URLs using our vod:// protocol should be opened with this SourceFilter. That is done by importing the following registry entries:

[HKEY_CLASSES_ROOT\vod]
"SourceFilter"="{6A881765-07FA-404b-B9B8-6ED429385ECC}"

Now it can be tested with GraphEdit by opening a URL such as vod://xxxxxxx. Any other DirectShow-based player can play this kind of URL too; try Windows Media Player, which, somewhat surprisingly, plays it as well.
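
For a quick programmatic check outside GraphEdit, a few lines against the standard DirectShow graph-building API are enough; the URL is a placeholder and error handling is omitted (link with strmiids.lib and ole32.lib):

#include <windows.h>
#include <dshow.h>

int main()
{
    CoInitialize(NULL);
    IGraphBuilder *pGraph = NULL;
    IMediaControl *pControl = NULL;
    CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, (void **)&pGraph);
    pGraph->QueryInterface(IID_IMediaControl, (void **)&pControl);

    // The graph builder reads HKEY_CLASSES_ROOT\vod\SourceFilter to pick our filter.
    HRESULT hr = pGraph->RenderFile(L"vod://xxxxxxx", NULL);
    if (SUCCEEDED(hr))
    {
        pControl->Run();
        MessageBox(NULL, TEXT("Playing; close this box to stop."), TEXT("vod test"), MB_OK);
    }
    pControl->Release();
    pGraph->Release();
    CoUninitialize();
    return 0;
}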
