The OpenCV GPU module is a set of classes and functions to utilize GPU computational capabilities. It is implemented using the NVIDIA CUDA Runtime API and supports only NVIDIA GPUs.
1. getCudaEnabledDeviceCount:returns the number of installed CUDA-enabled devices (a basic device-query and GpuMat sketch appears after this list);
2. setDevice:sets a device and initializes it for the current thread;
3. getDevice:returns the current device index set or initialized by default;
4. resetDevice:explicitly destroys and cleans up all resources associated with the current device in the current process;
5. FeatureSet:enumeration providing GPU computing features;
6. class::TargetArchs:class providing a set of static methods to check what NVIDIA card architecture the GPU module was built for;
7. class::DeviceInfo:class providing functionality for querying the specified GPU properties;
8. DeviceInfo::name:returns the device name;
9. DeviceInfo::majorVersion:returns the major compute capability version;
10. DeviceInfo::minorVersion:returns the minor compute capability version;
11. DeviceInfo::multiProcessorCount:returns the number of streaming multiprocessors;
12. DeviceInfo::freeMemory:returns the amount of free memory in bytes;
13. DeviceInfo::totalMemory:returns the amount of total memory in bytes;
14. DeviceInfo::supports:provides information on GPU feature support;
15. DeviceInfo::isCompatible:checks the GPU module and device compatibility;
16. DeviceInfo::deviceID:returns the system index of the GPU device, starting from 0;
17. struct::PtrStepSz:lightweight class encapsulating pitched memory on a GPU and passed to nvcc-compiled code(CUDA kernels);
18. struct::PtrStep:structure similar to gpu::PtrStepSz but containing only a pointer and row step;
19. class::GpuMat:base storage class for GPU memory with reference counting. Its interface matches the Mat interface;
20. createContinuous:creates a continuous matrix in the GPU memory;
21. ensureSizeIsEnough:ensures that the size of a matrix is big enough and the matrix has a proper type;
22. registerPageLocked:page-locks the memory of a matrix and maps it for the device(s);
23. unregisterPageLocked:unmaps the memory of a matrix and makes it pageable again;
24. class::CudaMem:class with reference counting wrapping special memory type allocation functions from CUDA. Its interface is also Mat()-like but with additional memory type parameters;
25. CudaMem::createMatHeader:creates a header without reference counting to gpu::CudaMem data;
26. CudaMem::createGpuMatHeader:maps CPU memory to GPU address space and creates the gpu::GpuMat header without reference counting for it;
27. CudaMem::canMapHostMemory:returns true if the current hardware supports address space mapping and ALLOC_ZEROCOPY memory allocation;
28. class::Stream:this class encapsulates a queue of asynchronous calls. Some functions have overloads with the additional gpu::Stream parameter (an asynchronous pipeline sketch appears after this list);
29. Stream::queryIfComplete:returns true if the current stream queue is finished, otherwise, it returns false;
30. Stream::waitForCompletion:blocks the current CPU thread until all operations in the stream are complete;
31. Stream::enqueueDownload:copies data from device to host;
32. Stream::enqueueUpload:copies data from host to device;
33. Stream::enqueueCopy:copies data from device to device;
34. Stream::enqueueMemSet:initializes or sets device memory to a value;
35. Stream::enqueueConvert:converts the matrix type, e.g. from float to uchar, depending on the destination type;
36. Stream::enqueueHostCallback:adds a callback to be called on the host after all currently enqueued items in the stream have completed;
37. struct::StreamAccessor:class that enables getting cudaStream_t from gpu::Stream and is declared in stream_accessor.hpp because it is the only public header that depends on the CUDA Runtime API;
38. gemm(cv::gemm):performs generalized matrix multiplication;
39. transpose(cv::transpose):transposes a matrix;
40. flip(cv::flip):flips a 2D matrix around vertical, horizontal, or both axes;
41. LUT(cv::LUT):transforms the source matrix into the destination matrix using the given look-up table:dst(I) = lut(src(I));
42. merge(cv::merge):makes a multi-channel matrix out of several single-channel matrices;
43. split(cv::split):copies each plane of a multi-channel matrix into an array;
44. magnitude(cv::magnitude):computes magnitudes of complex matrix elements;
45. magnitudeSqr:computes squared magnitudes of complex matrix elements;
46. phase(cv::phase):computes polar angles of complex matrix elements;
47. cartToPolar(cv::cartToPolar):converts Cartesian coordinates into polar;
48. polarToCart(cv::polarToCart):converts polar coordinates into Cartesian;
49. normalize(cv::normalize):normalizes the norm or value range of an array;
50. add(cv::add):computes a matrix-matrix or matrix-scalar sum;
51. subtract(cv::subtract):computes a matrix-matrix or matrix-scalar difference;
52. multiply(cv::multiply):computes a matrix-matrix or matrix-scalar per-element product;
53. divide(cv::divide):computes a matrix-matrix or matrix-scalar division;
54. addWeighted(cv::addWeighted):computes the weighted sum of two arrays;
55. abs(cv::abs):computes an absolute value of each matrix element;
56. sqr:computes a square value of each matrix element;
57. sqrt(cv::sqrt):computes a square root of each matrix element;
58. exp(cv::exp):computes an exponent of each matrix element;
59. log(cv::log):computes a natural logarithm of absolute value of each matrix element;
60. pow(cv::pow):raises every matrix element to a power;
61. absdiff(cv::absdiff):computes per-element absolute difference of two matrices(or of a matrix and scalar);
62. compare(cv::compare):compares elements of two matrices;
63. bitwise_not(cv::bitwise_not):performs a per-element bitwise inversion;
64. bitwise_or(cv::bitwise_or):performs a per-element bitwise disjunction of two matrices or of matrix and scalar;
65. bitwise_and(cv::bitwise_and):performs a per-element bitwise conjunction of two matrices or of matrix and scalar;
66. bitwise_xor(cv::bitwise_xor):performs a per-element bitwise exclusive or operation of two matrices or of a matrix and scalar;
67. rshift:performs pixel by pixel right shift of an image by a constant value;
68. lshift:performs pixel by pixel left shift of an image by a constant value;
69. min(cv::min):computes the per-element minimum of two matrices(or a matrix and a scalar);
70. max(cv::max):computes the per-element maximum of two matrices(or a matrix and a scalar);
71. meanShiftFiltering:performs mean-shift filtering for each point of the source image;
72. meanShiftProc(gpu::meanShiftFiltering):performs a mean-shift procedure and stores information about processed points(their colors and positions) in two images;
73. meanShiftSegmentation:performs a mean-shift segmentation of the source image and eliminates small segments;
74. integral(cv::integral):computes an integral image;
75. sqrIntegral:computes a squared integral image;
76. columnSum:computes a vertical(column) sum;
77. cornerHarris(cv::cornerHarris):computes the Harris cornerness criteria at each image pixel;
78. cornerMinEigenVal(cv::cornerMinEigenVal):computes the minimum eigenvalue of a 2x2 derivative covariance matrix at each pixel(the cornerness criteria);
79. mulSpectrums(cv::mulSpectrums):performs a per-element multiplication of two Fourier spectrums;
80. mulAndScaleSpectrums(cv::mulSpectrums):performs a per-element multiplication of two Fourier spectrums and scales the result;
81. dft(cv::dft):performs a forward or inverse discrete Fourier transform(1D or 2D) of the floating-point matrix;
82. struct::ConvolveBuf:class providing a memory buffer for the gpu::convolve() function, plus it allows adjusting some specific parameters;
83. ConvolveBuf::create:constructs a buffer for the gpu::convolve() function with respective arguments;
84. convolve(gpu::filter2D):computes a convolution (or cross-correlation) of two images;
85. struct::MatchTemplateBuf:class providing memory buffers for the gpu::matchTemplate() function, plus it allows adjusting some specific parameters;
86. matchTemplate(cv::matchTemplate):computes a proximity map for a raster template and an image where the template is searched for;
87. remap(cv::remap):applies a generic geometrical transformation to an image;
88. cvtColor(cv::cvtColor):converts an image from one color space to another;
89. swapChannels:exchanges the color channels of an image in-place;
90. threshold(cv::threshold):applies a fixed-level threshold to each array element;
91. resize(cv::resize):resizes an image;
92. warpAffine(cv::warpAffine):applies an affine transformation to an image;
93. buildWarpAffineMaps(gpu::warpAffine,gpu::remap):builds transformation maps for affine transformation;
94. warpPerspective(cv::warpPerspective):applies a perspective transformation to an image;
95. buildWarpPerspectiveMaps(gpu::warpPerspective,gpu::remap):builds transformation maps for perspective transformation;
96. rotate(gpu::warpAffine):rotates an image around the origin(0,0) and then shifts it;
97. copyMakeBorder(cv::copyMakeBorder):forms a border around an image;
98. rectStdDev:computes a standard deviation of integral images;
99. evenLevels:computes levels with even distribution;
100. histEven:calculates a histogram with evenly distributed bins;
101. histRange:calculates a histogram with bins determined by the levels array;
102. calcHist:calculates a histogram for a one-channel 8-bit image;
103. equalizeHist(cv::equalizeHist):equalizes the histogram of a grayscale image;
104. buildWarpPlaneMaps:builds plane warping maps;
105. buildWarpCylindricalMaps:builds cylindrical warping maps;
106. buildWarpSphericalMaps:builds spherical warping maps;
107. pyrDown(cv::pyrDown):smoothes an image and downsamples it;
108. pyrUp(cv::pyrUp):upsamples an image and then smoothes it;
109. blendLinear:performs linear blending of two images;
110. bilateralFilter(cv::bilateralFilter):performs bilateral filtering of the passed image;
111. nonLocalMeans(cv::fastNlMeansDenoising):performs pure non-local means denoising without any simplification, and thus it is not fast;
112. class::FastNonLocalMeansDenoising:the class implements the fast approximate non-local means denoising algorithm;
113. FastNonLocalMeansDenoising::simpleMethod(cv::fastNlMeansDenoising):performs image denoising using the non-local means denoising algorithm;
114. FastNonLocalMeansDenoising::labMethod(cv::fastNlMeansDenoisingColored):modification of FastNonLocalMeansDenoising::simpleMethod for color images;
115. alphaComp:composites two images using alpha opacity values contained in each image;
116. Canny(cv::Canny):finds edges in an image using the Canny algorithm (an edge-detection pipeline sketch appears after this list);
117. HoughLines(cv::HoughLines):finds lines in a binary image using the classical Hough transform;
118. HoughLinesDownload(gpu::HoughLines):downloads results from gpu::HoughLines to host memory;
119. HoughCircles(cv::HoughCircles):finds circles in a grayscale image using the Hough transform;
120. HoughCirclesDownload(gpu::HoughCircles):downloads results from gpu::HoughCircles to host memory;
121. meanStdDev(cv::meanStdDev):computes a mean value and a standard deviation of matrix elements;
122. norm(cv::norm):returns the norm of a matrix(or difference of two matrices);
123. sum(cv::sum):returns the sum of matrix elements;
124. absSum:returns the sum of absolute values for matrix elements;
125. sqrSum:returns the squared sum of matrix elements;
126. minMax(cv::minMaxLoc):finds global minimum and maximum matrix elements and returns their values;
127. minMaxLoc(cv::minMaxLoc):finds global minimum and maximum matrix elements and returns their values with locations;
128. countNonZero(cv::countNonZero):counts non-zero matrix elements;
129. reduce(cv::reduce):reduces a matrix to a vector;
130. struct::HOGDescriptor:the class implements the Histogram of Oriented Gradients (HOG) object detector (a people-detection sketch appears after this list);
131. HOGDescriptor::getDescriptorSize:returns the number of coefficients required for the classification;
132. HOGDescriptor::getBlockHistogramSize:returns the block histogram size;
133. HOGDescriptor::setSVMDetector:sets coefficients for the linear SVM classifier;
134. HOGDescriptor::getDefaultPeopleDetector:returns coefficients of the classifier trained for people detection(for the default window size);
135. HOGDescriptor::getPeopleDetector48x96:returns coefficients of the classifier trained for people detection(for 48x96 windows);
136. HOGDescriptor::getPeopleDetector64x128:returns coefficients of the classifier trained for people detection(for 64x128 windows);
137. HOGDescriptor::detect:performs object detection without a multi-scale window;
138. HOGDescriptor::detectMultiScale:performs object detection with a multi-scale window;
139. HOGDescriptor::getDescriptors:returns block descriptors computed for the whole image;
140. class::CascadeClassifier_GPU:cascade classifier class used for object detection, supports HAAR and LBP cascades;
141. CascadeClassifier_GPU::empty:checks whether the classifier is loaded or not;
142. CascadeClassifier_GPU::load:loads the classifier from a file, destroying the previous content;
143. CascadeClassifier_GPU::release:destroys the loaded classifier;
144. CascadeClassifier_GPU::detectMultiScale(cv::CascadeClassifier::detectMultiScale):detects objects of different sizes in the input image;
145. class::FAST_GPU(cv::FAST):class used for corner detection using the FAST algorithm;
146. FAST_GPU::operator():finds the keypoints using the FAST detector;
147. FAST_GPU::downloadKeypoints:downloads keypoints from GPU to CPU memory;
148. FAST_GPU::convertKeypoints:converts keypoints from GPU representation to a vector of KeyPoint;
149. FAST_GPU::release:releases inner buffer memory;
150. FAST_GPU::calcKeyPointsLocation:finds keypoints and computes their response if nonmaxSuppression is true;
151. FAST_GPU::getKeyPoints:gets the final array of keypoints;
152. class::ORB_GPU:class for extracting ORB features and descriptors from an image (a feature-matching sketch appears after this list);
153. ORB_GPU::operator():detects keypoints and computes descriptors for them;
154. ORB_GPU::downloadKeyPoints:downloads keypoints from GPU to CPU memory;
155. ORB_GPU::convertKeyPoints:converts keypoints from GPU representation to a vector of KeyPoint;
156. ORB_GPU::release:releases inner buffer memory;
157. class::BruteForceMatcher_GPU_base(cv::DescriptorMatcher,cv::BFMatcher):brute-force descriptor matcher; for each descriptor in the first set, this matcher finds the closest descriptor in the second set by trying each one; this descriptor matcher supports masking permissible matches between descriptor sets;
158. BruteForceMatcher_GPU_base::match(cv::DescriptorMatcher::match):finds the best match for each descriptor from a query set with train descriptors;
159. BruteForceMatcher_GPU_base::makeGpuCollection:performs a GPU collection of train descriptors and masks in a suitable format for the gpu::BruteForceMatcher_GPU_base::matchCollection function;
160. BruteForceMatcher_GPU_base::matchDownload:downloads matrices obtained via gpu::BruteForceMatcher_GPU_base::matchSingle or gpu::BruteForceMatcher_GPU_base::matchCollection to a vector with DMatch;
161. BruteForceMatcher_GPU_base::matchConvert:converts matrices obtained via gpu::BruteForceMatcher_GPU_base::matchSingle or gpu::BruteForceMatcher_GPU_base::matchCollection to a vector with DMatch;
162. BruteForceMatcher_GPU_base::knnMatch(cv::DescriptorMatcher::knnMatch):finds the k best matches for each descriptor from a query set with train descriptors;
163. BruteForceMatcher_GPU_base::knnMatchDownload:downloads matrices obtained via gpu::BruteForceMatcher_GPU_base::knnMatchSingle or gpu::BruteForceMatcher_GPU_base::knnMatch2Collection to a vector with DMatch;
164. BruteForceMatcher_GPU_base::knnMatchConvert:converts matrices obtained via gpu::BruteForceMatcher_GPU_base::knnMatchSingle or gpu::BruteForceMatcher_GPU_base::knnMatch2Collection to a CPU vector with DMatch;
165. BruteForceMatcher_GPU_base::radiusMatch(cv::DescriptorMatcher::radiusMatch):for each query descriptor, finds the best matches with a distance less than a given threshold;
166. BruteForceMatcher_GPU_base::radiusMatchDownload:downloads matrices obtained via gpu::BruteForceMatcher_GPU_base::radiusMatchSingle or gpu::BruteForceMatcher_GPU_base::radiusMatchCollection to a vector with DMatch;
167. BruteForceMatcher_GPU_base::radiusMatchConvert:converts matrices obtained via gpu::BruteForceMatcher_GPU_base::radiusMatchSingle or gpu::BruteForceMatcher_GPU_base::radiusMatchCollection to a vector with DMatch;
168. class::BaseRowFilter_GPU:base class for linear or non-linear filters that process rows of 2D arrays; such filters are used for the “horizontal” filtering passes in separable filters;
169. class::BaseColumnFilter_GPU:base class for linear or non-linear filters that process columns of 2D arrays; such filters are used for the “vertical” filtering passes in separable filters;
170. class::BaseFilter_GPU:base class for non-separable 2D filters;
171. class::FilterEngine_GPU:base class for the Filter Engine;
172. createFilter2D_GPU(gpu::createBoxFilter_GPU):creates a non-separable filter engine with the specified filter;
173. createSeparableFilter_GPU:creates a separable filter engine with the specified filters;
174. getRowSumFilter_GPU:creates a horizontal 1D box filter;
175. getColumnSumFilter_GPU:creates a vertical 1D box filter;
176. createBoxFilter_GPU(cv::boxFilter):creates a normalized 2D box filter;
177. boxFilter(cv::boxFilter):smooths the image using the normalized box filter;
178. blur(cv::blur, gpu::boxFilter):acts as a synonym for the normalized box filter;
179. createMorphologyFilter_GPU(cv::createMorphologyFilter):creates a 2D morphological filter;
180. erode(cv::erode):erodes an image by using a specific structuring element;
181. dilate(cv::dilate):dilates an image by using a specific structuring element;
182. morphologyEx(cv::morphologyEx):applies an advanced morphological operation to an image;
183. createLinearFilter_GPU(cv::createLinearFilter):creates a non-separable linear filter;
184. filter2D(cv::filter2D, gpu::convolve):applies the non-separable 2D linear filter to an image;
185. Laplacian(cv::Laplacian, gpu::filter2D):applies the Laplacian operator to an image;
186. getLinearRowFilter_GPU(cv::createSeparableLinearFilter):creates a primitive row filter with the specified kernel;
187. getLinearColumnFilter_GPU(cv::createSeparableLinearFilter):creates a primitive column filter with the specified kernel;
188. createSeparableLinearFilter_GPU(cv::createSeparableLinearFilter):creates a separable linear filter engine;
189. sepFilter2D(cv::sepFilter2D):applies a separable 2D linear filter to an image;
190. createDerivFilter_GPU(cv::createDerivFilter):creates a filter engine for the generalized Sobel operator;
191. Sobel(cv::Sobel):applies the generalized Sobel operator to an image;
192. Scharr(cv::Scharr):calculates the first x- or y- image derivative using the Scharr operator;
193. createGaussianFilter_GPU(cv::createGaussianFilter):creates a Gaussian filter engine;
194. GaussianBlur(cv::GaussianBlur):smooths an image using the Gaussian filter;
195. getMaxFilter_GPU:creates the maximum filter;
196. getMinFilter_GPU:creates the minimum filter;
197. class::StereoBM_GPU:class computing stereo correspondence(disparity map) using the block matching algorithm (a disparity-map sketch appears after this list);
198. StereoBM_GPU::operator:enables the stereo correspondence operator that finds the disparity for the specified rectified stereo pair;
199. StereoBM_GPU::checkIfGpuCallReasonable:uses a heuristic method to estimate whether the current GPU is faster than the CPU in this algorithm; it queries the currently active device;
200. class::StereoBeliefPropagation:class computing stereo correspondence using the belief propagation algorithm;
201. StereoBeliefPropagation::estimateRecommendedParams:uses a heuristic method to compute the recommended parameters(ndisp, iters and levels) for the specified image size(width and height);
202. StereoBeliefPropagation::operator:enables the stereo correspondence operator that finds the disparity for the specified rectified stereo pair or data cost;
203. class::StereoConstantSpaceBP:class computing stereo correspondence using the constant space belief propagation algorithm;
204. StereoConstantSpaceBP::estimateRecommendedParams:uses a heuristic to compute parameters(ndisp, iters, levels and nrplane) for the specified image size(width and height);
205. StereoConstantSpaceBP::operator:enables the stereo correspondence operator that finds the disparity for the specified rectified stereo pair;
206. class::DisparityBilateralFilter:class refining a disparity map using joint bilateral filtering;
207. DisparityBilateralFilter::operator:refines a disparity map using joint bilateral filtering;
208. drawColorDisp:colors a disparity image;
209. reprojectImageTo3D(cv::reprojectImageTo3D):reprojects a disparity image to 3D space;
210. solvePnPRansac(cv::solvePnPRansac):finds the object pose from 3D-2D point correspondences;
211. class::BroxOpticalFlow:class computing the optical flow for two images using the Brox et al. optical flow algorithm;
212. class::GoodFeaturesToTrackDetector_GPU(cv::goodFeaturesToTrack):class used for strong corners detection on an image;
213. GoodFeaturesToTrackDetector_GPU::operator(cv::goodFeaturesToTrack):finds the most prominent corners in the image;
214. GoodFeaturesToTrackDetector_GPU::releaseMemory:releases inner buffers memory;
215. class::FarnebackOpticalFlow:class computing a dense optical flow using Gunnar Farneback’s algorithm;
216. FarnebackOpticalFlow::operator(cv::calcOpticalFlowFarneback):computes a dense optical flow using Gunnar Farneback’s algorithm;
217. FarnebackOpticalFlow::releaseMemory:releases unused auxiliary memory buffers;
218. class::PyrLKOpticalFlow(cv::calcOpticalFlowPyrLK):class used for calculating an optical flow (a sparse optical-flow sketch appears after this list);
219. PyrLKOpticalFlow::sparse:calculates an optical flow for a sparse feature set;
220. PyrLKOpticalFlow::dense:calculates a dense optical flow;
221. PyrLKOpticalFlow::releaseMemory:releases inner buffers memory;
222. interpolateFrames:interpolates frames(images) using the provided optical flow(displacement field);
223. class::FGDStatModel:class used for background/foreground segmentation;
224. FGDStatModel::create:initializes the background model;
225. FGDStatModel::release:releases the memory of all inner buffers;
226. FGDStatModel::update:updates the background model and returns the foreground regions count;
227. class::MOG_GPU(cv::BackgroundSubtractorMOG):Gaussian Mixture-based background/foreground segmentation algorithm;
228. MOG_GPU::operator:updates the background model and returns the foreground mask;
229. MOG_GPU::getBackgroundImage:computes a background image;
230. MOG_GPU::release:releases the memory of all inner buffers;
231. class::MOG2_GPU(cv::BackgroundSubtractorMOG2):Gaussian Mixture-based background/foreground segmentation algorithm;
232. MOG2_GPU::operator:updates the background model and returns the foreground mask;
233. MOG2_GPU::getBackgroundImage:computes a background image;
234. MOG2_GPU::release:releases the memory of all inner buffers;
235. class::GMG_GPU:class used for background/foreground segmentation;
236. GMG_GPU::initialize:initializes the background model and allocates all inner buffers;
237. GMG_GPU::operator:updates the background model and returns the foreground mask;
238. GMG_GPU::release:releases the memory of all inner buffers;
239. class::VideoWriter_GPU:video writer class;
240. VideoWriter_GPU::open:initializes or reinitializes video writer;
241. VideoWriter_GPU::isOpened:returns true if the video writer has been successfully initialized;
242. VideoWriter_GPU::close:releases the video writer;
243. VideoWriter_GPU::write:writes the next video frame;
244. struct::VideoWriter_GPU::EncoderParams:different parameters for the CUDA video encoder;
245. VideoWriter_GPU::EncoderParams::load:reads parameters from a config file;
246. VideoWriter_GPU::EncoderParams::save:saves parameters to a config file;
247. class::VideoWriter_GPU::EncoderCallBack:callbacks for the CUDA video encoder;
248. VideoWriter_GPU::EncoderCallBack::acquireBitStream:callback function to signal the start of the bitstream that is to be encoded;
249. VideoWriter_GPU::EncoderCallBack::releaseBitStream:callback function to signal that the encoded bitstream is ready to be written to file;
250. VideoWriter_GPU::EncoderCallBack::onBeginFrame:callback function to signal that the encoding operation on the frame has started;
251. VideoWriter_GPU::EncoderCallBack::onEndFrame:callback function to signal that the encoding operation on the frame has finished;
252. class::VideoReader_GPU:class for reading video from files;
253. VideoReader_GPU::Codec:video codecs supported by gpu::VideoReader_GPU;
254. VideoReader_GPU::ChromaFormat:chroma formats supported by gpu::VideoReader_GPU;
255. VideoReader_GPU::FormatInfo:struct providing information about the video file format;
256. VideoReader_GPU::open:initializes or reinitializes the video reader;
257. VideoReader_GPU::isOpened:returns true if the video reader has been successfully initialized;
258. VideoReader_GPU::close:releases the video reader;
259. VideoReader_GPU::read:grabs, decodes, and returns the next video frame;
260. VideoReader_GPU::format:returns information about the video file format;
261. VideoReader_GPU::dumpFormat:dumps information about the video file format to the specified stream;
262. class::VideoReader_GPU::VideoSource:interface for video demultiplexing;
263. VideoReader_GPU::VideoSource::format:returns information about the video file format;
264. VideoReader_GPU::VideoSource::start:starts processing;
265. VideoReader_GPU::VideoSource::stop:stops processing;
266. VideoReader_GPU::VideoSource::isStarted:returns true if processing was successfully started;
267. VideoReader_GPU::VideoSource::hasError:returns true if an error occurred during processing;
268. VideoReader_GPU::VideoSource::parseVideoData:parses the next video frame; the implementation must call this method after a new frame was grabbed;
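
The sketches below illustrate typical use of the entries above. They are minimal examples assuming OpenCV 2.4.x built with the gpu module and a CUDA-enabled device; file names, image sizes, and parameter values are illustrative placeholders rather than values prescribed by the module.

The first sketch covers device management and the GpuMat container (items 1-4, 8-10, 19, 50): query the installed devices, pick one, move data to the GPU, run a per-element sum, and copy the result back.

    #include <iostream>
    #include <opencv2/core/core.hpp>
    #include <opencv2/gpu/gpu.hpp>

    int main()
    {
        // Item 1: count CUDA-enabled devices before doing any GPU work.
        int count = cv::gpu::getCudaEnabledDeviceCount();
        if (count == 0) { std::cout << "No CUDA device found" << std::endl; return 1; }

        cv::gpu::setDevice(0);                              // item 2: select and initialize device 0
        cv::gpu::DeviceInfo info(cv::gpu::getDevice());
        std::cout << info.name() << ", compute "            // items 8-10: name and compute capability
                  << info.majorVersion() << "." << info.minorVersion()
                  << ", compatible: " << info.isCompatible() << std::endl;

        // Item 19: GpuMat mirrors the Mat interface; upload/download move data.
        cv::Mat h_a = cv::Mat::ones(512, 512, CV_32F);      // arbitrary illustrative size
        cv::Mat h_b = cv::Mat::ones(512, 512, CV_32F) * 2;
        cv::gpu::GpuMat d_a, d_b, d_sum;
        d_a.upload(h_a);
        d_b.upload(h_b);
        cv::gpu::add(d_a, d_b, d_sum);                      // item 50: per-element sum on the GPU

        cv::Mat h_sum;
        d_sum.download(h_sum);                              // copy the result back to host memory
        std::cout << "sum(0,0) = " << h_sum.at<float>(0, 0) << std::endl;

        cv::gpu::resetDevice();                             // item 4: free all device resources
        return 0;
    }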
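
Next, a sketch of an asynchronous upload-process-download pipeline built from page-locked CudaMem buffers and a gpu::Stream (items 24-25, 28, 30-32); gpu::cvtColor serves as the GPU step because it has a Stream overload. The 1920x1080 frame size is an arbitrary assumption.

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/gpu/gpu.hpp>

    int main()
    {
        // Item 24: CudaMem allocates page-locked host memory so transfers can be asynchronous.
        cv::gpu::CudaMem h_src(1080, 1920, CV_8UC3, cv::gpu::CudaMem::ALLOC_PAGE_LOCKED);
        cv::gpu::CudaMem h_gray(1080, 1920, CV_8UC1, cv::gpu::CudaMem::ALLOC_PAGE_LOCKED);
        h_src.createMatHeader().setTo(cv::Scalar(0, 128, 255));   // item 25: Mat view over the buffer

        cv::gpu::GpuMat d_src, d_gray;
        cv::gpu::Stream stream;                            // item 28: queue of asynchronous calls

        stream.enqueueUpload(h_src, d_src);                // item 32: host -> device copy
        cv::gpu::cvtColor(d_src, d_gray, CV_BGR2GRAY, 0, stream);  // item 88 with a Stream argument
        stream.enqueueDownload(d_gray, h_gray);            // item 31: device -> host copy

        // The CPU is free to do other work here while the GPU queue executes.
        stream.waitForCompletion();                        // item 30: block until the queue is drained

        cv::Mat result = h_gray.createMatHeader();         // header over the page-locked result, no copy
        return 0;
    }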
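
The edge-detection pipeline sketch referenced at item 116 chains several image-processing entries (items 88, 194, 116) while keeping intermediate results on the GPU; the input and output file names are placeholders.

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/gpu/gpu.hpp>

    int main()
    {
        cv::Mat h_img = cv::imread("input.jpg");           // placeholder file name
        if (h_img.empty()) return 1;

        cv::gpu::GpuMat d_img(h_img);                      // upload via the constructor
        cv::gpu::GpuMat d_gray, d_blur, d_edges;

        cv::gpu::cvtColor(d_img, d_gray, CV_BGR2GRAY);              // item 88
        cv::gpu::GaussianBlur(d_gray, d_blur, cv::Size(5, 5), 1.2); // item 194
        cv::gpu::Canny(d_blur, d_edges, 50.0, 100.0);               // item 116

        cv::Mat h_edges;
        d_edges.download(h_edges);                         // download for saving on the host
        cv::imwrite("edges.png", h_edges);
        return 0;
    }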
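
The people-detection sketch referenced at item 130 uses the GPU HOG entries (items 130, 133-134, 138). gpu::HOGDescriptor works on CV_8UC1 or CV_8UC4 images, so the frame is converted to BGRA first; the image path is a placeholder.

    #include <vector>
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/gpu/gpu.hpp>

    int main()
    {
        cv::Mat h_frame = cv::imread("street.jpg");        // placeholder file name
        if (h_frame.empty()) return 1;

        cv::gpu::GpuMat d_frame(h_frame), d_bgra;
        cv::gpu::cvtColor(d_frame, d_bgra, CV_BGR2BGRA);   // GPU HOG expects 8UC1 or 8UC4 input

        cv::gpu::HOGDescriptor hog;                        // item 130: default 64x128 window
        hog.setSVMDetector(cv::gpu::HOGDescriptor::getDefaultPeopleDetector());  // items 133-134

        std::vector<cv::Rect> found;
        hog.detectMultiScale(d_bgra, found);               // item 138: multi-scale detection

        for (size_t i = 0; i < found.size(); ++i)
            cv::rectangle(h_frame, found[i], cv::Scalar(0, 255, 0), 2);
        cv::imwrite("detections.png", h_frame);
        return 0;
    }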
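
The feature-matching sketch referenced at item 152 combines the ORB and brute-force matcher entries (items 152-154, 157-158); the two image names are placeholders, and HammingDist is chosen because ORB produces binary descriptors.

    #include <vector>
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/features2d/features2d.hpp>
    #include <opencv2/gpu/gpu.hpp>

    int main()
    {
        cv::Mat h_img1 = cv::imread("left.png",  CV_LOAD_IMAGE_GRAYSCALE);   // placeholder names
        cv::Mat h_img2 = cv::imread("right.png", CV_LOAD_IMAGE_GRAYSCALE);
        if (h_img1.empty() || h_img2.empty()) return 1;

        cv::gpu::GpuMat d_img1(h_img1), d_img2(h_img2);
        cv::gpu::GpuMat d_kp1, d_kp2, d_desc1, d_desc2;

        cv::gpu::ORB_GPU orb(500);                         // item 152: up to 500 features
        orb(d_img1, cv::gpu::GpuMat(), d_kp1, d_desc1);    // item 153: detect + describe
        orb(d_img2, cv::gpu::GpuMat(), d_kp2, d_desc2);

        // Item 157: brute-force matcher with Hamming distance for binary ORB descriptors.
        cv::gpu::BruteForceMatcher_GPU_base matcher(cv::gpu::BruteForceMatcher_GPU_base::HammingDist);
        std::vector<cv::DMatch> matches;
        matcher.match(d_desc1, d_desc2, matches);          // item 158

        std::vector<cv::KeyPoint> kp1, kp2;
        orb.downloadKeyPoints(d_kp1, kp1);                 // item 154: GPU -> CPU keypoints
        orb.downloadKeyPoints(d_kp2, kp2);
        return 0;
    }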
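
The disparity-map sketch referenced at item 197 uses the stereo block-matching entries (items 197-198, 208); the rectified pair file names and the ndisp/winSize values are illustrative.

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/gpu/gpu.hpp>

    int main()
    {
        // Rectified grayscale stereo pair (placeholder file names).
        cv::Mat h_left  = cv::imread("left.png",  CV_LOAD_IMAGE_GRAYSCALE);
        cv::Mat h_right = cv::imread("right.png", CV_LOAD_IMAGE_GRAYSCALE);
        if (h_left.empty() || h_right.empty()) return 1;

        cv::gpu::GpuMat d_left(h_left), d_right(h_right), d_disp, d_color;

        // Item 197: block-matching stereo; 64 disparities, 19x19 window (illustrative values).
        cv::gpu::StereoBM_GPU bm(cv::gpu::StereoBM_GPU::BASIC_PRESET, 64, 19);
        bm(d_left, d_right, d_disp);                       // item 198: compute the disparity map

        cv::gpu::drawColorDisp(d_disp, d_color, 64);       // item 208: colorize for visualization
        cv::Mat h_color;
        d_color.download(h_color);
        cv::imwrite("disparity.png", h_color);
        return 0;
    }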
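
Finally, the sparse optical-flow sketch referenced at item 218 combines GoodFeaturesToTrackDetector_GPU with PyrLKOpticalFlow::sparse (items 212, 218-219); the two frame file names are placeholders.

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/gpu/gpu.hpp>

    int main()
    {
        // Two consecutive grayscale frames (placeholder file names).
        cv::Mat h_frame0 = cv::imread("frame0.png", CV_LOAD_IMAGE_GRAYSCALE);
        cv::Mat h_frame1 = cv::imread("frame1.png", CV_LOAD_IMAGE_GRAYSCALE);
        if (h_frame0.empty() || h_frame1.empty()) return 1;

        cv::gpu::GpuMat d_frame0(h_frame0), d_frame1(h_frame1);

        // Item 212: detect up to 1000 strong corners in the first frame.
        cv::gpu::GoodFeaturesToTrackDetector_GPU detector(1000, 0.01, 3.0);
        cv::gpu::GpuMat d_prevPts;
        detector(d_frame0, d_prevPts);

        // Items 218-219: pyramidal Lucas-Kanade flow for the sparse point set.
        cv::gpu::PyrLKOpticalFlow lk;
        cv::gpu::GpuMat d_nextPts, d_status;
        lk.sparse(d_frame0, d_frame1, d_prevPts, d_nextPts, d_status);

        // Download tracked points and status flags for CPU-side use.
        cv::Mat h_nextPts, h_status;
        d_nextPts.download(h_nextPts);
        d_status.download(h_status);
        return 0;
    }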
Source: http://www.xue5.com/Mobile/Mobile/728615.html