Android P Graphics and Display System (Part 4): An Analysis of Android VirtualDisplay

[TOC]

An Analysis of Android VirtualDisplay

Android supports multiple displays: the primary display, external displays, and virtual displays. The virtual display is the VirtualDisplay we will look at here. VirtualDisplay has many use cases, such as screen recording and WFD (Wi-Fi Display); its job is to capture what is rendered on the screen. There are several ways to read the content a VirtualDisplay captures; the API provides ImageReader for exactly this purpose.

Below, we walk through VirtualDisplay and its related flow, using ImageReader as the consumer.

An example of using ImageReader and VirtualDisplay

We take VirtualDisplayTest as our example:

1. In the test's setUp, initialize the DisplayManager, ImageReader and ImageListener:

* frameworks/base/core/tests/coretests/src/android/hardware/display/VirtualDisplayTest.java

    protected void setUp() throws Exception {
        super.setUp();

        mDisplayManager = (DisplayManager)mContext.getSystemService(Context.DISPLAY_SERVICE);
        mHandler = new Handler(Looper.getMainLooper());
        mImageListener = new ImageListener();

        mImageReaderLock.lock();
        try {
            mImageReader = ImageReader.newInstance(WIDTH, HEIGHT, PixelFormat.RGBA_8888, 2);
            mImageReader.setOnImageAvailableListener(mImageListener, mHandler);
            mSurface = mImageReader.getSurface();
        } finally {
            mImageReaderLock.unlock();
        }
    }

  • DisplayManager manages the system's displays; its counterpart in the system is DisplayManagerService (DMS).
  • ImageListener implements the OnImageAvailableListener interface.
  • ImageReader is an image reader; it is what fires the OnImageAvailableListener callback.
  • Also, keep an eye on mSurface here; we will track where it goes.

2. Take the test case testPrivateVirtualDisplay as an example:

    public void testPrivateVirtualDisplay() throws Exception {
        VirtualDisplay virtualDisplay = mDisplayManager.createVirtualDisplay(NAME,
                WIDTH, HEIGHT, DENSITY, mSurface, 0);
        assertNotNull("virtual display must not be null", virtualDisplay);

        Display display = virtualDisplay.getDisplay();
        try {
            assertDisplayRegistered(display, Display.FLAG_PRIVATE);

            // Show a private presentation on the display.
            assertDisplayCanShowPresentation("private presentation window",
                    display, BLUEISH,
                    WindowManager.LayoutParams.TYPE_PRIVATE_PRESENTATION, 0);
        } finally {
            virtualDisplay.release();
        }
        assertDisplayUnregistered(display);
    }
  • The test first creates a virtual display through mDisplayManager.
  • assertDisplayRegistered checks whether the virtual display has been registered.
  • assertDisplayCanShowPresentation checks whether a private Presentation can be shown on it.
  • After the virtual display is released, assertDisplayUnregistered checks that it has been unregistered.

Presentation here is an Android UI component that can show content on a designated display.

That is all for the sample code. Next, let's look at the actual flow.

An introduction to ImageReader

ImageReader, simply put, lets an application read the content drawn into a Surface as image data. The image data is described by Image.

1. Creating an ImageReader
ImageReader instances are created as follows:

* frameworks/base/media/java/android/media/ImageReader.java

    public static ImageReader newInstance(int width, int height, int format, int maxImages) {
        return new ImageReader(width, height, format, maxImages, BUFFER_USAGE_UNKNOWN);
    }

The maxImages parameter is the number of Images that can be accessed simultaneously; conceptually it is similar to the maximum buffer count of a BufferQueue.
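The maxImages contract can be illustrated with a plain-Java sketch (this is not Android code; the class and method names are invented for illustration): a fixed pool of slots where acquiring beyond the limit fails until a slot is released.

```java
import java.util.ArrayDeque;

// Hypothetical sketch of the maxImages idea: at most maxImages
// "images" may be held by the app at once; acquire fails once
// the pool is exhausted, until some image is released (closed).
public class ImageSlotPool {
    private final ArrayDeque<Integer> free = new ArrayDeque<>();

    public ImageSlotPool(int maxImages) {
        for (int i = 0; i < maxImages; i++) free.push(i);
    }

    // Returns a slot id, or -1 when maxImages are already acquired
    // (the real ImageReader throws in this situation instead).
    public int acquire() {
        return free.isEmpty() ? -1 : free.pop();
    }

    public void release(int slot) {
        free.push(slot);
    }

    public static void main(String[] args) {
        ImageSlotPool pool = new ImageSlotPool(2);
        int a = pool.acquire();
        int b = pool.acquire();
        System.out.println(pool.acquire());      // -1: pool exhausted
        pool.release(a);
        System.out.println(pool.acquire() >= 0); // true again
    }
}
```

This is also why a listener that never closes its Images stalls the pipeline: the producer eventually has no free slot to dequeue.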

The key parts of the ImageReader constructor are:

    protected ImageReader(int width, int height, int format, int maxImages, long usage) {
        mWidth = width;
        mHeight = height;
        mFormat = format;
        mMaxImages = maxImages;

        .. ...

        mNumPlanes = ImageUtils.getNumPlanesForFormat(mFormat);

        nativeInit(new WeakReference<>(this), width, height, format, maxImages, usage);

        mSurface = nativeGetSurface();

        mIsReaderValid = true;
        // Estimate the native buffer allocation size and register it so it gets accounted for
        // during GC. Note that this doesn't include the buffers required by the buffer queue
        // itself and the buffers requested by the producer.
        // Only include memory for 1 buffer, since actually accounting for the memory used is
        // complex, and 1 buffer is enough for the VM to treat the ImageReader as being of some
        // size.
        mEstimatedNativeAllocBytes = ImageUtils.getEstimatedNativeAllocBytes(
                width, height, format, /*buffer count*/ 1);
        VMRuntime.getRuntime().registerNativeAllocation(mEstimatedNativeAllocBytes);
    }
  • Our format is PixelFormat.RGBA_8888, so mNumPlanes is 1 here.
  • nativeInit is a native method that creates the native ImageReader instance.
  • nativeGetSurface is a native method that returns the Surface of the native instance; note where our Surface comes from.

2. The JNI implementation of ImageReader
The JNI implementation of ImageReader is shown below; it covers both the ImageReader methods and the SurfaceImage methods.

* frameworks/base/media/jni/android_media_ImageReader.cpp

static const JNINativeMethod gImageReaderMethods[] = {
    {"nativeClassInit",        "()V",                        (void*)ImageReader_classInit },
    {"nativeInit",             "(Ljava/lang/Object;IIIIJ)V",  (void*)ImageReader_init },
    {"nativeClose",            "()V",                        (void*)ImageReader_close },
    {"nativeReleaseImage",     "(Landroid/media/Image;)V",   (void*)ImageReader_imageRelease },
    {"nativeImageSetup",       "(Landroid/media/Image;)I",   (void*)ImageReader_imageSetup },
    {"nativeGetSurface",       "()Landroid/view/Surface;",   (void*)ImageReader_getSurface },
    {"nativeDetachImage",      "(Landroid/media/Image;)I",   (void*)ImageReader_detachImage },
    {"nativeDiscardFreeBuffers", "()V",                      (void*)ImageReader_discardFreeBuffers }
};

static const JNINativeMethod gImageMethods[] = {
    {"nativeCreatePlanes",      "(II)[Landroid/media/ImageReader$SurfaceImage$SurfacePlane;",
                                                              (void*)Image_createSurfacePlanes },
    {"nativeGetWidth",         "()I",                        (void*)Image_getWidth },
    {"nativeGetHeight",        "()I",                        (void*)Image_getHeight },
    {"nativeGetFormat",        "(I)I",                        (void*)Image_getFormat },
};

nativeInit maps to ImageReader_init:

static void ImageReader_init(JNIEnv* env, jobject thiz, jobject weakThiz, jint width, jint height,
                             jint format, jint maxImages, jlong ndkUsage)
{
    ... ...
    sp<JNIImageReaderContext> ctx(new JNIImageReaderContext(env, weakThiz, clazz, maxImages));

    sp<IGraphicBufferProducer> gbProducer;
    sp<IGraphicBufferConsumer> gbConsumer;
    BufferQueue::createBufferQueue(&gbProducer, &gbConsumer);
    sp<BufferItemConsumer> bufferConsumer;
    String8 consumerName = String8::format("ImageReader-%dx%df%xm%d-%d-%d",
            width, height, format, maxImages, getpid(),
            createProcessUniqueId());
    ... ...
    bufferConsumer = new BufferItemConsumer(gbConsumer, consumerUsage, maxImages,
            /*controlledByApp*/true);
    if (bufferConsumer == nullptr) {
        jniThrowExceptionFmt(env, "java/lang/RuntimeException",
                "Failed to allocate native buffer consumer for format 0x%x and usage 0x%x",
                nativeFormat, consumerUsage);
        return;
    }
    ctx->setBufferConsumer(bufferConsumer);
    bufferConsumer->setName(consumerName);

    ctx->setProducer(gbProducer);
    bufferConsumer->setFrameAvailableListener(ctx);
    ImageReader_setNativeContext(env, thiz, ctx);
    ctx->setBufferFormat(nativeFormat);
    ctx->setBufferDataspace(nativeDataspace);
    ctx->setBufferWidth(width);
    ctx->setBufferHeight(height);

    // Set the width/height/format/dataspace to the bufferConsumer.
    res = bufferConsumer->setDefaultBufferSize(width, height);
    if (res != OK) {
        jniThrowExceptionFmt(env, "java/lang/IllegalStateException",
                          "Failed to set buffer consumer default size (%dx%d) for format 0x%x",
                          width, height, nativeFormat);
        return;
    }
    res = bufferConsumer->setDefaultBufferFormat(nativeFormat);
    if (res != OK) {
        jniThrowExceptionFmt(env, "java/lang/IllegalStateException",
                          "Failed to set buffer consumer default format 0x%x", nativeFormat);
    }
    res = bufferConsumer->setDefaultBufferDataSpace(nativeDataspace);
    if (res != OK) {
        jniThrowExceptionFmt(env, "java/lang/IllegalStateException",
                          "Failed to set buffer consumer default dataSpace 0x%x", nativeDataspace);
    }
}
  • A JNIImageReaderContext instance is created; it is the native counterpart of the Java ImageReader.
JNIImageReaderContext::JNIImageReaderContext(JNIEnv* env,
        jobject weakThiz, jclass clazz, int maxImages) :
    mWeakThiz(env->NewGlobalRef(weakThiz)),
    mClazz((jclass)env->NewGlobalRef(clazz)),
    mFormat(0),
    mDataSpace(HAL_DATASPACE_UNKNOWN),
    mWidth(-1),
    mHeight(-1) {
    for (int i = 0; i < maxImages; i++) {
        BufferItem* buffer = new BufferItem;
        mBuffers.push_back(buffer);
    }
}

mDataSpace is the data space, which describes the format. The native buffers are described by BufferItem and kept in mBuffers.

  • The corresponding BufferQueue is created, with producer gbProducer and consumer gbConsumer.
    This is still the familiar BufferQueue; the consumer end is wrapped in a BufferItemConsumer. Remember what the consumer is in Android's normal display path? Right, BufferLayerConsumer; note the difference between the two. The BufferItemConsumer holds the gbConsumer object.

  • After the BufferQueue is created, it is set into the JNIImageReaderContext. Note that the BufferItemConsumer's FrameAvailableListener is the one implemented by JNIImageReaderContext.

  • Finally, ImageReader_setNativeContext associates the native object with the Java object.
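The linking done by ImageReader_setNativeContext follows a common JNI pattern: the native context pointer is stored in a long field of the Java object, while the native context keeps only a weak reference back to the Java object, so callbacks can reach it without pinning it. A plain-Java sketch of that shape (all names here are invented; the real code stores a raw pointer, not a map key):

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

// Sketch of the bidirectional JNI link: Java object -> native context
// (via an opaque long handle) and native context -> Java object (weakly).
public class NativeLink {
    static long nextHandle = 1;
    static final Map<Long, NativeCtx> registry = new HashMap<>();

    static class NativeCtx {
        // Weak: the native side must not keep the Java object alive.
        final WeakReference<JavaReader> owner;
        NativeCtx(JavaReader r) { owner = new WeakReference<>(r); }
    }

    static class JavaReader {
        long mNativeContext; // the long field set by "setNativeContext"

        void init() { // plays the role of nativeInit + setNativeContext
            NativeCtx ctx = new NativeCtx(this);
            mNativeContext = nextHandle++;
            registry.put(mNativeContext, ctx);
        }
    }

    public static void main(String[] args) {
        JavaReader r = new JavaReader();
        r.init();
        // A "native callback" can find its way back to the Java object:
        JavaReader back = registry.get(r.mNativeContext).owner.get();
        System.out.println(back == r); // true
    }
}
```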

The class diagram of JNIImageReaderContext


(Figure: JNIImageReaderContext class diagram)

Creating a VirtualDisplay

A VirtualDisplay is created through DisplayManager.

* frameworks/base/core/java/android/hardware/display/DisplayManager.java

    public VirtualDisplay createVirtualDisplay(@Nullable MediaProjection projection,
            @NonNull String name, int width, int height, int densityDpi, @Nullable Surface surface,
            int flags, @Nullable VirtualDisplay.Callback callback, @Nullable Handler handler,
            @Nullable String uniqueId) {
        return mGlobal.createVirtualDisplay(mContext, projection,
                name, width, height, densityDpi, surface, flags, callback, handler, uniqueId);
    }

DisplayManagerGlobal is a singleton; there is exactly one instance per process.

    public DisplayManager(Context context) {
        mContext = context;
        mGlobal = DisplayManagerGlobal.getInstance();
    }
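The getInstance pattern behind DisplayManagerGlobal can be sketched in plain Java as a synchronized lazy singleton (a simplified illustration; the real class additionally caches the Binder connection to DMS):

```java
// Minimal sketch of a lazy, process-wide singleton in the style of
// DisplayManagerGlobal.getInstance() (details invented for illustration).
public class Global {
    private static Global sInstance;

    private Global() {} // no external construction

    public static synchronized Global getInstance() {
        if (sInstance == null) {
            sInstance = new Global();
        }
        return sInstance;
    }

    public static void main(String[] args) {
        // Every caller in the process sees the same instance.
        System.out.println(Global.getInstance() == Global.getInstance()); // true
    }
}
```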

DisplayManagerGlobal's createVirtualDisplay is implemented as follows:

* frameworks/base/core/java/android/hardware/display/DisplayManagerGlobal.java

    public VirtualDisplay createVirtualDisplay(Context context, MediaProjection projection,
            String name, int width, int height, int densityDpi, Surface surface, int flags,
            VirtualDisplay.Callback callback, Handler handler, String uniqueId) {
        ... ...
        int displayId;
        try {
            displayId = mDm.createVirtualDisplay(callbackWrapper, projectionToken,
                    context.getPackageName(), name, width, height, densityDpi, surface, flags,
                    uniqueId);
        } catch (RemoteException ex) {
            throw ex.rethrowFromSystemServer();
        }
        if (displayId < 0) {
            Log.e(TAG, "Could not create virtual display: " + name);
            return null;
        }
        Display display = getRealDisplay(displayId);
        if (display == null) {
            Log.wtf(TAG, "Could not obtain display info for newly created "
                    + "virtual display: " + name);
            try {
                mDm.releaseVirtualDisplay(callbackWrapper);
            } catch (RemoteException ex) {
                throw ex.rethrowFromSystemServer();
            }
            return null;
        }
        return new VirtualDisplay(this, display, callbackWrapper, surface);
    }

mDm is the Binder interface to DisplayManagerService (DMS), so mDm.createVirtualDisplay goes straight to the DMS implementation:

* frameworks/base/services/core/java/com/android/server/display/DisplayManagerService.java

        @Override // Binder call
        public int createVirtualDisplay(IVirtualDisplayCallback callback,
                IMediaProjection projection, String packageName, String name,
                int width, int height, int densityDpi, Surface surface, int flags,
                String uniqueId) {
            ... ...

            if (projection != null) {
                try {
                    if (!getProjectionService().isValidMediaProjection(projection)) {
                        throw new SecurityException("Invalid media projection");
                    }
                    flags = projection.applyVirtualDisplayFlags(flags);
                } catch (RemoteException e) {
                    throw new SecurityException("unable to validate media projection or flags");
                }
            }

            if (callingUid != Process.SYSTEM_UID &&
                    (flags & VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR) != 0) {
                if (!canProjectVideo(projection)) {
                    throw new SecurityException("Requires CAPTURE_VIDEO_OUTPUT or "
                            + "CAPTURE_SECURE_VIDEO_OUTPUT permission, or an appropriate "
                            + "MediaProjection token in order to create a screen sharing virtual "
                            + "display.");
                }
            }
            if ((flags & VIRTUAL_DISPLAY_FLAG_SECURE) != 0) {
                if (!canProjectSecureVideo(projection)) {
                    throw new SecurityException("Requires CAPTURE_SECURE_VIDEO_OUTPUT "
                            + "or an appropriate MediaProjection token to create a "
                            + "secure virtual display.");
                }
            }

            final long token = Binder.clearCallingIdentity();
            try {
                return createVirtualDisplayInternal(callback, projection, callingUid, packageName,
                        name, width, height, densityDpi, surface, flags, uniqueId);
            } finally {
                Binder.restoreCallingIdentity(token);
            }
        }

DMS's createVirtualDisplay does some parameter initialization and handles the projection and secure flags, then delegates the actual creation to createVirtualDisplayInternal.

The createVirtualDisplayInternal function:

    private int createVirtualDisplayInternal(IVirtualDisplayCallback callback,
            IMediaProjection projection, int callingUid, String packageName, String name, int width,
            int height, int densityDpi, Surface surface, int flags, String uniqueId) {
        synchronized (mSyncRoot) {
            if (mVirtualDisplayAdapter == null) {
                Slog.w(TAG, "Rejecting request to create private virtual display "
                        + "because the virtual display adapter is not available.");
                return -1;
            }

            DisplayDevice device = mVirtualDisplayAdapter.createVirtualDisplayLocked(
                    callback, projection, callingUid, packageName, name, width, height, densityDpi,
                    surface, flags, uniqueId);
            if (device == null) {
                return -1;
            }

            handleDisplayDeviceAddedLocked(device);
            LogicalDisplay display = findLogicalDisplayForDeviceLocked(device);
            if (display != null) {
                return display.getDisplayIdLocked();
            }

            // Something weird happened and the logical display was not created.
            Slog.w(TAG, "Rejecting request to create virtual display "
                    + "because the logical display was not created.");
            mVirtualDisplayAdapter.releaseVirtualDisplayLocked(callback.asBinder());
            handleDisplayDeviceRemovedLocked(device);
        }
        return -1;
    }
  • mVirtualDisplayAdapter is initialized when DMS starts up;
    it is registered via the MSG_REGISTER_DEFAULT_DISPLAY_ADAPTERS message.
    private void registerDefaultDisplayAdapters() {
        // Register default display adapters.
        synchronized (mSyncRoot) {
            // main display adapter
            registerDisplayAdapterLocked(new LocalDisplayAdapter(
                    mSyncRoot, mContext, mHandler, mDisplayAdapterListener));

            mVirtualDisplayAdapter = mInjector.getVirtualDisplayAdapter(mSyncRoot, mContext,
                    mHandler, mDisplayAdapterListener);
            if (mVirtualDisplayAdapter != null) {
                registerDisplayAdapterLocked(mVirtualDisplayAdapter);
            }
        }
    }
  • Once the adapter has created the display device, it is handled by handleDisplayDeviceAddedLocked,
    which tells the upper layers of Android that a new display has been added.
    private void handleDisplayDeviceAddedLocked(DisplayDevice device) {
        DisplayDeviceInfo info = device.getDisplayDeviceInfoLocked();
        if (mDisplayDevices.contains(device)) {
            Slog.w(TAG, "Attempted to add already added display device: " + info);
            return;
        }

        Slog.i(TAG, "Display device added: " + info);
        device.mDebugLastLoggedDeviceInfo = info;

        mDisplayDevices.add(device);
        LogicalDisplay display = addLogicalDisplayLocked(device);
        Runnable work = updateDisplayStateLocked(device);
        if (work != null) {
            work.run();
        }
        scheduleTraversalLocked(false);
    }

When a display is added, its DisplayDeviceInfo is fetched first, and the device is added to mDisplayDevices.
Then addLogicalDisplayLocked creates a corresponding logical display, and updateDisplayStateLocked updates the display state, keeping it in sync with the native virtual display.

VirtualDisplayAdapter's createVirtualDisplayLocked method:

* frameworks/base/services/core/java/com/android/server/display/VirtualDisplayAdapter.java

    public DisplayDevice createVirtualDisplayLocked(IVirtualDisplayCallback callback,
            IMediaProjection projection, int ownerUid, String ownerPackageName, String name,
            int width, int height, int densityDpi, Surface surface, int flags, String uniqueId) {
        boolean secure = (flags & VIRTUAL_DISPLAY_FLAG_SECURE) != 0;
        IBinder appToken = callback.asBinder();
        IBinder displayToken = mSurfaceControlDisplayFactory.createDisplay(name, secure);
        final String baseUniqueId =
                UNIQUE_ID_PREFIX + ownerPackageName + "," + ownerUid + "," + name + ",";
        final int uniqueIndex = getNextUniqueIndex(baseUniqueId);
        if (uniqueId == null) {
            uniqueId = baseUniqueId + uniqueIndex;
        } else {
            uniqueId = UNIQUE_ID_PREFIX + ownerPackageName + ":" + uniqueId;
        }
        VirtualDisplayDevice device = new VirtualDisplayDevice(displayToken, appToken,
                ownerUid, ownerPackageName, name, width, height, densityDpi, surface, flags,
                new Callback(callback, mHandler), uniqueId, uniqueIndex);

        mVirtualDisplayDevices.put(appToken, device);

        try {
            if (projection != null) {
                projection.registerCallback(new MediaProjectionCallback(appToken));
            }
            appToken.linkToDeath(device, 0);
        } catch (RemoteException ex) {
            mVirtualDisplayDevices.remove(appToken);
            device.destroyLocked(false);
            return null;
        }

        // Return the display device without actually sending the event indicating
        // that it was added.  The caller will handle it.
        return device;
    }
  • First, mSurfaceControlDisplayFactory creates a displayToken; this token actually refers to the native virtual display.
  • In the end, what VirtualDisplayAdapter creates is a VirtualDisplayDevice.

mSurfaceControlDisplayFactory is really just a wrapper around a SurfaceControl call:

    public VirtualDisplayAdapter(DisplayManagerService.SyncRoot syncRoot,
            Context context, Handler handler, Listener listener) {
        this(syncRoot, context, handler, listener,
                (String name, boolean secure) -> SurfaceControl.createDisplay(name, secure));
    }

SurfaceControl's createDisplay mainly calls into native code to create the native virtual display.

* frameworks/base/core/java/android/view/SurfaceControl.java

    public static IBinder createDisplay(String name, boolean secure) {
        if (name == null) {
            throw new IllegalArgumentException("name must not be null");
        }
        return nativeCreateDisplay(name, secure);
    }

This completes the Java-side flow of creating a VirtualDisplay.

The Java-side class relations of VirtualDisplay are as follows:


(Figure: VirtualDisplay class relations)

A quick recap:

  • Android provides DMS to manage the system's displays.
  • DisplayManagerGlobal is the one and only client-side proxy for DMS.
  • Applications talk to DMS through DisplayManager.
  • Every display is described by a corresponding LogicalDisplay.
  • A concrete display device is described by a DisplayDevice; the system has several kinds, and VirtualDisplayDevice is just one of them.
  • Each kind has its own adapter; VirtualDisplayAdapter pairs with VirtualDisplayDevice.

Having covered the Java-side flow, let's look at the native side. The main question we care about is still how ImageReader obtains the display content, and clearly we don't have the answer yet.

Creating the VirtualDisplay natively

The JNI implementation of nativeCreateDisplay is:

* android_view_SurfaceControl.cpp

static jobject nativeCreateDisplay(JNIEnv* env, jclass clazz, jstring nameObj,
        jboolean secure) {
    ScopedUtfChars name(env, nameObj);
    sp<IBinder> token(SurfaceComposerClient::createDisplay(
            String8(name.c_str()), bool(secure)));
    return javaObjectForIBinder(env, token);
}

Ultimately the display is created through SurfaceComposerClient:

sp<IBinder> SurfaceComposerClient::createDisplay(const String8& displayName, bool secure) {
    return ComposerService::getComposerService()->createDisplay(displayName,
            secure);
}

The server side of ComposerService is SurfaceFlinger itself.

sp<IBinder> SurfaceFlinger::createDisplay(const String8& displayName,
        bool secure)
{
    class DisplayToken : public BBinder {
        sp<SurfaceFlinger> flinger;
        virtual ~DisplayToken() {
             // no more references, this display must be terminated
             Mutex::Autolock _l(flinger->mStateLock);
             flinger->mCurrentState.displays.removeItem(this);
             flinger->setTransactionFlags(eDisplayTransactionNeeded);
         }
     public:
        explicit DisplayToken(const sp<SurfaceFlinger>& flinger)
            : flinger(flinger) {
        }
    };

    sp<IBinder> token = new DisplayToken(this);

    Mutex::Autolock _l(mStateLock);
    DisplayDeviceState info(DisplayDevice::DISPLAY_VIRTUAL, secure);
    info.displayName = displayName;
    mCurrentState.displays.add(token, info);
    mInterceptor.saveDisplayCreation(info);
    return token;
}

When SurfaceFlinger creates the display, it creates a DisplayToken; this is the token we saw on the Java side. The token is then added to mCurrentState.displays, which holds all created displays.
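The bookkeeping here amounts to a token-keyed table of display states: createDisplay mints an opaque Binder token and files a state entry under it, and later transactions use the token to look the entry back up. A plain-Java sketch of that idea (KeyedVector replaced with a HashMap; all names invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of SurfaceFlinger's token -> display-state bookkeeping.
public class DisplayTable {
    static class DisplayState {
        final String name;
        Object surface; // filled in later, as setDisplaySurface does
        DisplayState(String name) { this.name = name; }
    }

    final Map<Object, DisplayState> displays = new HashMap<>();

    Object createDisplay(String name) {
        Object token = new Object(); // stands in for the DisplayToken Binder
        displays.put(token, new DisplayState(name));
        return token;
    }

    public static void main(String[] args) {
        DisplayTable t = new DisplayTable();
        Object token = t.createDisplay("virtual");
        // Later, the token is used to find the entry and edit it:
        t.displays.get(token).surface = "producer";
        System.out.println(t.displays.get(token).name); // virtual
    }
}
```

The token is the only handle the Java side ever holds; everything else stays inside SurfaceFlinger's state tables.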

The native flow is simple, but we still haven't seen how the data moves. Don't worry; let's see where our Surface went.

Data flow analysis

When DisplayManager creates the display, it carries mSurface, which came from the ImageReader:

static jobject ImageReader_getSurface(JNIEnv* env, jobject thiz)
{
    ALOGV("%s: ", __FUNCTION__);

    IGraphicBufferProducer* gbp = ImageReader_getProducer(env, thiz);
    if (gbp == NULL) {
        jniThrowRuntimeException(env, "Buffer consumer is uninitialized");
        return NULL;
    }

    // Wrap the IGBP in a Java-language Surface.
    return android_view_Surface_createFromIGraphicBufferProducer(env, gbp);
}

So the Surface wraps the ImageReader's IGraphicBufferProducer. Now walk the VirtualDisplay creation flow again, tracking only where the Surface goes: it ends up as mSurface in the VirtualDisplayDevice. So when is it actually used?

Look again at DMS's handleDisplayDeviceAddedLocked: notice the scheduleTraversalLocked call?

This traversal notifies WMS, and from WMS the flow loops back into DMS through performTraversalInTransactionFromWindowManager. Finally, performTraversalInTransactionLocked calls each device's performTraversalInTransactionLocked function.

    private void performTraversalInTransactionLocked() {
        // Clear all viewports before configuring displays so that we can keep
        // track of which ones we have configured.
        clearViewportsLocked();

        // Configure each display device.
        final int count = mDisplayDevices.size();
        for (int i = 0; i < count; i++) {
            DisplayDevice device = mDisplayDevices.get(i);
            configureDisplayInTransactionLocked(device);
            device.performTraversalInTransactionLocked();
        }

        // Tell the input system about these new viewports.
        if (mInputManagerInternal != null) {
            mHandler.sendEmptyMessage(MSG_UPDATE_VIEWPORT);
        }
    }

VirtualDisplayDevice's performTraversalInTransactionLocked is as follows:

        public void performTraversalInTransactionLocked() {
            if ((mPendingChanges & PENDING_RESIZE) != 0) {
                SurfaceControl.setDisplaySize(getDisplayTokenLocked(), mWidth, mHeight);
            }
            if ((mPendingChanges & PENDING_SURFACE_CHANGE) != 0) {
                setSurfaceInTransactionLocked(mSurface);
            }
            mPendingChanges = 0;
        }

The PENDING_SURFACE_CHANGE seed was planted when the VirtualDisplayDevice was constructed:

        public VirtualDisplayDevice(IBinder displayToken, IBinder appToken,
                int ownerUid, String ownerPackageName,
                String name, int width, int height, int densityDpi, Surface surface, int flags,
                Callback callback, String uniqueId, int uniqueIndex) {
            super(VirtualDisplayAdapter.this, displayToken, uniqueId);
            mAppToken = appToken;
            mOwnerUid = ownerUid;
            mOwnerPackageName = ownerPackageName;
            mName = name;
            mWidth = width;
            mHeight = height;
            mMode = createMode(width, height, REFRESH_RATE);
            mDensityDpi = densityDpi;
            mSurface = surface;
            mFlags = flags;
            mCallback = callback;
            mDisplayState = Display.STATE_UNKNOWN;
            mPendingChanges |= PENDING_SURFACE_CHANGE;
            mUniqueIndex = uniqueIndex;
        }

As expected. setSurfaceInTransactionLocked then hands the Surface, via SurfaceControl, to the native virtual display.

    public final void setSurfaceInTransactionLocked(Surface surface) {
        if (mCurrentSurface != surface) {
            mCurrentSurface = surface;
            SurfaceControl.setDisplaySurface(mDisplayToken, surface);
        }
    }
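The PENDING_* bookkeeping above is a plain bitmask: changes are OR-ed into mPendingChanges when they occur and consumed on the next traversal. A runnable sketch of the pattern (constants and method names invented for illustration):

```java
// Sketch of the deferred-change bitmask used by VirtualDisplayDevice.
public class PendingChanges {
    static final int PENDING_SURFACE_CHANGE = 1 << 0;
    static final int PENDING_RESIZE = 1 << 1;

    int mPendingChanges;
    int surfacePushCount; // how often the surface was pushed down

    void setSurface() {
        // Record the change; it is applied later, during traversal.
        mPendingChanges |= PENDING_SURFACE_CHANGE;
    }

    void performTraversal() {
        if ((mPendingChanges & PENDING_SURFACE_CHANGE) != 0) {
            surfacePushCount++; // stands in for setSurfaceInTransactionLocked
        }
        mPendingChanges = 0; // all pending work consumed
    }

    public static void main(String[] args) {
        PendingChanges d = new PendingChanges();
        d.setSurface();
        d.performTraversal();
        d.performTraversal(); // no pending change: nothing pushed
        System.out.println(d.surfacePushCount); // 1
    }
}
```

Setting the bit in the constructor is what guarantees the Surface is pushed on the very first traversal after the device is added.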

SurfaceControl keeps a sGlobalTransaction; the Surface is stashed there for the moment.

    public static void setDisplaySurface(IBinder displayToken, Surface surface) {
        synchronized (SurfaceControl.class) {
            sGlobalTransaction.setDisplaySurface(displayToken, surface);
        }
    }

sGlobalTransaction takes effect at closeTransaction time, which here is driven by WMS. openTransaction and closeTransaction come in pairs: one opens, one closes, and the batched state is applied on close.

    private static void closeTransaction(boolean sync) {
        synchronized(SurfaceControl.class) {
            if (sTransactionNestCount == 0) {
                Log.e(TAG, "Call to SurfaceControl.closeTransaction without matching openTransaction");
            } else if (--sTransactionNestCount > 0) {
                return;
            }
            sGlobalTransaction.apply(sync);
        }
    }
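The open/close pairing is reference-counted: nested opens just bump a counter, and only the outermost close actually applies the batched state. A plain-Java sketch of that nesting rule (names invented for illustration):

```java
// Sketch of SurfaceControl's transaction nesting: apply() runs only
// when the outermost closeTransaction() is reached.
public class NestedTransaction {
    int nestCount;
    int applyCount; // how often the batched state was applied

    void open() { nestCount++; }

    void close() {
        if (nestCount == 0) {
            throw new IllegalStateException("close without matching open");
        }
        if (--nestCount > 0) {
            return; // still nested; keep batching state
        }
        applyCount++; // stands in for sGlobalTransaction.apply(sync)
    }

    public static void main(String[] args) {
        NestedTransaction t = new NestedTransaction();
        t.open();
        t.open();   // nested open (e.g. WMS re-enters)
        t.close();  // inner close: nothing applied yet
        t.close();  // outer close: applied exactly once
        System.out.println(t.applyCount); // 1
    }
}
```

This is why the Surface set earlier does not reach SurfaceFlinger immediately: it waits until WMS closes the outermost transaction.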

The apply function:

        public void apply(boolean sync) {
            applyResizedSurfaces();
            nativeApplyTransaction(mNativeObject, sync);
        }
  • Pending surface resizes are applied on the SurfaceControl side first (applyResizedSurfaces).
  • Then nativeApplyTransaction hands the transaction down to native code.

In JNI, the Java Transaction is converted to the native Transaction:

static void nativeApplyTransaction(JNIEnv* env, jclass clazz, jlong transactionObj, jboolean sync) {
    auto transaction = reinterpret_cast<SurfaceComposerClient::Transaction*>(transactionObj);
    transaction->apply(sync);
}

And our nativeSetDisplaySurface finally lands here:

status_t SurfaceComposerClient::Transaction::setDisplaySurface(const sp<IBinder>& token,
        const sp<IGraphicBufferProducer>& bufferProducer) {
    if (bufferProducer.get() != nullptr) {
        // Make sure that composition can never be stalled by a virtual display
        // consumer that isn't processing buffers fast enough.
        status_t err = bufferProducer->setAsyncMode(true);
        if (err != NO_ERROR) {
            ALOGE("Composer::setDisplaySurface Failed to enable async mode on the "
                    "BufferQueue. This BufferQueue cannot be used for virtual "
                    "display. (%d)", err);
            return err;
        }
    }
    DisplayState& s(getDisplayStateLocked(token));
    s.surface = bufferProducer;
    s.what |= DisplayState::eSurfaceChanged;
    return NO_ERROR;
}

Let's go straight to the handling in SurfaceFlinger; note that what we carry here is DisplayState::eSurfaceChanged.

void SurfaceFlinger::setTransactionState(
        const Vector<ComposerState>& state,
        const Vector<DisplayState>& displays,
        uint32_t flags)
{
    ... ...

    size_t count = displays.size();
    for (size_t i=0 ; i<count ; i++) {
        const DisplayState& s(displays[i]);
        transactionFlags |= setDisplayStateLocked(s);
    }
    ... ...
}

In setTransactionState, SF calls setDisplayStateLocked for each display:

uint32_t SurfaceFlinger::setDisplayStateLocked(const DisplayState& s)
{
    ssize_t dpyIdx = mCurrentState.displays.indexOfKey(s.token);
    if (dpyIdx < 0)
        return 0;

    uint32_t flags = 0;
    DisplayDeviceState& disp(mCurrentState.displays.editValueAt(dpyIdx));
    if (disp.isValid()) {
        const uint32_t what = s.what;
        if (what & DisplayState::eSurfaceChanged) {
            if (IInterface::asBinder(disp.surface) != IInterface::asBinder(s.surface)) {
                disp.surface = s.surface;
                flags |= eDisplayTransactionNeeded;
            }
        }
        ... ...
    }
    return flags;
}

Earlier, the token of the VirtualDisplay we created was added to mCurrentState.displays; now that entry is edited, and the Surface handed over from ImageReader is assigned to disp.surface.

Remember this: our Surface now lives in disp.surface inside mCurrentState.displays.

After setTransactionState completes, setTransactionFlags triggers SurfaceFlinger to run. SurfaceFlinger then processes the transaction, which brings us to the handleTransaction function.

We only follow the display-related part of the processing; there is a lot going on here.

void SurfaceFlinger::handleTransactionLocked(uint32_t transactionFlags)
{
            ... ...
            // find displays that were added
            // (ie: in current state but not in drawing state)
            for (size_t i=0 ; i<cc ; i++) {
                if (draw.indexOfKey(curr.keyAt(i)) < 0) {
                    const DisplayDeviceState& state(curr[i]);

                    sp<DisplaySurface> dispSurface;
                    sp<IGraphicBufferProducer> producer;
                    sp<IGraphicBufferProducer> bqProducer;
                    sp<IGraphicBufferConsumer> bqConsumer;
                    BufferQueue::createBufferQueue(&bqProducer, &bqConsumer);

                    int32_t hwcId = -1;
                    if (state.isVirtualDisplay()) {
                        // Virtual displays without a surface are dormant:
                        // they have external state (layer stack, projection,
                        // etc.) but no internal state (i.e. a DisplayDevice).
                        if (state.surface != NULL) {

                            // Allow VR composer to use virtual displays.
                            if (mUseHwcVirtualDisplays || getBE().mHwc->isUsingVrComposer()) {
                                ... ... // we don't hit this path in our scenario; skip it for now.
                            }

                            sp<VirtualDisplaySurface> vds =
                                    new VirtualDisplaySurface(*getBE().mHwc,
                                            hwcId, state.surface, bqProducer,
                                            bqConsumer, state.displayName);

                            dispSurface = vds;
                            producer = vds;
                        }
                    } else {
                        ... ... // primary display path; not our concern here
                    }

                    const wp<IBinder>& display(curr.keyAt(i));
                    if (dispSurface != NULL) {
                        sp<DisplayDevice> hw =
                                new DisplayDevice(this, state.type, hwcId, state.isSecure, display,
                                                  dispSurface, producer, hasWideColorDisplay);
                        hw->setLayerStack(state.layerStack);
                        hw->setProjection(state.orientation,
                                state.viewport, state.frame);
                        hw->setDisplayName(state.displayName);
                        mDisplays.add(display, hw);
                        if (!state.isVirtualDisplay()) {
                            mEventThread->onHotplugReceived(state.type, true);
                        }
                    }
                }
            }
  • First, the newly added DisplayDeviceState is taken from mCurrentState.displays.
  • A BufferQueue is created via createBufferQueue; be careful to distinguish producer from bqProducer here.
  • If this is a virtual display and state.surface is non-null, a VirtualDisplaySurface is created; note that both dispSurface and producer are the newly created VirtualDisplaySurface object vds.
  • Finally, a DisplayDevice object hw is created, initialized, and added to mDisplays.
  • Also note that hwcId is -1 here.

This brings in two important heavyweight classes: DisplayDevice and VirtualDisplaySurface.

Let's look at VirtualDisplaySurface first.

class VirtualDisplaySurface : public DisplaySurface,
                              public BnGraphicBufferProducer,
                              private ConsumerBase {

Impressive: VirtualDisplaySurface inherits from both BnGraphicBufferProducer and ConsumerBase, so it acts as a producer and a consumer at the same time.

First, the VirtualDisplaySurface constructor:

* frameworks/native/services/surfaceflinger/DisplayHardware/VirtualDisplaySurface.cpp

VirtualDisplaySurface::VirtualDisplaySurface(HWComposer& hwc, int32_t dispId,
        const sp<IGraphicBufferProducer>& sink,
        const sp<IGraphicBufferProducer>& bqProducer,
        const sp<IGraphicBufferConsumer>& bqConsumer,
        const String8& name)
:   ConsumerBase(bqConsumer),
    mHwc(hwc),
    mDisplayId(dispId),
    mDisplayName(name),
    mSource{},
    mDefaultOutputFormat(HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED),
    mOutputFormat(HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED),
    mOutputUsage(GRALLOC_USAGE_HW_COMPOSER),
    mProducerSlotSource(0),
    mProducerBuffers(),
    mQueueBufferOutput(),
    mSinkBufferWidth(0),
    mSinkBufferHeight(0),
    mCompositionType(COMPOSITION_UNKNOWN),
    mFbFence(Fence::NO_FENCE),
    mOutputFence(Fence::NO_FENCE),
    mFbProducerSlot(BufferQueue::INVALID_BUFFER_SLOT),
    mOutputProducerSlot(BufferQueue::INVALID_BUFFER_SLOT),
    mDbgState(DBG_STATE_IDLE),
    mDbgLastCompositionType(COMPOSITION_UNKNOWN),
    mMustRecompose(false),
    mForceHwcCopy(SurfaceFlinger::useHwcForRgbToYuv)
{
    mSource[SOURCE_SINK] = sink;
    mSource[SOURCE_SCRATCH] = bqProducer;

    resetPerFrameState();

    int sinkWidth, sinkHeight;
    sink->query(NATIVE_WINDOW_WIDTH, &sinkWidth);
    sink->query(NATIVE_WINDOW_HEIGHT, &sinkHeight);
    mSinkBufferWidth = sinkWidth;
    mSinkBufferHeight = sinkHeight;

    // Pick the buffer format to request from the sink when not rendering to it
    // with GLES. If the consumer needs CPU access, use the default format
    // set by the consumer. Otherwise allow gralloc to decide the format based
    // on usage bits.
    int sinkUsage;
    sink->query(NATIVE_WINDOW_CONSUMER_USAGE_BITS, &sinkUsage);
    if (sinkUsage & (GRALLOC_USAGE_SW_READ_MASK | GRALLOC_USAGE_SW_WRITE_MASK)) {
        int sinkFormat;
        sink->query(NATIVE_WINDOW_FORMAT, &sinkFormat);
        mDefaultOutputFormat = sinkFormat;
    } else {
        mDefaultOutputFormat = HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED;
    }
    mOutputFormat = mDefaultOutputFormat;

    ConsumerBase::mName = String8::format("VDS: %s", mDisplayName.string());
    mConsumer->setConsumerName(ConsumerBase::mName);
    mConsumer->setConsumerUsageBits(GRALLOC_USAGE_HW_COMPOSER);
    mConsumer->setDefaultBufferSize(sinkWidth, sinkHeight);
    sink->setAsyncMode(true);
    IGraphicBufferProducer::QueueBufferOutput output;
    mSource[SOURCE_SCRATCH]->connect(NULL, NATIVE_WINDOW_API_EGL, false, &output);
}
  • The Surface coming from the ImageReader side is saved in mSource[SOURCE_SINK].
  • The producer of the newly created BufferQueue is saved in mSource[SOURCE_SCRATCH].
  • The consumer of the new BufferQueue goes to mConsumer.
  • mDisplayId is -1.

See what's going on? The key point here is the two BufferQueues: one belongs to the ImageReader, the other to the VirtualDisplay, i.e. to the DisplayDevice.

The DisplayDevice's BufferQueue, which for clarity we will informally call the DisplayBufferQueue, is used for GLES composition: the composed data is queued into it.

The ImageReader's BufferQueue, which we will informally call the ReaderBufferQueue, is used for reading the data: buffers that have finished composition are queued into the DisplayBufferQueue and from there end up in the ReaderBufferQueue.

In fact, the DisplayBufferQueue's buffers are themselves dequeued from the ReaderBufferQueue. A look at VirtualDisplaySurface's dequeueBuffer and queueBuffer methods makes this clear.

* frameworks/native/services/surfaceflinger/DisplayHardware/VirtualDisplaySurface.cpp

status_t VirtualDisplaySurface::dequeueBuffer(int* pslot, sp<Fence>* fence, uint32_t w, uint32_t h,
                                              PixelFormat format, uint64_t usage,
                                              uint64_t* outBufferAge,
                                              FrameEventHistoryDelta* outTimestamps) {
    if (mDisplayId < 0) {
        return mSource[SOURCE_SINK]->dequeueBuffer(pslot, fence, w, h, format, usage, outBufferAge,
                                                   outTimestamps);
    }
    ... ...
}


status_t VirtualDisplaySurface::queueBuffer(int pslot,
        const QueueBufferInput& input, QueueBufferOutput* output) {
    if (mDisplayId < 0)
        return mSource[SOURCE_SINK]->queueBuffer(pslot, input, output);
    ... ...
}

See? When used with ImageReader, mDisplayId is -1, so both calls go straight to mSource[SOURCE_SINK], i.e. the ImageReader-side BufferQueue. Let's add stack dumps to dequeueBuffer and queueBuffer and take a look.

VirtualDisplaySurface dequeueBuffer stack:

01-03 13:53:16.709   265   265 D VirtualDisplaySurface_queueBuffer1: #00 pc 0006f6db  /system/lib/libsurfaceflinger.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #01 pc 00054cb3  /system/lib/libgui.so (android::Surface::dequeueBuffer(ANativeWindowBuffer**, int*)+346)
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #02 pc 0069e648  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #03 pc 00345970  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #04 pc 0034582c  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #05 pc 00633650  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #06 pc 0062b30c  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #07 pc 00633474  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #08 pc 00627d3c  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #09 pc 0062a820  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #10 pc 000737c9  /system/lib/libsurfaceflinger.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #11 pc 000721b1  /system/lib/libsurfaceflinger.so
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #12 pc 00079fad  /system/lib/libsurfaceflinger.so
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #13 pc 0007ab59  /system/lib/libsurfaceflinger.so
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #14 pc 000797cf  /system/lib/libsurfaceflinger.so
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #15 pc 00078629  /system/lib/libsurfaceflinger.so
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #16 pc 00078411  /system/lib/libsurfaceflinger.so
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #17 pc 000100a3  /system/lib/libutils.so (android::Looper::pollInner(int)+294)
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #18 pc 0000fee5  /system/lib/libutils.so (android::Looper::pollOnce(int, int*, int*, void**)+32)
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #19 pc 00061ba7  /system/lib/libsurfaceflinger.so
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #20 pc 000773d1  /system/lib/libsurfaceflinger.so (android::SurfaceFlinger::run()+8)
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #21 pc 00002141  /system/bin/surfaceflinger
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #22 pc 000774a9  /system/lib/libc.so (__libc_init+48)
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #23 pc 00001df4  /system/bin/surfaceflinger

VirtualDisplaySurface queueBuffer stack:

01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #01 pc 00055423  /system/lib/libgui.so (android::Surface::queueBuffer(ANativeWindowBuffer*, int)+594)
01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #02 pc 0069eb38  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #03 pc 0034628c  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #04 pc 00346f60  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #05 pc 00346930  /vendor/lib/egl/libGLES_mali.so (eglp_swap_buffers+740)
01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #06 pc 0000ca29  /system/lib/libEGL.so (eglSwapBuffersWithDamageKHR+236)
01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #07 pc 0005135d  /system/lib/libsurfaceflinger.so
01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #08 pc 0007ab71  /system/lib/libsurfaceflinger.so
01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #09 pc 000797cf  /system/lib/libsurfaceflinger.so
01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #10 pc 00078629  /system/lib/libsurfaceflinger.so
01-03 13:53:16.775   265   265 D VirtualDisplaySurface_queueBuffer: #11 pc 00078411  /system/lib/libsurfaceflinger.so
01-03 13:53:16.775   265   265 D VirtualDisplaySurface_queueBuffer: #12 pc 000100a3  /system/lib/libutils.so (android::Looper::pollInner(int)+294)
01-03 13:53:16.775   265   265 D VirtualDisplaySurface_queueBuffer: #13 pc 0000fee5  /system/lib/libutils.so (android::Looper::pollOnce(int, int*, int*, void**)+32)
01-03 13:53:16.775   265   265 D VirtualDisplaySurface_queueBuffer: #14 pc 00061ba7  /system/lib/libsurfaceflinger.so
01-03 13:53:16.775   265   265 D VirtualDisplaySurface_queueBuffer: #15 pc 000773d1  /system/lib/libsurfaceflinger.so (android::SurfaceFlinger::run()+8)
01-03 13:53:16.775   265   265 D VirtualDisplaySurface_queueBuffer: #16 pc 00002141  /system/bin/surfaceflinger
01-03 13:53:16.775   265   265 D VirtualDisplaySurface_queueBuffer: #17 pc 000774a9  /system/lib/libc.so (__libc_init+48)
01-03 13:53:16.775   265   265 D VirtualDisplaySurface_queueBuffer: #18 pc 00001df4  /system/bin/surfaceflinger

The composition flow itself is not covered here; just take the time to work through this data flow.
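The per-frame routing above can be modeled as a tiny simulation. All names here are invented; it only illustrates the mDisplayId < 0 decision, assuming (as in the ImageReader case) that the virtual display carries displayId == -1:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class RoutingDemo {
    // Two sources, as in VirtualDisplaySurface: the sink (the ImageReader's
    // queue) and a second, scratch queue.
    static final int SOURCE_SINK = 0;
    static final int SOURCE_SCRATCH = 1;

    final Deque<Integer>[] sources;
    final int displayId;

    @SuppressWarnings("unchecked")
    RoutingDemo(int displayId) {
        this.displayId = displayId;
        this.sources = new Deque[] { new ArrayDeque<Integer>(), new ArrayDeque<Integer>() };
    }

    // Mirrors the guard at the top of queueBuffer(): with no HWC-backed
    // display (displayId < 0), buffers go straight to the sink.
    void queueBuffer(int slot) {
        int source = (displayId < 0) ? SOURCE_SINK : SOURCE_SCRATCH;
        sources[source].addLast(slot);
    }

    public static void main(String[] args) {
        RoutingDemo vds = new RoutingDemo(-1); // ImageReader case: displayId == -1
        vds.queueBuffer(5);
        System.out.println(vds.sources[SOURCE_SINK].peekFirst()); // buffer went to the sink: prints 5
    }
}
```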

How ImageReader obtains the data

Back to the test: let's see how ImageReader gets hold of the data.

Once the composed data is queued over, the listener registered by JNIImageReaderContext is invoked via its onFrameAvailable callback.

void JNIImageReaderContext::onFrameAvailable(const BufferItem& /*item*/)
{
    ALOGV("%s: frame available", __FUNCTION__);
    bool needsDetach = false;
    JNIEnv* env = getJNIEnv(&needsDetach);
    if (env != NULL) {
        env->CallStaticVoidMethod(mClazz, gImageReaderClassInfo.postEventFromNative, mWeakThiz);
    } else {
        ALOGW("onFrameAvailable event will not posted");
    }
    if (needsDetach) {
        detachJNI();
    }
}

postEventFromNative is a Java method:

    private static void postEventFromNative(Object selfRef) {
        @SuppressWarnings("unchecked")
        WeakReference<ImageReader> weakSelf = (WeakReference<ImageReader>)selfRef;
        final ImageReader ir = weakSelf.get();
        if (ir == null) {
            return;
        }

        final Handler handler;
        synchronized (ir.mListenerLock) {
            handler = ir.mListenerHandler;
        }
        if (handler != null) {
            handler.sendEmptyMessage(0);
        }
    }

The handler here is a ListenerHandler:

    private final class ListenerHandler extends Handler {
        public ListenerHandler(Looper looper) {
            super(looper, null, true /*async*/);
        }

        @Override
        public void handleMessage(Message msg) {
            OnImageAvailableListener listener;
            synchronized (mListenerLock) {
                listener = mListener;
            }

            // It's dangerous to fire onImageAvailable() callback when the ImageReader is being
            // closed, as application could acquire next image in the onImageAvailable() callback.
            boolean isReaderValid = false;
            synchronized (mCloseLock) {
                isReaderValid = mIsReaderValid;
            }
            if (listener != null && isReaderValid) {
                listener.onImageAvailable(ImageReader.this);
            }
        }
    }
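The native-to-Java handoff above can be sketched as a toy, single-threaded simulation; the queue, method names, and counter below are invented stand-ins for the real Handler/Looper machinery:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ListenerDispatchDemo {
    interface OnImageAvailableListener { void onImageAvailable(); }

    // Toy stand-in for the Handler message queue: postEventFromNative
    // enqueues an "empty message"; draining it later fires the listener,
    // like ListenerHandler.handleMessage does.
    final Deque<Runnable> messageQueue = new ArrayDeque<>();
    OnImageAvailableListener listener;
    int callbacks = 0;

    void postEventFromNative() {
        messageQueue.addLast(() -> {
            if (listener != null) listener.onImageAvailable();
        });
    }

    // Stand-in for the Looper running on the handler's thread.
    void drainLooper() {
        Runnable msg;
        while ((msg = messageQueue.pollFirst()) != null) msg.run();
    }

    public static void main(String[] args) {
        ListenerDispatchDemo d = new ListenerDispatchDemo();
        d.listener = () -> d.callbacks++;
        d.postEventFromNative();   // JNI side: a frame became available
        d.drainLooper();           // app thread: handleMessage -> onImageAvailable
        System.out.println(d.callbacks); // prints 1
    }
}
```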

This finally calls back into onImageAvailable in the test code:

        public void onImageAvailable(ImageReader reader) {
            mImageReaderLock.lock();
            try {
                if (reader != mImageReader) {
                    return;
                }

                Log.d(TAG, "New image available from virtual display.");

                // Get the latest buffer.
                Image image = reader.acquireLatestImage();
                if (image != null) {
                    try {
                        // Scan for colors.
                        int color = scanImage(image);
                        synchronized (this) {
                            if (mColor != color) {
                                mColor = color;
                                notifyAll();
                            }
                        }
                    } finally {
                        image.close();
                    }
                }
            } finally {
                mImageReaderLock.unlock();
            }
        }

Notice how similar this is to the Layer handling in SurfaceFlinger? The image is pulled through ImageReader's acquireLatestImage:

    public Image acquireLatestImage() {
        Image image = acquireNextImage();
        if (image == null) {
            return null;
        }
        try {
            for (;;) {
                Image next = acquireNextImageNoThrowISE();
                if (next == null) {
                    Image result = image;
                    image = null;
                    return result;
                }
                image.close();
                image = next;
            }
        } finally {
            if (image != null) {
                image.close();
            }
        }
    }

The loop here exists to fetch the very latest frame. acquireNextImage and acquireNextImageNoThrowISE are similar; the difference is that one throws exceptions and the other does not.
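The drain loop can be modeled with a plain queue. In this sketch (an invented class, with a String standing in for Image) we keep acquiring until the queue is empty and hand back only the newest frame:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class LatestFrameDemo {
    final Deque<String> pending = new ArrayDeque<>();

    // Stand-in for acquireNextImageNoThrowISE(): null when nothing is queued.
    String acquireNext() { return pending.pollFirst(); }

    // Mirrors acquireLatestImage(): older frames are "closed" (dropped),
    // only the most recent one is returned to the caller.
    String acquireLatest() {
        String image = acquireNext();
        if (image == null) return null;
        for (;;) {
            String next = acquireNext();
            if (next == null) return image;
            // image.close() would release the older buffer back here
            image = next;
        }
    }

    public static void main(String[] args) {
        LatestFrameDemo r = new LatestFrameDemo();
        r.pending.add("frame1");
        r.pending.add("frame2");
        r.pending.add("frame3");
        System.out.println(r.acquireLatest()); // prints frame3
    }
}
```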

    public Image acquireNextImage() {
        // Initialize with reader format, but can be overwritten by native if the image
        // format is different from the reader format.
        SurfaceImage si = new SurfaceImage(mFormat);
        int status = acquireNextSurfaceImage(si);

        switch (status) {
            case ACQUIRE_SUCCESS:
                return si;
            case ACQUIRE_NO_BUFS:
                return null;
            case ACQUIRE_MAX_IMAGES:
                throw new IllegalStateException(
                        String.format(
                                "maxImages (%d) has already been acquired, " +
                                "call #close before acquiring more.", mMaxImages));
            default:
                throw new AssertionError("Unknown nativeImageSetup return code " + status);
        }
    }

When it fails, acquireNextImage throws an exception.

    private int acquireNextSurfaceImage(SurfaceImage si) {
        synchronized (mCloseLock) {
            // A null image will eventually be returned if ImageReader is already closed.
            int status = ACQUIRE_NO_BUFS;
            if (mIsReaderValid) {
                status = nativeImageSetup(si);
            }

            switch (status) {
                case ACQUIRE_SUCCESS:
                    si.mIsImageValid = true;
                case ACQUIRE_NO_BUFS:
                case ACQUIRE_MAX_IMAGES:
                    break;
                default:
                    throw new AssertionError("Unknown nativeImageSetup return code " + status);
            }

            // Only keep track the successfully acquired image, as the native buffer is only mapped
            // for such case.
            if (status == ACQUIRE_SUCCESS) {
                mAcquiredImages.add(si);
            }
            return status;
        }
    }

Here we finally reach the key call: nativeImageSetup. Its corresponding JNI function is ImageReader_imageSetup.

The ImageReader_imageSetup function:

static jint ImageReader_imageSetup(JNIEnv* env, jobject thiz, jobject image) {
    ALOGV("%s:", __FUNCTION__);
    JNIImageReaderContext* ctx = ImageReader_getContext(env, thiz);
    if (ctx == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException",
                "ImageReader is not initialized or was already closed");
        return -1;
    }

    BufferItemConsumer* bufferConsumer = ctx->getBufferConsumer();
    BufferItem* buffer = ctx->getBufferItem();
    if (buffer == NULL) {
        ALOGW("Unable to acquire a buffer item, very likely client tried to acquire more than"
            " maxImages buffers");
        return ACQUIRE_MAX_IMAGES;
    }

    status_t res = bufferConsumer->acquireBuffer(buffer, 0);
    if (res != OK) {
        ... ...
    }

    // Add some extra checks for non-opaque formats.
    if (!isFormatOpaque(ctx->getBufferFormat())) {
        ... ...
    }

    // Set SurfaceImage instance member variables
    Image_setBufferItem(env, image, buffer);
    env->SetLongField(image, gSurfaceImageClassInfo.mTimestamp,
            static_cast<jlong>(buffer->mTimestamp));

    return ACQUIRE_SUCCESS;
}
  • Get the JNIImageReaderContext object ctx.
  • Get the corresponding consumer, a BufferItemConsumer, from ctx.
  • Acquire a buffer (a BufferItem) through BufferItemConsumer's acquireBuffer interface.
  • Associate the BufferItem with the SurfaceImage:
static void Image_setBufferItem(JNIEnv* env, jobject thiz,
        const BufferItem* buffer)
{
    env->SetLongField(thiz, gSurfaceImageClassInfo.mNativeBuffer, reinterpret_cast<jlong>(buffer));
}
  • Set the SurfaceImage's timestamp.

BufferItemConsumer's acquireBuffer function is as follows:

status_t BufferItemConsumer::acquireBuffer(BufferItem *item,
        nsecs_t presentWhen, bool waitForFence) {
    status_t err;

    if (!item) return BAD_VALUE;

    Mutex::Autolock _l(mMutex);

    err = acquireBufferLocked(item, presentWhen);
    if (err != OK) {
        if (err != NO_BUFFER_AVAILABLE) {
            BI_LOGE("Error acquiring buffer: %s (%d)", strerror(err), err);
        }
        return err;
    }

    if (waitForFence) {
        err = item->mFence->waitForever("BufferItemConsumer::acquireBuffer");
        if (err != OK) {
            BI_LOGE("Failed to wait for fence of acquired buffer: %s (%d)",
                    strerror(-err), err);
            return err;
        }
    }

    item->mGraphicBuffer = mSlots[item->mSlot].mGraphicBuffer;

    return OK;
}

Here waitForFence is false, so the fence is not waited on at acquire time.

Ultimately the buffer is acquired through ConsumerBase's acquireBufferLocked:

status_t ConsumerBase::acquireBufferLocked(BufferItem *item,
        nsecs_t presentWhen, uint64_t maxFrameNumber) {
    if (mAbandoned) {
        CB_LOGE("acquireBufferLocked: ConsumerBase is abandoned!");
        return NO_INIT;
    }

    status_t err = mConsumer->acquireBuffer(item, presentWhen, maxFrameNumber);
    if (err != NO_ERROR) {
        return err;
    }

    if (item->mGraphicBuffer != NULL) {
        if (mSlots[item->mSlot].mGraphicBuffer != NULL) {
            freeBufferLocked(item->mSlot);
        }
        mSlots[item->mSlot].mGraphicBuffer = item->mGraphicBuffer;
    }

    mSlots[item->mSlot].mFrameNumber = item->mFrameNumber;
    mSlots[item->mSlot].mFence = item->mFence;

    CB_LOGV("acquireBufferLocked: -> slot=%d/%" PRIu64,
            item->mSlot, item->mFrameNumber);

    return OK;
}

mConsumer is the consumer side of the ImageReader's BufferQueue; if you don't remember this, go back and look at ImageReader_init. What mConsumer's acquireBuffer call brings back is exactly the composed output of the virtual display.

Two BufferQueues are in play here; be careful not to confuse them.

Finally, the test code's image-scanning logic, the scanImage function:

        private int scanImage(Image image) {
            final Image.Plane plane = image.getPlanes()[0];
            final ByteBuffer buffer = plane.getBuffer();

The getPlanes function is as follows:

        public Plane[] getPlanes() {
            throwISEIfImageIsInvalid();

            if (mPlanes == null) {
                mPlanes = nativeCreatePlanes(ImageReader.this.mNumPlanes, ImageReader.this.mFormat);
            }
            // Shallow copy is fine.
            return mPlanes.clone();
        }

The corresponding JNI function is Image_createSurfacePlanes:

static jobjectArray Image_createSurfacePlanes(JNIEnv* env, jobject thiz,
        int numPlanes, int readerFormat)
{
    ... ...

    jobjectArray surfacePlanes = env->NewObjectArray(numPlanes, gSurfacePlaneClassInfo.clazz,
            /*initial_element*/NULL);
    ... ...

    LockedImage lockedImg = LockedImage();
    Image_getLockedImage(env, thiz, &lockedImg);
    if (env->ExceptionCheck()) {
        return NULL;
    }
    // Create all SurfacePlanes
    for (int i = 0; i < numPlanes; i++) {
        Image_getLockedImageInfo(env, &lockedImg, i, halReaderFormat,
                &pData, &dataSize, &pixelStride, &rowStride);
        byteBuffer = env->NewDirectByteBuffer(pData, dataSize);
        if ((byteBuffer == NULL) && (env->ExceptionCheck() == false)) {
            jniThrowException(env, "java/lang/IllegalStateException",
                    "Failed to allocate ByteBuffer");
            return NULL;
        }

        // Finally, create this SurfacePlane.
        jobject surfacePlane = env->NewObject(gSurfacePlaneClassInfo.clazz,
                    gSurfacePlaneClassInfo.ctor, thiz, rowStride, pixelStride, byteBuffer);
        env->SetObjectArrayElement(surfacePlanes, i, surfacePlane);
    }

    return surfacePlanes;
}
  • First, Image_getLockedImage produces a LockedImage.
  • Then Image_getLockedImageInfo reads out the LockedImage's data, saving it in a byteBuffer object.
  • A SurfacePlane is created from the byteBuffer.
    This is how the data reaches the Java layer, inside SurfacePlane, i.e. mBuffer.

The LockedImage is generated by the Image_getLockedImage function:

static void Image_getLockedImage(JNIEnv* env, jobject thiz, LockedImage *image) {
    ALOGV("%s", __FUNCTION__);
    BufferItem* buffer = Image_getBufferItem(env, thiz);
    if (buffer == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException",
                "Image is not initialized");
        return;
    }

    status_t res = lockImageFromBuffer(buffer,
            GRALLOC_USAGE_SW_READ_OFTEN, buffer->mFence->dup(), image);
    if (res != OK) {
        jniThrowExceptionFmt(env, "java/lang/RuntimeException",
                "lock buffer failed for format 0x%x",
                buffer->mGraphicBuffer->getPixelFormat());
        return;
    }

    // Carry over some fields from BufferItem.
    image->crop        = buffer->mCrop;
    image->transform   = buffer->mTransform;
    image->scalingMode = buffer->mScalingMode;
    image->timestamp   = buffer->mTimestamp;
    image->dataSpace   = buffer->mDataSpace;
    image->frameNumber = buffer->mFrameNumber;

    ALOGV("%s: Successfully locked the image", __FUNCTION__);
    // crop, transform, scalingMode, timestamp, and frameNumber should be set by producer,
    // and we don't set them here.
}
  • First get the BufferItem, via Image_getBufferItem.
  • Then lock the image from the BufferItem, via lockImageFromBuffer.

The lockImageFromBuffer function is as follows:

status_t lockImageFromBuffer(BufferItem* bufferItem, uint32_t inUsage,
        int fenceFd, LockedImage* outputImage) {
    ALOGV("%s: Try to lock the BufferItem", __FUNCTION__);
    if (bufferItem == nullptr || outputImage == nullptr) {
        ALOGE("Input BufferItem or output LockedImage is NULL!");
        return BAD_VALUE;
    }

    status_t res = lockImageFromBuffer(bufferItem->mGraphicBuffer, inUsage, bufferItem->mCrop,
            fenceFd, outputImage);
    if (res != OK) {
        ALOGE("%s: lock graphic buffer failed", __FUNCTION__);
        return res;
    }

    outputImage->crop        = bufferItem->mCrop;
    outputImage->transform   = bufferItem->mTransform;
    outputImage->scalingMode = bufferItem->mScalingMode;
    outputImage->timestamp   = bufferItem->mTimestamp;
    outputImage->dataSpace   = bufferItem->mDataSpace;
    outputImage->frameNumber = bufferItem->mFrameNumber;
    ALOGV("%s: Successfully locked the image from the BufferItem", __FUNCTION__);
    return OK;
}
  • The inner lockImageFromBuffer call generates the outputImage we need from the GraphicBuffer.
  • The corresponding descriptive fields are then copied over.

The lockImageFromBuffer overload that takes a GraphicBuffer:

status_t lockImageFromBuffer(sp<GraphicBuffer> buffer, uint32_t inUsage,
        const Rect& rect, int fenceFd, LockedImage* outputImage) {
    ... ...

    void* pData = NULL;
    android_ycbcr ycbcr = android_ycbcr();
    status_t res;
    int format = buffer->getPixelFormat();
    int flexFormat = format;
    if (isPossiblyYUV(format)) {
        res = buffer->lockAsyncYCbCr(inUsage, rect, &ycbcr, fenceFd);
        pData = ycbcr.y;
        flexFormat = HAL_PIXEL_FORMAT_YCbCr_420_888;
    }

    // lockAsyncYCbCr for YUV is unsuccessful.
    if (pData == NULL) {
        res = buffer->lockAsync(inUsage, rect, &pData, fenceFd);
        if (res != OK) {
            ALOGE("Lock buffer failed!");
            return res;
        }
    }

    outputImage->data = reinterpret_cast<uint8_t*>(pData);
    outputImage->width = buffer->getWidth();
    outputImage->height = buffer->getHeight();
    outputImage->format = format;
    outputImage->flexFormat = flexFormat;
    outputImage->stride =
            (ycbcr.y != NULL) ? static_cast<uint32_t>(ycbcr.ystride) : buffer->getStride();

    outputImage->dataCb = reinterpret_cast<uint8_t*>(ycbcr.cb);
    outputImage->dataCr = reinterpret_cast<uint8_t*>(ycbcr.cr);
    outputImage->chromaStride = static_cast<uint32_t>(ycbcr.cstride);
    outputImage->chromaStep = static_cast<uint32_t>(ycbcr.chroma_step);
    ALOGV("%s: Successfully locked the image from the GraphicBuffer", __FUNCTION__);
    // Crop, transform, scalingMode, timestamp, and frameNumber should be set by caller,
    // and cann't be set them here.
    return OK;
}
  • Here the GraphicBuffer object buffer holds the VirtualDisplay's composed data, and outputImage is the data we want to produce.
  • The YUV format is tried first, obtaining the corresponding data address through GraphicBuffer's lockAsyncYCbCr interface.
  • If the YUV lock fails, the RGB path is used instead, via lockAsync.
  • outputImage->data is the start address of the data.
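The try-YUV-then-fall-back-to-RGB decision can be sketched as follows; lockImage and its boolean parameters are hypothetical stand-ins for isPossiblyYUV / lockAsyncYCbCr / lockAsync:

```java
public class LockFallbackDemo {
    // Mirrors lockImageFromBuffer()'s ordering: prefer the YUV lock when the
    // format might be YUV; if that yields no data pointer, fall back to the
    // generic (RGB) lock. The returned strings stand in for data addresses.
    public static String lockImage(boolean isPossiblyYuv, boolean yuvLockSucceeds) {
        String data = null;
        if (isPossiblyYuv && yuvLockSucceeds) {
            data = "ycbcr.y";  // lockAsyncYCbCr path
        }
        if (data == null) {
            data = "pData";    // lockAsync fallback
        }
        return data;
    }

    public static void main(String[] args) {
        System.out.println(lockImage(true, true));   // YUV path: prints ycbcr.y
        System.out.println(lockImage(false, false)); // RGB fallback: prints pData
    }
}
```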

After Image_getLockedImageInfo finishes its processing, the data is ready. In the end it lives in Image.Plane's mBuffer, a ByteBuffer, and from there we can process it however we need.
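As a final hedged sketch, here is how code like scanImage typically indexes into that ByteBuffer: for an RGBA_8888 plane, the pixel at (x, y) starts at y * rowStride + x * pixelStride. The strides come from Plane.getRowStride() and Plane.getPixelStride(); the concrete numbers below are made up for illustration:

```java
import java.nio.ByteBuffer;

public class PlaneIndexDemo {
    // Byte offset of pixel (x, y) inside a plane's buffer.
    public static int pixelOffset(int x, int y, int rowStride, int pixelStride) {
        return y * rowStride + x * pixelStride;
    }

    public static void main(String[] args) {
        int width = 4, height = 2, pixelStride = 4;       // RGBA_8888: 4 bytes/pixel
        int rowStride = width * pixelStride;              // real buffers may pad rows
        ByteBuffer buf = ByteBuffer.allocate(rowStride * height);
        int off = pixelOffset(2, 1, rowStride, pixelStride);
        buf.put(off, (byte) 0xFF);                        // write the R byte of pixel (2, 1)
        System.out.println(off);                          // prints 24
    }
}
```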
