There are already plenty of Gallery3D source-code analyses out there, some of them quite thorough; the reference material for this walkthrough likewise comes from the web.
The entry point of Gallery3D is in Gallery.java. Let us start by looking at what the entry code does.
super.onCreate(savedInstanceState);
final boolean imageManagerHasStorage = ImageManager.hasStorage();
boolean slideshowIntent = false;
if (isViewIntent()) {
    Bundle extras = getIntent().getExtras();
    Log.i(TAG, "Gallery, onCreate, isViewIntent");
    if (extras != null) {
        slideshowIntent = extras.getBoolean("slideshow", false);
        Log.i(TAG, "Gallery, onCreate, isViewIntent, slideshowIntent:" + slideshowIntent);
    }
}
Log.i(TAG, "Images.Media.EXTERNAL_CONTENT_URI:" + Images.Media.EXTERNAL_CONTENT_URI.toString());
if (isViewIntent() && getIntent().getData().equals(Images.Media.EXTERNAL_CONTENT_URI)
        && slideshowIntent) {
    if (!imageManagerHasStorage) {
        Toast.makeText(this, getResources().getString(R.string.no_sd_card), Toast.LENGTH_LONG).show();
        finish();
    } else {
        Slideshow slideshow = new Slideshow(this);
        slideshow.setDataSource(new RandomDataSource());
        setContentView(slideshow);
        mDockSlideshow = true;
        Log.i(TAG, "Gallery, onCreate, isViewIntent, Slideshow");
    }
    return;
}
The entry code first checks whether the activity was launched by a view action from the user. Gallery3D is quite capable: besides browsing pictures the way ACDSee does, it can also play videos such as MP4 files, for example videos handed to it by a mobile video client. Try it yourself if you doubt it.
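The body of isViewIntent() is not shown above; in all likelihood it simply compares the launching intent's action with Intent.ACTION_VIEW. A minimal sketch of that assumption:

// Sketch only: return true when the activity was started to view a
// specific piece of content (an ACTION_VIEW intent).
private boolean isViewIntent() {
    String action = getIntent().getAction();
    return Intent.ACTION_VIEW.equals(action);
}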
Next, the screen density is obtained:
if (PIXEL_DENSITY == 0.0f) {
    DisplayMetrics metrics = new DisplayMetrics();
    getWindowManager().getDefaultDisplay().getMetrics(metrics);
    PIXEL_DENSITY = metrics.density;
}
The code that follows is the heart of the entry point:
mReverseGeocoder = new ReverseGeocoder(this);
mRenderView = new RenderView(this);
mGridLayer = new GridLayer(this, (int) (96.0f * PIXEL_DENSITY), (int) (72.0f * PIXEL_DENSITY),
        new GridLayoutInterface(4), mRenderView);
mRenderView.setRootLayer(mGridLayer);
setContentView(mRenderView);
ReverseGeocoder is the background thread that resolves each photo's location; its key function is computeMostGranularCommonLocation.
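It presumably builds on the platform android.location.Geocoder API. As an illustrative sketch of a single reverse lookup (the class and method names below are made up, only the Geocoder calls are real), converting a latitude/longitude pair from a photo's EXIF data into a locality name looks roughly like this:

import android.content.Context;
import android.location.Address;
import android.location.Geocoder;
import java.io.IOException;
import java.util.List;
import java.util.Locale;

// Minimal reverse-geocoding sketch: turn a lat/lon pair into a
// human-readable locality name, similar to what ReverseGeocoder does
// for each photo's GPS tag.
public class ReverseGeocodeSketch {
    public static String lookupLocality(Context context, double lat, double lon) {
        Geocoder geocoder = new Geocoder(context, Locale.getDefault());
        try {
            List<Address> results = geocoder.getFromLocation(lat, lon, 1);
            if (results != null && !results.isEmpty()) {
                return results.get(0).getLocality();
            }
        } catch (IOException e) {
            // Network or geocoding service failure; fall through to null.
        }
        return null;
    }
}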
RenderView is the most central class. It extends GLSurfaceView, and all of Gallery3D's UI rendering and event handling revolves around it.
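As a rough, illustrative skeleton of that role (not the real RenderView, whose fields and dispatch logic are more involved): a GLSurfaceView subclass that owns both the GL renderer callbacks and the touch handling looks like this.

import android.content.Context;
import android.opengl.GLSurfaceView;
import android.view.MotionEvent;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

// Illustrative skeleton: one class that both draws every frame with
// OpenGL and receives the touch events, the role RenderView plays.
public class SketchRenderView extends GLSurfaceView implements GLSurfaceView.Renderer {

    public SketchRenderView(Context context) {
        super(context);
        setRenderer(this); // all drawing goes through OpenGL
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // Set up GL state and textures here.
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        gl.glViewport(0, 0, width, height);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        // Walk the layer tree (the root layer in Gallery3D is GridLayer)
        // and draw each layer.
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        // In the real RenderView the event is routed to the root layer;
        // here we simply consume it.
        return true;
    }
}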
The GridLayer object is the core layer on the OpenGL surface. The constructor arguments specify the size of each item, i.e. each thumbnail: 96 dp wide and 72 dp tall, and GridLayoutInterface(4) means the visible view shows at most 4 rows of thumbnails. The horizontal spacing between thumbnails is 20 dp and the vertical spacing is 40 dp.
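For concreteness, the dp-to-pixel conversion is just a multiplication by the density factor read earlier; on an hdpi (240 dpi) device, for example, metrics.density is 1.5:

// px = dp * density; on an hdpi screen (density == 1.5) each thumbnail
// cell is (int)(96 * 1.5) = 144 px wide and (int)(72 * 1.5) = 108 px tall.
int cellWidthPx  = (int) (96.0f * PIXEL_DENSITY);
int cellHeightPx = (int) (72.0f * PIXEL_DENSITY);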
Finally, setContentView(mRenderView) makes the RenderView the activity's content view, which means every screen in Gallery3D is rendered with OpenGL and has nothing to do with the standard Android UI widgets. That is not necessarily a good thing: OpenGL puts fairly high demands on the hardware, and Gallery3D does not perform well on low-end devices, as you can verify by comparing for yourself.
The UI is now ready, but where does the data come from? See the following:
Thread t = new Thread() {
    public void run() {
        int numRetries = 25;
        if (!imageManagerHasStorage) {
            showToast(getResources().getString(R.string.no_sd_card), Toast.LENGTH_LONG);
            do {
                --numRetries;
                try {
                    Thread.sleep(200);
                } catch (InterruptedException e) {
                    ;
                }
            } while (numRetries > 0 && !ImageManager.hasStorage());
        }
        final boolean imageManagerHasStorageAfterDelay = ImageManager.hasStorage();
        Log.i(TAG, "Gallery:onCreate, thread");
        CacheService.computeDirtySets(Gallery.this);
        CacheService.startCache(Gallery.this, false);
        final boolean isCacheReady = CacheService.isCacheReady(false);

        // Creating the DataSource objects.
        final PicasaDataSource picasaDataSource = new PicasaDataSource(Gallery.this);
        final LocalDataSource localDataSource = new LocalDataSource(Gallery.this);
        final ConcatenatedDataSource combinedDataSource = new ConcatenatedDataSource(localDataSource, picasaDataSource);

        // Depending upon the intent, we assign the right dataSource.
        if (!isPickIntent() && !isViewIntent()) {
            if (imageManagerHasStorageAfterDelay) {
                mGridLayer.setDataSource(combinedDataSource);
            } else {
                mGridLayer.setDataSource(picasaDataSource);
            }
            if (!isCacheReady && imageManagerHasStorageAfterDelay) {
                showToast(getResources().getString(R.string.loading_new), Toast.LENGTH_LONG);
            }
        } else if (!isViewIntent()) {
            final Intent intent = getIntent();
            if (intent != null) {
                final String type = intent.resolveType(Gallery.this);
                boolean includeImages = isImageType(type);
                boolean includeVideos = isVideoType(type);
                ((LocalDataSource) localDataSource).setMimeFilter(!includeImages, !includeVideos);
                if (includeImages) {
                    if (imageManagerHasStorageAfterDelay) {
                        mGridLayer.setDataSource(combinedDataSource);
                    } else {
                        mGridLayer.setDataSource(picasaDataSource);
                    }
                } else {
                    mGridLayer.setDataSource(localDataSource);
                }
                mGridLayer.setPickIntent(true);
                if (!imageManagerHasStorageAfterDelay) {
                    showToast(getResources().getString(R.string.no_sd_card), Toast.LENGTH_LONG);
                } else {
                    showToast(getResources().getString(R.string.pick_prompt), Toast.LENGTH_LONG);
                }
            }
        } else {
            // View intent for images.
            Uri uri = getIntent().getData();
            Log.i(TAG, "Gallery, view intent for images, uri:" + uri.toString());
            boolean slideshow = getIntent().getBooleanExtra("slideshow", false);
            final SingleDataSource singleDataSource = new SingleDataSource(Gallery.this, uri.toString(), slideshow);
            final ConcatenatedDataSource singleCombinedDataSource = new ConcatenatedDataSource(singleDataSource, picasaDataSource);
            mGridLayer.setDataSource(singleCombinedDataSource);
            mGridLayer.setViewIntent(true, Utils.getBucketNameFromUri(uri));
            if (singleDataSource.isSingleImage()) {
                Log.i(TAG, "Gallery, view intent for images, set single image");
                mGridLayer.setSingleImage(false);
            } else if (slideshow) {
                mGridLayer.setSingleImage(true);
                Log.i(TAG, "Gallery, view intent for images, start slide show");
                mGridLayer.startSlideshow();
            }
        }
    }
};
t.start();
This code shows where the data comes from. It first checks whether external storage (an SD card) is available; if not, it tells the user there is no SD card and then retries up to 25 times at 200 ms intervals, i.e. for about 5 seconds, waiting for storage to appear.
Next, CacheService.computeDirtySets(Gallery.this) checks whether there are any new albums or videos. CacheService extends IntentService, and CacheService.startCache(Gallery.this, false) starts the service, which then processes the new albums or videos and caches them on the SD card so that the next time around they can be read straight from the cache and displayed quickly.
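For readers unfamiliar with IntentService, here is a minimal, illustrative skeleton of the pattern CacheService is built on; the class name and method body are hypothetical, only the framework callbacks are real:

import android.app.IntentService;
import android.content.Context;
import android.content.Intent;

// Illustrative IntentService skeleton: work is queued with startService()
// and handled one intent at a time on a background worker thread, which
// is how a service like CacheService can scan and cache new albums
// without blocking the UI.
public class SketchCacheService extends IntentService {
    public SketchCacheService() {
        super("SketchCacheService");
    }

    // Convenience wrapper in the same spirit as CacheService.startCache(...).
    public static void start(Context context) {
        context.startService(new Intent(context, SketchCacheService.class));
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // Scan MediaStore for new albums/videos and write thumbnail data
        // to the on-disk cache here (details omitted in this sketch).
    }
}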
There are several kinds of data source: PicasaDataSource, LocalDataSource, ConcatenatedDataSource, and SingleDataSource. PicasaDataSource wraps Google's Picasa photo service (see http://picasa.google.com). LocalDataSource is straightforward: the data stored on the local SD card. ConcatenatedDataSource combines a LocalDataSource and a PicasaDataSource, and SingleDataSource is the data source used when viewing a single image.
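To illustrate the composition idea behind ConcatenatedDataSource (a hypothetical simplification, not the real Gallery3D interfaces), a concatenated source simply exposes the union of its two children:

import java.util.ArrayList;
import java.util.List;

// Hypothetical simplification of the composition pattern: one source
// that forwards to two underlying sources and merges their results.
interface SimpleDataSource {
    List<String> loadAlbums();   // e.g. album/bucket identifiers
}

class SimpleConcatenatedDataSource implements SimpleDataSource {
    private final SimpleDataSource mFirst;
    private final SimpleDataSource mSecond;

    SimpleConcatenatedDataSource(SimpleDataSource first, SimpleDataSource second) {
        mFirst = first;
        mSecond = second;
    }

    @Override
    public List<String> loadAlbums() {
        List<String> albums = new ArrayList<String>(mFirst.loadAlbums());
        albums.addAll(mSecond.loadAlbums());
        return albums;
    }
}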
The standard setup is mGridLayer.setDataSource(combinedDataSource), i.e. the combination of the local and Picasa data sources. My guess is that Gallery3D was meant to tie local photos to Picasa and enable photo sharing; unfortunately Picasa never really took off, although Gallery3D itself turned out rather well.
So what does GridLayer's setDataSource actually do?
public void setDataSource(DataSource dataSource) {
    MediaFeed feed = mMediaFeed;
    if (feed != null) {
        feed.shutdown();
        sDisplayList.clear();
        mBackground.clear();
    }
    mMediaFeed = new MediaFeed(mContext, dataSource, this);
    mMediaFeed.start();
}
This function plugs the data source in and gets its contents onto the screen. The details will be covered in a later post; stay tuned.