Learning the Glide Framework (Part 1)

To learn Glide, start by getting familiar with a few of its most important classes; a minimal usage sketch showing how they fit together follows the list.

  1. Glide: a singleton exposing a simple static interface for building requests via RequestBuilder, and maintaining the Engine, BitmapPool, disk cache, and MemoryCache.
  2. RequestBuilder: a generic class that handles setting up and starting loads of resources.
  3. Engine: responsible for starting loads and managing active and cached resources.
  4. DiskCache: the interface for writing to and reading from the disk cache (default directory "image_manager_disk_cache", default size 250 MB).
  5. BitmapPool: a pool interface that allows users to reuse Bitmap objects.
  6. MemoryCache: the interface for adding resources to and removing them from the memory cache.
  7. EngineJob: manages a load by adding and removing callbacks for it, and notifies those callbacks when the load completes.
  8. DecodeJob: responsible for decoding resources from cached data or from the original source, and for applying transformations and transcoders. Note: this class has a natural ordering that is inconsistent with equals.
  9. DecodeHelper: all of the data-related bookkeeping for a decode is handled here.
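
The pieces above line up along the familiar three-call chain. A minimal usage sketch (the URL and the imageView here are placeholders, not part of the source walkthrough):

// Usage sketch: with() returns a RequestManager, load() returns a RequestBuilder<Drawable>,
// and into() builds a Request that Engine, EngineJob and DecodeJob execute in the background.
Glide.with(context)                      // bind the load to a lifecycle
    .load("https://example.com/a.png")   // placeholder URL
    .into(imageView);                    // placeholder ImageView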
1. Glide's with() method: obtaining a RequestManager instance
@NonNull
  public static RequestManager with(@NonNull Context context) {
    return getRetriever(context).get(context);
  }
  /**
  * getRetriever() checks that the context is not null, obtains the Glide singleton,
  * and then calls getRequestManagerRetriever() to get the RequestManagerRetriever instance.
  */
  
   /**
   * Get the singleton.
   *
   * @return the singleton
   */
  @NonNull
  public static Glide get(@NonNull Context context) {
    if (glide == null) {
      synchronized (Glide.class) {
        if (glide == null) {
          checkAndInitializeGlide(context);
        }
      }
    }

    return glide;
  }
  /**
  * checkAndInitializeGlide() calls initializeGlide(@NonNull Context context, @NonNull GlideBuilder builder).
  */

Glide's with()/get() overloads appear to take many different kinds of parameters (Context, Activity, FragmentActivity, and so on), but in practice they boil down to just two categories: an Application context versus a non-Application one. The point of the distinction is to bind Glide to the caller's lifecycle. It is worth noting that when Glide is used on a background (non-main) thread, it is always bound to the Application.
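
A condensed look at RequestManagerRetriever#get(Context) shows both points (paraphrased from Glide 4.x; exact method bodies vary between versions):

// RequestManagerRetriever.java (condensed sketch, Glide 4.x)
public RequestManager get(@NonNull Context context) {
    if (context == null) {
      throw new IllegalArgumentException("You cannot start a load on a null Context");
    } else if (Util.isOnMainThread() && !(context instanceof Application)) {
      // Non-Application contexts on the main thread are tied to their Activity/Fragment lifecycle.
      if (context instanceof FragmentActivity) {
        return get((FragmentActivity) context);
      } else if (context instanceof Activity) {
        return get((Activity) context);
      } else if (context instanceof ContextWrapper) {
        return get(((ContextWrapper) context).getBaseContext());
      }
    }
    // Background threads and Application contexts fall through to the Application-scoped manager.
    return getApplicationManager(context);
}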

2. Calling load(): returning a RequestBuilder
  /**
   * Equivalent to calling {@link #asDrawable()} and then {@link RequestBuilder#load(String)}.
   *
   * @return A new request builder for loading a {@link Drawable} using the given model.
   */
  @NonNull
  @CheckResult
  @Override
  public RequestBuilder<Drawable> load(@Nullable String string) {
    return asDrawable().load(string);
  }
  
  @NonNull
  private RequestBuilder<TranscodeType> loadGeneric(@Nullable Object model) {
    this.model = model;
    isModelSet = true;
    return this;
  }

asDrawable() returns a RequestBuilder and initializes it; load() is then called on it, which in turn calls loadGeneric(), passing the String into the RequestBuilder as an Object, assigning it to model, and setting isModelSet to true.
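
For reference, asDrawable() is just a thin wrapper that constructs the RequestBuilder (condensed from RequestManager in Glide 4.x; signatures may differ slightly between versions):

// RequestManager.java (condensed sketch, Glide 4.x)
public RequestBuilder<Drawable> asDrawable() {
    return as(Drawable.class);
}

public <ResourceType> RequestBuilder<ResourceType> as(@NonNull Class<ResourceType> resourceClass) {
    // A fresh RequestBuilder bound to this RequestManager; load()/loadGeneric() later set its model.
    return new RequestBuilder<>(glide, this, resourceClass, context);
}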

3. The into() method
private <Y extends Target<TranscodeType>> Y into(
      @NonNull Y target,
      @Nullable RequestListener<TranscodeType> targetListener,
      @NonNull RequestOptions options) {
    Util.assertMainThread();
    Preconditions.checkNotNull(target);
    if (!isModelSet) {
      throw new IllegalArgumentException("You must call #load() before calling #into()");
    }

    options = options.autoClone();
    Request request = buildRequest(target, targetListener, options);

    Request previous = target.getRequest();
    if (request.isEquivalentTo(previous)
        && !isSkipMemoryCacheWithCompletePreviousRequest(options, previous)) {
      request.recycle();
      // If the request is completed, beginning again will ensure the result is re-delivered,
      // triggering RequestListeners and Targets. If the request is failed, beginning again will
      // restart the request, giving it another chance to complete. If the request is already
      // running, we can let it continue running without interruption.
      if (!Preconditions.checkNotNull(previous).isRunning()) {
        // Use the previous request rather than the new one to allow for optimizations like skipping
        // setting placeholders, tracking and un-tracking Targets, and obtaining View dimensions
        // that are done in the individual Request.
        previous.begin();
      }
      return target;
    }
    requestManager.clear(target);
    target.setRequest(request);
    requestManager.track(target, request);

    return target;
  }

The key lines are Request request = buildRequest(target, targetListener, options) and requestManager.track(target, request).
The latter runs RequestManager's track() method:

  void track(Target<?> target, Request request) {
    targetTracker.track(target);
    requestTracker.runRequest(request);
  }

This in turn calls requestTracker.runRequest(); RequestTracker keeps the request in a Set backed by a WeakHashMap and either starts it or parks it while paused, as sketched below.
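
A condensed view of runRequest() (paraphrased from RequestTracker in Glide 4.x):

// RequestTracker.java (condensed sketch, Glide 4.x)
private final Set<Request> requests =
    Collections.newSetFromMap(new WeakHashMap<Request, Boolean>());
private final List<Request> pendingRequests = new ArrayList<>();
private boolean isPaused;

public void runRequest(@NonNull Request request) {
    requests.add(request);
    if (!isPaused) {
      request.begin();              // starts the load; eventually reaches SingleRequest#onSizeReady
    } else {
      request.clear();
      pendingRequests.add(request); // held until the RequestManager is resumed
    }
}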

4. Following the call chain further down we end up in the GlideExecutor class, whose DefaultThreadFactory creates a worker thread to load the data; the actual work is carried out in DecodeJob.
  1. SingleRequest.java
public void onSizeReady(int width, int height) {
    ...
    loadStatus = engine.load(...);
    ...
}
  2. Engine.java
public <R> LoadStatus load(...) {
    ...
    EngineJob<R> engineJob =
        engineJobFactory.build(...);

    DecodeJob<R> decodeJob =
        decodeJobFactory.build(...);

    jobs.put(key, engineJob);

    engineJob.addCallback(cb);
    engineJob.start(decodeJob);

    if (Log.isLoggable(TAG, Log.VERBOSE)) {
      logWithTimeAndKey("Started new load", startTime, key);
    }
    return new LoadStatus(cb, engineJob);
  }
  3. EngineJob.java
public void start(DecodeJob<R> decodeJob) {
    this.decodeJob = decodeJob;
    GlideExecutor executor = decodeJob.willDecodeFromCache()
        ? diskCacheExecutor
        : getActiveSourceExecutor();
    executor.execute(decodeJob);
  }
  
  @Override
  public void reschedule(DecodeJob<?> job) {
    // Even if the job is cancelled here, it still needs to be scheduled
    // so that it can clean itself up.
    getActiveSourceExecutor().execute(job);
  }
  4. DecodeJob.java
    Note the run() method, which calls runWrapped().
    EngineJob implements the DecodeJob.Callback interface; each time reschedule() is called, the DecodeJob is re-submitted to an executor thread and keeps running until the load completes.
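
A condensed view of run() (paraphrased from DecodeJob in Glide 4.x; error handling trimmed):

// DecodeJob.java (condensed sketch, Glide 4.x; error handling trimmed)
@Override
public void run() {
    DataFetcher<?> localFetcher = currentFetcher;
    try {
      if (isCancelled) {
        notifyFailed();
        return;
      }
      runWrapped();                // dispatches on runReason, see below
    } finally {
      // Always clean up the fetcher so streams and connections are released.
      if (localFetcher != null) {
        localFetcher.cleanup();
      }
    }
}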
private void runWrapped() {
     switch (runReason) {
      case INITIALIZE:
        stage = getNextStage(Stage.INITIALIZE);
        currentGenerator = getNextGenerator();
        runGenerators();
        break;
      case SWITCH_TO_SOURCE_SERVICE:
        runGenerators();
        break;
      case DECODE_DATA:
        decodeFromRetrievedData();
        break;
      default:
        throw new IllegalStateException("Unrecognized run reason: " + runReason);
    }
  }

private DataFetcherGenerator getNextGenerator() {
    switch (stage) {
      case RESOURCE_CACHE:
        return new ResourceCacheGenerator(decodeHelper, this);
      case DATA_CACHE:
        return new DataCacheGenerator(decodeHelper, this);
      case SOURCE:
        return new SourceGenerator(decodeHelper, this);
      case FINISHED:
        return null;
      default:
        throw new IllegalStateException("Unrecognized stage: " + stage);
    }
  }

Here runWrapped() calls getNextGenerator() to pick the generator for the current stage; for a plain network load that misses both disk caches, currentGenerator eventually becomes a SourceGenerator. runGenerators() (shown below) then drives it.
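
Which stage comes next is decided by getNextStage(), roughly as follows (condensed from DecodeJob in Glide 4.x; the exact checks depend on the DiskCacheStrategy and the onlyRetrieveFromCache flag):

// DecodeJob.java (condensed sketch, Glide 4.x)
private Stage getNextStage(Stage current) {
    switch (current) {
      case INITIALIZE:
        return diskCacheStrategy.decodeCachedResource()
            ? Stage.RESOURCE_CACHE : getNextStage(Stage.RESOURCE_CACHE);
      case RESOURCE_CACHE:
        return diskCacheStrategy.decodeCachedData()
            ? Stage.DATA_CACHE : getNextStage(Stage.DATA_CACHE);
      case DATA_CACHE:
        // Skip loading from source when the caller only wants cached resources.
        return onlyRetrieveFromCache ? Stage.FINISHED : Stage.SOURCE;
      case SOURCE:
      case FINISHED:
        return Stage.FINISHED;
      default:
        throw new IllegalArgumentException("Unrecognized stage: " + current);
    }
}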

private void runGenerators() {
    currentThread = Thread.currentThread();
    startFetchTime = LogTime.getLogTime();
    boolean isStarted = false;
    while (!isCancelled && currentGenerator != null
        && !(isStarted = currentGenerator.startNext())) {
      stage = getNextStage(stage);
      currentGenerator = getNextGenerator();

      if (stage == Stage.SOURCE) {
        reschedule();
        return;
      }
    }
    // We've run out of stages and generators, give up.
    if ((stage == Stage.FINISHED || isCancelled) && !isStarted) {
      notifyFailed();
    }
  }

Inside runGenerators(), pay particular attention to currentGenerator.startNext(). In SourceGenerator's startNext() the key line is loadData.fetcher.loadData(helper.getPriority(), this); (startNext() is sketched below).
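
A condensed view of startNext() (paraphrased from SourceGenerator in Glide 4.x):

// SourceGenerator.java (condensed sketch, Glide 4.x)
@Override
public boolean startNext() {
    if (dataToCache != null) {
      // Data fetched on a previous pass is first written to the disk cache.
      Object data = dataToCache;
      dataToCache = null;
      cacheData(data);
    }
    if (sourceCacheGenerator != null && sourceCacheGenerator.startNext()) {
      return true;
    }
    sourceCacheGenerator = null;

    loadData = null;
    boolean started = false;
    while (!started && hasNextModelLoader()) {
      loadData = helper.getLoadData().get(loadDataListIndex++);
      if (loadData != null
          && (helper.getDiskCacheStrategy().isDataCacheable(loadData.fetcher.getDataSource())
              || helper.hasLoadPath(loadData.fetcher.getDataClass()))) {
        started = true;
        // For an http/https model the fetcher here is an HttpUrlFetcher.
        loadData.fetcher.loadData(helper.getPriority(), this);
      }
    }
    return started;
}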
Now turn to HttpUrlFetcher.java:

 @Override
  public void loadData(@NonNull Priority priority,
      @NonNull DataCallback<? super InputStream> callback) {
    long startTime = LogTime.getLogTime();
    try {
      InputStream result = loadDataWithRedirects(glideUrl.toURL(), 0, null, glideUrl.getHeaders());
      callback.onDataReady(result);
    } catch (IOException e) {
      if (Log.isLoggable(TAG, Log.DEBUG)) {
        Log.d(TAG, "Failed to load data for url", e);
      }
      callback.onLoadFailed(e);
    } finally {
      if (Log.isLoggable(TAG, Log.VERBOSE)) {
        Log.v(TAG, "Finished http url fetcher fetch in " + LogTime.getElapsedMillis(startTime));
      }
    }
  }

  private InputStream loadDataWithRedirects(URL url, int redirects, URL lastUrl,
      Map<String, String> headers) throws IOException {
    if (redirects >= MAXIMUM_REDIRECTS) {
      throw new HttpException("Too many (> " + MAXIMUM_REDIRECTS + ") redirects!");
    } else {
      // Comparing the URLs using .equals performs additional network I/O and is generally broken.
      // See http://michaelscharf.blogspot.com/2006/11/javaneturlequals-and-hashcode-make.html.
      try {
        if (lastUrl != null && url.toURI().equals(lastUrl.toURI())) {
          throw new HttpException("In re-direct loop");

        }
      } catch (URISyntaxException e) {
        // Do nothing, this is best effort.
      }
    }

    urlConnection = connectionFactory.build(url);
    for (Map.Entry<String, String> headerEntry : headers.entrySet()) {
      urlConnection.addRequestProperty(headerEntry.getKey(), headerEntry.getValue());
    }
    urlConnection.setConnectTimeout(timeout);
    urlConnection.setReadTimeout(timeout);
    urlConnection.setUseCaches(false);
    urlConnection.setDoInput(true);

    // Stop the urlConnection instance of HttpUrlConnection from following redirects so that
    // redirects will be handled by recursive calls to this method, loadDataWithRedirects.
    urlConnection.setInstanceFollowRedirects(false);

    // Connect explicitly to avoid errors in decoders if connection fails.
    urlConnection.connect();
    // Set the stream so that it's closed in cleanup to avoid resource leaks. See #2352.
    stream = urlConnection.getInputStream();
    if (isCancelled) {
      return null;
    }
    final int statusCode = urlConnection.getResponseCode();
    if (isHttpOk(statusCode)) {
      return getStreamForSuccessfulRequest(urlConnection);
    } else if (isHttpRedirect(statusCode)) {
      String redirectUrlString = urlConnection.getHeaderField("Location");
      if (TextUtils.isEmpty(redirectUrlString)) {
        throw new HttpException("Received empty or null redirect url");
      }
      URL redirectUrl = new URL(url, redirectUrlString);
      // Closing the stream specifically is required to avoid leaking ResponseBodys in addition
      // to disconnecting the url connection below. See #2352.
      cleanup();
      return loadDataWithRedirects(redirectUrl, redirects + 1, url, headers);
    } else if (statusCode == INVALID_STATUS_CODE) {
      throw new HttpException(statusCode);
    } else {
      throw new HttpException(urlConnection.getResponseMessage(), statusCode);
    }
  }

This is where the network request is actually made: an InputStream is obtained and handed back via callback.onDataReady(result), which delivers the stream to SourceGenerator's onDataReady() method:

//SourceGenerator.java
@Override
  public void onDataReady(Object data) {
    DiskCacheStrategy diskCacheStrategy = helper.getDiskCacheStrategy();
    if (data != null && diskCacheStrategy.isDataCacheable(loadData.fetcher.getDataSource())) {
      dataToCache = data;
      // We might be being called back on someone else's thread. Before doing anything, we should
      // reschedule to get back onto Glide's thread.
      cb.reschedule();
    } else {
      cb.onDataFetcherReady(loadData.sourceKey, data, loadData.fetcher,
          loadData.fetcher.getDataSource(), originalKey);
    }
  }

This in turn calls back into DecodeJob's onDataFetcherReady() method:

//DecodeJob.java
@Override
  public void onDataFetcherReady(Key sourceKey, Object data, DataFetcher<?> fetcher,
      DataSource dataSource, Key attemptedKey) {
    this.currentSourceKey = sourceKey;
    this.currentData = data;
    this.currentFetcher = fetcher;
    this.currentDataSource = dataSource;
    this.currentAttemptingKey = attemptedKey;
    if (Thread.currentThread() != currentThread) {
      runReason = RunReason.DECODE_DATA;
      callback.reschedule(this);
    } else {
      TraceCompat.beginSection("DecodeJob.decodeFromRetrievedData");
      try {
        decodeFromRetrievedData();
      } finally {
        TraceCompat.endSection();
      }
    }
  }

decodeFromRetrievedData() then decodes the stream data; how decoding and caching work in detail will be covered in a later post. As a quick preview, its overall shape is sketched below.
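
A condensed view (paraphrased from DecodeJob in Glide 4.x; logging and error details trimmed): if decoding succeeds, the resource is handed to notifyEncodeAndRelease(); otherwise the generators run again.

// DecodeJob.java (condensed sketch, Glide 4.x; logging and error details trimmed)
private void decodeFromRetrievedData() {
    Resource<R> resource = null;
    try {
      // Decode the fetched data (e.g. the InputStream from HttpUrlFetcher) into a Resource.
      resource = decodeFromData(currentFetcher, currentData, currentDataSource);
    } catch (GlideException e) {
      e.setLoggingDetails(currentAttemptingKey, currentDataSource);
      throwables.add(e);
    }
    if (resource != null) {
      // Success: notify the EngineJob and, depending on the strategy, cache the decoded resource.
      notifyEncodeAndRelease(resource, currentDataSource);
    } else {
      // Failure: fall back to the next generator/stage.
      runGenerators();
    }
}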

I'm only just beginning to study the Glide source code, and this is no more than a rough first roadmap. If anything here is inaccurate or wrong, corrections from more experienced readers are very welcome.
