What can I do if the model is too large to load?

There are several ways to work with the XXL versions of ProtT5:

  1. Use a GPU with large memory, such as an NVIDIA Quadro RTX 8000 or NVIDIA A100, with or without half-precision (see the first sketch below).
  2. Use a GPU with less memory after quantizing the model, which shrinks it roughly 3x-4x (see the quantization sketch below):
    https://pytorch.org/docs/stable/quantization.html
  3. Convert the model to ONNX, quantize it, and run inference on the CPU instead of the GPU:
    https://github.com/agemagician/ProtTrans/tree/master/Embedding/Onnx
  4. Parallelize the model across multiple smaller GPUs (see the last sketch below):
    https://huggingface.co/transformers/model_doc/t5.html#transformers.T5Model.parallelize

You can, of course, combine several of the above approaches.
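
For example, option 1 can be as simple as loading the encoder in half-precision. The sketch below assumes a recent transformers version and uses the `Rostlab/prot_t5_xxl_uniref50` checkpoint name as an assumption; substitute the XXL checkpoint you actually intend to use.

```python
# Minimal sketch of option 1: load the encoder in fp16 on one large-memory GPU.
import torch
from transformers import T5EncoderModel, T5Tokenizer

model_name = "Rostlab/prot_t5_xxl_uniref50"  # assumed checkpoint name

tokenizer = T5Tokenizer.from_pretrained(model_name, do_lower_case=False)
model = T5EncoderModel.from_pretrained(model_name, torch_dtype=torch.float16)
model = model.to("cuda").eval()

# Amino-acid sequences are passed as space-separated residues.
seqs = ["M K T A Y I A K Q R"]
batch = tokenizer(seqs, padding=True, return_tensors="pt").to("cuda")

with torch.no_grad():
    embeddings = model(**batch).last_hidden_state  # (batch, seq_len, hidden)
print(embeddings.shape)
```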

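For option 2, the linked PyTorch docs describe several quantization modes; below is a minimal sketch of dynamic quantization. The checkpoint name is again an assumption, and note that the dynamically quantized Linear layers as shown here execute on the CPU.

```python
# Minimal sketch of option 2: dynamic quantization of the Linear layers.
import torch
from transformers import T5EncoderModel

model = T5EncoderModel.from_pretrained("Rostlab/prot_t5_xxl_uniref50").eval()

# Replace nn.Linear weights with int8 versions; activations are quantized
# on the fly at inference time.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# The resulting state dict is roughly 3x-4x smaller than the fp32 original.
torch.save(quantized_model.state_dict(), "prot_t5_xxl_int8.pt")
```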
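
For option 4, the parallelize() API linked above spreads the encoder blocks over several GPUs. The sketch below assumes four GPUs and 24 encoder blocks; adjust the device map to your hardware and to the actual number of blocks in the checkpoint you load.

```python
# Minimal sketch of option 4: naive model parallelism via parallelize().
import torch
from transformers import T5EncoderModel, T5Tokenizer

model_name = "Rostlab/prot_t5_xxl_uniref50"  # assumed checkpoint name
tokenizer = T5Tokenizer.from_pretrained(model_name, do_lower_case=False)
model = T5EncoderModel.from_pretrained(model_name)

# Map encoder blocks to GPU ids (here: 24 blocks spread over 4 devices).
device_map = {
    0: list(range(0, 6)),
    1: list(range(6, 12)),
    2: list(range(12, 18)),
    3: list(range(18, 24)),
}
model.parallelize(device_map)
model.eval()

# Inputs go to the first device in the map.
batch = tokenizer(["M K T A Y I A K Q R"], return_tensors="pt").to("cuda:0")
with torch.no_grad():
    embeddings = model(**batch).last_hidden_state
```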