s3cmd put of a large file to Ceph Object Storage always times out

Uploading TensorFlow model files to Ceph Object Storage: a 150 MB file consistently times out.

$ s3cmd put -r ./deeplab --no-ssl --host=${AWS_HOST} --host-bucket= s3://kubeflow-models/

WARNING: Retrying failed request: /deeplab/1/variables/variables.data-00000-of-00001?uploads (timed out)

Fix:

Set enable_multipart = False in .s3cfg. This disables the multipart (chunked) upload that s3cmd applies by default to files larger than 15 MB. The ?uploads suffix on the failing request above is the multipart-initiation call, so with multipart disabled s3cmd sends the file as a single PUT instead.

Alternatively, pass --disable-multipart on the s3cmd command line, as shown below.
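The one-off variant is simply the original command plus the flag (same host and bucket as above):

$ s3cmd put -r ./deeplab --no-ssl --host=${AWS_HOST} --host-bucket= --disable-multipart s3://kubeflow-models/

The persistent variant is the .s3cfg below, with multipart disabled: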

[default]
access_key = HBBI9J3QJ4SALFWYG3YX
secret_key = kWG2WY7XljpbibwXWmAa1tS1if0SqZCkDbwOu2NN
host_base = 192.168.10.11:32700
host_bucket = 192.168.10.11:32700
default_mime_type = binary/octet-stream
enable_multipart = False
guess_mime_type = False
multipart_chunk_size_mb = 200
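Note that multipart_chunk_size_mb is ignored once enable_multipart = False. A quick check under this config, assuming the same bucket and the model path from the error message above (host settings now come from .s3cfg):

$ s3cmd put -r ./deeplab --no-ssl s3://kubeflow-models/
$ s3cmd ls --no-ssl s3://kubeflow-models/deeplab/1/variables/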
