I have recently been building a file storage service on top of Tencent Cloud Object Storage (COS). For uploading large files in parts and resuming interrupted uploads, the Tencent Cloud API documentation is quite rough. Tencent recommends the high-level API for large file uploads, and that API does expose a resumable-upload call, but in my testing it was simply not usable: the resume call only works together with the upload call in the same run (in practice, who pauses an upload for a moment and then resumes it right away?). When I asked Tencent's engineers about this, the answers were vague: first they recommended the high-level API, then told me it could not be done with the high-level API, then pointed me back to the high-level API again, which left me thoroughly confused.
In the end I implemented it with the part-level (multipart) API. The implementation is as follows:
Large file upload
/**
 * Initiate a multipart upload.
 *
 * @param bucketName   bucket name
 * @param key          object key
 * @param storageClass storage class to use
 * @return the uploadId assigned by COS
 */
private String initiateMultipartUpload(String bucketName, String key, String storageClass) {
    InitiateMultipartUploadRequest request = new InitiateMultipartUploadRequest(bucketName, key);
    // Storage class: Standard, Standard_IA (infrequent access) or ARCHIVE; defaults to Standard
    if (StringUtils.isBlank(storageClass)) {
        request.setStorageClass(StorageClass.Standard);
    } else if (StorageClass.valueOf(storageClass) == StorageClass.Standard) {
        request.setStorageClass(StorageClass.Standard);
    } else {
        request.setStorageClass(StorageClass.Standard_IA);
    }
    String uploadId = null;
    try {
        InitiateMultipartUploadResult initResult = cosClient.initiateMultipartUpload(request);
        // The uploadId identifies this multipart upload in every later part/list/complete call
        uploadId = initResult.getUploadId();
    } catch (CosServiceException e) {
        logger.error("Failed to initiate multipart upload:", e);
        throw new CdpBaseBusinessException(CdpFileServerError.FILE_UPLOAD_FAILED);
    }
    return uploadId;
}
public String uploadPart(String path, File file) {
    String key = path + "/" + file.getName();
    // Initiate the multipart upload and get an uploadId (storage class defaults to Standard)
    String uploadId = initiateMultipartUpload(bucketName, key, null);
    Long batch = null;
    try {
        // Total file size
        long totalSize = file.length();
        // Part size: 1 MB
        int batchSize = 1024 * 1024;
        // Number of parts (round up)
        batch = totalSize / batchSize + (totalSize % batchSize > 0 ? 1 : 0);
        // Split the file into part files on disk; splitBySize returns once every part is written
        List<String> parts = new FileUtil().splitBySize(file.getPath(), batchSize);
        for (int i = 0; i < batch; i++) {
            System.out.println("Total " + batch + " parts, uploading part " + (i + 1));
            // The last part is usually smaller, so recompute its size
            long partSize = batchSize;
            if (i == batch - 1) {
                partSize = totalSize - (long) i * batchSize;
            }
            // Upload this part; flag the final one as the last part
            batchUpload(uploadId, parts.get(i), partSize, i + 1, key, i == batch - 1);
        }
    } catch (IOException | InterruptedException e) {
        e.printStackTrace();
    }
    cosClient.shutdown();
    JSONObject jsonObject = new JSONObject();
    jsonObject.put("uploadId", uploadId);
    jsonObject.put("key", key);
    jsonObject.put("pieceSum", batch);
    return jsonObject.toString();
}
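For context, this is roughly how the two methods fit together from a caller's point of view: upload once, keep the returned uploadId and key somewhere durable, and hand them back to continueUpload (shown further down) if the upload was interrupted. The class name FileServer below is made up for illustration:
// Hypothetical caller (FileServer is a placeholder for the class holding the methods above).
// Persist uploadId/key somewhere durable, e.g. a database, so a later run can resume.
FileServer server = new FileServer();
File bigFile = new File("/data/big.iso");
String uploadInfo = server.uploadPart("backup/2024", bigFile);
// uploadInfo looks like {"uploadId":"...","key":"backup/2024/big.iso","pieceSum":2048}
// If the process died halfway through, resume later with the same file, uploadId and key:
// server.continueUpload(bigFile, uploadId, key);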
/**
 * Split a file into part files.
 *
 * @param fileName full path of the file to split
 * @param byteSize size of each part in bytes
 * @return the part file names, in order
 * @throws IOException
 * @throws InterruptedException
 */
public List<String> splitBySize(String fileName, int byteSize)
        throws IOException, InterruptedException {
    List<String> parts = new ArrayList<>();
    File file = new File(fileName);
    int count = (int) Math.ceil(file.length() / (double) byteSize);
    int countLen = (count + "").length();
    ThreadPoolExecutor threadPool = new ThreadPoolExecutor(count,
            count * 3, 1, TimeUnit.SECONDS,
            new ArrayBlockingQueue<Runnable>(count * 2));
    for (int i = 0; i < count; i++) {
        // Part files are named <file>.<zero-padded index>.part
        String partFileName = file.getName() + "."
                + leftPad((i + 1) + "", countLen, '0') + ".part";
        threadPool.execute(new SplitRunnable(byteSize, (long) i * byteSize,
                partFileName, file));
        parts.add(partFileName);
    }
    // Wait for every split task to finish so the part files exist before uploading starts
    threadPool.shutdown();
    threadPool.awaitTermination(1, TimeUnit.HOURS);
    return parts;
}
/**
 * Runnable that writes one part of the original file into its own part file.
 *
 * @author [email protected]
 */
private class SplitRunnable implements Runnable {
    int byteSize;
    String partFileName;
    File originFile;
    long startPos;

    public SplitRunnable(int byteSize, long startPos, String partFileName,
                         File originFile) {
        this.startPos = startPos;
        this.byteSize = byteSize;
        this.partFileName = partFileName;
        this.originFile = originFile;
    }

    @Override
    public void run() {
        // try-with-resources so both the source file and the part file are closed even on failure
        try (RandomAccessFile rFile = new RandomAccessFile(originFile, "r");
             OutputStream os = new FileOutputStream(partFileName)) {
            byte[] b = new byte[byteSize];
            rFile.seek(startPos); // move to the beginning of this part
            int s = rFile.read(b); // the last part may be shorter than byteSize
            if (s > 0) {
                os.write(b, 0, s);
            }
            os.flush();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
/**
 * Upload a single part.
 */
private void batchUpload(String uploadId, String path, Long partSize,
                         Integer partNumber, String key, Boolean isLastPart) {
    try {
        UploadPartRequest uploadPartRequest = new UploadPartRequest();
        uploadPartRequest.setBucketName(bucketName);
        uploadPartRequest.setKey(key);
        uploadPartRequest.setUploadId(uploadId);
        // The input stream must contain only this part's bytes, i.e. the part file
        File file = new File(path);
        uploadPartRequest.setInputStream(new FileInputStream(file));
        // Length of this part
        uploadPartRequest.setPartSize(partSize);
        // Part numbers start at 1
        uploadPartRequest.setPartNumber(partNumber);
        uploadPartRequest.setLastPart(isLastPart);
        UploadPartResult uploadPartResult = cosClient.uploadPart(uploadPartRequest);
        PartETag partETag = uploadPartResult.getPartETag();
        // The local part file is no longer needed once COS has accepted it
        file.delete();
        System.out.println(partETag.getPartNumber());
        System.out.println(partETag.getETag());
    } catch (CosServiceException e) {
        e.printStackTrace();
    } catch (CosClientException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
The key point is the file splitting. When I first used the part upload API, I fed the input stream of the whole file into every UploadPartRequest, and the file reassembled after a resumed upload could not be opened. After asking a Tencent Cloud engineer I learned that each request must be given the input stream of that particular part, so I went and found a utility class that splits the file.
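To make the pitfall concrete, the only thing that changes is which stream goes into the request; the calls below are the same SDK calls used in batchUpload, and the variable names (wholeFile, partFilePath) are just illustrative:
// Wrong: every part reads the whole file from byte 0, so the object COS assembles is corrupted
// uploadPartRequest.setInputStream(new FileInputStream(wholeFile));

// Right: each part gets a stream over only its own bytes, here a part file written by splitBySize
File partFile = new File(partFilePath);                    // e.g. "big.iso.001.part"
uploadPartRequest.setInputStream(new FileInputStream(partFile));
uploadPartRequest.setPartSize(partFile.length());          // this part's length, not the whole file's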
After that, resumable upload is straightforward: split the file in the same way, ask COS which parts it already has, and upload only the missing ones.
/**
 * Resume an interrupted multipart upload.
 */
public String continueUpload(File file, String uploadId, String key) {
    initCOSClient();
    Long batch = null;
    try {
        // Total file size
        long totalSize = file.length();
        // Part size: 1 MB, the same size used for the original upload
        int batchSize = 1024 * 1024;
        // Number of parts (round up)
        batch = totalSize / batchSize + (totalSize % batchSize > 0 ? 1 : 0);
        // Split the file into part files again, exactly as in uploadPart
        List<String> parts = new FileUtil().splitBySize(file.getPath(), batchSize);
        // Ask COS which parts of this uploadId have already been uploaded
        ListPartsRequest listPartsRequest = new ListPartsRequest(bucketName, key, uploadId);
        PartListing partListing = cosClient.listParts(listPartsRequest);
        // Collect the part numbers that are already in COS
        List<Integer> completePiece = new ArrayList<>();
        for (PartSummary partSummary : partListing.getParts()) {
            completePiece.add(partSummary.getPartNumber());
        }
        // Walk through all parts and upload only the missing ones
        for (int i = 1; i <= batch; i++) {
            if (!completePiece.contains(i)) {
                System.out.println("Total " + batch + " parts, uploading part " + i);
                // The last part is usually smaller, so recompute its size
                long partSize = batchSize;
                if (i == batch) {
                    partSize = totalSize - (long) (i - 1) * batchSize;
                    batchUpload(uploadId, parts.get(i - 1), partSize, i, key, true);
                } else {
                    batchUpload(uploadId, parts.get(i - 1), partSize, i, key, false);
                }
            }
        }
    } catch (IOException | InterruptedException e) {
        e.printStackTrace();
    }
    cosClient.shutdown();
    JSONObject jsonObject = new JSONObject();
    jsonObject.put("uploadId", uploadId);
    jsonObject.put("key", key);
    jsonObject.put("pieceSum", batch);
    return jsonObject.toString();
}
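One thing the code above stops short of: after every part has been uploaded you still have to complete the multipart upload, otherwise the object never shows up in the bucket and the parts just keep occupying space. The sketch below is not part of the code I tested; it follows the same SDK classes used above (ListParts to collect part numbers and ETags, then CompleteMultipartUpload), so treat it as a starting point rather than a drop-in method.
/**
 * Sketch: complete the multipart upload once all parts have been uploaded.
 */
private String completeUpload(String uploadId, String key) {
    // Collect the number and ETag of every uploaded part, paging through ListParts
    List<PartETag> partETags = new ArrayList<>();
    ListPartsRequest listPartsRequest = new ListPartsRequest(bucketName, key, uploadId);
    PartListing partListing;
    do {
        partListing = cosClient.listParts(listPartsRequest);
        for (PartSummary partSummary : partListing.getParts()) {
            partETags.add(new PartETag(partSummary.getPartNumber(), partSummary.getETag()));
        }
        listPartsRequest.setPartNumberMarker(partListing.getNextPartNumberMarker());
    } while (partListing.isTruncated());
    // Ask COS to assemble the parts into the final object
    CompleteMultipartUploadRequest completeRequest =
            new CompleteMultipartUploadRequest(bucketName, key, uploadId, partETags);
    CompleteMultipartUploadResult completeResult = cosClient.completeMultipartUpload(completeRequest);
    return completeResult.getETag();
}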
To sum up: Tencent Cloud's API documentation really isn't great; by comparison Alibaba Cloud's OSS documentation is much clearer.
Multipart upload had me stuck for three days, but it is finally solved. This is my first article, so please bear with me if it isn't well written. (The code above is the code I used while testing; some of it isn't written to standard, so adjust it before using it in your own project.)