Implementing large-file chunked upload and resumable upload with Spring Boot 3 + Vue 3

Chunked upload and resumable upload for large files

Chunked upload splits a large file into small pieces and uploads each piece separately. The main motivations and advantages are:

  1. Network stability: a large file takes a long time to upload, and the connection may be unstable or drop. With chunks, if one piece fails, only that piece needs to be re-sent instead of the whole file.
  2. Resumable upload: because of network instability, an upload may be interrupted and fail. With chunked upload, the pieces already received can be recorded; when the upload resumes, those pieces are skipped and only the remaining ones are sent, which is exactly resumable upload.
  3. Fault tolerance: during a large upload, some chunks may fail for various reasons (network interruption, server failure, and so on). Chunking preserves the chunks that did succeed, limiting the impact of a partial failure and making the upload more reliable.
  4. Server resource management: a large upload can consume a lot of server memory and bandwidth. Chunking lets the server spread its resources across upload tasks instead of letting a single task monopolize them, improving scalability and resource utilization.

As outlined above, two concepts are involved: chunked upload and resumable upload. Each is introduced below.

The end result is demonstrated in the figure below:

Chunked upload

Chunking works as shown in the figure:

(Figure 1: chunking a large file into slices)

In short, the steps are:

  1. First compute the selected file's unique identifier and ask the server whether this file has already been uploaded; if so, return the stored file's address and finish (this is how "instant upload" works).
  2. Split the file into small chunks according to a fixed rule;
  3. Upload each chunk in a loop (sending the chunk data, chunk index, file identifier, chunk size, etc.), receiving back the index of the uploaded chunk;
  4. When all chunks are sent, the server checks whether the data is complete; if so, it merges the chunks into the original file and returns the file's access address.

These steps mitigate the drawbacks of uploading a large file in a single request: slow responses, heavy bandwidth pressure, and the large amount of server-side resources needed to receive it.
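Step 2 above, splitting by a fixed rule, reduces to simple index arithmetic. A minimal sketch of the boundary calculation (the 10 MB chunk size matches the upload code shown later; the function names are illustrative):

```typescript
// Compute how many chunks a file needs and the [start, end) byte range of
// chunk `index` (1-based), mirroring the file.slice(start, end) upload loop.
const CHUNK_SIZE = 1024 * 1024 * 10; // 10 MB

function chunkCount(fileSize: number, chunkSize: number = CHUNK_SIZE): number {
  return Math.ceil(fileSize / chunkSize);
}

function chunkRange(index: number, fileSize: number, chunkSize: number = CHUNK_SIZE) {
  const start = (index - 1) * chunkSize;
  // The last chunk may be shorter than chunkSize
  const end = Math.min(index * chunkSize, fileSize);
  return { start, end };
}

// A 25 MB file splits into three chunks: 10 MB, 10 MB and 5 MB.
const size = 25 * 1024 * 1024;
console.log(chunkCount(size)); // 3
console.log(chunkRange(3, size)); // { start: 20971520, end: 26214400 }
```

`File.prototype.slice` accepts exactly this `[start, end)` pair, so each chunk can be cut lazily right before it is sent.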

Resumable upload

When uploading a large file, a network problem aborts the whole request, and the next attempt starts over from the beginning; in the worst case this repeats forever and the file never finishes uploading.

Resumable upload depends on the chunked upload described above. When the network fails, only the chunks not yet sent are affected; when the connection recovers, the upload continues from the index where it stopped, saving time and improving throughput.
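Resuming therefore reduces to set subtraction: given the chunk indexes the server reports as already stored, only the missing ones are re-sent. A small sketch (function name is illustrative):

```typescript
// Given the total chunk count and the 1-based indexes the server already
// has, return the indexes that still need uploading.
function remainingChunks(chunkTotal: number, uploaded: number[]): number[] {
  const have = new Set(uploaded);
  const missing: number[] = [];
  for (let i = 1; i <= chunkTotal; i++) {
    if (!have.has(i)) missing.push(i);
  }
  return missing;
}

// The server stored chunks 1, 2 and 5 before the connection dropped:
console.log(remainingChunks(6, [1, 2, 5])); // [3, 4, 6]
```

This is exactly what the `chunkList` returned by the check endpoint below is used for.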

Implementation

Frontend core code:

Prerequisites:

We need a unique identifier for the file; MD5 is used here:

yarn add spark-md5
yarn add @types/spark-md5

Utility functions

import { Ref, ref } from "vue";
import { checkFileApi, uploadFileApi } from "../api/upload.ts";
import { ResultData } from "./request.ts";
import { CheckFileRes } from "../api/types.ts";
import { ElMessage } from "element-plus";
import SparkMD5 from "spark-md5";

export interface UploadFile {
    name: string,
    size: number,
    parsePercentage: Ref<number>,
    uploadPercentage: Ref<number>,
    uploadSpeed: string,
    chunkList?: number[],
    file: File,
    uploadingStop: boolean,
    md5?: string,
    needUpload?: boolean,
    fileName?: string
}


export const initChunk = () => {

    const currentUploadFile = ref<UploadFile>()


    const checkFile = async (file: File, back: Function) => {
        const uploadFile: UploadFile = {
            name: file.name,
            size: file.size,
            parsePercentage: ref<number>(0),
            uploadPercentage: ref<number>(0),
            uploadingStop: false,
            uploadSpeed: '0 M/s',
            chunkList: [],
            file: file,
        }
        back(uploadFile)
        currentUploadFile.value = uploadFile;
        const md5: string = await computeMd5(file, uploadFile)
        if (!md5) {
            console.log("MD5 computation failed")
            return
        }
        uploadFile.md5 = md5;
        const res: ResultData<CheckFileRes> = await checkFileApi(md5);
        if (!res.data?.uploaded) {
            uploadFile.chunkList = res.data?.chunkList;
            uploadFile.needUpload = true;
        } else {
            uploadFile.needUpload = false;
            uploadFile.uploadPercentage.value = 100;
            uploadFile.fileName = res.data.fileName
            console.log("File already uploaded (instant upload)");
            ElMessage({
                showClose: true,
                message: "File already uploaded (instant upload)",
                type: "warning",
            });
        }
    }

    const uploadFile = async (file: File) => {
        const uploadParam = currentUploadFile.value;
        if (!uploadParam) {
            throw Error('Call [checkFile] first')
        }
        if (uploadParam.needUpload) {
            // Upload the file chunk by chunk, starting at index 1
            await uploadChunk(file, 1, uploadParam);
            clear()
        }
    }

    const changeUploadingStop = async (uploadFile: UploadFile) => {
        uploadFile.uploadingStop = !uploadFile.uploadingStop;
        // When resuming, restart from index 1; already-uploaded chunks are skipped
        if (!uploadFile.uploadingStop) {
            await uploadChunk(uploadFile.file, 1, uploadFile);
        }
    }

    const clear = () => {
        currentUploadFile.value = undefined
    }
    return {checkFile, uploadFile, changeUploadingStop};
}

const uploadChunk = async (file: File, index: number, uploadFile: UploadFile) => {
    const chunkSize = 1024 * 1024 * 10; // 10 MB per chunk
    const chunkTotal = Math.ceil(file.size / chunkSize);
    if (index <= chunkTotal) {
        const startTime = new Date().valueOf();

        // Skip chunks the server already has (resume / deduplication)
        const exists = uploadFile?.chunkList?.includes(index);
        if (!exists) {
            // Stop recursing while the upload is paused
            if (!uploadFile.uploadingStop) {
                // Upload one chunk, then update the progress bar and speed
                const form = new FormData();
                const start = (index - 1) * chunkSize;
                const end = Math.min(index * chunkSize, file.size);
                const chunk = file.slice(start, end);

                form.append("chunk", chunk);
                form.append("index", index + "");
                form.append("chunkTotal", chunkTotal + "");
                form.append("chunkSize", chunkSize + "");
                form.append("md5", uploadFile.md5!);
                form.append("fileSize", file.size + "");
                form.append("fileName", file.name);

                const res = await uploadFileApi(form)
                if (res.code === 200) {
                    uploadFile.fileName = res.data as string
                }
                const endTime = new Date().valueOf();
                const timeDif = (endTime - startTime) / 1000;
                // Use the actual chunk size; the last chunk may be smaller than 10 MB
                uploadFile.uploadSpeed = ((end - start) / 1024 / 1024 / timeDif).toFixed(1) + " M/s";

                uploadFile.chunkList?.push(index);
                uploadFile.uploadPercentage.value = Math.floor((uploadFile.chunkList!.length / chunkTotal) * 100);
                await uploadChunk(file, index + 1, uploadFile);
            }
        } else {
            uploadFile.uploadPercentage.value = Math.floor((uploadFile.chunkList!.length / chunkTotal) * 100);
            await uploadChunk(file, index + 1, uploadFile);
        }
    }
}

function computeMd5(file: File, uploadFile: UploadFile): Promise<string> {
    return new Promise((resolve, reject) => {
        // Read the file in slices and feed each slice to the MD5 hasher,
        // so the whole file never has to sit in memory at once
        const chunkTotal = 100; // number of slices to read
        const chunkSize = Math.ceil(file.size / chunkTotal);
        const fileReader = new FileReader();
        const md5 = new SparkMD5();
        let index = 0;
        const loadFile = (uploadFile: UploadFile) => {
            uploadFile.parsePercentage.value = Math.floor((index / file.size) * 100);
            const slice: Blob = file.slice(index, index + chunkSize);
            // readAsBinaryString is deprecated but matches SparkMD5's appendBinary API
            fileReader.readAsBinaryString(slice);
        };
        loadFile(uploadFile);
        fileReader.onerror = () => reject(fileReader.error);
        fileReader.onload = (e) => {
            md5.appendBinary(e.target?.result as string);
            if (index + chunkSize < file.size) {
                index += chunkSize;
                loadFile(uploadFile);
            } else {
                uploadFile.parsePercentage.value = 100;
                resolve(md5.end());
            }
        };
    });
}
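computeMd5 relies on the fact that feeding a hash slice by slice yields the same digest as hashing everything at once. The same property demonstrated with Node's built-in crypto module (used here for illustration only; spark-md5 exposes the equivalent append/end pair):

```typescript
import { createHash } from "node:crypto";

// Hash a buffer in fixed-size slices, appending each slice incrementally.
function md5InSlices(data: Buffer, sliceSize: number): string {
  const hash = createHash("md5");
  for (let offset = 0; offset < data.length; offset += sliceSize) {
    hash.update(data.subarray(offset, offset + sliceSize));
  }
  return hash.digest("hex");
}

const data = Buffer.from("a".repeat(1000));
const whole = createHash("md5").update(data).digest("hex");
// Slicing does not change the digest:
console.log(md5InSlices(data, 64) === whole); // true
```

This is also why the slice size used for hashing (file.size / 100 above) is free to differ from the 10 MB upload chunk size.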

Vue file

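A minimal component wiring the utilities above together might look like the following sketch (the import path, the plain file input, and the element-plus markup are assumptions, not the original file):

```vue
<script setup lang="ts">
import { ref } from "vue";
import { initChunk, UploadFile } from "../utils/chunk"; // hypothetical path

const { checkFile, uploadFile, changeUploadingStop } = initChunk();
const current = ref<UploadFile>();

const onChange = async (e: Event) => {
  const file = (e.target as HTMLInputElement).files?.[0];
  if (!file) return;
  // checkFile hands back the tracking object via the callback,
  // then uploadFile sends the chunks that are still missing
  await checkFile(file, (f: UploadFile) => (current.value = f));
  await uploadFile(file);
};
</script>

<template>
  <input type="file" @change="onChange" />
  <div v-if="current">
    <el-progress :percentage="current.uploadPercentage.value" />
    <span>{{ current.uploadSpeed }}</span>
    <el-button @click="changeUploadingStop(current)">Pause / Resume</el-button>
  </div>
</template>
```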

Backend core code:

Controller layer

/**
* File upload controller
*/
@Slf4j
@RestController
@RequestMapping("/upload")
@RequiredArgsConstructor
public class UploadFileController {

   private final WFileService wFileService;


   @GetMapping("/check")
   public AjaxResult<CheckVo> checkFile(@RequestParam("md5") String md5) {
       log.info("MD5: {}", md5);
       return wFileService.checkFile(md5);
   }

   /**
    * When binding form-data, @ModelAttribute must be present; otherwise Spring fails with
    * "No primary or single unique constructor found for class"
    *
    * @param dto request parameters
    * @return result
    */
   @PostMapping("/chunk")
   public AjaxResult<Object> uploadChunk(@ModelAttribute UploadChunkDto dto) {
       return wFileService.uploadChunk(dto);
   }

}

Service layer

/**
 * Upload file service layer
 *
 * @author wdhcr
 * @date 2023-11-22
 */
@Slf4j
@Service
@RequiredArgsConstructor
public class WFileServiceImpl extends ServiceImpl<WFileMapper, WFile> implements WFileService {

    private final WFileChunkService wFileChunkService;

    private final WdhcrProperties wdhcrProperties;

    @Override
    public AjaxResult<CheckVo> checkFile(String md5) {
        CheckVo checkVo = new CheckVo();

        // First check whether the complete file already exists
        WFile wFile = getOne(new LambdaQueryWrapper<WFile>()
                .eq(WFile::getMd5, md5)
                .last("limit 1"));
        if (!ObjectUtils.isEmpty(wFile)) {
            // Already present: instant upload
            checkVo.setUploaded(true);
            checkVo.setFileName(wFile.getFileName());
            return AjaxResult.success(checkVo);
        }
        List<WFileChunk> chunks = wFileChunkService.list(new LambdaQueryWrapper<WFileChunk>()
                .eq(WFileChunk::getMd5, md5));
        List<Integer> chunkIndexes = Optional.ofNullable(chunks)
                .orElseGet(ArrayList::new)
                .stream().map(WFileChunk::getChunkIndex)
                .toList();
        checkVo.setChunkList(chunkIndexes);
        return AjaxResult.success(checkVo);
    }

    @Override
    public AjaxResult<Object> uploadChunk(UploadChunkDto dto) {
        String fileName = dto.getFileName();
        MultipartFile chunk = dto.getChunk();
        Integer index = dto.getIndex();
        Long chunkSize = dto.getChunkSize();
        String md5 = dto.getMd5();
        Integer chunkTotal = dto.getChunkTotal();
        Long fileSize = dto.getFileSize();

        String[] splits = fileName.split("\\.");
        String type = splits[splits.length - 1];
        String filePath = wdhcrProperties.getFilepath();
        String resultFileName = filePath + md5 + "." + type;

        wFileChunkService.saveChunk(chunk, md5, index, chunkSize, resultFileName);
        log.info("Uploaded chunk: index {}, total {}, file name {}, stored as {}", index, chunkTotal, fileName, resultFileName);
        // The frontend sends chunks sequentially, so receiving the last index
        // means every chunk has been written and the file record can be saved
        if (Objects.equals(index, chunkTotal)) {
            WFile wFile = new WFile();
            wFile.setName(fileName);
            wFile.setMd5(md5);
            wFile.setFileName(resultFileName);
            wFile.setSize(fileSize);
            save(wFile);
            wFileChunkService.remove(new LambdaQueryWrapper<WFileChunk>()
                    .eq(WFileChunk::getMd5, md5));
            return AjaxResult.success("File uploaded successfully", resultFileName);
        } else {
            return new AjaxResult<>(201, "Chunk uploaded successfully", index);
        }
    }
}
/**
 * File chunk service layer
 *
 * @author wdhcr
 * @date 2023-11-22
 */
@Slf4j
@Service
public class WFileChunkServiceImpl extends ServiceImpl<WFileChunkMapper, WFileChunk> implements WFileChunkService {


    @Override
    public boolean saveChunk(MultipartFile chunk, String md5, Integer index, Long chunkSize, String resultFileName) {
        try (RandomAccessFile randomAccessFile = new RandomAccessFile(resultFileName, "rw")) {
            // Offset of this chunk within the final file
            long offset = chunkSize * (index - 1);
            // Seek to the chunk's offset
            randomAccessFile.seek(offset);
            // Write the chunk bytes
            randomAccessFile.write(chunk.getBytes());

            WFileChunk wFileChunk = new WFileChunk();
            wFileChunk.setMd5(md5);
            wFileChunk.setChunkIndex(index);
            return save(wFileChunk);
        } catch (IOException e) {
            log.error("Failed to write chunk {} of {}", index, resultFileName, e);
            return false;
        }
    }
}
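The seek-and-write trick above is what lets chunks land in any order: each one is written at byte offset chunkSize * (index - 1) of the target file. The same pattern shown with Node's positional fs writes (a sketch for illustration; file names are made up, and a single descriptor is kept open here, whereas the service reopens the file per request):

```typescript
import { openSync, writeSync, closeSync, readFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Write one chunk at its offset in the target file, like
// RandomAccessFile.seek(offset) followed by write(bytes).
function writeChunk(fd: number, index: number, chunkSize: number, chunk: Buffer): void {
  // The last argument of writeSync is the byte position in the file
  writeSync(fd, chunk, 0, chunk.length, chunkSize * (index - 1));
}

// Chunks written out of order still assemble into the original content.
const target = join(tmpdir(), "assembled-demo.bin");
const fd = openSync(target, "w");
writeChunk(fd, 2, 4, Buffer.from("o wo"));
writeChunk(fd, 3, 4, Buffer.from("rld"));
writeChunk(fd, 1, 4, Buffer.from("hell"));
closeSync(fd);
console.log(readFileSync(target, "utf8")); // "hello world"
```

Because each chunk is self-positioning, no temporary per-chunk files or final merge step are needed; the "merge" is just the bookkeeping of the database records.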
That covers the core code for chunked upload and resumable upload.
