Cursor
is an AI coding tool for developers, powered by GPT-3.5 or GPT-4 under the hood. It does not require a VPN and can be used directly from mainland China.
Key points: it is free, and no account login is required.
Supported platforms: Windows, Linux, macOS
Supported languages: Java, PHP, HTML, JS, Python, Vue, Go, CSS, C, and more
First, the basics. Usage comes down to two shortcuts:
Ctrl + L: after you enter a question, the answer is not written into the file; it works like a chat assistant, answering with context so you can keep the conversation going.
Ctrl + K: after you enter a question, the generated answer is written directly into the document.
- With nothing (no code) selected, it starts writing at the cursor position in the file.
- With content (code) selected, it edits within the selected range.
Manually create a file main.html:
# Shortcut
Ctrl + K
Prompt: write a small game in HTML
# It will generate a random little game for you, since no game name or type was specified.
Result:
<!DOCTYPE html>
<html>
<head>
<title>Mini Game</title>
</head>
<body>
<h1>Welcome to the mini game</h1>
<p>Rules: click the button below and see how many points you can score!</p>
<button onclick="addScore()">Click to score</button>
<p>Score: <span id="score">0</span></p>
<script>
var score = 0;
function addScore() {
score++;
document.getElementById("score").innerHTML = score;
}
</script>
</body>
</html>
Ctrl + L: after you enter a question, the generated answer is shown in the panel on the right, and you can continue asking follow-up questions with context. Unlike Ctrl + K, it does not write the answer into the document.
- With nothing (no code) selected, it answers from the whole file plus your question and shows the answer in the right-hand panel.
- With content (code) selected, it answers from the selection plus your question and shows the answer in the right-hand panel.
# Shortcut
Ctrl + L
Prompt: write a small game in HTML
# It will generate a random little game in the panel, since no game name or type was specified.
Supported languages: Java, PHP, HTML, JS, Python, Vue, Go, CSS, C, and more
Features I tried:
Using Java: first create a file main.java (the file name is up to you)
# Shortcut
Ctrl + K
Prompt: write a chunked file upload controller
Result:
# Shortcut
Ctrl + A to select all
# Shortcut
Ctrl + K
Prompt: add Swagger descriptions
That wraps up the basic usage of Ctrl + K. You may have noticed a problem: the generated methods have no concrete implementation; only the class and method signatures are defined. So how do we get it to fill in the method bodies? Let's walk through it:
Steps:
Fill in a method's implementation
Ctrl + K
Prompt: validate the file first, then store it on the server and return the storage path
Demo:
The upload endpoint is done. The chunk-upload and upload-complete (merge) endpoints are related, so first select both methods, then press Ctrl + K and enter: fill in the implementation. Demo below:
# Shortcut
Manually select the chunk-upload and upload-complete methods
# Shortcut
Ctrl + K
Prompt: fill in the implementation
Demo:
# Manually select a method in the class
# Shortcut
Ctrl + L
Prompt: generate a complete HTML example
# Shortcut
Ctrl + L
Prompt: generate a complete HTML example based on the selected method
Continue
No demo screenshots here; the results are much the same as above.
Using Java: first create a file main.java (the file name is up to you)
# Create the interface and its methods
Ctrl + K
Prompt: write a generic file-management interface that supports chunked upload, merge, upload, download, and listing all files
# Create an implementation class for the interface
Ctrl + K
Prompt: implementation class / MinIO implementation class / MongoDB implementation class
# Fill in the implementation class; select the implementation class first
Ctrl + K
Prompt: fill in the implementation
Demo:
Not shown here; see the code in the final results below.
All of the following code was generated by Cursor, though the debugging process was quite long.
Note: all code was generated by the AI
import org.apache.commons.io.IOUtils;
import org.bson.types.ObjectId;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import com.mongodb.client.gridfs.GridFSBucket;
import com.mongodb.client.gridfs.GridFSBuckets;
import com.mongodb.client.gridfs.model.GridFSDownloadByNameOptions;
import com.mongodb.client.gridfs.GridFSDownloadStream;
import com.mongodb.client.gridfs.model.GridFSFile;
import com.mongodb.client.gridfs.model.GridFSUploadOptions;
import com.mongodb.client.gridfs.GridFSUploadStream;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;
import org.bson.Document;
import com.mongodb.MongoClientSettings;
import com.mongodb.MongoCredential;
import com.mongodb.ServerAddress;
import com.mongodb.client.MongoDatabase;
import java.util.Arrays;
import io.minio.MinioClient;
import io.minio.Result;
import io.minio.errors.MinioException;
import io.minio.messages.Item;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;
/**
 * File manager interface
 */
public interface FileManager {
/**
 * Upload a file chunk
 * @param inputStream file input stream
 * @param fileName file name
 * @param contentType content type
 * @param chunkIndex chunk index
 * @param totalChunks total number of chunks
 * @throws Exception on error
 */
void uploadFileChunk(InputStream inputStream, String fileName, String contentType, int chunkIndex, int totalChunks) throws Exception;
/**
 * Merge file chunks
 * @param fileName file name
 * @param chunkSize chunk size
 * @param totalChunks total number of chunks
 * @throws Exception on error
 */
void mergeFile(String fileName, int chunkSize, int totalChunks) throws Exception;
/**
 * Upload a file
 * @param inputStream file input stream
 * @param fileName file name
 * @param contentType content type
 * @throws Exception on error
 */
void uploadFile(InputStream inputStream, String fileName, String contentType) throws Exception;
/**
 * Download a file
 * @param fileName file name
 * @return file input stream
 * @throws Exception on error
 */
InputStream downloadFile(String fileName) throws Exception;
/**
 * Delete a file
 * @param fileName file name
 * @throws Exception on error
 */
void deleteFile(String fileName) throws Exception;
/**
 * List all files
 * @return list of files
 * @throws Exception on error
 */
List<Document> listFiles() throws Exception;
}
public class FileManagerImpl implements FileManager {
private static final String FILE_DIRECTORY = "C:/fileStorage/";
@Override
public void uploadFileChunk(InputStream inputStream, String fileName, String contentType, int chunkIndex, int totalChunks) throws Exception {
File directory = new File(FILE_DIRECTORY);
if (!directory.exists()) {
directory.mkdirs();
}
try (FileOutputStream outputStream = new FileOutputStream(new File(directory, fileName + "_" + chunkIndex))) {
IOUtils.copy(inputStream, outputStream);
} catch (Exception e) {
throw new Exception("Failed to upload file chunk", e);
}
}
@Override
public void mergeFile(String fileName, int chunkSize, int totalChunks) throws Exception {
File directory = new File(FILE_DIRECTORY);
if (!directory.exists()) {
directory.mkdirs();
}
try (FileOutputStream outputStream = new FileOutputStream(new File(directory, fileName))) {
for (int i = 0; i < totalChunks; i++) {
try (FileInputStream inputStream = new FileInputStream(new File(directory, fileName + "_" + i))) {
IOUtils.copy(inputStream, outputStream);
}
}
} catch (Exception e) {
throw new Exception("Failed to merge file", e);
}
}
@Override
public void uploadFile(InputStream inputStream, String fileName, String contentType) throws Exception {
File directory = new File(FILE_DIRECTORY);
if (!directory.exists()) {
directory.mkdirs();
}
try (FileOutputStream outputStream = new FileOutputStream(new File(directory, fileName))) {
IOUtils.copy(inputStream, outputStream);
} catch (Exception e) {
throw new Exception("Failed to upload file", e);
}
}
@Override
public InputStream downloadFile(String fileName) throws Exception {
File file = new File(FILE_DIRECTORY + fileName);
if (!file.exists()) {
throw new Exception("File not found");
}
return new FileInputStream(file);
}
@Override
public void deleteFile(String fileName) throws Exception {
File file = new File(FILE_DIRECTORY + fileName);
if (!file.exists()) {
throw new Exception("File not found");
}
file.delete();
}
@Override
public List<Document> listFiles() {
File directory = new File(FILE_DIRECTORY);
File[] files = directory.listFiles();
List<Document> documents = new ArrayList<>();
if (files != null) {
for (File file : files) {
Document document = new Document();
document.put("name", file.getName());
document.put("size", file.length());
document.put("storageType", "local");
documents.add(document);
}
}
return documents;
}
}
public class MinioFileManagerImpl implements FileManager {
private static final String BUCKET_NAME = "file-storage";
private static final String ENDPOINT = "http://localhost:9000";
private static final String ACCESS_KEY = "minioadmin";
private static final String SECRET_KEY = "minioadmin";
private final MinioClient minioClient;
public MinioFileManagerImpl() throws Exception {
minioClient = new MinioClient(ENDPOINT, ACCESS_KEY, SECRET_KEY);
if (!minioClient.bucketExists(BUCKET_NAME)) {
minioClient.makeBucket(BUCKET_NAME);
}
}
@Override
public void uploadFileChunk(InputStream inputStream, String fileName, String contentType, int chunkIndex, int totalChunks) throws Exception {
String objectName = fileName + "_" + chunkIndex;
minioClient.putObject(BUCKET_NAME, objectName, inputStream, contentType);
}
@Override
public void mergeFile(String fileName, int chunkSize, int totalChunks) throws Exception {
List<String> objectNames = new ArrayList<>();
for (int i = 0; i < totalChunks; i++) {
objectNames.add(fileName + "_" + i);
}
String mergedObjectName = fileName;
minioClient.composeObject(BUCKET_NAME, objectNames, mergedObjectName);
for (String objectName : objectNames) {
minioClient.removeObject(BUCKET_NAME, objectName);
}
}
@Override
public void uploadFile(InputStream inputStream, String fileName, String contentType) throws Exception {
minioClient.putObject(BUCKET_NAME, fileName, inputStream, contentType);
}
@Override
public InputStream downloadFile(String fileName) throws Exception {
return minioClient.getObject(BUCKET_NAME, fileName);
}
@Override
public void deleteFile(String fileName) throws Exception {
minioClient.removeObject(BUCKET_NAME, fileName);
}
@Override
public List<Document> listFiles() throws Exception {
List<Document> documents = new ArrayList<>();
Iterable<Result<Item>> results = minioClient.listObjects(BUCKET_NAME);
for (Result<Item> result : results) {
Item item = result.get();
Document document = new Document();
document.put("name", item.objectName());
document.put("size", item.size());
document.put("storageType", "minio");
documents.add(document);
}
return documents;
}
}
public class MongoFileManagerImpl implements FileManager {
private static final String DATABASE_NAME = "fileStorage";
private static final String COLLECTION_NAME = "files";
private final MongoCollection<Document> collection;
// public MongoFileManagerImpl() {
// MongoClient mongoClient = MongoClients.create();
// MongoDatabase database = mongoClient.getDatabase(DATABASE_NAME);
// collection = database.getCollection(COLLECTION_NAME);
// }
public MongoFileManagerImpl() {
MongoClient mongoClient = MongoClients.create(
MongoClientSettings.builder()
.applyToClusterSettings(builder ->
builder.hosts(Arrays.asList(new ServerAddress("localhost", 27017))))
.credential(MongoCredential.createCredential("username", "fileStorage", "password".toCharArray()))
.build());
MongoDatabase database = mongoClient.getDatabase(DATABASE_NAME);
collection = database.getCollection(COLLECTION_NAME);
}
@Override
public void uploadFileChunk(InputStream inputStream, String fileName, String contentType, int chunkIndex, int totalChunks) throws Exception {
GridFSBucket gridFSBucket = GridFSBuckets.create(collection.getDatabase(), COLLECTION_NAME);
GridFSUploadOptions options = new GridFSUploadOptions()
.chunkSizeBytes(1024 * 1024)
.metadata(new Document("fileName", fileName));
try (GridFSUploadStream uploadStream = gridFSBucket.openUploadStream(fileName + "_" + chunkIndex, options)) {
IOUtils.copy(inputStream, uploadStream);
} catch (Exception e) {
throw new Exception("Failed to upload file chunk", e);
}
}
@Override
public void mergeFile(String fileName, int chunkSize, int totalChunks) throws Exception {
GridFSBucket gridFSBucket = GridFSBuckets.create(collection.getDatabase(), COLLECTION_NAME);
GridFSDownloadByNameOptions options = new GridFSDownloadByNameOptions().revision(0);
try (FileOutputStream outputStream = new FileOutputStream(new File(fileName))) {
for (int i = 0; i < totalChunks; i++) {
try (GridFSDownloadStream downloadStream = gridFSBucket.openDownloadStreamByName(fileName + "_" + i, options)) {
IOUtils.copy(downloadStream, outputStream);
}
}
} catch (Exception e) {
throw new Exception("Failed to merge file", e);
}
}
@Override
public void uploadFile(InputStream inputStream, String fileName, String contentType) throws Exception {
GridFSBucket gridFSBucket = GridFSBuckets.create(collection.getDatabase(), COLLECTION_NAME);
GridFSUploadOptions options = new GridFSUploadOptions()
.chunkSizeBytes(1024 * 1024)
.metadata(new Document("fileName", fileName));
try (GridFSUploadStream uploadStream = gridFSBucket.openUploadStream(fileName, options)) {
IOUtils.copy(inputStream, uploadStream);
} catch (Exception e) {
throw new Exception("Failed to upload file", e);
}
}
@Override
public InputStream downloadFile(String fileName) throws Exception {
GridFSBucket gridFSBucket = GridFSBuckets.create(collection.getDatabase(), COLLECTION_NAME);
GridFSDownloadByNameOptions options = new GridFSDownloadByNameOptions().revision(0);
GridFSDownloadStream downloadStream = gridFSBucket.openDownloadStreamByName(fileName, options);
if (downloadStream == null) {
throw new Exception("File not found");
}
return downloadStream;
}
@Override
public void deleteFile(String fileName) throws Exception {
GridFSBucket gridFSBucket = GridFSBuckets.create(collection.getDatabase(), COLLECTION_NAME);
gridFSBucket.delete(new ObjectId(fileName));
}
@Override
public List<Document> listFiles() {
List<Document> documents = new ArrayList<>();
GridFSBucket gridFSBucket = GridFSBuckets.create(collection.getDatabase(), COLLECTION_NAME);
try (MongoCursor<GridFSFile> cursor = gridFSBucket.find().iterator()) {
while (cursor.hasNext()) {
GridFSFile file = cursor.next();
Document document = new Document();
document.put("name", file.getFilename());
document.put("size", file.getLength());
document.put("storageType", "mongo");
documents.add(document);
}
}
return documents;
}
}
public class FtpFileManagerImpl implements FileManager {
private static final String SERVER_ADDRESS = "localhost";
private static final int SERVER_PORT = 21;
private static final String USERNAME = "username";
private static final String PASSWORD = "password";
private static final String REMOTE_DIRECTORY = "/fileStorage/";
private final FTPClient ftpClient;
public FtpFileManagerImpl() throws Exception {
ftpClient = new FTPClient();
ftpClient.connect(SERVER_ADDRESS, SERVER_PORT);
ftpClient.login(USERNAME, PASSWORD);
ftpClient.enterLocalPassiveMode();
ftpClient.setFileType(FTP.BINARY_FILE_TYPE);
ftpClient.changeWorkingDirectory(REMOTE_DIRECTORY);
}
@Override
public void uploadFileChunk(InputStream inputStream, String fileName, String contentType, int chunkIndex, int totalChunks) throws Exception {
String remoteFileName = fileName + "_" + chunkIndex;
ftpClient.storeFile(remoteFileName, inputStream);
}
@Override
public void mergeFile(String fileName, int chunkSize, int totalChunks) throws Exception {
try (FileOutputStream outputStream = new FileOutputStream(new File(fileName))) {
for (int i = 0; i < totalChunks; i++) {
String remoteFileName = fileName + "_" + i;
ftpClient.retrieveFile(remoteFileName, outputStream);
}
} catch (Exception e) {
throw new Exception("Failed to merge file", e);
}
}
@Override
public void uploadFile(InputStream inputStream, String fileName, String contentType) throws Exception {
ftpClient.storeFile(fileName, inputStream);
}
@Override
public InputStream downloadFile(String fileName) throws Exception {
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
boolean success = ftpClient.retrieveFile(fileName, outputStream);
if (!success) {
throw new Exception("File not found");
}
return new ByteArrayInputStream(outputStream.toByteArray());
}
@Override
public void deleteFile(String fileName) throws Exception {
boolean success = ftpClient.deleteFile(fileName);
if (!success) {
throw new Exception("File not found");
}
}
@Override
public List<Document> listFiles() throws Exception {
FTPFile[] files = ftpClient.listFiles();
List<Document> documents = new ArrayList<>();
if (files != null) {
for (FTPFile file : files) {
Document document = new Document();
document.put("name", file.getName());
document.put("size", file.getSize());
document.put("storageType", "ftp");
documents.add(document);
}
}
return documents;
}
}
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.List;
public class FileManagerTest {
private final FileManager fileManager;
public FileManagerTest() throws Exception {
fileManager = new MongoFileManagerImpl();
}
@Test
public void testUploadFile() throws Exception {
String fileName = "test.txt";
String content = "This is a test file";
InputStream inputStream = new ByteArrayInputStream(content.getBytes());
fileManager.uploadFile(inputStream, fileName, "text/plain");
InputStream downloadedInputStream = fileManager.downloadFile(fileName);
byte[] downloadedBytes = downloadedInputStream.readAllBytes();
String downloadedContent = new String(downloadedBytes);
assertEquals(content, downloadedContent);
fileManager.deleteFile(fileName);
}
@Test
public void testUploadFileChunk() throws Exception {
String fileName = "test.txt";
String content = "This is a test file";
int chunkSize = 5;
int totalChunks = (int) Math.ceil((double) content.length() / chunkSize);
InputStream inputStream = new ByteArrayInputStream(content.getBytes());
for (int i = 0; i < totalChunks; i++) {
byte[] chunkBytes = new byte[chunkSize];
int bytesRead = inputStream.read(chunkBytes);
InputStream chunkInputStream = new ByteArrayInputStream(chunkBytes, 0, bytesRead);
fileManager.uploadFileChunk(chunkInputStream, fileName, "text/plain", i, totalChunks);
}
fileManager.mergeFile(fileName, chunkSize, totalChunks);
InputStream downloadedInputStream = fileManager.downloadFile(fileName);
byte[] downloadedBytes = downloadedInputStream.readAllBytes();
String downloadedContent = new String(downloadedBytes);
assertEquals(content, downloadedContent);
fileManager.deleteFile(fileName);
}
@Test
public void testDeleteFile() throws Exception {
String fileName = "test.txt";
String content = "This is a test file";
InputStream inputStream = new ByteArrayInputStream(content.getBytes());
fileManager.uploadFile(inputStream, fileName, "text/plain");
fileManager.deleteFile(fileName);
boolean fileExists = true;
try {
fileManager.downloadFile(fileName);
} catch (Exception e) {
fileExists = false;
}
assertFalse(fileExists);
}
@Override
public void downloadFileToDisk(String fileName, String localFilePath) throws Exception {
File file = new File(localFilePath);
try (FileOutputStream outputStream = new FileOutputStream(file)) {
IOUtils.copy(downloadFile(fileName), outputStream);
} catch (Exception e) {
throw new Exception("Failed to download file to disk", e);
}
}
@Override
public void continueUploadFile(InputStream inputStream, String fileName, String contentType, long position) throws Exception {
GridFSBucket gridFSBucket = GridFSBuckets.create(collection.getDatabase(), COLLECTION_NAME);
GridFSUploadOptions options = new GridFSUploadOptions()
.chunkSizeBytes(1024 * 1024)
.metadata(new Document("fileName", fileName));
try (GridFSUploadStream uploadStream = gridFSBucket.openUploadStream(fileName, options)) {
uploadStream.setPosition(position);
IOUtils.copy(inputStream, uploadStream);
} catch (Exception e) {
throw new Exception("Failed to continue upload file", e);
}
}
@Override
public void downloadFileToDisk(String fileName, String localFilePath, long position) throws Exception {
File file = new File(localFilePath);
try (FileOutputStream outputStream = new FileOutputStream(file, true)) {
InputStream inputStream = downloadFile(fileName);
inputStream.skip(position);
IOUtils.copy(inputStream, outputStream);
} catch (Exception e) {
throw new Exception("Failed to download file to disk", e);
}
}
@Test
public void testListFiles() throws Exception {
List<Document> documents = fileManager.listFiles();
assertNotNull(documents);
}
}
ChunkUploadController.java
Note: all of this code was generated by the AI
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;
import java.io.IOException;
/**
 * This controller handles chunked file upload and merge.
 */
@CrossOrigin
@Api(tags = "Chunked upload")
@RestController
@RequestMapping("/chunk")
public class ChunkUploadController {
@Autowired
private ChunkUploadService chunkUploadService;
/**
 * Upload one chunk of a file
 * @param file the file chunk
 * @param chunkNumber the index of this chunk
 * @param totalChunks the total number of chunks
 * @param identifier the unique identifier of the file
 * @param filename the file name
 * @throws IOException
 */
@ApiOperation(value = "Upload one chunk of a file", notes = "Upload a file chunk to be merged later")
@PostMapping("/upload")
public ResponseEntity<?> upload(@RequestParam("file") MultipartFile file,
@RequestParam("chunkNumber") Integer chunkNumber,
@RequestParam("totalChunks") Integer totalChunks,
@RequestParam("identifier") String identifier,
@RequestParam("filename") String filename) throws IOException {
chunkUploadService.upload(file, chunkNumber, totalChunks, identifier, filename);
return ResponseEntity.ok().build();
}
/**
 * Merge the uploaded file chunks
 * @param identifier the unique identifier of the file
 * @param filename the file name
 * @throws IOException
 */
@ApiOperation(value = "Merge the uploaded file chunks", notes = "Merge all uploaded chunks into a single file")
@PostMapping("/merge")
public ResponseEntity<?> merge(@RequestParam("identifier") String identifier,
@RequestParam("filename") String filename,
@RequestParam("totalChunks") Integer totalChunks) throws IOException {
chunkUploadService.merge(identifier, filename, totalChunks);
return ResponseEntity.ok().build();
}
}
ChunkUploadService.java
Note: all of this code was generated by the AI
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import org.apache.commons.io.IOUtils;
import org.springframework.web.multipart.MultipartFile;
import org.springframework.stereotype.Service;
/**
 * Provides methods for uploading and merging file chunks.
 */
@Service
public class ChunkUploadService {
/**
 * Upload one file chunk with the given parameters.
 *
 * @param file the chunk being uploaded
 * @param chunkNumber the index of this chunk
 * @param totalChunks the total number of chunks the file was split into
 * @param identifier the identifier of the file being uploaded
 * @param filename the name of the file being uploaded
 * @throws IOException if an I/O error occurs
 */
public void upload(MultipartFile file, Integer chunkNumber, Integer totalChunks, String identifier, String filename) throws IOException {
Path chunkPath = Paths.get("uploads", identifier, chunkNumber.toString());
Files.createDirectories(chunkPath.getParent());
if (!Files.exists(chunkPath)) {
Files.write(chunkPath, file.getBytes());
} else {
// handle the case where this chunk already exists
}
}
/**
 * Merge all uploaded chunks into a single file for the given identifier and filename.
 * If not all chunks have been uploaded yet, return without merging.
 * @param identifier the identifier of the file being uploaded
 * @param filename the name of the file being uploaded
 * @param totalChunks the total number of chunks the file was split into
 * @throws IOException if an I/O error occurs
 */
public void merge(String identifier, String filename, Integer totalChunks) throws IOException {
if (!isUploadComplete(identifier, totalChunks)) {
// handle the case where not all chunks have been uploaded yet
return;
}
Path dirPath = Paths.get("uploads", identifier);
Path filePath = Paths.get("uploads", filename);
try (OutputStream out = Files.newOutputStream(filePath)) {
Files.list(dirPath)
.filter(path -> !Files.isDirectory(path))
.sorted(Comparator.comparingInt(path -> Integer.parseInt(path.getFileName().toString())))
.forEachOrdered(path -> {
try (InputStream in = Files.newInputStream(path)) {
IOUtils.copy(in, out);
} catch (IOException e) {
throw new UncheckedIOException(e);
}
});
}
}
/**
 * Check whether all chunks have been uploaded for the given identifier and total chunk count.
 * Returns true if all chunks are present, false otherwise.
 * @param identifier the identifier of the file being uploaded
 * @param totalChunks the total number of chunks the file was split into
 * @return true if all chunks have been uploaded, false otherwise
 * @throws IOException if an I/O error occurs
 */
public boolean isUploadComplete(String identifier, Integer totalChunks) throws IOException {
Path dirPath = Paths.get("uploads", identifier);
long count = Files.list(dirPath)
.filter(path -> !Files.isDirectory(path))
.count();
return count == totalChunks;
}
}
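Note that the merge step sorts chunk files numerically with `Integer.parseInt` rather than alphabetically; with more than ten chunks, a plain string sort would put "10" before "2" and corrupt the merged file. A small self-contained sketch (not part of the generated project) shows the difference:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ChunkOrderDemo {
    // Sort chunk-file names by their numeric index, as the service's comparator does
    static List<String> numericSort(List<String> names) {
        List<String> sorted = new ArrayList<>(names);
        sorted.sort(Comparator.comparingInt(Integer::parseInt));
        return sorted;
    }

    public static void main(String[] args) {
        List<String> names = List.of("10", "2", "0", "11", "1");

        // Plain lexicographic order is wrong for merging: "10" sorts before "2"
        List<String> lexical = new ArrayList<>(names);
        lexical.sort(Comparator.naturalOrder());
        System.out.println(lexical); // [0, 1, 10, 11, 2]

        // Numeric order reassembles chunks correctly
        System.out.println(numericSort(names)); // [0, 1, 2, 10, 11]
    }
}
```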
ChunkUpload-1.html
Chunked upload
Note: all of this code was generated by the AI
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Chunk Upload</title>
</head>
<body>
<input type="file" id="file">
<button onclick="upload()">Upload</button>
<script src="https://cdn.jsdelivr.net/npm/axios/dist/axios.min.js"></script>
<script>
/**
 * Upload the file
 */
async function upload() {
const file = document.getElementById('file').files[0]; // get the file
const chunkSize = 1024 * 1024 * 3; // 3MB
const totalSize = file.size; // total file size
const totalChunks = Math.ceil(totalSize / chunkSize); // total number of chunks
const identifier = file.name + '-' + totalSize + '-' + Date.now(); // file identifier
const filename = file.name; // file name
for (let currentChunk = 0; currentChunk < totalChunks; currentChunk++) { // upload each chunk in turn
const chunk = file.slice(currentChunk * chunkSize, (currentChunk + 1) * chunkSize); // get the current chunk
const formData = new FormData(); // create form data
formData.append('file', chunk); // add the file chunk
formData.append('chunkNumber', currentChunk); // add the current chunk index
formData.append('totalChunks', totalChunks); // add the total number of chunks
formData.append('identifier', identifier); // add the file identifier
formData.append('filename', filename); // add the file name
try {
const response = await axios.post('http://localhost:19000/chunk/upload', formData); // send the upload request
if (response.status !== 200) { // if the upload failed
throw new Error('Upload failed'); // throw an error
}
} catch (error) {
console.error(error); // log the error
throw new Error('Upload failed'); // throw an error
}
}
const formData = new FormData(); // create form data
formData.append('identifier', identifier); // add the file identifier
formData.append('filename', filename); // add the file name
formData.append('totalChunks', totalChunks); // add the total number of chunks
try {
const response = await axios.post('http://localhost:19000/chunk/merge', formData); // send the merge request
if (response.status !== 200) { // if the merge failed
throw new Error('Merge failed'); // throw an error
}
} catch (error) {
console.error(error); // log the error
throw new Error('Merge failed'); // throw an error
}
}
</script>
</body>
</html>
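The page above slices the file with `file.slice()` and the server concatenates the chunks in index order. That split/merge round trip can be sketched in Java as a standalone illustration (independent of the project code, using in-memory byte arrays instead of real files):

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ChunkRoundTrip {
    // Split data into fixed-size chunks, mirroring file.slice() on the client
    static List<byte[]> split(byte[] data, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < data.length; offset += chunkSize) {
            int end = Math.min(offset + chunkSize, data.length);
            chunks.add(Arrays.copyOfRange(data, offset, end));
        }
        return chunks;
    }

    // Merge chunks back in index order, mirroring the server-side merge
    static byte[] merge(List<byte[]> chunks) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] chunk : chunks) {
            out.write(chunk, 0, chunk.length);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] data = "This is a test file".getBytes();
        // 19 bytes with a 5-byte chunk size -> ceil(19 / 5) = 4 chunks
        List<byte[]> chunks = split(data, 5);
        System.out.println(chunks.size()); // 4
        // Merging the chunks restores the original content
        System.out.println(new String(merge(chunks))); // This is a test file
    }
}
```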
ChunkUpload-2.html
Chunked upload with progress bars (per-chunk and overall)
Note: all of this code was generated by the AI
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Chunk Upload with progress bars</title>
</head>
<body>
<input type="file" id="file">
<button onclick="upload()">Upload</button>
<progress id="progressBar" value="0" max="100"></progress>
<progress id="totalProgressBar" value="0" max="100"></progress>
<script src="https://cdn.jsdelivr.net/npm/axios/dist/axios.min.js"></script>
<script>
/**
 * Upload the file
 */
async function upload() {
const file = document.getElementById('file').files[0]; // get the file
const chunkSize = 1024 * 1024 * 3; // 3MB
const totalSize = file.size; // total file size
const totalChunks = Math.ceil(totalSize / chunkSize); // total number of chunks
const identifier = file.name + '-' + totalSize + '-' + Date.now(); // file identifier
const filename = file.name; // file name
let totalPercentCompleted = 0;
for (let currentChunk = 0; currentChunk < totalChunks; currentChunk++) { // upload each chunk in turn
const chunk = file.slice(currentChunk * chunkSize, (currentChunk + 1) * chunkSize); // get the current chunk
const formData = new FormData(); // create form data
formData.append('file', chunk); // add the file chunk
formData.append('chunkNumber', currentChunk); // add the current chunk index
formData.append('totalChunks', totalChunks); // add the total number of chunks
formData.append('identifier', identifier); // add the file identifier
formData.append('filename', filename); // add the file name
try {
const response = await axios.post('http://localhost:19000/chunk/upload', formData, {
onUploadProgress: function(progressEvent) {
var percentCompleted = Math.round((progressEvent.loaded * 100) / progressEvent.total);
document.getElementById('progressBar').value = percentCompleted;
totalPercentCompleted = Math.round(((currentChunk + 1) * 100) / totalChunks);
document.getElementById('totalProgressBar').value = totalPercentCompleted;
}
}); // send the upload request
if (response.status !== 200) { // if the upload failed
throw new Error('Upload failed'); // throw an error
}
} catch (error) {
console.error(error); // log the error
throw new Error('Upload failed'); // throw an error
}
}
const formData = new FormData(); // create form data
formData.append('identifier', identifier); // add the file identifier
formData.append('filename', filename); // add the file name
formData.append('totalChunks', totalChunks); // add the total number of chunks
try {
const response = await axios.post('http://localhost:19000/chunk/merge', formData); // send the merge request
if (response.status !== 200) { // if the merge failed
throw new Error('Merge failed'); // throw an error
}
} catch (error) {
console.error(error); // log the error
throw new Error('Merge failed'); // throw an error
}
}
</script>
</body>
</html>
When using Ctrl + K or Ctrl + L, the generated content is often incomplete and stops halfway. To make it keep writing:
# With either command, continue generating by entering: continue (or 继续)
Ctrl + K, then enter: continue
That's the end of this article.
Author: 宇宙小神特别萌
Code repository:
If you liked this article, please like, bookmark, and follow so you don't lose it; if you run into problems, leave a comment below.