Hello, is it possible to manually move the .dat and .idx files to a different volume?
Moving them to a different volume server? Yes! The steps: 1. Shut down the original volume server. 2. Move the .dat and .idx files to the new volume server. 3. Start (or restart) the new server. Optionally, restart the original volume server if you still need it.
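For step 2, here is a minimal Go sketch of copying one volume's .dat and .idx files, assuming hypothetical directory paths and that the new server's volume directory happens to be reachable as a local mount; in practice, scp or rsync between the two machines works just as well.

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// copyFile copies a single file from src to dst.
func copyFile(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	volumeId := "2"                      // hypothetical volume to move
	oldDir := "/data/weed_old"           // hypothetical old server's volume directory
	newDir := "/mnt/newserver/data/weed" // hypothetical mount of the new server's volume directory
	for _, ext := range []string{".dat", ".idx"} {
		name := volumeId + ext
		if err := copyFile(filepath.Join(oldDir, name), filepath.Join(newDir, name)); err != nil {
			fmt.Println("copy failed:", err)
			return
		}
		fmt.Println("copied", name)
	}
}

Once the files are in place, start the new volume server with its volume directory set to that location (step 3).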
Hi ChrisLu, Weed-FS is great. I tested it and it is fast:
ab -n 10000 -c 100 http://192.168.1.1:8080/2/014a96ba83
> Requests per second: 6029.21 [#/sec] (mean)
I tried using "weed filer", but it was slow in my test:
ab -n 10000 -c 100 http://192.168.1.1:8888/2014/05/23/1.jpg
> Requests per second: 1885.35 [#/sec] (mean)
Could you tell me why? Thank you very much.
The filer does extra work to serve a request:
1) filer: look up the directory in memory (this should be very fast)
2) filer: look up the file "1.jpg" via leveldb
3) filer: proxy the request to the weed volume server
4) volume server: return the data to the filer
5) filer: give the data back to the client
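To make the extra hops visible, here is a rough Go sketch of steps 2 through 5 only; the leveldb key layout, the path, and the volume server address are made up for illustration, and the real filer code is more involved than this.

package main

import (
	"io"
	"log"
	"net/http"
	"os"

	"github.com/syndtr/goleveldb/leveldb"
)

func main() {
	// Step 2: look up the fileId stored for the requested path in leveldb.
	db, err := leveldb.OpenFile("filer.db", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	fileId, err := db.Get([]byte("/2014/05/23/1.jpg"), nil)
	if err != nil {
		log.Fatal(err)
	}

	// Steps 3 and 4: proxy the request to the volume server holding that fileId.
	resp, err := http.Get("http://192.168.1.1:8080/" + string(fileId))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Step 5: hand the bytes back to the client (stdout here, for simplicity).
	if _, err := io.Copy(os.Stdout, resp.Body); err != nil {
		log.Fatal(err)
	}
}

Hitting the volume server directly skips the lookup and the proxy hop, which is why the direct URL benchmarks faster.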
Just comparing single-file performance is not really meaningful. Put lots and lots of files under one filer folder, and randomly access them. The filer can handle that with fairly consistent speed and little degradation. That's where the power of the filer shines.
Chris
In general, I think you may have some misunderstanding about fileId.
What are the limitations of the file id name? I've noticed it can take a-z, 0-9, but needs to be 12 chars long?
Please read the format definition in file_id.go. There are 3 parts in the file id: <volume id, file key, cookie>.
type FileId struct {
	VolumeId VolumeId // which volume data file holds the entry
	Key      uint64   // the file key within that volume
	Hashcode uint32   // the cookie, which guards against guessed file ids
}
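To illustrate the three-part layout, here is a small Go sketch that decomposes a file id string such as "2,014a96ba83". As I read file_id.go, the volume id is the part before the comma, the trailing 8 hex characters after the comma are the cookie, and the remaining leading hex characters are the key; treat this as a sketch, not the authoritative parser.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	fid := "2,014a96ba83" // e.g. the file id used in the ab test above

	parts := strings.SplitN(fid, ",", 2)
	volumeId, _ := strconv.ParseUint(parts[0], 10, 32)

	keyCookie := parts[1]
	// The last 8 hex characters are the cookie; whatever precedes them is the key.
	cookie, _ := strconv.ParseUint(keyCookie[len(keyCookie)-8:], 16, 32)
	key, _ := strconv.ParseUint(keyCookie[:len(keyCookie)-8], 16, 64)

	fmt.Printf("volume id=%d key=%d cookie=%x\n", volumeId, key, cookie)
	// For "2,014a96ba83" this prints: volume id=2 key=1 cookie=4a96ba83
}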
Is there an automatic re-replication of volumes when a volume server fails?
No. If a volume server fails, any volume on other servers that shares a volume id with a volume on the failed server becomes read-only.
"Each data volume is size 32GB", what does this exactly mean? If I have a host server with 20GB disk space, will it be capable of running a volume server in it? What if I have a 40GB server, will the rest 8GB left unused? Also can I configure the data volume size to fit my needs? I'm new to weed-fs and is excited about it to use it in a production cluster having several nodes.
Btw, thanks a lot for the effort of writing this exciting file system.
Each "volume server" can have multiple "volume data files". Each "volume data file" currently has this 32GB limit.
For small servers like the ones you describe, I would recommend setting the volumeSizeLimit to 10GB, running 2 volumes on the 20GB disk and 4 volumes on the 40GB disk (and, of course, maybe leave some space for other purposes).
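Just to spell out that arithmetic, here is a throwaway Go sketch with the 10GB per-volume limit assumed above:

package main

import "fmt"

func main() {
	volumeSizeLimitGB := 10 // the per-volume limit suggested above
	for _, diskGB := range []int{20, 40} {
		// Integer division: how many full-size volume data files fit on the disk.
		fmt.Printf("%dGB disk / %dGB limit -> %d volume data files\n",
			diskGB, volumeSizeLimitGB, diskGB/volumeSizeLimitGB)
	}
}

In practice, leave a little headroom rather than filling the disk exactly, as noted above.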
I think weed-fs currently is robust enough for many use cases. Let me know or file a bug if you see anything abnormal.
Chris