Feature request: add max filesize. #24
This will be easy to add. I'm curious to hear more about the use case. Do you have a large quantity of big files that are known to be unique, so you'd like the scan to go faster by ignoring them?
On my disk there are 299 files larger than 1 GB. I haven't counted them yet, but there will be millions of files smaller than 10 bytes from backups.
Scanning files mounted over the network (whether NFS or other) will be slow, no matter what. If there is any way to run the scan on the host which has the disk(s) locally, that would be the best approach. If some of the files are local and some are network mounted (not sure if that is your case), you could exclude the remote ones using the -X option (see docs). I'll add an option to exclude files larger than a given size. That said, if you have millions of files smaller than 10 bytes being read over the network, that will also be slow, likely more so than the large files (depending on how large they are and on network speed). If you're not doing so already, you might want to exclude the smallest files with -m 10 or whichever size limit makes sense for you.
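For illustration, here is a minimal sketch of the idea behind combining a minimum and a maximum size filter during a scan. This is not the project's actual code; the function name `scan`, the thresholds, and the example path are placeholders chosen here, assuming the filter simply skips files whose byte size falls outside the requested range.

```python
#!/usr/bin/env python3
"""Illustrative sketch only, not the project's implementation.

Shows how a scan could skip files outside a [min_size, max_size]
byte range, analogous to the existing -m (min size) flag and the
requested max-size option.
"""
import os


def scan(root, min_size=10, max_size=1 * 1024**3):
    """Yield paths of regular files whose size falls within the range."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # unreadable or vanished file: skip it
            if size < min_size:
                continue  # too small, e.g. millions of tiny backup files
            if max_size is not None and size > max_size:
                continue  # too large, e.g. big files already known to be unique
            yield path


if __name__ == "__main__":
    # Example: consider only files between 10 bytes and 1 GB under /data.
    for p in scan("/data", min_size=10, max_size=1 * 1024**3):
        print(p)
```

With both bounds in place, the scan only hashes and compares candidates in the size band the user actually cares about, which is the speedup being asked for here.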
I tried it on the local PC; it is faster there even though it has significantly less power.
Cleaning a disk by minimum file size is pretty easy to do by hand, but gigabytes of small files are hard to compare manually. With both min and max options, everyone can work the way they like.