My sysadmin is telling me that we should remove old static files from a server and store them in a database instead because having too many files on a filesystem impacts the general performance of the system. Is the impact significant? We have about 20,000 files in a directory at the moment, and would expect to hit 100,000 sometime in the next few years. This is on a relatively recent Ubuntu LTS system. If 100,000 isn't significant, then what number would be?
Edit: This is different from "Maximum number of files in one ext3 directory while still getting acceptable performance?" because I don't care about the performance of a single directory, but rather about total system performance when the number of files on the whole system reaches some arbitrary number. In my specific case, the sysadmin is arguing that Apache will slow down due to the total number of files on the entire system.
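One way to ground this debate is to measure it directly. The sketch below (paths, file counts, and the temp directory are illustrative, not from the question) creates a few thousand small files in one directory and times a single-name lookup. On ext4 with `dir_index` enabled, which is the default on recent Ubuntu LTS releases, a name lookup is a hash-tree probe rather than a linear scan, so it should stay fast as the directory grows:

```shell
#!/bin/sh
# Hedged benchmark sketch: create many empty files in a scratch directory,
# then time a lookup of one name. Counts and paths are illustrative.
dir=$(mktemp -d)

# Create 2,000 empty files (scaled down from the question's 20,000 for speed).
for i in $(seq 1 2000); do
    : > "$dir/file$i"
done

# Time a single-name lookup; on ext4 with dir_index this is a hash probe,
# not a scan of all entries, so it does not degrade linearly with file count.
time stat "$dir/file1999" > /dev/null

echo "created $(ls "$dir" | wc -l) files"

# Clean up the scratch directory.
rm -r "$dir"
```

Rerunning with larger counts (20,000, 100,000) and comparing the timings would show directly whether file count matters at the scales in question; note this measures per-directory lookup cost, not any whole-filesystem effect on Apache.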
"...having too many files on a filesystem impacts performance..." but then "...I don't care about directory performance..." and finally "...the sysadmin is arguing that Apache will slow down due to the total number of files on the entire system." So this is specifically about Apache? And furthermore, not about Apache seeking in directories that have a lot of files, but just Apache in general slowing down because the filesystem has a lot of files, outside of a web root? – Wesley Sep 27 '13 at 19:50

"Apache in general slowing down because the filesystem has a lot of files, outside of a web root?" I mentioned Apache to give context, though, so it is more of a general question about the filesystem. – samspot Sep 27 '13 at 20:04