
I have an Amazon AWS instance with storage mounted on /dev/xvdb, which is the "usual" /mnt/build_tmp. It is about 70GB (exactly 66,946,696kB). When I attempted to write to it, it was apparently full. This seemed unlikely, so I checked: there were about 11GB of files on it (according to 'du'), but /mnt (which contains only /mnt/build_tmp) was 100% full (according to 'df'). I deleted all of the files (about 6GB worth) except one (a big 5.5GB tar file), and now I have about 6GB of free space. Precisely, at the moment, this is the situation:

ubuntu@ip-172-31-60-67:/mnt$ df
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/xvda1       8115168  6083076   1596816  80% /
none                   4        0         4   0% /sys/fs/cgroup
udev             7689964       12   7689952   1% /dev
tmpfs            1540092      780   1539312   1% /run
none                5120        0      5120   0% /run/lock
none             7700456       72   7700384   1% /run/shm
none              102400        8    102392   1% /run/user
/dev/xvdb       66946696 57365136   6174200  91% /mnt
ubuntu@ip-172-31-60-67:/mnt$ du
5773532 ./build_tmp
du: cannot read directory ‘./lost+found’: Permission denied
16  ./lost+found
5773552 .
ubuntu@ip-172-31-60-67:/mnt$ ls
build_tmp/  lost+found/
ubuntu@ip-172-31-60-67:/mnt$ ll build_tmp/
total 5.6G
drwxr-xr-x 2 ubuntu 4.0K Sep 18 18:33 ./
drwxr-xr-x 4 root   4.0K Aug 25 18:43 ../
-rw-rw-r-- 1 ubuntu 5.6G Sep 17 00:38 archive.tar.gz

Can anyone explain this? I have never seen anything like this before. I am thinking that it is somehow a result of AWS, but it might be something more generic.

In any case, I need to recover the missing 50GB+ of space on the disk.

[p.s. I already checked the superuser question "why is df different than du", it did not seem relevant to my problem.]

1 Answer


This turns out to be a variant of the problem described here:

https://serverfault.com/questions/454194/disk-space-keeps-filling-up-on-ec2-instance-with-no-apperent-files-directories

The solution described there also resolved this problem.

If a file that has been deleted is still open in a process, its space will not be reclaimed until the process closes the file (or is killed). If you cannot identify the process holding the file open, a reboot will also work, since that terminates all running processes (and so closes all open files).
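The effect is easy to reproduce on any Linux box. In this sketch (the file path is illustrative), a shell holds a deleted file open on a file descriptor, so du stops seeing the file while its blocks stay allocated until the descriptor is closed:

```shell
# Create a 10 MB file and hold it open on fd 3.
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=1M count=10 2>/dev/null
exec 3<"$tmpfile"

# Delete it: the directory entry is gone (du no longer counts it),
# but the blocks remain allocated (df still counts them).
rm "$tmpfile"

# The kernel still shows the open, deleted file; the symlink
# target in /proc ends with "(deleted)".
ls -l /proc/$$/fd/3

# Only closing the descriptor actually frees the space.
exec 3<&-
```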

Once I located the process holding the deleted file open and killed it, the space was recovered.
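For anyone hitting the same thing, this is a sketch of how to track the culprit down (the PID and fd number below are made-up examples, not from my instance):

```shell
# List open files with a link count of 0, i.e. deleted-but-still-open
# files, on the affected filesystem (requires lsof):
sudo lsof +L1 /mnt

# Equivalent without lsof: scan /proc for descriptors whose symlink
# target is a deleted file.
sudo find /proc/[0-9]*/fd -lname '*(deleted)' 2>/dev/null

# If the process cannot be killed or restarted, truncating the deleted
# file through its /proc entry frees the space immediately
# (PID 1234, fd 3 are hypothetical):
sudo truncate -s 0 /proc/1234/fd/3
```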

  • lsof will help you find such files, but perhaps you have worked that out by now. Some programs do this so that if they crash, they won't leave temp files behind. It also provides a primitive level of tamper-resistance. du can only count what the directory entries can explain. – Michael - sqlbot Sep 18 '15 at 23:05