
I've got something weird going on. Normally I'd use df to show the available space on my hard drive, and I noticed I was running short on my home partition. So I opened gparted and saw I had more space than df was showing: about a 5% difference, almost 2GB. I'm going to try testdisk to see if anything is wrong.

Here's fdisk -l:

Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x11a8ba38

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1              63    10233404     5116671    b  W95 FAT32
/dev/sda2        10233405   231239679   110503137+   7  HPFS/NTFS/exFAT
/dev/sda3       231241726   312580095    40669185    5  Extended
/dev/sda5       308385792   312580095     2097152   82  Linux swap / Solaris
/dev/sda6       243032064   308383743    32675840   83  Linux
/dev/sda7       231241728   243030015     5894144   83  Linux

Partition table entries are not in disk order

and parted -l:

Model: ATA TOSHIBA MK1652GS (scsi)
Disk /dev/sda: 160GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system     Flags
 1      32.3kB  5240MB  5239MB  primary   fat32
 2      5240MB  118GB   113GB   primary   ntfs
 3      118GB   160GB   41.6GB  extended
 7      118GB   124GB   6036MB  logical   ext4
 6      124GB   158GB   33.5GB  logical   ext4
 5      158GB   160GB   2147MB  logical   linux-swap(v1)

df -h shows:

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             4.9G  4.1G  860M  83% /media/sda1
/dev/sda2             106G   86G   21G  81% /media/sda2
/dev/sda6              31G   29G  807M  98% /media/sda6
/dev/sda7             5.6G  3.8G  1.6G  71% /media/sda7

sda1 is a recovery partition for the laptop, sda2 is the windows partition, sda6 is /home, and sda7 is root.

As you can see in the screenshot below, both sda1 and sda2 are showing fine with df, and 6 and 7 (in the extended partition) aren't. :/

gparted shows: gparted screenshot

cfdisk complains of: FATAL ERROR: Bad logical partition 6: enlarged logical partitions overlap, which it has never done before. This started happening after I used a liveUSB to run gparted to resize and move partitions. It failed to shrink the filesystem on my Windows partition, and when it refreshed, everything looked okay.

I'm going to try the partedmagic liveUSB as soon as I get it onto my flash drive. Has anyone else had this problem, or does anyone know a solution? I'd also like to fix my partition table entries so they're in disk order, but that can be done another time.

EDIT: So I'm in partedmagic, and testdisk tells me:

Disk /dev/sda - 160 GB / 149 GiB - CHS 19457 255 63
Current partition structure:
     Partition                  Start        End    Size in sectors

 1 P FAT32                    0   1  1   636 254 63   10233342 [PQSERVICE]
 2 P HPFS - NTFS            637   0  1 14394   1  7  221006275 [XP]
 3 E extended             14394  33 38 19457  53 52   81338370

Bad sector count.

So I guess that needs to be fixed somehow; I'll see if I can fix it with what I know of testdisk.

df without the human-readable flag shows this:

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1              5106660   4226416    880244  83% /media/sda1
/dev/sda2            110503132  89294228  21208904  81% /media/sda2
/dev/sda6             32164696  29691780    839128  98% /media/sda6
/dev/sda7              5801560   3902728   1604128  71% /media/sda7

32164696 - 29691780 = 2472916, not the 839128 shown as available, so there's something going wrong somewhere.
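As a sanity check on those df numbers (a sketch; the 5% figure is an assumption about the usual mkfs.ext4 default, not something from this output):

```shell
# Numbers from the df output above for /dev/sda6 (1K-blocks).
size=32164696
used=29691780
avail=839128

# Blocks that are neither used nor available to ordinary users.
gap=$(( size - used - avail ))
echo "unaccounted: $gap 1K-blocks"

# Compare with 5% of the filesystem, the usual mkfs.ext4 reserve default.
echo "5% of size:  $(( size * 5 / 100 )) 1K-blocks"
```

The two figures come out within a couple percent of each other, which points at reserved blocks rather than lost data.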

Found an answer to what's going on here: https://askubuntu.com/a/48511 from a Google search prompted by some good ideas from Mike. I do feel silly.

Rob
  • 2,392
  • There is not enough information to respond to this question. Which partition is home, and what does df say, and did you account for block size overhead? – Paul Nov 29 '11 at 04:11
  • Added information. I'm not sure what you mean by block size overhead, I'm assuming what you mean is if I put the partitions in a spot where they start or end in the middle of a block, I mess something up? – Rob Nov 29 '11 at 04:19

4 Answers


I don't know how to fix your problem, but I remember that when I partitioned my hard drive, one of the options was how much space to reserve for the root user, and the default was 5%. I think the idea is that even if you (as a normal user) run out of space, the computer doesn't crash, because all the important stuff runs as root and can still use that reserved 5%.

Mike
  • 111
  • The root user is in /root/, not /home/root/. – Rob Nov 29 '11 at 04:37
  • How is that relevant? Read again what I wrote, I think you missed the idea. – Mike Nov 29 '11 at 04:54
  • There's nothing to do with the root user on sda6. It's regular user's home directories only, and in this case it's only one user. – Rob Nov 29 '11 at 04:57
  • LOL at your confidence. I'll laugh at you when I turn out to be right. – Mike Nov 29 '11 at 05:03
  • Ha, you downvoted an answer because you didn't like a comment - now who's the jerk? :) I think that whenever you create an ext3 or ext4 filesystem, part of the space is reserved. Not in the form of files - that would be obviously useless, hence my lol. If you weren't asked about this, the partitioner probably went with the default 5%. – Mike Nov 29 '11 at 05:15
  • ext3 and 4 make a "lost+found" directory. That's empty, takes up 16K, and shows up just fine. http://askubuntu.com/questions/48488/ext4-partition-size-free-space-discrepancies I looked into gparted; your answer is oddly worded. If you can make it clearer, I might be able to accept it. – Rob Nov 29 '11 at 05:17
  • I don't think lost+found has anything to do with reserved blocks. Again, reserving files would be pretty useless. What the system needs is reserved space, to be used in not-yet-specified files. – Mike Nov 29 '11 at 05:20
  • To fix it, you can do tune2fs -m 0 /dev/sda6 which removes the reserved blocks. – Rob Nov 29 '11 at 05:32
  • Mike, if you add to the answer that ext4 reserves blocks for root when created, and you can remove them with tune2fs, I'll accept the answer. A link to http://askubuntu.com/a/48511 would be great too. – Rob Nov 29 '11 at 05:37
  • Thanks for offering, Rob, but I'm not here to collect reputation. I'm happy that your problem is solved. You can create a new answer as you described and accept it for other people's benefit. – Mike Nov 29 '11 at 05:51
  • At least edit the answer so I can change from down to up; it led me to the exact solution. – Rob Nov 29 '11 at 14:08

While the sector error may be an issue, the missing disk space is just from differing units used by different utilities.

gparted is using gibibytes and df is using gigabytes.

31.16 GiB = 31.16 x 1024 x 1024 x 1024 bytes, or 33457795235

A gigabyte is 1000000000 bytes, so 33457795235 / 1000000000 = 33.5GB when rounded.
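The conversion above can be checked directly; a small sketch of the same arithmetic:

```shell
# Convert a gparted-style GiB figure into the decimal-GB figure parted reports.
gib=31.16
awk -v g="$gib" 'BEGIN {
    bytes = g * 1024 * 1024 * 1024      # GiB -> bytes (binary units)
    printf "%s GiB = %.0f bytes = %.1f GB\n", g, bytes, bytes / 1e9
}'
```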

Paul
  • 60,031

I don't know the implementation details of df, ext{2|3|4} or gparted, but I can tell you that measuring free space in a filesystem is not always cut and dried. I can imagine circumstances in which two people (or programs) could disagree about how much free space there is on some hypothetical filesystem.

  1. Consider that filesystems allocate space at a coarser grain than file sizes. If your filesystem allocates in blocks of size x, and your file is x + 1 bytes, then 2x will be allocated, leaving x - 1 bytes of wasted (slack) space. Is that space "free"? Your filesystem will never use it unless you change the length of your file, but a naive program that only counts file sizes would under-report the amount of space used and over-report the number of free bytes.

  2. Imagine some circumstance where your kernel's filesystem driver might have some runtime features that create the appearance of less available space. For example, it might have over-allocated some space for currently open files that are being written, to ensure contiguous allocations in case you might append to a file. Or perhaps some space is set aside for a journal or other metadata. I can imagine your filesystem driver's notion of space differing from what has actually been written to disk.

  3. Some metadata might be cached, and one tool might read the cache while another does a deeper check on disk. I don't know what the ext family of filesystems does, but I'm thinking in particular of FAT, where the amount of available space was cached in a 32-bit integer to speed up querying free space.

  4. There could be some space marked as "in use" (so that no future allocations use that space) but not referenced by any inode. This might indicate something weird has gone on (I/O failures? bad disk?). I think fsck programs generally check for that sort of thing.
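The block-granularity effect in point 1 is easy to demonstrate with a bit of arithmetic (a sketch; 4096 bytes is a typical ext4 block size, assumed here, not taken from the question):

```shell
# A file one byte longer than a block occupies a whole extra block.
blocksize=4096
filesize=$(( blocksize + 1 ))

# Round the file size up to the next multiple of the block size.
allocated=$(( (filesize + blocksize - 1) / blocksize * blocksize ))
echo "file: $filesize bytes, allocated: $allocated bytes, slack: $(( allocated - filesize ))"
```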

These are all hypothetical; I don't know what the real answer is. At any rate, judging from the other messages, there might be some real problems. I'd run fsck and see what it says.

asveikau
  • 223
  • I fsck'd and everything came back fine. – Rob Nov 29 '11 at 05:36
  • @Rob - OK. What do you think of the other points? – asveikau Nov 29 '11 at 06:38
  • The problem was reserved blocks with ext4. Apparently the inode table is a set size, and ext4 reserves even more blocks (~5%, like Mike said in another answer) for root, so that no regular user can fill the hard disk. I wonder why sudo df didn't show it properly, but I can live with that. tune2fs -l shows a lot of information, and I learned some new things today. – Rob Nov 29 '11 at 14:14

As Mike said, there's some space reserved in an ext4 partition for root only, to prevent non-root users from filling the hard drive. More information on that in the comments of this answer over at askubuntu: https://askubuntu.com/a/48511

If you don't feel like you need this (as a single user on this computer, I don't) you can remove these reserved blocks with tune2fs:

tune2fs -m 0 /dev/XXX

where XXX is the partition you want to remove reserved blocks from.
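If you want to see the effect before touching a real disk, you can try the whole thing on a scratch ext4 image first (a sketch; it assumes e2fsprogs is installed, and it never touches an actual block device):

```shell
# Build a small ext4 filesystem inside a regular file (no root needed).
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=16 status=none
mkfs.ext4 -F -q "$img"

# The default reserve is 5% of the blocks...
tune2fs -l "$img" | grep 'Reserved block count'

# ...and -m 0 drops it to zero.
tune2fs -m 0 "$img" > /dev/null
tune2fs -l "$img" | grep 'Reserved block count'

rm -f "$img"
```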

Rob
  • 2,392