hugetlbfs: fix i_blocks accounting
authorEric Sandeen <sandeen@sandeen.net>
Wed, 29 Jul 2009 22:02:16 +0000 (15:02 -0700)
committerLinus Torvalds <torvalds@linux-foundation.org>
Thu, 30 Jul 2009 02:10:35 +0000 (19:10 -0700)
commite4c6f8bed01f9f9a5c607bd689bf67e7b8a36bd8
treed344bc7b6f89f7066c7e35ddff8c4a4b56904a36
parent659098141d02eb8e3545be8969d262e02d2f3f98
hugetlbfs: fix i_blocks accounting

As reported in Red Hat bz #509671, the i_blocks accounting for files on
hugetlbfs goes wrong when doing something like:

   $ > foo
   $ date  > foo
   date: write error: Invalid argument
   $ /usr/bin/stat foo
     File: `foo'
     Size: 0          Blocks: 18446744073709547520 IO Block: 2097152 regular
...

This is because hugetlb_unreserve_pages() unconditionally subtracts
blocks_per_huge_page(h) on each call instead of scaling by the number of
huge pages actually freed.  When freed is 0, i_blocks goes negative, and
since it is an unsigned 64-bit counter it wraps around to the huge value
seen above.
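
For illustration only, a minimal user-space C snippet (not kernel code;
the 4096 figure assumes a 2MB huge page divided into 512-byte blocks)
showing how an unsigned 64-bit counter wraps to exactly the value stat
reports when the unconditional subtraction runs against an empty file:

/* stand-alone demo of the i_blocks underflow, not kernel code */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t i_blocks = 0;			/* empty file: zero blocks */
	uint64_t blocks_per_hpage = 4096;	/* 2MB / 512-byte blocks */

	/* the buggy path subtracts a full huge page even when freed == 0 */
	i_blocks -= blocks_per_hpage;

	/* prints 18446744073709547520, matching the stat output above */
	printf("%llu\n", (unsigned long long)i_blocks);
	return 0;
}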

This is a regression from commit a5516438959d90b071ff0a484ce4f3f523dc3152
("hugetlb: modular state for hugetlb page size"), which did:

- inode->i_blocks -= BLOCKS_PER_HUGEPAGE * freed;
+ inode->i_blocks -= blocks_per_huge_page(h);

So put the freed multiplier back, and the accounting is correct again.
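
Roughly, the corrected accounting looks like the sketch below (heavily
simplified; the real hugetlb_unreserve_pages() in mm/hugetlb.c also
truncates the reservation region and handles the quota and reservation
counts, which is elided here):

/* sketch only: the point is the "* freed" multiplier */
void hugetlb_unreserve_pages(struct inode *inode, long offset, long freed)
{
	struct hstate *h = hstate_inode(inode);

	spin_lock(&inode->i_lock);
	/* scale by the number of huge pages actually freed */
	inode->i_blocks -= blocks_per_huge_page(h) * freed;
	spin_unlock(&inode->i_lock);
}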

Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Acked-by: Andi Kleen <andi@firstfloor.org>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/hugetlb.c