LVM thin provisioning - file system usage and pool usage don't match

When I was demonstrating LVM thin provisioning to a new batch of campus hires, they pointed out an important mismatch between thin pool usage and actual file system usage.

I thought it would be worth a try to find the cause.

So here we go: I created a thin pool of size 100MB and a thin volume of 1GB.

The thin volume was formatted with ext4 and mounted over /test_thin.

Everything was going smoothly as per the plan until we did a tar backup of /usr onto /test_thin.

Here we go:

* Created vg001 - size 12GB

[root@ol7-san ~]# vgdisplay vg001
  --- Volume group ---
  VG Name               vg001
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  12
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               12.00 GiB
  PE Size               4.00 MiB
  Total PE              3071
  Alloc PE / Size       0 / 0
  Free  PE / Size       3071 / 12.00 GiB
  VG UUID               kroGDG-Rs8L-1c0e-WBjJ-g0OG-4u8k-T4yaJl
[root@ol7-san ~]#

* Created a thin pool of size 100MB

[root@ol7-san ~]# lvcreate -L 100M -T vg001/mythinpool
  Logical volume "mythinpool" created.
[root@ol7-san ~]#
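
As a side note, the pool itself is built from hidden data and metadata LVs. If you want to peek at them, lvs -a will list them (a sketch; the exact output layout varies by LVM version):

# -a also lists the hidden [mythinpool_tdata] and [mythinpool_tmeta]
# components that back the thin pool
lvs -a vg001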


* Created a thin volume of size 1GB

[root@ol7-san ~]# lvcreate -V1G -T vg001/mythinpool -n thinvolume
  Logical volume "thinvolume" created.
[root@ol7-san ~]#


* Formatted thinvolume with ext4

[root@ol7-san ~]# mkfs.ext4 /dev/vg001/thinvolume
mke2fs 1.42.9 (28-Dec-2013)

..(snipped)...
Writing superblocks and filesystem accounting information: done

[root@ol7-san ~]#


* Mounted the file system over the directory /test_thin

[root@ol7-san ~]# mount /dev/vg001/thinvolume /test_thin
[root@ol7-san ~]# df -h /test_thin/
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/vg001-thinvolume  976M  2.6M  907M   1% /test_thin
[root@ol7-san ~]#


* Everything is good, so let's do a tar backup

[root@ol7-san test_thin]# tar -cf usr_bkp.tar /usr
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets


* Let's check the FS usage

[root@ol7-san ~]# df -h /test_thin/
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/vg001-thinvolume  976M  312M  597M  35% /test_thin
[root@ol7-san ~]#


* Let's check the thin pool usage as well

[root@ol7-san test_thin]# lvs
  LV         VG    Attr         LSize   Pool       Origin Data%  Meta%  Move Log Cpy%Sync Convert
  mythinpool vg001 twi-aotz--   100.00m                   100.00 1.95
  thinvolume vg001 Vwi-aotz--   1.00g   mythinpool        9.77
[root@ol7-san test_thin]#


The pool is 100% utilized, and lvs clearly shows its size of 100MB.

SUPERSIZE..!!! :) File system usage shows 312MB, but our thin pool is only 100MB.

So where is the remaining 212MB stored???

* I pressed Ctrl+C on tar as it hung, and tried to do a sync

[root@ol7-san test_thin]# sync

It hung as well..!!! Gotcha.. the contents were sitting in the page cache in memory and had not really been written to disk. Once the 100MB pool filled up, writeback stalled, so everything tar wrote beyond that point stayed as dirty pages in memory.
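
A quick way to confirm this theory, assuming the box is still in that stuck state, is to look at the kernel's dirty page counters:

# Dirty/Writeback are bytes waiting to be flushed to disk; normally they
# drain towards zero, but with a full thin pool they stay pinned high
grep -E '^(Dirty|Writeback):' /proc/meminfo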

So that solves the mystery of the file system usage and pool usage mismatch: df reports what the file system has accepted (much of it still in the page cache), while lvs reports only what has actually been written to the thin pool.
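
If you rerun the test, you can watch the two numbers diverge live; for example (assuming the same mount point and VG name):

# df reports what the file system has accepted into the page cache,
# lvs reports what has actually landed in the thin pool
watch -n1 'df -h /test_thin; lvs vg001'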

You can extend the pool after breaking tar to get sync to commit all the dirty pages to disk and return to the terminal, as sketched below.
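
Something along these lines should unblock it; the +400M figure is just an illustration, any growth big enough to hold the outstanding dirty data will do:

# Grow the thin pool so the stalled writeback can complete
lvextend -L +400M vg001/mythinpool

# If Meta% is also running high, grow the pool metadata too
lvextend --poolmetadatasize +16M vg001/mythinpool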

One more catch - removing files won't release thin pool usage; you need to execute fstrim over the file system to give the space back to the pool.
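
For example, a minimal sketch of reclaiming the space from our test (assuming the tar archive is still at /test_thin/usr_bkp.tar):

# Deleting the file frees blocks inside ext4 only
rm /test_thin/usr_bkp.tar
sync
# fstrim tells the thin pool those blocks are free again
fstrim -v /test_thin
# Data% on mythinpool should now drop
lvs vg001

Alternatively, mounting the file system with -o discard returns blocks to the pool as files are deleted, at some runtime cost.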




Comments

  1. I'm interested to see if you have a workaround. This brought down one of my servers. I've documented tests I've run. I'd be interested in your comments.

    https://www.facebook.com/groups/1681243271985192/?reaf=bookmarks

