
Lustre - migrating data from OSTs

Sometimes the OSTs in a file system have unbalanced usage, either due to the addition of new OSTs, or because of user error such as explicitly specifying the same starting OST index (e.g. -i 0) for a large number of files, or when creating a single large file on one OST. If an OST is full and an attempt is made to write more information to that OST (e.g. extending an existing file), an error may occur. The procedures below describe how to handle a full OST.
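
For example, explicitly setting a starting OST index when striping a directory pins every new file in it to the same OST. The short session below is a hypothetical illustration of how such an imbalance arises (the directory and file names are invented):

[root@client01 ~]# lfs setstripe -c 1 -i 0 /mnt/testfs/pinned
[root@client01 ~]# cp /data/input.dat /mnt/testfs/pinned/
[root@client01 ~]# lfs getstripe -i /mnt/testfs/pinned/input.dat
0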

Checking File System Usage

The example below shows an unbalanced file system:

[root@client01 ~]# lfs df -h
UUID                 bytes     Used  Available Use%  Mounted on
testfs-MDT0000_UUID   4.4G   214.5M       3.9G    4%  /mnt/testfs[MDT:0]
testfs-MDT0001_UUID   4.4G   144.5M       4.0G    4%  /mnt/testfs[MDT:1]
testfs-OST0000_UUID   2.0T   751.3G       1.1T   37%  /mnt/testfs[OST:0]
testfs-OST0001_UUID   2.0T   755.3G       1.1T   37%  /mnt/testfs[OST:1]
testfs-OST0002_UUID   2.0T     1.9T      55.1M   99%  /mnt/testfs[OST:2] <-
testfs-OST0003_UUID   2.0T   751.3G       1.1T   37%  /mnt/testfs[OST:3]
testfs-OST0004_UUID   2.0T   747.3G       1.1T   37%  /mnt/testfs[OST:4]
testfs-OST0005_UUID   2.0T   743.3G       1.1T   36%  /mnt/testfs[OST:5]
filesystem summary:  11.8T     5.5T       5.7T   46%  /mnt/testfs

In this case, OST:2 is almost full and when an attempt is made to write additional information to the file system (with uniform striping over all the OSTs), the write command fails as follows:

[root@client01 ~]# lfs setstripe -c -1 /mnt/testfs
[root@client01 ~]# dd if=/dev/zero of=/mnt/testfs/test_3 bs=10M count=100
dd: writing `/mnt/testfs/test_3': No space left on device
98+0 records in
97+0 records out
1017192448 bytes (1.0 GB) copied, 23.2411 seconds, 43.8 MB/s

Disabling MDS Object Creation on OST

To allow continued use of the file system, object creation must be disabled on the full OST. This must be done on all MDS nodes, since each MDS allocates OST objects for new files.

1. As the root user on the MDS, use the lctl set_param command to disable object creation on the full OST:

[root@mds01 ~]# lctl set_param osp.testfs-OST0002-*.max_create_count=0
osp.testfs-OST0002-MDT0000.max_create_count=0
osp.testfs-OST0002-MDT0001.max_create_count=0

The MDS connections to OST0002 will no longer create objects there. This process should be repeated for other MDS nodes if needed. If a new file is now written to the file system, the write will be successful as the stripes are allocated across the remaining active OSTs.
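
To confirm the setting, the parameter can be read back on the MDS and a test file written from a client. The session below is a sketch reusing the node and file system names from this example (test_4 is an invented file name):

[root@mds01 ~]# lctl get_param osp.testfs-OST0002-*.max_create_count
osp.testfs-OST0002-MDT0000.max_create_count=0
osp.testfs-OST0002-MDT0001.max_create_count=0
[root@client01 ~]# dd if=/dev/zero of=/mnt/testfs/test_4 bs=10M count=100
[root@client01 ~]# lfs getstripe /mnt/testfs/test_4

In the getstripe output, no stripe should report obdidx 2.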

2. Once the OST is no longer full (e.g. objects deleted or migrated off the full OST), it should be enabled again:

[root@mds01 ~]# lctl set_param osp.testfs-OST0002-*.max_create_count=20000
osp.testfs-OST0002-MDT0000.max_create_count=20000
osp.testfs-OST0002-MDT0001.max_create_count=20000

Note: for Lustre releases 2.10.6 and earlier, the create_count parameter must also be set to a non-zero value:

[root@mds01 ~]# lctl set_param osp.testfs-OST0002-*.create_count=128
osp.testfs-OST0002-MDT0000.create_count=128
osp.testfs-OST0002-MDT0001.create_count=128

Migrating Data within a File System

Data from existing files can be migrated to other OSTs using the lfs_migrate command. This can be done either while the full OST is deactivated, as described above, or while the OST is still active (in which case the full OST will have a reduced, but not zero, chance of being used for new files).

1. Identify the file(s) to be moved. In the example below, lfs find locates large files with objects on OST:2, and the lfs getstripe output shows that the file test_2 is located entirely on that OST:

[root@client01 ~]# lfs find /mnt/testfs -size +1T --ost 2
/mnt/testfs/test_2
[root@client01 ~]# lfs getstripe /mnt/testfs/test_2
/mnt/testfs/test_2
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       1
lmm_layout_gen:    0
lmm_stripe_offset: 2
	obdidx		 objid		 objid		 group
	     2	       1424032	      0x15baa0	             0

2. If the file is very large, it should also be striped across multiple OSTs. Use lfs_migrate to move the file(s) to new OSTs; the -c 4 option restripes the file across four OSTs as part of the migration:

[root@client01 ~]# lfs_migrate -c 4 /mnt/testfs/test_2
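
lfs_migrate also accepts file names on standard input, so it can be combined with lfs find to drain every file that has objects on the full OST. The pipeline below is a sketch under the same layout as above; check the option spellings against the lfs and lfs_migrate versions installed on your system:

[root@client01 ~]# lfs find /mnt/testfs --ost 2 -type f | lfs_migrate -y

The -y option skips the per-file confirmation prompt; without -c, each file keeps its existing stripe count.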

3. Check the file system balance. The lfs df output in the example below shows a more balanced system compared to the output in Checking File System Usage above.

[root@client01 ~]# lfs df -h
UUID                  bytes    Used  Available Use%  Mounted on
testfs-MDT0000_UUID   4.4G   214.5M      3.9G    4%  /mnt/testfs[MDT:0]
testfs-MDT0001_UUID   4.4G   144.5M      4.0G    4%  /mnt/testfs[MDT:1]
testfs-OST0000_UUID   2.0T     1.3T    598.1G   65%  /mnt/testfs[OST:0]
testfs-OST0001_UUID   2.0T     1.3T    594.1G   65%  /mnt/testfs[OST:1]
testfs-OST0002_UUID   2.0T   913.4G   1000.0G   45%  /mnt/testfs[OST:2]
testfs-OST0003_UUID   2.0T     1.3T    602.1G   65%  /mnt/testfs[OST:3]
testfs-OST0004_UUID   2.0T     1.3T    606.1G   64%  /mnt/testfs[OST:4]
testfs-OST0005_UUID   2.0T     1.3T    610.1G   64%  /mnt/testfs[OST:5]
filesystem summary:  11.8T     7.3T      3.9T   61%  /mnt/testfs