
Parallel file system

File Store

The Panasas parallel storage is provided by Panasas ActiveStor 11 shelves. Each shelf has 1 Director Blade and 10 Storage Blades; each Storage Blade has a raw capacity of 6TB, with a total usable capacity of 235TB. Each shelf is connected at 10 Gigabit. Each host on Emerald mounts the Panasas storage directly using its proprietary protocol.

Each user has access to 3 classes of storage, described in the table below.

Emerald Storage Areas

Home Directory
    Private to each user and located at /home/<institute>/<username>, with a 100GB quota. A daily snapshot is stored internally by Panasas and the storage is backed up to tape weekly.

Work Directory
    An area shared between members of the same institute, located at /work/<institute>. There is no quota other than the allocated size of the institute's volume. This area is not backed up, but Emerald administrators will not delete data from it without first attempting to contact the user. It is intended for data required by multiple jobs that can be recovered from elsewhere if necessary.

Scratch Directory
    Shared between all users of Emerald and located at /work/scratch, this area is intended for temporary files used by jobs. Emerald administrators reserve the right to delete files older than 48 hours from this area.
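Because files older than 48 hours in the scratch area may be removed, it can be useful to check which of your files fall outside that window. A minimal sketch, assuming a POSIX find; the SCRATCH variable exists only so the path can be overridden for testing, and on Emerald it is /work/scratch:

```shell
# List your own files under scratch last modified more than
# 48 hours (2880 minutes) ago -- these are eligible for deletion.
SCRATCH="${SCRATCH:-/work/scratch}"
find "$SCRATCH" -user "$(id -un)" -type f -mmin +2880 2>/dev/null
```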

Quotas on home directories can be increased, and further volumes created, if required. Please contact us to discuss your requirements.

Viewing quotas and usage

Home directory quotas and usage can be viewed by running the command

$ panfs_quota
while in your home directory. This will print output like the following:
  <bytes>    <soft>    <hard> : <files>    <soft>    <hard> : <path to volume> <pan_identity(name)>
   294912 unlimited unlimited :       5 unlimited unlimited : /home/stfc uid:0(root)
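The first field is the space used in bytes. A small awk filter can put that in a friendlier form; this is purely illustrative, and the sample line is the one from the output above:

```shell
# Pull the bytes-used and files-used fields out of a panfs_quota line
# and print them more readably (1 MiB = 1048576 bytes).
echo '   294912 unlimited unlimited :       5 unlimited unlimited : /home/stfc uid:0(root)' |
  awk '{printf "%.1f MiB used, %s files\n", $1/1048576, $5}'
```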

Usage of areas under /work can be viewed by running a command like

$ pan_df /work/stfc
Replace stfc with the name of the area you are interested in; choose from bristol, oxford, soton, stfc or ucl. The standard df cannot be used, as the /work areas are not mounted directly. The pan_df command gives output similar to
Filesystem           1K-blocks      Used Available Use% Mounted on
panfs://130.246.139.120/gpu/work/
                     3906250000 251374656 3654875344   7% /work/stfc
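The Use% column appears to follow the usual df convention: used divided by (used + available), rounded up to the next whole percent. Checking that against the sample numbers above (an illustrative calculation, not a pan_df feature):

```shell
# 251374656 used out of 251374656 + 3654875344 = 3906250000 1K-blocks;
# 6.43% rounds up to the 7% shown in the sample output.
awk 'BEGIN {
  used = 251374656; avail = 3654875344
  total = used + avail
  pct = used * 100 / total            # 6.43...
  ceil = int(pct) + (pct > int(pct))  # round up, as df does
  printf "%d%%\n", ceil
}'
```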

Recovering files from snapshots

Snapshots are taken at 4am every day and are accessible by changing into the hidden directory .snapshot from any directory in the home filesystem. Running ls in this directory gives output similar to the following:

$ ls
2012.07.27.04.05.01.gpuhome  2012.07.29.04.05.01.gpuhome  2012.07.31.04.05.01.gpuhome  2012.08.03.04.05.01.gpuhome
2012.07.28.04.05.01.gpuhome  2012.07.30.04.05.01.gpuhome  2012.08.02.04.05.01.gpuhome

Each of these directories holds the state of the directory tree at the time the snapshot was taken. Files that were deleted or inadvertently overwritten can simply be copied out of the snapshot directory back into place.
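A restore could be wrapped in a small helper like this (a sketch; the snapshot name is taken from the listing above, and the file name results.dat is hypothetical):

```shell
# Copy FILE from the named snapshot of the current directory back into place.
# Usage: restore_from_snapshot 2012.08.03.04.05.01.gpuhome results.dat
restore_from_snapshot() {
    snap="$1"
    file="$2"
    cp ".snapshot/$snap/$file" "./$file"
}
```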