Storage Resources
Overview
ARC offers several different storage options for users’ data:
Persistent Storage
| Name | Intent | File System | Environment Variable | Per User Maximum | Data Lifespan | Available On |
|---|---|---|---|---|---|---|
| Home | Long-term storage of files | Qumulo | $HOME | 640 GB, 1 million files | As long as the user account is active | Login and Compute Nodes |
| Project (TinkerCliffs, Infer) | Long-term storage of shared, group files | GPFS (replaced a BeeGFS system) | - n/a - | 25 TB, 5 million files per faculty researcher (expandable via investment) | As long as the project account is active | Login and Compute Nodes |
| Archive | Long-term storage for infrequently-accessed files | GPFS | $ARCHIVE | - | to be negotiated in accordance with demonstrated need | Login Nodes |
Scratch (temporary) storage
| Name | Intent | Per User Maximum | Data Lifespan | File System | Environment Variable | Available On |
|---|---|---|---|---|---|---|
| Global Scratch | Short-term access to working files. Automatic deletion. | No size limits enforced | 90 days | VAST | - n/a - | Login and Compute Nodes |
| Fast Scratch (deprecated) | (deprecated) Short-term access to working files | No size limits enforced | 90 days | VAST | - n/a - | Login and Compute Nodes |
| Local Scratch | Fast, temporary storage. Auto-deleted when job ends | Size of node hard drive | Length of Job | Local disk hard drives, usually spinning disk or SSD | $TMPDIR | Compute Nodes |
| Memory (tmpfs) | Very fast I/O | Size of node memory allocated to job | Length of Job | Memory (RAM) | $TMPFS | Compute Nodes |
Centralized repositories
| Name | Intent | File System | Environment Variable | Per User Maximum | Data Lifespan | Available On |
|---|---|---|---|---|---|---|
| Global (/global) | Central repo of large datasets and databases | VAST | - n/a - | - | - | Login and compute nodes, Tinkercliffs only |
Each is described in the sections that follow.
Home
Home provides long-term storage for system-specific data or files, such as installed programs or compiled executables. Home can be reached via the environment variable $HOME, so a user who wishes to navigate to their Home directory can simply type cd $HOME. Each user is provided a maximum of 640 GB in their Home directory (across all systems), and Home directories are not allowed to exceed this limit. Note that running jobs will fail if they try to write to a Home directory once the hard limit has been reached.
Note
Avoid reading/writing data to/from HOME in a job or using it as a working directory. Stage files into a “scratch” location to keep unnecessary I/O off of the HOME filesystem and improve performance; see the Scratch Filesystems section below (e.g., global scratch and local scratch).
Project
Project (on TinkerCliffs and Infer) provides long-term storage for files shared among a research project or group, facilitating collaboration and data exchange within the group. Each Virginia Tech faculty member can request group storage up to the prescribed limit at no cost by requesting a storage allocation via ColdFront. Additional storage may be purchased through the investment computing or cost center programs.
Archive
Note
As of Fall 2023, the storage system which hosts the VTARCHIVE storage has reached its end-of-life. We are working with the individual groups that have the largest footprint there on how to manage the data currently in place.
The term “Archive” is meant to convey the idea of a thoughtful, curated collection whose members are discrete, well packaged and labeled. In this sense “scratch” and “work” are conceptual opposites of “archive”. VT’s ARCHIVE storage is also different in concept from a “backup”.
Archive provides users with long-term storage for data that does not need to be frequently accessed, i.e., storing important or historical results. Archive is accessible from all of ARC’s systems, but it is not mounted on compute nodes, so running jobs cannot access files on it. Archive can be reached via the shell variable $ARCHIVE, so a user who wishes to navigate to their Archive directory can simply type cd $ARCHIVE.
Since “Archival” is a long-term concept and students are generally expected to be users for a few years, they should not have or use a personal archive directory. A more appropriate arrangement would be to package and transfer data to their advisor or PI who maintains an archive of curated and packaged datasets.
Best Practices for archival storage
Because the ARCHIVE filesystem is backed by tape (a high-capacity but very high-latency medium), it is very inefficient and disruptive to do file operations (especially on lots of small files) on the archive filesystem itself. Archival systems are designed to move and replicate very large files; ideally users will tar all related files into single, large files. Procedures are below:
To place data in $ARCHIVE:

1. Create a tarball containing the files in your $HOME (or $WORK) directory.
2. Copy the tarball to the $ARCHIVE filesystem (use rsync in case the transfer fails).
To retrieve data from $ARCHIVE:

1. Copy the tarball back to your $HOME (or $WORK) directory (use rsync in case the transfer fails).
2. Untar the file on the login node in your $HOME (or $WORK) directory.

Directories can be tarred up in parallel with, for example, GNU parallel (available via the parallel module). This line will create a tarball for each directory more than 180 days old:
find . -maxdepth 1 -type d -mtime +180 | parallel '[[ -e {}.tar.gz ]] || tar -czf {}.tar.gz {}'
The resulting tarballs can then be moved to Archive and the directories can then be removed. (Directories can also be removed automatically by providing the --remove-files flag to tar, but this flag should of course be used with caution.)
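Putting the pieces together, a minimal sketch of the round trip might look like the following (the dataset name mydata and the destination paths are hypothetical placeholders):

```
# --- placing data in $ARCHIVE ---
# 1) create a tarball of the dataset in $HOME (or $WORK)
cd $HOME
tar -czf mydata.tar.gz mydata/

# 2) copy the tarball to the archive filesystem; rsync can resume
#    if the transfer is interrupted
rsync -av --progress mydata.tar.gz $ARCHIVE/

# --- retrieving data from $ARCHIVE ---
# copy the tarball back to $HOME and unpack it on a login node
rsync -av --progress $ARCHIVE/mydata.tar.gz $HOME/
cd $HOME
tar -xzf mydata.tar.gz
```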
Scratch Filesystems
VAST - Global Scratch (Tinkercliffs only)
Note
Files and directories stored here are subject to automatic deletion. Do not use it for long term storage.
Local scratch storage options (see below) generally provide the best performance, but are constrained to the duration of a job and are strictly local to the compute node(s) allocated to a job. In contrast, the VAST storage system provides temporary staging and working space with better performance characteristics than HOME or PROJECTS. It is “global” in the sense that it is accessible from any node on the Tinkercliffs cluster.
It is a shared resource and has limited capacity (364 TB), but individual use at any point in time is unlimited provided the space is available. A strict automatic deletion policy is in place wherein any file on /globalscratch will be automatically deleted when it reaches an age of 90 days.
Best practices
- Create a directory for yourself: mkdir /globalscratch/<username>
- Stage files there for a job or set of jobs.
- Keep the number of files and directories relatively small (i.e., fewer than 10,000). It is a network-attached filesystem and incurs the same performance overhead for file operations that you would get with /home or /projects.
- Immediately copy any files you want to keep to a permanent location to avoid accidental deletion.
- Always remember the 90-day automatic deletion policy.
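As an illustration, the batch-script fragment below sketches this stage-in/compute/stage-out pattern; the application name my_app, the input file, and the directory layout are hypothetical placeholders:

```
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --time=1:00:00

# hypothetical staging area under /globalscratch (created per job)
STAGE=/globalscratch/$USER/job_$SLURM_JOB_ID
mkdir -p $STAGE

# stage inputs in, run from the scratch area
cp $HOME/inputs/data.in $STAGE/
cd $STAGE
$HOME/bin/my_app data.in > results.out

# copy anything worth keeping to permanent storage right away;
# files on /globalscratch are deleted automatically after 90 days
cp results.out $HOME/results/
```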
Automatic Deletion Details
As mentioned above, files and directories in /globalscratch will be automatically deleted based on aging policies. Here is how that works:

The storage system runs an hourly job to identify files which have exceeded the aging policy (90 days) and adds these to the deletion queue.
The storage system runs an automated job at 12:00am UTC (7:00PM EST) every day to process the deletion queue.
Additionally, the storage system will detect and delete all empty directories regardless of age.
Restoring files
In some situations, deleted files and directories may be restored from “snapshots”. Snapshots are an efficient way to keep several instances of the status of a file system at regular points in time.
For the /globalscratch file system, these are kept in the “hidden” directory /globalscratch/.snapshot, which contains a set of snapshots named according to the type (daily, weekly, or monthly) and the date-time when they were recorded. For example:
/globalscratch/.snapshot/week_2023-11-13_12_00_00_UTC
is an instance of a weekly snapshot which was recorded on 2023-11-13 at 12:00:00PM UTC.
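If a recently deleted file still exists in one of these snapshots, it can simply be copied back out. A minimal sketch, where the snapshot name, username, and file name are hypothetical placeholders:

```
# list the available snapshots
ls /globalscratch/.snapshot/

# copy a deleted file from a weekly snapshot back into your scratch directory
cp /globalscratch/.snapshot/week_2023-11-13_12_00_00_UTC/<username>/results.out \
   /globalscratch/<username>/
```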
Snapshots may be recorded in daily, weekly, and monthly cycles, but ARC reserves the right to adjust the frequencies and quantities of snapshots which are retained. Changes in the frequencies and quantities may occasionally be needed to adjust how much of the storage system capacity is dedicated to snapshot retentions.
Note
While snapshots provide some level of protection against data loss, they should not be viewed as a “backup” or as part of a data retention plan.
VAST - Fast Scratch (deprecated - do not use)
Warning
2024-03-08: The /fastscratch filesystem on Tinkercliffs is being replaced by /globalscratch and will be removed from Tinkercliffs at the end of the 2024 Spring semester. Recover any files you need from that system before then.
While the use of the scratch storage options below is constrained to the duration of a job, the VAST storage system provides a temporary staging and working space with better performance characteristics than HOME or PROJECT. It is a shared resource and has limited capacity (200TB), but individual use is unlimited provided the space is available.
Local Scratch
Running jobs are given a workspace on the local drives of each compute node allocated to the job. The path to this space is specified in the $TMPDIR environment variable. This provides a higher-performing option for I/O, which is a bottleneck for some tasks that involve either handling a large volume of data or a large number of file operations.
Note
Any files in local scratch are removed at the end of a job, so any results or files to be kept after the job ends must be copied to another location as part of the job. /globalscratch is a good choice for most people.
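For example, a job-script fragment along the following lines uses node-local scratch and then stages results out before the job ends; the application and file names are hypothetical placeholders:

```
# copy inputs onto the node-local drive and work there for fast I/O
cp $HOME/inputs/big_dataset.tar $TMPDIR/
cd $TMPDIR
tar -xf big_dataset.tar

# hypothetical I/O-heavy application reading and writing local files
$HOME/bin/my_app --input $TMPDIR/big_dataset --output $TMPDIR/out.dat

# $TMPDIR is removed when the job ends, so stage results out first
cp $TMPDIR/out.dat /globalscratch/$USER/
```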
Local Drives
Running jobs are given a workspace on the local drives on each compute node. The path to this space is specified in the $TMPDIR environment variable.
Solid State Drives (SSDs)
Solid state drives do not use rotational media (spinning disks/platters) but memory-like flash storage, which gives them better performance characteristics. The environment variable $TMPSSD is set to a directory on an SSD accessible to the owner of a job when an SSD is available on the compute nodes allocated to the job.
Memory as storage
Running jobs have access to an in-memory mount on compute nodes via the $TMPFS environment variable. This should provide very fast read/write speeds for jobs doing I/O to files that fit in memory (see the system documentation for the amount of memory per node on each system). Please note that these files are removed at the end of a job, so any results or files to be kept after the job ends must be copied to Work or Home.
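A minimal sketch of using the in-memory mount; the application name and file names are hypothetical placeholders, and space used here counts against the memory allocated to the job:

```
# create a working directory on the in-memory filesystem
mkdir -p $TMPFS/fastio

# hypothetical tool whose temporary files benefit from memory-speed I/O
$HOME/bin/my_app --tmpdir $TMPFS/fastio --input $HOME/inputs/data.in

# contents of $TMPFS vanish when the job ends; copy out anything to keep
cp $TMPFS/fastio/summary.csv $HOME/results/
```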
NVMe Drives
Same idea as Local Scratch, but on NVMe media which “has been designed to capitalize on the low latency and internal parallelism of solid-state storage devices.” Running jobs are given a workspace on the local NVMe drive on each compute node if it is so equipped. The path to this space is specified in the $TMPNVME environment variable. This provides another option for users who would prefer to do I/O to local disk (such as for some kinds of big data tasks). Please note that any files in local scratch are automatically removed at the end of a job, so any results or files to be kept after the job ends must be copied to Work or Home.
NVMe local scratch storage is available on the following node types and capacities:
- largemem_q nodes, 1.8 TB
- k80_q nodes, 1.8 TB
- a100_normal_q nodes, 11.7 TB
- intel_q nodes, 3.2 TB
Global
On Tinkercliffs, the /global/ directory has been set up to provide centralized access to some commonly used databases and datasets which are large and/or have many files. All users can read these files, but for stability purposes, write permissions are only available to ARC personnel.
Some example datasets are the imagenet dataset and some bio-databases such as those needed by AlphaFold or other genomics applications. If you know of a dataset you think we should add to this repository, please let us know by submitting an ARC helpdesk request.
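Because these datasets can be read in place, jobs typically do not need to copy them into scratch first. The directory layout under /global/ varies by dataset, so the path below is only a hypothetical illustration:

```
# browse the centrally-hosted datasets (read-only for regular users)
ls /global/

# hypothetical example: point an application at a dataset in place
$HOME/bin/my_app --data-dir /global/imagenet
```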
Checking Usage
You can check your current storage usage (in addition to your compute allocation usage) with the quota command:
[mypid@tinkercliffs2 ~]$ quota
USER FILESYS/SET DATA (GiB) QUOTA (GiB) FILES QUOTA NOTE
mypid /home 584.2 596 - -
GPFS
mypid /projects/myproject1 109.3 931
mypid /projects/myproject2 2648.4 25600