Using VS Code on ARC Clusters
This page describes how to use Visual Studio Code’s Remote-SSH (or similar IDEs such as Cursor) with ARC systems without violating login-node policies.
The core idea is:
Use login nodes only as a lightweight gateway and for editing / job management.
Run all real computation on compute nodes through Slurm jobs (batch or interactive), not directly on login nodes.
Relevant ARC documentation:
Acceptable Use Policy (including restrictions on login nodes)
Video tutorials (including VS Code examples)
1. Why login-node abuse via VS Code is a problem
ARC’s Acceptable Use Policy explains that heavy or long-running jobs must not run on login nodes:
Login nodes are shared gateways, not compute resources.
Allowed on login nodes:
Editing code and text files.
Light compilation (with limited threads).
Staging data and transfers.
Submitting and monitoring Slurm jobs.
Not allowed on login nodes:
CPU- or memory-intensive computations.
GPU jobs (there are no GPUs on the login nodes).
Large I/O or long-running interactive analysis.
Treating VS Code / Remote-SSH as a way to run training or production workflows on the login node.
ARC may terminate offending processes and may suspend accounts that repeatedly misuse login nodes.
Using VS Code safely means: edit on the login node, compute only inside Slurm jobs on compute nodes.
2. Prerequisites
Before using VS Code with ARC, you should have:
Network access
On-campus network or connected to VT VPN.
From off-campus, both login and compute nodes require VT VPN.
ARC account and allocations
An ARC account and at least one allocation you can charge jobs to.
SSH configuration
SSH keys set up and tested (passwordless or with passphrase) following Setting up and using SSH Keys.
VS Code and Remote-SSH
VS Code installed on your laptop.
The Remote – SSH extension installed in VS Code.
(Cursor users: use its built-in remote SSH support with the same SSH config.)
3. Configure SSH to an ARC login node
First, configure your local SSH client (on your laptop).
Edit ~/.ssh/config and add an entry for the login node of the cluster you use. Replace <your_VT_PID> with your VT username. For example, for Tinkercliffs:
Host tinkercliffs
    HostName tinkercliffs2.arc.vt.edu
    User <your_VT_PID>
    IdentityFile ~/.ssh/id_ed25519   # or your private key path
You can use any friendly alias (tinkercliffs, arc-tc, etc.).
Test from a local terminal:
ssh tinkercliffs
If you can log in normally, you are ready to use VS Code Remote-SSH.
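If the alias does not work, you can test the underlying connection without the config entry to isolate the problem (a sketch; substitute your own key path and PID):

```shell
# Same connection spelled out explicitly; -v prints client-side debug output,
# which helps diagnose key or hostname problems in the config entry.
ssh -v -i ~/.ssh/id_ed25519 <your_VT_PID>@tinkercliffs2.arc.vt.edu
```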
Note
Always use a login node for the same cluster where you plan to run jobs (e.g., tinkercliffs1/tinkercliffs2 for tc* nodes).
4. Standard workflow: VS Code on login node, compute via Slurm jobs
This is the recommended workflow for most users. It keeps all heavy work on compute nodes while still giving you a full-featured IDE.
4.1 Connect VS Code to the login node
Open VS Code on your laptop.
Use the Remote-SSH extension:
Command Palette → Remote-SSH: Connect to Host...
Select the host alias you configured (e.g., tinkercliffs).
When prompted, choose Linux as the remote platform.
Once connected, use File → Open Folder… and choose a directory on ARC:
e.g. /home/<your_VT_PID> or /projects/<your_project>/...
At this point:
The VS Code file explorer shows your ARC files.
The integrated terminal is running on the login node.
Any commands you run there must be light and short-lived.
4.2 Submit batch jobs from VS Code
Use Slurm for non-interactive workloads:
In the VS Code terminal (on the login node), create a job script such as job.sh:

#!/bin/bash
#SBATCH --job-name=test-job
#SBATCH --account=<account>
#SBATCH --partition=<partition>
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00

module load python
python my_script.py
Submit the job:

sbatch job.sh

Monitor the job with tools like squeue and sacct.
All heavy computation now happens on compute nodes, not on the login node.
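Submission and monitoring can be tied together by capturing the job ID at submit time. This is a sketch; it relies on Slurm's --parsable flag, which makes sbatch print only the job ID:

```shell
# Submit and keep the job ID for later queries.
jobid=$(sbatch --parsable job.sh)

# While the job is pending or running, check its queue state:
squeue -j "$jobid"

# After the job finishes, query accounting data:
sacct -j "$jobid" --format=JobID,State,Elapsed,MaxRSS
```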
4.3 Start an interactive job for debugging or REPL
For interactive debugging or exploratory work, request an interactive job:
interact --account=<account> --partition=<partition> --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 --time=02:00:00
Once the job starts:
hostname
will show a compute node name (e.g., tc006). Run Python, R, C/C++ binaries, etc. inside this interactive shell only.
When done:
exit
to end the interactive job and free resources.
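If the interact wrapper is unavailable on a given system, the same interactive shell can typically be requested with srun directly (a sketch using the same placeholder values as above):

```shell
# Request a 4-CPU interactive shell; --pty attaches your terminal
# to the shell running on the allocated compute node.
srun --account=<account> --partition=<partition> --nodes=1 \
     --ntasks-per-node=1 --cpus-per-task=4 --time=02:00:00 --pty bash
```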
Note
The VS Code server itself still runs on the login node in this workflow. Only your interactive shell (and the processes it launches) run on the compute node, which is acceptable as long as heavy work is contained inside Slurm jobs.
5. Connect VS Code directly to a compute node
In some cases, you may want the VS Code server itself to run on the compute node (for example, to move language-server CPU/memory load off the login node). This requires an active job on a node and the Remote - SSH extension installed in VS Code (see Prerequisites).
5.1 Configure ProxyJump for compute nodes
Edit ~/.ssh/config on your local system/laptop and add the following blocks. Replace <your_VT_PID> with your VT username. Each block tells SSH to reach compute nodes by jumping through the cluster’s login node automatically.
# Tinkercliffs compute nodes
Host tc-intel* tc0* tc1* tc2* tc3* tc-hm* tc-gpu* tc-dgx* tc-xe*
ProxyJump <your_VT_PID>@tinkercliffs2.arc.vt.edu
User <your_VT_PID>
# Falcon compute nodes
Host fal0* fal1*
ProxyJump <your_VT_PID>@falcon2.arc.vt.edu
User <your_VT_PID>
# Owl compute nodes
Host owl0* owl1* owl-hm* owlmln*
ProxyJump <your_VT_PID>@owl3.arc.vt.edu
User <your_VT_PID>
With this in place, SSH connections to compute nodes are handled automatically; no further edits are needed when you switch nodes. This is a one-time setup on your local system.
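As a quick sanity check, the same two-hop route can be expressed as a one-off command with ssh -J, which is equivalent to the ProxyJump directive (tc006 stands in for whatever node your job is actually on):

```shell
# Jump through the Tinkercliffs login node to a compute node in one command.
ssh -J <your_VT_PID>@tinkercliffs2.arc.vt.edu <your_VT_PID>@tc006
```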
Caution
This does not bypass Slurm. Only SSH into compute nodes where you have an active interactive job, and you must still respect your allocation and time limits.
5.2 Start an interactive job and note the node name
From a terminal connected to the login node, request an interactive job:
interact --account=<account> --partition=<partition> --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 --time=02:00:00
Once the job starts, run:
hostname
Example output:
tc006
This is the compute node where your job is running. Keep this terminal open; closing it ends your job. You can connect to the node directly by name from your local terminal. For example, if your node is tc006:

ssh tc006

This routes through the Tinkercliffs login node to the compute node.
Note: Connecting via ssh to the compute node in your terminal is not the same as connecting VS Code to the compute node. Continue through the steps below to complete the VS Code setup.
5.3 Connect VS Code to the compute node
Open VS Code on your local system.
Open the Command Palette and choose Remote-SSH: Connect to Host…
Type or select the compute node name (for example, tc006).
When prompted, choose Linux as the remote platform.
Once connected, use File → Open Folder… or File → Add Folder to Workspace… to open your project directory (for example, /home/<your_VT_PID> or /projects/...).
The VS Code server is now running on the compute node. This connection stays active as long as your Slurm job is running.
Important
When you are done, run exit in the interactive job terminal or cancel the job with scancel <jobid>; closing VS Code does not end your Slurm job. Leaving a job running idle wastes allocated resources and may result in your account being flagged for wasting resources.
If you plan to use AI extensions or chat assistants (such as Claude or GitHub Copilot), run them on the compute node to avoid putting load on the login node.
6. Troubleshooting and common mistakes
Common mistakes
Mistake 1 – Running heavy jobs directly in the login node terminal in VS Code
Example: python train_model.py that runs for hours, GPU jobs, or multi-process workloads on the login node.
Fix: Use Slurm:
Batch jobs with sbatch.
Interactive jobs with interact or srun --pty.
Run heavy commands only inside those job shells.
Mistake 2 – Trying to reach ARC systems from off-campus without VT VPN
From off-campus, both login and compute nodes require VT VPN.
Fix: Connect to VT VPN before using ssh or VS Code Remote-SSH.
Mistake 3 – Using a login node from a different cluster in ProxyJump
Example of incorrect setup:
HostName tc006 (Tinkercliffs compute node)
ProxyJump <your_VT_PID>@owl3.arc.vt.edu (Owl login node)
Fix: Use a login node from the same cluster as the compute node:
e.g., tinkercliffs1/tinkercliffs2 for tc* nodes.
Mistake 4 – Forgetting to end interactive jobs
Leaving interactive sessions running idle wastes resources and may be canceled by ARC staff.
Fix: exit from job shells when finished and close any associated VS Code remote sessions.
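To find and end your own lingering jobs, the standard Slurm commands suffice (a sketch):

```shell
squeue -u $USER      # list your pending and running jobs with their job IDs
scancel <jobid>      # cancel one specific job
scancel -u $USER     # cancel all of your jobs (use with care)
```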
Troubleshooting
VS Code remote server fails to start or connect
If VS Code fails to connect or behaves unexpectedly after a broken session, the remote server files may be corrupted. The simplest fix is to remove the .vscode-server directory:
rm -rf ~/.vscode-server
VS Code will reinstall the remote server automatically on your next connection.
7. Summary
When using VS Code or Cursor with ARC:
Always connect first to a login node via Remote-SSH.
Use the login node only for:
Editing files.
Managing jobs.
Light, short-running commands.
Run all real computation via Slurm on compute nodes:
Batch jobs (sbatch).
Interactive jobs (interact / srun --pty).
For advanced users:
Use SSH wildcard ProxyJump patterns to connect VS Code directly to the compute node while you have a job on that node, instead of editing your SSH config for each new node.
Following this workflow keeps you within ARC’s Acceptable Use Policy and provides a safe, efficient remote-development experience.