From an end-user perspective, an HPC Beowulf cluster logically consists of login, compile, and job-submission roles. It is common to combine these roles within a single front-end machine. In some cases these roles are also combined with the system administration and provisioning tasks of the cluster on a dedicated management node.
HPC Beowulf clusters come in many shapes and sizes. Multiple types of computational nodes, dedicated front-end machines and secondary fail-over head nodes are not uncommon. The same applies to the cluster's internal networking: it is common to have several physical networks at the back of the cluster. One simple Ethernet network may be dedicated to server health monitoring, another to data provisioning and file sharing, while yet another, possibly non-Ethernet network such as InfiniPath or InfiniBand, may be used for intra-job high-performance communication. Applications relying on message passing benefit greatly from the lower latency of such interconnects.
The login node role is available for compiling software, submitting parallel or batch programs to a job queuing system, and gathering/analyzing results. Other than for observing interactive jobs, it should rarely be necessary for a user to log on to one of the slave nodes, and in some cases slave-node logins are disabled altogether.
Login to your environment
The login node is the node where you can log in and work from. On the login node you can:
- compile your code
- develop applications
- submit applications to the cluster for execution
- monitor running applications
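As a sketch, this workflow could look like the following on a cluster that uses MPI and the Slurm job scheduler (the file name hello.c is a placeholder, and Slurm is an assumption; your site may use a different queuing system such as PBS or Grid Engine, with different commands):

```shell
# Compile your MPI code on the login node (hello.c is a placeholder)
mpicc -O2 -o hello hello.c

# Submit it to the cluster for execution, here as a 4-task Slurm job
sbatch --ntasks=4 --wrap "mpirun ./hello"

# Monitor your running and queued jobs
squeue -u $USER
```

The exact compiler wrapper and submission flags are site-specific; consult your cluster's documentation for the equivalents.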
Most clusters are Linux based. To log in from a *nix operating system, open a terminal and type:

ssh username@clustername
On a Windows operating system you can download an SSH client such as PuTTY, enter the cluster's address, and click Open. Enter your user name and password when prompted. You can download PuTTY from the official website: http://www.chiark.greenend.org.uk/~sgtatham/putty/
If your system administrator has changed the default SSH port from 22 to something else, you can specify the port with the -p option:
ssh -p <port> username@clustername
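If you connect often, the host name, user name and port can be stored in your SSH client configuration instead of being typed each time. A minimal sketch of an OpenSSH ~/.ssh/config entry (the alias hpc and all values shown are placeholders for your site's actual details):

```
# ~/.ssh/config -- 'hpc' is a hypothetical alias; substitute your own values
Host hpc
    HostName clustername
    User username
    Port 2222
```

With this entry in place, typing ssh hpc is equivalent to the full command above.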