# Hardware Usage Guide

## Introduction

This guide summarizes useful information for students working in the LMB department (lab projects, master projects, theses, HiWi positions). If you have particular needs or something does not work as expected, contact your supervisor or open a ticket with technical support.

## Get Access

### Physical Access

Follow the instructions in the Project Guide for getting physical access to the building and student room with your Unicard.

### TF Account Access

Several downstream accesses (ticket system, KISLURM cluster, mail account) depend on your TF account. Please make sure your account works by logging in here: [User support for TF Freiburg](https://support.informatik.uni-freiburg.de/?run=account)

If you need to reset your password, write an email to the Poolmanager at poolmgr@informatik.uni-freiburg.de. In urgent cases, if you do not get a response from the Poolmanager, you can also ask our technician (contact below).

### Workstation Access

For logging in to the LMB systems, you will be assigned a new account associated with your username but with a different password. Your supervisor needs to contact technical support to set up your account; they need the student's name, the faculty account name, and the expected duration of the project. Once the account is set up, the first thing you are required to do is change the password.

### SSH Access within the LMB Network

Within the LMB network, workstations and servers are accessible via SSH as follows:

```bash
ssh -p 2122 name
```

Compute nodes that are managed by our cluster software (`lmbtorque`) are not accessible via SSH, and neither are servers that are neither workstations nor compute nodes.

### SSH Access from Remote

Remotely, using an SSH client:

```bash
ssh -p 2122 lmblogin.informatik.uni-freiburg.de
```

Use your LMB credentials to log in, just as if you were on a workstation in the LMB pool.
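To avoid typing the port and jump host every time, you could add a snippet like the following to your `~/.ssh/config`. The host aliases and `yourusername` below are placeholders, not official names; adapt them to your account and target machine:

```
Host lmblogin
    HostName lmblogin.informatik.uni-freiburg.de
    Port 2122
    User yourusername

# Hypothetical alias for an internal workstation, reached through lmblogin
Host mylmbws
    HostName workstation-name
    Port 2122
    User yourusername
    ProxyJump lmblogin
```

With such a config, `ssh mylmbws` from outside the university network connects through the login server in one step.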
You will be logged in directly to the lmblogin server; from there you can SSH to any workstation or server you need, as if you were on an LMB workstation.

## Data Storage

We now have two subsystems, Torque and KISLURM. They do not share a filesystem: only the LMB home is mounted on KISLURM at `/ihome/yourusername`, and the SLURM home is mounted on LMB at `/misc/kishome/yourusername`. Datasets etc. must be synced with tools like `scp` or `rsync`.

Data is mostly shared via NFS and available on all LMB machines. Depending on your usage, you might want to store data at different locations:

- `/home/username`: This directory has very limited storage (typically around 1GB). Almost no data should be put here.
- `/misc/student/username`: This directory is backed up, but has a storage limit of 100GB. Small but important data, e.g. code, should be stored here.
- `/misc/student/username/nobackup`: This directory is not backed up. Use it to store things like software cache folders and reproducible installations (e.g. conda environments).
- `/misc/lmbraidXX/username`: HDD RAIDs that offer a lot of storage, but are not backed up. Large and reproducible data, e.g. datasets and experiment outputs, should be stored here.
- Note: If some software uses the `/home/username` path and you cannot change this, use a soft symbolic link. Move the folder with `mv /home/username/bigfolder /misc/student/username/nobackup` and create the link with `ln -s /misc/student/username/nobackup/bigfolder /home/username/bigfolder`.

In addition to that, some cluster nodes have local SSDs that are available at `/scratchSSD` or `/scratchSSD2`. Loading data from these local SSDs is very fast (and takes some load off our network). However, the data is not shared across machines, which means you have to transfer your data to each machine where you want to run your job.
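The relocate-and-symlink note above can be sketched as a runnable sequence. Here a temporary directory stands in for the real `/home/username` and `/misc/student/username` paths, so the commands are safe to try anywhere:

```shell
# Stand-in for /home/username and /misc/student/username (real paths vary)
base=$(mktemp -d)
mkdir -p "$base/home/username/bigfolder" "$base/misc/student/username/nobackup"

# Move the large folder out of the quota-limited home ...
mv "$base/home/username/bigfolder" "$base/misc/student/username/nobackup/"

# ... and leave a symbolic link behind so the old path keeps working
ln -s "$base/misc/student/username/nobackup/bigfolder" "$base/home/username/bigfolder"

# The old path now resolves to the new location
readlink "$base/home/username/bigfolder"
```

Software that insists on writing under `/home/username/bigfolder` will transparently follow the link into the `nobackup` area.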
It makes sense to use these local SSDs if your datasets are relatively stable over time and if you need to load a lot of data, or data that is stored as many small individual files.

## Servers

The server **dacky** is a development node, which can be accessed using `ssh`. Do not run programs that take a long time to complete; use the cluster instead. The development node is meant for developing! Compute jobs need to be run via the cluster management system.

## Ticket System

If you need support, e.g. you need a special software package or you have technical problems with the system, write a ticket.

- The ticket system is available at [https://lmbticket.informatik.uni-freiburg.de/](https://lmbticket.informatik.uni-freiburg.de/).
- You need to create an account before you can submit tickets.

## Contact

Technical support (account management):

- [Stefan Teister](http://lmb.informatik.uni-freiburg.de/people/teister/index.html) (main contact)