Python, SciPy, PyTorch, PyCUDA and more

General Quick Start

Using a Python notebook in your browser


  1. Ensure that you have configured SSH correctly, and know how to establish a SOCKS tunnel.
  2. Activate the tunnel in both your SSH client and your browser (turn on FoxyProxy or similar)
  3. Navigate to http://simba-compute-gpu-2:8088/tree/ (or any other node – just change the hostname part of the URL, right after http://). You should land on a page similar to the screenshot above.
  4. Browse to tmp/ and create a new notebook under New → Notebooks → Python 2
    💡 The file will be saved under /tmp/notebooks. Note that this directory is shared between all users of the cluster; don't put anything confidential in there.
  5. Type some Python, e.g.
    1 + 1
    and press Shift+Enter (or click the run button in the toolbar) to execute it.
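A slightly richer first cell, assuming NumPy is installed on the node (it normally ships alongside the SciPy stack mentioned above):

```python
# A first notebook cell: build an array and sum it.
# Assumes NumPy is available on the node.
import numpy as np

a = np.arange(10)      # 0, 1, ..., 9
total = a.sum()
print(total)           # 45
```

In a notebook, the value of the last expression in a cell is displayed automatically, so a bare a.sum() on the final line works too.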

Using the Command Line

  1. ssh simba
  2. ssh simba-compute-11
  3. module load python
    (or module load python3 for Python 3)
  4. Run ipython
  5. Load a Python package, e.g. import nlopt



  1. ssh simba
  2. ssh simba-compute-gpu-1
  3. module load cuda
  4. List the GPUs available:
    nvidia-debugdump -l
  5. module load python
    (or module load python3 for Python 3)
  6. Run ipython
  7. Copy and paste the PyCUDA demo code (in Python 3, you might have to add parentheses to the print call on the last line)

Expected result: a matrix of zeroes


  1. ssh simba
  2. ssh simba-compute-gpu-1
  3. module load python3
  4. Run ipython
  5. Copy and paste the demo code (intended for Python 3)


TensorFlow is a general-purpose numerical computation framework, aimed primarily at machine learning and neural networks, that uses Python as the driver language.

To try it out, just open the Python notebook on one of the GPU nodes, e.g. http://simba-compute-gpu-3:8088/  (note: you must have an SSH SOCKS tunnel configured and active, e.g. with FoxyProxy). At the root of the notebook file listing, you will find some tutorials that ought to Just Work™.

To run a more serious computation, you will have to teach yourself Docker, as this is how TensorFlow is installed on the Simba cluster. For workloads on a GPU node, use the nvidia-docker command instead of plain docker, i.e.

  ssh simba-compute-gpu-3
  # The three lines below form a single command and should be copied and pasted
  # together. The image name is assumed here -- adjust it to whatever TensorFlow
  # image is available on the cluster:
  nvidia-docker run --rm -it --name mycomputation \
       -v $HOME:/home  \
       tensorflow/tensorflow:latest-gpu /bin/bash
  ls /home    # Inside the container: shows your home directory on the cluster
  python      # Start Python inside the container (or ipython, if the image provides it)
  # now type at the Python prompt
  import tensorflow as tf
  hello = tf.constant("Hello world")  # And so on, more from here

💡 The -v flag mounts $HOME (i.e. your home directory, outside the container) at /home inside the container. This is the way to make your scripts and your data available to the computation.

More "hello, world"-ish resources:


PyTorch is to Python as Torch is to Lua, which is quite handy if you don't intend to learn the latter.

  1. Ensure that you can ssh into any node, and log in to one of the GPU nodes, e.g.
    ssh simba
    ssh simba-compute-gpu-2
  2. Run
    module load python3
  3. Start ipython
    and copy and paste some sample code, for instance this one.
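Once ipython is running, a minimal check that torch works is to create a tensor and do some arithmetic on it (tensor names and values below are just illustrative; moving tensors to the GPU with .cuda() would be the next step on a GPU node):

```python
# Minimal PyTorch check: create a tensor and do elementwise arithmetic.
import torch

x = torch.zeros(2, 3)        # 2x3 matrix of zeros
y = x + 1                    # elementwise add: 2x3 matrix of ones
total = y.sum().item()       # 6.0
print(total)
# On a GPU node, try moving the tensor to the GPU next: x = x.cuda()
```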

More Python tools

Note: all entries in the table below, unless specified otherwise, have ssh access and module load python or module load python3 as implicit prerequisites (see above).

Module or package     Quickstart instructions
IPython               See above
PyCUDA                See above
virtualenv            virtualenv foo
Parallel Python       /usr/local/share/python/examples/sum_primes
NLopt                 See above
Anaconda              conda help
Anaconda Accelerate   ssh simba-compute-gpu-01
                      module load python
                      source activate accelerate
Theano and Lasagne    ssh simba-compute-gpu-01
                      module load python3
                      THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 ipython

Then copy and paste some code from the respective tutorials: Theano, Lasagne