Summer 2012 HPC Training Program Calendar

Title of the tutorial Location Date Time Instructor
Introduction to parallel programming with MATLAB Sumwalt, 102 May 16, 2012 10am - 12pm Dr. Nikolai Sergueev
Introduction to Linux Sumwalt, 102 May 18, 2012 10am - 12pm Dr. Jerry Ebalunode
Introduction to cluster computing on Planck: A hybrid GPU-CPU cluster Sumwalt, 102 May 21, 2012 10am - 12pm Dr. Jerry Ebalunode
Introduction to programming with C/C++ Sumwalt, 102 May 23, 2012 10am - 12pm Dr. Jerry Ebalunode
Introduction to programming with Fortran Sumwalt, 102 May 25, 2012 10am - 12pm Dr. Nikolai Sergueev
Introduction to parallel programming with MPI, Part 1 Sumwalt, 102 May 30, 2012 10am - 12pm Dr. Nikolai Sergueev
Introduction to parallel programming with MPI, Part 2 Sumwalt, 102 June 1, 2012 10am - 12pm Dr. Nikolai Sergueev
Introduction to parallel programming with Intel Cilk Plus Sumwalt, 102 June 4, 2012 10am - 12pm Dr. Jerry Ebalunode
Introduction to parallel programming with OpenMP Sumwalt, 102 June 6, 2012 10am - 12pm Dr. Jerry Ebalunode
Introduction to CUDA GPGPU programming Sumwalt, 102 June 8, 2012 10am - 12pm Dr. Nikolai Sergueev
How to write a TeraGrid/XSEDE supercomputing allocation proposal Sumwalt, 102 June 12, 2012 10am - 12pm Dr. Jerry Ebalunode

 

Spring 2012 HPC Training Program Calendar

Title of the tutorial Location Date Time Instructor
Introduction to Linux Sumwalt, 231 March 3, 2012 11am - 1pm, 3pm - 5pm Dr. Jerry Ebalunode
Introduction to cluster computing on Planck: A hybrid GPU-CPU cluster Sumwalt, 231 March 14, 2012 9am - 11am, 11am - 1pm, 3pm - 5pm Dr. Jerry Ebalunode
Introduction to programming with Fortran Sumwalt, 231 March 15, 2012 11am - 1pm, 3pm - 5pm Dr. Nikolai Sergueev
Introduction to programming with C/C++ Sumwalt, 231 March 21, 2012 9am - 11am, 1pm - 3pm Dr. Jerry Ebalunode
Introduction to parallel programming with MPI Sumwalt, 231 March 26, 2012 11am - 1pm, 3pm - 5pm Dr. Nikolai Sergueev
How to parallelize your program: interactive session Sumwalt, 231 April 4, 2012 9am - 11am, 1pm - 3pm Dr. Nikolai Sergueev
Introduction to CUDA GPGPU programming Sumwalt, 231 April 19, 2012 11am - 1pm, 3pm - 5pm Dr. Nikolai Sergueev
Introduction to parallel programming with Intel Cilk Plus Sumwalt, 231 April 25, 2012 11am - 1pm Dr. Jerry Ebalunode

HPC Workshop Descriptions

Introduction to Linux

A basic introduction to the Linux operating environment and the high performance computing clusters at USC for current users of Windows or Mac systems. Topics covered will include user accounts, permissions, shells, file system navigation, basic commands, manipulating files and folders, common text editors, and executing programs. Upon completion of this course, users should be able to work comfortably within a Linux computing environment.
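The kind of hands-on material covered can be previewed in a few lines of shell. This is a minimal sketch, not course material; the directory and file names are made up for illustration:

```shell
# Navigate the file system, create and inspect a file, adjust permissions
workdir=$(mktemp -d)               # scratch directory (stands in for your home area)
cd "$workdir"
echo "hello cluster" > notes.txt   # create a small text file
ls -l notes.txt                    # long listing: permissions, owner, size, date
chmod 600 notes.txt                # owner-only read/write
cat notes.txt                      # prints: hello cluster
```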

Introduction to cluster computing on Planck: A hybrid GPU-CPU cluster

This short course introduces participants to the computing environment of USC's hybrid GPU-CPU high performance computing cluster, Planck, including how to prepare jobs, run them, and retrieve results. The Planck supercomputer, with a theoretical peak performance of 57 teraflops, is a new addition to USC's group of HPC clusters. Topics covered will include system architecture, system access, customizing your user environment, compiling and linking codes for CPUs or GPUs, the SGE batch queuing system, job scripts, MATLAB jobs, and submission of serial, interactive, or parallel GPU/CPU jobs to the batch system. Upon completion of this course, users should be able to work comfortably within the Planck HPC computing environment and other similar HPC environments at USC and on the TeraGrid.
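An SGE job script of the kind covered in the course looks roughly like the sketch below. The queue name, parallel environment name, and module name are hypothetical; the actual values for Planck would be given in class:

```shell
#!/bin/bash
#$ -N my_job             # job name
#$ -cwd                  # run from the submission directory
#$ -pe mpi 16            # hypothetical parallel environment: request 16 slots
#$ -l h_rt=01:00:00      # wall-clock time limit of one hour
#$ -q all.q              # hypothetical queue name

module load openmpi      # assumes an environment-modules setup
mpirun -np $NSLOTS ./my_program
```

The script is submitted with `qsub`, monitored with `qstat`, and its standard output and error are returned in the job's `.o` and `.e` files.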

Introduction to programming with Fortran

This short tutorial introduces Fortran, one of the modern programming languages that meets the needs of the scientific community. Fortran programming skills are very useful for developing high performance computing applications and for working with the large number of existing scientific codes. The tutorial will cover basic syntax, including code structure, data types, and input/output, and will show how to write, compile, and run Fortran codes. The basics of Makefile scripting will also be covered. The tutorial is intended primarily for people who do not have any programming experience. Familiarity with the Unix/Linux environment and an editor such as vi or emacs is desirable but not mandatory. Upon completion of the tutorial, participants should be able to understand existing scientific Fortran codes and write their own simple codes.
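The Makefile scripting mentioned above can be sketched in a few lines. This is an illustrative fragment, not course material; the compiler flags and file names are placeholders:

```makefile
# Minimal Makefile for a two-file Fortran program (file names are illustrative)
FC     = gfortran
FFLAGS = -O2 -Wall

main: main.o utils.o
	$(FC) $(FFLAGS) -o main main.o utils.o

%.o: %.f90
	$(FC) $(FFLAGS) -c $<

clean:
	rm -f main *.o *.mod
```

Running `make` rebuilds only the object files whose sources have changed, which matters once a scientific code grows beyond a single file.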

Introduction to programming with C/C++

For many years, C++ has served as a de facto language for writing fast, powerful, and robust scientific and engineering applications. In this short tutorial, you will learn the basics of programming in C++. Topics covered will include C++ concepts such as data types, functions, arrays, pointers, and reading and writing data. The tutorial is intended primarily for people who do not have any programming experience. Familiarity with the Unix/Linux environment and a file editor such as nano, vi, or emacs is desirable but not mandatory. Upon completion of the tutorial, participants should be able to write, compile, and debug C++ codes for solving simple numerical problems.

Introduction to parallel programming with MPI

This course focuses on writing parallel programs using the MPI standard on USC's and other high performance computing resources. Topics covered include a variety of MPI processor-to-processor communication algorithms, MPI data types for message passing, connection topologies, and more. The tutorial will help users become familiar with the basics of MPI programming and parallelization techniques. Upon completion, users should be able to work comfortably with the MPI routines and write simple parallel codes. Prerequisites: basic familiarity with a PC, using Linux and the shell environment, and a working knowledge of at least one programming language (C, C++, Fortran, MATLAB). Interested participants not familiar with Linux should try to attend the Introduction to Linux tutorial scheduled earlier in the day.
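A minimal sketch of the kind of MPI program covered (not course material; it assumes an MPI installation, compiled with mpicc and launched with mpirun):

```c
#include <mpi.h>
#include <stdio.h>

/* Each rank contributes its rank number; rank 0 prints the sum.
   Compile with mpicc, run with e.g. mpirun -np 4 ./a.out */
int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    int total = 0;
    /* MPI_Reduce combines one value from every rank onto rank 0 */
    MPI_Reduce(&rank, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks 0..%d = %d\n", size - 1, total);

    MPI_Finalize();
    return 0;
}
```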

How to parallelize your program: interactive session

This interactive tutorial gives an overview of several MPI parallelization techniques for "everyday" scientific computing. You will learn how to convert most of your research codes into parallel ones and speed up your calculations. Topics covered include parallelization of DO loops and I/O blocks, block and cyclic distribution, and more. As part of the interactive tutorial, several MPI examples (such as the solution of a partial differential equation) will be coded and discussed in class. Upon completion, participants should be able to effectively parallelize sequential codes with MPI and tune them for better performance.

Prerequisites: knowledge of at least one programming language (C, C++, Fortran) and general familiarity with the basics of MPI.

Introduction to parallel programming with OpenMP

A basic introduction to parallel computing using OpenMP, including a quick tutorial on writing parallel OpenMP code. Participants with basic programming experience are welcome. The USC Planck and ACM-chem clusters will be used for all activities. Topics covered will include parallel computing and programming concepts, parallel computer architecture, and data and task parallelism using the OpenMP interface. Examples of modifying serial code to run in parallel will be presented. Upon completion of this course, users should be able to write entry-level parallel OpenMP applications that can run on shared-memory systems.

Introduction to CUDA GPGPU programming

This tutorial introduces the Graphics Processing Unit (GPU) as a parallel computing device, the CUDA parallel programming language, and the associated CUDA numerical libraries for use in high performance computing. We start by demonstrating the differences between CPUs and GPUs and explaining how to compile and run GPU programs. The second part of the tutorial focuses on using GPU libraries, such as FFT and linear algebra (BLAS) libraries, for intensive numerical computations. Upon completion, participants should understand key GPU concepts and be able to write simple GPU programs with CUDA C/C++.
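A minimal sketch of the kind of CUDA C/C++ program written in class (not course material; it requires nvcc and a CUDA-capable GPU to compile and run):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

/* Vector addition: each GPU thread handles one array element. */
__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  /* global thread index */
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    /* unified memory keeps the sketch short; cudaMalloc + cudaMemcpy also work */
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;   /* enough blocks to cover n */
    add<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();                    /* wait for the kernel to finish */

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```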

Introduction to parallel programming with Intel Cilk Plus

Cilk Plus is an extension to C and C++ that offers a quick, easy, and reliable way to improve the performance of programs on multicore processors by exploiting task- and vector-level parallelism. This brief tutorial introduces participants to Cilk Plus, including a quick tutorial on writing fast parallel code using the Cilk Plus syntax. Participants with basic programming experience in C or C++ are welcome. The USC Planck and ACM-chem clusters will be used for all activities. Topics covered will include parallel computing and programming concepts, parallel computer architecture, and data and task parallelism using the Cilk Plus interface. Examples of modifying serial code to run in parallel will be presented. Upon completion of this course, users should be able to write entry-level parallel Cilk Plus applications that can run on multicore and many-core systems.
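The task parallelism mentioned above can be sketched with the classic recursive Fibonacci example (not course material; it requires a compiler with Cilk Plus support, such as Intel's icc):

```c
#include <stdio.h>
#include <cilk/cilk.h>

/* cilk_spawn lets the first recursive call run in parallel with the
   second; cilk_sync waits for spawned children before combining. */
long fib(int n) {
    if (n < 2) return n;
    long x = cilk_spawn fib(n - 1);   /* may run on another worker */
    long y = fib(n - 2);              /* runs in the current worker */
    cilk_sync;                        /* join before using x */
    return x + y;
}

int main(void) {
    printf("fib(30) = %ld\n", fib(30));
    return 0;
}
```

Removing the `cilk_spawn` and `cilk_sync` keywords recovers the ordinary serial program, which is the appeal of the model: the serial and parallel versions are the same code.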