MATLAB seminars at the University of Chicago

9:30 – 10:00 a.m.

Registration and sign-in. Walk-ins are welcome. 

10:00 a.m. – 12:00 p.m.

Session 1: Optimizing Performance and Memory in MATLAB

This session focuses on techniques for developing efficient MATLAB programs using best coding practices. We will demonstrate simple ways to improve and optimize your code to boost execution speed by orders of magnitude, address common pitfalls in writing MATLAB code, and explore the use of the MATLAB Profiler to find bottlenecks. We will also cover strategies for handling large amounts of data in MATLAB and avoiding “out-of-memory” errors: you will learn what causes memory limitations in MATLAB, how different MATLAB data types are stored in memory, and a set of techniques for increasing available memory and minimizing memory usage while accessing, storing, processing, and plotting data.

 Highlights include:

  • Leveraging the power of vector and matrix operations in MATLAB
  • Identifying and addressing bottlenecks in your code
  • Understanding memory and its constraints
  • Minimizing your memory footprint in MATLAB 
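As a taste of the vectorization and preallocation techniques covered in this session, here is a minimal sketch (illustrative only, not taken from the seminar materials) comparing a preallocated loop with its vectorized equivalent:

```matlab
% Loop version: note the preallocation of y1, which avoids the cost
% of growing the array on every iteration.
n = 1e5;
x = linspace(0, 2*pi, n);
y1 = zeros(1, n);              % preallocate result
for k = 1:n
    y1(k) = sin(x(k))^2 + cos(x(k))^2;
end

% Vectorized version: one expression over the whole array,
% letting MATLAB's optimized matrix operations do the work.
y2 = sin(x).^2 + cos(x).^2;

% Both compute the same values (all equal to 1, by the identity).
assert(max(abs(y1 - y2)) < 1e-12)
```

Running both versions under the MATLAB Profiler (or timing them with `tic`/`toc`) makes the speed difference easy to see on your own machine.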

12:00 – 12:30 p.m.

 A light lunch will be served.

12:30 – 2:30 p.m.

Session 2: Parallel Computing with MATLAB on Multicore Desktops and GPUs

During this session you will learn how to solve computationally intensive and data-intensive problems using multicore processors, GPUs, and computer clusters. We will introduce you to high-level programming constructs that allow you to parallelize MATLAB applications and run them on multiple processors. We will show you how to overcome the memory limits of your desktop computer by distributing your data on a large-scale computing resource, such as a cluster. We will also demonstrate how to take advantage of GPUs to speed up computations without low-level programming.

 Highlights include:

  • Toolboxes with built-in support for parallel computing
  • Creating parallel applications to speed up independent tasks
  • Scaling up to computer clusters, grid environments, or clouds
  • Employing GPUs to speed up your computations
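The high-level constructs mentioned above can be as simple as changing a loop keyword. A minimal sketch of the idea (the parallel and GPU variants are shown as comments because they assume the Parallel Computing Toolbox and a supported GPU, which not every machine has):

```matlab
% Serial loop over independent, CPU-heavy tasks.
N = 8;
out = zeros(1, N);
for k = 1:N
    out(k) = sum(svd(rand(200)));   % each iteration is independent
end

% With the Parallel Computing Toolbox, the same loop can run across
% workers just by swapping the keyword:
%   parfor k = 1:N
%       out(k) = sum(svd(rand(200)));
%   end

% GPU offload follows the same pattern: move data with gpuArray,
% call the same functions, then gather results back to the host.
%   A = gpuArray(rand(2000));
%   B = A * A.';                    % runs on the GPU, no CUDA code
%   C = gather(B);

assert(numel(out) == N && all(out > 0))
```

Because the iterations are independent, this is exactly the kind of loop that parallelizes with no algorithmic changes.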

 –Register now–