Overview

Teaching: 0 min
Exercises: 0 min
Questions
Objectives

Intro/Definition

From Wikipedia.org:

CUDA is a parallel computing platform and application programming interface (API) model created by Nvidia.[1] It allows software developers and software engineers to use a CUDA-enabled graphics processing unit (GPU) for general-purpose processing – an approach termed GPGPU (General-Purpose computing on Graphics Processing Units). The CUDA platform is a software layer that gives direct access to the GPU’s virtual instruction set and parallel computational elements for the execution of compute kernels. The CUDA platform is designed to work with programming languages such as C, C++, and Fortran. CUDA supports programming frameworks such as OpenACC and OpenCL. Third-party wrappers are also available for Python, Perl, Fortran, Java, Ruby, Lua, Haskell, R, MATLAB, and IDL, and Mathematica has native support.

Used for:

CUDA provides both a low-level API and a higher-level API. CUDA is compatible with most standard operating systems.
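To make the idea of a compute kernel concrete, here is a minimal sketch in Python with PyCUDA (the wrapper covered later in this lesson). It assumes an NVIDIA GPU, the CUDA toolkit, NumPy, and the pycuda package are installed; the kernel name double_them is made up for illustration.

```python
# A minimal sketch (not an official lesson example) of a CUDA compute kernel
# launched from Python with PyCUDA; assumes an NVIDIA GPU, the CUDA toolkit,
# NumPy, and pycuda are installed. The kernel name "double_them" is made up.
import numpy as np
import pycuda.autoinit              # creates a CUDA context on the first GPU
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# The compute kernel itself is ordinary CUDA C, compiled at run time.
mod = SourceModule("""
__global__ void double_them(float *a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        a[i] *= 2.0f;
}
""")
double_them = mod.get_function("double_them")

n = 1024
a = np.random.randn(n).astype(np.float32)

# Lower-level style: allocate device memory and copy data explicitly.
a_gpu = drv.mem_alloc(a.nbytes)
drv.memcpy_htod(a_gpu, a)

# Launch 4 blocks of 256 threads each, so every element gets one thread.
double_them(a_gpu, np.int32(n), block=(256, 1, 1), grid=(4, 1))

result = np.empty_like(a)
drv.memcpy_dtoh(result, a_gpu)
assert np.allclose(result, 2 * a)
```

The explicit mem_alloc/memcpy calls illustrate the lower-level style; the higher-level gpuarray interface shown later in this lesson hides those steps.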

Glossary

First resource
Warps and occupancy (see the sketch below)
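In rough terms, a warp is the group of threads the GPU schedules together (32 threads on current NVIDIA hardware), and occupancy is the ratio of warps resident on a streaming multiprocessor (SM) to the maximum it can hold. The sketch below queries the relevant device limits with PyCUDA; the block size and resident-blocks-per-SM figures in the occupancy estimate are illustrative assumptions, not measured values.

```python
# A rough sketch of the numbers behind warps and occupancy, using PyCUDA's
# device-attribute queries. The occupancy figure is only a back-of-the-envelope
# estimate based on assumed launch parameters.
import pycuda.autoinit
import pycuda.driver as drv

dev = pycuda.autoinit.device
attr = drv.device_attribute

warp_size      = dev.get_attribute(attr.WARP_SIZE)                       # threads per warp
sm_count       = dev.get_attribute(attr.MULTIPROCESSOR_COUNT)            # number of SMs
max_threads_sm = dev.get_attribute(attr.MAX_THREADS_PER_MULTIPROCESSOR)  # thread limit per SM

threads_per_block = 256   # assumed launch configuration
blocks_per_sm = 4         # assumed number of resident blocks per SM

active_warps = blocks_per_sm * threads_per_block // warp_size
max_warps = max_threads_sm // warp_size

print("device:", dev.name())
print("warp size:", warp_size, "| SMs:", sm_count)
print("occupancy estimate: %.0f%%" % (100.0 * active_warps / max_warps))
```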

Components

CUDA 8.0 comes with the following libraries:

CUDA 8.0 comes with these other software components:

CUDA in Python

PyCUDA

PyCUDA lets you access Nvidia’s CUDA parallel computation API from Python.
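For a first look, here is a minimal sketch using PyCUDA's higher-level gpuarray module, which wraps GPU memory in a NumPy-like array so transfers and cleanup are handled automatically; it assumes a working CUDA installation and the pycuda package.

```python
# A minimal sketch of PyCUDA's gpuarray interface: GPU-resident arrays with a
# NumPy-like feel, so no explicit mem_alloc/memcpy calls are needed.
import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray

a = np.random.randn(4, 4).astype(np.float32)

a_gpu = gpuarray.to_gpu(a)     # copy the host array to the GPU
b = (a_gpu * 2 + 1).get()      # elementwise work runs on the GPU; .get() copies back

assert np.allclose(b, 2 * a + 1)
print(b)
```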

Features

PyCUDA II

Advantages

Documentation

Key Points