GPU Programming with OpenACC Training Course
OpenACC is an open standard for directive-based heterogeneous programming that allows the same code to run across a variety of platforms and devices, including multicore CPUs, GPUs, and FPGAs.
This instructor-led live training (available online or onsite) targets beginner to intermediate developers who want to leverage OpenACC to program heterogeneous devices and harness their parallel processing capabilities.
By the end of this training, participants will be able to:
- Establish an OpenACC development environment.
- Write and execute a basic OpenACC program.
- Annotate code with OpenACC directives and clauses.
- Utilize the OpenACC API and associated libraries.
- Profile, debug, and optimize OpenACC applications.
Course Format
- Interactive lectures and discussions.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to make arrangements.
Course Outline
Introduction
- What is OpenACC?
- OpenACC vs OpenCL vs CUDA vs SYCL
- Overview of OpenACC features and architecture
- Setting up the development environment
Getting Started
- Creating an OpenACC project in Visual Studio Code
- Exploring project structure and files
- Compiling and running the program
- Displaying output with printf and fprintf
OpenACC Directives and Clauses
- Understanding OpenACC directives and clauses
- Using parallel directives for creating parallel regions
- Using kernels directives for compiler-managed parallelism
- Using loop directives for parallelizing loops
- Managing data movement with data directives
- Synchronizing data with update directives
- Improving data reuse with cache directives
- Creating device functions with routine directives
- Synchronizing events with wait directives
OpenACC API
- Understanding the role of OpenACC API
- Querying device information and capabilities
- Setting device number and type
- Handling errors and exceptions
- Creating and synchronizing events
OpenACC Libraries and Interoperability
- Understanding OpenACC libraries and interoperability
- Using math, random, and complex libraries
- Integrating with other models (CUDA, OpenMP, MPI)
- Integrating with GPU libraries (cuBLAS, cuFFT)
OpenACC Tools
- Overview of OpenACC development tools
- Profiling and debugging OpenACC programs
- Performance analysis with PGI Compiler, NVIDIA Nsight Systems, Allinea Forge
Optimization
- Factors affecting OpenACC program performance
- Optimizing data locality and reducing transfers
- Optimizing loop parallelism and fusion
- Optimizing kernel parallelism and fusion
- Optimizing vectorization and auto-tuning
Summary and Next Steps
Requirements
- An understanding of C/C++ or Fortran and of parallel programming concepts
- Basic knowledge of computer architecture and memory hierarchy
- Experience with command-line tools and code editors
Audience
- Developers who want to learn how to use OpenACC to program heterogeneous devices and exploit their parallelism
- Developers who wish to write portable and scalable code that can run on different platforms and devices
- Programmers who want to explore the high-level aspects of heterogeneous programming and optimize their code productivity
Open Training Courses require 5+ participants.
Related Courses
Developing AI Applications with Huawei Ascend and CANN
21 Hours
Huawei Ascend is a series of AI processors engineered for high-performance inference and training tasks.
This instructor-led live training, available online or onsite, targets intermediate-level AI engineers and data scientists aiming to develop and optimize neural network models utilizing Huawei’s Ascend platform and the CANN toolkit.
Upon completion of this training, participants will be capable of:
- Setting up and configuring the CANN development environment.
- Creating AI applications using MindSpore and CloudMatrix workflows.
- Enhancing performance on Ascend NPUs through custom operators and tiling techniques.
- Deploying models to either edge or cloud environments.
Format of the Course
- Interactive lectures and discussions.
- Practical application of Huawei Ascend and the CANN toolkit within sample applications.
- Guided exercises centered on model construction, training, and deployment.
Course Customization Options
- To arrange a customized training session tailored to your specific infrastructure or datasets, please get in touch with us.
Deploying AI Models with CANN and Ascend AI Processors
14 Hours
CANN (Compute Architecture for Neural Networks) represents Huawei's dedicated AI compute stack, designed for deploying and optimizing AI models on Ascend AI processors.
This instructor-led live training, available either online or onsite, is tailored for intermediate-level AI developers and engineers aiming to efficiently deploy trained AI models onto Huawei Ascend hardware. The curriculum leverages the CANN toolkit alongside tools such as MindSpore, TensorFlow, or PyTorch.
Upon completion of this training, participants will be capable of:
- Grasping the CANN architecture and its critical function within the AI deployment pipeline.
- Converting and adapting models from widely-used frameworks into formats compatible with Ascend.
- Utilizing tools like ATC, OM model conversion, and MindSpore for both cloud and edge inference tasks.
- Identifying deployment issues and optimizing performance on Ascend hardware.
Course Format
- Interactive lectures coupled with live demonstrations.
- Practical lab sessions employing CANN tools along with Ascend simulators or physical devices.
- Real-world AI model deployment scenarios.
Customization Options for the Course
- For inquiries regarding customized training arrangements for this course, please contact us directly.
AI Inference and Deployment with CloudMatrix
21 Hours
CloudMatrix serves as Huawei’s comprehensive platform for AI development and deployment, engineered to facilitate scalable, production-ready inference pipelines.
This instructor-led live training (available online or onsite) targets beginner to intermediate AI professionals seeking to deploy and oversee AI models using CloudMatrix, integrated with CANN and MindSpore.
Upon completing this training, participants will be capable of:
- Utilizing CloudMatrix for model packaging, deployment, and serving.
- Converting and optimizing models specifically for Ascend chipsets.
- Establishing pipelines for both real-time and batch inference tasks.
- Monitoring deployments and optimizing performance within production environments.
Course Format
- Interactive lectures and discussions.
- Practical application of CloudMatrix through real-world deployment scenarios.
- Guided exercises centered on conversion, optimization, and scaling.
Customization Options
- For a customized training session tailored to your specific AI infrastructure or cloud environment, please contact us to arrange.
GPU Programming on Biren AI Accelerators
21 Hours
Biren AI Accelerators are high-performance GPUs engineered for AI and HPC workloads, providing robust support for large-scale training and inference tasks.
This instructor-led live training (available online or onsite) targets intermediate to advanced developers looking to program and optimize applications using Biren’s proprietary GPU stack, with practical comparisons to CUDA-based environments.
Upon completion of this training, participants will be able to:
- Gain an understanding of Biren GPU architecture and its memory hierarchy.
- Set up the development environment and effectively use Biren’s programming model.
- Translate and optimize CUDA-style code for Biren platforms.
- Apply performance tuning and debugging techniques.
Course Format
- Interactive lectures and discussions.
- Hands-on experience with the Biren SDK on sample GPU workloads.
- Guided exercises focused on porting and performance tuning.
Course Customization Options
- To request customized training for this course tailored to your application stack or integration needs, please contact us to arrange.
Cambricon MLU Development with BANGPy and Neuware
21 Hours
Cambricon MLUs (Machine Learning Units) are specialized AI chips designed for efficient inference and training in both edge computing and data center environments.
This instructor-led live training (available online or on-site) targets intermediate developers looking to build and deploy AI models utilizing the BANGPy framework and Neuware SDK on Cambricon MLU hardware.
Upon completing this training, participants will be equipped to:
- Configure and set up the BANGPy and Neuware development environments.
- Develop and optimize Python- and C++-based models for Cambricon MLUs.
- Deploy models to edge and data center devices operating on the Neuware runtime.
- Integrate machine learning workflows with MLU-specific acceleration capabilities.
Course Format
- Interactive lectures and discussions.
- Practical, hands-on experience with BANGPy and Neuware for development and deployment.
- Guided exercises focusing on optimization, integration, and testing.
Customization Options
- For customized training tailored to your specific Cambricon device model or use case, please contact us to arrange.
Introduction to CANN for AI Framework Developers
7 Hours
CANN (Compute Architecture for Neural Networks) is Huawei’s comprehensive AI computing toolkit designed to compile, optimize, and deploy artificial intelligence models on Ascend AI processors.
This instructor-led live training, available both online and onsite, is tailored for beginner-level AI developers seeking to understand how CANN integrates into the model lifecycle—from initial training to final deployment—and how it interfaces with popular frameworks such as MindSpore, TensorFlow, and PyTorch.
Upon completion of this training, participants will be able to:
- Grasp the purpose and architectural design of the CANN toolkit.
- Configure a development environment utilizing CANN alongside MindSpore.
- Convert and deploy a basic AI model onto Ascend hardware.
- Acquire foundational knowledge to support future CANN optimization or integration initiatives.
Course Format
- Interactive lectures accompanied by group discussions.
- Hands-on laboratory sessions featuring simple model deployment exercises.
- A step-by-step walkthrough of the CANN toolchain and its integration points.
Course Customization Options
- For organizations interested in a customized training program, please reach out to us to make the necessary arrangements.
CANN for Edge AI Deployment
14 Hours
Huawei's Ascend CANN toolkit enables powerful AI inference on edge devices such as the Ascend 310. CANN provides essential tools for compiling, optimizing, and deploying models in environments with limited compute and memory resources.
This instructor-led, live training (online or onsite) targets intermediate-level AI developers and integrators who want to deploy and optimize models on Ascend edge devices using the CANN toolchain.
By the end of this training, participants will be able to:
- Prepare and convert AI models for the Ascend 310 using CANN tools.
- Build lightweight inference pipelines using MindSpore Lite and AscendCL.
- Optimize model performance for environments with limited compute and memory.
- Deploy and monitor AI applications in real-world edge use cases.
Format of the Course
- Interactive lecture and demonstration.
- Hands-on lab work with edge-specific models and scenarios.
- Live deployment examples on virtual or physical edge hardware.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Understanding Huawei’s AI Compute Stack: From CANN to MindSpore
14 Hours
Huawei’s AI stack provides a tightly integrated environment for developing and deploying AI solutions, spanning from the low-level CANN SDK to the high-level MindSpore framework, all optimized for Ascend hardware.
This instructor-led training session, available both online and on-site, is designed for technical professionals ranging from beginner to intermediate levels. It aims to clarify how CANN and MindSpore collaborate to support AI lifecycle management and inform infrastructure decisions.
Upon completion of this training, participants will be able to:
- Grasp the layered architecture of Huawei’s AI compute stack.
- Recognize how CANN facilitates model optimization and hardware-level deployment.
- Assess the MindSpore framework and toolchain in comparison to industry alternatives.
- Place Huawei's AI stack within the context of enterprise or cloud/on-premises environments.
Course Format
- Interactive lectures and discussions.
- Live system demonstrations and case-based walkthroughs.
- Optional guided labs focusing on the model flow from MindSpore to CANN.
Course Customization Options
- To request customized training for this course, please contact us to make arrangements.
Optimizing Neural Network Performance with CANN SDK
14 Hours
The CANN SDK (Compute Architecture for Neural Networks) serves as Huawei's foundational AI compute platform, enabling developers to fine-tune and maximize the performance of deployed neural networks on Ascend AI processors.
This instructor-led, live training session, available online or on-site, targets advanced AI developers and system engineers looking to enhance inference performance through CANN's sophisticated toolset, such as the Graph Engine, TIK, and custom operator development.
Upon completion of this training, participants will be capable of:
- Comprehending CANN's runtime architecture and performance lifecycle.
- Utilizing profiling tools and the Graph Engine for performance analysis and optimization.
- Developing and optimizing custom operators using TIK and TVM.
- Addressing memory bottlenecks and enhancing model throughput.
Course Format
- Interactive lectures and discussions.
- Hands-on labs featuring real-time profiling and operator tuning.
- Optimization exercises based on edge-case deployment scenarios.
Customization Options
- To arrange a customized training session for this course, please contact us directly.
CANN SDK for Computer Vision and NLP Pipelines
14 Hours
The CANN SDK (Compute Architecture for Neural Networks) delivers robust deployment and optimization capabilities for real-time AI applications in computer vision and natural language processing, particularly on Huawei Ascend hardware.
This instructor-led, live training, available online or onsite, is designed for intermediate-level AI practitioners looking to build, deploy, and optimize vision and language models using the CANN SDK for production scenarios.
Upon completion of this training, participants will be able to:
- Deploy and optimize CV and NLP models leveraging CANN and AscendCL.
- Utilize CANN tools to convert models and integrate them into active pipelines.
- Enhance inference performance for tasks such as detection, classification, and sentiment analysis.
- Construct real-time CV/NLP pipelines tailored for edge or cloud-based deployment.
Course Format
- Interactive lectures combined with live demonstrations.
- Practical labs focused on model deployment and performance profiling.
- Designing live pipelines using real-world CV and NLP use cases.
Customization Options
- For customized training requests, please reach out to us to arrange.
Building Custom AI Operators with CANN TIK and TVM
14 Hours
CANN TIK (Tensor Instruction Kernel) and Apache TVM facilitate advanced optimization and customization of AI model operators for Huawei Ascend hardware.
This instructor-led, live training (available online or on-site) is designed for advanced-level system developers who want to create, deploy, and tune custom operators for AI models utilizing CANN’s TIK programming model and TVM compiler integration.
Upon completion of this training, participants will be capable of:
- Writing and testing custom AI operators with the TIK DSL for Ascend processors.
- Integrating custom operations into the CANN runtime and execution graph.
- Leveraging TVM for operator scheduling, auto-tuning, and benchmarking.
- Debugging and optimizing instruction-level performance for custom computation patterns.
Course Format
- Interactive lectures and demonstrations.
- Practical coding of operators using TIK and TVM pipelines.
- Testing and tuning on Ascend hardware or simulators.
Course Customization Options
- To request customized training for this course, please contact us to make arrangements.
Migrating CUDA Applications to Chinese GPU Architectures
21 Hours
Chinese GPU architectures, including Huawei Ascend, Biren, and Cambricon MLUs, provide CUDA alternatives specifically designed for local AI and high-performance computing (HPC) markets.
This instructor-led live training (available online or onsite) targets advanced-level GPU programmers and infrastructure specialists aiming to migrate and optimize existing CUDA applications for deployment on Chinese hardware platforms.
Upon completion of this training, participants will be able to:
- Evaluate the compatibility of existing CUDA workloads with Chinese chip alternatives.
- Port CUDA codebases to Huawei CANN, Biren SDK, and Cambricon BANGPy environments.
- Compare performance across platforms and identify optimization opportunities.
- Address practical challenges related to cross-architecture support and deployment.
Course Format
- Interactive lectures and discussions.
- Hands-on labs for code translation and performance comparison.
- Guided exercises focusing on multi-GPU adaptation strategies.
Course Customization Options
- To request customized training for this course based on your specific platform or CUDA project, please contact us to arrange.
Performance Optimization on Ascend, Biren, and Cambricon
21 Hours
Ascend, Biren, and Cambricon stand out as prominent AI hardware ecosystems in China, each providing specialized acceleration and profiling capabilities for large-scale AI production workloads.
This instructor-led live training, available online or onsite, targets advanced AI infrastructure and performance engineers seeking to enhance model inference and training workflows across these diverse Chinese AI chip architectures.
Upon completion of this training, participants will be equipped to:
- Execute benchmarking procedures on Ascend, Biren, and Cambricon platforms.
- Pinpoint system bottlenecks and identify inefficiencies in memory and compute resources.
- Implement optimizations at the graph, kernel, and operator levels.
- Refine deployment pipelines to achieve superior throughput and reduced latency.
Course Format
- Interactive lectures and group discussions.
- Practical application of profiling and optimization tools across each platform.
- Guided exercises designed around real-world tuning scenarios.
Customization Options
- For tailored training aligned with your specific performance environment or model architecture, please reach out to us to make arrangements.