Course Outline

Core Performance Concepts and Metrics

  • Latency, throughput, power consumption, and resource utilization
  • Distinguishing between system-level and model-level bottlenecks
  • Profiling methodologies for inference versus training phases
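As a vendor-neutral illustration of the latency and throughput metrics above, the following is a minimal sketch of a micro-benchmark harness; the workload is a placeholder standing in for a real model forward pass, and the function name is our own, not from any vendor SDK.

```python
import time
import statistics

def profile_inference(fn, n_warmup=10, n_runs=100):
    """Measure per-call latency (ms) and throughput (calls/s) of fn."""
    for _ in range(n_warmup):
        fn()  # warm-up runs exclude one-time costs (JIT, cache fills)
    latencies = []
    for _ in range(n_runs):
        start = time.perf_counter()
        fn()
        latencies.append((time.perf_counter() - start) * 1000.0)
    p50 = statistics.median(latencies)
    throughput = 1000.0 / (sum(latencies) / len(latencies))
    return {"p50_ms": p50, "throughput_per_s": throughput}

# Placeholder workload standing in for a model forward pass.
stats = profile_inference(lambda: sum(i * i for i in range(10_000)))
```

The same warm-up-then-measure pattern applies whether the target is a CPU function or an accelerator kernel, though on-device profiling also requires synchronizing with the device before reading the clock.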

Profiling Techniques for Huawei Ascend

  • Leveraging CANN Profiler and MindInsight
  • Analyzing kernel and operator diagnostics
  • Understanding offload patterns and memory mapping

Profiling Techniques for Biren GPU

  • Utilizing Biren SDK for performance monitoring
  • Optimizing kernel fusion, memory alignment, and execution queues
  • Conducting power- and temperature-aware profiling

Profiling Techniques for Cambricon MLU

  • Using BANGPy and Neuware performance utilities
  • Gaining kernel-level visibility and interpreting logs
  • Integrating the MLU profiler with deployment frameworks

Optimization at Graph and Model Levels

  • Strategies for graph pruning and quantization
  • Operator fusion and restructuring computational graphs
  • Standardizing input sizes and fine-tuning batch parameters
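To make the quantization strategy above concrete, here is a minimal, framework-free sketch of symmetric int8 quantization of a weight vector; real toolchains (e.g. a vendor's graph compiler) perform this per-tensor or per-channel with calibration, which this toy version omits.

```python
def quantize_int8(values):
    """Symmetric int8 quantization: map floats onto [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0 or 1.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from int8 codes."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

The quantization error is bounded by half the scale step, which is why standardized input ranges and calibration data matter in practice.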

Memory and Kernel Optimization

  • Optimizing memory layouts and reuse patterns
  • Managing buffers efficiently across different chipsets
  • Applying platform-specific kernel tuning techniques
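A common buffer-management idea from the list above, reusing scratch buffers instead of reallocating per inference, can be sketched in a few lines; the class below is a simplified host-side model of what device runtimes do with memory pools, and its names are illustrative only.

```python
class BufferPool:
    """Reuse fixed-size scratch buffers instead of reallocating each call."""

    def __init__(self):
        self._free = {}  # maps buffer size -> list of available buffers

    def acquire(self, size):
        bucket = self._free.setdefault(size, [])
        return bucket.pop() if bucket else bytearray(size)

    def release(self, buf):
        self._free.setdefault(len(buf), []).append(buf)

pool = BufferPool()
first = pool.acquire(1024)
pool.release(first)
second = pool.acquire(1024)  # the released buffer is handed back out
```

Pooling avoids allocator churn and fragmentation, which on accelerators also translates into fewer expensive device-memory allocations.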

Cross-Platform Best Practices

  • Achieving performance portability through abstraction strategies
  • Developing unified tuning pipelines for multi-chip environments
  • Case study: Tuning an object detection model across Ascend, Biren, and MLU
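One abstraction strategy for performance portability is a thin backend interface that the tuning pipeline targets instead of any one SDK. The sketch below uses hypothetical class names and a CPU reference backend as a stand-in; real Ascend, Biren, or MLU backends would wrap the respective vendor runtimes behind the same interface.

```python
from abc import ABC, abstractmethod

class AcceleratorBackend(ABC):
    """Thin abstraction so tuning code stays portable across chips."""

    @abstractmethod
    def name(self) -> str: ...

    @abstractmethod
    def run(self, batch): ...

class CPUReferenceBackend(AcceleratorBackend):
    """Stand-in backend; vendor backends would wrap their own runtimes."""

    def name(self):
        return "cpu-reference"

    def run(self, batch):
        return [x * 2 for x in batch]  # trivial placeholder workload

def tune(backends, batch):
    """Unified pipeline: run the same workload on every available backend."""
    return {b.name(): b.run(batch) for b in backends}

results = tune([CPUReferenceBackend()], [1, 2, 3])
```

Keeping the interface minimal (submit work, collect results) is what lets the same tuning and comparison logic drive multiple chips.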

Summary and Next Steps

Requirements

  • Hands-on experience with AI model training or deployment pipelines
  • Understanding of GPU/MLU compute principles and model optimization techniques
  • Familiarity with performance profiling tools and key metrics

Target Audience

  • Performance engineers
  • Machine learning infrastructure teams
  • AI system architects

Duration

  • 21 hours
