Data Streaming and Real Time Data Processing Training Course
Course Overview
This course offers a practical and structured entry point into developing real-time data streaming systems. It explores core concepts, architectural patterns, and industry-standard tools essential for processing continuous data at scale. Participants will gain the skills to design, implement, and optimize streaming pipelines using contemporary frameworks. The curriculum advances from foundational principles to practical applications, empowering learners to confidently create production-ready real-time solutions.
Training Format
• Instructor-led sessions with guided explanations
• Concept walkthroughs accompanied by real-world examples
• Hands-on demonstrations and coding exercises
• Progressive labs aligned with daily topics
• Interactive discussions and Q&A sessions
Course Objectives
• Grasp the concepts of real-time data streaming and system architecture
• Distinguish between batch and streaming data processing models
• Design scalable and fault-tolerant streaming pipelines
• Utilize distributed streaming tools and frameworks
• Apply event-time processing, windowing, and stateful operations
• Build and optimize real-time data solutions tailored to business needs
This course is available as onsite live training in Serbia or online live training.
Course Outline
Day 1
• Introduction to data streaming concepts
• Fundamentals of batch versus real-time processing
• Basics of event-driven architecture
• Common industry use cases
• Overview of the streaming ecosystem
Day 2
• Design patterns for streaming architecture
• Fundamentals of distributed messaging systems
• Producers and consumers
• Topics, partitions, and data flow (see the sketch after this list)
• Data ingestion strategies
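To make Day 2's producer/consumer and topic/partition concepts concrete, here is a minimal sketch using the kafka-python client. The client library, broker address, and topic name are assumptions for illustration; the course does not prescribe a specific messaging stack.

```python
# Minimal producer/consumer sketch with the kafka-python client (assumed).
# Assumes a Kafka broker is reachable at localhost:9092 and that the
# "orders" topic exists (or topic auto-creation is enabled).
from kafka import KafkaProducer, KafkaConsumer

# Producer: publish a few messages to the "orders" topic.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
for i in range(3):
    # The record key determines which partition a record lands on.
    producer.send("orders", key=str(i).encode(), value=f"order-{i}".encode())
producer.flush()  # block until all buffered records are sent

# Consumer: read the messages back from the beginning of the topic.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating after 5s of inactivity
)
for record in consumer:
    print(record.partition, record.offset, record.value)
```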
Day 3
• Concepts and frameworks for stream processing
• Event time versus processing time
• Windowing techniques and their use cases (see the sketch after this list)
• Stateful stream processing
• Basics of fault tolerance and checkpointing
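As a preview of Day 3's event-time, windowing, and checkpointing topics, here is a minimal PySpark Structured Streaming sketch; the framework choice and checkpoint path are assumptions, as the course does not mandate a specific engine.

```python
# Event-time windowed aggregation in PySpark Structured Streaming.
# Uses the built-in "rate" source so the sketch is self-contained.
from pyspark.sql import SparkSession
from pyspark.sql.functions import window, col

spark = SparkSession.builder.appName("windowing-sketch").getOrCreate()

# The rate source emits rows with an event "timestamp" and a "value" column.
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Watermark: tolerate events arriving up to 1 minute late (event time),
# then count events in 30-second tumbling windows.
counts = (
    events.withWatermark("timestamp", "1 minute")
    .groupBy(window(col("timestamp"), "30 seconds"))
    .count()
)

# "update" mode emits only windows whose counts changed; the checkpoint
# directory lets the query recover its state after a failure.
query = (
    counts.writeStream.outputMode("update")
    .format("console")
    .option("checkpointLocation", "/tmp/windowing-checkpoint")
    .start()
)
query.awaitTermination(30)  # run briefly for demonstration
spark.stop()
```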
Day 4
• Data transformation within streaming pipelines
• ETL and ELT processes in real-time systems
• Schema management and evolution
• Stream joins and enrichment (illustrated in the sketch after this list)
• Introduction to cloud-based streaming services
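One common form of stream enrichment from the Day 4 outline is a stream-static join. The sketch below shows the pattern in PySpark with a toy lookup table; the column names and data are illustrative assumptions.

```python
# Stream enrichment via a stream-static join in PySpark (a common pattern;
# the course may demonstrate it with different tooling).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("enrichment-sketch").getOrCreate()

# Static dimension table: map device ids to human-readable names.
# In practice this might be loaded from a database or a file.
devices = spark.createDataFrame(
    [(0, "thermostat"), (1, "door-sensor")], ["device_id", "device_name"]
)

# Streaming fact data: derive a device_id from the rate source's value.
events = (
    spark.readStream.format("rate").option("rowsPerSecond", 5).load()
    .selectExpr("timestamp", "value % 2 AS device_id", "value AS reading")
)

# Stream-static joins are stateless: each micro-batch is joined against
# the static DataFrame, enriching events with the device name.
enriched = events.join(devices, on="device_id", how="left")

query = enriched.writeStream.outputMode("append").format("console").start()
query.awaitTermination(20)
spark.stop()
```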
Day 5
• Monitoring and observability in streaming systems (see the sketch after this list)
• Basics of security and access control
• Performance tuning and optimization
• Comprehensive pipeline design review
• Real-world use cases such as fraud detection and IoT processing
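As a taste of Day 5's monitoring topic, the following sketch polls a Structured Streaming query's built-in progress metrics. It is a minimal illustration under the assumption of a PySpark stack; a production setup would export such metrics to a monitoring backend rather than print them.

```python
# Basic observability for a Structured Streaming query: poll the query's
# most recent progress metrics.
import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("monitoring-sketch").getOrCreate()

stream = spark.readStream.format("rate").option("rowsPerSecond", 10).load()
query = stream.writeStream.format("noop").start()  # sink that discards rows

for _ in range(5):
    time.sleep(5)
    progress = query.lastProgress  # dict of the latest micro-batch's metrics
    if progress:
        print(
            "batch", progress["batchId"],
            "rows/sec", progress["processedRowsPerSecond"],
        )

query.stop()
spark.stop()
```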
Open Training Courses require 5+ participants.
Testimonials (1)
Hands on exercises. Class should have been 5 days, but the 3 days helped to clear up a lot of questions that I had from working with NiFi already
James - BHG Financial
Course - Apache NiFi for Administrators
Related Courses
Administrator Training for Apache Hadoop
35 Hours
Audience:
This course is designed for IT professionals seeking solutions for storing and processing large-scale datasets within distributed system environments.
Goal:
To provide in-depth knowledge regarding the administration of Hadoop clusters.
Big Data Analytics in Health
21 Hours
Big data analytics is the process of scrutinizing extensive and diverse datasets to reveal correlations, latent patterns, and other valuable insights.
The healthcare sector generates vast volumes of complex, heterogeneous medical and clinical data. Leveraging big data analytics within this domain holds immense potential for deriving insights that enhance healthcare delivery. However, the sheer scale of these datasets presents significant challenges for analysis and practical implementation in clinical environments.
Through this instructor-led, live remote training, participants will learn how to conduct big data analytics in healthcare by completing a series of hands-on live-lab exercises.
Upon completion of this training, participants will be able to:
- Install and configure big data analytics tools, including Hadoop MapReduce and Spark
- Comprehend the characteristics of medical data
- Apply big data techniques to manage medical data
- Explore big data systems and algorithms in the context of healthcare applications
Target Audience
- Developers
- Data Scientists
Course Format
- A blend of lectures, discussions, exercises, and intensive hands-on practice.
Note
- To request customized training for this course, please contact us to make arrangements.
Hadoop For Administrators
21 Hours
Apache Hadoop stands as the leading framework for processing Big Data across server clusters. Over the course of three days (with an optional fourth day), participants will explore the business advantages and practical applications of Hadoop and its surrounding ecosystem. The training covers essential skills such as planning cluster deployment and scaling, installing, maintaining, monitoring, troubleshooting, and optimizing Hadoop environments. Attendees will also gain hands-on experience with bulk data loading, explore various Hadoop distributions, and learn to install and manage tools within the Hadoop ecosystem. The course concludes with a session on securing clusters using Kerberos.
“…The materials were very well prepared and covered thoroughly. The Lab was very helpful and well organized”
— Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising
Audience
Hadoop administrators
Format
A combination of lectures and hands-on labs, with an approximate split of 60% instruction and 40% practical lab work.
Hadoop for Developers (4 days)
28 Hours
Apache Hadoop stands out as the leading framework for processing Big Data across server clusters. This course offers developers a comprehensive introduction to key components within the Hadoop ecosystem, including HDFS, MapReduce, Pig, Hive, and HBase.
Advanced Hadoop for Developers
21 Hours
Apache Hadoop stands out as one of the most widely adopted frameworks for managing Big Data across server clusters. This course provides an in-depth exploration of data management within HDFS, alongside advanced techniques in Pig, Hive, and HBase. These sophisticated programming methods are particularly valuable for seasoned Hadoop developers.
Audience: developers
Duration: three days
Format: 50% lectures and 50% hands-on labs.
Hadoop Administration on MapR
28 Hours
Target Audience:
This course is designed to demystify big data and Hadoop technologies, demonstrating that these concepts are accessible and easy to grasp.
Hadoop and Spark for Administrators
35 Hours
This instructor-led live training in Serbia (online or onsite) is aimed at system administrators who wish to learn how to set up, deploy and manage Hadoop clusters within their organization.
By the end of this training, participants will be able to:
- Install and configure Apache Hadoop.
- Understand the four major components in the Hadoop ecosystem: HDFS, MapReduce, YARN, and Hadoop Common.
- Use Hadoop Distributed File System (HDFS) to scale a cluster to hundreds or thousands of nodes.
- Set up HDFS to operate as a storage engine for on-premises Spark deployments.
- Set up Spark to access alternative storage solutions such as Amazon S3 and NoSQL databases like Redis, Elasticsearch, Couchbase, and Aerospike (see the sketch after this list).
- Carry out administrative tasks such as provisioning, management, monitoring and securing an Apache Hadoop cluster.
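As a preview of the S3 access topic above, the following sketch shows one way to point a PySpark session at Amazon S3 through the s3a connector. The bucket path and credentials are placeholders, and it assumes the hadoop-aws libraries are already available on the cluster.

```python
# Configuring Spark to read from Amazon S3 via the s3a connector (a sketch;
# the bucket name and credentials below are placeholders).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("s3-access-sketch")
    # Credentials can also come from the environment or instance roles.
    .config("spark.hadoop.fs.s3a.access.key", "YOUR_ACCESS_KEY")
    .config("spark.hadoop.fs.s3a.secret.key", "YOUR_SECRET_KEY")
    .getOrCreate()
)

# Read a dataset directly from S3 instead of HDFS.
df = spark.read.parquet("s3a://your-bucket/path/to/data/")
df.show(5)
spark.stop()
```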
HBase for Developers
21 Hours
This course introduces HBase, a NoSQL database built on Hadoop. It is designed for developers who will use HBase to build applications, as well as administrators responsible for managing HBase clusters.
We will guide developers through HBase architecture, data modeling, and application development. The course also covers using MapReduce with HBase and discusses administrative topics focused on performance optimization. The training is highly practical, featuring numerous lab exercises; a brief client-side sketch follows below.
Duration: 3 days
Audience: Developers and Administrators
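For orientation, here is a minimal HBase read/write sketch using the happybase Python client. This is an assumption for illustration: the course's own examples may use the native Java API, and happybase requires HBase's Thrift server, here assumed to run on localhost.

```python
# Basic HBase reads/writes via the happybase Python client (assumed setup:
# HBase Thrift server on localhost, default port 9090).
import happybase

connection = happybase.Connection("localhost")

# Assumes a table "users" with column family "info" already exists, e.g.
# created in the HBase shell with: create 'users', 'info'
table = connection.table("users")

# HBase stores raw bytes; row keys, columns, and values are encoded explicitly.
table.put(b"user1", {b"info:name": b"Ada", b"info:city": b"Belgrade"})

row = table.row(b"user1")
print(row[b"info:name"])  # b'Ada'

connection.close()
```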
Informatica with Big Data (BDM)
7 Hours
Informatica with Big Data (BDM) is a program aimed at empowering data professionals to develop, manage, and analyze large datasets by leveraging the most advanced technologies and architectures in the Big Data sector. The curriculum covers the entire lifecycle, from data ingestion, integration, cleansing, and curation, to data analytics and the exposure and consumption of big data services.
Participants will explore solutions that process massive datasets using Big Data technologies and architectures such as Apache Hive, Apache Hadoop, and Apache Spark. The course also offers hands-on experience with Informatica tools like Bloombox, Big Data Management, and iData Fabric to deepen participants' understanding of big data technologies like MapReduce and Hadoop. Upon completion, learners will be equipped to create end-to-end data solutions using Informatica and its associated Big Data offerings.
Apache NiFi for Administrators
21 Hours
Apache NiFi is an open-source platform designed for flow-based data integration and event processing. It facilitates automated, real-time data routing, transformation, and system mediation between diverse systems, supported by a web-based UI and granular control mechanisms.
This instructor-led training, available both onsite and remotely, targets intermediate-level administrators and engineers looking to deploy, manage, secure, and optimize NiFi dataflows within production environments.
Upon completion of this training, participants will be capable of:
- Installing, configuring, and maintaining Apache NiFi clusters.
- Designing and managing dataflows across various sources and sinks.
- Implementing logic for flow automation, routing, and data transformation.
- Optimizing performance, monitoring operations, and troubleshooting issues.
Course Format
- Interactive lectures featuring discussions on real-world architecture.
- Hands-on labs focused on building, deploying, and managing flows.
- Scenario-based exercises conducted in a live-lab environment.
Customization Options
- For customized training tailored to your needs, please contact us to arrange.
Apache NiFi for Developers
7 Hours
In this instructor-led live training in Serbia, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.
By the end of this training, participants will be able to:
- Understand NiFi's architecture and dataflow concepts.
- Develop extensions using NiFi and third-party APIs.
- Develop their own custom Apache NiFi processors.
- Ingest and process real-time data from disparate and uncommon file formats and data sources.
PySpark and Machine Learning
21 Hours
This course offers a hands-on introduction to developing scalable data processing and Machine Learning workflows using PySpark. Participants will gain insights into how Apache Spark functions within contemporary Big Data ecosystems and learn to process large datasets efficiently by applying distributed computing principles.
Python and Spark for Big Data (PySpark)
21 Hours
In this instructor-led, live training in Serbia, participants will learn how to combine Python and Spark to analyze big data while engaging in hands-on exercises.
By the end of this training, participants will be able to:
- Learn how to use Spark with Python to analyze Big Data.
- Work on exercises that mimic real world cases.
- Use different tools and techniques for big data analysis using PySpark.
Python, Spark, and Hadoop for Big Data
21 Hours
This instructor-led, live training in Serbia (online or onsite) is aimed at developers who wish to use and integrate Spark, Hadoop, and Python to process, analyze, and transform large and complex data sets.
By the end of this training, participants will be able to:
- Set up the necessary environment to start processing big data with Spark, Hadoop, and Python.
- Understand the features, core components, and architecture of Spark and Hadoop.
- Learn how to integrate Spark, Hadoop, and Python for big data processing.
- Explore the tools in the Spark ecosystem (Spark MLlib, Spark Streaming, Kafka, Sqoop, and Flume).
- Build collaborative filtering recommendation systems similar to those of Netflix, YouTube, Amazon, Spotify, and Google (see the ALS sketch after this list).
- Use Apache Mahout to scale machine learning algorithms.
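To illustrate the collaborative filtering outcome above, here is a minimal ALS sketch using Spark's ML library. The toy ratings data and parameter values are assumptions for demonstration; a real recommender would train on large interaction logs.

```python
# Collaborative filtering with ALS from Spark's ML library (minimal sketch).
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("als-sketch").getOrCreate()

# Toy explicit ratings: (user id, item id, rating).
ratings = spark.createDataFrame(
    [(0, 0, 4.0), (0, 1, 2.0), (1, 0, 5.0), (1, 2, 1.0), (2, 1, 3.0)],
    ["user", "item", "rating"],
)

als = ALS(
    userCol="user",
    itemCol="item",
    ratingCol="rating",
    rank=5,
    maxIter=5,
    coldStartStrategy="drop",  # skip users/items unseen during training
)
model = als.fit(ratings)

# Recommend the top 2 items for every user.
model.recommendForAllUsers(2).show(truncate=False)
spark.stop()
```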
Stratio: Rocket and Intelligence Modules with PySpark
14 Hours
Stratio is a comprehensive, data-centric platform that unifies big data capabilities, artificial intelligence, and governance into a single, cohesive solution. Its Rocket and Intelligence modules facilitate rapid data exploration, transformation, and advanced analytics, making it ideal for enterprise environments.
This instructor-led live training, available both online and onsite, is designed for intermediate-level data professionals eager to master the Rocket and Intelligence modules within the Stratio ecosystem using PySpark. The curriculum emphasizes looping structures, user-defined functions (UDFs), and complex data logic.
Upon completion of this training, participants will be equipped to:
- Efficiently navigate and operate within the Stratio platform, utilizing both Rocket and Intelligence modules.
- Apply PySpark effectively for data ingestion, transformation, and analytical tasks.
- Utilize loops and conditional logic to orchestrate data workflows and execute feature engineering.
- Develop and manage user-defined functions (UDFs) to create reusable data operations in PySpark (see the sketch below).
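The sketch below shows a plain PySpark UDF of the kind this course builds on. Stratio's Rocket notebooks embed PySpark, but any platform-specific wiring is omitted here; the data and function are illustrative assumptions.

```python
# Defining and applying a user-defined function (UDF) in plain PySpark.
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf, col
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("udf-sketch").getOrCreate()

df = spark.createDataFrame([("alice", 34), ("bob", 17)], ["name", "age"])

# A reusable UDF that buckets ages into categories.
@udf(returnType=StringType())
def age_band(age):
    return "adult" if age >= 18 else "minor"

df.withColumn("band", age_band(col("age"))).show()
spark.stop()
```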
Course Format
- Interactive lectures and group discussions.
- Extensive exercises and practice sessions.
- Hands-on implementation within a live laboratory environment.
Course Customization Options
- For customized training tailored to your specific needs, please reach out to us to make arrangements.