Building Data Engineering Pipelines in Python
Category: Data Science
Price: Paid
Length: 4 Hours
Language: English
Content type: Video
Level: Advanced
Updated: 4 March 2024
Published: 4 September 2022
Students: 18,758
Syllabus
After completing this chapter, you will be able to explain what a data platform is, how data ends up in it, and how data engineers structure its foundations. You will be able to ingest data from a RESTful API into the data platform’s data lake using a self-written ingestion pipeline built with Singer’s taps and targets.
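As a concrete illustration, here is a minimal sketch of a Singer tap using the singer-python and requests libraries; the API endpoint, stream name, and schema are illustrative assumptions, not taken from the course.

```python
import requests
import singer

# Illustrative schema for a hypothetical "products" stream.
SCHEMA = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "string"},
        "price": {"type": "number"},
    }
}

def run_tap():
    # Hypothetical REST endpoint; replace with your real data source.
    resp = requests.get("https://api.example.com/v1/products")
    resp.raise_for_status()

    # Singer protocol: emit the schema first, then the records, on stdout.
    singer.write_schema(stream_name="products", schema=SCHEMA, key_properties=["id"])
    singer.write_records(stream_name="products", records=resp.json())

if __name__ == "__main__":
    run_tap()
```

Piping the tap into a target, e.g. `python tap_products.py | target-csv`, is what actually lands the records in the data lake.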
You will learn how to process data in the data lake in a structured way using PySpark; first, though, you need to understand when PySpark is the right tool for the job.
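For a flavor of what structured processing looks like, here is a minimal PySpark sketch; the data-lake paths and column names are assumptions for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clean_products").getOrCreate()

# Read the raw JSON that the ingestion pipeline landed in the lake
# (hypothetical path).
raw = spark.read.json("s3a://data-lake/landing/products/")

# A structured transformation: select, cast, clean, and filter.
clean = (
    raw.select(
        F.col("id").cast("long"),
        F.trim(F.col("name")).alias("name"),
        F.col("price").cast("double"),
    )
    .filter(F.col("price") > 0)
)

# Persist to the clean zone in a columnar format.
clean.write.mode("overwrite").parquet("s3a://data-lake/clean/products/")
```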
Stating “it works on my machine” is no guarantee that it will work reliably elsewhere, or in the future; your project’s requirements will also change. In this chapter, we explore different forms of testing and learn how to write unit tests for our PySpark data transformation pipeline, so that we build robust and reusable components.
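One way to make a transformation unit-testable is to factor it into a pure function and exercise it against a tiny in-memory DataFrame. The sketch below assumes the cleaning logic from the previous example has been extracted into a function named clean_products; the name and the test data are hypothetical.

```python
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def clean_products(df):
    # Same illustrative transformation as above, now isolated from I/O
    # so it can be unit-tested.
    return (
        df.select(
            F.col("id").cast("long"),
            F.trim(F.col("name")).alias("name"),
            F.col("price").cast("double"),
        )
        .filter(F.col("price") > 0)
    )

@pytest.fixture(scope="session")
def spark():
    # A small local session is enough for unit tests.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_clean_products_drops_non_positive_prices(spark):
    source = spark.createDataFrame(
        [(1, " widget ", 9.99), (2, "freebie", 0.0)],
        ["id", "name", "price"],
    )
    result = clean_products(source).collect()
    assert len(result) == 1
    assert result[0].name == "widget"  # leading/trailing whitespace trimmed
```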
We will explore the basics of Apache Airflow, a popular workflow orchestrator that lets you trigger the components of an ETL pipeline on a schedule and execute tasks in a specific order. Here too, we illustrate how a deployment of Apache Airflow can be tested automatically.
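As a rough sketch, an Airflow DAG wiring the ingestion and transformation steps together might look as follows; the dag_id, task commands, and schedule are illustrative assumptions.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="products_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # run once per day
    catchup=False,
) as dag:
    # Hypothetical commands for the two pipeline stages.
    ingest = BashOperator(
        task_id="ingest",
        bash_command="python tap_products.py | target-csv",
    )
    transform = BashOperator(
        task_id="transform",
        bash_command="spark-submit clean_products.py",
    )

    # Ingestion must finish before the transformation starts.
    ingest >> transform
```

A common automated check on an Airflow deployment is a test that loads the DagBag and asserts it contains no import errors, so that broken DAG files are caught before they ever reach the scheduler.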