Bachelor's or advanced degree in a relevant field (e.g., Computer Science, Engineering, Information Technology).
Proven experience as a Data Engineer, with expertise in designing and implementing scalable data architectures.
Experience with relevant data engineering tools and frameworks.
Strong knowledge of ETL processes and data integration techniques.
Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience in deploying and managing data solutions in the cloud.
Excellent problem-solving and communication skills.
Ability to work in an agile and collaborative environment.
Proficiency in Python scripting for data manipulation, ETL processes, and automation.
Strong SQL skills for querying and manipulating data in relational databases.
Knowledge of data pipeline and streaming frameworks such as Apache Kafka, Apache Airflow, and Apache Spark.
Hands-on experience with Google Cloud Platform (GCP) services for data storage, processing, and analytics.
Proficiency in version control (Git) for managing and tracking changes in code.
Understanding of containerization (Docker) for deploying and managing applications.
Familiarity with agile development practices.