Data Engineer
Ahmedabad
Job No. atci-4646746-s1789843
Full-time
Job Description
Project Role : Data Engineer
Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills : PySpark
Good to have skills : Databricks Unified Data Analytics Platform
Minimum 3 years of experience is required
Educational Qualification : 15 years of full-time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems. You will play a crucial role in managing and optimizing data infrastructure to support the organization's data needs.

Roles & Responsibilities:
- Expected to perform independently and become a subject matter expert (SME).
- Active participation and contribution in team discussions is required.
- Contribute solutions to work-related problems.
- Design and develop data solutions for data generation, collection, and processing.
- Create and maintain data pipelines to ensure efficient data flow.
- Implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
- Ensure data quality and integrity by performing data validation and cleansing.
- Optimize data infrastructure to improve performance and scalability.
- Collaborate with cross-functional teams to understand data requirements and provide technical expertise.
- Troubleshoot and resolve data-related issues in a timely manner.

Professional & Technical Skills:
- Must-have skills: proficiency in PySpark.
- Good-to-have skills: experience with the Databricks Unified Data Analytics Platform.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based in Mumbai.
- A minimum of 15 years of full-time education is required.
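To illustrate the kind of PySpark ETL work described above (extract, cleanse, and load with basic data-quality checks), here is a minimal sketch. The paths, table columns, and app name are hypothetical placeholders, not details from this posting; an actual pipeline would use the organization's own sources, schemas, and orchestration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical locations; adjust to the real environment.
INPUT_PATH = "/data/raw/orders"        # assumed raw source
OUTPUT_PATH = "/data/curated/orders"   # assumed curated target

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw records (Parquet assumed here).
raw_df = spark.read.parquet(INPUT_PATH)

# Transform: basic cleansing and validation -- drop duplicates,
# discard rows missing the key, and normalize a text column.
# "order_id", "customer_name", and "order_date" are assumed columns.
clean_df = (
    raw_df
    .dropDuplicates(["order_id"])
    .filter(F.col("order_id").isNotNull())
    .withColumn("customer_name", F.trim(F.lower(F.col("customer_name"))))
)

# Load: write the curated data, partitioned by date for downstream queries.
clean_df.write.mode("overwrite").partitionBy("order_date").parquet(OUTPUT_PATH)

spark.stop()
```

On Databricks, the same logic would typically run in a notebook or job with the SparkSession already provided, writing to Delta tables rather than raw Parquet paths.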
Qualifications
15 years of full-time education
Please note that at any given point in time, you can have only one "Active" application.