Integration Engineer
Bengaluru
Job No. atci-5508908-s2013823
Full-time
Job Description
Project Role : Integration Engineer
Project Role Description : Provide consultative Business and System Integration services to help clients implement effective solutions. Understand and translate customer needs into business and technology solutions. Drive discussions and consult on transformation, the customer journey, functional/application designs and ensure technology and business solutions represent business requirements.
Must have skills : Data Engineering
Good to have skills : NA
A minimum of 5 years of experience is required
Educational Qualification : 15 years of full-time education
Role Summary
As a Data Engineer, you will have deep technical knowledge across core components of a modern, modular, cloud-native data platform. You will be responsible for extracting data from source systems (SAP ERP, SAP IBP, Veeva, MES, etc.) into S3 buckets (building the bronze layer) and from S3 into Redshift to build the silver and gold data products ready for consumption by the business.
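As a rough illustration of the bronze-to-silver load described above (not part of the posting itself), the S3-to-Redshift step is commonly expressed as a Redshift COPY statement; the bucket, schema, table, and IAM role names below are hypothetical placeholders.

```python
# Minimal sketch of loading bronze S3 data into a Redshift silver table.
# All identifiers (bucket, schema, table, IAM role) are placeholders.

def build_copy_statement(schema: str, table: str, bucket: str,
                         prefix: str, iam_role: str) -> str:
    """Build a Redshift COPY statement that ingests Parquet files from S3."""
    return (
        f"COPY {schema}.{table}\n"
        f"FROM 's3://{bucket}/{prefix}'\n"
        f"IAM_ROLE '{iam_role}'\n"
        f"FORMAT AS PARQUET;"
    )

sql = build_copy_statement(
    schema="silver",
    table="sap_orders",
    bucket="example-bronze-bucket",
    prefix="sap_erp/orders/",
    iam_role="arn:aws:iam::123456789012:role/example-redshift-load",
)
print(sql)
```

In practice the statement would be submitted through a Redshift connection (e.g. via the Redshift Data API) inside an orchestrated job rather than run by hand.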
Key Responsibilities
Implement robust, scalable data services in AWS using Glue, Redshift, Iceberg, Lambda, EMR, Step Functions, Apache Airflow, etc.
Develop infrastructure-as-code modules and support continuous delivery pipelines.
Collaborate on architectural proposals and ensure alignment with broader platform strategy.
Work together with the Product Manager to gather data requirements, understand business priorities, and translate them into technical specifications.
Partner with data scientists and domain engineers to enable governed, self-service data management capabilities.
Perform code reviews and contribute to team knowledge-sharing and documentation.
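As one illustrative (and entirely hypothetical) example of how the orchestration responsibilities above might look in practice, a Step Functions state machine definition chaining a Glue ETL job can be assembled in Python; the job name and ARNs are placeholders, not details from this posting.

```python
import json

# Hypothetical sketch: a Step Functions state machine that runs a Glue ETL
# job synchronously and then succeeds. Resource names are placeholders.

def make_pipeline_definition(glue_job_name: str) -> dict:
    """Return an Amazon States Language definition for a one-job pipeline."""
    return {
        "Comment": "Bronze-to-silver ingestion pipeline (illustrative only)",
        "StartAt": "RunGlueJob",
        "States": {
            "RunGlueJob": {
                "Type": "Task",
                "Resource": "arn:aws:states:::glue:startJobRun.sync",
                "Parameters": {"JobName": glue_job_name},
                "Next": "Done",
            },
            "Done": {"Type": "Succeed"},
        },
    }

definition = make_pipeline_definition("example-silver-build")
print(json.dumps(definition, indent=2))
```

Such a definition would typically be deployed through an infrastructure-as-code module (Terraform or Pulumi) rather than created by hand in the console.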
Minimum Qualifications
5–8 years of experience in software or data engineering, preferably in cloud environments.
Proficiency in Python, SQL, and tools like dbt or Apache Spark.
Experience with AWS data stack (e.g., Glue Catalog, IAM, S3).
Solid understanding of CI/CD, DevOps, and IaC tools like Pulumi or Terraform.
Experience developing and managing ETL/ELT workflows that ingest data from multiple sources (structured, semi-structured, and unstructured).
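As a small sketch of the kind of semi-structured ingestion mentioned above (illustrative only, with made-up sample data), nested JSON records are often flattened into tabular rows before loading into a warehouse:

```python
# Illustrative only: flattening semi-structured records into flat rows,
# a typical normalization step in an ETL/ELT workflow before warehouse load.

def flatten(record: dict, parent: str = "") -> dict:
    """Recursively flatten nested dicts into dot-separated column names."""
    row = {}
    for key, value in record.items():
        name = f"{parent}.{key}" if parent else key
        if isinstance(value, dict):
            row.update(flatten(value, name))
        else:
            row[name] = value
    return row

raw = {"order_id": 42, "customer": {"id": "C-7", "region": "EU"}}
print(flatten(raw))
# → {'order_id': 42, 'customer.id': 'C-7', 'customer.region': 'EU'}
```

At scale the same idea is usually delegated to Spark or dbt rather than hand-rolled Python, but the transformation itself is the same.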
Preferred Qualifications
Exposure to data lakehouse and data mesh architectures.
Familiarity with data privacy and access control implementations (OAuth, RBAC).
Experience with AI/ML integration or supporting MLOps workflows.
AWS certifications such as AWS Certified Data Analytics – Specialty or AWS Certified Solutions Architect.
Knowledge of supply chain, manufacturing, and quality processes.