
Performance Engineer

Database Administration Team Lead/Consultant | Full time | Experience: 5-10 years
Job Number: ATCI-5260036-S1930226 | Pune | Required Skill: Databricks Unified Data Analytics Platform
Project Role : Performance Engineer
Project Role Description : Diagnose issues that an in-house performance testing team has been unable to resolve. There are five aspects to Performance Engineering: software development lifecycle and architecture, performance testing and validation, capacity planning, application performance management, and problem detection and resolution.
Must have skills : Databricks Unified Data Analytics Platform
Good to have skills : NA
Minimum 5 year(s) of experience is required
Educational Qualification : 15 years of full-time education

Summary :
The ideal candidate will have experience building:
Reusable Python/PySpark frameworks for standardizing data engineering workflows (a minimal sketch follows this summary)
Test frameworks to ensure pipeline reliability and correctness
Data quality frameworks for monitoring and validation
Additionally, hands-on experience with Datadog or similar observability tools is required to monitor pipeline performance, optimize resource usage, and ensure system reliability.
You will work within a cross-functional team, building scalable, production-grade data pipelines on cloud platforms such as AWS, Azure, or GCP.
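
To make the framework expectation above concrete, here is a minimal sketch of the reusable-framework idea: small, composable PySpark transformations chained with DataFrame.transform. The function and column names (standardize_columns, add_ingest_metadata, "Order ID") are purely illustrative and not part of this posting.

```python
from pyspark.sql import DataFrame, SparkSession
import pyspark.sql.functions as F


def standardize_columns(df: DataFrame) -> DataFrame:
    # Lower-case and snake_case every column so all pipelines emit a consistent schema.
    for old in df.columns:
        df = df.withColumnRenamed(old, old.strip().lower().replace(" ", "_"))
    return df


def add_ingest_metadata(source: str):
    # Return a transform that stamps lineage columns onto a DataFrame.
    def _inner(df: DataFrame) -> DataFrame:
        return (df.withColumn("ingest_source", F.lit(source))
                  .withColumn("ingest_ts", F.current_timestamp()))
    return _inner


if __name__ == "__main__":
    spark = SparkSession.builder.appName("framework-demo").getOrCreate()
    raw = spark.createDataFrame([(1, "a")], ["Order ID", "Status"])
    curated = raw.transform(standardize_columns).transform(add_ingest_metadata("orders"))
    curated.show()
```

The design choice here is that each transform stays a plain function, so the same units can be shared across pipelines and exercised directly in tests.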

Roles & Responsibilities:

Data Engineering & Framework Development
Develop and maintain ETL/ELT pipelines in Databricks using PySpark and Python.
Build reusable, modular frameworks to accelerate development and enforce standards across pipelines.
Implement test frameworks for automated unit, integration, and regression testing of pipelines.
Design and maintain data quality frameworks to validate ingestion, transformation, and output (see the sketch after this list).
Optimize Spark jobs for performance, scalability, and cost-efficiency.
Collaborate with data architects to define robust data models and design patterns.
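
As one illustration of the data quality responsibility above, below is a small rule-based check, assuming rules are expressed as (name, SQL predicate) pairs. The rule names and columns are hypothetical, not details from this posting.

```python
from pyspark.sql import DataFrame

# Hypothetical rules: (rule name, SQL predicate a "good" row must satisfy).
QUALITY_RULES = [
    ("order_id_not_null", "order_id IS NOT NULL"),
    ("amount_positive", "amount > 0"),
]


def run_quality_checks(df: DataFrame, rules=QUALITY_RULES) -> dict:
    # Count rows violating each predicate; the caller decides whether to fail the job.
    # Note: rows where a predicate evaluates to NULL are not counted as failures,
    # so add explicit IS NOT NULL rules where that matters.
    total = df.count()
    report = {}
    for name, predicate in rules:
        failed = df.filter(f"NOT ({predicate})").count()
        report[name] = {"failed": failed, "total": total}
    return report


# Usage: gate promotion to the next layer on a clean report.
# report = run_quality_checks(silver_df)
# assert all(r["failed"] == 0 for r in report.values()), report
```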
Cloud & Platform Integration
Integrate Databricks pipelines with cloud-native storage services (e.g., S3, ADLS, Snowflake).
Implement CI/CD pipelines for Databricks notebooks and jobs using Git, Jenkins, or Azure DevOps (one such step is sketched after this list).
Ensure pipelines follow best practices for modularity, reusability, and maintainability.
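
A hedged sketch of one CI/CD step: triggering a Databricks smoke-test job from a pipeline stage via the Jobs 2.1 REST API. The environment variable names and the smoke-test job itself are assumptions, not details from this posting.

```python
import os
import requests

host = os.environ["DATABRICKS_HOST"]              # e.g. workspace URL, from CI config
token = os.environ["DATABRICKS_TOKEN"]            # CI secret, never hard-coded
job_id = int(os.environ["DEPLOY_SMOKE_JOB_ID"])   # hypothetical smoke-test job

# Trigger the job and surface the run id for the CI log.
resp = requests.post(
    f"{host}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {token}"},
    json={"job_id": job_id},
    timeout=30,
)
resp.raise_for_status()
print("Triggered run:", resp.json()["run_id"])
```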
Monitoring, Observability & Optimization
Use Datadog to monitor pipeline performance, resource utilization, and system health (a metrics sketch follows this list).
Build dashboards and alerts for proactive monitoring and troubleshooting.
Analyze metrics and logs to identify bottlenecks and improve reliability.
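
To illustrate the Datadog responsibility, here is a minimal sketch of emitting custom pipeline metrics through DogStatsD using the datadog Python package. The metric names and tags are hypothetical, and a Datadog Agent is assumed to be listening on localhost:8125.

```python
import time
from datadog import initialize, statsd

initialize(statsd_host="localhost", statsd_port=8125)

start = time.time()
rows_written = 0
try:
    rows_written = 42  # stand-in for a real pipeline step, e.g. a table load
    statsd.increment("pipeline.runs", tags=["pipeline:orders", "status:success"])
except Exception:
    statsd.increment("pipeline.runs", tags=["pipeline:orders", "status:failure"])
    raise
finally:
    # Emit throughput and duration whether the run succeeded or failed.
    statsd.gauge("pipeline.rows_written", rows_written, tags=["pipeline:orders"])
    statsd.histogram("pipeline.duration_seconds", time.time() - start,
                     tags=["pipeline:orders"])
```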
Collaboration & Delivery
Partner with data scientists, analysts, and business stakeholders to translate requirements into scalable solutions.
Conduct code reviews, enforce best practices, and mentor junior engineers.
Promote knowledge-sharing of reusable frameworks, testing practices, and data quality approaches.

Professional & Technical Skills:

Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
5–8 years of experience in data engineering or software development.
3+ years hands-on experience with Databricks and PySpark.
Strong Python programming skills, including writing reusable libraries and frameworks.
Experience designing and implementing test frameworks for ETL/ELT pipelines (see the pytest sketch after this list).
Experience building data quality frameworks for validation, monitoring, and anomaly detection.
Proficiency in SQL and experience with cloud data warehouses (Snowflake, Redshift, BigQuery).
Familiarity with Datadog or similar monitoring tools for metrics, dashboards, and alerts.
Experience integrating Databricks with AWS, Azure, or GCP services.
Working knowledge of CI/CD, Git, Docker/Kubernetes, and automated testing.
Strong understanding of data architecture patterns — medallion/lakehouse architectures preferred.
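
As an example of the test-framework skill called out above, here is one pytest-style unit test for a PySpark transformation, run against a local SparkSession. The etl_framework module and standardize_columns function are hypothetical names (the latter from the summary sketch earlier).

```python
import pytest
from pyspark.sql import SparkSession

from etl_framework import standardize_columns  # hypothetical framework module


@pytest.fixture(scope="session")
def spark():
    # A small local session shared across the whole test run.
    return (SparkSession.builder
            .master("local[1]")
            .appName("pipeline-tests")
            .getOrCreate())


def test_standardize_columns_snake_cases_names(spark):
    df = spark.createDataFrame([(1, "x")], ["Order ID", "Status"])
    out = df.transform(standardize_columns)
    assert out.columns == ["order_id", "status"]
```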

Nice to Have
Experience with Airflow, Prefect, or Azure Data Factory for orchestration.
Exposure to infrastructure-as-code tools (Terraform, CloudFormation).
Familiarity with MLflow, Delta Live Tables, or Unity Catalog.
Experience designing frameworks for logging, error handling, or observability.
Knowledge of data security, access control, and compliance standards.

Soft Skills
Strong problem-solving and analytical skills.
Excellent verbal and written communication.
Ability to work in agile, cross-functional teams.
Ownership mindset, proactive, and self-driven.

Additional Information:
- The candidate should have a minimum of 5 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- A minimum of 15 years of full-time education is required.


Equal Employment Opportunity Statement

In accordance with applicable law, Accenture conducts its hiring and selection process without discriminating against any applicant on the basis of age, race, creed, color, religion, sex, national origin, birthplace, ethnic origin, disability, sexual orientation, gender identity, genetic information, marital or partnership status, or citizenship.

We work with one shared purpose: to deliver on the promise of technology and human ingenuity. Every day, more than 775,000 of us help our stakeholders continuously reinvent. Together, we drive positive change and deliver value to our clients, partners, shareholders, communities, and each other.

We believe that delivering value requires innovation, and innovation thrives in an inclusive and diverse environment. We actively foster a workplace free from bias, where everyone feels a sense of belonging and is respected and empowered to do their best work.

At Accenture, we see well-being holistically, supporting our people’s physical, mental, and financial health. We also provide opportunities to keep skills relevant through certifications, learning, and diverse work experiences. We’re proud to be consistently recognized as one of the World’s Best Workplaces™.

Join Accenture to work at the heart of change. Visit us at www.accenture.com.
