Data Science Practitioner

Location: Noida | Job No. atci-4650316-s1795820 | Full-time
Apply Now

Please be informed that at any given point in time, you can only have one "Active" application.

Job Description

Project Role: Data Science Practitioner
Project Role Description: Formulate, design, and deliver AI/ML-based decision-making frameworks and models for business outcomes. Measure and justify the value of AI/ML-based solutions.
Must-have skills: Hadoop Administration
Good-to-have skills: Apache Hadoop
Minimum experience required: 3 years
Educational Qualification: 15 years of full-time education

Summary: We are looking for a skilled Hadoop Administrator to join our dynamic team. The ideal candidate will be responsible for managing and maintaining our Hadoop clusters, ensuring their performance, availability, and security. You will work closely with data engineers, data scientists, and other stakeholders to ensure that our big data infrastructure is robust and scalable. Your role will involve installing, configuring, and monitoring Hadoop components, as well as troubleshooting and resolving any issues that arise. You will also be responsible for implementing best practices for data storage, processing, and security. In addition, you will play a key role in capacity planning and performance tuning to ensure that our Hadoop environment can handle the growing demands of our business. The successful candidate will have a strong background in system administration, with specific experience in managing Hadoop clusters. You should be familiar with Hadoop ecosystem components such as HDFS, YARN, MapReduce, Hive, HBase, and Spark. Excellent problem-solving skills, attention to detail, and the ability to work in a fast-paced environment are essential for this role. If you are passionate about big data technologies and have a proven track record of managing large-scale Hadoop environments, we encourage you to apply.

Roles & Responsibilities:
- Install, configure, and maintain Hadoop distributions such as Hortonworks and Cloudera.
- Monitor Hadoop clusters for performance and availability.
- Troubleshoot and resolve issues related to Hadoop components.
- Implement best practices for data storage and processing.
- Ensure the security of Hadoop clusters and data.
- Collaborate with data engineers and data scientists.
- Perform capacity planning and performance tuning.
- Manage and monitor Hadoop ecosystem components such as HDFS, YARN, MapReduce, Hive, HBase, and Spark.
- Automate routine tasks using scripting languages.
- Maintain documentation for Hadoop infrastructure and processes.
- Ensure compliance with data governance and regulatory requirements.
- Provide support for data ingestion and ETL processes.
- Monitor and manage Hadoop cluster resource utilization.
- Implement and manage data replication and high-availability solutions.
- Stay updated with the latest developments in Hadoop and big data technologies.
- Conduct regular audits and health checks of Hadoop clusters.
- Maintain familiarity with PySpark, Airflow, Kerberos, Kafka, and OKE clusters.

Professional & Technical Skills:
- Strong knowledge of Hadoop ecosystem components such as HDFS, YARN, MapReduce, Hive, HBase, and Spark.
- Experience with Linux/Unix system administration.
- Experience with shell scripting, Python, and DevOps setup.
- Proficiency in scripting languages such as Python, Bash, or Perl.
- Familiarity with configuration management tools like Ansible, Puppet, or Chef.
- Experience with monitoring tools such as Ambari and Splunk.
- Strong problem-solving and troubleshooting skills.
- Experience with data security and compliance requirements.
- Knowledge of data warehousing and ETL processes.
- Experience with Oracle cloud platforms.
- Familiarity with containerization technologies like Docker and Kubernetes.
- Experience with version control systems such as Git.
- Ability to manage multiple tasks and projects simultaneously.
- Strong attention to detail and accuracy.
- Experience with performance tuning and optimization.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Hadoop Administration.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
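The responsibilities above include automating routine tasks and conducting regular cluster health checks with scripting languages. As one hedged illustration of the kind of automation this role involves (not part of the posting itself), the sketch below parses the text output of `hdfs dfsadmin -report` to flag an unhealthy cluster. The thresholds, field patterns, and function names are illustrative assumptions; real report formats vary by Hadoop version.

```python
import re


def parse_dfsadmin_report(report: str) -> dict:
    """Extract live/dead datanode counts and DFS usage from the
    text output of `hdfs dfsadmin -report` (format varies by version)."""
    live = re.search(r"Live datanodes \((\d+)\)", report)
    dead = re.search(r"Dead datanodes \((\d+)\)", report)
    used = re.search(r"DFS Used%:\s*([\d.]+)%", report)
    return {
        "live": int(live.group(1)) if live else 0,
        "dead": int(dead.group(1)) if dead else 0,
        "dfs_used_pct": float(used.group(1)) if used else 0.0,
    }


def cluster_healthy(stats: dict, min_live: int = 3,
                    max_used_pct: float = 80.0) -> bool:
    """Hypothetical health policy: enough live datanodes, no dead
    datanodes, and DFS usage under a capacity threshold."""
    return (stats["live"] >= min_live
            and stats["dead"] == 0
            and stats["dfs_used_pct"] <= max_used_pct)


if __name__ == "__main__":
    # In production the report would come from the real command, e.g.:
    #   subprocess.run(["hdfs", "dfsadmin", "-report"],
    #                  capture_output=True, text=True).stdout
    sample = (
        "Configured Capacity: 1099511627776 (1 TB)\n"
        "DFS Used%: 42.5%\n"
        "Live datanodes (4):\n"
        "Dead datanodes (0):\n"
    )
    stats = parse_dfsadmin_report(sample)
    print(stats, cluster_healthy(stats))
```

A script like this would typically run on a schedule (e.g. via cron or Airflow) and page an operator when `cluster_healthy` returns False.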

Qualifications

15 years of full-time education



Life at Accenture

Training and Development

Take time away to learn and learn all the time in our regional learning hubs, connected classrooms, online courses and learning boards.

Work Environment

Be your best every day in a work environment that helps drive innovation in everything you do.
