- Consult as part of a team that’s in charge of building end-to-end digital transformation capabilities, and lead fast-moving development teams using Agile methodologies.
- Design and build Big Data and real-time analytics solutions using industry standard technologies, and work with data architects to make sure Big Data solutions align with technology direction.
- Lead by example, role-modeling best practices for unit testing, CI/CD, performance testing, capacity planning, documentation, monitoring, alerting, and incident response.
- Keep everyone from individual contributors to top executives informed of progress, communicating across organizations and levels. If critical issues block progress, escalate them up the chain of command for timely resolution.
- Optimize NLU models by implementing NLP systems, performing intent classification and entity extraction, and conducting user testing.
- Develop and maintain digital conversational flows and dialogs.
- Research, architect, prototype, and test the Dialogue Management system and Natural Language Generator.
- Connect to data sources (e.g., multiple XML documents) and query databases.
- Pinpoint and clarify key issues that need action, lead the response, and articulate results clearly in actionable form.
- Show a strong aptitude for executing solutions and translating objectives into scalable solutions that meet end customers' needs within deadlines.
- Collaborate with research teams working on a variety of deep learning and NLP problems.
- Bachelor's degree in Computer Science, Engineering, or Technical Science, or 12 years of experience in programming and building large-scale data/analytics solutions operating in production environments.
- Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation, feature engineering, and machine learning using Spark in combination with PySpark, Java, Scala, or Python, either on premises or in the cloud (AWS, Google Cloud, or Azure). Experience working with Databricks, SageMaker, and other cloud-native tools is highly desired.
- Minimum 1 year of experience designing and building performant data tiers (or refactoring existing ones) that support scaled AI and analytics, using cloud-native data stores on AWS, Azure, and Google Cloud (Redshift, S3, BigQuery, SQL DW, etc.) as well as NoSQL and graph stores.
- Minimum 1 year of experience designing and building streaming data ingestion, analysis, and processing pipelines using Kafka, Kafka Streams, Spark Streaming, and similar cloud-native technologies.
- Minimum 1 year of experience designing and building secured and governed Big Data ETL pipelines using Talend or Informatica for data curation and analysis in large-scale, production-deployed solutions.
- Experience implementing smart data preparation tools such as Paxata, Trifacta, or Tamr to enhance analytics solutions.
- Minimum 1 year of experience building business data catalogs or data marketplaces to power business analytics, using technologies such as Alation, Collibra, Informatica, or custom solutions.
Life at Accenture
Work where you're inspired to explore your passions and where your talents are nurtured and cultivated. Innovate with leading-edge technologies on some of the coolest projects you can imagine.
Learn more about Accenture
With more than 500,000 employees in more than 120 countries, Accenture solves our clients' toughest challenges by providing a broad range of services and solutions in strategy, consulting, digital, technology and operations.