- Bachelor's degree in Computer Science, Engineering, or Technical Science, or 3 years of IT/programming experience.
- Minimum of 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on premises or in the cloud (AWS, Google Cloud, or Azure).
- Minimum of 1 year of experience designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
- Minimum of 1 year of experience designing and building secure Big Data ETL pipelines using Talend or Informatica Big Data Edition for data curation and analysis in large-scale, production-deployed solutions.
- Minimum of 6 months of experience designing and building data models to support large-scale BI, analytics, and AI solutions for Big Data.
- Minimum of 6 months of implementation experience with Databricks.
- Experience in machine learning using Python (scikit-learn), Spark ML, H2O, and/or SageMaker.
- Knowledge of deep learning (CNNs, RNNs, ANNs) using TensorFlow.
- Knowledge of AutoML tools (H2O, DataRobot, Google AutoML).
- Minimum of 2 years of experience designing and implementing large-scale data warehousing and analytics solutions with RDBMSs (e.g., Oracle, Teradata, DB2, Netezza, SAS), with an understanding of the challenges and limitations of these traditional solutions.
- Minimum of 1 year of experience implementing SQL-on-Hadoop solutions using tools such as Presto, AtScale, and Jethro.
- Minimum of 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for Big Data platforms, on premises or on AWS, Google Cloud, or Azure.
- Minimum of 1 year of experience re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies, on premises or transitioning to AWS or Google Cloud.
- Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr to enable self-service solutions.
- Minimum of 1 year of experience building business data catalogs or data marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica, or custom portals).
Locations
AZ - Tempe, CA - Los Angeles, CA - Sacramento, CA - San Diego, CA - San Francisco, DC - Washington, FL - Jacksonville, FL - Tampa, GA - Atlanta, IL - Chicago, MA - Boston, MD - Baltimore, MI - Detroit, MN - Minneapolis, MO - St Louis, NC - Charlotte, NJ - Edison, NJ - Florham Park, NJ - Jersey City, NY - Albany, NY - New York, OH - Cleveland, OH - Columbus, OR - Portland, PA - King Of Prussia, PA - Philadelphia, PA - Pittsburgh, TN - Memphis, VA - Arlington, VA - Richmond, WA - Seattle, WI - Madison, WI - Milwaukee
Equal Employment Opportunity Statement
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law.
Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.