- Bachelor's degree in Computer Science, Engineering, or Technical Science, or 3 years of IT/programming experience.
- Minimum 2 years of experience designing and implementing large-scale data solutions operating in production environments using the Spark, Hadoop, and NoSQL ecosystems, on premises or in the cloud (AWS, Google, or Azure), with relevant technologies such as NiFi, Spark, Kafka, HBase, Hive, Cassandra, Impala, and GraphDB.
- Minimum 1 year of experience architecting data and building performant data models at scale for the Hadoop/NoSQL/graph database ecosystem of data stores to support different business consumption patterns (using technologies such as Hive, Impala, Cassandra, HBase, Neo4j, and DataStax Graph).
- Minimum 1 year of experience with Spark data processing using Java, Python, or Scala for data curation and analysis of large-scale, production-deployed solutions.
- Minimum 1 year of experience with data integration and curation in a Big Data environment using Talend Big Data Integration or Informatica BDE for data curation and analysis of large-scale, production-deployed solutions.
- Minimum 2 years of experience designing and implementing relational or data warehousing models with RDBMSs (e.g., Oracle, Teradata, DB2, Netezza), including an understanding of the challenges and limitations of these traditional solutions.
- Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools such as Presto, AtScale, and Jethro.
- Minimum 1 year of experience integrating large-scale BI/visualization solutions (e.g., Tableau, Qlik, Spotfire) with Big Data platforms.
- Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for modern data platforms that use Hadoop and NoSQL, on premises or on the AWS, Google, or Azure clouds.
- Minimum 1 year of experience securing Hadoop/NoSQL-based modern data platforms, on premises or on the AWS, Google, or Azure clouds.
- Minimum 1 year of experience re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies, on premises or in transition to the AWS or Google clouds.
- Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr to enable self-service solutions.
- Experience integrating enterprise data management toolsets (e.g., Informatica, Talend) with Big Data platforms.
- Minimum 1 year of experience building Business Data Catalogs or Data Marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity:
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status, or any other basis protected by federal, state, or local law. Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.