Job Description:
The client needs candidates with experience in Spark development (a must), Sqoop, and Scala. Teradata is the source database; candidates will extract from it using Teradata tools and then populate a Hadoop database using the Cloudera suite.
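For illustration only, the extract-and-load step described above might look like the minimal Spark/Scala sketch below; the host, database, table, credentials, and HDFS path are placeholder assumptions, and in a Cloudera environment the same load is often done with Sqoop or Teradata's own utilities instead.

import org.apache.spark.sql.SparkSession

object TeradataToHdfs {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("TeradataToHdfs")
      .getOrCreate()

    // Read a source table from Teradata over JDBC.
    // URL, table, and credentials are placeholders, not real connection details.
    val source = spark.read
      .format("jdbc")
      .option("url", "jdbc:teradata://td-host/DATABASE=sales")
      .option("dbtable", "orders")
      .option("user", "etl_user")
      .option("password", sys.env.getOrElse("TD_PASSWORD", ""))
      .option("driver", "com.teradata.jdbc.TeraDriver")
      .load()

    // Land the extract in HDFS as Parquet for downstream Hive/Impala use.
    source.write
      .mode("overwrite")
      .parquet("hdfs:///data/raw/orders")

    spark.stop()
  }
}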

The client would also like all candidates to take a Spark/Teradata online assessment test.



First resource – a senior Hadoop developer strong with Spark, Scala, Sqoop, Hive, and the Cloudera environment – must be on-site in Atlanta (start immediately) for the rest of the year.



Second resource – a senior Hadoop DevOps consultant – automated testing, automated code deployment, Bitbucket, Bamboo, and SonarQube; experience with the best-practice policies and procedures needed for DevOps, and the ability to help implement them. Start immediately, for the rest of 2017.



Here is a sample resume:

Around 8 years of total IT development experience in all phases of the SDLC.

3+ years of Scala/Apache Spark experience and 4+ years of Hadoop/Java Developer experience in all phases of Hadoop and HDFS development.

Extensive experience in and active involvement with Requirements Gathering, Analysis, Design, Coding, Code Reviews, and Unit and Integration Testing.

Experience in designing Use Cases, Class diagrams, and Sequence and Collaboration diagrams for multi-tiered object-oriented system architectures using Unified Modeling Language (UML) tools such as Rational Rose and the Rational Unified Process (RUP). Working knowledge of Agile Development, Test-Driven Development (TDD), and Behavior-Driven Development (BDD) methodologies.

Extensive knowledge of client-server technology, web-based n-tier architecture, database design, and development of applications using J2EE design patterns such as Singleton, Session Facade, Factory, and Business Delegate.

Hands-on experience across the Hadoop ecosystem, including Spark, Kafka, HBase, Scala, Pig, Impala, Sqoop, Oozie, Flume, Mahout, and Storm, plus big data tools such as Tableau and Talend.

Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs and Scala.
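As a minimal sketch of that kind of conversion (the employees table and its dept and salary columns are hypothetical), a Hive GROUP BY can be re-expressed with RDD transformations:

import org.apache.spark.sql.SparkSession

object HiveToSparkRdd {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("HiveToSparkRdd")
      .enableHiveSupport()
      .getOrCreate()

    // RDD equivalent of the Hive query:
    //   SELECT dept, SUM(salary) FROM employees GROUP BY dept
    val totalsByDept = spark.table("employees").rdd
      .map(row => (row.getAs[String]("dept"), row.getAs[Double]("salary")))
      .reduceByKey(_ + _) // sum salaries per department

    totalsByDept.collect().foreach { case (dept, total) =>
      println(s"$dept\t$total")
    }

    spark.stop()
  }
}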

Experience working with SQL, PL/SQL and NoSQL databases like Microsoft SQL Server, Oracle, HBase, Cassandra and MongoDB.

Experience importing and exporting data between HDFS and databases such as MySQL, Oracle, Netezza, Teradata, and DB2 using Sqoop and Talend.

Involved in writing Pig scripts to transform raw data into baseline data.

Worked on Amazon Redshift, the data warehouse product that is part of AWS.

Good experience in designing jobs and transformations in Talend and loading data sequentially and in parallel for initial and incremental loads.

Experience in developing and scheduling ETL workflows in Hadoop using Oozie.
Experience in deploying and managing the Hadoop cluster using Cloudera Manager.


Client: Teradata
