Job Description:

Position Title: Big Data Developer with Spark and Scala

Location: McLean, Virginia (Remote)

Employment Type: C2C

Duration: 6 to 8 months

POSITION OVERVIEW:

Will work as a Hadoop Developer and be responsible for:

- Writing high-performance, reliable, and maintainable code
- Writing MapReduce/Scala jobs
- Hadoop development, implementation, and support
- Loading data from disparate data sets
- Pre-processing using Hive
- Translating complex functional and technical requirements into detailed designs
- Performing analysis of vast data stores to uncover insights
- Maintaining security and data privacy
- High-speed querying
- Proposing best practices and standards
- Assisting the architecture team with solution design and implementation
- Providing assistance when technical problems arise
- Monitoring systems to ensure they meet business requirements
- Must be well versed in HDP/AWS architecture

POSITION GENERAL DUTIES AND TASKS:

- 3+ years of total experience in Java/Scala
- 2+ years of hands-on experience in Hadoop programming
- Hands-on experience with Java, Scala, and Spark
- Hands-on experience with Kafka, NiFi, AWS, Maven, Stash, and Bamboo
- Hands-on experience writing MapReduce jobs
- Good knowledge of Spark architecture
- Writing high-performance, reliable, and maintainable code
- Good knowledge of database structures, theories, principles, and practices
- Good understanding of Hadoop, YARN, and AWS EMR
- Familiarity with data loading tools such as Talend and Sqoop
- Familiarity with cloud databases such as AWS Redshift and Aurora MySQL
- Familiarity with Apache Zeppelin/EMR Notebook
- Knowledge of workflow schedulers such as Oozie or Apache Airflow
- Analytical and problem-solving skills applied to the Big Data domain
- Strong exposure to object-oriented concepts and their implementation
- Proven understanding of Hadoop, HBase, and Hive
- Good aptitude for multi-threading and concurrency concepts

Client : NTT Data
