Job Description:
Welcome to the CANA-H2R family. Let us assist you in experiencing the "role of a lifetime" that you have only heard of through your network or read about in the media. Join a global family of 150,000+ professionals leading the IT industry through decades of experience in providing high-end solutions, next-generation services, support, innovation, training, and more. Join a company that has committed to hiring 10,000 workers in the USA over the next two years. Be part of a company that has federal support for its commitment to opening four state-of-the-art technology centers across the USA.

COME AND JOIN | CANA-H2R is hiring a Spark Developer Consultant for a full-time permanent role with one of our clients in Austin, TX; Sunnyvale, CA; and multiple other locations across the USA.

Qualification:
Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
At least 4 years of experience in Information Technology.

Preferred:
At least 2 years of hands-on design and development experience with Big Data technologies – Pig, Hive, MapReduce, HDFS, HBase, YARN, Spark, Oozie, Java, and shell scripting.
Should be a strong communicator and able to work independently with minimal involvement from client SMEs.
Should be able to work in a team in a diverse, multi-stakeholder environment.
Background in all aspects of software engineering with strong skills in parallel data processing, data flows, REST APIs, JSON, XML, and microservice architecture.
Must have strong programming knowledge of Core Java or Scala – objects and classes, data types, arrays and string operations, operators, control flow statements, inheritance and interfaces, exception handling, serialization, collections, and reading and writing files.
Must have hands on experience in design, implementation, and build of applications or solutions using Core Java/Scala.
Strong understanding of Hadoop fundamentals.
Must have experience working on Big Data Processing Frameworks and Tools – MapReduce, YARN, Hive, and Pig.
Strong understanding of RDBMS concepts; must have good knowledge of writing SQL and of interacting programmatically with RDBMS and NoSQL databases such as HBase.
Strong understanding of file formats – Parquet and other Hadoop file formats.
Proficient with application build and continuous integration tools – Maven, SBT, Jenkins, SVN, and Git.
Experience working in Agile and with the Rally tool is a plus.
Strong understanding of and hands-on programming/scripting experience with UNIX shell, Python, Perl, and JavaScript.
Should have worked on large data sets and have experience with performance tuning and troubleshooting.
Knowledge of JavaBeans, Annotations, Logging (log4j), and Generics is a plus.
Knowledge of design patterns (Java and/or GoF) is a plus.
Knowledge of Spark, Spark Streaming, Spark SQL, and Kafka is a plus.
Experience in the financial domain is preferred.
Experience with, and a desire to work in, a global delivery environment.