Job Description:
Our client is looking for an energetic, high-performing Senior Engineer with extensive hands-on experience to join our Big Data team.
This position is accountable for delivering the infrastructure solutions for assigned big data applications throughout the complete use case lifecycle. Responsibilities include: identifying and documenting big data use case requirements; leading the design and engineering of the infrastructure solutions; owning the implementation and production rollout of those solutions; and training production staff for steady-state support. The infrastructure solutions delivered must be resilient, scalable, secure, and high-performing, meeting all functional and non-functional requirements.
This position is also responsible for managing the Hadoop platform roadmap that drives the release cycle of our Hadoop cluster capabilities: core Hadoop framework components as well as ecosystem products.
You will be working hand in hand with our principal Hadoop architect and top-notch Hadoop developers, data scientists, and data engineers to build and run the clusters that realize our big data vision.
Position Requirements:
3+ years of solution architecture experience with Hadoop
Strong ability to drive complex technical solutions deployed at an enterprise level; ability to drive big data technology adoption and changes through education and partnership with stakeholders
Ability to negotiate, prioritize, and resolve complex issues, and to explain difficult issues clearly to others
Ability to estimate and organize own work to meet or negotiate deadlines; lead/facilitate the creation of estimates
Self-starter who can work with minimal guidance while maintaining strong communication
Technical Qualifications:
Demonstrated experience in the architecture, engineering, and implementation of enterprise-grade production big data use cases.
Extensive knowledge of Hadoop architecture and HDFS.
Hands-on experience in MapReduce, Hive, Pig, Java, HBase, Solr, and the following Hadoop ecosystem products: Sqoop, Flume, Oozie, Storm, Spark, and/or Kafka.
Hands-on delivery experience working on popular Hadoop distribution platforms such as Cloudera, Hortonworks, or MapR.
Hands-on experience in architectural design and solution implementation of large-scale big data use cases.
Understanding of industry patterns for big data solutions.
Demonstrated experience working with vendors and user communities to research and test new technologies that enhance the technical capabilities of the existing Hadoop cluster.
Demonstrated experience working with the Hadoop architect and big data users to implement new Hadoop ecosystem technologies that support a multi-tenant cluster.
Understanding of NoSQL technologies.
Shell scripting, Python, Java, and/or C/C++ programming experience.