Job Description:
*Bachelor's degree and 5 years of Information Technology experience, OR technical certification and/or college coursework and 7 years of Information Technology experience, OR 9 years of Information Technology experience.
*Ability to manage workload, balance multiple priorities, and resolve conflicts with customers, employees, and managers, as applicable.
*Must have extensive hands-on experience designing, developing, and maintaining software solutions on a Hadoop cluster.
*Must have strong experience with UNIX shell scripting, Sqoop, Eclipse, and HCatalog.
*Must have experience with NoSQL databases such as HBase, MongoDB, or Cassandra (see the HBase sketch after this list).
*Must have experience developing Pig scripts and HiveQL, including UDFs, for analyzing structured, semi-structured, and unstructured data flows (a sample Hive UDF follows this list).
*Must have working experience developing MapReduce programs that run on the Hadoop cluster, using Java/Python (see the word-count sketch below).
*Must have working experience with Spark and Scala (see the Spark sketch below).
*Must have knowledge of cloud computing infrastructure (e.g., Amazon Web Services EC2) and design considerations for scalable, distributed systems.
*Must demonstrate Hadoop best practices.
*Demonstrates broad knowledge of technical solutions, design patterns, and code for medium-to-complex applications deployed in production Hadoop environments.
*Must have working experience with data warehousing and Business Intelligence systems.
*Participate in design reviews, code reviews, unit testing, and integration testing.
*Assume ownership and accountability for the assigned deliverables through all phases of the development lifecycle.
*SDLC methodology (Agile / Scrum / Iterative Development).
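
To illustrate the hands-on skills the bullets above call for, here are four short sketches, all in Scala; every table, path, column, and class name in them is hypothetical. First, a minimal HBase client round trip using the standard Java client API:

    import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
    import org.apache.hadoop.hbase.client.{ConnectionFactory, Get, Put}
    import org.apache.hadoop.hbase.util.Bytes

    object HBaseSmokeTest {
      def main(args: Array[String]): Unit = {
        val conf  = HBaseConfiguration.create()        // reads hbase-site.xml from the classpath
        val conn  = ConnectionFactory.createConnection(conf)
        val table = conn.getTable(TableName.valueOf("demo:users"))  // hypothetical namespace:table
        try {
          // Write one cell: row "user-001", column family "p", qualifier "email".
          val put = new Put(Bytes.toBytes("user-001"))
          put.addColumn(Bytes.toBytes("p"), Bytes.toBytes("email"), Bytes.toBytes("a@example.com"))
          table.put(put)

          // Read it back and print the stored value.
          val result = table.get(new Get(Bytes.toBytes("user-001")))
          val email  = Bytes.toString(result.getValue(Bytes.toBytes("p"), Bytes.toBytes("email")))
          println(s"email = $email")
        } finally {
          table.close()
          conn.close()
        }
      }
    }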
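
Next, a minimal Hive UDF, assuming the classic reflection-based org.apache.hadoop.hive.ql.exec.UDF API; the behavior (truncating a string to a five-character ZIP code) is purely illustrative:

    import org.apache.hadoop.hive.ql.exec.UDF
    import org.apache.hadoop.io.Text

    // Hive resolves evaluate() by reflection; null handling is the UDF's responsibility.
    class NormalizeZip extends UDF {
      def evaluate(input: Text): Text =
        if (input == null) null
        else new Text(input.toString.trim.take(5))
    }

Packaged into a JAR, the function would be registered from HiveQL with ADD JAR and CREATE TEMPORARY FUNCTION before use in a query.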
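
Then the classic word-count MapReduce job, written against Hadoop's mapreduce API (the posting equally allows Java or Python; Scala is used here only to keep the sketches in one language). Input and output paths come from the command line:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path
    import org.apache.hadoop.io.{IntWritable, LongWritable, Text}
    import org.apache.hadoop.mapreduce.{Job, Mapper, Reducer}
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat

    class TokenMapper extends Mapper[LongWritable, Text, Text, IntWritable] {
      private val one  = new IntWritable(1)
      private val word = new Text()
      override def map(key: LongWritable, value: Text,
                       ctx: Mapper[LongWritable, Text, Text, IntWritable]#Context): Unit =
        value.toString.split("\\s+").filter(_.nonEmpty).foreach { t =>
          word.set(t)
          ctx.write(word, one)                 // emit (token, 1) per token
        }
    }

    class SumReducer extends Reducer[Text, IntWritable, Text, IntWritable] {
      override def reduce(key: Text, values: java.lang.Iterable[IntWritable],
                          ctx: Reducer[Text, IntWritable, Text, IntWritable]#Context): Unit = {
        var sum = 0
        values.forEach(v => sum += v.get)      // total the counts for this token
        ctx.write(key, new IntWritable(sum))
      }
    }

    object WordCount {
      def main(args: Array[String]): Unit = {
        val job = Job.getInstance(new Configuration(), "word-count")
        job.setJarByClass(getClass)
        job.setMapperClass(classOf[TokenMapper])
        job.setCombinerClass(classOf[SumReducer])  // safe: summing is associative and commutative
        job.setReducerClass(classOf[SumReducer])
        job.setOutputKeyClass(classOf[Text])
        job.setOutputValueClass(classOf[IntWritable])
        FileInputFormat.addInputPath(job, new Path(args(0)))
        FileOutputFormat.setOutputPath(job, new Path(args(1)))
        System.exit(if (job.waitForCompletion(true)) 0 else 1)
      }
    }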
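
Finally, a Spark/Scala sketch of a daily rollup over a Hive-managed table; the database, table, and column names are again hypothetical:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{col, count, to_date}

    object DailyPageViews {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("DailyPageViews")
          .enableHiveSupport()                 // read/write Hive-managed tables
          .getOrCreate()

        // Hypothetical source table of raw click events.
        val events = spark.table("raw.clickstream")

        // Aggregate page views per page per calendar day.
        val daily = events
          .groupBy(to_date(col("event_ts")).as("event_date"), col("page"))
          .agg(count("*").as("views"))

        daily.write.mode("overwrite").saveAsTable("curated.daily_page_views")
        spark.stop()
      }
    }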