Job Description:
Mandatory Skills

- 8+ years of overall IT experience, including 2+ years in big data technologies (Hadoop, Spark Core, Spark SQL, Impala/Hive, HBase, Oozie, Sqoop, Java/Scala/Python)
- Data warehouse modernization experience on a Hadoop cluster
- Strong RDBMS skills
- Unix shell scripting experience is a MUST
- MapR Hadoop experience is a MUST
- Experience working with large volumes of data (terabytes), analyzing data structures, and designing effectively for a Hadoop cluster
- Open to working on proofs of concept, learning and operating new technologies, and contributing new ideas
- Understanding of customer needs and strong business savvy
- A self-starter with the ability and willingness to drive architectural changes
- Experience migrating data from relational databases to Hadoop HDFS and MapR-FS
- Demonstrated analytical and problem-solving skills, particularly as they apply to a “Big Data” environment
- Application performance tuning and troubleshooting experience
- Any Hadoop certification, preferably MapR (MCHD, MCSD)

Nice to Have

- NoSQL experience is a big plus
- Exposure to MapR-DB

Additional Skills

- Excellent verbal and written communication skills
- Excellent analytical and problem-solving skills
- Effective coordination with offshore teams on ETL development and support activities
- A programming background with OOP, including the ability to understand the big picture of the project, the role of the code being developed, and its cascading effects.
             
