Job Description:
Hands-on experience in design, development, and build activities on Spark and Hadoop projects
Proficient in ingesting and processing structured and unstructured data using the Hadoop ecosystem and the Spark framework
Strong SQL and query-writing skills; good understanding of the Hadoop platform, including job processing
Proficient in Hive, Sqoop, HBase, and Spark
Able to produce complex designs and high-performance architectures
Working knowledge of Unix environments and experience in shell scripting
Experience in ETL processing involving large-volume data ingestion into a data lake, using utilities around the big-data platform
Experience in performance tuning, troubleshooting data-related technical issues, and identifying appropriate solutions
Experience working with data collection/logging and big-data tools and technologies (Hadoop, Spark, etc.)
Exposure to Snowflake cloud data warehousing is preferred
Self-starter who can work with minimal guidance
Strong communication skills
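To illustrate the kind of SQL query writing and ETL-style aggregation this role involves, here is a minimal sketch in plain Python, with the standard-library sqlite3 module standing in for Hive or Spark SQL (the table, columns, and data are hypothetical, for illustration only):

```python
import sqlite3

# Hypothetical event-log rows as they might land in a data lake staging area.
rows = [
    ("2024-01-01", "search", 120),
    ("2024-01-01", "checkout", 45),
    ("2024-01-02", "search", 200),
    ("2024-01-02", "checkout", 60),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_date TEXT, event_type TEXT, hits INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

# A grouped aggregation of the sort one would write in Hive or Spark SQL.
query = """
    SELECT event_date, SUM(hits) AS total_hits
    FROM events
    GROUP BY event_date
    ORDER BY event_date
"""
for event_date, total_hits in conn.execute(query):
    print(event_date, total_hits)
# → 2024-01-01 165
# → 2024-01-02 260
```

In a production pipeline the same query would typically run against Hive tables or a Spark DataFrame registered as a temporary view, with the ingestion step handled by Sqoop or Spark readers rather than in-memory inserts.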