Job Description:
5+ years of experience in designing and developing software.
Prior experience designing big data applications.
Expertise building pipelines using big data technologies, databases, and tools: Spark, Spark Streaming, Apache NiFi, Kafka, HDFS, YARN, MapReduce, Pig, Hive, Oozie.
Experience with AWS and the Hortonworks distribution is preferred.
Build data pipelines covering data creation, ingestion, management, and client consumption.
Develop Extract, Transform, Load (ETL) processes and data structures using efficient programming standards and practices.
Good communication skills.
Experience with Java, Play, Akka, and microservices.