Job Description:
1. Experience with AWS and its web service offerings: S3, Redshift, EC2, EMR, Lambda, CloudWatch, RDS, Step Functions, Spark Streaming, etc.
2. Good knowledge of configuring and working on multi-node clusters and the distributed data processing framework Spark.
3. 3+ years of hands-on experience with EMR, Apache Spark, and Hadoop technologies.
4. Must have experience with Linux, Python, PySpark, and Spark SQL.
5. Experience with Java and Scala.
6. Experience working with large volumes of data (terabytes), analyzing data structures, and designing effectively within a Hadoop cluster.
7. Experience designing scalable data pipelines, complex event processing, and analytics components using big data technologies (Spark, Python, Scala, PySpark); see the sketch after this list.
8. Expert in SQL/PL/SQL, Redshift, and NoSQL databases.
9. Experience with process orchestration tools such as Apache Airflow, Apache NiFi, etc.
10. Hands-on knowledge of Big Data Analytics and Predictive Analytics; skilled in the design, development, and enhancement of Data Lakes; constantly evolving with emerging tools and technologies.
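As a rough illustration of the kind of pipeline work described in item 7, the minimal PySpark sketch below reads raw events from S3, aggregates them with DataFrame/Spark SQL functions, and writes partitioned Parquet back to S3. The bucket paths, column names, and application name are assumed placeholders, and the job presumes a Spark environment (e.g., EMR) with S3 access already configured.

    # Minimal PySpark pipeline sketch; S3 paths and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily-event-aggregation").getOrCreate()

    # Read raw event data from S3 (path is an assumed example).
    events = spark.read.json("s3://example-bucket/raw/events/")

    # Basic cleansing and daily aggregation using DataFrame functions.
    daily_counts = (
        events
        .filter(F.col("event_type").isNotNull())
        .withColumn("event_date", F.to_date("event_timestamp"))
        .groupBy("event_date", "event_type")
        .agg(F.count("*").alias("event_count"))
    )

    # Write the aggregated result back to S3 as partitioned Parquet,
    # ready for downstream consumption (e.g., Redshift Spectrum or Athena).
    (daily_counts
        .write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-bucket/curated/daily_event_counts/"))

    spark.stop()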
             
