Please share your profile at

Job Description:
8+ years of experience working with Hadoop, Elasticsearch, or similar large-scale data platforms
5+ years of hands-on experience with Spark, using Scala or Python
Ability to use a wide variety of open-source technologies and cloud services (AWS experience is required)
Fluency with common query languages, API development, data transformation, and integration of data streams
Working knowledge of coding and scripting in Python and/or Ruby, Shell, and Logstash
Responsible for designing, deploying, and maintaining computing platforms that enable large-scale data analysis
Contributes to design and architecture; configures, deploys, and documents components used in the analytics platform, including Hadoop/Spark, AWS S3, Redshift, etc.
Identifies gaps in the existing platform and improves its quality, robustness, maintainability, and speed
2+ years of experience installing and managing Apache Kafka is a plus
Evaluates new and upcoming big data solutions and makes recommendations for adoption, extending the platform to meet advanced analytics use cases such as predictive modeling and recommendation engines. Experience in ML model design and implementation
Monitors and improves the environment to maximize compute efficiency and minimize cost
Any Hadoop certification is a major plus
             
