Job Description:
Responsibilities

Perform architecture design, data modeling, and implementation of Big Data platform and analytic applications
Analyze the latest Big Data analytics technologies and their innovative applications in both business intelligence analysis and new service offerings; bring these insights and best practices to the team
Stand up and expand data-as-a-service collaborations with partners in the US and other international markets
Apply deep learning capabilities to improve understanding of user behavior and data
Develop highly scalable and extensible Big Data platforms that enable the collection, storage, modeling, and analysis of massive data sets
Qualifications

Minimum Requirements

Over 8 years of engineering and/or software development experience
Hands-on experience with Apache Big Data components and frameworks
Deep technical expertise in Spark, Hive, Impala, and Kudu
Over 3 years of experience with Python (PySpark) and Scala
Strong DevOps skills and the ability to guide the client in the physical deployment of clusters; expert in Maven, GitHub, and Jenkins
Strong data modeling skills; end-to-end experience in at least two Big Data data warehouse projects
Expertise in real-time data streaming using Kafka and Spark Streaming, including the ability to deploy and monitor Kafka clusters (a minimal illustrative sketch follows this list)
Strong expertise in developing data-as-a-service platforms
Significant experience consuming data from third-party data APIs
3 years of experience developing and maintaining datasets for Tableau developers and data scientists
Experience in the architecture and implementation of large, highly complex projects
Deep understanding of cloud computing infrastructure and platforms
History of working successfully with cross-functional engineering teams
Demonstrated ability to communicate highly technical concepts in business terms and articulate business value of adopting Big Data technologies
Bachelor’s Degree
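
To give a concrete sense of the Kafka and Spark Streaming work named in the requirements above, below is a minimal PySpark Structured Streaming sketch that consumes a Kafka topic and parses JSON events. All specifics here (broker address, topic name, event schema, checkpoint path) are hypothetical placeholders for illustration, not details from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

# Submit with the Kafka connector package, e.g.:
#   spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:<spark-version> app.py
spark = (SparkSession.builder
         .appName("clickstream-ingest")  # hypothetical app name
         .getOrCreate())

# Illustrative event schema; a real project would derive this
# from its own data contracts.
event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("action", StringType()),
    StructField("ts", TimestampType()),
])

# Read the Kafka topic as an unbounded streaming DataFrame.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
       .option("subscribe", "user-events")                 # placeholder topic
       .load())

# Kafka delivers key/value as binary; cast the value and parse the JSON payload.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), event_schema).alias("e"))
          .select("e.*"))

# Write parsed events to the console for demonstration; a production pipeline
# would target a sink such as Hive or Kudu. Checkpointing gives fault tolerance.
query = (events.writeStream
         .format("console")
         .option("checkpointLocation", "/tmp/checkpoints/user-events")
         .outputMode("append")
         .start())

query.awaitTermination()
```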
             
