Job Description:
Job Requirements:
Degree Level: BS or MS in Computer Science or another engineering discipline
3+ years industry experience building and operating distributed data systems in production
Very strong programming skills in Scala or Java
Strong understanding of tuning and performance optimization of Apache Spark jobs
Experience integrating data from multiple data sources
Experience with messaging systems such as Kafka or RabbitMQ
Ability to manage and troubleshoot ongoing issues in a Spark/Hadoop cluster

Desired:
Familiarity with distributed machine learning frameworks like Spark MLlib
General understanding of machine learning / deep learning methods