Job Description:
5+ years of programming experience required, preferably in Scala, Python, and PySpark
3+ years of enterprise experience designing, building, and operating in-production big data/stream processing systems and/or enterprise data warehouses is required
Experience handling data with Databricks
Proficiency with HDFS and MapReduce is preferred
2+ years of experience with Apache Spark/Storm and NoSQL storage is preferred
Experience with big data platforms is preferred
Knowledge of machine learning/distributed systems is preferred
Knowledge of front-store/customer loyalty programs is preferred