Job Description :
Hadoop with Spark, Amazon EMR, Scala - 10 OPENINGS
MUST BE SPARK DEVELOPERS for this data role!!
PLANO, TX (locals highly preferred)

Minimum Requirements:
Hadoop – 2 years – Ideally 4 years
Spark – 1 Year – Ideally 2+ years (including Scala)
Amazon EMR – 6 months – Ideally 1-2 years (this may narrow the candidate pool; I am waiting to see if they are open to Spark without AWS as an option, but for now this is critical). Must have hands-on experience with Amazon EMR, including its more complex aspects, not just surface knowledge.
Nice to have: broader big data ecosystem experience, including strong MapReduce, R, Python, Java, YARN, and related tools.

MUST HAVE full-time, concentrated SPARK experience, not a side role. Must have a deep understanding and be able to answer 90% of the below questions with relative ease.
Must be able to explain advanced optimization, handling large-scale solutions, and parallelization design, and be able to explain, in a solid and confident way, the difference between Spark and MapReduce (see the sketch after this section for the kind of code a candidate should be able to walk through).
If they can answer the below questions and have solid, hands-on, focused development experience of 1+ years on Spark / Amazon, then we should have a good candidate.
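For reference, a minimal sketch (hypothetical S3 paths and column names) of the kind of Spark-on-Scala snippet a strong candidate should be able to discuss: caching a repartitioned dataset so that multiple aggregations reuse it in memory, which is the core contrast with MapReduce, where each pass writes intermediate results back to disk.

import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

object ClickAggregation {
  def main(args: Array[String]): Unit = {
    // Typically submitted as a step on an EMR cluster.
    val spark = SparkSession.builder()
      .appName("ClickAggregation")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical input path; on EMR this would usually be an S3 location.
    val clicks = spark.read.parquet("s3://example-bucket/clicks/")

    // Repartition by the grouping key to control shuffle parallelism, then cache:
    // both aggregations below reuse the in-memory partitions instead of
    // recomputing from source (a MapReduce pipeline would write intermediate
    // output to HDFS between jobs).
    val byUser = clicks
      .repartition(200, $"user_id")
      .persist(StorageLevel.MEMORY_AND_DISK)

    val clicksPerUser   = byUser.groupBy($"user_id").count()
    val clicksPerDomain = byUser.groupBy($"domain").count()

    clicksPerUser.write.mode("overwrite").parquet("s3://example-bucket/out/per_user/")
    clicksPerDomain.write.mode("overwrite").parquet("s3://example-bucket/out/per_domain/")

    byUser.unpersist()
    spark.stop()
  }
}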


Provide the # years the candidate has on each of the following skills:
Data Engineering experience
AWS EMR
Hadoop
Scala
Spark
Pig
Hive
SQL
