Role: Python Developer with Spark/Scala Experience
Experience: 8-10 Years
Location: Charlotte, NC
Job Description:
The role is for a Scala/Python and Spark developer to build and deploy analytics models on an AWS-based Data Lake. Existing models built on Spark 1.6 need to be remediated to Spark 2.x so they can run on EMR clusters; the role will also help build new models.
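
For illustration only, a minimal sketch of the kind of remediation involved (the app name and S3 path are hypothetical placeholders): Spark 1.6's separate SQLContext/HiveContext entry points are replaced by the unified SparkSession introduced in Spark 2.x.

  // Spark 1.6 style: SparkContext plus a separate HiveContext
  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql.hive.HiveContext

  val conf = new SparkConf().setAppName("analytics-model") // name is a placeholder
  val sc = new SparkContext(conf)
  val hiveContext = new HiveContext(sc)
  val scores = hiveContext.read.parquet("s3://example-bucket/models/scores/") // placeholder path

  // Spark 2.x style: a single SparkSession replaces SQLContext/HiveContext
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder()
    .appName("analytics-model")
    .enableHiveSupport() // takes over HiveContext's role
    .getOrCreate()
  val scoresV2 = spark.read.parquet("s3://example-bucket/models/scores/")
  // Other common remediations in this migration: registerTempTable ->
  // createOrReplaceTempView, unionAll -> union, and DataFrame becoming
  // an alias for Dataset[Row].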

Skillset needed:
3+ years of Spark 1.6 and Spark 2.x development, specifically remediating code to the newer version.
Hands-on experience with back-end programming, specifically Scala and Python.
Knowledge of working with EMR, Hive, and S3.
Knowledge of shell scripting.
Ability to write ETL jobs using Spark (see the sketch after this list).
Ability to handle performance and memory issues.
Good knowledge of database structures, theories, principles, and practices.
Hands-on experience with PostgreSQL.
Familiarity with data loading tools like Sqoop.
Analytical and problem-solving skills applied to the Big Data domain.
Ability to write high-performance, reliable, and maintainable code.
Proven understanding of RDS or other columnar databases.
Good grasp of multi-threading and concurrency concepts.
Knowledge of Atlassian products is preferred.
Experience moving legacy data warehouse data into S3 on the AWS Cloud.
Experience providing best-practice guidance, supervision, and support for enterprise analytics models deployed on an AWS-based Data Lake.
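
As a concrete illustration of the ETL item above, here is a minimal Spark 2.x sketch in Scala. The Hive table, column names, and S3 bucket are hypothetical placeholders, not part of the actual environment.

  import org.apache.spark.sql.{SaveMode, SparkSession}
  import org.apache.spark.sql.functions._

  object DailySalesEtl {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("daily-sales-etl")
        .enableHiveSupport() // read source tables registered in the Hive metastore
        .getOrCreate()

      // Extract: read a (hypothetical) Hive table
      val orders = spark.table("warehouse.orders")

      // Transform: filter completed orders and aggregate per day
      val daily = orders
        .filter(col("status") === "COMPLETE")
        .groupBy(col("order_date"))
        .agg(sum(col("amount")).as("daily_total"))

      // Load: write partitioned Parquet to a (hypothetical) S3 location
      daily.write
        .mode(SaveMode.Overwrite)
        .partitionBy("order_date")
        .parquet("s3://example-data-lake/curated/daily_sales/")

      spark.stop()
    }
  }

The same read-transform-write pattern applies when migrating legacy warehouse data into S3: read from the legacy source, reshape, and write Parquet to the lake's S3 buckets.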