Job Description:

Role:           Data Engineer

Duration:   12 months, extendable

Location:    New York

 

·         3 to 5 years of experience in Data Engineering using Python along with PySpark/Spark - MUST

·         3-5 years of experience building big data solutions with PySpark, preferably in the Data Analytics space - MUST

·         AWS is the preferred cloud platform; experience with it is good to have

·         Hands-on development experience building distributed Big Data solutions, including ingestion, caching, processing, consumption, logging & monitoring

·         Strong technical communication skills

·         Hands-on experience developing data engineering solutions in Python using S3, EMR, Glue, Athena, Kafka, and notebooks

Experience in the following is preferred:

·         Agile (Scrum) methodology

·         Experience developing SaaS application backends and APIs using a variety of tools

·         Experience turning abstract business requirements into concrete technical plans

·         Proficiency with algorithms (including time and space complexity analysis), data structures, and software architecture

·         Must be a quick learner, able to evaluate and embrace new technologies in the Big Data space
