Job Description:
Role: Python/Spark Data Engineer

Location: Carlsbad, CA

We are looking for a seasoned Python/Spark Data Engineer for an AWS Cloud environment. The candidate will work with solution architects, data modelers, and other data engineers to develop Spark-based data processing pipelines that integrate on-premises transactional data, telemetry data, and various external data into our AWS Cloud Data Platform (Redshift, S3, DynamoDB, RDS, etc.).

The candidate should have extensive experience in designing, implementing, and deploying scalable, high-performance data services in the Amazon Web Services (AWS) cloud.
In addition, the engineer should be able to translate functional and non-functional (technical) requirements into the design of data services.

Skill Requirements:

Strong experience in developing Python- or Scala-based Spark data processing pipelines for large-scale data integration in an AWS Cloud environment
Strong experience with the AWS Glue Data Catalog, Glue ETL, and Athena for data analysis
Experience with Databricks-based Spark data ingestion and data streaming is a plus
Experience in deploying data services in AWS using Lambda functions, Spark, and EMR
Experience in building high-performance data queries against both relational and non-relational (NoSQL) data sources: Redshift, MongoDB, Aurora
Experience in tuning the performance of data services
Experience with Kafka and Kinesis
Experience with continuous delivery and associated tooling (Ansible, Jenkins, Terraform)
Experience with microservice or event-driven architectures
Experience with Docker, Linux and shell scripting
Excellent communication and interpersonal skills