Job Description:

Location: Atlanta, Georgia


Duration: 6 months (fixed)

Travel activity: 50%

Interview: Phone or video call


The client is looking to hire an experienced and highly motivated AWS Big Data engineer to design and develop data pipelines using AWS Big Data tools and services and other modern data technologies. In this role, you will play a crucial part in shaping big data and analytics initiatives for many customers for years to come.

Must have:

PySpark - 2 years of experience

Glue - 1 year of experience

Python - 2 years of experience

Spark - 1 year of experience

Redshift - 1 year of experience


About the Opportunity

You are a motivated data engineer who is passionate about building at scale on Amazon Web Services (AWS). You thrive at simplifying hard problems and can articulate solutions to both technical and non-technical stakeholders.

Key Responsibilities

Build end-to-end big data pipelines on AWS, including:

Ingestion/replication via AWS DMS from traditional on-premises RDBMSs (e.g., Oracle, MS SQL Server, IBM Db2, MySQL, PostgreSQL)

Real-time ingestion and processing with Kinesis Data Streams, Kinesis Data Firehose, and Kinesis Data Analytics

CDC, ETL, and analytics via AWS Glue, EMR, Spark, Presto, Athena, Flink, Python/PySpark, Scala, and Zeppelin

Refactoring of existing RDBMS scripts (e.g., PL/SQL, T-SQL, PL/pgSQL) to PySpark or Scala

Buildout of data warehouses and published data sets using Redshift, Aurora, RDS, and Elasticsearch

Scripting with AWS Lambda
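To illustrate the extract-transform-load pattern the responsibilities above describe, here is a minimal, dependency-free Python sketch. It uses the standard library's sqlite3 to stand in for both the source RDBMS and the warehouse target; the "orders" and "region_totals" tables and their columns are hypothetical, and a real pipeline would run the transform in PySpark/Glue and load into Redshift or Aurora rather than in-process:

```python
import sqlite3

# Extract: pull rows from a source RDBMS table (sqlite3 stands in here for
# Oracle/PostgreSQL/etc.; the "orders" table and its columns are hypothetical).
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
src.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "east", 10.0), (2, "west", 20.0), (3, "east", 5.0)],
)
rows = src.execute("SELECT region, amount FROM orders").fetchall()

# Transform: aggregate amount per region in plain Python (in production this
# step would be a PySpark/Glue job, not an in-process loop).
totals = {}
for region, amount in rows:
    totals[region] = totals.get(region, 0.0) + amount

# Load: write the aggregated result to a "warehouse" table (standing in for
# Redshift/Aurora).
dw = sqlite3.connect(":memory:")
dw.execute("CREATE TABLE region_totals (region TEXT, total REAL)")
dw.executemany("INSERT INTO region_totals VALUES (?, ?)", sorted(totals.items()))

print(dict(dw.execute("SELECT region, total FROM region_totals")))
# → {'east': 15.0, 'west': 20.0}
```

The same three stages map one-to-one onto the managed services listed above: DMS/Kinesis for extract, Glue/EMR/Spark for transform, and Redshift/Aurora for load.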

Experience Requirements

5+ years of experience in software development with Python, Scala, or Java

3+ years of database development experience with RDBMS

2+ years of database development experience within the Hadoop ecosystem, including Spark

2+ years of hands-on data engineering on AWS

AWS Big Data Specialty and/or Solutions Architect Professional certification is a plus

A Bachelor's degree in Computer Science or equivalent from an accredited college