Job Description:

Role: Big Data with Python

Location: San Ramon, CA

Duration: Long Term


Required Skills:

Python, Hadoop, Spark, Kafka, Scala


The successful candidate will build data solutions using state-of-the-art technologies to acquire, ingest, and transform big datasets. You will design and build data pipelines to support applications and data science projects, following software engineering best practices.

Design and develop data applications using big data technologies (Hadoop, MapR, AWS) to ingest, process, and analyze large, disparate datasets. Build robust data pipelines in the cloud using Airflow, Spark/EMR, Kinesis, Kafka, Lambda, or other technologies.

Build the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using SQL and AWS "big data" technologies.

Work with data and analytics experts to deliver greater functionality in our data systems. Implement architectures to handle large-scale data and its organization.
