Job Description:

Hadoop Data Engineer

Top Skills:

- Python

- SQL

- AWS

- Spark

Job Requirements/Description:

5+ years of relevant work experience in the Data Engineering field

3+ years of experience working with Hadoop and Big Data processing frameworks (Spark, Hive, Flink, Airflow, etc.)

2+ years of strong experience with relational SQL and at least one programming language such as Python, Scala, or Java

Experience working in an AWS environment, primarily with EMR, S3, Kinesis, Redshift, Athena, etc.

Experience building scalable, real-time, high-performance cloud data lake solutions

Experience with source control tools such as GitHub and related CI/CD processes.

Experience working with Big Data streaming services such as Kinesis, Kafka, etc.

Experience working with NoSQL data stores such as HBase, DynamoDB, etc.

Experience with data warehouses/RDBMS such as Snowflake and Teradata


Acclive is an IT services company based in Arlington, Virginia, working with Fortune 500 clients. Acclive works extensively across major industries such as BFSI, Oil and Gas, Utilities, Healthcare, and more. Acclive is focused on providing customer-centric solutions and operates an offshore-onshore model.
