Job Description:

Big data engineers develop, test, and maintain big data solutions for a company. Their job is to gather large amounts of data from multiple sources and ensure that downstream users can access it quickly and efficiently. Essentially, big data engineers ensure the company's data pipelines are scalable, secure, and able to serve multiple users.

Technical Skills:

·        In-depth experience in SQL, Hive, Linux-Bash, Sqoop, HDFS

·        Experience in Spark, Kafka, HBase, Python, Phoenix, Oracle

·        Experience deploying projects/modules to live production environments

·        Experience with AWS and EMR (Hadoop) is a solid plus

·        Exposure to ETL/BI tools such as Informatica and Cognos

·        High-level analytical and problem-solving skills

·        Should have worked with Hadoop updates, managed/external tables, data de-duplication, incremental vs. historical loads, and ACID tables


Soft Skills:

·        Good communication skills and the ability to take direction

·        Ability to work toward team goals and understand business requirements

·        Familiarity with working in an Agile (Scrum) environment, including user stories and estimation

·        Ability to work on multiple projects at the same time


Preferred:

·        AWS technologies such as Glue, EMR, Step Functions, Data Pipeline, Athena, SQS, SNS, RDS, Redshift, DynamoDB, Aurora