Job Description:

Job Title: Big Data Engineer

Duration: Contract

Location: Tysons, VA

Skills

  • Bachelor’s degree in Computer Science, Information Systems, or a related field.
  • 7+ years of work experience with Hadoop (Cloudera) based Data Lake solutions.
  • 5+ years of data ingestion/processing experience using shell scripting, Python, Scala, Hive, Spark, and PySpark.
  • 1-2 years of work experience with cloud-native and cloud-agnostic Data Lake/Data Warehousing solutions, preferably AWS S3, AWS Redshift, and Snowflake.
  • Proficient in an Agile-based delivery approach.
  • Work experience with AWS Glue, AWS Data Pipeline, Snowpipe, SSIS, or Java is a plus.
  • Ability to work as part of a team, self-motivation, adaptability, and a positive attitude.
  • Must have strong communication skills.
     

Responsibilities

  • Design and develop data ingestion and processing code using Python, PySpark, R, and Hive on the Cloudera CDH platform.
  • Design and support cloud migration proof-of-concept (POC) work.
  • Create and update design specifications and reference architecture documents to accelerate solution development.
  • Innovate on the Cloudera Data Platform: research related technologies, develop new concepts, prototype, and deliver implementations.
  • Participate in testing and peer code reviews to identify bugs and ensure code reusability.
  • Automate the deployment of solutions using shell scripts, Python, and Oozie.
  • Work with internal subject matter experts to define requirements for new demo environments.
  • Collaborate with the Apache community on Hadoop and other related open-source projects.
  • Work with the IT Change Management group to promote developed code/scripts from non-production to production environments.
  • Work with the Architecture teams (Application, Security, Infrastructure, Data) to obtain their approval of the designed solutions.
