Job Description:

Job Summary:

Spark/Scala developer position with key skills in Spark, Kafka, and big data platform tools, e.g., Cloudera, Hortonworks, or Databricks.

Experience with Databricks would be a plus.

Essential Job Functions:

  • Design and develop data ingestion pipelines.
  • Perform data migration and conversion activities.
  • Develop and integrate software applications using suitable development methodologies and standards, applying standard architectural patterns and taking into account critical performance characteristics and security measures.
  • Collaborate with Business Analysts, Architects, and Senior Developers to establish the physical application framework (e.g. libraries, modules, execution environments).
  • Perform end-to-end automation of ETL processes for the various datasets ingested into the big data platform.

Required:

  • Scala
  • SQL
  • Spark/Spark Streaming
  • Big data platform tools (e.g., Cloudera, Hortonworks, Databricks)
  • Linux
  • Python

Other Responsibilities:

  • Document and maintain project artifacts.
  • Maintain comprehensive knowledge of industry standards, methodologies, processes, and best practices.
  • Other duties as assigned.

Minimum Qualifications and Job Requirements:

  • Must have a bachelor’s degree in Computer Science or a related IT discipline.
  • Must have at least 8 years of IT development experience.
  • Must have at least 3 years of Spark programming experience in Scala or Python.
  • Must have relevant professional experience working with big data toolsets.
  • Knowledge of standard software development methodologies such as Agile and Waterfall.
  • Strong communication skills.
  • Must be willing to flex work hours as needed to support application launches and manage production outages.