Job Description:
Mandatory Skills:
Python/Java programming skills
Apache Spark, Hive & Hadoop
Working knowledge of the Hadoop/big data ecosystem

Responsibilities:
Design, implement, and support a platform providing ad hoc access to large datasets.
Build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark.
Build analytics tools and applications on top of data pipelines that deliver actionable insights to the business.
Take ownership of business deliverables.
Work with stakeholders, including product and data science teams, to resolve data-related technical issues and support their infrastructure needs.
Mentor junior data engineers.
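The ETL responsibility above follows the standard extract-transform-load pattern. As a minimal sketch only, the example below illustrates that pattern using plain Python standard-library code on hypothetical sales data; a production pipeline for this role would instead use Spark DataFrames reading from Hive/HDFS and writing to a warehouse table.

```python
import csv
import io
from collections import defaultdict

# Hypothetical raw input for illustration; a real pipeline would read
# from HDFS or a Hive table via Spark rather than an inline string.
RAW_CSV = """region,amount
north,100
south,250
north,50
"""

def extract(text):
    """'E' step: parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """'T' step: aggregate amounts per region."""
    totals = defaultdict(int)
    for row in rows:
        totals[row["region"]] += int(row["amount"])
    return dict(totals)

def load(totals):
    """'L' step: here we simply return the result; a real pipeline
    would write it to a warehouse or reporting table."""
    return totals

result = load(transform(extract(RAW_CSV)))
print(result)  # → {'north': 150, 'south': 250}
```

The same three-stage structure carries over directly to Spark: `extract` becomes a `spark.read` call, `transform` a chain of DataFrame operations such as `groupBy(...).agg(...)`, and `load` a `DataFrame.write` to the target table.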

Requirements:
Python/Java programming skills (must have)
Apache Spark, Hive & Hadoop (must have)
Knowledge of Hadoop/big data (must have)
Strong analytical and SQL skills
Strong project management and organizational skills
Knowledge of Git, Jira, and Jenkins