Job Description:
Position: Data Engineer with Flink or Druid
Location: Denver, CO
Duration: 12+ Months
 
Data Engineer 
 
Primary Skills - Scala, Spark, AWS, Kafka, Flink, Druid
About the Data Engineer role
The Data Engineer is responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
 
Responsibilities
 
Create and maintain optimal data pipeline architecture.
Assemble large, complex data sets that meet functional and non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
Create data tools for the analytics and data science teams that assist them in building and optimizing our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.
 
Requirements
 
Bachelor's degree in Computer Science, Engineering, Data Analytics, or a related technical field.
5+ years of experience working with distributed data technologies (e.g., Hadoop, MapReduce, Spark, Kafka, Flink) for building efficient, large-scale 'big data' pipelines.
Strong software engineering experience, with proficiency in at least one of the following programming languages: Java, Python, Scala, or equivalent.
Experience implementing both real-time and batch data ingestion pipelines using best practices.
Experience building stream-processing applications using Apache Flink, Kafka Streams, or similar frameworks.
Experience with cloud computing platforms such as AWS, Google Cloud, etc.
 
 
M. Neeraja
US IT Recruiter
Keylent Inc | 1000 N West Street, Suite 1200 | Wilmington, DE 19801
             
