Job Description:
Hello,
Greetings from KrishAnsh!
We are looking for a candidate for the position of Senior Big Data Engineer.
Position: Senior Big Data Engineer
Duration: Long term Contract
Location: Bristol, Connecticut
Required Skillset:
Basic Requirements:
Not your first rodeo – Have 5+ years of experience developing with a mix of languages (Java, Scala, Python, etc.) and open source frameworks to implement data ingest, processing, and serving technologies on a near-real-time basis.
Data and API ninja – You are also very handy with big data frameworks such as Hadoop and Apache Spark, NoSQL systems such as Cassandra or DynamoDB, and streaming technologies such as Apache Kafka; you understand reactive programming and dependency injection frameworks such as Spring for developing REST services.
Have a technology toolbox – Hands-on experience with newer technologies relevant to the data space such as Spark, Kafka, and Apache Druid (or any other OLAP database).
Cloud first – Plenty of experience developing and deploying in a cloud-native environment, preferably AWS.
Embrace ML – Work with data scientists to operationalize machine learning models and build apps that harness the power of machine learning.
Problem solver – Enjoy new and meaningful technology or business challenges that require you to think and respond quickly.
Passion and creativity – Are passionate about data, technology, and creative innovation.
Required Education, Experience/Skills/Training:
Required
Bachelor's degree or higher in Computer Science or a related technical field, or equivalent job experience.
Preferred
Master's degree in Computer Science or a similar field.
Prior experience building internet-scale platforms – handling petabyte-scale data and operationalizing clusters with hundreds of compute nodes in a cloud environment.
Experience operationalizing machine learning workflows at scale is a huge plus.
Experience with content personalization/recommendation, audience segmentation for linear-to-digital ad sales, and/or analytics.
Experience with open source technologies such as Spring, Hadoop, Spark, Kafka, Druid, Pilosa, and YARN/Kubernetes.
Experience working with data scientists to operationalize machine learning models.
Proficiency with agile development methodologies, shipping features every two weeks. It would be awesome if you have a robust portfolio on GitHub and/or open source contributions you are proud to share.