Job Description:

A successful Data Engineer will have:
• A Bachelor’s degree in Computer Science Engineering, Data Analytics, or a related technical field.
Technical requirements:
• Five+ years of experience working with distributed data technologies (e.g., Hadoop, MapReduce, Spark, Kafka, Flink) to build efficient, large-scale ‘big data’ pipelines;
• Strong software engineering experience with proficiency in at least one of the following programming languages: Java, Golang, Python, Scala, or equivalent;
• Experience implementing both real-time and batch data ingestion pipelines using best practices;
• Experience building stream-processing applications using Apache Flink, Kafka Streams, or similar;
• Experience with cloud computing platforms such as Amazon AWS, Google Cloud, etc.;
• Experience supporting and working with cross-functional teams in a dynamic environment;
• Experience with relational SQL and NoSQL databases, including Postgres and Cassandra;
• Experience with the ELK stack;
• Ability to work in a Linux environment.
Ideal qualifications:
• Experience building distributed, high-volume data services;
• Experience with the big data processing and analytics stack in AWS: EMR, S3, EC2, Athena, Kinesis, Lambda, QuickSight, etc.;
• Knowledge of data science tools and their integration with data lakes;
• Experience with container technologies such as Docker and Kubernetes.