Job Description:
BASIC QUALIFICATIONS
This position requires a Bachelor's Degree in Computer Science or a
related technical field, and 5+ years of relevant industry experience.

· 5+ years of relevant work experience in big data engineering, ETL,
data modeling, and data architecture.
· Expert-level skills in writing and optimizing SQL.
· Experience with big data technologies such as Hive/Spark, AWS EMR,
AWS Glue, AWS Lambda, and Kinesis.
· Proficiency in at least one programming or scripting language, such as Python, Ruby, or Java.
· Experience operating very large data warehouses and data lakes, and
building streaming data pipelines.
· Proven interpersonal skills and a reputation as a standout colleague.
· A real passion for technology: we are looking for someone who is
keen to apply their existing skills while trying new approaches.

PREFERRED QUALIFICATIONS
· Master’s or Bachelor’s degree in Computer Science, Engineering, or a related field.
· Deep expertise in ETL optimization and in designing, coding, and
tuning big data processes using Apache Spark or similar technologies.
· Experience building data pipelines and applications that stream and
process datasets at low latency.
· Demonstrated efficiency in handling data: tracking data lineage,
ensuring data quality, and improving data discoverability.
· Sound knowledge of distributed systems and data architecture (e.g.,
the lambda architecture): able to design and implement batch and
stream data processing pipelines, and to optimize the distribution,
partitioning, and MPP processing of high-level data structures (see
the sketch after this list).
· Knowledge of engineering and operational excellence using standard
methodologies.
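
For candidates wondering what the lambda-architecture bullet above looks like in practice, here is a minimal PySpark sketch pairing a batch aggregation with a low-latency streaming aggregation. It is an illustration only, not part of the role's requirements: every path, column name, and parameter (the s3://example-bucket/ locations, customer_id, event_date, and the built-in "rate" source standing in for Kinesis) is a hypothetical placeholder.

# Minimal lambda-style sketch: one batch job and one streaming job over
# the same logical dataset. All names and paths below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lambda-sketch").getOrCreate()

# --- Batch layer: aggregate historical events, laid out for MPP-style scans.
events = spark.read.parquet("s3://example-bucket/events/")  # hypothetical path
daily = (
    events
    .repartition(200, "customer_id")  # control distribution across executors
    .groupBy("customer_id", "event_date")
    .agg(F.count("*").alias("event_count"))
)
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/daily_counts/"  # hypothetical output location
)

# --- Speed layer: windowed counts over Spark's built-in "rate" source,
# standing in for a real stream such as Kinesis via a connector.
stream = spark.readStream.format("rate").option("rowsPerSecond", 100).load()
windowed = (
    stream
    .withWatermark("timestamp", "10 minutes")  # bound state kept for late data
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)
query = windowed.writeStream.outputMode("update").format("console").start()
query.awaitTermination(30)  # run briefly for demonstration purposes

The repartition and partitionBy calls are where the "distribution and partitioning" concern shows up: the batch output is laid out by event_date so downstream MPP engines can prune partitions instead of scanning everything.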