Job Description:
Develop and maintain applications built on Big Data technologies such as Apache Spark, Zeppelin, Apache Kafka, S3, Presto, Python, and Hive in the AWS ecosystem
Write complex, efficient Spark pipelines that transform raw data sources into easily accessible models, coding across several languages such as Scala, Java, and SQL (see the pipeline sketch after this list).
Build custom integrations between cloud-based systems using APIs that leverage JAX-RS and the OpenAPI specification (a REST resource sketch appears at the end of this posting).
Profile data sources, create dimensional models, implement ETL, and load the results into the data warehouse and data lake
Build, manage, and deploy distributed systems in the AWS ecosystem: S3, EC2, EMR, SNS, Lambda
Create custom Spark SQL queries to support the analytics team.
Gather requirements from stakeholder teams and translate them into code; work with a distributed team.
Deliver clean, scalable, maintainable code, with monitoring in place.
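
As a rough illustration of the Spark work described above, here is a minimal sketch in Scala of a pipeline that reads raw JSON from S3, cleans and models it, runs a Spark SQL aggregation, and writes Parquet back to the data lake. All names (OrdersPipeline, the example-* buckets, the order fields) are hypothetical and not part of this posting.

    import org.apache.spark.sql.{SparkSession, functions => F}

    object OrdersPipeline {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("orders-pipeline").getOrCreate()

        // Read raw JSON events from S3 (bucket and path are hypothetical).
        val raw = spark.read.json("s3a://example-raw-bucket/orders/")

        // Clean and model: parse timestamps, derive a date column, drop bad rows.
        val cleaned = raw
          .withColumn("order_ts", F.to_timestamp(F.col("order_ts")))
          .withColumn("order_date", F.to_date(F.col("order_ts")))
          .filter(F.col("order_id").isNotNull)

        // Expose the cleaned data to Spark SQL, as an analytics team would use it.
        cleaned.createOrReplaceTempView("orders")
        val daily = spark.sql(
          """SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue
            |FROM orders
            |GROUP BY order_date""".stripMargin)

        // Write the modeled output to the data lake, partitioned by date.
        daily.write.mode("overwrite").partitionBy("order_date")
          .parquet("s3a://example-curated-bucket/orders_daily/")

        spark.stop()
      }
    }
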
Requirements
4-6 years of software development and data engineering experience, or a comparable skill level, with at least 2-3 years on AWS
Strong Spark and Scala software development skills are required for this position
Strong problem-solving skills and the ability to break large problems down into actionable steps
Experience with configuration and maintenance of distributed computing systems
Knowledge of the OpenAPI specification (Swagger)
Strong understanding of Web APIs / HTTP / REST / JSON
Experience with the design, development, deployment, versioning, and maintenance of secure RESTful APIs (see the sketch below)
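
For the API requirements above, the following is a minimal sketch, in Scala, of a JAX-RS resource of the kind this role would build. The route, class, and payload are hypothetical; a real service would register the resource with a JAX-RS runtime (for example, Jersey) and describe it with an OpenAPI (Swagger) document.

    import javax.ws.rs.{GET, Path, PathParam, Produces}
    import javax.ws.rs.core.{MediaType, Response}

    // Hypothetical resource exposing GET /orders/{id} as JSON.
    @Path("/orders")
    class OrderResource {

      @GET
      @Path("/{id}")
      @Produces(Array(MediaType.APPLICATION_JSON))
      def getOrder(@PathParam("id") id: String): Response = {
        // The lookup is stubbed; a real implementation would query a backing store.
        val body = s"""{"id": "$id", "status": "PENDING"}"""
        Response.ok(body).build()
      }
    }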