Job Description:
Responsibilities:
Develop solutions to big data problems using common tools in the ecosystem.
Develop solutions for real-time and offline event collection from various systems.
Develop and maintain a real-time architecture supporting large volumes of data from varied sources, and perform analysis within it.
Analyze massive amounts of data and help drive prototype ideas for new tools and products.
Design, build, and support APIs and services exposed to other internal teams.
Employ rigorous continuous delivery practices managed under an agile software development approach.
Ensure a high-quality transition to production and solid production operation of the software.
Skills & Requirements:
5+ years of programming experience
Bachelor's or Master's degree in Computer Science, Statistics, or a related discipline
Experience in software development of large-scale distributed systems, including a proven track record of delivering backend systems that operate within a complex ecosystem
Experience working on big data platforms in the cloud or on traditional Hadoop platforms
Experience with SQL and NoSQL databases
Experience in microservice development
Experience in RESTful API development
Strong experience with AWS core and container technologies and with real-time streaming (such as Kafka or Kinesis)
Experience in one or more languages (Python, Scala/Java) and with Spark batch, streaming, and ML workloads
Experience with performance tuning at scale
Experience with at least one analytics tool: Presto/Athena, QuickSight, or Tableau
Test-driven development/test automation, continuous integration, and deployment automation
Enjoy working with data – data analysis, data quality, reporting, and visualization
Good communicator, able to analyze complex issues and technologies and articulate them clearly and engagingly
Excellent design and problem-solving skills, with a strong bias for architecting at scale