Job Description:
Job Title: Big Data Platform Engineer, Hadoop
Please review, and if interested, email a copy of your resume so we can set up a time to talk.
Department: Engineering, Support and DevOps team for Data Fabric/Compute Fabric and various other internally developed applications
Must have skills:
* 5+ years of experience in Spark/Hadoop
* 5+ years of experience programming in Java/Scala
* 5+ years of experience with Docker (Swarm/Kubernetes)
* 5+ years of experience with Python/Shell Scripting
* Experience with NoSQL/Graph databases, such as MongoDB, Ignite, and Druid
* Experience with Hadoop, Docker, Kubernetes, and ELK Stack a plus
* Experience with Hive
* Proficient understanding of distributed computing principles
* Proficient understanding of networking principles
* Understanding of underlying hardware & operating systems
Primary Responsibilities:
* Take ownership of component(s) of the workflows supported within our data ecosystem
* Interact with CM development teams across RBC to understand their application requirements and data access patterns, and assist them with expediting onboarding to various platforms from an engineering and development perspective
* Design and develop systems that meet our latency, volume, storage, and scale expectations, and enhance KPI metrics and monitoring capabilities for use across multiple teams
* Participate in meetings to help influence architectural, engineering, and development decisions
* Follow our Agile software development process with daily scrums and monthly sprints
* Work collaboratively on a cross-functional team with a wide range of experience levels
* Define best practices for Spark usage; work with teams to influence their architecture
* Provide expertise in Spark performance tuning
* Share knowledge, conduct education workshops, and train other employees
Duration: 12-month-plus contract
Location: Remote/Onsite (New York area)