Job Description:
Location: Wilmington, DE/ Houston, TX/ NYC, NY

Number of openings: 6

Interview process: 1-2 Skype interviews with the implementation partner, plus one Skype interview with the end client.

Mandatory skills: Python and Scala

Optional skills: Spark & Hadoop



We are looking for a top-notch, driven, and dedicated engineer to work on converting compute-intensive Spark workloads from CPU to GPU.



Responsibilities:

- Implement the core technologies upon which the rest of the tools are built. This includes systems to capture, inspect, and modify GPU usage through Metal.

- Build tool-specific functionality within the Metal framework.

- Help design and implement algorithms to analyze GPU workloads using Monte Carlo-simulated data.

- Work with Apache Spark frameworks accelerated by NVIDIA GPUs to provide a foundation for deploying financial and risk models in production while meeting the requested business SLAs.



Qualifications:

- Excellent programming skills and knowledge of Scala and Python

- Thorough experience in big-data environments, particularly with Spark & Hadoop

- Knowledge of GPU APIs such as RAPIDS, Numba, and CUDA

- Excellent software design, problem-solving, and debugging skills

- Knowledge of GPU hardware and software architecture is a major plus