Job Description:
Responsibilities

1. Development and support of Scala-, Python-, R-, and Java-based applications that serve as AI/ML model wrappers. Application development experience in big data environments is preferred.

2. Knowledge of the Hadoop ecosystem (e.g., Hive, Drill).

3. Experience building applications in Spark (Scala or PySpark).

4. Experience building and supporting REST API services with Python/Java.

5. Knowledge of version control, DevOps tools (Jenkins, uDeploy, etc.), and production automation tools (Autosys, etc.).

6. Responsible for deploying AI models to the ML platform, monitoring big data jobs via YARN logs, the Spark UI, etc., and providing post-production support.

7. Hands-on experience with Unix tools and shell scripting, and with developing in-scope technical documents.

8. Exhibit a high degree of flexibility in response to changing requirements.

9. Coordinate and test the deployment of new machine learning tools.

10. Coordinate with the Hadoop team to install the software required for machine learning.

11. Work with users to coordinate access to environments.

12. Work with business and technical leaders, sponsors, and analysts on problem understanding and requirements and architecture discovery; analyze complex business requirements; perform data analysis and modeling; and design and write functional and technical specifications.

13. Work collaboratively in large teams.

Required Skills:

4+ years of application development and implementation experience

2+ years of Python experience

2+ years of Hadoop/Big Data experience

1+ years of experience with machine learning tools

Desired Skills:

2+ years of Java experience

2+ years of Teradata experience

4+ years of Linux experience

3+ years of experience creating and supporting Java-, Python-, Scala-, and R-based applications and service frameworks

R administration and coding experience is a plus

Python administration and coding experience

Experience with open-source data science tools and applications is a plus

DataRobot or Digital Reasoning experience is a plus