Job Description:

Role: Software Engineer (AI/ML)

Location: Frisco, TX - 100% REMOTE

Duration: 6 Months

Visa: No OPT, CPT, H1B, H4

Job Description

Engineers in this role accomplish business objectives by monitoring system functions across all points of system processing, identifying processing problems, and assisting in resolving them. The engineer works closely with internal partners and external clients, providing technical assistance on products and services, department reporting, and process trending. They are responsible for optimizing Python/PySpark jobs in a Hadoop ecosystem, working with large data sets and pipelines using Hadoop ecosystem tools and libraries such as Spark, HDFS, YARN, Hive, and Oozie, and designing and developing cloud applications on AWS, OCI, or similar platforms.

Required Skills -

- Python/PySpark
- Big Data experience
- Ability to communicate clearly with key stakeholders
- Data aggregation, standardization, linking, quality-check mechanisms, and reporting
- Critical thinking
- Healthcare experience
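As a rough illustration of the "data aggregation, standardization, linking, quality check mechanisms" skill listed above, here is a minimal sketch in plain Python. All field names, record shapes, and validation rules are hypothetical, chosen only for the example; they are not taken from the posting or from any Cotiviti system.

```python
from collections import defaultdict

def standardize(record):
    """Normalize a raw record: trim whitespace, upper-case the
    (hypothetical) state code, and carry the claim amount through."""
    return {
        "member_id": (record.get("member_id") or "").strip(),
        "state": (record.get("state") or "").strip().upper(),
        "claim_amount": record.get("claim_amount"),
    }

def quality_check(record):
    """Return a list of data-quality issues; empty means the record passes."""
    issues = []
    if not record["member_id"]:
        issues.append("missing member_id")
    if record["claim_amount"] is None or record["claim_amount"] < 0:
        issues.append("invalid claim_amount")
    return issues

def aggregate(records):
    """Standardize records, reject failures, and sum amounts per member.

    Returns (totals_by_member, rejected), where rejected pairs each bad
    raw record with its list of issues for downstream reporting."""
    totals = defaultdict(float)
    rejected = []
    for raw in records:
        rec = standardize(raw)
        issues = quality_check(rec)
        if issues:
            rejected.append((raw, issues))
        else:
            totals[rec["member_id"]] += rec["claim_amount"]
    return dict(totals), rejected
```

In a real pipeline these same steps would typically run as PySpark transformations over distributed data rather than a single-process loop; the sketch only shows the shape of the aggregate/standardize/quality-check split.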

Job Duties -

Key Responsibilities:
• Develop high-quality software modules for the Cotiviti, Inc. product suite
• Conduct unit and integration testing
• Analyze and resolve software related issues originating from internal or external customers
• Analyze requirements and specifications and create detailed designs for implementation
• Independently troubleshoot and resolve issues with minimal or no guidance
• Collaborate closely with offshore development teams to provide technical translation of business requirements and ensure software construction adheres to Cotiviti best-practice coding techniques
• Execute all appropriate facets of the Cotiviti Software Development Lifecycle with a desire for continuous improvement
• Mentor other developers
• Ability to work in a cross-functional global team environment
• Complete all responsibilities as outlined in the annual performance review and/or goal setting
• Complete all special projects and other duties as assigned.
• Must be able to perform duties with or without reasonable accommodation.

Job Requirements -

Qualifications:
Software Engineer – Big Data
• 5+ years in Python/PySpark
• 5+ years optimizing Python/PySpark jobs in a Hadoop ecosystem
• 5+ years working with large data sets and pipelines using Hadoop ecosystem tools and libraries such as Spark, HDFS, YARN, Hive, and Oozie
• 5+ years designing and developing cloud applications on AWS, OCI, or similar
• 5+ years with distributed/cluster computing concepts
• 5+ years with relational databases: MS SQL Server or similar
• 3+ years with NoSQL databases: HBase (preferred)
• 3+ years creating and consuming RESTful web services
• 5+ years developing multi-threaded applications: concurrency, parallelism, locking strategies, and merging datasets
• 5+ years in memory management, garbage collection, and performance tuning
• Strong knowledge of shell scripting and file systems.
• Preferred: knowledge of CI tools like Git, Maven, SBT, Jenkins, and Artifactory/Nexus
• Knowledge of building microservices and thorough understanding of service-oriented architecture
• Knowledge of container orchestration platforms and related technologies such as Docker, Kubernetes, and OpenShift
• Understanding of prevalent Software Development Lifecycle Methodologies with specific exposure or participation in Agile/Scrum techniques
• Strong knowledge and application of SAFe agile practices (preferred)
• Flexible work schedule.
• Experience with project management tools like JIRA.
• Strong analytical skills
• Excellent verbal, listening and written communication skills
• Ability to multitask and prioritize projects to meet scheduled deadlines
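The multi-threading qualification above (concurrency, parallelism, locking strategies, and merging datasets) can be sketched with Python's standard library. This is a minimal, hypothetical example of one common locking strategy, not code from any Cotiviti product: each worker builds a private partial result, and only the short merge step is serialized under a lock.

```python
import threading

def count_words_concurrently(chunks):
    """Count words across text chunks, one worker thread per chunk.

    Each worker tallies its own chunk into a private dict (no contention),
    then merges that partial result into the shared totals while holding
    a lock, keeping the critical section as small as possible."""
    totals = {}
    lock = threading.Lock()

    def worker(chunk):
        partial = {}
        for word in chunk.split():
            partial[word] = partial.get(word, 0) + 1
        with lock:  # only the merge into shared state is serialized
            for word, n in partial.items():
                totals[word] = totals.get(word, 0) + n

    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return totals
```

Note that CPython's GIL limits CPU-bound parallelism here; the same partial-then-merge pattern is what scales out in Spark, where executors compute partial aggregates per partition and the framework merges them.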

Desired Skills & Experience -

Healthcare experience isn't a requirement, but it is high on the client's list of nice-to-haves; candidates who have it will take priority.
