Job Description:
Position: Software Engineer

Location: Cleveland, OH

Duration: 9 months

Description:
Job Overview & Purpose:
Client in Cleveland is searching for Software Engineers to help build complex algorithms that implement state-of-the-art analytics in a Hadoop environment, using technologies such as MapReduce and Spark. Our Software Engineers bring a full-stack perspective to the entire engineering organization and make an impact on software development, automation, configuration, monitoring, and process improvements. Candidates will have the opportunity to make significant contributions to current and future versions of our Big Data platform, which transforms our partners’ clinical, financial, and operational data into actionable information that enables Population Health and Performance Management.

Successful candidates must have 2+ years of experience in, and be able to demonstrate, the following:
Proficiency with Big Data Hadoop ecosystem and associated technologies
Knowledge of software development in an object-oriented programming language: Java (required), Python, Ruby, etc.
Comfortable working in a Linux environment
Experience with Docker (Kubernetes nice to have)
Hands-on experience with testing frameworks and methodologies

Essential Functions:
Participate in the full lifecycle of large feature development through definition, design, implementation, and testing
Be an advocate for developing best practices in the organization, and bring in knowledge of new technologies to the team
Regularly contribute to ongoing improvements in engineering process and product development ecosystem
Foster an environment of continuous learning and improvement
Contribute to ongoing education initiatives and the on-boarding of new engineers
Take technical ownership for specific facets of the technology stack
Participate in designing and building large distributed systems which scale well
Participate in Research and Development (R&D) activities at the project level
Share areas of technical expertise
Improve processing time and reduce complexity
Demonstrate effective time management
Develop, troubleshoot, and optimize new and existing distributed code
Responsible for project and code quality, including participation in code reviews
Analyze and improve the performance of our distributed data processing system
Develop tools and utilities to maintain high system availability, monitor data quality, and provide statistics
Develop understanding of healthcare terminology and workflows, including the processing, aggregation, and analysis of data

Minimum Qualifications:
Bachelor’s Degree in Computer Science or related field
Knowledge of software development in an object-oriented programming language: Java, C++, C#, Python, PHP, Ruby, Perl, etc.
Strong understanding of data structures, algorithms, and complexity analysis
Comfortable coding and working in a Unix environment
Proven analytical and troubleshooting skills
Proven verbal and written communication skills, including the ability to articulate complex technical concepts to non-technical stakeholders
Ability to work gracefully and effectively in high-pressure situations
Eagerness to learn new things, continually growing and expanding personal abilities

Knowledge, Skills, Abilities Preferred:
Master’s Degree or higher in Computer Science or related field
3 or more years of software development experience
Experience developing in Hadoop ecosystems or other distributed computing systems
Experience with Extract, Transform, and Load (ETL) principles
Experience developing and maintaining APIs for internal and external users
Experience with healthcare IT systems and workflows