Job Description

Job Family Summary

Big Data Engineering combines DevOps techniques across many disciplines – including mathematics/statistics, computer programming, data engineering and ETL, software development, and high-performance computing – with traditional business expertise, with the goal of extracting meaning from data to optimize future business decisions. Individuals in this field should be lifelong learners who aim to become experts in several of these disciplines and proficient enough in the others to effectively design, build, and deliver analytics products that optimize future decisions.

The Big Data Engineer job family is accountable for DevOps engineering of data solutions, which includes designing and building systems for data storage and analytics that enable better decisions in support of the Client’s goals.

The individual must be adaptable enough to quickly develop new skills as these disciplines evolve, and must also assist in the selection and development of other team members.

Job Summary

This role is responsible for taking a DevOps approach to developing new systems for analyzing data; coding and developing advanced analytics solutions that make and optimize business decisions and processes; integrating new tools to improve analytics; and addressing new technical challenges using existing and emerging technology solutions.

Key Responsibilities

Executes complex functional work tracks and drives the execution of operational/technical objectives for data analytic outputs and business solutions.
Partners with other internal teams and peers in the department to ensure holistic Big Data solutions meet the needs of various stakeholders. 
With coaching, identifies new areas of data, research, and Big Data technology that can solve business problems.
Applies Big Data best practices to develop technical solutions used for analytical insights.
Acts as an influencer within the department on the effectiveness of Big Data solutions for solving business problems.
Supports innovation; regularly contributes new ideas to improve the people, processes, and technology that interact with the analytics ecosystem.
With coaching, develops and builds frameworks/prototypes that integrate big data and advanced analytics to make better business decisions.
Executes on Big Data requests to improve the accuracy, security, quality, completeness, speed of data, and decisions made from Big Data analysis.
Uses, learns, teaches, and supports a wide variety of Big Data and Data Science tools to achieve results (e.g., R, ETL tools, Hadoop, and others).
Uses, learns, teaches, and supports a wide variety of programming languages for Big Data and Data Science work (e.g., Java, C#, Python, and Perl).
Supports a clear communication strategy that keeps all relevant stakeholders informed and gives them an opportunity to influence the direction of the work.
Trains and develops other engineers.

Job Qualifications
2–5 years of experience as a Big Data Engineer.

Bachelor’s Degree in Computer Science, MIS, or related area, or equivalent work experience. Master’s Degree in a quantitative or scientific field would be a plus.

Experience using software development to drive data science and analytics efforts.
Experience with database integration, dataflow management, and ETL technologies.
Experience with various data types (e.g., relational, unstructured, hierarchical, linked “graph” data).
Experience developing, managing, and manipulating large, complex datasets.
Understanding of security risks and vulnerabilities in open source systems, leveraging tools and techniques to minimize risk. Where appropriate, provides recommendations and justifications that preserve speed of access for scientists and developers while minimizing risk.
Experience with, and a solid understanding of, Big Data ecosystems such as Hadoop, Spark, Kafka, and streaming technologies.


Ability to code and develop prototypes in languages such as Python, Scala, Java, C, R, and SQL.
Ability to communicate and present advanced technical topics to general audiences, including teams across multiple time zones.
Experience leading project teams of various skill levels.
Understanding of predictive modeling techniques is a plus.
Automation, configuration management (e.g., Ansible, Puppet), DevOps practices, and CI/CD pipelines (e.g., Jenkins).
Basic networking skills: switching, routing, firewalls, load balancing.
Linux containers / Docker.