Job Description :
Job Title: Lead Engineer (Big Data and Hadoop)

Location: Dallas & Plano, TX

Job Type: C2C

JD:

Digital: Big Data and Hadoop Ecosystems, Digital: IBM BigData

Experience: 6-8 years


The Lead Engineer, Enterprise Analytics (Advanced Software Engineer) will report to a Senior Manager, Enterprise Analytics. Using Big Data technology, you will be responsible for designing, developing, and implementing data analysis using cloud architecture, and for applying data science techniques for data exploration.

Primary Responsibilities:

Conduct data analysis, research, and data modeling using structured and unstructured data

Process, cleanse, and verify the integrity of data used for analysis

Perform ad-hoc analysis and present results in a clear manner

Design rich data visualizations to communicate complex ideas to customers or company leaders

Develop automated methods for ingesting large datasets into an enterprise-scale analytical system using Sqoop, Spark, and Kafka

Identify technical implementation options and issues

Play a key role in defining the future IT architecture and developing roadmaps for the company's systems, platforms, and/or processes across disciplines

Provide technical leadership and coaching to other engineers and architects across the organization

Partner and communicate cross-functionally across the enterprise

Provide the highest level of expertise for the development, specification, and/or implementation of analytical solutions (e.g., R, Python, Tableau, and Datameer)

Explain technical issues to senior leaders in the company, including the board, in non-technical, understandable terms

Interact with business teams to gather, interpret, and understand their business needs, creating design specifications

Foster the continuous evolution of best practices within the development team to ensure data standardization and consistency

Core Competencies & Accomplishments:

10+ years of professional experience

2+ years of experience with Big Data technology and analytics

Experience in one or more languages (e.g., Python or Java)

Experience with data visualization tools like Tableau

Experience with applied data sciences techniques, including but not limited to, machine learning approaches and statistical modeling

Proficiency in query languages such as SQL and Hive

In-depth knowledge of statistical software packages (e.g., SAS and R)

Understanding of Big Data tools (e.g., NoSQL databases, Hadoop, HBase)

Understanding of data preparation and manipulation using the Datameer tool

Knowledge of SOA, IaaS, and Cloud Computing technologies, particularly in the AWS environment

Experience with Hadoop/Hive, Spark, and Sqoop highly desirable

Experience with continuous software integration, testing, and deployment

Experience in agile software development paradigm (e.g., Scrum, Kanban)

The ability to work within a dynamic programmatic environment with evolving requirements and capability goals

Self-motivated and capable of working with little or no supervision

Maintenance of up-to-date knowledge in the appropriate technical area

Strong written and verbal communication skills