Job Description:
Hadoop Data Engineer
Hartford, CT
12-18 Months
Phone then Skype
If EAD, the visa expiration date must be 06/2021 or later. Thanks!
We are looking for a savvy Hadoop Data Engineer to join our growing team of
analytics experts. The contractor will be responsible for building and
optimizing our data and data pipeline architecture, as well as optimizing
data flow and collection for cross-functional teams. The ideal candidate is
an experienced data pipeline builder.
The Hadoop Data Engineer will support our software developers' and database
architects' initiatives and will ensure that an optimal data delivery
architecture is consistent across ongoing projects. They must be self-directed
and comfortable supporting the data needs of multiple teams and systems.
Fundamental Components:
* Develops large scale data structures and pipelines to organize,
collect and standardize data that helps generate insights and addresses
reporting needs.
* Collaborates with other data teams to transform data and integrate
algorithms and models into automated processes.
* Uses knowledge in Hadoop architecture, HDFS commands and experience
designing & optimizing queries to build data pipelines.
* Builds data marts and data models to support Data Science and other
internal customers.
* Analyzes current information technology environments to identify and
assess critical capabilities and recommend solutions.
* Experiments with available tools and advises on new tools in order
to determine the optimal solution given the requirements dictated by the
model/use cases.
Background/Experience desired:
* 3 or more years of progressively complex related experience.
* Has strong knowledge of large scale search applications and building
high volume data pipelines.
* Experience building data transformation and processing solutions.
* Knowledge in Hadoop architecture, HDFS commands and experience
designing & optimizing queries against data in the HDFS environment.
* Ability to understand complex systems and solve challenging
analytical problems.
* Ability to leverage multiple tools and programming languages to
analyze and manipulate data sets from disparate data sources.
* Strong collaboration and communication skills within and across teams.
* Strong problem solving skills and critical thinking ability.
SKILL SET desired:
* Hive
* Shell Script
* Unix
* Hadoop Concepts (Sqoop, YARN, MapReduce, etc.)
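To illustrate the Shell Script/Unix skills listed above, here is a minimal, hypothetical sketch of the kind of record-standardization step a data pipeline might run before loading a raw feed into Hive. The function name and sample data are invented for illustration; only standard Unix tools are used:

```shell
#!/bin/sh
# Hypothetical standardization step: lowercase each record,
# trim leading/trailing whitespace, and drop duplicate rows.
standardize() {
    tr '[:upper:]' '[:lower:]' \
      | sed 's/^[[:space:]]*//; s/[[:space:]]*$//' \
      | sort -u
}

# Example run against a small inline sample:
printf 'Hartford \n  hartford\nBoston\n' | standardize
# prints:
# boston
# hartford
```

In a real Hadoop pipeline, a step like this would typically run against files pulled from HDFS (e.g. via `hdfs dfs -cat`) rather than inline sample data.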