Job Description:
Lead Hadoop Developer
Location: Dearborn, MI
Duration: 12 months
Must interview in person
H-1Bs are fine
Key skills/requirements:
1. Thorough knowledge of the Hadoop ecosystem with emphasis on Spark
(transformations, persistence, and streaming; a brief sketch follows this
list).
2. Deep understanding of performance bottlenecks in the data streaming
supply chain spanning Hadoop and IBM queuing tools.
3. The bearing of a lead developer who has authored blogs or contributed
solutions to other blog posts. Bring that skill to the team for day-to-day
suggestions on improvements, and provide new solutions to emerging
technology problems.
4. Coach/mentor junior team members as necessary.
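To make item 1 concrete, here is a minimal sketch of the three Spark
concepts named there (a transformation, persistence, and a streaming read)
using the standard Spark SQL and Structured Streaming APIs in Scala. The
input path, column names, and socket source are illustrative assumptions,
not details of this role or its codebase.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object SparkConceptsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spark-concepts-sketch")
      .getOrCreate()

    // Transformation: a lazy filter/derive over a batch DataFrame.
    // The path and column names are illustrative placeholders.
    val events = spark.read.json("/data/events")
    val active = events
      .filter(col("status") === "active")
      .withColumn("seen_at", current_timestamp())

    // Persistence: cache so repeated actions reuse the computed result
    // instead of re-reading and re-filtering the source.
    active.cache()
    println(s"active events: ${active.count()}")

    // Streaming: the same DataFrame API over an unbounded source
    // (Structured Streaming); a socket source is used purely for
    // illustration.
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", "9999")
      .load()
    val query = lines.writeStream
      .format("console")
      .outputMode("append")
      .start()
    query.awaitTermination()
  }
}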
Essential Job Functions:
1. Design and development of data ingestion pipelines (a brief sketch
follows this list).
2. Perform data migration and conversion activities.
3. Develop and integrate software applications using suitable development
methodologies and standards, applying standard architectural patterns and
taking into account critical performance characteristics and security
measures.
4. Collaborate with Business Analysts, Architects, and Senior Developers
to establish the physical application framework (e.g., libraries, modules,
execution environments).
5. Perform end-to-end automation of the ETL process for the various
datasets ingested into the big data platform.
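As a hedged illustration of functions 1 and 5 above, the sketch below
reads a raw CSV drop, applies a light conversion, and writes partitioned
Parquet onto the platform. Every path, column name, and date in it is a
hypothetical placeholder, and the end-to-end scheduling (e.g., via Oozie)
is assumed rather than specified by this posting.

import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.sql.functions._

object IngestionPipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ingestion-pipeline-sketch")
      .getOrCreate()

    // Extract: raw CSV landed by an upstream feed. The landing path is
    // a hypothetical placeholder.
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/landing/sales/2024-01-01/")

    // Transform: normalize a column name and stamp a partition value.
    // "Customer ID" and "load_date" are assumed, not a known schema.
    val cleaned = raw
      .withColumnRenamed("Customer ID", "customer_id")
      .withColumn("load_date", lit("2024-01-01"))

    // Load: write partitioned Parquet; a scheduler (e.g., Oozie) would
    // run this job end to end for each new dataset.
    cleaned.write
      .mode(SaveMode.Overwrite)
      .partitionBy("load_date")
      .parquet("/warehouse/sales")

    spark.stop()
  }
}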
Job Summary: The Sr. Java/Hadoop Developer position will provide expertise
in a wide range of technical areas, including but not limited to: the
Cloudera Hadoop ecosystem, Java, integration of collaboration toolsets
using SSO, configuration management, hardware and software configuration
and tuning, software design and development, and the application of new
technologies and languages aligned with other FordDirect internal projects.
Required:
1. Java/J2EE
2. Web applications, Tomcat (or any equivalent app server), RESTful
services, JSON
3. Spring, Spring Boot, Struts, design patterns
4. Hadoop, preferably Cloudera CDH: HDFS, Hive, Impala, Spark, Oozie, HBase
5. Scala
6. SQL
7. Linux
Good to Have:
8. Google Analytics, Adobe Analytics
9. Python, Perl
10. Flume, Solr
11. Strong database design skills
12. ETL tools
13. NoSQL databases (Mongo, Couchbase, Cassandra)
14. JavaScript UI frameworks (Angular, NodeJS, Bootstrap)
15. Good understanding and working knowledge of Agile development
Other Responsibilities:
1. Document and maintain project artifacts.
2. Suggest best practices and implementation strategies using Hadoop,
Java, and ETL tools.
3. Maintain comprehensive knowledge of industry standards, methodologies,
processes, and best practices.
4. Other duties as assigned.
Minimum Qualifications and Job Requirements:
- Must have a Bachelor's degree in Computer Science or a related IT
discipline.
- Must have at least 5 years of IT development experience.
- Must have strong, hands-on J2EE development experience.
- Must have in-depth knowledge of Scala/Spark programming.
- Must have 3+ years of relevant professional experience working with
Hadoop (HBase, Hive, MapReduce, Sqoop, Flume), Java, JavaScript, .NET,
SQL, Perl, Python, or an equivalent scripting language.
- Must have experience with ETL tools.
- Must have experience integrating web services.
- Knowledge of standard software development methodologies such as Agile
and Waterfall.
- Strong communication skills.
- Must be willing to flex work hours as needed to support application
launches and manage production outages if necessary.
Specific Knowledge, Skills and Abilities:
- Ability to multitask across numerous projects and responsibilities.
- Experience working with JIRA and wikis.
- Must have experience working in a fast-paced, dynamic environment.
- Must have strong analytical and problem-solving skills.
- Must have excellent verbal and written communication skills.
- Must be able and willing to participate as an individual contributor as
needed.
- Must have the ability to work the time necessary to complete projects
and/or meet deadlines.