Job Description:
Hello,

Hope you're doing well. My name is Dhananjay, and I'm a member of the ICONMA Recruiting Team.

ICONMA is a global information consulting management firm providing Professional Staffing Services and Project-Based Solutions for organizations in a broad range of industries.

I am pleased to announce that we have received a new opening with our direct client, and I believe you may be interested in hearing more about it.

Currently, we are looking for a Sr. Hadoop Developer in Beaverton, OR.

Senior Hadoop Developer

Location: Beaverton, OR
Duration: 6 months

Description
The client is embracing Big Data technologies to enable data-driven decisions and is looking to expand its Hadoop Engineering team to keep pace.
As a Sr. Hadoop Developer, the candidate will work with a variety of talented client teammates and be a driving force in building solutions for the client's Digital organization.
The candidate will work on development projects related to consumer behavior, commerce, and web analytics.

Responsibilities:
Design and implement distributed data processing pipelines using Spark, Hive, Sqoop, Python, and other tools and languages prevalent in the Hadoop ecosystem, with the ability to design and implement end-to-end solutions (a brief sketch of such a pipeline follows this list).
Build utilities, user-defined functions, and frameworks to better enable data flow patterns.
Research, evaluate, and utilize new technologies/tools/frameworks centered around Hadoop and other elements in the Big Data space.
Define and build data acquisition and consumption strategies.
Build and incorporate automated unit tests and participate in integration testing efforts.
Work with teams to resolve operational and performance issues.
Work with architecture/engineering leads and other teams to ensure quality solutions are implemented and engineering best practices are defined and adhered to.
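
For illustration only (not part of the client's requirements), here is a minimal PySpark sketch of the kind of end-to-end pipeline described above: raw JSON events are ingested, lightly transformed, and published as partitioned Parquet behind a Hive table. The paths, column names, and table name are hypothetical placeholders; a real implementation would depend on the client's data and cluster setup.

    # Minimal sketch of a batch pipeline in PySpark.
    # All paths, columns, and table names below are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("consumer-behavior-pipeline")
        .enableHiveSupport()  # allows writing to Hive-managed tables
        .getOrCreate()
    )

    # Ingest raw clickstream events landed in HDFS as JSON.
    events = spark.read.json("hdfs:///data/raw/web_events/")

    # Light transformation: keep well-formed rows and derive a
    # date column to partition the output by.
    daily = (
        events
        .filter(F.col("user_id").isNotNull())
        .withColumn("event_date", F.to_date("event_ts"))
        .groupBy("event_date", "user_id")
        .agg(F.count("*").alias("event_count"))
    )

    # Publish as partitioned Parquet behind a Hive table so that
    # downstream consumers can query it with Hive or Spark SQL.
    (
        daily.write
        .mode("overwrite")
        .partitionBy("event_date")
        .format("parquet")
        .saveAsTable("analytics.daily_user_events")
    )

    spark.stop()
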

Qualifications:
MS/BS degree in a computer science field or related discipline
6+ years’ experience in large-scale software development
1+ years’ experience in Hadoop
Strong programming skills in Java, Python, shell scripting, and SQL
Strong development skills around Hadoop, Spark, MapReduce, Hive, and Pig
Strong understanding of Hadoop internals
Good understanding of file formats including JSON, Parquet, Avro, and others
Experience with databases like Oracle
Experience with performance/scalability tuning, algorithms and computational complexity
Experience (at least familiarity) with data warehousing, dimensional modeling, and ETL development
Ability to understand ERDs and relational database schemas
Proven ability to work with cross-functional teams to deliver appropriate resolutions

Nice to have:
Experience with AWS components and services, particularly, EMR, S3, and Lambda
Experience with NoSQL technologies such as HBase, DynamoDB, and Cassandra
Experience with messaging and complex event processing systems such as Kafka and Storm
Automated testing, Continuous Integration/Continuous Delivery (CI/CD)
Scala
Machine learning frameworks
Statistical analysis with Python, R or similar

Skills:
Required
Application support
Architecture
Best practices
Budget

Additional
Business requirements
Coding
Database
Deployment
Governance
Integration
Leads
Mitigation
Optimization
Performance testing
Problem solving
Remediation
Solutions
Oracle

Minimum Degree Required:
Associate's Degree

Certifications & Licenses:
AWS
Hadoop
Spark