Job Description:
Title: Lead Hadoop Engineer

Location: Richmond, VA & Atlanta, GA


Looking for hands-on leads who can do the development and lead from a technical perspective.
Provide recommendations on how to correct problems, with the intellect to dig in and resolve them.
Extensive experience with queries
Familiarity with the Big Data area: Spark, Scala, and SQL
Self-starter who requires no hand-holding
Comfortable in an Agile environment
Able to drive the work forward
Excellent communication skills

Key Skills:

Solid database and data warehousing background
Ability to lead a small/medium-size team from a technology perspective (lay out the right technical direction, drive technical work to completion)
Expertise in big data technologies, especially Scala/Spark, HBase, Hive, YARN, and Kafka
Expertise in writing/debugging complex SQL against large data warehouse tables
Expertise in Unix scripting
AWS experience a major plus (experience with EMR, Snowflake, and Elasticsearch preferred)

Job Description:

Make ongoing recommendations on solution standards and system configurations
Work in cross-disciplinary teams to understand client needs and ingest rich data sources
Research, experiment with, and utilize leading Big Data technologies in AWS
Enable an innovative approach to data platforms that greatly increases the flexibility, scalability, and reliability of IT services at a lower cost
Lead design reviews, planning, development, and resolution of technical issues
Deliver highly complex solutions and designs using in-depth knowledge of the business domain
Design data warehouses on platforms such as AWS Redshift, Hadoop, and other high-performance platforms
Design custom ETL processes based on customer needs and their existing data sources
Optimize data warehouses for performance
Contribute to the core design of data architecture, data models and schemas, and implementation plans
Understand the practicalities of DevOps-style (agile/lean) approaches to software development
Assist in technical presentations and info-sharing sessions to create a clear and uniform understanding of the new systems for various constituents
Provide guidance and lead resolution of technical issues as they arise
Assist, as needed, with client and project engagements to document technical requirements for databases, applications, integration, infrastructure, etc.
Develop and own the list of final enhancements; perform technical design reviews and code reviews

Job Requirements:

9+ years of experience with data technologies including Hadoop (Cloudera), HBase, Spark, Hive, Redshift, etc.
Strong experience with data integration technologies including Apache NiFi (or similar tools)
5+ years of experience developing and maintaining ETL processes
Knowledge of Oracle, SQL Server, and Tableau highly preferred
Experience implementing solutions on the public cloud, AWS preferred
Experience building applications with DevOps tools such as Bamboo, Bitbucket, Artifactory, etc. preferred
5+ years of experience with a NoSQL implementation (MongoDB) a plus
Experience designing and implementing healthcare applications
Experience with security and privacy standards, e.g., HIPAA, HITRUST, ISO 27001
Excellent written, verbal, and diagrammatic communication skills
Demonstrated ability to interface effectively and collaborate with clients, peers, and management to develop solutions and ensure stakeholder buy-in
Experience with Agile software development life cycles and DevOps principles