Job Description:
Hi,



I have an immediate opening for an ETL-Hive Developer. Below are the job details. If interested, please send your updated resume, rate, and contact details.



Role: ETL-Hive Developer

Work Location & Reporting Address: Bellevue, WA
Contract duration (in months): 6+



No Customer Round



Job Details:

Responsible for the development, implementation, and maintenance of various Hadoop applications and data feeds in the Big Data ecosystem.

Responsible for integration and development support, including code check-in, library updates and management, and deployment control for updated or modified jobs.

Troubleshoot any failures and ensure jobs complete. Modify and troubleshoot queries, scripts, and ETLs for the supported feeds in line with the runbook.

Maintain the parser and automated processes. Perform daily checks to verify data integrity.

Review system and application logs, verify completion of scheduled jobs, and optimize performance.

Manage the ingestion and transformation of data feeds into the production cluster.

Coordinate major code-level issues and changes with the development and Quality Assurance teams.

Monitor and manage the scheduler.

Responsible for shell scripting, Java, and EDW platforms, with knowledge of data integration and/or EDW tools.

Perform daily system monitoring (alarms/KPIs), verifying the integrity and availability of all hardware, server resources, systems, and key processes; reviewing system and application logs; and verifying completion of scheduled jobs such as backups.

Manage and optimize daily aggregation jobs. Troubleshoot any issues or delays to ensure on-time delivery.

Rewrite code when major code issues arise or when modifications or changes to the source data type require it.




Skills:

Experience developing and supporting Java, Python, Storm, Spark, MapReduce, and Tez applications on a distributed platform.

Experience working with MPP systems such as Hadoop, Impala, Teradata, and Oracle, and analyzing hundreds of terabytes of data.

Experience implementing real-time data ingestion frameworks with Spark, Kafka, and Flume.

Complete hands-on experience with Cassandra and Scala.

Experience building and implementing data pipelines with Hadoop ecosystem tools for near real-time and batch processing.

Hands-on experience with Hortonworks and Cloudera Enterprise Hadoop distributions.

Experience processing and moving large volumes of structured, semi-structured, and unstructured data into or out of a data lake.



Minimum work experience: 6+ Years



Thanks & Regards



Venkatesh Kondameedi

US IT Recruiter

Smart Folks Inc

Direct:

Fax: