Job Description:
Role: Spark Developer
Location: Newark, DE
Duration: Long Term

Job Description
Performance tuning of Hadoop clusters and the various Hadoop components and routines running on them, including cluster performance monitoring and disk-space management
Monitor job performance, file system/disk-space usage, cluster and database connectivity, and log files; manage backup, archival, and security; troubleshoot user issues
Collaborate with cross-functional teams (infrastructure, network, and application) on activities such as new software deployments, environment setup, and capacity planning
Performance tuning of Spark jobs
Development using Python, Spark, Hive, SQL, Impala, and shell scripts
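As an illustration of the monitoring and troubleshooting duties above, a routine task is scanning job logs for runs that exceed an expected runtime. The sketch below is a minimal, hypothetical example in Python (one of the listed languages): the log format, the `slow_jobs` helper, and the 300-second threshold are all assumptions for illustration, not part of the role description.

```python
import re

# Hypothetical log format (an assumption for illustration): each line records
# a finished job and its runtime in seconds, e.g.
#   2024-05-01 02:14:07 INFO JobTracker: job_0017 finished in 340s
LOG_PATTERN = re.compile(r"(job_\S+) finished in (\d+)s")

def slow_jobs(log_lines, threshold_s=300):
    """Return (job_id, runtime_s) pairs whose runtime exceeds the threshold."""
    flagged = []
    for line in log_lines:
        match = LOG_PATTERN.search(line)
        if match:
            job_id, runtime = match.group(1), int(match.group(2))
            if runtime > threshold_s:
                flagged.append((job_id, runtime))
    return flagged

sample = [
    "2024-05-01 02:14:07 INFO JobTracker: job_0017 finished in 340s",
    "2024-05-01 02:20:41 INFO JobTracker: job_0018 finished in 45s",
]
print(slow_jobs(sample))  # only job_0017 exceeds the 300s threshold
```

In practice a script like this would feed alerting or be replaced by a cluster manager's own metrics, but it shows the kind of log-file triage the role involves.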