Job Description:
Work Location: San Jose, CA

This is a CONTRACT role.

Rate: $75 per hour on C2C

Please find the job description below:

Description for Big Data Developer

============================
The ideal candidate should be skilled in the following and have 9+ years of overall experience.

Role:

Client Handling

· Discuss and communicate status, updates, and issues with the customer

· Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

· Work with data and analytics experts to strive for greater functionality in our data systems.

· Commit to and deliver the POC



Data Handling and Manipulation

· Understand the customer’s existing data solution (DW / products / DB schema) at a high level

· Understand the scope and complexity of the POC to be built

· Understand the source data and its formats

· Use the Infoworks toolkit to ingest the data into the Infoworks Hive store

· Re-engineer the existing DW/report/cube logic into HiveQL (see the brief sketch after this list)

· Build, edit, and optimize queries and cubes in Infoworks
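For illustration only, a minimal HiveQL sketch of the kind of re-engineering described above, where an existing DW report/cube is re-expressed over data ingested into the Hive store. Table and column names (sales_raw, sales_cube, region, product, revenue) are hypothetical:

    -- Re-express an existing DW report/cube as HiveQL over ingested data.
    -- All table and column names below are illustrative only.
    CREATE TABLE IF NOT EXISTS sales_cube AS
    SELECT
        region,
        product,
        SUM(revenue) AS total_revenue,
        COUNT(*)     AS order_count
    FROM sales_raw
    GROUP BY region, product
    WITH CUBE;  -- emits every roll-up combination, similar to an OLAP cube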



Must-haves

Must be an expert in various data warehouse, reporting, and ETL technologies.
Advanced working knowledge of SQL, experience with relational databases and query authoring, and working familiarity with a variety of database systems.
Experience building and optimizing big data pipelines, architectures, and data sets.
Experience building processes supporting data transformation, data structures, metadata, dependency management, and workload management.
Strong organizational skills, with the ability to focus on the big picture and work toward the end goal.
Experience supporting and working with cross-functional teams in a dynamic environment.
Experience working with one or more of the Hadoop distributions (Cloudera, Hortonworks, MapR)
Experience with big data tools: Hadoop, Hive, YARN, MapReduce, Sqoop
Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
Ability to write shell scripts and use Linux commands.
Ability to do basic programming in Java and debug Java stack traces.



Nice-to-haves

Good experience with big data environments.
Strong analytic skills related to working with unstructured datasets.
A successful history of manipulating, processing and extracting value from large disconnected datasets.
Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
Ability to play a lead role in a team on large projects (while remaining hands-on).
Experience with big data tools: Spark, HBase, Kafka
Experience with NoSQL databases
Experience with data pipeline and workflow management tools: Airflow
Experience with Public Cloud Services: AWS, GCP, Azure, IBM Bluemix
Experience with stream-processing systems: Storm, Spark Streaming
Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala



Looking forward to your positive response.