Job Description:
Title: Software Developer – IV
Location: Chandler, AZ
Duration: 9 Months Contract

Note: Candidates should be able to obtain a public trust security clearance.
You will architect and implement a Big Data Platform that scales to ingest, transform, persist, and query petabytes of data from heterogeneous data sources, both structured and unstructured, where storage is separate from compute, and that supports performant, multi-tenant data analytics, including machine learning. The platform should provide the ability to manage big data along the attributes of Volume, Velocity, Variety, and Veracity.

This group is exploring a new field and as such is looking for a true expert who can help them learn faster and ramp up quickly.

You will be responsible for defining the big data architectural blueprint under the supervision of senior leaders of the organization.
You will be responsible for implementing the platform.
You will be responsible for building reference implementations on the platform based on real-world use cases.

Essential Skills:
Detailed knowledge of the key ingredients of a Big Data Platform, with hands-on experience setting up such platforms in large corporations following industry-standard patterns and practices.
Hands-on experience creating architectural blueprints and approaches for separating storage from distributed compute, where storage is effectively infinite, using open-source or vendor-supplied components.
Ability to provide details of how to implement such blueprints.
Hands-on experience building a lambda architecture for big data, with speed, batch, and serving layers and the ability to maintain data both in real time and on a schedule.
Historical data should be separable from real-time data for faster query capabilities.
Expert-level experience with industry-standard platforms such as Apache Hadoop / HDFS / Kafka / Spark is required.
Hands-on experience building a large-scale data pipeline using distributed compute on a big data platform, with data ingestion, data transformation, data persistence, and data warehouse capabilities.
Hands-on experience in performance tuning for data analytics on a platform where storage is separate from compute.
Ability to expose data via standard ODBC and JDBC brokers and via performant microservices to consuming layers, such as an industry-standard data analytics platform including Apache Spark, Power BI, etc.
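To illustrate the lambda-architecture pattern named above (speed, batch, and serving layers, with historical data kept separate from real-time data), here is a minimal sketch in plain Python. All names and figures are hypothetical, and a production system would use platforms like Spark and Kafka rather than in-memory dictionaries:

```python
from collections import defaultdict

# Batch layer: a precomputed view over historical data
# (hypothetical page-view counts from the last scheduled batch run).
batch_view = {"home": 10_000, "search": 4_200}

# Speed layer: incremental counts for events that arrived
# after the last batch run.
speed_view = defaultdict(int)

def ingest_realtime(page: str) -> None:
    """Update the speed layer as events stream in."""
    speed_view[page] += 1

def query(page: str) -> int:
    """Serving layer: merge the batch and real-time views at query time."""
    return batch_view.get(page, 0) + speed_view.get(page, 0)

ingest_realtime("home")
ingest_realtime("home")
print(query("home"))  # batch count (10000) plus two real-time events
```

Because the batch view is recomputed on a schedule and the speed view holds only recent events, historical data stays separable from real-time data, which is what makes queries over the merged view fast.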

Desirable Skills:
Hands-on experience using a big data platform for predictive modeling, statistics, machine learning, data mining, and other data analysis techniques to collect, explore, and extract insights from structured and unstructured data.
Experience with enterprise graph platforms that use big data to build connected graphs across heterogeneous data domains.

Minimum Educational Requirement:
An MS or PhD is preferred, but as a minimum the candidate must have at least a Client.

Client: Intel