Job Description:

AWS Data Architect
Irvine, CA - Remote work may be an option for strong candidates, but periodic travel to Irvine may be needed.
12 months contract
Rate: $90-$100 / hr
Infosys/IFE client
The Position:
Responsible for all aspects of data acquisition, data transformation, analytics scheduling, and operationalization to drive high-visibility, cross-division outcomes. Investigate, evaluate, test, and recommend technical solutions for future systems. The Sr. Staff Engineer will support software developers, database architects, and data scientists on data initiatives and will ensure an optimal data delivery architecture.
Data Design Management:
- Own product data sets from the definition phase through to production deployment.
- Build distributed, scalable, and reliable data pipelines that ingest and process data at scale and in real time.
- Provide solutions for the design and implementation of Hadoop EMR cluster / big data infrastructure.
- Optimize Spark jobs using the Spark UI and set custom Spark configuration parameters wherever needed for job completion.
- Develop ETL jobs to process raw and curated datasets using AWS Glue and orchestrate them using Glue Workflows.
- Automate Glue ETL definitions and workflows using AWS CloudFormation templates (CFT).
- Develop and automate Docker workloads using AWS ECS on Fargate.
- Deploy and maintain Hadoop/big data/Spark and database storage infrastructure in the AWS cloud.
- Create data environments and/or data sets to serve a wide range of data users, including but not limited to data scientists, data analysts, and business analysts.
- Develop AWS QuickSight dashboards with charts for different KPIs.
- Perform offline analysis of large data sets using components of a big data software ecosystem.
- Support the evaluation of big data technologies and prototype solutions to improve the data processing architecture.
- Troubleshoot and determine the root cause of complex data provenance, metadata, and engineering issues, which may involve interfacing with technical staff across multiple organizations and with differing levels of expertise.
- Investigate, evaluate, test, and recommend technical solutions for future systems.
- Develop tools and procedures to monitor and automate system tasks on servers and clusters.
- Author extract, transform, and load (ETL) scripts for moving and curating data into data sets for storage and use by a data lake, data warehouse, and data mart.
Technical Advisor:
- Collaborate with other teams to design, develop, and deploy data tools that support both operations and product use cases.
- Evaluate and advise on technical aspects of open work requests in the product backlog with the project lead.
What we're looking for:
- Knowledge of database concepts, object and data modeling techniques, and design principles
- Detailed knowledge of database architectures, software, and facilities
- Successful history of manipulating, processing, and extracting value from large, disconnected data sets
- Experience with programming languages: Python (required), Scala, Ruby, R
- Experience with database technologies: SQL, performance tuning concepts, AWS RDS, Redshift, MySQL
- Experience with big data batch processing tools: Hadoop MapReduce, Elasticsearch, Pig, Hive, Cascading/Scalding, Apache Spark, AWS Glue, AWS EMR
- Experience with stream-processing systems: Kinesis, Kafka, MQTT
- Experience with non-relational NoSQL databases, including DynamoDB
- Ability to write JSON, XML, YAML, and other data definition schemas
- Works on advanced, complex technical projects or business issues requiring state-of-the-art technical or industry knowledge
- Works on significant and unique issues where analysis of situations or data requires an evaluation of intangibles; exercises independent judgment in methods, techniques, and evaluation criteria for obtaining results
- May be the in-house expert on specific technologies
- Ability to provide a leadership role for the work group through knowledge in the area of specialization
- Bachelor's degree in computer science, computer engineering, or a related field, or the equivalent combination of education and related experience
- 12 years of professional experience as a data software engineer
- 2 years of experience with AWS cloud or other cloud big data computing design, provisioning, and tuning
- Related AWS certification preferred
- Previous experience as a Data Engineer, Database Administrator, and/or Business Intelligence Analyst



Client: ClifyX, INC
