Job Description:
Role: AWS Data Architect (Building and Optimizing Data Pipelines in an AWS Environment)
Location: Pleasanton, CA
Duration: 12+ Months

Role Overview: We're looking for a strong AWS Data Architect who will be responsible for building and optimizing data pipelines in an AWS environment.

QUALIFICATIONS:

· Strong experience in a data engineering role; graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.

· Advanced working knowledge of SQL, experience with relational databases and query authoring, and familiarity with a variety of databases

· Experience with database design principles, including relationships and normalization, database structures, indexes and views, and analyzing the requirements and purpose of a database

· Experience building and optimizing 'big data' pipelines, architectures, and data sets (a key requirement for this position)

· Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management

· Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores

· Deep experience with AWS services: S3, Redshift, Lambda, Cognito, Glue, Athena, QuickSight (important and mandatory)

· Experience with relational SQL and NoSQL databases

· Strong skills with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.

· Ability to architect end-to-end ETL pipelines

· Experience with BI tools

· Strong quantitative, analytical, process-development, facilitation, and organizational skills required

· Excellent documentation and verbal communication skills

· Ability to communicate the technical vision in clear terms to peers as well as to audiences outside the engineering/development team