Job Description:
Senior Big Data Quality Assurance Engineer

Location: Beaverton, OR
Duration: 8 months

Description:
Education:
Typically requires a Bachelor's Degree and a minimum of 5 years of directly relevant work experience

Skills:
Required:
Test Automation
Big Data Analytics
Hadoop
Data Warehouse
QA Engineering

Additional:
Data Analysis
Hive
Requirements Analysis
Agile
Python
SQL
AWS
Documentation
Scripting
Jenkins
Development/Design
Amazon EMR
Spark
ETL
Database
Integration
Test Plans
Application Support
Problem Solving

Minimum Degree Required:
Associate's Degree

Attachment Description:
As a Sr. Big Data Quality Assurance Engineer, you will work with a variety of talented client teammates and be a driving force in building solutions for the client's Digital organization. You will work on development projects related to consumer behavior, commerce, and web analytics.

Responsibilities:
- Requirements analysis and data analysis
- Testing and test automation for distributed data processing pipelines built using Spark, Hive, Sqoop, Python, and other tools and languages prevalent in the Hadoop ecosystem; ability to understand and test end-to-end solutions.
- Build utilities, functions, and frameworks to better enable testing of data flow patterns (see the sketch after this list).
- Research, evaluate and utilize new technologies/tools/frameworks centered around Hadoop and other elements in the Big Data space.
- Lead or participate in integration testing efforts.
- Work with teams to resolve operational and performance issues
- Work with architecture/engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and adhered to.
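
For illustration, a minimal sketch of the kind of automated data-quality test this role might involve, assuming PySpark and pytest; the dataset path and column names (/data/curated/orders, order_id, order_total) are hypothetical, not from this posting:

import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    # Local Spark session so the tests can run on a build agent without a cluster.
    session = SparkSession.builder.master("local[2]").appName("dq-tests").getOrCreate()
    yield session
    session.stop()

def test_no_duplicate_order_ids(spark):
    # Hypothetical pipeline output location; swap in the real curated dataset.
    df = spark.read.parquet("/data/curated/orders")
    assert df.count() == df.select("order_id").distinct().count()

def test_order_totals_non_negative(spark):
    df = spark.read.parquet("/data/curated/orders")
    assert df.filter(df.order_total < 0).count() == 0

Tests like these are straightforward to run as a pytest step in a Jenkins job after a pipeline run, which matches the Jenkins preference noted in the conference call notes below.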

Qualification:
- MS/BS degree in a computer science field or related discipline
- 6+ years' experience in large-scale software development
- 1+ year of experience with the Hadoop ecosystem, including tools like Hive and Spark; understanding of Hadoop internals
- Strong experience in SQL
- Experience with Python, shell scripting
- Good understanding of file formats including JSON, Parquet, Avro, and others
- Experience with databases like Oracle
- Experience with performance/scalability tuning, algorithms and computational complexity
- Experience (at least familiarity) with data warehousing, dimensional modeling, and ETL development (a reconciliation sketch follows this list)
- Experience with test automation and build tools like Jenkins
- Ability to understand ERDs and relational database schemas
- Strong documentation skills
- Proven ability to work with cross-functional teams to deliver appropriate resolutions
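
As one example of the ETL testing referenced above, a minimal source-to-target reconciliation check, assuming Spark with Hive support; the table names (staging.orders, warehouse.fact_orders) are illustrative assumptions:

from pyspark.sql import SparkSession

# Hive-enabled session so Spark SQL can see the warehouse catalog.
spark = SparkSession.builder.appName("etl-reconciliation").enableHiveSupport().getOrCreate()

# Compare row counts between the staging source and the loaded target table.
source_count = spark.sql("SELECT COUNT(*) AS n FROM staging.orders").first()["n"]
target_count = spark.sql("SELECT COUNT(*) AS n FROM warehouse.fact_orders").first()["n"]

if source_count != target_count:
    raise AssertionError(f"Row-count mismatch: source={source_count}, target={target_count}")
print(f"Reconciliation passed: {source_count} rows in source and target")

In practice a check like this would typically be extended with column-level aggregates (sums, null counts, distinct counts) rather than row counts alone.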

Nice to have:
- Experience with AWS components and services, particularly, EMR, S3, and Lambda
- Experience with NoSQL technologies such as HBase, DynamoDB, or Cassandra
- Experience with messaging and complex event-processing systems such as Kafka, Kinesis, or Storm
- Continuous Integration / Continuous Delivery
- Scala
- Machine learning frameworks
- Statistical analysis with Python, R or similar

Conference Call Notes:
- Looking for someone with test automation experience; no specific automation tool is required.
- Prefer someone who has experience in running automation tests through Jenkins.
- Hadoop, SQL and Python experience required.
- AWS preferred.
             
