Job Description:
Responsibilities:
- Design, build, and maintain the ETL pipelines for our analytical data warehouse
- Ingest data from transactional source systems into our data warehouse (typically using Spark, Pig, Python, Teradata, and Redshift)
- Migrate Teradata procedures to Spark SQL-based ETL scripts
- Detect data quality issues and their root causes, implement fixes, and design data audits to catch such issues in the future

Qualifications:
- Must have hands-on experience with Big Data technologies
- Expert-level experience with Spark, Pig, and Hive
- Strong SQL background
- Big Data and production ETL support experience is a must
- Experience with one or more of Python, Perl, Java, or C++
- Scalability: you have worked with terabytes of data before
- Data modeling, SQL, and data warehousing
- Experience with Teradata or Redshift is a huge plus
- Experience with analytical tools for data analysis, reporting, and visualization (MicroStrategy, Tableau, R, etc.) is a plus

A few more things to know:
- Our culture is unique and we live by our values.
- You will need to be comfortable working in a highly agile environment.
- Requirements will be vague. Iterations will be rapid. You will need to be nimble and take smart risks.