Job Description:
Job: Big Data Engineer
100% Remote
12-month contract
Detroit, Michigan

Interview Information
Two steps: a panel interview first, followed by a deep-dive coding assessment covering Python, Spark, and SQL

The Big Data Engineer is responsible for engaging in the design, development, and maintenance of the big data platform and solutions at Quicken Loans. This includes the platform that hosts the data sets supporting various business operations and enabling data-driven decisions, as well as the analytical solutions that provide visibility and decision support using big data technologies. The Big Data Engineer is responsible for administering a Hadoop cluster, developing data integration solutions, resolving technical issues, and working with Data Scientists, Business Analysts, System Administrators, and Data Architects to ensure the platform meets business demands. This team member also ensures that solutions are scalable, include necessary monitoring, and adhere to best practices and guidelines. The Big Data Engineer helps mentor new team members and continues to grow their knowledge of new technologies.

Responsibilities:
Develop ELT processes from various data repositories and APIs across the enterprise, ensuring data quality and process efficiency
Develop data processing scripts using Spark (a minimal sketch follows this list)
Develop relational and NoSQL data models using Hive and HBase to conform data to users’ needs
Integrate platform into the existing enterprise data warehouse and various operational systems
Develop administration processes to monitor cluster performance, resource usage, backup and mirroring to ensure a highly available platform
Address performance and scalability issues in a large-scale data lake environment
Provide big data platform support and issue resolutions to Data Scientists and fellow engineers
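
For context, here is a minimal, hypothetical sketch of the kind of Spark-to-Hive processing script these responsibilities describe; the table, column, and job names are illustrative assumptions, not details from this posting.

    # Minimal PySpark batch sketch: read a raw Hive table, cleanse it, and
    # write a curated Hive table. All names here are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = (SparkSession.builder
             .appName("curate-events")        # illustrative job name
             .enableHiveSupport()             # enables reading/writing Hive tables
             .getOrCreate())

    raw = spark.table("raw.events")           # hypothetical source table

    curated = (raw.dropDuplicates(["event_id"])
                  .filter(F.col("event_id").isNotNull())          # basic data-quality check
                  .withColumn("event_date", F.to_date("event_ts")))

    (curated.write
            .mode("overwrite")
            .partitionBy("event_date")        # partitioned for scalable reads
            .saveAsTable("curated.events"))   # hypothetical target table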

Requirements:
Master's degree in computer science, software engineering, or a closely related field
2 years of experience with Hadoop distribution and ecosystem tools such as Hive, Spark, NiFi and Oozie
2 years of experience developing batch and streaming ETL processes (a streaming sketch follows this list)
2 years of experience with relational and NoSQL databases, including modeling and writing complex queries
Proficiency in at least one programming language, such as Python or Java
Experience with Linux system administration and scripting, plus basic networking skills
Excellent communication, analytical and problem-solving skills
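
As a hedged illustration of the streaming side of that requirement, here is a Spark Structured Streaming sketch under assumed paths and schema; none of these values come from the posting.

    # Structured Streaming sketch: pick up JSON files as they land, apply a
    # minimal filter, and append to a curated location. Paths and schema are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("stream-etl-sketch").getOrCreate()

    stream = (spark.readStream
                   .schema("event_id STRING, event_ts TIMESTAMP")  # DDL-style schema
                   .json("/data/landing/events"))                  # assumed landing path

    query = (stream.filter("event_id IS NOT NULL")                 # minimal transform
                   .writeStream
                   .format("parquet")
                   .option("path", "/data/curated/events")
                   .option("checkpointLocation", "/data/checkpoints/events")
                   .outputMode("append")
                   .start())

    query.awaitTermination()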

Enterprise Req Skills
Big Data, Hadoop, Hive, Spark, Python, Java, data modeling, DAISS, AWS, Hortonworks, Cloudera

Top Skills Details:
Please have all candidates take a Python 3 IKM assessment and run the questions in the Additional Skills section past them, then send the results for my review.

1) Python is preferred, but we will consider candidates with Java provided they are willing to learn Python
2) Hadoop tooling: Hive and Spark
3) Big data fundamentals: a core understanding of building data pipelines, data modeling, and Linux

Multiple positions are open across several different teams.
1) One team is building trusted data products
2) On another team, the Big Data Engineer would work closely with data scientists to help implement their models
3) The Platform and Support team could also use people to support DevOps and performance tuning activities

Work Environment
Strong Scaled Agile environment; you will work on Scrum teams within the Data Intelligence group. Roughly 380 people work in Data Intelligence across more than 30 teams. Each Scrum team has 8-15 members, depending on its focus.

Additional Skills & Qualifications
The environment is migrating to AWS, so any experience with AWS EMR would be highly preferred (a sketch follows).
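
For illustration only, here is a hedged boto3 sketch of launching a transient EMR cluster running Spark and Hive; the region, release label, roles, and S3 bucket are assumptions, not details from this posting.

    # Hypothetical EMR launch via boto3; every literal below is an assumption.
    import boto3

    emr = boto3.client("emr", region_name="us-east-1")    # assumed region

    response = emr.run_job_flow(
        Name="spark-hive-sketch",
        ReleaseLabel="emr-6.15.0",                        # assumed EMR release
        Applications=[{"Name": "Spark"}, {"Name": "Hive"}],
        Instances={
            "InstanceGroups": [
                {"Name": "primary", "InstanceRole": "MASTER",
                 "InstanceType": "m5.xlarge", "InstanceCount": 1},
                {"Name": "core", "InstanceRole": "CORE",
                 "InstanceType": "m5.xlarge", "InstanceCount": 2},
            ],
            "KeepJobFlowAliveWhenNoSteps": False,         # transient cluster
        },
        LogUri="s3://example-bucket/emr-logs/",           # hypothetical bucket
        JobFlowRole="EMR_EC2_DefaultRole",                # default EMR roles
        ServiceRole="EMR_DefaultRole",
    )
    print(response["JobFlowId"])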
Impact to the Internal/External Customer
Better decision making, with AI and ML integrated into existing products