Job Description:
Duration: 6+ Months
Client: AAA
Location: Costa Mesa, CA
Bill Rate: Submit at market rate
Will hire over the phone
What they will be doing:
Collaborate with application teams to build out an enterprise data lake; must actually be able to DESIGN the data lake, starting it from scratch
Operations and administration of an Apache Hadoop ecosystem
Utilize Team Foundation Server, Git, Jenkins, Ansible, Nexus IQ/Repo, and SonarQube to enable CI/CD
Develop automation using scripting languages such as Python, Shell, PowerShell, and Bash
Implement and support Hadoop applications such as Sqoop, Hue, Impala, Hive, and HBase
Implement and support streaming technologies such as Kafka, Spark, and Kudu
Work with enterprise and solution architects to develop and implement a Big Data cloud architecture
Work as part of DevOps teams to accelerate the delivery of business value
Qualifications:
A passion for technology - we are looking for someone keen to apply their existing skills and to seek out new skills and solutions
Ability to manage numerous requests concurrently and strategically, prioritizing when necessary.
2+ years of work experience with ETL, data modeling, and business intelligence on big data architectures
2 years of Linux system administration experience
4 years of Hadoop infrastructure administration experience
2+ years of experience implementing and utilizing Continuous Integration and Continuous Deployment pipelines (CI/CD)
Experience with two or more of the following databases: DB2, MS SQL Server, Oracle, MySQL, MongoDB, Cassandra
Experience with on-premises and cloud computing implementation and integration
Solid experience implementing high-performance, highly available, reliable, and secure software, and knowledge of how to scale a system horizontally
Knowledge of how to implement and manage API (web services) versioning