Job Description:
Big Data Engineer
Quantity: 2
Duration: 6-12 months + extension
Location: McLean, VA (100%)

Role: Cloud Engineer (Support)
Interview Process: Zoom video / face-to-face for local candidates
Contract to Hire: No

The client is setting up interviews very quickly with candidates who have the proper experience. As a reminder, the client is looking for experience with Scala, Spark, and AWS for production support.
Project: Production Support

Top Skills:
1. Apache Spark (development side)
2. Big Data
3. Spring MVC
4. AWS EMR is good to have here
5. Know how to debug a REST service
6. Scala – their code is all in this.

Hours: 8-5, but must be available off hours to keep the system stable

Nice to Have: Monitoring tools such as ELK and Datadog; candidates should know these.

Notes:
- This role is mostly production support, but they also need to know how to make code changes and enhancements. The product is built completely in house, and this person will join the development team to provide production support.
- Experience Level: 2-3 years of Spark and AWS (8 years overall in the industry)
- Need problem solvers!!!
- They did 4 interviews; the main challenge was communication skills. When asked what they would do in a time-crunch scenario, candidates couldn't answer quickly enough, e.g.: when a job breaks down and the cluster is not behaving, what do you do next?

AWS/Data Transformation Prod Support Engineer

We are looking for a prod support/software engineer to assist our team with the data management platform responsible for the discovery, indexing, and analysis of operational data used to inform our most critical strategic and near real-time decisions.

Our systems are used by a large number of teams with varying roles across Capital One including: data scientists, machine learning specialists, business analysts, and data engineers.

You should have strong problem-solving/debugging skills and possess excellent written and verbal communication skills. Additionally, you should have some AWS experience, familiarity with GitHub, and be comfortable working at the Linux command line (including Bash or Python scripting).

Development experience with Apache Spark, Java 8+, and the Spring Framework would be very helpful. Operations experience with Datadog, CloudWatch, ELK or Splunk, and AWS would also be beneficial.


Responsibilities:
Responding to requests from partner teams who utilize our systems
Troubleshooting and debugging problems in AWS, Java/Spring code, logs, and Datadog
Communicating technical information clearly to developers and business stakeholders


Basic Qualifications:
At least 3 years of experience in software development including design, development, and testing
At least 3 years of experience with one or more major programming languages: Java, Python, Groovy, Scala, or Go
At least 2 years of experience with Linux command line or scripting
At least 1 year of experience with Git version control
At least 1 year of experience with a RESTful web service framework


Preferred Qualifications:
Bachelor’s or Master’s degree in Computer Science or a related field
3+ years of experience with Java 8+ and Spring
1+ years of experience working with Apache Spark
1+ years of experience working with AWS EMR
1+ years of experience working with Amazon Web Services (AWS)
1+ years of experience with relational database systems (e.g. PostgreSQL, MySQL, RDS, Aurora)
1+ years of experience with NoSQL systems (e.g. MongoDB, Redis, DynamoDB, Cassandra, HBase, Neo4j)
1+ years of experience with publish/subscribe or messaging systems (e.g. Kafka, Kinesis, SQS, RabbitMQ, etc.)
AWS Certification or any other cloud certification


Client : Google

             
