Job Description :

NEED RESUMES

MasterCard

Openings: 2

Title: Senior Software Development Engineer - Big Data Engineer

Location: San Francisco, California (4th & 5th Floor)

Duration: 12 Months

Group: Enterprise Data Solutions

***Non-sponsored candidates ONLY (US Citizens and Green Card holders are acceptable)

o What are your top 3 required technical skills?

1. Automation/DevOps engineering [Hadoop, Spark]

2. Operational experience with real-time and streaming data-pipeline frameworks [NiFi or Airflow]

3. Java/Scala and Python programming

4. PCF (Pivotal Cloud Foundry) or cloud experience in general

o Duration of assignment?

12 months

o Is there an anticipated opportunity for conversion or extension?
Yes

o What is the name of your group?

Data Platform

o How does that fit into the overall Mastercard organization?

Data Platform, Data Enablement program, O&T

o What is your team's main responsibility?

Data Platform is focused on enabling insights into the Mastercard network and helping build data-driven products by curating and preparing data in a secure and reliable manner. Moving to a "Unified and Fault-Tolerant Architecture for Data Ingestion and Processing" is critical to achieving this mission.

o How would you describe the culture of your team?

Innovation-motivated, data- and insights-driven, Agile, self-organized, self-directed data engineering culture

o What level of competency are you looking for? [Please use: Foundational, Intermediate, or Advanced]

Foundational:

Experience as a Site Reliability Engineering or DevOps Engineer

Experience as a software engineer or software architect

Experience solving for scalability, performance, and stability

Expert knowledge of Linux operating systems and environments, and scripting (Shell and Python preferred)

Intermediate:

1) Data warehouse-related projects in a product- or service-based organization

2) Operational experience with big data stacks (Hadoop ecosystem; Spark is a plus)

3) Operational experience troubleshooting network/server communication

4) Experience with performance tuning of database schemas, databases, SQL, ETL jobs, and related scripts

5) Expertise in enterprise metrics/monitoring with frameworks such as Splunk, Druid, and Grafana

Advanced:

1) Experience with cloud computing services, particularly deploying and running services in Azure or AWS

2) Operational experience with real-time and streaming data-pipeline frameworks (Kafka and NiFi are a plus)

o What will the work schedule be?

40 hours per week

o What are a couple of desired/nice to have skills?

1. N/A

o What soft skills would you like to see in a candidate?

1. Teamwork

2. Clear communication

3. Self-organization

Job Description Summary

Overview

Mastercard is the global technology company behind the world's fastest payments processing network. We are a vehicle for commerce, a connection to financial systems for the previously excluded, a technology innovation lab, and the home of Priceless. We ensure every employee has the opportunity to be a part of something bigger and to change lives. We believe as our company grows, so should you. We believe in connecting everyone to endless, priceless possibilities.

Data Engineering and Platform is focused on enabling insights into the Mastercard network and helping build data-driven products by curating and preparing data in a secure and reliable manner. Moving to a "Unified and Fault-Tolerant Architecture for Data Ingestion and Processing" is critical to achieving this mission.

As a Senior Site Reliability / DevOps Engineer in Data Engineering and Platform, you will use your strong technical chops to help our platform and services establish a true SRE capability. You will be partnering with the data engineering team, so the ability to influence and provide operational guidance is key. Initially, the SRE's focus will be contributing to the development of operational tools and practices that help maintain service availability across hosted and cloud-based infrastructure. You must have an understanding of the full stack and how systems are built, as well as a grasp of operational best practices.

Role

Partner on the design of the next implementation of Mastercard's secure, global data and insight architecture, building new stream-processing capabilities and operationalizing the "Unified Data Acquisition and Processing (UDAP)" platform

Identify and resolve performance bottlenecks proactively

Work with the customer support group as needed to resolve performance issues in the field

Explore automation opportunities and develop tools to automate day-to-day operations tasks

Provide performance metrics and maintain dashboards that reflect production system health

Conceptualize and implement proactive monitoring where possible to catch issues early

Experiment with new tools to streamline the development, testing, deployment, and running of our data pipelines.

Work with cross-functional agile teams to drive projects through the full development cycle.

Help the team improve through the use of data engineering best practices.

Collaborate with other data engineering teams to improve the data engineering ecosystem and talent within Mastercard.

Creatively solve problems when facing constraints, whether it is the number of developers, quality or quantity of data, compute power, storage capacity or just time.

Maintain awareness of relevant technical and product trends through self-learning/study, training classes and job shadowing.

All About You

At least a Bachelor's degree in Computer Science, Computer Engineering, or a related technology field, or equivalent work experience

Intermediate experience with data warehouse-related projects in a product- or service-based organization

Foundational experience as a Site Reliability Engineering or DevOps Engineer

Foundational experience as a software engineer or software architect

Experience solving for scalability, performance, and stability

Expert knowledge of Linux operating systems and environments, and scripting (Shell and Python preferred)

A deep expertise in your field of Software Engineering

Expert at troubleshooting complex system and application stacks

Operational experience with big data stacks (Hadoop ecosystem; Spark is a plus)

Operational experience with real-time and streaming data-pipeline frameworks (Kafka and NiFi are a plus)

Operational experience troubleshooting network/server communication

Experience with performance tuning of database schemas, databases, SQL, ETL jobs, and related scripts

Expertise in enterprise metrics/monitoring with frameworks such as Splunk, Druid, and Grafana

Experience with cloud computing services, particularly deploying and running services in Azure or GCP

A belief in data-driven analysis and problem solving, and a proven track record of applying these principles

An organized approach to the planning and execution of major projects


Candidate must be your W2 Employee: No
Minimum Experience (In Years): 0
