Job Description:


Location: San Francisco, CA (locals only)

Duration: 12 months

Mode of Interview (MOI): Video

You’ll use technologies like Scala, Scalding, Spark, Hadoop, Druid, BigQuery, Presto, Zeppelin, Tableau, and Python as you process and aggregate vast amounts of data into traces, metrics, alerts, and visualizations that tell our engineers exactly when and where to find the most important performance bottlenecks.

Who You Are:

You have experienced the singular gratification of making a system faster.

You want everybody, everywhere, to have access to a fast, global, public communications platform.

Helping others to build faster systems sounds like a fun and exciting challenge to you.

Working with other teams, building powerful and intuitive tooling, and automating manual processes is second nature to you.

You are pragmatic, iterative, and customer-driven. You focus on where you can add the most value.

You’re organized, self-starting, and resourceful, and you know how and when to ask for help.

You have excellent written and verbal communication skills.

You are comfortable within distributed work environments, collaborating across time zones and cultures.


Your Experience Includes:

Working with a multitude of internal customers and stakeholders across the entire SDLC

Building platforms and tools for other engineers

Working on very large-scale distributed systems

Defining and analyzing application and system metrics

Building out data pipelines with Scala, Spark, or Hadoop

Data analytics with SQL, BigQuery, Druid, Presto, and Tableau

Fundamentals of statistics

Performance engineering: tuning, regression detection, and profiling