Job Description:
Job Summary
The XXXX team is preparing the next wave of features that leverage its internally sourced XXXX. We are developing tools to analyze and process data automatically at scale.
If you are a self-motivated team player who thrives in a fast-paced, constantly changing environment and is passionate about building great products and learning new technologies, this is the job for you. If you are a smart, creative, ambitious software engineer who is always looking for a better way, we'd like to talk to you.
Key Qualifications
Experience working with large data sets and pipelines, ideally using tools and libraries from the Hadoop ecosystem such as Spark, HDFS, YARN and Hive
Exceptional object-oriented programming skills; Java or Scala preferred, plus Python
Experience with Solr/Lucene, Cassandra and related technologies is a plus
Drive and passion for building software; willing to do what it takes, in both ability and attitude
Excellent analytical, debugging and problem-solving skills
Excellent oral and written communication skills
Passion for geo-spatial data exploration

The XXXX Data team is chartered to process, analyze and deliver the best-quality data for XXXX.
In this role you will be responsible for analyzing and writing code to fix issues observed in the data. You'll also be responsible for designing and developing critical tools for the XXXX data platforms. Our tools run thousands of times a day to ensure the rapid evolution of our XXXX data and its timely update.
You'll interact with product managers and other stakeholders to understand their needs and translate them into high-performing code that operates efficiently on large datasets.
Minimum of a Bachelor's degree in CS or equivalent, with 4–6 years of industry experience
Additional Requirements
Experience with geo-spatial tools such as ArcGIS or QGIS and familiarity with computational geometry are a plus