Job Description:
Responsibilities of the role include:
Build data pipeline frameworks to automate high-volume and real-time data delivery to our cloud platform
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies
Develop and enhance applications using a modern technology stack, including Java, Python, shell scripting, Scala, Postgres, AngularJS, React, and cloud-based data warehousing services such as Snowflake
Perform unit tests and conduct reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance
Qualifications:
Bachelor’s Degree; accounting/finance business knowledge is a plus
5+ years of experience building data pipelines and using ETL tools to solve complex business problems in an Agile environment
5+ years of experience in at least one scripting language (SQL, Python, Perl, JavaScript, Shell)
3+ years of experience using relational database systems (Snowflake, PostgreSQL, or MySQL)
3+ years of experience working on streaming data applications (Spark Streaming, Kafka, Kinesis, or Flink)
3+ years of experience in big data technologies (MapReduce, Cassandra, Accumulo, HBase, Spark, Hadoop, HDFS, Avro, MongoDB, or ZooKeeper)
3+ years of experience with Amazon Web Services (AWS), Microsoft Azure, or another public cloud service