Job Description:
8+ years of industry experience as an Analyst or in a related specialty.
3+ years of programming experience manipulating and analyzing data (Python or Scala).
Experience building robust and scalable data integration (ETL) pipelines using Airflow, SQL, Python, and Spark (a minimal Airflow sketch follows this list).
Experience in data modeling, ETL development, and data warehousing.
Data warehousing experience with Oracle, Redshift, Teradata, Snowflake, etc.
Strong analytical skills, with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy.
Adept at writing queries, building reports, and presenting findings.
Experience defining new data collection and analysis processes.
An analytical mind and an inclination for problem-solving.
Experience building data products incrementally and integrating and managing datasets from multiple sources.
Experience with Big Data technologies (Hadoop, Hive, HBase, Pig, Spark, Apache Druid) and with AWS services such as S3, EC2, and EMR (Spark).
Lead the transformation of a petabyte-scale, batch-based processing platform to a near-real-time streaming platform using technologies such as Apache Kafka, Cassandra, Spark, and other open-source frameworks (a minimal streaming sketch also follows this list).
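
To make the Airflow ETL requirement concrete, below is a minimal sketch of a daily extract/transform/load DAG, assuming Airflow 2.4+ and its TaskFlow API. The dag_id, schedule, and the bodies of the three steps are hypothetical placeholders, not details from this posting.

    # A minimal, hypothetical Airflow 2.4+ ETL DAG (TaskFlow API).
    from datetime import datetime

    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def example_etl():
        @task
        def extract() -> list:
            # Placeholder: in practice, pull rows from a source system
            # (API, OLTP database, S3 object, ...).
            return [{"user_id": 1, "amount": "42.50"}]

        @task
        def transform(rows: list) -> list:
            # Cast raw strings to typed values before loading.
            return [{**r, "amount": float(r["amount"])} for r in rows]

        @task
        def load(rows: list) -> None:
            # Placeholder: in practice, COPY/INSERT into a warehouse
            # table (Redshift, Snowflake, ...).
            print(f"loaded {len(rows)} rows")

        load(transform(extract()))

    example_etl()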
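For the streaming requirement, here is a minimal Spark Structured Streaming sketch that reads from Kafka; the broker address, topic name, and message schema are illustrative assumptions, and running it requires the spark-sql-kafka connector package on the classpath. A production job would write to Cassandra or another durable sink rather than the console.

    # A minimal, hypothetical Kafka-to-Spark streaming job.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import DoubleType, StringType, StructField, StructType

    spark = SparkSession.builder.appName("batch-to-streaming-sketch").getOrCreate()

    # Assumed JSON payload: {"user_id": "...", "amount": 12.3}
    schema = StructType([
        StructField("user_id", StringType()),
        StructField("amount", DoubleType()),
    ])

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
        .option("subscribe", "events")                        # placeholder topic
        .load()
        # Kafka delivers raw bytes; decode and parse the JSON payload.
        .select(from_json(col("value").cast("string"), schema).alias("e"))
        .select("e.*")
    )

    # Near-real-time running aggregate, printed to the console for the sketch.
    query = (
        events.groupBy("user_id")
        .sum("amount")
        .writeStream.outputMode("complete")
        .format("console")
        .start()
    )
    query.awaitTermination()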
             
