Location: 100% REMOTE
Duration: 12+ month contract
Technical Skills (please indicate years of experience with each):
Python, SQL, SparkSQL: ____
Java Development: ____
PySpark: ____
Spark: ____
Job Description:
• 2+ years with Big Data Hadoop clusters (HDFS, YARN, Hive, MapReduce frameworks) and Spark
• 2+ years of recent experience building and deploying applications in AWS (S3, Hive, Glue, EMR, AWS Batch, DynamoDB, Redshift, CloudWatch, RDS, Lambda, SNS, SQS, etc.)
• 4+ years of Java, Python, SQL, SparkSQL, PySpark
• Excellent problem-solving skills and strong verbal and written communication skills
• Ability to work independently as well as part of a team
Desired Skills:
• Knowledge of Spark streaming technologies
• Familiarity with Hadoop/Spark information architecture, data modeling, machine learning (ML), and Talend
• Knowledge of financial products, risk management, and portfolio management is preferred but not mandatory; training will be provided