Job Description:
Job Title: Spark Streaming & Spark SQL Developer
Duration: 3-6 month contract
Location: Remote (EST time zone)
Experience required: 10+ years
* Experience with Spark, Scala, and Python, and expertise in Big Data technologies.
* Experience working in on-premises big data environments.
* Hands-on experience with all phases of software engineering, including requirements analysis, application design, code development, and testing.
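To illustrate the kind of work this role involves, below is a minimal sketch of a Spark Structured Streaming job that reads events from Kafka and aggregates them with Spark SQL. The broker address, topic name, and event schema are hypothetical placeholders, not details from this posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object KafkaEventCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("KafkaEventCounts")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical event schema for illustration
    val schema = new StructType()
      .add("event_type", StringType)
      .add("ts", TimestampType)

    // Read a stream of JSON events from Kafka (placeholder broker/topic)
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()
      .select(from_json($"value".cast("string"), schema).as("e"))
      .select("e.*")

    // Run a Spark SQL aggregation over the stream
    events.createOrReplaceTempView("events")
    val counts = spark.sql(
      "SELECT event_type, COUNT(*) AS n FROM events GROUP BY event_type")

    counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```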
Required Qualifications:
* 5+ years with Big Data technologies (Hadoop, HBase, Hive, Scala) and 2+ years in Spark development, plus experience with Sqoop, Flume, Kafka, and Python
* 5+ years of experience in software development life cycle
* 5+ years of experience in project life cycle activities on development and maintenance projects
* Experience with end-to-end implementation of DW/BI projects, especially data warehouse and data mart development
* Strong knowledge of and hands-on experience in SQL and Unix shell scripting
* Knowledge of and experience with the full SDLC
* Experience with Lean / Agile development methodologies
* Experience in relational modeling, dimensional modeling, and modeling of unstructured data
* Solid experience with data integration, data quality, and data architecture
* Expertise in performing impact analysis for changes and issues
* Experience in preparing test scripts and test cases to validate data and maintain data quality
* Strong hands-on programming/scripting skills: UNIX shell, Perl, and JavaScript
* Experience with design and implementation of ETL/ELT framework for complex warehouses/marts.
* Knowledge of large data sets and experience with performance tuning and troubleshooting
* Hands-on development and CI/CD (DevOps) experience, with a willingness to troubleshoot and solve complex problems
* Ability to work on a team in a diverse, multi-stakeholder environment
* Ability to communicate complex technology solutions to diverse audiences, namely technical, business, and management teams
* Excellent verbal and written communication skills