Job Description:
Only Citizens, Green Card holders, and GC EAD candidates; no H1B, please.

Our client is looking for a candidate who will be able to perform ETL job design, development, and automation activities with minimal supervision, while supporting manual ETL tasks for various development and implementation projects. The candidate will work with a team of talented engineers to enhance a highly scalable, fault-tolerant, and responsive big data platform built on next-generation streaming data technologies. This team member will be responsible for designing solutions and supporting data scientists with their data needs across all solution assets. They will help prove concepts, evaluate technologies, and contribute ideas that can turn into actionable implementations.

You will be an important member of the team tasked with transforming a cloud platform from a traditional enterprise data architecture to a streaming architecture based on open-source and Hadoop technologies.

Responsibilities:

The ETL developer is responsible for working across multiple platforms, architectures, and diverse technologies, gathering requirements and designing and developing ETL processes that transform information from multiple sources. The successful candidate will be able to perform ETL development and articulate and implement best practices for ETL reusability and modularity, primarily in Pentaho PDI but also in tools such as Apache NiFi and Kafka, and must have a successful track record of ETL job design, development, and automation with minimal supervision. Specific duties include:
Support manual ETL tasks for various development and implementation projects and work with team members to automate those tasks/jobs (a sketch of one such automation follows this list)
Troubleshoot, monitor, and coordinate defect resolution related to ETL processing
Support all existing ETL processes across various data assets
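By way of illustration only, here is a minimal sketch of the kind of job automation described above: a Python wrapper that launches a Pentaho PDI job through the Kitchen command-line runner so it can be scheduled. The install path, job file, and logging level are assumptions, not details from this posting.

    import subprocess
    from pathlib import Path

    # Hypothetical locations; adjust for the actual PDI install and job file.
    KITCHEN = Path("/opt/pentaho/data-integration/kitchen.sh")
    JOB_FILE = Path("/etl/jobs/nightly_load.kjb")

    def run_pdi_job(job_file: Path, log_level: str = "Basic") -> None:
        """Run a Pentaho PDI job via the Kitchen command-line runner."""
        result = subprocess.run(
            [str(KITCHEN), f"-file={job_file}", f"-level={log_level}"],
            capture_output=True,
            text=True,
        )
        # Kitchen exits non-zero when the job fails, so surface the logs.
        if result.returncode != 0:
            raise RuntimeError(f"PDI job failed:\n{result.stdout}\n{result.stderr}")

    if __name__ == "__main__":
        run_pdi_job(JOB_FILE)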

Required Skills:
2+ years of experience with Pentaho PDI development is essential
Hands-on experience with Apache NiFi data services development
Hands-on experience developing and integrating Kafka into data feeds/data flows (a minimal example follows this list)
Ability to design complex data feeds and experience integrating web services
Experience in Python-based development
Experience with Linux scripting is essential
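As a small illustration of the Kafka item above (not a requirement of the posting itself), a sketch of publishing a record from a Python-based data feed using the kafka-python package; the broker address, topic name, and payload are hypothetical.

    import json

    from kafka import KafkaProducer  # kafka-python package

    # Hypothetical broker and topic, for illustration only.
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    # Publish one record from an upstream data feed and flush before exit.
    producer.send("etl.events", value={"source": "crm", "status": "loaded"})
    producer.flush()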

Desired Skills:
Experience with AWS services, specifically Lambda functions, Data Pipeline, and Glue, is an advantage (see the sketch after this list)
Working experience with Amazon Redshift and MongoDB preferred
Self-motivated to learn and enhance cloud-based engineering and automation for data management
Able to work independently with minimal supervision
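Purely as an illustration of the AWS items above, a short sketch of starting an AWS Glue job run from Python with boto3; the region, job name, and argument are hypothetical.

    import boto3

    # Hypothetical Glue job name; in practice this would come from configuration.
    glue = boto3.client("glue", region_name="us-east-1")

    response = glue.start_job_run(
        JobName="nightly-etl-job",
        Arguments={"--run_date": "2024-01-01"},
    )
    print("Started Glue job run:", response["JobRunId"])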

Qualifications (Education/Experience):
Bachelor's degree in Computer Science or a related discipline
Ability to secure a Public Trust clearance


Client: Govt
