Job Description:
Responsibilities:
Work with multiple platforms, architectures, and diverse technologies to gather requirements and to design and develop ETL processes that transform information from multiple sources
Troubleshoot, monitor, and coordinate defect resolution related to ETL processing
Perform data modeling, database design, and ETL design and development in an Agile development environment
Design and develop ETL procedures
Play a technical role on a team implementing solutions based on industry-standard architecture patterns: loosely coupled architecture, web-oriented architecture, and service-oriented architecture

Qualifications:
Required Skills:
Bachelor's degree in Computer Science or a related discipline
2+ years of hands-on experience with Hadoop-ecosystem stream-processing pipelines such as Kafka, Flume, Apache Storm, Apache Samza, or Apache Spark Streaming
1+ year of hands-on experience with real-time data ingestion technologies such as StreamSets and Apache NiFi
1+ year of hands-on experience with search-based solutions for real-time indexing, analysis, and visualization, such as Elasticsearch/ELK, Splunk, Solr, or others
Hands-on experience with messaging-oriented application stacks: pub-sub architecture, clustering and brokers, queues and topics, etc.
1+ year of hands-on experience with XML and/or JSON for parsing and enriching data in flight and for transport

Desired Skills:
Familiarity with real-time data visualization techniques and enabling live analytics
A real-time thinker who can imagine the future with out-of-the-box solutions
Self-motivated to dive deep into subjects relevant to the solution
A background in engineering
Experience building distributed systems
Experience with Java and familiarity with other modern languages such as Scala, R, and Node.js
Aptitude and the front-end skills needed to design and implement administrative dashboards, with a strong ability to work independently

NOTE: Ability to obtain Public Trust (MBI) is required.

Client: Government