Job Description:
Position: Hadoop Big Data Developer
Onsite/Offsite: Atlanta, GA
Contract: 6-12 Months
Team Composition:
Summary: Required: a developer to join the Core Software Engineering team, which is currently building the next-generation Online and Offline Core Exchange Platform (CEP 2.0) using Big Data technologies (Hadoop and NoSQL) to solve business problems on a common architecture. Technologies such as Kafka, Hadoop, Spark, Cassandra, and Elasticsearch are used under the covers to address the Data Factory, Exchange, Batch, and Insights domains with clear abstraction. Migrations focus on leveraging well-defined Platform Shell APIs, the Mango DSL (Java/Xtext based), AVRO data structures, and RESTful services so that implementations can leverage the platform and build products and services on top.
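For context, the AVRO data structures referenced above are typically expressed as schemas that implementation teams map their existing models onto. The Java sketch below is only a minimal illustration of that kind of schema definition; the record name, namespace, and fields are hypothetical and are not part of the actual CEP 2.0 model.

    import org.apache.avro.Schema;
    import org.apache.avro.SchemaBuilder;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericRecord;

    public class ExchangeRecordSchemaExample {
        public static void main(String[] args) {
            // Hypothetical exchange record; field names are illustrative only.
            Schema schema = SchemaBuilder.record("ExchangeRecord")
                    .namespace("com.example.cep")
                    .fields()
                    .requiredString("accountId")
                    .requiredLong("reportedTimestamp")
                    .optionalString("tradelineStatus")
                    .endRecord();

            // Populate a generic record against the schema, as a migration step might.
            GenericRecord record = new GenericData.Record(schema);
            record.put("accountId", "A-1001");
            record.put("reportedTimestamp", System.currentTimeMillis());
            record.put("tradelineStatus", "CURRENT");

            System.out.println(schema.toString(true));
            System.out.println(record);
        }
    }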

Team Requirements:
Prior experience with large-scale data and application migrations performed at global scale
Ability to understand the big data platform and its customer interaction points, ensuring customer requirements are mapped to the platform and vice versa
Experience in OO and component design patterns
Working with implementation teams to understand the data domain, define data in AVRO, migrate from the current model to AVRO, and define and code rules in the DSL
Expertise in Java, DSLs (home-grown and Xtext based), Unix, and scripting
Expertise in test-driven development, continuous integration, and release management is necessary
Good grasp of operational issues related to data, schema changes, ingestion, micro-batching (Spark), and large-scale batch processing (Spark on Hadoop); see the sketch after this list
Domain experience with credit, employment, and utility exchanges and/or the financial domain
Experience with AWS is preferred (at least 2 out of 7 team members)
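As an illustration of the micro-batching work mentioned in the requirements, the Java sketch below shows a minimal Spark Structured Streaming job reading from Kafka on a fixed micro-batch trigger. The broker address, topic name, and console sink are placeholders for illustration only and do not reflect actual platform endpoints or sinks.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.streaming.StreamingQuery;
    import org.apache.spark.sql.streaming.Trigger;

    public class MicroBatchIngestExample {
        public static void main(String[] args) throws Exception {
            SparkSession spark = SparkSession.builder()
                    .appName("micro-batch-ingest-example")
                    .getOrCreate();

            // Read a stream of events from Kafka; broker and topic are placeholders.
            Dataset<Row> events = spark.readStream()
                    .format("kafka")
                    .option("kafka.bootstrap.servers", "broker:9092")
                    .option("subscribe", "exchange-events")
                    .load();

            // Each micro-batch arrives as raw key/value bytes; a real pipeline would
            // decode the AVRO payload here before applying rules.
            StreamingQuery query = events
                    .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
                    .writeStream()
                    .format("console")
                    .trigger(Trigger.ProcessingTime("30 seconds"))
                    .start();

            query.awaitTermination();
        }
    }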