Job Description:
We have a new job requirement. If interested, please send me your updated resume with the skills matrix below completed as soon as possible.

Job Title: Big Data Architect
Duration: Direct Full-Time
Location: Alpharetta, GA

Our client’s strategy is to transform to a data-driven culture in which large volumes of disparate data drive new revenue opportunities, increase returns on investments, and improve business decisions. You will work in CDO organization with our Data Lake group, reporting to the Director of Big Data Engineering. Our group builds the Data Lake platform serving analytics for multiple lines of business.

Responsibilities:
Lead the Big Data engineering team, with specialization in data ingestion (from 100 source systems, in batch and near real-time), egress, and governance.
Participate in collaborative software and system design and development of the new Data Lake on Hortonworks HDP and HDF distributions.
Manage your own learning and contribute to the team's technical skill building.
Inspire and cultivate an engineering mindset and systems thinking.
Gain deep technical expertise in data movement patterns, practices, and tools.
Play an active role in Big Data Communities of Practice.
Put the minimal system needed into production.

Qualifications:
Required Qualifications:
Bachelor’s degree or higher in Computer Science or a related field.
Good understanding of distributed computing and big data architectures.
Passion for software engineering and craftsman-like coding prowess.
Proven experience developing Big Data solutions in the Hadoop ecosystem using Apache NiFi, Kafka, Flume, Sqoop, Apache Atlas, Hive, HDFS, HBase, and Spark (Hortonworks HDP and HDF preferred).
Experience with at least one leading CDC (Change Data Capture) tool, such as Informatica PowerCenter.
Development experience with at least one NoSQL database; HBase or Cassandra preferred.

Polyglot development (4-5 years):
Capable of developing in Java and Scala, with a good understanding of functional programming, SOLID principles, concurrency models, and modularization.
DevOps: Appreciates the CI/CD model and always builds for easy consumption and monitoring of the system. Experience with Maven (or Gradle or SBT) and Git preferred.
Experience in Agile development, including Scrum and other lean techniques.
Believes in the "You build it, you ship it, you run it" philosophy.
Personal qualities such as creativity, tenacity, curiosity, and passion for deep technical excellence.

Desired Qualifications:
Experience with Big Data migration/transformation programs in the Data Warehousing and/or Business Intelligence areas.
Experience with ETL tools such as Talend, Pentaho, and Attunity.
Knowledge of Teradata, Netezza, etc.
Good grounding in NoSQL data stores such as Cassandra and Neo4j.
Strong knowledge of computer algorithms.
Experience with workload orchestration and automation tools such as Oozie and Control-M.
Experience building self-contained applications using Docker, Vagrant, Chef, etc.


Please email your MS Word resume to and complete the skills matrix below:
Full name:
Degree Major:
Total IT exp:
Total Big Data Architect exp:
Total exp in distributed computing and big data architectures:
Total exp in the Hadoop ecosystem using Apache NiFi, Kafka, Flume, Sqoop, Apache Atlas, Hive, HDFS, HBase, and Spark (Hortonworks HDP and HDF preferred). Please specify:
Total exp with at least one leading CDC (Change Data Capture) tool, such as Informatica PowerCenter. Please specify:
Total development exp with at least one NoSQL database (HBase or Cassandra preferred). Please specify:
Total exp developing in Java and Scala with a good understanding of functional programming, SOLID principles, concurrency models, and modularization:
Total exp in Agile development including Scrum and other lean techniques:
Are you a US Citizen/GC/Other? Please specify:
Expected Hourly Rate:
Email:
Skype ID:
Cell#:
Home#:
Best time to reach you:
Availability:
A) Phone Interview:
B) Face to Face Interview:
C) Start Date:
Current City and State:
Current Location residential address:
Willing to relocate: