Job Description:

Data Architect

Requirements:
10+ years of total experience in Java/Scala.
5+ years of hands-on experience in Hadoop programming
5+ years of hands-on experience in Java, Scala, and Spark
3+ years of additional hands-on experience with tools including, but not limited to: Kafka, NiFi, AWS, Maven, Stash, and Bamboo
Hands-on experience writing MapReduce jobs
Good knowledge of Spark architecture
Writing high-performance, reliable, and maintainable code
Good knowledge of database structures, theories, principles, and practices
Proven understanding of Hadoop, HBase, and Hive
Good understanding of Hadoop, YARN, and AWS EMR
Familiarity with data loading tools like Talend and Sqoop
Familiarity with cloud databases like AWS Redshift and Aurora MySQL
Familiarity with Apache Zeppelin/EMR Notebooks
Knowledge of workflow schedulers like Oozie or Apache Airflow
Analytical and problem-solving skills applied to the Big Data domain
Strong exposure to object-oriented concepts and implementation

Responsibilities:
Provide top-quality solution design and execution
Provide support in defining the scope and sizing of work
Align the organization’s big data solutions with client initiatives as requested
Engage with clients to understand strategic requirements
Translate business requirements into technology solutions
Work with domain experts to put together a delivery plan and stay on track
Ensure the integrity of data backup and storage systems
Monitor security software and tools
Manage the configuration of operating systems
Respond to security or usability concerns quickly
Anticipate possible problems and resolve them before they impact users when possible
Prepare reports for supervisors



Client: Implementer Need