Job Description:
Big Data Developer with Kafka

Dallas, TX



MUST HAVE strong hands-on experience with Kafka, Lambda Architecture, Java, Hadoop, Spark, S3, and Spark Streaming.
Benchmark systems, analyze system bottlenecks, and propose solutions to eliminate them.
Clearly articulate pros and cons of various technologies and platforms.
Document use cases, solutions and recommendations.
Possess excellent written and verbal communication skills.
Explain the work in plain language.
Help program and project managers with the design, planning, and governance of implementation projects.
Perform detailed analysis of business problems and technical environments, and use this analysis to design solutions.
Work creatively and analytically in a problem-solving environment.
Work in teams, as big data environments are built by employees from different disciplines.
Work in a fast-paced agile development environment.
Assess the quality of datasets for a Hadoop data lake.
Fine-tune Hadoop applications for high performance and throughput.
Troubleshoot and debug runtime issues across the Hadoop ecosystem.
Understand the enterprise Hadoop environment.
Demonstrate a good understanding of HBase clusters.
Define schemas and create Hive tables.