Job Description:
Responsibilities:
Work with business users to understand use cases and explore tools and technologies to meet those needs.
Experiment with the tools, run POCs, and compare competing offerings in the market to identify the best fit.
Engage vendors to perform POCs: install tools and enable self-service experimentation.
Define criteria to compare and shortlist tools for a particular business need.
Work in cross-disciplinary teams to understand ways to ingest rich data sources such as social media, news, internal/external documents, emails, financial data, and operational data.
Understand and define the process by which the selected tools/technologies will be implemented, operationalized, and monitored for business users.
Provide technical consulting, design, and coding/prototyping for Hadoop platform activities.
Rapidly architect, design, prototype, and implement solutions to tackle Big Data and Data Science needs.
Research, experiment with, and utilize leading Big Data technologies such as Hadoop, Spark, Kafka, Kinesis, Redshift, Microsoft Azure, and AWS.
Implement and test data processing pipelines and data mining/data science algorithms in a variety of hosted settings, such as AWS, Azure, and on-premises clusters.
Implement automation to reduce time spent on manual processes.
Work with the team to build insightful visualizations, reports, and presentations.


Client: Direct Client