Job Description:
Role: DevOps with Kafka
Location: Richmond, VA
Duration: Contract
Experience: 9+ years

Responsibilities:
- Keep the data pipeline infrastructure (Kafka and peripheral tools) operational across AWS cloud-based environments.
- Work closely with cross-functional team members such as developers, operations, product managers, architects, and other stakeholders on projects from idea to implementation.
- Leverage DevOps techniques and practices such as Continuous Integration, Continuous Deployment, Test Automation, Build Automation, and Test-Driven Development.
- Respond to customer- and operations-reported application issues and incidents.
- Automate infrastructure tasks so that they can be performed consistently, quickly, and at the scale our business demands (see the sketch after this list).
- Free up engineers and key resources to focus on features and business tasks by bringing to bear extensive experience in automation, system administration, advanced troubleshooting, and performance management.
- Bring in new ideas, whether a new tool or a new technology, that help us innovate; bring an automation mindset and ensure manual tasks are automated.
- Develop tools that improve platform availability and engineers' ability to respond to incidents.
- Create automation solutions for our infrastructure platform to achieve a hands-free environment.
- Take part in our weekly 24x7 on-call rotation and conduct incident reviews.
- Build CI/CD pipelines using technologies such as Jenkins in a containerized environment.
- Troubleshoot and resolve issues in development, test, and production environments.
- Develop full-stack solutions and continuous delivery frameworks that improve the IT delivery teams' ability to deliver quality solutions efficiently and with reduced time to market.
- Write complex code, build infrastructure as code, work with immutable cloud-based environments, and build the automated toolsets necessary to support the continuous delivery pipeline.
- Integrate open-source and COTS products across the continuous delivery pipeline to provide a comprehensive automated system spanning epic definition, development, test, and deployment of the company's applications within our data center and AWS.
- Design, develop, and implement automated solutions based on standards and processes that establish consistency across the enterprise, reduce risk, and promote efficiencies in support of the organization's goals and objectives.
- Provide support (coaching and mentoring) for teammates' work activities on a regular basis.
- Actively review your own and the team's work product, and adopt improvements seen in other teams or across the industry to drive continuous improvement of the team's efficiency, speed, and quality.
- Proactively monitor the health of environments and drive troubleshooting and tuning as required.
- Evaluate and build compute frameworks for all tiers of technologies in the AWS cloud.
- Identify technical obstacles early and work closely with the team to find creative solutions; build prototypes and develop deployment strategies, procedures, road maps, etc.
- Investigate the impact of new technologies on the platform, Client One users, and customers, and recommend solutions.
- Concentrate on a wide range of loosely defined, complex situations that require creativity and originality, where guidance and counsel may be unavailable.
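For illustration, the sketch below shows the kind of small automation this role calls for: turning a routine Kafka task (topic creation) into an idempotent, repeatable script. It is a minimal sketch, not a definitive implementation; it assumes the kafka-python client, and the broker address and topic name are hypothetical placeholders.

```python
# Minimal sketch: automating a routine Kafka task (topic creation) so it is
# consistent and repeatable. Assumes the kafka-python client; the broker
# address and topic settings below are hypothetical placeholders.
from kafka.admin import KafkaAdminClient, NewTopic
from kafka.errors import TopicAlreadyExistsError

BOOTSTRAP = "broker-1.example.internal:9092"  # placeholder, not a real endpoint

def ensure_topic(name: str, partitions: int = 6, replication: int = 3) -> None:
    """Create a topic idempotently: a no-op if it already exists."""
    admin = KafkaAdminClient(bootstrap_servers=BOOTSTRAP,
                             client_id="infra-automation")
    try:
        admin.create_topics([NewTopic(name=name,
                                      num_partitions=partitions,
                                      replication_factor=replication)])
        print(f"created topic {name}")
    except TopicAlreadyExistsError:
        print(f"topic {name} already exists; nothing to do")
    finally:
        admin.close()

if __name__ == "__main__":
    ensure_topic("orders.events")  # hypothetical topic name
```

Treating the task as idempotent means the same script can run from a CI/CD pipeline on every deploy without special-casing first runs.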
Qualifications-Guidelines:
- Engineer with a strong background in building and maintaining highly scalable distributed data systems, such as data pipelines based on data streaming, Hadoop, etc.
- Experience deploying, managing, and supporting scalable, highly available data services with Kafka, Kinesis, Hadoop, Spark, and Flink in AWS Cloud and data center environments.
- Experience managing large-scale multi-node Kafka cluster environments on AWS, handling all Kafka environment builds, including design, capacity planning, cluster setup, performance tuning, and ongoing monitoring.
- Hands-on experience standing up and administering a Kafka platform, including backup and mirroring of Kafka cluster brokers, broker sizing, topic sizing, hardware sizing, performance monitoring (see the lag-check sketch after this list), broker security, topic security, and consumer/producer access management (ACLs).
- Proficient with AWS and Linux OS administration and troubleshooting, along with the associated scripting languages and networking stack; CentOS/RedHat knowledge preferred.
- Working knowledge of CI/CD pipeline tools; experience creating pipelines and automating tasks via Jenkins.
- Excellent command of Git.
- Working knowledge of infrastructure automation tools such as Chef, Ansible, etc.
- Experience with CloudFormation, Terraform, ELK, Grafana, etc.
- Experience with Docker and a container orchestration platform such as Kubernetes, including running and managing Kubernetes clusters and workloads.
- Proficient in Java/Scala/Go/Python/scripting, with at least 4 years of experience.
- Experience building and maintaining user-facing libraries and APIs is a plus.
- Strong verbal and written communication skills.
- A doer: you have a bias toward action and are not afraid to try things out.
- Fearless: big, undefined problems don't frighten you.
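The performance-monitoring item above could look something like the sketch below: a per-partition consumer-lag check of the sort that feeds dashboards or alerts. Again a minimal sketch assuming the kafka-python client; the group, topic, and broker values are hypothetical.

```python
# Minimal sketch of a Kafka consumer-lag check, assuming the kafka-python client.
# Lag = (latest offset in the partition) - (offset the group has committed);
# sustained growth usually means consumers are falling behind producers.
from kafka import KafkaConsumer, TopicPartition

def consumer_lag(group_id: str, topic: str, bootstrap: str) -> dict[int, int]:
    """Return per-partition lag for one consumer group on one topic."""
    consumer = KafkaConsumer(bootstrap_servers=bootstrap,
                             group_id=group_id,
                             enable_auto_commit=False)
    partitions = [TopicPartition(topic, p)
                  for p in consumer.partitions_for_topic(topic) or []]
    end_offsets = consumer.end_offsets(partitions)  # latest offset per partition
    lag = {}
    for tp in partitions:
        committed = consumer.committed(tp) or 0  # None if group never committed
        lag[tp.partition] = end_offsets[tp] - committed
    consumer.close()
    return lag

if __name__ == "__main__":
    # Hypothetical group, topic, and broker values, for illustration only.
    for partition, n in consumer_lag("payments-service", "orders.events",
                                     "broker-1.example.internal:9092").items():
        print(f"partition {partition}: lag {n}")
```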