Job Description:
Title: Big Data Developer

Location: Baltimore, Maryland

Duties:

Manages major projects that involve providing professional support services and/or the integration, implementation, and transition of large, complex systems.
Provides design and development of e-government solutions and is responsible for the technical design and implementation of the architecture.
Designs, develops, and maintains infrastructure and backend applications.
Provides expertise in defining the role of broadband and wireless applications.
Defines current State architecture blueprints. Provides expertise with web servers, gateways, application servers, and content management systems.
Brings experience in web application technologies and middleware solutions.
Researches new technologies and products for their applicability to business processes.
Must be able to compare various solutions and determine which one best fits the need.
Ensures that development efforts are well planned and consistent with standards.

Education:

A Bachelor's Degree from an accredited college or university with a major in Computer Science, Information Systems, Engineering, Business, or another related scientific or technical discipline is required.

Qualifications:

The proposed candidate must have at least 5 years of experience in the database domain managing all operations of a Big Data stack, preferably Cloudera in a cloud environment.
The individual will work closely with development teams and customers to design, architect, implement, and support big data solutions.
The candidate must have hands-on experience with the Hadoop ecosystem, including HDFS, Hive, YARN, Spark, Storm, MapReduce, Pig, Flume, and Kafka, as well as working experience with NoSQL platforms (a minimal Spark sketch follows this list).
Design proper Hadoop cluster environments for application and data consumption.
Design and implement automation using scripts; must be proficient in scripting.
Design and implement replication and backups for mission-critical/tier-1 applications (see the scripted replication example after this list).
Monitor and troubleshoot jobs, problems, and performance issues in Hadoop clusters.
Recommend and implement in-depth tuning for infrastructure and applications. A solid understanding of the HDFS file system and its replication mechanisms is required.
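
For context, here is a minimal sketch of the kind of Spark work the role involves: a classic word count that reads from HDFS, applies the MapReduce pattern via the RDD API, and writes results back. This is illustrative only, not part of the requirements; the HDFS paths are hypothetical placeholders.

# Minimal PySpark word count sketch (paths are assumptions for illustration).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WordCount").getOrCreate()

# Read a text file from HDFS (hypothetical input path)
lines = spark.sparkContext.textFile("hdfs:///data/input/sample.txt")

# Map each word to (word, 1), then reduce by key to sum the counts
counts = (
    lines.flatMap(lambda line: line.split())
         .map(lambda word: (word, 1))
         .reduceByKey(lambda a, b: a + b)
)

# Persist the counts back to HDFS (hypothetical output path)
counts.saveAsTextFile("hdfs:///data/output/word_counts")

spark.stop()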
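
And a hedged sketch of the scripted automation described above, assuming the stock "hdfs dfs -setrep" command is available on the cluster; the HDFS path and target replication factor are hypothetical placeholders.

# Scripted automation sketch: raise the HDFS replication factor on a
# mission-critical path. Path and factor below are hypothetical.
import subprocess

CRITICAL_PATH = "/data/tier1/transactions"  # hypothetical HDFS path
TARGET_REPLICATION = 3                      # common HDFS default

def set_replication(path: str, factor: int) -> None:
    """Set the HDFS replication factor for a path (recursive for directories)."""
    # "hdfs dfs -setrep" is the stock Hadoop command for this;
    # -w waits until replication actually reaches the target.
    subprocess.run(
        ["hdfs", "dfs", "-setrep", "-w", str(factor), path],
        check=True,
    )

if __name__ == "__main__":
    set_replication(CRITICAL_PATH, TARGET_REPLICATION)
    print(f"Replication for {CRITICAL_PATH} set to {TARGET_REPLICATION}")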