Job Description:
Note:
Need 10+ years of experience
Need a copy of visa, DL, passport number, and LinkedIn profile
Need a Big Data Engineer experienced with Hadoop/Spark technologies such as Hadoop, MapReduce, HDFS, Cassandra, HBase, Hive, Flume, Sqoop, Spark, and Kafka

Hi,

My name is Yateesh from Cygnus Professionals.

I am looking for a Big Data Engineer with Hadoop in Costa Mesa, CA / Texas / Florida for one of my clients. Please find the information below.
You can reach me at (Ext 9057).

Job Description:

Title: Big Data Engineer with Hadoop
Location: Costa Mesa, CA / Texas / Florida
Duration: 9+ Months

Key Responsibilities:
Work collaboratively with our Scaled Agile (SAFe) teams to rapidly deliver solutions
Utilize your software engineering skills, including Java, Python, Scala, and Ruby, to analyze disparate, complex systems and collaboratively design new products and services
Integrate new data sources and tools
Implement scalable and reliable distributed data replication strategies
Leverage Amazon Web Services to provide innovative solutions
Convert business requirements to working prototypes and then deployable solutions
Design and implement high-performance, scalable data solutions
Provide best-in-class security in everything you do
Automate everything

Knowledge, Experience & Qualifications:
BS degree in Computer Science, Computer Engineering, or equivalent
5+ years' experience delivering enterprise software solutions
Proficient in Java and Python; Ruby is a nice-to-have
Familiarity with scripting languages
Familiarity with AWS scripting and automation
Must be able to quickly understand technical and business requirements and translate them into technical implementations
Experience with Agile Development methodologies
3+ years' experience across multiple Hadoop/Spark technologies such as Hadoop, MapReduce, HDFS, Cassandra, HBase, Hive, Flume, Sqoop, Spark, and Kafka
Experience with data ingestion and transformation
Solid understanding of secure application development methodologies

Pluses:
Experience developing large-scale software platforms involving ETL, data quality, data fusion, and real-time ingestion and delivery
Experience with streaming data processing platforms such as Kafka
Experience with data collection using public APIs
Experience developing real-time solutions
Experience with the Scaled Agile Framework (SAFe)
             
