Job Description:
ElevatIQ, Inc. provides technology and management consulting services and helps its customers solve operational and growth challenges through the adoption of cutting-edge technologies. Its services include digital transformation roadmap development, change management, business integration, system selection and procurement, vendor selection, skill development, and system implementation and adoption advisory services. The company serves customers across North America in various industries, including manufacturing, life sciences, retail, logistics, professional services, banking, and insurance.



We are working with a client seeking a Big Data Engineer with a minimum of 10 years of software development experience, including at least 4 years on a big data platform. Candidates must have active, current experience with Scala, Java, Python, Oracle, HBase, and Hive, along with a flair for data, schemas, and data modeling, and an understanding of how to bring efficiency to the big data life cycle. This is a full-time position with great career progression opportunities in Sunnyvale, CA. We are a fast-growing, progressive company with many exciting opportunities, looking for staff who want long-term, stable roles. We offer attractive salaries and exciting projects to work on, plus guaranteed upskilling training to further your career. So, if you have the technical skills and are the right candidate for this role, please apply.



Job Summary:


Duration – Long term contract to Hire
Open to all visa statuses
C2C
Interview – Skype Interview
Special skills required – Hands-on coding experience with Java, Core Java, and multithreading.
Must have Java, Scala, Python, Spark, Hive, and Kafka experience, and must be able to hit the ground running.




Job Responsibilities:


Build distributed, scalable, and reliable data pipelines that ingest and process data at scale and in real time.
Collaborate with other teams to design, develop, and deploy data tools that support both operations and product use cases.
Perform offline analysis of large data sets using components from the Hadoop ecosystem.
Evaluate and advise on technical aspects of open work requests in the product backlog with the project lead.
Own product features from the development phase through to production deployment.
Evaluate big data technologies and prototype solutions to improve our data processing architecture.




Candidate Profile:


BS in Computer Science or related area
Around 10 years of software development experience
Minimum 4 years of experience on a big data platform
Must have active, current experience with Scala, Java, Python, Oracle, HBase, and Hive
Flair for data, schemas, and data models, and an understanding of how to bring efficiency to the big data life cycle
Understanding of automated QA needs related to big data
Understanding of various visualization platforms (Tableau, D3.js, others)
Experience with cloud providers such as AWS and Azure preferred
Proficiency with agile or lean development practices
Strong object-oriented design and analysis skills
Excellent written and verbal communication skills




Top Skills Set:


Programming languages – Java, Python, Scala, R
Database – Oracle, complex SQL queries, performance tuning concepts, AWS RDS, Redshift
Batch processing – Hadoop MapReduce, Cascading/Scalding, Apache Spark, AWS EMR
Stream processing – Spark Streaming, Apache Storm, Flink
NoSQL – HBase, MongoDB, Cassandra, Riak
ETL tools – DataStage, Informatica
Code/build/deployment – Git, Bitbucket, SVN, Maven



Personal Qualifications:


Excellent communication and decision-making skills are essential.
Strong analytical and problem-solving skills.
Zeal to learn new technologies and frameworks, and an appetite for growth.
Identify project risks and recommend mitigation efforts.
Identify project issues, communicate them, and assist in their resolution.
Assist in continuous improvement efforts to enhance project team methodology and performance.
Cooperative, team-focused attitude.





How to Contact

ANKITA GUPTA | Technical Account Manager 
Email:  
Direct