Job Description:
Matrix Technology Group is a widely recognized provider of professional IT Consulting services in the US.

Feel free to reach me at 908-279-1246 or anishm (at) matrixonweb.com

I would like to let you know that one of our highly esteemed clients has an immediate need for a Big Data Engineering Architect Consultant.

Here is our direct client requirement. Kindly respond to this requirement with your resume, contact details, visa status, salary expectations, and current location to speed up the interview process. Feel free to reach me at 908-279-1246 or anishm (at) matrixonweb.com

Job Title: Big Data Engineering Architect Consultant
Duration: Permanent - Full-time
Salary: Open (Direct Placement)

Locations:
Midwest: IL - Chicago, IN - Indianapolis, MI - Detroit, MN - Minneapolis, MN - St Paul, OH - Cincinnati, OH - Columbus, OH - Cleveland, OH - Toledo, MO - Kansas City, MO - St Louis, WI - Milwaukee
Southeast: DC - Washington, MD - Baltimore, VA - Arlington, VA - Richmond, GA - Atlanta, NC - Charlotte, FL - Tampa, FL - Miami, FL - Orlando
Southwest: OK - Oklahoma City, AZ - Phoenix, CO - Denver, TX - Austin, TX - Dallas, TX - Houston
Northeast: CT - New Haven, CT - Hartford, DE - Wilmington, MA - Boston, NJ - Jersey City, NJ - Murray Hill, NJ - Florham Park, NY - New York, PA - Philadelphia, PA - Pittsburgh
West: CA - Los Angeles, CA - Los Alamitos, CA - Norwalk, CA - El Segundo, CA - Sacramento, CA - San Diego, CA - San Francisco, CA - San Jose, WA - Seattle, OR - Portland

Basic Qualifications:
Bachelor's degree in Computer Science, Engineering, or Technical Science, or 3 years of IT/programming experience
Minimum 1 year of architecting, implementing, and successfully operationalizing large-scale data solutions in production environments using the Hadoop and NoSQL ecosystem, on-premise or in the cloud (AWS, Google, or Azure), using relevant technologies such as NiFi, Spark, Kafka, HBase, Hive, Cassandra, EMR, Kinesis, BigQuery, Dataproc, Azure Data Lake, etc.
Minimum 1 year of architecting data and building performant data models at scale for the Hadoop/NoSQL ecosystem of data stores to support different business consumption patterns off a centralized data platform
Minimum 1 year of Spark/MapReduce/ETL processing (including Java, Python, Scala, or Talend) for data analysis of production Big Data applications
Minimum 1 year of architecting and industrializing data lakes or real-time platforms for an enterprise, enabling business applications and usage at scale
Minimum 2 years of designing and implementing relational data models, working with RDBMSs, and understanding the challenges in these environments

Preferred Skills:
Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools such as Presto and AtScale
Minimum 1 year of experience implementing large-scale BI/visualization solutions on Big Data platforms
Minimum 1 year of experience implementing large-scale secure cloud data solutions using AWS data and analytics services, e.g. S3, EMR, Redshift
Minimum 1 year of experience implementing large-scale secure cloud data solutions using Google data and analytics services, e.g. BigQuery, Dataproc
Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for modern data platforms that use Hadoop and NoSQL, on-premise or on AWS, Google, or Azure cloud
Minimum 1 year of experience securing Hadoop/NoSQL-based modern data platforms, on-premise or on AWS, Google, or Azure cloud
Minimum 1 year of re-architecting and rationalizing traditional data warehouses with Hadoop or NoSQL technologies on-premise, or transitioning them to AWS or Google clouds
Experience implementing data wrangling and data blending solutions for self-service use cases, using tools such as Trifacta and Paxata
Minimum 1 year of industry systems development and implementation experience, or minimum 2 years of experience in data loading, acquisition, storage, transformation, and analysis
Minimum 1 year of using ETL tools such as Talend or Informatica within a Big Data environment to perform large-scale, metadata-integrated data transformation
Minimum 1 year of building Business Catalogs or Data Marketplaces on top of a hybrid data platform containing Big Data technologies

Responsibilities:
Architect modern data solutions in a hybrid environment of traditional and modern data technologies such as Hadoop and NoSQL
Create technical and operational architectures for these solutions incorporating Hadoop, NoSQL, and other modern data technologies
Implement and deploy custom solutions/applications using Hadoop/NoSQL
Lead and guide implementation teams and provide technical subject matter expertise in support of the following:
-Designing, implementing, and deploying ETL to load data into Hadoop/NoSQL
-Security implementation of Hadoop/NoSQL solutions
-Managing data in Hadoop/NoSQL co-existing with traditional data technologies in a hybrid environment
-Troubleshooting production issues with Hadoop/NoSQL
-Performance tuning of a Hadoop/NoSQL environment