Job Description:

"F1 visas, whether classified as CPT or OPT, will no longer be considered."

Job Title*: Technology Architect | Big Data - Hadoop | Hadoop
Work Location*: Bellevue, WA 98008
Vendor Rate*: 99.00
Contract duration (in months): 3
Target Start Date*: 22 Aug 2018
Does this position require visa-independent candidates only? Yes

Job Details:

Must Have Skills (Top 3 technical skills only)*:
1. Strong working knowledge of Teradata, Hive and HBase
2. Experience in data architecture for handling streaming and batch data within a data warehouse landscape
3. Understanding of Big Data technologies, Hadoop and modern data architectures such as Spark and NoSQL structures

Nice to Have Skills (Top 2 only):
1. Experience in distributed data processing frameworks such as Spark and MapReduce and data streaming such as Kafka; Spark Streaming preferred
2. Strong programming skills in at least one of the following: Python, Java, Scala, etc.

Detailed Job Description:
- Strong working knowledge of Teradata, Hive and HBase
- Experience in data architecture for handling streaming and batch data within a data warehouse landscape
- Understanding of Big Data technologies, Hadoop and modern data architectures such as Spark and NoSQL structures
- Experience in distributed data processing frameworks such as Spark and MapReduce and data streaming such as Kafka; Spark Streaming preferred
- Experience with an API management solution
- Strong programming skills in at least one of the following: Python, Java, Scala, etc.

Minimum Years of Experience*: 5+
Certifications Needed: No

Top 3 responsibilities you would expect the Subcon to shoulder and execute*:
1. Experience with data profiling tools such as Ataccama, Trifacta, etc. preferred
2. Excellent SQL tuning knowledge and experience with Teradata QueryGrid preferred
3. Experience working with vendors to evaluate, select, and implement third-party solutions

Interview Process (Is face-to-face required?): Yes

Any additional information you would like to share about the project specs / nature of work: Same as the Detailed Job Description above.





Skills: No items listed.