Job Description:
In the role of Technology Architect, you will interface with key stakeholders and apply your technical proficiency across different stages of the Software Development Life Cycle, including Requirements Elicitation, Application Architecture Definition, and Design. You will play an important role in creating high-level design artifacts, deliver high-quality code for a module, lead validation across all types of testing, and support activities related to implementation, transition, and warranty. You will be part of a learning culture where teamwork and collaboration are encouraged, excellence is rewarded, and diversity is respected and valued.

Location for this position: Plano, TX

Bachelor’s degree or foreign equivalent required. One year of relevant work experience will also be considered in lieu of each year of education.
At least 9 years of work experience in the Information Technology field.
At least 5 years of hands-on experience with Big Data technologies.
Demonstrated strong leadership, analytical, and organizational skills, with excellent communication and articulation.
Experience with architecture and design of large-scale shared data solutions with multiple stakeholders.
Expertise with enterprise Hadoop distributions such as Cloudera or Hortonworks.
Expertise in Hadoop ecosystem components such as HDFS, MapReduce 2, Hive, Pig, ZooKeeper, Oozie, Flume, Impala, Spark, and Sqoop.
Expertise in building distributed systems, query processing, database internals, or analytic systems.
Expertise with data schemas, including logical and physical data modeling.
Hands-on development experience with Spark, HBase, Java (MapReduce), and Python (Linux shell-style scripting).
Able to independently architect solutions, lead code reviews and ensure quality throughout the life of each project.
Experience with the full software development life cycle of data warehousing projects.
Strong knowledge of database modeling principles, techniques and enterprise data management best practices.
Hands-on experience loading data into HDFS from heterogeneous databases (DB2, Oracle, and SQL Server) using Apache Sqoop.
Experience analyzing data using Hive, Pig, and Impala, and managing and navigating data and tables using Hue.
Extensive experience with Oozie, Flume, Sqoop, Spark, Storm, and Hive for data loading and analytics.
Experience exploring data lakes with Spark libraries for use cases such as fraud and risk detection and sentiment analysis.
Experience with data mining techniques and analytics functions.
Predictive analytics experience using R.
Proficient in writing SQL queries with complex joins and aggregations, as well as UNIX scripts.
Good understanding of Data Warehouse modeling concepts.
Must be able to quickly provide solutions or enhancements for issues reported by clients or users.
Flexibility to self-learn and understand the system, and to assist with query tuning and application performance.
Ability to effectively manage day-to-day interactions and relationships with a diverse group of team members
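To illustrate the kind of multi-table join and aggregation work referenced in the SQL requirement above, here is a minimal sketch using Python's built-in sqlite3 module; the table and column names (customers, orders) are hypothetical and purely for illustration:

```python
import sqlite3

# Hypothetical tables for illustration only: customers joined to their orders.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'US'), (2, 'EU');
    INSERT INTO orders VALUES (10, 1, 100.0), (11, 1, 50.0), (12, 2, 75.0);
""")

# Inner join with aggregation: order count and revenue total per region.
cur.execute("""
    SELECT c.region, COUNT(o.id) AS n_orders, SUM(o.amount) AS total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.region
    ORDER BY c.region
""")
rows = cur.fetchall()
print(rows)  # [('EU', 1, 75.0), ('US', 2, 150.0)]
```

The same join-then-aggregate pattern carries over directly to Hive or Impala queries against HDFS-backed tables.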


At least 7 years of experience in software development life cycle stages
At least 5 years of experience with Big Data technologies and their ecosystem
At least 7 years of experience in Project life cycle activities on development and maintenance projects
At least 3 years of experience in Design and Architecture review
At least 2 years of experience in application support and maintenance (including some on-call support experience)
Good analytical skills
High-impact communication skills
Ability to ramp up on new technologies
Ability to work in a team and in diverse, multi-stakeholder environments
Experience with, and desire to work in, a global delivery environment