Job ID: CSS0314182
No. of Positions: 5
Educational Requirement: Bachelor’s degree in computer science, computer information systems, information technology, or a combination of education and experience equating to the U.S. equivalent of a Bachelor’s degree.
- Understanding business needs, analyzing functional specifications, and mapping them to the design and development of MapReduce programs and algorithms.
- Executing Hadoop ecosystem jobs and applications through Apache Hue.
- Optimizing Hadoop MapReduce code and Hive/Pig scripts for better scalability, reliability, and performance.
- Developing Oozie workflows for application execution.
- Performing feasibility analysis for the deliverables, evaluating requirements against complexity and timelines.
- Performing data migration from legacy RDBMS databases to HDFS using Sqoop.
- Configuring, maintaining, and monitoring Hadoop clusters using Cloudera Manager (CDH5) and the Hortonworks distribution (HDP 2.4).
- Administering Hue, the open-source web-based interface for interacting with Hadoop services.
- Writing Pig scripts for data processing.
- Implementing Hive tables and HQL queries for reports; writing and using complex data types in Hive; storing and retrieving data with HQL; developing Hive queries to analyze reducer output data.
- Designing the next-generation data architecture for unstructured data.
- Working with product managers to deliver high-quality solutions on time, and with operations teams to ensure our applications and services are highly available and reliable.
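For candidates unfamiliar with the MapReduce duty above, the programming model can be sketched in pure Python. This is an illustrative stand-in for a Hadoop job, not the Hadoop Java API; the function names (`mapper`, `shuffle`, `reducer`) are our own labels for the three stages the framework runs.

```python
from collections import defaultdict
from itertools import chain

# Minimal word-count sketch of the MapReduce model: map emits
# key/value pairs, the framework shuffles (groups) them by key,
# and reduce aggregates each group. Names are illustrative only.

def mapper(line):
    """Emit (word, 1) pairs, like a Hadoop Mapper's map() call."""
    for word in line.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    """Group values by key, as the framework does between stages."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    """Sum the counts for one key, like a Reducer's reduce() call."""
    return (key, sum(values))

def word_count(lines):
    mapped = chain.from_iterable(mapper(line) for line in lines)
    return dict(reducer(k, v) for k, v in shuffle(mapped).items())
```

For example, `word_count(["the quick fox", "the fox"])` yields `{"the": 2, "quick": 1, "fox": 2}`.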
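The Sqoop migration duty moves rows from an RDBMS table into delimited text files on HDFS. A minimal sketch of that transformation, using `sqlite3` as a stand-in for the legacy database and a list of strings as a stand-in for the HDFS part files:

```python
import sqlite3

# Sketch of what a Sqoop import does logically: read rows from an
# RDBMS table and render them as comma-delimited text records.
# sqlite3 and the returned list are stand-ins for the real RDBMS
# and HDFS output; this is not the Sqoop tool itself.

def export_table_as_text(conn, table):
    """Render each row of `table` as one comma-delimited record."""
    cursor = conn.execute(f"SELECT * FROM {table}")
    return [",".join(str(col) for col in row) for row in cursor]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "Ada"), (2, "Grace")])
records = export_table_as_text(conn, "customers")
# records == ["1,Ada", "2,Grace"]
```

In practice the equivalent step is a single `sqoop import --connect <jdbc-url> --table <table> --target-dir <hdfs-path>` invocation; Sqoop handles splitting and parallelism itself.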
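The "complex data types in Hive" duty refers to columns such as `MAP<STRING,STRING>`. Hive's default text layout separates fields with `\x01`, collection items with `\x02`, and map keys from values with `\x03`; a small sketch of parsing one such row (the column layout here is a hypothetical example):

```python
# Hive's default text-serde delimiters for a row containing a map
# column: fields on \x01, map entries on \x02, key/value on \x03.
# The (id, name, attributes) schema below is illustrative only.

FIELD, ITEM, KEY = "\x01", "\x02", "\x03"

def parse_row(line):
    """Parse one text row with (id, name, attributes-map) columns."""
    row_id, name, raw_map = line.split(FIELD)
    attrs = dict(item.split(KEY, 1) for item in raw_map.split(ITEM) if item)
    return {"id": row_id, "name": name, "attrs": attrs}

line = FIELD.join(
    ["7", "sensor-a", ITEM.join(["unit" + KEY + "C", "hz" + KEY + "10"])]
)
row = parse_row(line)
# row["attrs"] == {"unit": "C", "hz": "10"}
```

In Hive itself the same column would be declared in the DDL and read with `attrs["unit"]` inside an HQL query.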
Benefits: Standard Company Benefits