Hadoop Developer

Pune

Anwar Mansuri

Email: anwar.iter@gmail.com

Contact No.: 9424081044

Summary

Overall 3.7 years of software development experience, including 2 years in Big Data and its technologies.

 

·         Experience in designing and developing software components in the Big Data ecosystem, using tools such as Hive, Sqoop, and Pig.

·         Analyzed customer data obtained from different servers.

·         Developed custom input formats and user-defined data types for MapReduce programs to parse zipped JSON files.

·         Developed UNIX/Linux shell scripts for job execution.

·         Analyzed results stored in HBase to ensure correctness.

·         Experience importing and exporting data between database systems (MySQL, SQL) and HDFS/Hive using Sqoop.

·         Experience with Hadoop and its components in distributed environments, including MapReduce and Hive programming.

·         Experience in Hadoop development, with hands-on experience in Hadoop administration.

·         Strong analytical skills and business-requirement understanding.

·         Strong experience with Big Data ecosystem tools: Hive, Pig, Sqoop, HBase, MapReduce, and YARN.

·         Experience with UNIX/Linux operating systems and UNIX/Linux shell scripting.

·         Strong knowledge of data processing using Pig and Hive.

·         Good communication, presentation, and interpersonal skills.

·         Able to work well independently and within a team.

·         Experience with Hortonworks and Cloudera environments.
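As a flavor of the parsing work behind the zipped-JSON bullet above, here is a minimal standalone sketch using only Python's standard library. This is not the project's Hadoop InputFormat code (which would be written against the Java MapReduce API); the record fields are invented for illustration:

```python
import gzip
import io
import json

def parse_zipped_json_lines(gz_bytes):
    """Decompress a gzipped payload and parse one JSON record per line."""
    records = []
    with gzip.open(io.BytesIO(gz_bytes), mode="rt", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

# Build a small gzipped sample in memory to exercise the parser.
sample = "\n".join(json.dumps({"id": i, "event": "click"}) for i in range(3))
gz = gzip.compress(sample.encode("utf-8"))
print(parse_zipped_json_lines(gz))
```

A custom Hadoop InputFormat would do the same decompress-then-split-records step, but per HDFS block and with record boundaries handled for the framework.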

 

Certifications  

·         Certificate of Achievement, Accessing Hadoop Data Using Hive, Big Data University, December 9, 2015.

·         Certificate of Achievement, Big Data Analytics, Big Data University, December 3, 2015.

·         Certificate of Achievement, Big Data Fundamentals, Big Data University, November 29, 2015.

 

Professional Experience                                                         

July 2012 to date, Software Engineer, ITechsolutions India Pvt. Ltd., Pune, Maharashtra

 

Work Experience                                                        

 

A.    Project Client:             US based, leading Manufacturing Company

Project Name:              Analytic Data System (Large Scale)

Team Size:                   5

Role:                            Hadoop Developer

Description:                 The project covers the design, development, and implementation of an Analytic Data System (ADS) for reporting and analytics for the customer. Work included enhancing the existing reporting solution, introducing new Hadoop-based technologies, improving the design and architecture to make it more maintainable and scalable, re-engineering the overall analysis flow of the application, and leading POCs that explored open-source solutions for improving the system and implementing new features. Introduced Hive for ad-hoc queries and used Hive features to save space by integrating with the existing data design.

 

Contribution:  

·         Imported data from ETL sources to HDFS using Sqoop.

·         Stored the data in the Hive data warehouse.

·         Analyzed data and provided statistics to the client.

·         Wrote logic for MapReduce programs.

·         Implemented real-time data ingestion with Kafka to copy data from one cluster to another.

·         Wrote Hive queries to load and process data in the Hadoop file system.

·         Created Hive tables, loaded them with data, and wrote Hive queries that run internally as MapReduce jobs.

·         Transferred the entire dataset from one cluster to other clusters.
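The "logic for map reduce programs" bullet can be illustrated with a minimal mapper/reducer pair. This is a plain-Python sketch of the word-count pattern, not the project's Hadoop Java code; the input lines are invented:

```python
from collections import defaultdict

def mapper(line):
    # Emit (word, 1) pairs, as a Hadoop mapper would for a word count.
    for word in line.lower().split():
        yield word, 1

def reducer(pairs):
    # Sum counts per key, as a Hadoop reducer would after the shuffle.
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

lines = ["big data big cluster", "data pipeline"]
pairs = [pair for line in lines for pair in mapper(line)]
print(reducer(pairs))  # → {'big': 2, 'data': 2, 'cluster': 1, 'pipeline': 1}
```

In Hadoop, the framework performs the shuffle and sort between the two functions and runs them distributed across the cluster.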

Environment:               Hortonworks, Apache Hive, Sqoop, Apache Pig, HBase, Oozie

Platform (OS):              Linux Systems, Windows desktop for development

B.    Organization:               ITechsolutions India Pvt.Ltd

Project Client:             US based, leading Bank

Project Name:              Reporting (Large Scale)

Team Size:                   6

Role:                            Hadoop Developer

Description:                 The project goal is to move the current architecture of a dynamic banking system onto a Hadoop-based framework for heavy processing. The new architecture is expected to save substantial mainframe processing cost and to let the current system handle much more data. The new application lets the business manage its customers easily and effectively: a Big Data platform analyzes structured data and crunches it across various dimensions. Users can visualize results on a web-based UI, filter them across dimensions for business improvement, and offer the reporting service to their own customers.

Contribution:  

·         Developed the Hive queries required to create a data warehouse on the MapReduce Big Data platform.

·         Imported data from ETL sources to HDFS using Sqoop.

·         Analyzed data and provided statistics to the client.

·         Wrote logic for MapReduce programs.

·         Wrote Hive queries to load and process data in the Hadoop file system.

·         Used HBase for OLTP operations.

·         Involved in the requirement-analysis phase.

·         Wrote Pig scripts and UNIX shell scripts to optimize and set up ETL processes.

·         Ran all components through Oozie jobs.

·         Transferred the entire dataset from one cluster to other clusters.
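The Pig/shell ETL step above follows a common filter-group-aggregate pattern. The sketch below expresses that pattern in plain Python for a self-contained illustration (the actual project used Pig Latin; the column names and threshold are invented):

```python
import csv
import io
from collections import defaultdict

# Hypothetical transaction feed: a Pig script would express the same
# FILTER + GROUP BY + SUM pipeline in Pig Latin.
raw = """cust_id,amount
c1,100
c2,40
c1,60
c3,5
"""

totals = defaultdict(float)
for row in csv.DictReader(io.StringIO(raw)):
    amount = float(row["amount"])
    if amount >= 10:                      # FILTER: drop trivial amounts
        totals[row["cust_id"]] += amount  # GROUP BY cust_id ... SUM(amount)

print(dict(totals))  # → {'c1': 160.0, 'c2': 40.0}
```

In the Pig version, the same three steps would run as generated MapReduce jobs over HDFS data rather than in-process.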

 

Environment:               Apache Hive, Sqoop, Apache Pig, HBase, Oozie

Platform (OS):              Linux Systems, Windows desktop for development

C.    Organization:               ITechsolutions India Pvt.Ltd

Project Client:             Reliance Communications Ltd.

Project Name:              eCMD (Enterprise Crediting Monitoring and Dunning)

Team Size:                   8

Role:                            Team Member

Description:                 eCMD is a credit-control and dunning system for Reliance landline phones; barring and unbarring of lines is done through this system.

 

 

Contribution:

·         Developed and maintained database objects.

·         Uploaded data through the backend using SQL*Loader.

·         Maintained and reviewed project code developed in SQL and PL/SQL, making the necessary back-end customizations per business requirements.
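The bulk-load step above used Oracle SQL*Loader. As a rough, self-contained analogue of the same load-and-verify flow, this sketch uses Python's sqlite3 in place of Oracle; the table and rows are invented for illustration:

```python
import sqlite3

# Illustrative rows standing in for a SQL*Loader data file
# (line_id, barring status).
rows = [(1, "ACTIVE"), (2, "BARRED"), (3, "ACTIVE")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lines (line_id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO lines VALUES (?, ?)", rows)  # bulk insert
conn.commit()

# Verify the load, as one would after a SQL*Loader run.
active = conn.execute(
    "SELECT COUNT(*) FROM lines WHERE status = 'ACTIVE'"
).fetchone()[0]
print(active)  # → 2
```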

 

 

Skills

Core components: Hadoop, HDFS, Hadoop MapReduce, YARN

Data Access Components: Pig and Hive

Data Storage Component: HBase

Data Integration Components: Apache Flume and Sqoop

Data Management and Monitoring Components: Ambari, Oozie, and Zookeeper

Data Serialization Components: Thrift and Avro

 

Technology:                  HUE, Flume, Kafka, Spark, Java, SQL, Solr, Sequence, Maven, JSON, Tableau

Operating Systems:         Linux, Windows, UNIX

Environment:                 Cloudera, Hortonworks, AWS (beginner)

 

Education

Bachelor of Engineering, RGPV Bhopal
