Senior Manager

Location: Bangalore, Karnataka, India
Salary: 4,800,000
Posted: December 17, 2020

Uday Kiran G

E-mail: adirf4@r.postjobfree.com

Mobile: +1-315-***-**** / +91-805*******

Professional Summary

10+ years of IT experience spanning application software design, development, migration and integration across Big Data, Data Warehousing, Data Analytics and Business Intelligence.

7+ years of leading, building and managing IT teams to deliver high-quality projects on time.

Experience and domain knowledge in various industries such as E-Commerce, Healthcare, Retail, Manufacturing, BFSI and Supply Chain.

Expert in designing Project Roadmaps (Technology, Resource, Timelines & Budget).

Contributed R&D work to open-source communities including Apache Pulsar, Dask and Qubole Sparklens.

9+ years of experience in Data Analysis, Migration, Cleansing, Transformation, Integration, Ingestion, Storage, Querying, Import and Export of data using Hadoop ecosystem tools like HDFS, Hive, HBase, Pig, Sqoop, MapReduce, Flume, Kafka, Oozie, Falcon, Scala and Spark.

Extensive experience in data analytics, generating data visualization reports and building dashboards using IBM Watson and Tableau.

Strong experience creating real-time data solutions using Apache Spark (Core, SQL, Streaming and MLlib); a brief streaming sketch follows at the end of this summary.

Experience working with Cloudera and Hortonworks Hadoop distributions, with solid knowledge of the Hadoop architecture and its components.

Hands-on experience with AWS cloud services (VPC, EC2, S3, Redshift, Data Pipeline, EMR, Glue, RDS, Airflow) and Azure cloud services (Data Factory, Data Lake Store, Data Lake Analytics, U-SQL, HDInsight, Databricks, PowerShell, Synapse Analytics, Power BI, SSIS, SSRS, Cosmos DB, Stream Analytics).

Experience with cost optimization and data security strategies on the cloud.

Developed and maintained ETL (Data Extraction, Transformation and Loading) pipelines using Informatica, IBM Data Stage, SSIS and Talend.

Good knowledge of database creation and of maintaining logical and physical data models on Oracle, Teradata, DB2 and SQL Server databases.

Interprets business problems and provides solutions using data analysis, data mining, optimization tools and statistics.

Experienced in Proofs of Concept (PoCs) and gap analysis; gathered the necessary data from various sources and prepared it for exploration using data munging and Teradata.

Working closely with customers, cross-functional teams, research scientists, software developers, end users and stakeholders in an Agile/Scrum work environment to drive data model implementations and algorithms into practice.

Experience using Kubernetes to deploy, scale, load-balance and manage Docker containers across multiple namespaced versions.
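
For illustration only, below is a minimal Scala sketch of the kind of real-time Spark pipeline referenced in this summary: Spark Structured Streaming reading from Kafka and producing windowed counts. The broker address, topic name and event fields are hypothetical and not taken from any project listed here.

```scala
// Minimal sketch (hypothetical broker, topic and schema): Kafka -> Spark
// Structured Streaming -> per-minute event counts on the console.
// Requires the spark-sql-kafka connector on the classpath.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object ClickstreamCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("ClickstreamCounts").getOrCreate()
    import spark.implicits._

    // Hypothetical JSON event layout: {"userId": ..., "eventType": ..., "ts": ...}
    val schema = new StructType()
      .add("userId", StringType)
      .add("eventType", StringType)
      .add("ts", TimestampType)

    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // hypothetical broker
      .option("subscribe", "clickstream")                // hypothetical topic
      .load()
      .select(from_json($"value".cast("string"), schema).as("e"))
      .select("e.*")

    // Tumbling one-minute window counts per event type, with a late-data watermark
    val counts = events
      .withWatermark("ts", "5 minutes")
      .groupBy(window($"ts", "1 minute"), $"eventType")
      .count()

    counts.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```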

Educational Details

Master's in Data Science from BITS Pilani, 2020.

Data Analytics & Machine Learning - Business Implementation (Executive Program) from IIT Roorkee, 2020.

Bachelor's in Computer Science from JNTU Hyderabad, 2010.

Certifications and Awards

Certified Project Management Professional- IT (PMP- Agile) from International Business Management Institute

Certified Azure Data Engineer from Microsoft

Certified TOGAF Practitioner/Architect from The Open Group

Certified in Blockchain from University of Buffalo

HDP Certified Spark Developer from Hortonworks

Certified in IBM 000-421: Datastage 8.5.

Oracle 9i PL/SQL Developer Certified Associate (1Z0-007)

Received “Associate of the Year” & “Young Achiever” Awards from Cognizant, 2014.

Technical Skill Set

Hadoop Ecosystem

Apache Hadoop 1.x/2.x (YARN), HDFS, MapReduce, Hive, Pig, Zookeeper, Sqoop, Spark, Oozie, Flume, Kafka, Ambari, Falcon, Kylin, Sparklens

Distributions

HortonWorks, Cloudera

Cloud

AWS Cloud: EC2, S3, EMR, Redshift, Glue, RDS, Airflow, SWF, Snowflake

MS Azure: Data Factory, Data Lake Store, Data Lake Analytics, HDInsight, PowerShell, Synapse Analytics, U-SQL, SSIS, SSRS, Cosmos DB

GCP: BigQuery, Dataproc

DWH BI Tools

IBM DataStage, SAP BODS, Informatica, SSIS, SSAS, Tableau, IBM Watson

Relational Databases

Teradata, DB2, Oracle 11g/10g, MS-SQL Server 2003/2005

NoSQL Databases

HBase, Cassandra, MongoDB

Programming Languages

Scala, Python, Java, Dask

Scripting

Unix Shell Scripting, Perl, Python

Reporting tools

IBM Watson, SSRS, Tableau, Cognos, BO, Crystal Report, Power BI

Methodologies

Agile/Scrum, Waterfall

Other Tools

Eclipse, IntelliJ IDEA, GitHub, Jenkins, PuTTY, Control-M, Autosys, Kubernetes, Jira, Atlas, AgilOne, Google Analytics, Shopify, Adobe Analytics

Project Annexure

Client: PVH Corporation Jan 2020 – Present

Location: Bangalore, India & Bridgewater, NY

Role: Senior Manager – Data Platform

Description:

PVH Corp., formerly the Phillips-Van Heusen Corporation, is an American clothing company with $10 billion in revenue that owns brands such as Van Heusen, Tommy Hilfiger, Calvin Klein, IZOD, Arrow, Warner's, Olga, True & Co., and Heritage Brands.

Role and Responsibilities:

Managing the Customer Insights & E-commerce Data Analytics teams across both onshore and offshore locations.

Successfully delivered the V-trail (virtual 3D clothing experience) and custom recommendation engine applications for the CK and TH brands in the Europe and Japan regions (a brief recommendation sketch follows this list).

Engaging with product management and business stakeholders to understand priorities, and developing the roadmap and a detailed project plan with milestones accordingly.

Worked closely with architects and cross-functional teams, following established practices to deliver solutions that meet QCD (Quality, Cost & Delivery) within the established architectural guidelines.

Drawing insights from data and clearly communicating them (verbal/written) to the stakeholders and senior management as required.

Participating in hiring and building teams, enabling them to become high-performing agile teams.

Working closely with project teams, as outlined in the Agile methodology, providing guidance on implementing solutions at various stages of projects.

Implemented OLAP-on-Hadoop with the Kyligence team to derive user query patterns for intensive decision-making, saving $2M/year on AdPros.

Performed data visualization with IBM Watson and generated dashboards to present the figures to users.
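
The resume does not describe the recommendation engine's internals, so the following is only an illustrative Scala sketch of a generic collaborative-filtering job built on Spark MLlib ALS; the input path and column names are hypothetical.

```scala
// Illustrative only: generic collaborative-filtering with Spark MLlib ALS.
// Paths and column names (userId, itemId, rating) are hypothetical.
import org.apache.spark.ml.recommendation.ALS
import org.apache.spark.sql.SparkSession

object RecommendationSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("RecommendationSketch").getOrCreate()

    // Hypothetical implicit-feedback interactions per user and item
    val ratings = spark.read.parquet("s3://bucket/ratings/") // hypothetical path

    val als = new ALS()
      .setUserCol("userId")
      .setItemCol("itemId")
      .setRatingCol("rating")
      .setImplicitPrefs(true)
      .setRank(32)
      .setRegParam(0.1)

    val model = als.fit(ratings)

    // Top-10 item recommendations per user, stored for downstream serving
    model.recommendForAllUsers(10)
      .write.mode("overwrite").parquet("s3://bucket/recommendations/") // hypothetical path

    spark.stop()
  }
}
```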

Environment:

Hortonworks, HDFS, Hive, Spark, Sqoop, Kafka, Oozie, Scala, Snowflake, Oracle, AWS S3, EMR, Glue, Redshift, RDS, MySQL, Informatica PowerCenter, Informatica BDM, Teradata, Tableau, CAM/CAD, Google Analytics, Shopify

Client: Lowe’s Companies Inc. Nov 2016 – Dec 2019

Location: Bangalore, India, Mooresville, NC & Hamilton, ON

Role: Senior Architect/ Manager – Big Data/Hadoop & DWH

Description:

Lowe's Companies, Inc. is an American chain of retail home improvement and appliance stores operating more than 2,700 locations in the United States, Canada, and Mexico. Globally, Lowe's is the second-largest hardware chain, with over $70 billion in revenue.

Designed applications for data ingestion, transformation and analytics, building data pipelines with multiple Hadoop ecosystem tools to satisfy the organization's reporting needs.

Role and Responsibilities:

Managing Lowe’s LIL (R&D) – Analytics & Operations with 38 members (Development, DevOps & Support Teams).

Establishing and maintaining strong, trusted relationships with stakeholders and partner teams.

Designing solutions for key business initiatives, ensuring alignment with the future-state analytics architecture vision, for applications such as Customer 360, Demand Forecasting, Promotion Engine, Search Optimization, Payments & Loyalty, Space & Assortment Optimization, Replenishment Program and Lowe's Canada.

Working closely with project teams, as outlined in the Agile methodology, providing guidance on implementing solutions at various stages of projects.

Performing data mining, segmentation analysis and business forecasting on large data sets using Hadoop ecosystem and visualization tools.

Responsible for data identification, collection, exploration and cleaning for modeling, and participating in model development (a brief cleaning sketch follows this list).

Developed and designed sampling methodologies, and analyzed the survey data for pricing and availability of products.

Investigated product feasibility by performing analyses that include market sizing, competitive analysis and positioning.

Performing daily validation of business data reports by querying databases and rerunning missing business events before the close of the business day.

Working on Performance tuning and Optimization of jobs on Hadoop ecosystem.

Performing root-cause analysis of data discrepancies between business systems by examining business rules and data models, and providing the analysis to the development/bug-fix team.

Coordinated the execution of A/B tests to measure the effectiveness of the personalized recommendation system.

Performed data visualization with IBM Watson and generated dashboards to present the figures to users.
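
As an illustration of the exploration and cleaning step mentioned above, here is a minimal Scala/Spark sketch; the Hive table and column names are hypothetical and not drawn from the project.

```scala
// Illustrative sketch: quick profiling plus routine cleaning of a (hypothetical)
// sales table before it is handed off for modeling.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DataCleaningSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("DataCleaningSketch")
      .enableHiveSupport()
      .getOrCreate()

    val sales = spark.table("analytics.sales_raw") // hypothetical Hive table

    // Quick profiling: total rows and null counts per column
    println(s"rows = ${sales.count()}")
    sales.select(sales.columns.map(c => sum(col(c).isNull.cast("int")).as(c)): _*).show()

    // Cleaning: drop duplicate orders, drop rows missing keys, fill defaults, parse dates
    val cleaned = sales
      .dropDuplicates("orderId")
      .na.drop(Seq("orderId", "storeId"))
      .na.fill(Map("quantity" -> 0, "channel" -> "unknown"))
      .withColumn("orderDate", to_date(col("orderDate"), "yyyy-MM-dd"))

    cleaned.write.mode("overwrite").saveAsTable("analytics.sales_cleaned") // hypothetical
    spark.stop()
  }
}
```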

Environment:

Hortonworks (HDP 2.6), HDFS, Hive, HBase, Pig, Sqoop, MapReduce, Flume, Kafka, Oozie, Scala, Spark, Core Java, Oracle, Google Analytics, Azure HDInsight 3.6, DLS, DLA, Databricks, Storage and SQL Server 2014, Synapse Analytics, Teradata, IBM Watson, Adobe Analytics

Client: Baxter Healthcare Ltd (CSC) Apr 2015 – Oct 2016

Location: Bangalore, India

Role: Subject Matter Expert (SME)

Description:

Baxter International Inc. develops, manufactures and markets products that save and sustain the lives of people with hemophilia, immune disorders, infectious diseases, kidney disease, trauma, and other chronic and acute medical conditions.

The purpose of the project is to analyze the effectiveness and validity of healthcare products, storing the terabytes of log information generated by the source providers and extracting meaningful information from it.

Role and Responsibilities:

My role in this project is SME/Team Lead.

Supporting OEMs and third party service providers on their post-sales service operations.

Involved in the requirements gathering, design, development and testing phases.

Managing the data lake on Hadoop and ingesting data from heterogeneous source systems.

Designing generic, reusable frameworks for ingesting data for both batch and streaming applications.

Developed custom functionality in Spark using UDFs (a brief sketch follows this list).

Extracting large data sets and analyzing the data using Spark SQL.

Collecting and aggregating large amounts of real-time log data and deriving metrics for user reporting needs.

Provided technical support for production environments: resolving issues, analyzing defects, and providing and implementing change requests.
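
A minimal Scala sketch of the Spark UDF and Spark SQL usage mentioned above; the log table, column names and classification rule are hypothetical.

```scala
// Illustrative only: register a Scala function as a Spark UDF and call it from
// Spark SQL over a (hypothetical) device-log table.
import org.apache.spark.sql.SparkSession

object UdfSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("UdfSketch")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical rule: classify a log message by severity keyword
    val severity = (msg: String) =>
      if (msg == null) "unknown"
      else if (msg.toLowerCase.contains("error")) "error"
      else if (msg.toLowerCase.contains("warn")) "warning"
      else "info"

    spark.udf.register("severity", severity)

    // Aggregate event counts per derived severity level
    spark.sql(
      """SELECT severity(message) AS level, COUNT(*) AS events
        |FROM logs.device_logs
        |GROUP BY severity(message)""".stripMargin
    ).show()

    spark.stop()
  }
}
```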

Environment:

Hortonworks Distribution, HDFS, Hive, HBase, Pig, Sqoop, MapReduce, Flume, Kafka, Oozie, Scala, Spark, Oracle

Client: Southern California Edison (PPBU) (Cognizant) Feb 2013 – Mar 2015

Location: Bangalore, India

Role: Tech Lead

Description:

Southern California Edison, the largest subsidiary of Edison International, is the primary electricity supplier for much of Southern California, USA. Provided data services to the PPBU team, which mainly deals with three areas: demand forecasting, meter reading and distribution of power.

Used the Hadoop ecosystem to perform ETL operations, mainly the transformation of data based on business requirements; mapped requirements and provided customized solutions, including finalizing design specifications and selecting appropriate techniques, while coordinating with the reporting team.

Role and Responsibilities:

Led a team of 5 members for the initial Hadoop setup PoC.

Involved in gathering requirements, design, development and testing.

Unit / Integration / System Testing of the components.

Developed the ingestion framework to transfer data between Hadoop and a relational database.

Migrated ETL jobs to the Hadoop ecosystem, mainly using Hive, MapReduce and Pig Latin jobs.

Involved in writing complex SQL queries for business users' reporting needs.

Addressing user data issues and working on root-cause analysis.

Generating reports using BLUELIGHT.

Environment:

Cloudera, HDFS, Hive, HBase, Pig, Sqoop, MapReduce, SAP BODS 4.0, Informatica, Oracle, MySQL, Java

Client: South State Bank, US (Cognizant) Sep 2011 – Jan 2013

Location: Bangalore, India

Role: ETL Developer (Hadoop Migration)

Description:

South State Bank, based in Columbia, South Carolina, is the largest bank based in South Carolina and a subsidiary of South State Corporation, a bank holding company. The company had 168 branches across South Carolina, North Carolina, Georgia, and Virginia.

Provided ETL services to the Payments team, mainly for the transformation of data from one server to another; mapped requirements and provided customized solutions, including finalizing design specifications and selecting appropriate techniques in line with DQ rules.

Role and Responsibilities:

Worked on the development framework for Data Quality and DaaS for fraud detection.

Worked on various project-related documentation such as detail design, tech spec, standard operating procedure and mapping documents.

Created DataStage jobs to extract, transform and load data from various sources.

Preparing Unit Test Cases and involved in System Testing.

Writing Complex SQL queries

Compiling and testing DataStage server jobs.

Taking daily backups of all DataStage jobs and maintaining them in the project shared folder.

Set up the Hadoop cluster and initiated the PoC on HDFS, MapReduce, Sqoop and Hive.

Environment:

ETL Tools - DataStage 7.5, Ab Initio v3.1.7, Oracle, Control-M, UNIX scripting, Facets, HDFS, MapReduce, Hive, Sqoop.

Client: Optum Healthcare, US (Cognizant) Feb 2011 – Sept 2011

Location: Bangalore, India

Role: ETL Developer

Description:

Optum Healthcare Services, formerly Ingenix Healthcare Innovation and Information, provides detailed healthcare analytical services to leading Fortune 500 employers through its flagship product, Parallax-I.

The Data Services Organization (DSO), comprising the Data Management and Data Processing groups, processes the incoming data feeds through well-defined steps before the data is loaded into the Parallax-I tool, enabling employers to run analytic reports on their healthcare costs and utilization across the various program areas.

Role and Responsibilities:

My role in this project was Developer.

Extensively used Data Stage Designer to develop various Parallel jobs to extract, cleanse, transform, integrate and load data into Enterprise Data Warehouse tables.

Deriving metrics for user reporting needs. Addressing user data issues and working on root-cause analysis.

Involved in Performance Tuning and optimization of jobs.

Environment:

ETL Tools - Data Stage 8.1, Autosys, UNIX scripting, Oracle

Client: Gameloft SE, France Apr 2010 – Jan 2011

Location: Hyderabad, India

Role: Intern

Description:

Gameloft SE is a French video game publisher based in Paris, France.

Role and Responsibilities:

Worked mainly on automation processes and testing of the Vivendi platform and levels.

Extensively worked on UIT for data mocking, testing and automation through Unix scripts.

Designed prototypes for Gamer level maintenance.

Personal Details

Date of Birth: 22nd March 1989

Nationality: Indian

Skype id: udaykiran221

Visa Status: B1- US Business Visa (Validity 2028)

DECLARATION:

I hereby declare that all the information mentioned above is true to the best of my knowledge.

(Uday Kiran Gubbala)


