
Manager Data

Location:
Vasant Nagar, Karnataka, India
Posted:
September 28, 2020


Resume:

Samarjeet Upadhyay

Mobile: +91-875*******

E-mail: *****.***********@*****.***

PROFESSIONAL SUMMARY

A dedicated professional with 10 years of experience in Hadoop, Big Data, and ETL products.

4+ years of experience in setting up and managing Big Data/Hadoop projects.

Technical Manager/Product Manager handling cross-functional teams.

Valid B1 visa.

WORK EXPERIENCE

PROFICIENCY / SKILLS / ROLE: Technical Manager/Product Owner

Technical

Big Data: Hadoop (Cloudera and MapR), HDFS, Hive, Pig, Sqoop, Spark, Elasticsearch, Oozie, HBase, Kafka, Spark SQL

AWS (S3, EMR, EC2, DynamoDB, Airflow, CloudWatch, Glue), Redis

ETL: Informatica, Teradata, Oracle, Unix

Scripting: Python, Unix, Scala

Database: Teradata, Oracle, SQL Server

CI/CD: Git, Jenkins

Domain

Healthcare (5 years)

Telecom (2 years)

Banking (2 years)

Search/Recommendation Platform (1 year)

Technical Experience

Experience with large-scale SaaS applications

Experience in developing data pipelines and Big data solutions.

Experience in building and deploying applications on Cloud Platforms like AWS

Experience in applying Continuous Integration and Deployment practices

Experience in defining and developing API standards

Strong analytical skills, with the ability to collect, organize, and analyze large amounts of information and massive datasets (expertise in SQL, Hive, Spark SQL)

Implementation and migration experience in the Hadoop ecosystem (Spark, Scala, Python, Hive, HDFS)

Strong experience with cloud products and technologies: microservices architecture, APIs, Cloud SQL, API gateways, ELK, visualization tools.

Product Management experience

Scrum Master, Product Manager, Technical Manager; vendor management and resource/people management.

Tools used: JIRA, Confluence, Rally

Requirement analysis, estimation, negotiation, task allocation, planning, monitoring, reporting, escalation, stakeholder management, and overall project management.

Current Responsibilities:

Working with the engineering team to translate the roadmap into a working product

Managing iterative product development.

Managing the creation and deployment of the complete data pipeline for the Hadoop/Big Data product.

Involved in design and architectural discussions with the architect and developers.

Planning and executing projects and ensuring end-to-end closure, while managing people and delivering on all parameters of time, cost, and process.

Creating a technical WBS for every new feature/release.

Creating, maintaining, and prioritizing the product backlog/feature list.

Managing project schedule and scope via Gantt charts and milestone documents.

Communicating issues and progress to the client regularly and on time.

Resource Planning for BI projects.

Facilitating UAT with clients.

Deploying the product and gathering feedback from customers/clients.

Interviewing and mentoring new resources and aligning them to projects.

ACHIEVEMENTS

Won the Best Performer award at Infosys for 2012-13.

Won the Best Performer award at Optum for 2013-14.

Star Award from Emids, 2017.

People Growth Award, 2017.

Star Award from Emids, 2018.

New Shining Star of Huawei, 2019.

PROJECTS IN BIG DATA / HADOOP / MACHINE LEARNING

Claims Data Integration: A framework developed to ingest claims and provider data across different layers in the data lake. The data is converted into a healthcare global data format, which in turn is used for mastering the member (patient) data for different clients by applying various business and data transformations.
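For illustration, a minimal PySpark sketch of one such layered conversion step; the paths, column names, and rules below are hypothetical stand-ins, not the project's actual schema:

# Hypothetical sketch: conform raw claims to a common "global" format
# before member mastering. Paths and columns are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-integration").getOrCreate()

# Raw layer: claims as delivered by the client.
raw_claims = spark.read.parquet("s3://datalake/raw/claims/")

# Apply business/data transformations into the global data format.
global_claims = (
    raw_claims
    .withColumn("claim_date", F.to_date("claim_date", "yyyy-MM-dd"))
    .withColumn("member_id", F.upper(F.trim("member_id")))
    .filter(F.col("claim_amount") > 0)
)

# Curated layer: the input to downstream member-mastering jobs.
global_claims.write.mode("overwrite").parquet("s3://datalake/curated/claims/")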

Search and Video Recommendation: A machine learning platform that recommends videos to a user based on their search behavior, drawing on data stored in the data lake. The data is indexed in Elasticsearch for fast retrieval.
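As a rough sketch of the retrieval side, video documents might be indexed and queried with the Python Elasticsearch client as below; the index name and fields are assumptions, not the platform's real schema:

# Illustrative only: index video metadata, then fetch candidates that
# match a user's recent search terms. Assumes elasticsearch-py 8.x.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Index one video document keyed by its id.
es.index(index="videos", id="v-123", document={
    "title": "Intro to Spark",
    "tags": ["spark", "big data"],
    "watch_count": 1840,
})

# Retrieve candidates matching a term from the user's search history.
resp = es.search(index="videos", query={"match": {"tags": "spark"}})
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["title"])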

Data Ingestion Platform (DIP): A framework developed to process and store claims, provider, and patient data in the data lake, giving the client the ability to analyze its large datasets. The data arrives from different sources, such as tables and files, and is stored across different layers in the data lake before landing in the ODS (via Kafka). The data is visualized by Aetna with the help of REST web portals built on top of this ODS.
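A minimal Spark Structured Streaming sketch of the Kafka-to-store hop described above; the topic, schema, and paths are assumptions for illustration:

# Illustrative only: parse a claims topic from Kafka and land it in the
# lake layer that feeds the ODS. Needs the spark-sql-kafka connector
# package on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("dip-kafka-ingest").getOrCreate()

schema = (StructType()
          .add("claim_id", StringType())
          .add("provider_id", StringType())
          .add("amount", DoubleType()))

# Read the claims topic and parse the JSON payload.
parsed = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "claims")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("c"))
          .select("c.*"))

# Land parsed records in the staging layer that feeds the ODS.
(parsed.writeStream.format("parquet")
 .option("path", "s3://datalake/ods-staging/claims/")
 .option("checkpointLocation", "s3://datalake/checkpoints/claims/")
 .start()
 .awaitTermination())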

XDF (Extreme Data Framework): A framework created to process all the data injected into the data lake by the data ingestion tool MITO. Once the data is injected into the data lake, it is processed via the XDF framework, which is developed in Spark. The transformation logic for each client is applied via JEXL and JSON scripts, and the data is loaded into an Elasticsearch index from Parquet files via an ES loader, accessing Apache Drill. This data is later used for reporting purposes by various applications.
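A hedged sketch of the final load step, writing Spark output into an Elasticsearch index via the elasticsearch-hadoop connector; the node address, paths, and index name are illustrative assumptions:

# Illustrative only: bulk-load curated parquet into an Elasticsearch
# index for reporting. Needs the elasticsearch-hadoop connector jar.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("xdf-es-load").getOrCreate()

# Transformed output of the XDF Spark job, stored as parquet.
curated = spark.read.parquet("s3://datalake/curated/claims/")

# Write each row as a document into the reporting index.
(curated.write
 .format("org.elasticsearch.spark.sql")
 .option("es.nodes", "es-host:9200")
 .option("es.resource", "claims-reporting")
 .mode("append")
 .save())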

Big Data Patient Analytics (BDPA): A platform designed and developed for processing various healthcare patient data. The platform is designed so that different BIUs can ingest their data into the Hadoop ecosystem, which is later used for analytics with different BI reporting tools.
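For illustration, a minimal PySpark sketch of one BIU landing its feed as a partitioned Hive table that BI tools can then query; the database, table, and file names are hypothetical:

# Illustrative only: land one business unit's feed in a Hive table
# partitioned by BIU, so each unit's data stays separately queryable.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder.appName("bdpa-ingest")
         .enableHiveSupport().getOrCreate())

spark.sql("CREATE DATABASE IF NOT EXISTS bdpa")

feed = spark.read.csv("s3://landing/cardiology/patients.csv", header=True)

# Tag and append the feed under its BIU partition.
(feed.withColumn("biu", F.lit("cardiology"))
 .write.mode("append")
 .partitionBy("biu")
 .saveAsTable("bdpa.patient_events"))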


PERSONAL PROFILE:

Date of Birth: 26th May, 1988

Contact Address: Flat No. D503, Nester Harmony, Mahadevpura

I hereby declare that the information furnished above is true to the best of my knowledge.

Date:

Place: Bangalore

(Samarjeet Upadhyay)

ORGANIZATION AND TITLE

Infosys Limited: Systems Engineer

OptumInsight: Senior Engineer

Synchronoss Technologies: Senior Engineer

Huawei Technologies: Senior Technical Lead / Product Manager

Emids Technologies: Technical Manager / Product Owner


