
Data Management Engineer

Location:
Southlake, TX
Posted:
March 26, 2025


Resume:

Mallik Koneru

214-***-**** ************.**@*****.***

Professional Summary:

•IT professional with 13+ years of experience in data warehousing, ETL/ELT, and data engineering, with extensive experience in the development, administration, and implementation of data and BI tools such as DataStage, Snowflake, and Azure.

•Extensive experience in data modeling, analysis, design, implementation, testing and production support of ETL solutions for Data Warehousing and Business Intelligence environments.

•Strong knowledge of data warehouse principles and methodologies (Ralph Kimball and Bill Inmon); experienced in designing and implementing star schemas.

•Efficient in creating source-to-target ETL mappings using DataStage and Azure Data Factory.

•Hands-on experience hosting and consuming APIs and web services using the DataStage Hierarchical stage, Azure Web Apps, and Postman to capture employee payroll and customer travel information.

•Worked in the Azure cloud environment, implementing tools such as Azure Data Factory, Logic Apps, Azure Functions, Event Hubs, Auth/REST calls, Key Vault, and Storage Accounts within a given resource group from the Azure portal.

•Implemented, monitored, and maintained security on Azure Key Vault and Storage Accounts (Blob, File, Table).

•Experience with Snowflake features such as data sharing, SnowSQL, Snowpipe, Tasks, zero-copy cloning, and Time Travel.

•Experience processing semi-structured data files such as JSON and Parquet into Snowflake.

•Used AWS S3 storage for files and datasets and the Redshift DWH as the relational database.

•Experienced in DataStage administration: setting up the DataStage environment, installing DataStage clients (Windows) and servers (Unix), configuring databases, creating users and defining their roles from the console, and creating project-level environment variables.

•Followed and updated runbooks for installations, product-suite upgrades, and fix packs to address vulnerabilities; monitored and deleted older logs, killed long-running jobs, and released job locks.

•Consistently used File stage, Database stage, Transformer, Copy, Join, Funnel, Aggregator, Lookup, Filter and other processing stages to develop parallel jobs.

•Experienced in development and debugging of jobs using stages like Peek, Head & Tail Stage, Column generator stage.

•Designed and developed DataStage jobs that read and capture real-time data, using Hierarchical stages to make service calls between DataStage and other real-time service environments.

•Deployed services such as data assets and data planning in Cloud Pak for Data (CP4D) per business needs.

•Defined data governance policies in CP4D to mask personal data for privacy and to restrict data access to a limited set of users.

•Used data virtualization with IBM Planning Analytics to re-evaluate business needs and support business decisions.

•Hands-on experience with databases such as DB2, Oracle, SQL Server, Azure Postgres, and Azure SQL, and extensive experience working with MPP databases such as Teradata and Snowflake.

•Strong hands-on experience with Teradata utilities (SQL, FastLoad, MultiLoad, FastExport, TPump), Teradata SQL, and BTEQ scripts.

•Strong knowledge of database management: writing complex SQL queries and stored procedures (SPs), database tuning, query optimization, and resolving key performance issues in DB2, SQL Server, Oracle, and Teradata databases.

•Used version control tools such as runrcs, SVN check-in, UCD, and Git, and used ARM templates to deploy code in the Azure environment.

•Worked on UNIX-AIX (IBM), Solaris (Oracle), and LINUX (Red Hat) Servers.

•Skilled in using Python to automate processes, including data pre-processing, data cleaning, and data validation.

•Used Python for file validation and bad-data handling, routing rejects back to the source team as part of daily job runs (a minimal sketch follows this summary).

•Worked extensively with Unix shell, Perl, and Python scripting to automate processes.

•Hands-on experience with Job scheduling tools Autosys, Tivoli, Tidal and Control-M.

•Worked extensively on setup and implementation of EMFT (SFTP) and Axway processes to send and receive files from different sources.

•Good knowledge of big data technologies: Hadoop, Scala, Oozie, Pig, and Hive.

•Good knowledge of messaging queues such as IBM MQ and Kafka.

•Extensively used service management and IT governance tools (ServiceNow, Cherwell, HP OpenView Service Desk, HPSM, HPQC, and HP ITG).

•Worked within SDLC and Agile methodologies using Rally and JIRA.

•Strong Analytical and Problem-solving Skills. Can quickly adapt to a changing environment.
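
A minimal sketch of the daily file-validation routine referenced above, in Python. The file paths, expected column count, and notification addresses are hypothetical placeholders, not values from an actual project.

# Minimal sketch of a daily file-validation routine of the kind described above.
# Paths, the expected column count, and the notification addresses are hypothetical.
import csv
import smtplib
from email.message import EmailMessage
from pathlib import Path

EXPECTED_COLUMNS = 5                      # assumed layout of the inbound feed
INBOUND = Path("/data/inbound/feed.csv")  # hypothetical landing location
REJECTS = Path("/data/rejects/feed_rejects.csv")

def validate(inbound: Path, rejects: Path) -> tuple[int, int]:
    """Split the inbound file into good and bad records by column count."""
    good, bad = 0, 0
    with inbound.open(newline="") as src, rejects.open("w", newline="") as rej:
        writer = csv.writer(rej)
        for row in csv.reader(src):
            if len(row) == EXPECTED_COLUMNS and all(row):
                good += 1
            else:
                writer.writerow(row)
                bad += 1
    return good, bad

def notify(subject: str, body: str) -> None:
    """Send a status email through a local relay (hypothetical host/addresses)."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "etl-jobs@example.com"
    msg["To"] = "source-team@example.com"
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    good, bad = validate(INBOUND, REJECTS)
    if bad:
        notify("Daily feed: rejected records",
               f"{bad} of {good + bad} records failed validation; see {REJECTS}.")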

Skills:

ETL Tools

IBM WebSphere DataStage 7.5; IBM InfoSphere Information Server 8.5, 8.7, 9.1.2, 11.5, 11.7; Snowflake; Big Data; Teradata 13; Azure Data Factory.

Cloud Technologies

Microsoft Azure technologies, AWS

Databases

Snowflake, Teradata 12,13,14, DB2, MS SQL Server 2000, Oracle 12c, 11g/10g, Mainframes.

Environment

IBM AIX 5.3, 6.1, Linux 6.0, 7.1,7.5, Windows Server, Azure Platform.

Scripting

Shell Scripting, BTEQ and Python.

Database Tools

Oracle, MS SQL, TOAD, DB Visualizer, DB2 Control Center, Teradata SQL loader, Snowflake.

Others

Autosys 4.5, Tivoli Workload Scheduler, Control-M, HPSM, HPQC, ServiceNow, Jira, Rally.

Work Experience:

·DataStage & Azure Data Engineer for American Airlines from August 2022 to present.

·Lead ETL & Snowflake Developer for Caterpillar, Infosys from July 2021 to August 2022.

·Senior ETL & Snowflake Developer for Freddie Mac, TCS from Oct 2020 to July 2021.

·Senior DataStage Developer / Admin for American Airlines from Aug 2016 to Sep 2020.

·Senior ETL Consultant at United Health Group (OPTUM) from Sep 2014 to May 2016

·ETL Senior Developer at United Health Group from April 2011 to Sep 2014

·ETL Consultant at Accenture Spes Technologies (Accenture) from Jan 2011 to April 2011

Education:

Bachelor’s in Information Technology, Anna University, Chennai, India, 2007

Experience:

American Airlines Fort Worth, TX August 2022 – Present

Projects : De-Act/Re-Act, EFOP, Spring, DVerify

Role : Lead DataStage (Dev & Admin) / Azure Data Engineer

Tools used : DataStage 11.7, Azure Cloud, DB2, Teradata, Linux, TWS, Control-M, EMFT, ServiceNow, Rally.

Responsibilities:

•Worked extensively on migration of the DataStage environment to Azure using Azure Data Factory.

•Understood business requirements, coordinating with business analysts and core team members to gather specific requirements and create mapping documents to support the team’s migration development.

•Responsible for building a large, complex DWH using tools such as DataStage, Azure Data Factory, Logic Apps, Azure Functions, Event Hubs, Key Vault, Web Apps for real-time streaming, Storage Accounts, integration runtimes, linked services, Azure Postgres, and Azure SQL (a pipeline-run sketch follows this section).

•Designed and developed jobs that read and capture real-time data from APIs, hosting and consuming web services using the DataStage Hierarchical stage, Azure Web Apps, and Postman.

•Worked on hosting services for SAP HANA related to payroll and employee information.

•Developed jobs that read data from flat-file sources and load it into MS SQL, DB2, Teradata, Oracle, Azure Postgres, and Azure SQL tables.

•Consistently used DataStage File stages, Database connectors, Transformer, Hierarchical stage, Copy, Join, Funnel, Aggregator, Lookup, Filter and other processing stages to develop parallel jobs.

•Developed and debugged jobs using Azure Data Factory activities such as Execute Pipeline and Set Variable, and DataStage stages such as Peek, Head & Tail, and Column Generator.

•Extensively used Cloud Pak for Data (CP4D) to define data governance policies that mask personal data for privacy and restrict data access to a limited set of users.

•Used IBM Planning Analytics to re-evaluate business needs and support business decisions.

•Worked extensively on Python scripts to automate the file validation and cleaning process as part of daily runs.

•Used Python extensively to automate processes such as data pre-processing, cleanup, sorting, grouping, data cleaning, and validation.

•Created Unix shell scripts for file watchers, input-file validation, and automated email notifications about process status; captured rejected data and notified the source with the rejection reason.

•Extensive DataStage administration experience: building the DataStage environment from scratch, installing, configuring, and upgrading, connecting databases, creating users and environment variables, applying fix packs to address vulnerabilities, deleting older logs, and killing jobs or releasing job locks.

•Handled production support while developing and enhancing existing code for new business requirements.

•Created intermediate tables to log source-file information, processed counts, and loaded counts for future audits, and maintained primary- and foreign-key indexes.

•Built data frameworks and secure data transfers, maintaining data governance in line with company compliance standards such as PCI, PII, and SOX.

•Wrote complex SQL queries and stored procedures (SPs), performed database tuning and query optimization, and resolved key performance issues.

•Promoted Azure pipeline code between production and non-production environments using a Git repository and ARM templates.

•Extensively tested peer designs in both Azure Data Factory and DataStage ETL to meet design and coding standards per business requirements.

•Worked on the EMFT secure file transfer (SFTP) process to send and receive files from different sources.

•Created job-level environment variables, user variables, parameters, and shared containers.

•Worked in Agile environment using Rally and JIRA.

•Environment: Azure Cloud, Data Factory, IBM Information server DataStage 11.5, MS SQL, DB2, Teradata, Oracle 12c, Azure Postgres, Azure SQL, Linux 7.0, TWS, Control-M, EMFT, ServiceNow, Rally.
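
The sketch below illustrates the pipeline-run pattern mentioned above: triggering and monitoring an Azure Data Factory pipeline from Python with the azure-mgmt-datafactory SDK. The subscription, resource group, factory, pipeline, and parameter names are hypothetical placeholders.

# Hedged sketch: trigger an Azure Data Factory pipeline run and poll for its status.
# Requires azure-identity and azure-mgmt-datafactory; all names below are hypothetical.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"     # placeholder
RESOURCE_GROUP = "rg-dw-migration"        # hypothetical resource group
FACTORY_NAME = "adf-dw-migration"         # hypothetical data factory
PIPELINE_NAME = "pl_load_flat_files"      # hypothetical pipeline

def run_pipeline() -> str:
    client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
    run = client.pipelines.create_run(
        RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME,
        parameters={"load_date": "2022-08-01"},   # hypothetical pipeline parameter
    )
    # Poll until the run finishes, then return its final status.
    while True:
        status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id).status
        if status not in ("Queued", "InProgress"):
            return status
        time.sleep(30)

if __name__ == "__main__":
    print("Pipeline finished with status:", run_pipeline())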

Caterpillar (Infosys) Dallas, TX July 2021 – August 2022

Project : EPC, Snowflake Data Migration

Role : Lead ETL Datastage / Snowflake Developer

Tools used : DataStage 11.7, Teradata, Snowflake, Linux, ServiceNow, AWS, JIRA, Control-M, Tidal

Description: Development of transformation SQL in Teradata for month-end reporting data, and conversion to Snowflake of transformation SQL already running in production on Teradata. Generic data-validation jobs were developed in IBM InfoSphere DataStage to compare the data loaded into the Teradata and Snowflake databases.

Responsibilities:

•As an ETL developer, responsible for building the end-to-end ETL process to migrate data from the existing database (Teradata) to the new database (Snowflake).

•Worked extensively with the data modeler in designing intermediate tables and their DDL in Teradata and Snowflake.

•Created Snowflake stored procedures using SnowSQL and JavaScript.

•Migrated the on-premises Teradata database to the Snowflake cloud data warehouse.

•Loaded data from files staged in an Amazon S3 stage, copied them into target tables, and queried the warehouse data; worked with Amazon S3 components (see the COPY sketch following this section).

•Data loaded into the Redshift DWH is used by the business and other downstream teams for business analytics.

•Involved in the migration and developed transformation SQL in Teradata to load intermediate tables.

•Used existing Teradata stored procedures as the basis for new Snowflake stored procedures created with SnowSQL.

•Implemented delta loads and data transformations on the Snowflake cloud data warehouse.

•Implemented data streaming; hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, and Snowflake streams.

•Experienced in writing complex subqueries, joins, and views.

•Cloned database schemas, tables, and procedures from the QA to the production environment.

•Handled end-to-end production support for the EPC project, which has monthly, weekly, and daily batch jobs that load data into both Teradata and Snowflake.

•Experienced in writing complex SQL queries and stored procedures (SPs), database tuning, query optimization, and resolving key performance issues.

•Worked with the business on BOBJ reports that present the monthly loaded data, and resolved data issues arising from these monthly loads.

•Consistently used File stage, Database stage, Transformer, Copy, Join, Funnel, Aggregator, Lookup, Filter and other processing stages to develop parallel jobs.

•Experienced in development and debugging of jobs using stages like Peek, Head & Tail Stage, Column generator.

•Developed job sequencers for scheduling and job flow, and used loop stages to run the same job iteratively.

•Used shell scripts and Python to automate processes, validate received input files, and auto-generate notifications about process status.

•Worked on Control-M and Tidal Schedulers.

•Worked in Agile environment.

•Environment: IBM Information server DataStage 11.7, DB2, Teradata, Snowflake, Linux 7.0, ServiceNow, JIRA, Control-M, Tidal
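
A hedged sketch of the S3-stage load pattern described above, using the Snowflake Python connector. The warehouse, database, schema, stage, table, file format, and stored procedure names are hypothetical placeholders; credentials are read from environment variables.

# Hedged sketch: COPY data from an external S3 stage into Snowflake and call a
# converted stored procedure. All object names below are hypothetical.
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ETL_WH",          # hypothetical warehouse
    database="EPC_DB",           # hypothetical database
    schema="STAGING",            # hypothetical schema
)

try:
    cur = conn.cursor()
    # Copy month-end files already staged in S3 into the target table.
    cur.execute("""
        COPY INTO STAGING.MONTH_END_SALES
        FROM @EXT_S3_STAGE/month_end/          -- hypothetical external S3 stage
        FILE_FORMAT = (FORMAT_NAME = 'CSV_PIPE_FF')
        ON_ERROR = 'CONTINUE'
    """)
    print(cur.fetchall())        # per-file load results returned by COPY

    # Run a converted transformation as a stored procedure (name is hypothetical).
    cur.execute("CALL STAGING.SP_APPLY_MONTH_END_TRANSFORMS()")
finally:
    conn.close()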

Freddie Mac (TCS) Plano, TX Oct 2020 – July 2021

Project : MLIS (Mortgage Loan Integrated System)

Role : Senior ETL Datastage / Snowflake Developer

Tools used : DataStage 11.5/.7, DB2, Snowflake, AWS, Linux, ServiceNow, Autosys, Control-M, GIT eye, JIRA

Responsibilities:

•As an ETL developer, responsible for building the end-to-end ETL process to ensure data flows correctly from source to target.

•Understood business requirements, coordinating with business analysts and core team members to gather specific requirements for application development.

•Created metadata from the provided mapping documents and mapped the columns to the database.

•Developed jobs that read data from flat-file sources and load it into DB2, Snowflake, and Oracle application tables.

•Created Snowflake objects such as databases, schemas, tables, stages, sequences, views, procedures, and file formats (FILE_FORMAT) using SnowSQL and the Snowflake UI.

•Developed SnowSQL scripts to load data from flat files into Snowflake tables (a staging/COPY sketch follows this section).

•Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, and Snowflake streams.

•Handled large and complex CSV datasets from object stores such as AWS S3 that feed the Redshift DWH.

•The data loaded into the Redshift DWH is used further by multiple business teams as part of their analytics.

•Designed and developed jobs that read data from different sources, transform it per business requirements, and load it into the final target database.

•Consistently used File stage, Database stage, Transformer, Copy, Join, Funnel, Aggregator, Lookup, Filter and other processing stages to develop parallel jobs.

•Used shell scripts and Python to automate processes, validate received input files, and auto-generate notifications about process status.

•Captured rejected data and notified the source with the rejection reason.

•Created intermediate tables to log source-file information, processed counts, and loaded counts for future audits, and maintained primary- and foreign-key indexes.

•Tuned job performance to reduce run times, saving processing time across the whole job.

•Exported and imported DataStage components and metadata tables across environments.

•Performed unit and system testing; participated in design and code review meetings on behalf of the team.

•Scheduled Datastage jobs to run them in Sequence using Autosys and Control-M Schedulers.

•Worked on Migration of job schedulers from Autosys to Control-M using Control-M workload automation tool.

•Involved in testing to meet design and coding standards per business requirements.

•Used the code check-in process in the GitEye tool to maintain the code repository.

•Worked in an Agile environment using JIRA.

•Environment: IBM Information server DataStage 11.5,11.7, DB2, Snowflake, Linux 7.0, ServiceNow, GIT Eye, JIRA, Autosys, Control-M
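
The sketch below shows the flat-file staging flow referenced above: loading a local file through a Snowflake internal stage with the Python connector. All object names and the file path are hypothetical placeholders.

# Hedged sketch: PUT a local flat file to a Snowflake internal stage, then COPY it
# into the target table. Object names and the file path are hypothetical.
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="MLIS_WH",         # hypothetical warehouse
    database="MLIS_DB",          # hypothetical database
    schema="LANDING",            # hypothetical schema
)

try:
    cur = conn.cursor()
    # Define a reusable file format and an internal named stage (idempotent DDL).
    cur.execute("CREATE FILE FORMAT IF NOT EXISTS FF_CSV TYPE = CSV SKIP_HEADER = 1")
    cur.execute("CREATE STAGE IF NOT EXISTS LOANS_STAGE FILE_FORMAT = FF_CSV")
    # Upload the flat file to the stage, then copy it into the target table.
    cur.execute("PUT file:///data/outbound/loans.csv @LOANS_STAGE AUTO_COMPRESS=TRUE")
    cur.execute("""
        COPY INTO LANDING.LOANS
        FROM @LOANS_STAGE
        FILE_FORMAT = (FORMAT_NAME = 'FF_CSV')
        PURGE = TRUE
    """)
finally:
    conn.close()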

American Airlines Fort Worth, TX Aug 2016 – Sep 2020

Projects : De-Act/Re-Act, EFOP, Spring, DVerify

Role : Senior ETL DataStage Developer / Admin

Tools used : DataStage 11.5, DB2, Teradata, Oracle, Linux, TWS, Control-M, EMFT, ServiceNow, Rally.

Responsibilities:

•Understood business requirements, coordinating with business analysts and core team members to gather specific requirements for application development.

•Created metadata from the provided mapping documents and mapped the columns to the database.

•Developed jobs that read data from flat-file sources and load it into MS SQL, DB2, and Oracle tables.

•Designed and developed jobs that read and capture real-time data, using Hierarchical stages to make service calls between DataStage and other real-time service environments.

•Worked on hosting services for SAP HANA related to payroll and employee information.

•Consistently used File stage, Database stage, Transformer, Copy, Join, Funnel, Aggregator, Lookup, Filter and other processing stages to develop parallel jobs.

•Experienced in development and debugging of jobs using stages like Peek, Head & Tail Stage, Column generator stage.

•Generated surrogate keys using triggers in the Oracle database with a unique primary-key function to distinguish source records and capture any rejects while loading the table.

•Developed job sequencers for scheduling and job flow, and used loop stages to run the same job continuously.

•Created shell scripts for file watchers, validation of received input files, and automated email notifications about process status (a Python file-watcher sketch follows this section).

•Captured rejected data and notified the source with the rejection reason.

•Created intermediate tables to log source-file information, processed counts, and loaded counts for future audits, and maintained primary- and foreign-key indexes.

•Performed DataStage administration: building the DataStage environment from scratch, installing, configuring, and upgrading, connecting databases, creating users and environment variables, applying fix packs to address vulnerabilities, deleting older logs, and killing jobs or releasing job locks.

•Handled production support while developing and enhancing existing code for new business requirements.

•Used Cloud Pak for Data (CP4D) to deploy services like data assets and Data planning as per business needs.

•Extensively used Cloud Pak for Data (CP4D) to define data governance policies that mask personal data for privacy and restrict data access to a limited set of users.

•Used IBM Planning Analytics under Cloud Pak to re-evaluate business needs and support business decisions.

•Built data frameworks and secure data transfers, maintaining data governance in line with company compliance standards such as PCI, PII, and SOX.

•Tuned various jobs to reduce run times, saving processing time across the whole job.

•Performed unit and system testing; participated in design and code review meetings on behalf of the team.

•Worked with QA on testing to meet design and coding standards per business requirements.

•Worked on the Control-M scheduler to automate DataStage sequence runs.

•Set up the EMFT (secure file transfer) environment to send and receive files from different sources.

•Created job-level environment variables, user variables, parameters, and shared containers.

•Worked in Agile using Rally.

Environment: IBM Information server DataStage 11.5, MS SQL, DB2, Oracle 12c, Linux 7.0, EMFT file transfer, Control-M, TWS, ServiceNow, Rally
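
A Python equivalent of the file-watcher flow described above: wait for an expected landing file, check its record count against a control file, and report status. Directory names, filenames, and the timeout are hypothetical placeholders.

# Hedged sketch of a file watcher with a control-file record-count check.
# Paths, filenames, and the timeout are hypothetical.
import time
from pathlib import Path

LANDING_DIR = Path("/data/landing")          # hypothetical landing directory
DATA_FILE = LANDING_DIR / "payroll.dat"      # hypothetical data file
CTL_FILE = LANDING_DIR / "payroll.ctl"       # control file holding the expected row count
TIMEOUT_SECONDS = 2 * 60 * 60                # give the upstream feed two hours

def wait_for_file(path: Path, timeout: int) -> bool:
    """Poll until the file shows up or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if path.exists():
            return True
        time.sleep(60)
    return False

def counts_match(data_file: Path, ctl_file: Path) -> bool:
    expected = int(ctl_file.read_text().strip())
    actual = sum(1 for _ in data_file.open())
    return expected == actual

if __name__ == "__main__":
    if not wait_for_file(DATA_FILE, TIMEOUT_SECONDS):
        raise SystemExit("payroll.dat did not arrive within the SLA window")
    if not counts_match(DATA_FILE, CTL_FILE):
        raise SystemExit("record count does not match the control file; rejecting feed")
    print("payroll.dat validated; downstream DataStage sequence can start")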

Optum (United Health Group) Hyderabad, India Nov 2015 - May 2016

Optum (United Health Group) Cypress, California Sep 2014 - Nov 2015

Project : Galaxy

Role : Senior Datastage Developer / Data Engineer

Description: Galaxy is the primary shared data warehouse for DMS Operations (Optum Insight, part of Optum). With thousands of users and 25 terabytes (TB) of data, Galaxy is one of the largest health information databases in the world. Galaxy makes available very detailed data about products, providers, customers, members, claims, lab results, and revenue. There are approximately 40 sources of Galaxy data, and the number of sources continues to grow as more data is integrated; this drove architectural changes to Galaxy and a migration toward a big data environment, with data feeding into HDFS.

Responsibilities:

•Involved in understanding business processes and coordinating with business analysts to get specific user requirements.

•Designed and developed jobs that extract data from the source databases using DB connectors for Oracle, DB2, and Teradata.

•Worked extensively on production support to monitor daily, weekly, bi-weekly, monthly, and load-cycle batch jobs.

•Designed and developed jobs that read the data from diverse sources such as flat files, XML files, and MQs.

•Created job parameters, parameter sets and shared containers.

•Consistently used Transformer, Modify, Copy, Join, Funnel, Aggregator, Lookup and other processing stages to develop parallel jobs and used Peek, Head & Tail Stage, Column generator stage, Sample Stage for debugging of jobs.

•Generated Surrogate Keys for composite attributes while loading the data into the data warehouse.

•Extracted data from multiple sources such as mainframes and from databases such as Teradata, DB2, and Oracle; applied the required business transformations and loaded the data into the DB2 target.

•Good knowledge of big data technologies (Hadoop, Scala, Oozie, Pig, Hive) and loading data into HDFS.

•Used Hive to view HDFS data and to perform DDL and DML operations.

•Developed Job Sequencers and batches, edited the job control to have jobs run in sequence.

•Built data frameworks and secure data transfers, maintaining data governance in line with company compliance standards such as PCI, PII, and SOX.

•Worked on setup and monitoring of daily file transfers using SFTP and Axway transfer process.

•Troubleshot and performance-tuned DataStage jobs and SQL queries for better performance.

•Extensively used materialized views for designing fact tables.

•Developed Unix shell scripts and worked on Perl scripts for controlled execution of DataStage jobs, and analyzed BTEQ scripts (a dsjob wrapper sketch follows this section).

•Extensively designed jobs in the TWS and Autosys schedulers to execute jobs in sequence.

•Worked in Agile/Scrum environment.

Environment: IBM Information server DataStage 9.1.1, Big Data, Teradata 13.1.1, AIX 5.3, AIX 6.1, Linux 2.6.18, Oracle 10g, Autosys, Tivoli Workload Scheduler, DB2, TOAD, SQL Plus, HPSM, Jira.
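
A hedged sketch of the controlled-execution wrapper mentioned above, driving a DataStage job from Python via the dsjob command-line client. The project name, job name, and parameter are hypothetical placeholders; dsjob must be available on the engine host.

# Hedged sketch: run a DataStage job via the dsjob CLI and map its status to an
# exit code. Project, job, and parameter names are hypothetical.
import subprocess
import sys

PROJECT = "GALAXY_PRJ"          # hypothetical DataStage project
JOB = "seq_daily_claims_load"   # hypothetical job sequence

def run_job(project: str, job: str, params: dict[str, str]) -> int:
    cmd = ["dsjob", "-run", "-jobstatus"]
    for name, value in params.items():
        cmd += ["-param", f"{name}={value}"]
    cmd += [project, job]
    # -jobstatus makes dsjob wait and reflect the job's finish status in its exit code.
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    rc = run_job(PROJECT, JOB, {"RunDate": "2015-09-30"})
    # By convention, 1 = finished OK and 2 = finished with warnings; anything else fails.
    sys.exit(0 if rc in (1, 2) else rc)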

UnitedHealth Group Hyderabad, India Mar 2012 - Sep 2014

Project : UGAP

Role : Datastage Developer

Description: UGAP (UnitedHealth Group Analytics Platform) is a national consolidated data warehouse for UnitedHealth Group. It contains data from multiple claims-processing systems (UNET, COSMOS, MAMSI) in the GALAXY database. The source data, categorized into flat files and data from DB2 databases, is loaded into the UGAP database, which runs on Teradata. The final warehouse describes the complete details of an individual claim, provider details, member details, and pharmacy details.

Responsibilities:

•Extensively used DataStage Designer and Teradata to build the ETL process, which pulls data from different sources such as flat files, DB2, and mainframe systems and applies grouping techniques in the job design.

•Worked extensively on production support to monitor Weekly, Monthly Load batch process jobs.

•Developed master jobs (Job Sequencing) for controlling flow of both parallel & server Jobs.

•Parameterized variables rather than hardcoding values, and used DataStage Director widely to monitor job flow and processing speed.

•Developed Autosys jobs for scheduling, including box jobs, command jobs, file-watcher jobs, and jobs that create ITG requests.

•Closely monitored schedules and investigated failures to complete all ETL/load processes within the SLA.

•Worked on daily file transfers process using SFTP and Axway.

•Designed and developed SQL scripts and extensively used Teradata utilities such as BTEQ scripts, FastLoad, and MultiLoad to perform bulk database loads and updates (a BTEQ load sketch follows this section).

•Used Teradata export utilities for reporting purposes.

•Worked closely with onshore and business teams to resolve critical issues that occurred during the load process.

•Later, all UGAP jobs were migrated from Autosys to TWS; involved in the end-to-end migration process, which completed successfully.

Environment: DataStage 8.5, Linux, Oracle 10g, Teradata 13.1.1, Autosys, Tivoli Workload Scheduler, TOAD, SQL*Loader, SQL Plus, SQL, HPSM, Mercury ITG
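
A hedged sketch of the bulk-load pattern referenced above: a Python wrapper that feeds a BTEQ import script to the bteq client. The TDPID, credentials, database, table, and file layout are hypothetical placeholders, not values from the actual project.

# Hedged sketch: drive a BTEQ import from Python. All connection details and the
# table/file layout below are hypothetical.
import subprocess
import textwrap

BTEQ_SCRIPT = textwrap.dedent("""\
    .LOGON tdprod/etl_user,etl_password;
    DATABASE UGAP_STG;
    .IMPORT VARTEXT ',' FILE = /data/in/claims_weekly.csv;
    .QUIET ON
    .REPEAT *
    USING (claim_id VARCHAR(20), member_id VARCHAR(20), paid_amt VARCHAR(20))
    INSERT INTO CLAIMS_STG (CLAIM_ID, MEMBER_ID, PAID_AMT)
    VALUES (:claim_id, :member_id, :paid_amt);
    .LOGOFF;
    .QUIT;
""")

def run_bteq(script: str) -> int:
    # BTEQ reads its commands from stdin; a non-zero return code signals an error.
    result = subprocess.run(["bteq"], input=script, text=True)
    return result.returncode

if __name__ == "__main__":
    raise SystemExit(run_bteq(BTEQ_SCRIPT))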

United Health Group Hyderabad, India Apr 2011 - Feb 2012

Project : UCG

Role : ETL Developer

Description: The primary objective of the UCG (United Common Grouper) program is to implement a unified service environment for multiple grouper products against all claims. United Common Grouper allows business segments to process their member, claims, and provider data through various grouper engines with the required configuration settings. It is the single source for generating reports for multiple business segments and is central to UHG business decision making and to incorporating new grouper products and business segments.

Responsibilities:

•Participated in data modeling, using the Erwin tool to build the architecture.

•Worked extensively on production support to monitor Weekly, Monthly Load batch process jobs.

•Built job-control routines to invoke one DataStage job from another and created a custom BuildOp stage to suit project requirements, which was reused across job designs.

•Extracted claims data from multiple sources such as DB2 and Teradata and processed it through grouping techniques during the load.

•Incorporated error handling in job designs using error files and error tables to log records containing data discrepancies for analysis and reprocessing.

•Designed wrapper Unix shell scripts to invoke the DataStage jobs from the command line.

•Analyzed the BTEQ scripts.

•Developed TWS schedules and Autosys jobs to schedule job runs.

•Troubleshot technical issues that arose during the load cycle.

•Used Cognos to provide reporting data to the business.

•Implemented enhancements to the current ETL programs based on the new requirements.

•Involved in all phases of testing and effectively interacted with testing team.

•Documented ETL high level design and detail level design documents.

Environment: IBM DataStage 8.5, Teradata, Oracle, DB2, Linux, Oracle 10g, TOAD, SQL*Loader, SQL Plus, SQL, HPSM, Mercury ITG, Autosys, Tivoli Workload Scheduler, Jira

Client: Telstra (Accenture) Chennai, India Jan 2011 - Apr 2011

Role: Datastage Developer

Project: Telstra

Description: Telstra Corporation Limited is an Australian telecommunications and media company which builds and operates telecommunications networks and markets voice, mobile, internet access, pay television and other entertainment products and services. This is a migration project where the entire customer data is migrated from legacy servers (13 sources) to new servers called Siebel and Kenan database. The customer information is maintained in Siebel and the contact, financial information is maintained in Kenan database.

Responsibilities:

•Created DataStage jobs to extract, transform, and load data from various sources such as relational databases and flat files into the target tables.

•Worked extensively with stages such as Sequential File, Transformer, Aggregator, Lookup, Join, Merge, and Sort.

•Used DataStage Director to execute and monitor the jobs that perform source-to-target data loads.

•Responsible for preparation of design documents, unit testing, and performance review.

•Made the required changes to the jobs per the business and tracked the changes using HP Quality Center (QC).

Environment: IBM DataStage 7.1, AIX 5.1, Oracle 9i, TOAD, SQL*Loader, HPQC

Awards & Accomplishments:

·Received the highest honor in UnitedHealth Group, the “Sustaining Edge Award,” for Q3 2015, for identifying missing data and for managing and coordinating with upstream teams to load the member data into the data warehouse on time.

·Received the performer-of-the-month “Star Award” for completing the FTP-to-SFTP project within the given timelines.

·Received a “Pat on the Back” award for correcting and reloading corrupted claims data within the given SLA.

·Received the Best Team (GALAXY) award of the Operations Management department in UnitedHealth Group.


