
Adarsh Inturi

240-***-**** | ad1s5o@r.postjobfree.com | linkedin.com/in/iad-cloud-architect

AWS Certified Solutions Architect with 2 years of IT experience in designing and implementing Data Warehouses/Data Marts and Data Analytics applications.

Over six years of comprehensive experience building cloud data pipelines and performing cloud data migrations.

Strong experience in creating High-Level Design and Detailed Technical Design documents, and in developing reusable frameworks to extract, transform, and load data from various source systems into Data Warehouses and Data Marts.

Strong RDBMS experience with Snowflake, Databricks, Redshift, Netezza, Teradata, Oracle, DB2, and SQL Server databases.

Strong experience building data pipelines using IBM DataStage, AWS Glue, Talend, and Snowpipe.

Extensive experience in designing semantic layers for BI metrics and KPIs.

Expertise in analyzing source system datasets, creating mapping documents and unit test plans, and handling code deployment/migration.

Strong understanding of the principles of DW using Fact Tables, Dimension Tables, Star and Snowflake schema, and data vault modeling.

Solid skills in writing SQL queries and stored procedures for developing and testing business applications.

Extensive experience in writing UNIX shell scripts for tasks such as FTP process automation, file archival, data pivoting, and deployment automation.

Proficiency with Test Plan preparation, Unit/System testing, and UAT.

Production support experience, including job scheduling, performance monitoring and analysis, and handling other production issues.

Trained and certified to work in Agile-Scrum environments.

CORE COMPETENCIES

Cloud Architecture · Data Warehousing · Data Marts · Data Modelling · Data Engineering · Data Strategy · Data Architecture · Big Data · Data Pipelines · Cloud Adoption & Migration · Data Lake · Data Governance · Data Security · Data Analytics · Data Integrity & Virtualization · Agile Methodology · Cloud Migration

TECHNICAL PROFICIENCY

Cloud platforms : AWS, Azure

DevOps : Jenkins, GitLab CI, Docker, Kubernetes

Version Control : Git

Data Modelling : Star-Schema, Data Vault Modeling

ETL/BI : AWS Glue, IBM DataStage, Business Objects, Tableau, Talend

Databases : Snowflake, Databricks, Teradata, Oracle, DB2, SQL Server, MS Access

Monitoring and Logging : AWS CloudWatch, Splunk

Containers and Orchestration : Docker, Kubernetes

Security and Compliance : AWS Well-Architected Framework, CIS Benchmarks

Languages and Scripting : Python, Terraform, Unix, XML, SQL, T-SQL, PL/SQL


PROFESSIONAL EXPERIENCE

CMS, Baltimore, MD
Sr. Cloud Data Engineer | Dec 2016 – Present

Project: CMS CPI One PI is a self-service data analytics environment for Medicare and Medicaid data where in-house CMS specialists, supporting contractors, and law enforcement can leverage multiple tools critical to program integrity work. Considered a “one-stop shop” for fraud-fighting data analysts, One PI provides access to current and historical data used to detect and prevent fraud, waste, and abuse. The system was on Teradata and has transitioned to AWS-managed Snowflake.

• Took part in planning, designing, and implementing an AWS-hosted Snowflake environment to centralize Medicare & Medicaid data from diverse sources, along with migrating data from Teradata.

• Utilized AWS Glue, Snowpipe, and Python to automate the ingestion, transformation, and loading of data from a variety of sources, ensuring data consistency and quality.

• Implemented and fine-tuned Snowflake auto-scaling and optimization strategies specific to the unique requirements of the CMS CPI One PI project, maintaining high performance during varying workloads while minimizing operational costs.

• Ensured the Snowflake architecture was optimized for performance, scalability, and reliability.

• Implemented security measures using AWS IAM and KMS to safeguard sensitive data.

• Enabled a semantic layer on Snowflake for analysts to run complex queries and perform advanced analytics on the data.

• Developed and implemented comprehensive strategies for managing the entire data lifecycle within Snowflake, including data ingestion, storage, archival, and eventual purging, aligning with project-specific needs.

• Implemented data archiving and retention strategies within Snowflake to efficiently manage historical data and ensure compliance with regulatory requirements.

• Implemented advanced data masking and redaction techniques within Snowflake to protect sensitive information, ensuring that only authorized personnel had access to confidential data in accordance with CMS compliance policies.

• Utilized Snowflake's time travel and versioning features to enable effective data version control and historical data analysis, crucial for program integrity work in the CMS CPI One PI project.

• Established and maintained robust ETL workflows, guaranteeing data consistency, quality, and timely availability for analytics.

• Deployed AWS CloudWatch for continuous monitoring of performance and access logs on the Snowflake platform.


• Created comprehensive documentation on Snowflake DW architecture, governance policies, and compliance procedures.

• Collaborated with business stakeholders to understand analytical requirements and translated them into effective Snowflake data models.

• Ensured that Snowflake data models aligned with the needs of data analysts and supported accurate, insightful reporting; also established disaster recovery and business continuity plans for Snowflake, ensuring data resilience and minimal downtime in case of system failures.

• Provided training and guidance to CMS staff, enabling them to utilize the EDL and analytics tools effectively.

• Collaborated closely with CMS's compliance team to establish and enforce data governance policies.

• Successfully transitioned the project to CMS IT teams for ongoing maintenance and support.

CNSI, Gaithersburg, MD (State of South Dakota & Washington)
Data Engineer | Dec 2013 – Nov 2016

As an ETL Lead/Developer, was involved in creating an ETL solution for the Medicaid Management Information System (MMIS) to support timely payment processing, enable more efficient access to important information for providers and recipients, and manage the MMIS more effectively. The Data Warehouse model built for the Medicaid Information System is based on the Data Vault modeling technique; this DW holds partially normalized data to support various kinds of querying, reporting, and data extracts.

• Designed standard job templates and best practices for developing the ETL process, and prepared test plans for unit and performance testing on WebSphere DataStage, loading legacy converted data from extracted flat files as the source for the initial load and OLTP data for incremental loads into the target data warehouse (Oracle) per business requirements.

• Worked as a developer on the SD MEDX DW project; responsibilities included coordinating with the project manager to prioritize tasks against projected deadlines, mentoring the team, and interacting with the modeling, external interface, BA, and test teams to resolve defects and critical issues and reduce gaps in the development and testing process.

• Performed DataStage administration; responsibilities included setting up development, test, and production environments, including installation and configuration of the IIS (DataStage application server, on Sun OS) and client for versions 8.1 and 7.5.2; applying the IIS Fix Pack and other patches; administering user accounts (creating users and groups, setting privileges for users and groups); setting up project parameters; managing locks; rebooting the application server; backing up projects; and interacting with the System Admin team to resolve critical OS-level resource issues (such as memory issues on the OS box) to keep the DataStage application servers up at all times.

• Worked closely with data modeling team and SMEs to create design documents, source to target mapping documents, unit test documents and to understand the business process.

• Worked with various subsystems of Medicaid and Medicare Information System such as Claims, Client, Provider, Drug Rebate, TPL, Managed Care and Prior Authorization.

• Developed DataStage jobs, sequencers, routines, and batch jobs.

• Developed ETL logic for SCD Type 1, Type 2, and Type 3 scenarios, and analyzed the given source and target table structures to create surrogate keys for the required dimensions per each SCD scenario.

• Worked with the CLOB data type to successfully populate the REMARK and AUDIT_COMMENTS columns for claim errors.

• Developed QualityStage jobs for data standardization, using Investigate and Standardize stages to standardize provider and member names and addresses.

• Created UNIX shell scripts for FTP process automation (transferring files from OLTP extracts and various external interfaces), file archiving, .dsx code deployment, enabling/disabling indexes and constraints on DW tables at run time, rebooting the DataStage application server, and executing external applications as part of the ETL process.

• Worked with the IBM support team to resolve various DataStage application bugs and install patches for the DataStage server and client tiers.

• Provided operational maintenance support for weekly delta loads in the production environment.

EDUCATION

MASTER OF SCIENCE IN COMPUTER SCIENCE

University of Central Missouri, Lee’s Summit, MO | Jan 2012 – May 2013

LICENSES & CERTIFICATIONS

• AWS Certified Solutions Architect

• AWS Certified Cloud Practitioner


