Venkata Ramana Sathivada
McKinney, Collin County, 75070, Texas, USA (Open to Relocate) 806-***-**** ****************@*****.*** https://www.linkedin.com/in/svenkataramana/
Professional Experience
Overall 8.3 years of IT experience in the analysis, design, development, testing, and implementation of business application systems in the Banking, Healthcare, and Retail domains.
Strong data warehousing ETL experience using Informatica IICS and PowerCenter 10.2/10.4 client tools: Repository Manager, Mapping Designer, and Workflow Manager/Monitor.
Worked effectively as an Informatica Developer and Tester, and performed L1/L2/L3 support activities.
Strong experience in Dimensional Modeling using Star and Snowflake Schema, Identifying Facts and Dimensions.
Expertise in working with relational databases such as Oracle 11g/10g, SQL Server 2008/2005, and Teradata.
Skilled in designing ETL procedures and strategies to extract data from heterogeneous source systems such as Oracle 11g/10g, SQL Server 2008/2005, flat files, and XML.
Experience in resolving on-going maintenance issues and bug fixes; monitoring Informatica sessions as well as performance tuning of mappings and sessions.
Experience in all phases of data warehouse development, from requirements gathering through code development, unit testing, and documentation.
Experience in using Automation Scheduling tools like Autosys, Control-M.
Worked extensively with slowly changing dimensions (SCD Type 1 and SCD Type 2).
Excellent interpersonal and communication skills, with experience working with senior-level managers, business stakeholders, and developers across multiple disciplines.
Strong experience in healthcare claims data analysis, working with Medicare and Medicaid data, EDI X12 formats, HIPAA standards, and reporting requirements.
Hands-on experience with Google Cloud Platform (GCP) and BigQuery for analyzing large healthcare datasets and supporting cloud-based reporting needs.
End-to-end involvement in ETL and data integration projects using Informatica, SQL, SSIS, and reporting tools to deliver reliable data solutions for business users.
Worked in Agile environments by supporting sprint activities, requirement discussions, and coordination between business stakeholders and technical teams.
Worked closely with business users and technical teams to analyze data requirements, design data flows, and deliver accurate reports and dashboards that supported key business decisions.
Experience
(5) Designation: Business Systems Analyst Stabilis Technologies Inc 615 Carle Avenue, Lewis Center, 43035, OH
Project-5: Healthcare Claims Data Modernization (CHIP-DI / CIP-GCP Claims) Client: Cambia Health
Domain: Healthcare
Duration: Nov 2025 – Present
Tools & Technologies: Teradata data analysis, SQL, BigQuery analytics, data governance monitoring, automation scripts, batch processing optimization, reporting analytics, migration validation
Description: This project focuses on improving how healthcare claims data is collected, processed, and reported at Cambia Health. The goal is to integrate Medicare and Medicaid claims from different systems into a single cloud platform using ETL tools and Google Cloud. The project supports better data quality, faster reporting, and accurate insights for the HEDIS and STARS programs, helping business and clinical teams access reliable claims information for decision-making and compliance needs.
Role: Business Systems Analyst, Data Integration and Reporting.
Responsibilities:
• Supported migration planning by analyzing legacy claims and eligibility data flows and converting business goals into clear data movement and validation steps for the modern platform.
• Performed detailed data analysis on Medicare and Medicaid datasets to identify patterns, trend changes, and data quality risks that could impact reporting and business planning.
• Supported large-scale transfers into cloud analytics by validating that incoming datasets matched expected structure and that transformed outputs aligned with reporting definitions.
• Built and maintained detailed mapping references showing how claim fields and eligibility fields moved from legacy sources into the new target structures used for analytics.
• Produced data aggregation outputs for utilization, cost, and program tracking views so business teams could monitor operational performance and strategic outcomes.
• Supported operational reporting by generating trend analysis on daily volumes, rejects, and load times, helping teams see whether the platform was improving over time.
• Supported data governance work by documenting data lineage paths, defining standard field meaning, and helping track compliance to enterprise data standards across feeds.
• Monitored and tested data quality checks that flagged missing IDs, invalid dates, duplicate claim lines, and unexpected values, then routed problem records into review tables.
• Escalated non-compliance and control failures with clear business impact context and practical options for resolution, helping stakeholders act quickly on quality issues.
• Supported automation work for repeatable data processing steps by defining standard routines for file readiness checks, batch tracking, and daily load validation outputs.
• Supported performance improvement work by analyzing query patterns and heavy processing steps, then recommending changes that reduced runtime and improved cost behavior.
• Provided milestone analytics for migration phases by tracking load completion patterns, failure causes, rerun counts, and readiness for the next phase of delivery.
• Investigated reporting mismatches by tracing totals and sample records from dashboards back to source tables and transformation steps, using SQL checks to isolate root cause.
• Supported controlled reruns by defining restart points and validation steps so late-arriving files or corrected data could be processed without breaking other daily loads.
• Worked with stakeholders to define clear success measures for the platform such as expected volume ranges, acceptable reject levels, and expected totals by program.
• Supported proof-of-concept work by testing new automation ideas for faster data movement, validation, and reconciliation reporting, then sharing outcomes for future rollout.
• Assisted in aligning access practices by supporting role-based access planning for datasets used by analysts, reporting teams, and operational users.
• Produced clear documentation for run steps, data checks, and exception handling so support teams could operate the platform smoothly during ongoing migration work.
• Coordinated with technical teams when upstream systems changed layouts or field definitions, then supported mapping updates and test cycles before regular processing resumed.
• Supported program analysis and business planning by preparing datasets that helped measure utilization patterns, claim processing behavior, and member coverage trends.
• Supported developer alignment by sharing standard validation patterns, query templates, and compliance expectations that reduced rework and improved data consistency.
• Communicated progress and risks through simple updates focused on data readiness, processing health, quality trends, and next actions needed from stakeholders.
• Helped improve operational continuity by tracking recurring issues, defining preventive checks, and supporting process changes that reduced repeated failures.
• Measured improvements after tuning and automation changes using before-and-after metrics for runtime, reject counts, and reconciliation differences to prove impact.
• Supported long-term platform stability by combining strong data checks, clear governance tracking, and reliable automation routines that helped business users trust the data.
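As an illustration of the data quality checks described above (missing IDs, invalid dates, duplicate claim lines), here is a minimal sketch in plain Python; the field names (`claim_id`, `line_no`, `service_date`) are hypothetical examples, not the actual claims schema:

```python
# Illustrative sketch only: route claim records with missing IDs, invalid
# dates, or duplicate claim lines into a review list. Field names are
# hypothetical, not the production schema.
from datetime import datetime

def check_claims(records):
    """Split records into (clean, review) based on simple quality rules."""
    seen, clean, review = set(), [], []
    for rec in records:
        problems = []
        if not rec.get("claim_id"):
            problems.append("missing claim_id")
        try:
            datetime.strptime(rec.get("service_date", ""), "%Y-%m-%d")
        except ValueError:
            problems.append("invalid service_date")
        key = (rec.get("claim_id"), rec.get("line_no"))
        if key in seen:
            problems.append("duplicate claim line")
        seen.add(key)
        (review if problems else clean).append({**rec, "problems": problems})
    return clean, review
```

In practice such rules would run as part of the load pipeline, with the review list landing in a review table rather than an in-memory list.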
(4) Designation: Health Care Analyst MeridianSoft Inc 615 Carle Avenue, Lewis Center, 43035, OH
Project-4: MPC-DI (Medical and Pharmacy Claims Data Integration)
Client: Aetna
Domain: Healthcare
Duration: Jul 2024 – Oct 2025
Tools & Technologies: Informatica IICS, Snowflake, SQL, API management, cloud data quality & governance, B2B Gateway, Teradata utilities (BTEQ, FastExport, FastLoad, MultiLoad, TPump), data quality controls, automation scripts, governance compliance checks, performance optimization, healthcare data analytics
Description: Worked on the integration of medical and pharmacy claims data into a centralized warehouse to support analytics, eligibility tracking, and business reporting. The focus was on building efficient data pipelines, improving quality, and enabling accurate downstream reporting.
Role: Informatica ETL Integration Designer and Production Analyst.
Responsibilities:
• Supported business and technical teams by analyzing medical and pharmacy claims feeds and converting complex business rules into clear data treatment logic used across integration and reporting datasets.
• Performed high-volume data analysis on claims, eligibility, and provider datasets to understand trend behavior, daily volume shifts, and operational performance of the pipeline.
• Supported movement of large datasets between platforms by using Teradata-related scripting and load patterns, including BTEQ and export style steps aligned to enterprise data transfer needs.
• Built clear mapping references that explained how claim identifiers, member keys, service dates, and paid amounts flowed from raw intake to curated reporting tables.
• Produced data aggregation views that summarized spend, utilization, and category trends, helping stakeholders monitor program results and operational performance.
• Supported data governance controls by documenting lineage from vendor files to curated tables and by tracking compliance to standard field definitions used by analytics users.
• Monitored and tested data quality controls that validated required fields, detected duplicates, checked formats, and routed failures into reject tables for review.
• Investigated non-compliance risks such as missing member IDs, invalid codes, and coverage mismatches, then escalated findings with clear remediation options for upstream teams.
• Built automation support steps using scripts and scheduling logic so recurring data movement routines could run with less manual intervention and fewer production defects.
• Produced milestone analytics that measured runtimes, failure points, rerun frequency, and volume spikes, helping teams focus on the highest-impact performance problems.
• Supported performance tuning by analyzing slow SQL patterns, heavy joins, and large table scans, then improving filters, staging approach, and aggregation logic to reduce runtime.
• Conducted reconciliation comparisons across source counts, staging counts, and curated counts, then tracked reasons for differences and coordinated fixes through clear defect notes.
• Assisted in defining operational controls around batch readiness, late file handling, and partial reruns, helping protect daily processing continuity.
• Supported issue troubleshooting by tracing a bad dashboard number back through curated tables and staging steps, using SQL checks and sample record tracing to isolate root cause.
• Worked with stakeholders to shape success metrics for daily processing, such as acceptable reject rates, expected counts, and expected paid amount totals by feed.
• Assisted in validating new vendor layouts by reviewing changes, updating mapping logic, and running controlled tests before allowing the feed into normal daily processing.
• Supported role-based access patterns for data used by analysts by aligning table access and folder access with team needs and sensitive data practices.
• Maintained clear documentation of run steps, rerun steps, and validation checkpoints so support teams could follow stable processes during incidents.
• Supported proof-of-concept automation ideas that improved validation speed, reject routing, and control report generation, helping teams scale the platform.
• Provided trend and performance findings to business and delivery leaders, helping them decide future strategy priorities for reliability and processing speed.
• Supported developer mentoring by sharing standard query patterns, validation templates, and compliance expectations tied to enterprise data standards.
• Helped maintain operational stability during upgrades and releases by running pre-checks, post-checks, and sample validations that confirmed the platform behaved as expected.
• Worked closely with reporting users to confirm that curated datasets supported their analytics needs and that aggregation logic matched business definitions.
• Tracked recurring data issues over time and proposed improvements such as stronger validation rules, better mapping controls, and clearer upstream feed expectations.
• Supported continuous improvement by measuring the impact of tuning and automation changes using before-and-after runtime metrics and data quality trend summaries.
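The reconciliation work described above (comparing source, staging, and curated counts) can be sketched as a simple layer-by-layer count comparison; the layer and feed names below are made-up examples for illustration:

```python
# Illustrative sketch only: compare record counts across pipeline layers
# and report mismatches. Layer and feed names are hypothetical.
def reconcile(counts_by_layer, tolerance=0):
    """counts_by_layer: dict of layer name -> {feed: count}, with the
    first layer treated as the source of truth.
    Returns a list of (feed, layer, expected, actual) mismatches."""
    layers = list(counts_by_layer)
    base = counts_by_layer[layers[0]]
    mismatches = []
    for layer in layers[1:]:
        for feed, expected in base.items():
            actual = counts_by_layer[layer].get(feed, 0)
            if abs(expected - actual) > tolerance:
                mismatches.append((feed, layer, expected, actual))
    return mismatches
```

Each mismatch would then be tracked in a defect note with its root cause, as described in the reconciliation bullet above.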
(3) Designation: IT Analyst Tata Consultancy Services Pvt. Ltd. (TCS) Hyderabad, Telangana, India
Project-3: STAR2 (Sales Tracking and Reporting)
Client: PepsiCo
Domain: Retail
Duration: Jan 2019 – Jul 2022
Description: PepsiCo is an American multinational food, snack, and beverage corporation headquartered in Harrison, New York, in the hamlet of Purchase. PepsiCo has interests in the manufacturing, marketing, and distribution of grain-based snack foods, beverages, and other products. PepsiCo was formed in 1965 through the merger of the Pepsi-Cola Company and Frito-Lay, Inc., and has since expanded from its namesake product Pepsi to a broader range of food and beverage brands, the largest additions being the acquisitions of Tropicana Products in 1998 and the Quaker Oats Company in 2001, the latter bringing the Gatorade brand into its portfolio.
Role: Informatica IICS/PowerCenter ETL Developer and Production Analyst.
Responsibilities:
• Worked with PepsiCo stakeholders and bottler contacts to gather business needs for sales reporting and onboarding, then translated those needs into clear data movement steps and data treatment rules that helped the platform deliver consistent outputs.
• Performed deep data analysis on customer, product, and invoice extracts to identify trends, volume patterns, and file-level issues, then shared findings that helped improve operational performance of daily and monthly runs.
• Built and maintained detailed source-to-target mapping notes that explained how bottler fields moved through staging and into downstream tables, supporting repeatable onboarding and easier troubleshooting.
• Supported large data transfers by running Teradata-side validation checks using BTEQ-style SQL scripts and count comparisons, so record movement could be verified after each batch.
• Used data aggregation logic to combine sales data by region, customer groups, and product categories, supporting business planning views and performance metrics used by reporting teams.
• Monitored batch processing timelines using scheduler logs and workflow status, then produced milestone analytics showing where delays happened and what steps needed tuning.
• Supported data governance practices by validating file naming patterns, feed timing, and format rules, then sharing non-compliance risks with clear options for correction.
• Investigated recurring bad file patterns such as missing columns, wrong delimiters, and unexpected characters, then coordinated fixes with upstream teams so stable intake could continue.
• Supported automation work by using shell scripts and scheduler controls to build repeatable data mover steps that reduced manual reruns and improved consistency in platform operations.
• Performed detailed data quality checks by validating key fields like customer id, product code, invoice id, and sales amount totals, then shared exceptions with business users for resolution.
• Assisted in building repeatable load routines by defining standard parameter inputs, control tables, and batch tracking fields that helped measure operational performance over time.
• Provided trend analysis on data volumes and reject counts across onboarding waves, helping stakeholders understand whether data quality was improving or dropping week by week.
• Supported performance optimization by reviewing heavy queries and high-volume steps, then tuning filters, join order, and aggregation approach to reduce runtime for large data days.
• Worked with platform teams to define simple controls for monitoring counts and totals, helping track whether daily loads matched expected volume ranges and business expectations.
• Documented end-to-end data flow from bottler intake to Bluebook outputs, including key checkpoints and validation gates that helped reduce confusion during onboarding and audits.
• Supported escalations when critical batch failures impacted reporting timelines by collecting logs, counts, and sample records, then providing clear technical context for faster resolution.
• Assisted in defining role-based access patterns for data folders and tables used in processing, supporting controlled access in a way that fit regulated reporting needs.
• Performed repeatable reconciliation work by comparing source counts, staging counts, and final publish counts, then tracking mismatch reasons to drive long-term fixes.
• Helped improve operational reliability by identifying high-risk steps in the workflow chain and suggesting safer restart points and rerun steps for partial failures.
• Supported program analysis by preparing datasets used for distribution and inventory views, so planning teams could measure performance and identify supply gaps.
• Coordinated with QA and business teams during validations by sharing expected outcomes, sample-based checks, and pass criteria tied to onboarding requirements.
• Maintained clear run notes and change notes when layouts or business rules changed, supporting stable operations during frequent onboarding activity.
• Supported proof-of-concept style improvements by testing small automation ideas for faster validation and routing, then sharing results with the team for rollout planning.
• Provided clear stakeholder updates on run health, data quality trends, and open issues, helping business users understand impacts and next actions without technical confusion.
• Improved delivery consistency by collecting lessons learned from onboarding cycles and converting them into reusable validation rules, mapping templates, and repeatable operating steps.
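The file-level intake checks described above (missing columns, wrong delimiters, missing key fields) can be sketched as a minimal readiness check; the delimiter, column count, and field layout below are assumptions for illustration, not the actual bottler feed format:

```python
# Illustrative sketch only: a minimal readiness check for delimited sales
# extracts. Delimiter, column count, and field order are hypothetical.
def validate_extract(lines, delimiter="|", expected_cols=4):
    """Return (ok_rows, bad_rows); each bad row carries its line number
    and a human-readable reason for routing back to the upstream team."""
    ok, bad = [], []
    for n, line in enumerate(lines, start=1):
        fields = line.rstrip("\n").split(delimiter)
        if len(fields) != expected_cols:
            bad.append((n, f"expected {expected_cols} columns, got {len(fields)}"))
        elif not fields[0].strip():
            bad.append((n, "missing customer_id"))
        else:
            ok.append(fields)
    return ok, bad
```

A check like this would typically run before the Informatica load so that bad files are rejected at intake rather than failing mid-batch.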
(2) Designation: System Engineer Tata Consultancy Services Pvt. Ltd. (TCS) Hyderabad, Telangana, India
Project-2: Data Exchange
Client: Anthem
Domain: Health Care
Duration: March 2018 – Dec 2018
Description: Data Exchange is one of Anthem's major downstream applications, extracting data and transforming it based on business requirements. These extracts are sent to both internal and external vendors. Around 500 scheduled extracts need to be monitored on a daily, weekly, and monthly basis; in addition, we developed extracts such as M360, CTK, and Healthcare Medical. The effort included staging, integrating, and positioning the data to make it available to downstream data consumers.
Role: Informatica IICS/PowerCenter ETL Developer and Production Analyst.
Responsibilities:
• Using Informatica PowerCenter Designer, analyzed source data to extract and transform it from various source systems (Oracle 10g, DB2, SQL Server, and flat files), incorporating business rules through the objects and functions the tool supports.
• Created mappings and mapplets in Informatica PowerCenter to transform the data according to business rules.
• Used various transformations such as Source Qualifier, Joiner, Lookup, SQL, Router, Filter, Expression, and Update Strategy.
• Implemented slowly changing dimensions (SCD) for some tables per user requirements.
• Developed stored procedures, used them in the Stored Procedure transformation for data processing, and worked with data migration tools.
• Documented Informatica mappings in Excel spreadsheets.
• Tuned the Informatica mappings for optimal load performance.
• Used the Teradata utilities BTEQ, FastExport, FastLoad, and MultiLoad to export and load data to/from flat files.
• Created and configured workflows and sessions to move data into the target Oracle warehouse tables using Informatica Workflow Manager.
• Worked with the UNIX team to write shell scripts customizing server job scheduling.
• Constantly interacted with business users to discuss requirements.
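The SCD Type 2 pattern referenced in the bullets above (expire the current row, insert a new version when a tracked attribute changes) can be sketched in plain Python; the column names (`cust_id`, `address`, `eff_date`, `end_date`) are illustrative, not the actual dimension schema:

```python
# Illustrative SCD Type 2 sketch. Column names are hypothetical; assumes
# at most one incoming record per natural key per load.
def apply_scd2(dimension, incoming, load_date):
    """dimension: list of dicts with natural key 'cust_id', tracked
    attribute 'address', and 'eff_date'/'end_date' (None = current row).
    Mutates and returns the dimension list."""
    current = {row["cust_id"]: row for row in dimension if row["end_date"] is None}
    for rec in incoming:
        row = current.get(rec["cust_id"])
        if row is None:
            # brand-new key: insert as the current version
            dimension.append({**rec, "eff_date": load_date, "end_date": None})
        elif row["address"] != rec["address"]:
            # changed attribute: close the old version, open a new one
            row["end_date"] = load_date
            dimension.append({**rec, "eff_date": load_date, "end_date": None})
        # unchanged records leave the dimension as-is
    return dimension
```

In the Informatica implementation the same logic was expressed with Lookup and Update Strategy transformations rather than Python.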
(1) Designation: Assistant System Engineer Tata Consultancy Services Pvt. Ltd. (TCS) Hyderabad, Telangana, India
Project-1: Information Management Technology (IMT) Production Support and Maintenance L3
Client: Charles Schwab (contract with Infosys Ltd.)
Domain: Banking
Duration: Feb 2016 – Feb 2018
Description: The central repository IDW is built on top of all trading engines to provide a warehouse for global reporting. IDW contains core business information for Schwab's brokerage and banking business, and is designed with a vision to lead and deliver the enterprise data strategy and information capabilities by creating best-in-class data platforms, integration processes, and analytic competencies for management reporting, analytics, and decision-support applications. The IMT project involves building a continuously evolving ETL platform for the Charles Schwab Corporation, which provides securities brokerage, banking, and related financial services to individuals and institutional clients. IMT is a two-phased effort: first, acquire data from multiple source systems, then analyze, transform, and standardize it per business requirements and customer needs, storing it in the centralized repository IDW. In the second phase, an application/information layer is built on top of the acquired data in the data warehouse for business consumers. This is a 24x7 production support project involving incident, change, problem, and release management. It uses Teradata as the database and Informatica as the ETL tool.
Role: ETL Production Analyst.
Responsibilities:
• Maintained application availability through monitoring of systems and applications, performing proper health checks before business hours.
• Responsible for resolving incidents.
• Planned batch processes during patching/upgrade activities.
• Attended client meetings to address application impact when changes occurred in upstream/downstream systems.
• Incident and problem management: identification and classification of incidents, investigation and diagnosis of problems, resolution and closure.
• Executed batch jobs on request (ad-hoc jobs) per service requests.
• Performed escalation management for production issues/defects.
• Created break-fix tickets for immediate resolution of batch issues.
• Provided monthly and weekly batch status reporting to management.
Technical Skills
ETL Tools: Informatica IICS, PowerCenter ETL Tool 10.2/10.4, SSIS (SQL Server Integration Services), Teradata data movement utilities, batch data processing, data migration support
Operating Systems: Unix, Windows
Databases: Oracle 11g, SQL Server, Snowflake, Teradata, Google BigQuery; Teradata utilities (BTEQ, FastLoad, MultiLoad, TPump, TPT); RDBMS data analysis, performance tuning
Cloud Platform: Google Cloud Platform (GCP), cloud data analytics, data migration validation
Database GUI Tools: SQL Developer, DBeaver
Programming Languages: UNIX shell scripting, SQL, PLSQL, advanced SQL data analysis, automation scripting
Scheduling Tool: Autosys, Control-M, batch automation, job monitoring
User Access Tools: ServiceNow, ITSME, Identity Management Tool (IDM Tool), Cognos IBM Reporting Tool, Power BI, Tableau, role-based access coordination
Healthcare Standards: EDI X12 (270/271, 834, 835), HIPAA, HEDIS, STARS, regulated data handling
Agile & Tracking Tools: JIRA, HP QC/ALM, MS Visio, MS SharePoint, requirements tracking, compliance monitoring
Reporting Tools: SSRS, Tableau Desktop, Power BI, data aggregation reporting, analytics dashboards
Education
Texas Tech University, Lubbock, Texas, USA (Aug 2022 – May 2024)
Master of Science in Computer and Information Science, GPA: 3.46
Andhra Loyola College, Vijayawada, AP, India (Jun 2012 – May 2015)
Bachelor of Computer Science, GPA: 3.48
Certifications
• Data Warehouse Engineer Professional Certificate, IBM
• Databases and SQL for Data Science with Python, IBM
Achievements
• The team under my guidance outperformed expectations and received the “Delivery Excellence Award”.
• Received a “Certificate of Appreciation” for resolving a critical issue that had been present in the application for over a decade.
• Estimated and drafted the proposal for an application migration project and helped the team complete it within deadlines.
• Technically evaluated 30 candidates in a recruitment drive, of whom 10 were hired.