
Senior Data Architect & Engineer

Location:
Wilton, CT
Posted:
December 02, 2025

Resume:

Banupriya Sangameswaran

Email : *********.*************@*****.***

Phone No: 713-***-****

Architect, Lead Data Engineer

Professional Summary:

** ***** ** ********** ***, playing multiple roles from developer to data modeler, designer, and Data Architect across the Healthcare, Insurance, Telecommunication, and Manufacturing industries.

IT experience in Data Architecture and Data Modeling, setting and providing architectural direction to support business objectives and requirements aligned with roadmaps, standards, and best practices, with a focus on security, design, data quality, strategy, system availability, and resiliency.

Designing, developing, and maintaining data architecture and data models in Snowflake.

5+ years of experience in data design and development on Microsoft SQL Server: T-SQL, performance tuning, troubleshooting, SSIS package configuration, SSRS, and data warehousing.

Involved in writing Oracle PL/SQL Stored procedures, triggers, and views.

Expert-level skills in developing ETL strategies and Data Flow/Control Flow tasks in SSIS, along with SSRS reports. Database design using SQL Server 2005/2008/2008R2/2012.

Expertise in designing Star schemas and Snowflake schemas for data warehouses using Erwin Data Modeler and PowerDesigner.

Expertise in ER modeling and dimensional modeling using CASE tools (Erwin, PowerDesigner) and in implementing RDBMS features.

Experience in healthcare data models (HEDIS).

Efficient in implementing normalization to 3NF and de-normalization techniques for optimum performance in relational and dimensional database environments.

Hands-on experience implementing multiple Snowflake objects: SnowSQL, Snowpipe, views, Snowflake procedures, Time Travel, and performance tuning.

Strong understanding of RBAC and other authorization mechanisms in Snowflake.

Good experience with Azure Databricks, Spark SQL, ADF, and ADLS; converted Greenplum functions into Databricks SQL for migration projects and loaded data into Azure SQL DB.

Good knowledge of ADF pipelines and CI/CD: Azure Repos and branching strategy.

Extensively used GitHub and GitLab version control tools to maintain versions of notebooks, worksheets, and SnowSQL jobs.

Experience with technology design patterns and approaches for data ingestion and metadata

6+ years of experience in PostgreSQL; converted PostgreSQL to Databricks SQL during migration.

Extensive experience with T-SQL, constructing triggers and tables and implementing stored procedures, functions, views, user profiles, data dictionaries, and data integrity.

Experience in developing complex SQL queries using joins, sub-queries, and correlated sub-queries; extensively worked on database objects like tables, indexes, and views.
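
For illustration, a minimal T-SQL sketch of a correlated sub-query of the kind described above; the tables and columns are hypothetical:

  -- Latest claim per member via a correlated sub-query (hypothetical schema)
  SELECT m.MemberID, m.MemberName, c.ClaimID, c.ClaimDate
  FROM dbo.Member AS m
  JOIN dbo.Claim  AS c ON c.MemberID = m.MemberID
  WHERE c.ClaimDate = (SELECT MAX(c2.ClaimDate)
                       FROM dbo.Claim AS c2
                       WHERE c2.MemberID = c.MemberID);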

Good knowledge of the Snowflake data warehouse; prepared an RFP for a Teradata-to-Snowflake migration.

Worked on Teradata CLDM (Communications Logical Data Model) modeling and design.

Worked on Tableau report creation and deployment; good knowledge of and experience with Power BI report generation.

Well versed in data analysis, developing data warehouse designs, working with staging, star schema and Snowflake schemas, and dimensional reporting.

Extended working knowledge of cloud services: IaaS, PaaS, queues, Azure Blob and Table storage, and API Management. Configured Azure Blob storage and Azure file servers, and configured private and public-facing Azure load balancers.

Enterprise-level experience working with large teams, involved in the architecture of highly scalable, secure, distributed applications aligned with company standards, processes, methodologies, and best practices.

Worked in an Agile environment using tools like JIRA.

Quick starter with the ability to master and apply new concepts; able to meet deadlines and handle pressure while coordinating multiple tasks in a project environment.

Strong communication and presentation skills, organizational and interpersonal competencies, and detail-oriented problem-solving skills in the technology arena.

Working in Agile methodology, connecting with clients for sprint planning and helping them create the user stories required for development.

Determine the technical solutions for the different user stories the client has in their day-to-day operations; map requirements to the application process and perform a gap analysis; prepare design alternatives and design specification documents.

Used IDE for data profiling to perform analysis and identify data quality issues.

Data profiling helped business users create the business rules.

Provided support to Tableau users and wrote custom SQL to support business requirements.

Experience in creating mappings, workflows using Informatica (Power Center & BDE)

Effective experience across the complete Software Development Life Cycle (SDLC, Agile), including business requirements gathering, system analysis and design, development, testing, and implementation of data warehouse applications.

Strong knowledge of data warehousing concepts; hands-on experience using Teradata utilities (BTEQ, FastLoad, MultiLoad).

Good knowledge of Tableau reports; also experienced in Mainframe programming.

Provide daily/weekly reports to designated stakeholders across clients with respect to scheduled deliverables.

Participate in all SCRUM ceremonies such as Daily Scrum, Retrospective, Planning, and Reviews.

Contribute to client proposals based on functional/technological expertise; take delivery ownership and guide individuals and groups to desired outcomes; initiate and maintain client relationships; anticipate and identify client issues and concerns.

Good experience in development, maintenance, and migration projects. Proficient in analyzing requirements, design, coding, and testing for various service requests. Effective communication and interpersonal skills.

Technical Skills:

•Data Warehousing: Informatica PowerCenter, Data Profiling, Data Cleansing, OLAP, Control-M, Star Schema, Snowflake Schema

•Data Modeling: Dimensional Data Modeling, Snowflake Modeling, Fact and Dimension Tables, Physical and Logical Data Modeling, Erwin

•Databases: Oracle, DB2, Sybase SQL Server, MS SQL Server, MS Access, Teradata, Hive, Redshift, Microsoft Azure, AWS Cloud, Snowflake, PostgreSQL

•Cloud: Azure Data Platform including Azure SQL, Azure SQL Data Warehouse, Azure Data Lake, Azure Data Factory, Azure Databricks

Achievements:

Woman of the Quarter for Q1 2021 and Q2 2020

Best Engagement Award in 2018

Customer Bravo for exceptional display of Quality work in 2017

Out-of-the-Box Award for outstanding performance and lasting contribution in 2016

HMS top performer Award during 2012-2013

Professional Experience:

IEHP

Hexplora, Rocky Hill, Connecticut Mar 2024 to Present

Lead Data Engineer

Responsibilities:

Design and develop logical and physical data models to support CCA export

Analyze data requirements to ensure data models are optimized for performance

Create data dictionaries and other documentation to support data models.

Prepare source to target mapping document.

Collaborate with stakeholders to ensure data models and S2TM meet their needs

Prepare SSIS packages and Data Flow Tasks (DFTs) to load data into the HEDIS data mart.
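
For illustration, a minimal T-SQL sketch of the kind of upsert such a package's data flow might perform into the mart; the staging and mart table names are hypothetical:

  -- Upsert CCA member demographics from staging into the HEDIS mart (hypothetical names)
  MERGE dbo.DimMember AS tgt
  USING stg.CcaMember AS src
    ON tgt.MemberID = src.MemberID
  WHEN MATCHED THEN
    UPDATE SET tgt.DOB = src.DOB, tgt.Gender = src.Gender, tgt.County = src.County
  WHEN NOT MATCHED THEN
    INSERT (MemberID, DOB, Gender, County)
    VALUES (src.MemberID, src.DOB, src.Gender, src.County);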

Inclusion of CCA data into HEDIS reporting:

Membership - All demographics: name, DOB, gender, address, phone, race, ethnicity, language (spoken and written), provider assignment, IPA assignment, county, aid code; everything currently held on Medicare and Medi-Cal members.

Enrollment - Effectuation, termination, PCP assignment at the enrollment level, IPA assignment at the enrollment level; everything currently held on Medicare and Medi-Cal members.

Provider - Complete load (including contract details, etc.)

Provider Specialty - Manually uploaded/updated scripts

Claims, Encounters

Pharmacy - MedImpact data and direct data (pharmacy/medication data from hospital sources)

Labs - Data from IEHIE, LabCorp, and Quest

Dental - Data from Liberty

Vision - Data from Vision

Behavioral Health - Data from BH

Manifest MedEx (an HIE IEHP is contracted with) - Data may overlap with claims, encounters, and labs.

Export data into Inovalon servers for quality measures.

Analyze existing data models and suggest improvements

Monitor jobs to ensure accuracy and integrity.

Troubleshoot and resolve any performance issues in the SSIS ETL flow.

Environment: Azure, SSIS, Azure DevOps, SSMS, Power BI

JR Simplot

Capgemini America, Inc. Jan 2023 to Mar 2024

Data Architect

Responsibilities:

Participated in Agile Scrum methodology and involved in Design, development, Implementation, and testing of the enterprise applications.

As part of the Insights & Data team, focusing on the Microsoft Azure platform and working as a Data Architect to create dimensional data models and semantic views for different use cases.

Bridge between the Client Executive and the Data & AI Delivery team to drive and enable the client's data strategy and their strategic analytics solutions.

Extract data from source DBs and load it into Azure Blob storage using ADF.

Worked on ADF to load data into the stage layer (ADLS, Azure Data Lake); later used Databricks notebooks (SQL and Spark SQL) to load the data into the curated layer.
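
For illustration, a minimal Spark SQL sketch of a stage-to-curated load of this kind; the storage path, schema, and table names are hypothetical:

  -- Build a curated Delta table from staged files in ADLS (hypothetical path and names)
  CREATE TABLE IF NOT EXISTS curated.sales_orders
  USING DELTA
  AS
  SELECT order_id,
         customer_id,
         CAST(order_ts AS DATE) AS order_date,
         amount
  FROM parquet.`abfss://stage@datalake.dfs.core.windows.net/sales/orders/`
  WHERE amount IS NOT NULL;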

Read data from files using PySpark RDDs, applied filters, and created views and subsets of the data.

Used Copy, Azure Logic App (for email), Stored Procedure, and Databricks Notebook activities.

Worked on PySpark, SQL, and cloud data warehouse platforms like Azure Databricks.

Created scripts to extract data from structured and unstructured sources; experienced in ETL/pipeline development using tools such as Azure Databricks and Azure Data Factory, with development expertise in batch data integration.

Experienced in relational data processing technologies like MS SQL, Delta Lake, Spark SQL, and SQL Server; retrieved data from SQL Server, Oracle, and IBM DB2 databases.

Performed dimensional data modeling using Erwin.

Loaded the data into Azure SQL DB; experienced with orchestration tools ADF and Azure DevOps.

Whenever the HLD documents lacked proper information, recreated the discovery documents by connecting with Product Owners.

Created source-to-target mappings to include the transformations and data quality rules needed.

Used forward engineering to create a Physical Data Model, with DDL that best suits the requirements from the Logical Data Model.
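
For illustration, a minimal sketch of the kind of DDL forward engineering produces from a logical model; the entities and columns are hypothetical:

  -- A hypothetical dimension and fact generated from the logical model
  CREATE TABLE dim_customer (
      customer_sk   INT          NOT NULL PRIMARY KEY,
      customer_id   VARCHAR(20)  NOT NULL,
      customer_name VARCHAR(100),
      effective_dt  DATE         NOT NULL
  );

  CREATE TABLE fact_order (
      order_sk    INT           NOT NULL PRIMARY KEY,
      customer_sk INT           NOT NULL REFERENCES dim_customer (customer_sk),
      order_dt    DATE          NOT NULL,
      amount      DECIMAL(12,2)
  );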

Worked with Database Administrators, Business Analysts and Content Developers to conduct design reviews and validate the developed models.

Worked on performance tuning and fixing query issues; verified table loads end to end and ensured the implementation was correct.

Work alongside the Power BI team to implement and present the data for multiple use cases, helping users make business decisions.

Helping define the client's data strategy and their key strategic data, analytics, and implementation initiatives.

Create semantic views for complex use cases.

Involved in building the CI/CD pipeline and ADF pipelines.

Environment: Azure (ADF, Databricks), SSMS, Power BI, Agile Scrum, Azure DevOps, Erwin

T-Mobile

Capgemini Technology Services India Limited Sep 2019 to Dec 2022

Data Architect

Responsibilities:

Involved in migrating objects from Teradata to Snowflake

Good knowledge of Snowpipe for continuous data loading.

Used COPY to bulk-load data
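
For illustration, a minimal SnowSQL sketch of a COPY bulk load of this kind; the stage, table, and file-format settings are hypothetical:

  -- Bulk-load staged CSV files into a target table (hypothetical names)
  COPY INTO analytics.public.usage_events
  FROM @analytics.public.usage_stage/events/
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
  ON_ERROR = 'CONTINUE';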

Connected with customers to complete the HRA and Marketing use cases, finishing them successfully.

Worked on SnowSQL for Teradata-to-Snowflake script migration.

Used the FLATTEN table function to produce lateral views of VARIANT, OBJECT, and ARRAY columns.
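
For illustration, a minimal sketch of a lateral FLATTEN over a VARIANT column; the table and JSON shape are hypothetical:

  -- Explode a JSON array held in a VARIANT column into rows (hypothetical names)
  SELECT e.event_id,
         f.value:sku::STRING AS sku,
         f.value:qty::NUMBER AS qty
  FROM raw.events AS e,
       LATERAL FLATTEN(INPUT => e.payload:items) AS f;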

Worked with both Maximized and Auto-scale warehouse functionality.

Used temporary and transient tables on different datasets.

Cloned production data for code modification and testing

Configured Time Travel up to 56 days to recover missed data; also used Fail-safe for data recovery.
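
For illustration, a minimal sketch of the cloning and Time Travel usage described in the two bullets above; the table names and offset are hypothetical:

  -- Zero-copy clone of production data for testing (hypothetical names)
  CREATE TABLE dev.sales_clone CLONE prod.sales;

  -- Query the table as it was 24 hours ago via Time Travel
  SELECT *
  FROM prod.sales AT (OFFSET => -60*60*24);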

Worked with the development team to build a Python framework and, passing JSON parameters, implemented the jobs to load data into Snowflake.

Strong understanding of RBAC and other authorization mechanisms in Snowflake.

Involved in migrating/converting Oracle PL/SQL stored procedures, triggers, and views to SnowSQL.
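
For illustration, a minimal sketch of a PL/SQL-style routine rewritten as a Snowflake SQL procedure; the table and retention logic are hypothetical:

  -- A converted cleanup procedure in Snowflake Scripting (hypothetical names)
  CREATE OR REPLACE PROCEDURE purge_old_events(retain_days NUMBER)
  RETURNS STRING
  LANGUAGE SQL
  AS
  $$
  DECLARE
    cutoff DATE;
  BEGIN
    cutoff := DATEADD('day', -1 * retain_days, CURRENT_DATE());
    DELETE FROM raw.events WHERE event_dt < :cutoff;
    RETURN 'purged';
  END;
  $$;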

Expertise in deploying Snowflake features like Data Sharing.

Worked on performance tuning and fixed performance issues.

Prepared cloud data warehouse database design and architecture reports.

Environment: Azure, Teradata, Snowflake, Control-M

BHGE Nov 2017 to Aug 2019

Capgemini Technology Services India Limited

Data Architect

Responsibilities:

Used Agile methodology and participated in SCRUM meetings.

Involved in the Design, Development and Support phases of Software Development Life Cycle (SDLC).

Worked as the lead for the Databricks migration; helped the team develop Databricks SQL scripts and workflows deployed through the CI/CD pipeline.

Involved in discussions with Business Analysts and designed the TDD (Technical Design Documents).

Worked on the Azure-to-AWS migration project, created SSIS packages, and guided the team in creating SSRS reports.

Worked on Databricks SQL and Spark SQL scripts on AWS and Azure SQL DB.

Loaded data into DataFrames from files using PySpark.

Created Delta tables in the Hive metastore for analytical and query purposes.

Extracted data from source tables and loaded it into Azure Data Lake.

Good knowledge of the Terraform scripts used in CI/CD pipeline deployment.

Created RFP (Request for Proposal) documents, participated in blind calls with customers, and won the new project.

Guided the team on Greenplum functions and Talend ETL workflows for different projects at Baker Hughes.

Created complex SQL scripts to meet customer expectations.

Worked on production issues whenever issues arose with jobs.

Worked on the Azure-to-AWS migration project; on-prem, data is loaded from Oracle and flat files into SQL Server using SSIS.

Migrated and worked on SSRS reports

Helped the team with Tableau report creation and deployment.

Developed Tableau data visualizations using cross tabs, heat maps, box-and-whisker charts, scatter plots, geographic maps, pie charts, and bar charts.

Provided support to Tableau users and wrote custom SQL to support business requirements.

Involved in writing stored procedures and views on DB2, Greenplum, Azure SQL DB, and the Hive metastore.

Good knowledge of Git for version control.

Managed a team of 7 and helped them meet delivery deadlines.

Involved in resource recruitment, trained new joiners, and provided technical assistance to team members alongside individual deliverables.

Created weekly status reports and shared them with the customer.

Involved in end-to-end project oversight and completed assigned projects within the given timeframe.

Environment: Unix, Databricks, Greenplum, Tableau, PySpark, basic Python scripts, SSIS, SSRS, AWS Cloud

T-Mobile Mar 2015 to Nov 2017

Capgemini Technology Services India Limited

Data Modeler/Architect

Responsibilities:

Involved in Design, Development and Support phases of Software Development Life Cycle (SDLC).

Used AGILE methodology and participated in SCRUM meetings.

Used Rally tool for Issue/bug tracking, monitoring of work assignment in the system.

Developed Informatica mappings (PDO) - Source: Oracle; Target: Teradata

Used Erwin Data Modeler and Erwin Model Manager to create Conceptual, Logical and Physical data models and maintain the model versions in Model Manager for further enhancements.

Worked on logical and physical modeling of various data marts as well as data warehouse.

Created and maintained the Logical Data Model (LDM) for the project, including documentation of all entities, attributes, data relationships, primary and foreign key structures, allowed values, codes, and glossary terms in accordance with the Corporate Data Dictionary.

Worked on Dimensional Data Modeling and Teradata CLDM for IDW

Converted Business rules into ETL logic

Deliverables included ADD changes, S2TM, DDLs and PDM, BO reports, and complex views on the semantic layer.

Involved in bug fixing, major releases, and customization for various clients.

Well experienced with Teradata (primary and secondary) indexing features.

Worked extensively with ETL utilities like Teradata MultiLoad, FastLoad, FastExport, BTEQ, SQL*Loader, and Import/Export utilities.

Involved in performance tuning of problematic or long-running SQL queries to reduce elapsed time and CPU impact; techniques included the following (see the sketch after this list):

Implemented Compression and Multi-Value Compression.

Implemented Partitioned Primary Index (PPI/MLPPI).

Implemented Join Indexes (including Aggregate Join Indexes).
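
For illustration, a minimal Teradata DDL sketch combining multi-value compression and a partitioned primary index; the table and value lists are hypothetical:

  -- Call-history table with MVC on low-cardinality columns and a monthly PPI (hypothetical)
  CREATE TABLE call_hist (
      acct_id   INTEGER NOT NULL,
      call_dt   DATE    NOT NULL,
      call_type CHAR(4) COMPRESS ('VOIC', 'TEXT', 'DATA'),
      duration  INTEGER COMPRESS (0)
  )
  PRIMARY INDEX (acct_id)
  PARTITION BY RANGE_N (call_dt BETWEEN DATE '2015-01-01'
                        AND DATE '2017-12-31' EACH INTERVAL '1' MONTH);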

Attended code review meetings and provided solutions.

Experienced with Teradata Tools like SQL Assistant.

Responsible for code reviews and design reviews

Provide technical solutions to various issues in project

Involved in Configuration of data source and deployment of application in development, Test and Production.

Used Git for version control.

Followed Scrum and iterative Agile development methodologies.

Involved in configuration setting for Development, Test, and Production Environment.

Environment: Unix, Teradata (SQL), Informatica (PDO), Erwin

HMS- Health care Management Systems Aug 2011 to Feb 2015

SPAN Infotech

Developer

Responsibilities:

Involved in Design, Development and Support phases of Software Development Life Cycle (SDLC).

Used AGILE methodology and participated in SCRUM meetings.

Used Rally tool for Issue/bug tracking, monitoring of work assignment in the system.

Involved in migrating Mainframe programs to Informatica and Teradata.

Developed Informatica mappings - Source: Oracle; Target: Teradata

Involved in creating and testing various Informatica objects for different Accounts Receivable, Eligibility, IND, and Claims tables.

Worked with data profiling and rule creation in IDE.

Converted Business rules into ETL logic

Created BTEQ scripts to insert/delete data in the Teradata database.

Tested workflows and loaded data into Teradata tables using Unix scripts and the TPT tbuild technique (MLOAD, FLOAD, BTEQ scripts).
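
For illustration, a minimal BTEQ sketch of the delete/insert pattern described in the two bullets above; the logon, tables, and filter are hypothetical:

  .LOGON tdprod/etl_user,etl_password
  -- Refresh a work table from the landing table (hypothetical names)
  DELETE FROM stage.claims_work;
  INSERT INTO stage.claims_work (claim_id, member_id, claim_dt)
  SELECT claim_id, member_id, claim_dt
  FROM stage.claims_landing
  WHERE claim_dt >= DATE '2012-01-01';
  .IF ERRORCODE <> 0 THEN .QUIT 8
  .LOGOFF
  .QUIT 0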

Worked on production support for Mainframe and Informatica, and developed a program for reformatting records.

Involved in bug fixing, major releases, and customization for various clients.

Responsible for code reviews and design reviews

Provide technical solutions to various issues in project

Involved in Configuration of data source and deployment of application in development, Test and Production.

Coordinated with the onsite team on Mainframe, Informatica, and Teradata task development and releases.

Used Git for version control.

Followed Scrum and iterative Agile development methodologies.

Involved in configuration setting for Development, Test, and Production Environment.

Environment: Unix, Teradata (SQL), Informatica (PDO), Erwin

Chrysler May 2010 to July 2011

vCentric Technologies Private Limited (Deputation with CSC India Private Limited)

Developer

Responsibilities:

Involved in Design, Development and Support phases of Software Development Life Cycle (SDLC).

Used AGILE methodology and participated in SCRUM meetings.

Received BRDs from the onshore lead and prepared Mainframe programs based on the requirements.

Created COBOL programs for reports.

Worked on Mainframe programs - Source: files; Target: z/OS server.

Involved in bug fixing, major releases, and customization for various clients.

Involved in Configuration of data source and deployment of application in development, Test and Production.

Followed Scrum and iterative Agile development methodologies.

Involved in configuration setting for Development, Test, and Production Environment.

Environment: Panvalet, File-AID, SPUFI and QMF, VSAM, COBOL, JCL, DB2, Easytrieve

Metlife & GE Sep 2006 to July 2008

PATNI COMPUTER SYSTEMS LTD

Developer

Responsibilities:

Involved in Design, Development and Support phases of Software Development Life Cycle (SDLC).

Used AGILE methodology and participated in SCRUM meetings.

Used Rally tool for Issue/bug tracking, monitoring of work assignment in the system.

Worked on Mainframe programs - Source: files; Target: z/OS server.

Involved in bug fixing, major releases, and customization for various clients.

Involved in Configuration of data source and deployment of application in development, Test and Production.

Followed Scrum and iterative Agile development methodologies.

Involved in configuration setting for Development, Test, and Production Environment.

Environment: Panvalet, File-AID, SPUFI and QMF, VSAM, CICS, COBOL, JCL, DB2, Easytrieve

Academics:

MBA (Master of Business Administration, HR & Marketing), Anna University, India - 2010

BE Computer Science and Engineering, Anna University, India - 2005


