
Cloud Data Architect & Azure Data Lead with 18+ Years in DWH & BI

Location:
Edison, NJ
Posted:
February 26, 2026


Resume:

Name: Ranjit Kumar Panuganti Mobile: +1-945-***-****

Email: *******.*.*********@*****.***

Professional Summary

Highly skilled Cloud Data Architect and Azure Data Lead with 18+ years of experience in data warehousing, ETL tools and reporting tools; certified Azure Data Engineer with cloud fundamentals certification. In-depth knowledge of cloud DWH platforms such as Azure Databricks, ADF, Delta Lake, PySpark and Snowflake, the ETL tool DataStage, and Tableau, Matillion, SSRS and Cognos. Involved in the complete software development life cycle (SDLC) of various projects, including Agile methodology, requirements gathering, system design, data modeling, migration, PoC and maintenance. Excellent interpersonal and communication skills with an ability to remain highly focused and self-assured in fast-paced, high-pressure environments.

18 years of experience in Data Warehousing and Business Intelligence projects in the banking, stock market, finance, risk compliance, capital markets and insurance industries, covering ETL DataStage, Azure Data Factory, Databricks, PySpark, Python Pandas, NumPy, Snowflake, Snowpark, Tableau, Power BI, T-SQL, big data, SSRS, SQL Server, Oracle, Teradata, Hadoop, Autosys and Unix at SMBC Bank USA NJ/NY, Wells Fargo India Solutions, and Wipro Technologies Ltd., India.

Extensive ETL tool experience using IBM Infosphere/Websphere DataStage, ADF, Databricks and Reporting tools of Tableau, Power BI, SSRS, Cognos.

Created pipelines in ADF using Linked Services, Datasets and Pipelines to extract, transform and load data between different sources such as Azure SQL, Blob storage and Azure SQL Data Warehouse, including write-back in the reverse direction.

Banking domain experience in capital markets, finance, stock market, trade and risk compliance; designed and implemented functionality for hedging (mitigating losses due to price fluctuations), gross-up, offset, netting and reclass.

Built and managed Data Pipelines and Jobs with Databricks (Delta Live Tables, PySpark, Python, SQL) and Azure Data Factory to streamline data processing.

Built Real-time ingestion pipelines using Azure Databricks Streaming, Event Hubs to capture Trade and Market data from different vendors.

Architected real-time market data ingestion pipelines using Bloomberg BPipe API to capture pricing ticks, curves, benchmarks, and security-level reference data for Treasuries, Corporates, Swaps, FX, and Money Markets.

Implemented Delta Lake–based streaming frameworks in PySpark to process high-frequency BPipe updates with millisecond latency requirements.

Onboarded data from around 10 trading platforms, including Bloomberg, Markit, ION, Wells Fargo, Tradeweb, MarketAxess and R8Fin, into legacy systems; onboarded new venues to the existing process and delivered to downstream teams.

Architected cost-optimized Azure Databricks cluster strategy by segregating workloads into Job clusters, All-Purpose clusters and SQL Warehouses reducing overall DBU consumption by 30%.

Designed and enforced cluster policies across Dev/QA/Prod environments restricting node types, maximum workers, auto-termination thresholds, and mandatory Photon-enabled runtimes.

Implemented Ephemeral Job clusters for production ETL pipelines, eliminating idle compute costs and reducing unnecessary DBU burn from interactive clusters.

Performed cluster right-sizing using Spark UI and Ganglia metrics by tuning executor memory, cores, shuffle partitions, and broadcast joins, reducing spill and shuffle overhead by 25%.

Configured optimized autoscaling clusters (min/max workers) for variable batch workloads, enabling dynamic scale-up during shuffle-heavy transformations and aggressive scale-down post-processing.

Enabled Spark dynamic allocation and Adaptive Query Execution (AQE) via spark.dynamicAllocation.enabled, spark.sql.adaptive.enabled and spark.sql.shuffle.partitions, improving runtime stability and resource utilization.
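The AQE and dynamic-allocation settings above can be sketched as a SparkSession configuration; the specific values shown are illustrative assumptions, not the production settings.

```python
# Illustrative AQE and dynamic-allocation configuration (values are
# assumptions, not the production tuning). Requires a Spark environment.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("aqe-tuning-sketch")
    # Let Spark re-optimize plans at runtime: coalesce shuffle partitions
    # and switch sort-merge joins to broadcast joins when one side is small.
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
    # Baseline shuffle parallelism before AQE coalescing applies.
    .config("spark.sql.shuffle.partitions", "400")
    # Release idle executors back to the cluster manager.
    .config("spark.dynamicAllocation.enabled", "true")
    .getOrCreate()
)
```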

Integrated Azure Monitor and Log Analytics alerts for DBU consumption thresholds, idle clusters, and high-memory node overprovisioning.

Optimized join-heavy and aggregation workloads using Photon vectorized execution engine, reducing CPU time and shuffle read/write metrics significantly.

Implemented Delta Lake performance optimizations using OPTIMIZE and ZORDER to mitigate small file issues and improve data skipping performance.
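The Delta Lake maintenance described above might look like the following on Databricks; the table name and Z-ORDER columns are hypothetical examples, and `spark` is the ambient Databricks SparkSession.

```python
# Databricks/Delta Lake maintenance sketch (will not run on plain Spark;
# table and column names are hypothetical).
# Compact small files and co-locate rows by common filter columns so
# data skipping can prune files effectively.
spark.sql("OPTIMIZE trades.silver_executions ZORDER BY (trade_date, cusip)")
# Remove data files no longer referenced by the Delta transaction log.
spark.sql("VACUUM trades.silver_executions RETAIN 168 HOURS")
```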

Migrated legacy Python based pipelines to a modern Databricks PySpark framework, improving scalability, performance, and maintainability while reducing operational overhead.

Built robust PySpark pipelines for real-time and batch market surveillance, covering insider trading, spoofing, layering, amended trades and other market-abuse detection scenarios.

Delivered regulatory and compliance reporting solutions including After-Hours Rates, Credit Reporting, and Cancel & Amendments Reporting, ensuring accuracy, auditability and alignment with regulatory standards, with bucketing of amendments and cancels at trader and desk level.

Experience developing Spark applications using Spark SQL/PySpark in Databricks for data extraction, transformation and aggregation across multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns.

Fine-tuned PySpark jobs by enabling Spark AQE and adjusting its built-in configuration parameters for different data-volume conditions.

Orchestrated Databricks notebooks using Azure Data Factory (ADF) pipelines for fully automated ETL workflows.

Developed PySpark DataFrames, UDFs, and Window functions to perform complex transformations and aggregations on large-scale structured and semi-structured data.

Optimized PySpark jobs using caching, partitioning, broadcast joins, coalesce and salting techniques, and dedicated clusters, reducing job execution time.
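The salting technique mentioned above can be illustrated in plain Python (the account identifiers and salt count are made-up examples): appending a random salt suffix splits one hot key into several smaller groups, so no single Spark partition receives all of its rows during a shuffle.

```python
import random
from collections import Counter

random.seed(0)

# Skewed key distribution: one "hot" account dominates the dataset.
records = ["ACCT-HOT"] * 9_000 + [f"ACCT-{i}" for i in range(1_000)]

NUM_SALTS = 8  # illustrative; chosen per observed skew in practice

def salted_key(key: str) -> str:
    # Append a random salt so the hot key hashes to several partitions;
    # the other side of a join would be exploded across all salt values.
    return f"{key}#{random.randrange(NUM_SALTS)}"

plain = Counter(records)
salted = Counter(salted_key(k) for k in records)

# The single 9,000-row hot group is now split into smaller salted groups.
hot_groups = [v for k, v in salted.items() if k.startswith("ACCT-HOT#")]
```

In Spark the same idea is applied by adding a salt column to the skewed side and cross-joining the small side with the salt range before joining.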

Built incremental data ingestion pipelines using Databricks Auto Loader and PySpark structured streaming.

Configured CI/CD Auto deploy for ADF Workflows, Databricks notebooks in Azure Devops with automated cluster provisioning.

Extracted, loaded and transformed data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL and U-SQL (Azure Data Lake Analytics); ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed it in an Azure Databricks Medallion architecture.

Wrote AWS Lambda code in Python to convert, compare and sort nested JSON files.
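A minimal stdlib sketch of the kind of nested-JSON comparison such a Lambda might perform; the payload fields are hypothetical examples, not the actual Lambda code.

```python
import json

def flatten(obj, prefix=""):
    """Flatten nested dicts into dotted keys: {'a': {'b': 1}} -> {'a.b': 1}."""
    out = {}
    for key, value in obj.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, path + "."))
        else:
            out[path] = value
    return out

# Two versions of a (hypothetical) trade payload to compare.
left = json.loads('{"trade": {"id": 1, "px": 99.5}, "venue": "BBG"}')
right = json.loads('{"trade": {"id": 1, "px": 99.7}, "venue": "BBG"}')

flat_l, flat_r = flatten(left), flatten(right)
# Dotted paths whose values differ between the payloads, in sorted order.
diffs = sorted(k for k in flat_l if flat_l[k] != flat_r.get(k))
```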

Constructed AWS data pipelines using VPC, EC2, S3, Auto Scaling Groups (ASG), EBS, Snowflake, IAM, CloudFormation, Route 53, CloudWatch, CloudFront and CloudTrail.

Built end-to-end ETL pipelines from AWS S3 to the DynamoDB key-value store and the Snowflake data warehouse for analytical queries, specifically for cloud data.

Evaluated technology stacks for building analytics solutions, researching the right strategies and tools for end-to-end analytics, and helped design the technology roadmap for Medallion architecture, data ingestion, data lakes, data processing and visualization.

Developed Tableau reports and dashboards at enterprise and line-of-business level; migrated multiple legacy projects into Tableau and Power BI and presented them.

Upgraded IBM DataStage from version 9.1 to 11.5 across all environments with zero issues; received the Achieving Excellence Award 4 for driving this conversion end to end.

Implemented Legacy ETL Application migration to Azure platform on Medallion architecture using ADF for migration and Databricks for Analytics.

Implemented SCD Types 2 and 3 in DataStage and used JSON/XML configuration to read and write files.
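The SCD Type 2 pattern above can be sketched in plain Python (the dimension structure and column names are illustrative, not the DataStage job itself): a changed attribute expires the current row and appends a new current version, preserving history.

```python
from datetime import date

def scd2_apply(dim_rows, incoming, key, today):
    """Minimal SCD Type 2 sketch: expire the current row on attribute
    change and append a new current row. Hypothetical structure."""
    out, matched = [], False
    for row in dim_rows:
        if row[key] == incoming[key] and row["current"]:
            matched = True
            if {k: row[k] for k in incoming} != incoming:
                # Attribute change: close the old version ...
                out.append({**row, "end_date": today, "current": False})
                # ... and open a new current version.
                out.append({**incoming, "start_date": today,
                            "end_date": None, "current": True})
            else:
                out.append(row)  # unchanged, keep as-is
        else:
            out.append(row)
    if not matched:  # brand-new key: insert as current
        out.append({**incoming, "start_date": today,
                    "end_date": None, "current": True})
    return out

dim = [{"cust_id": 1, "tier": "GOLD", "start_date": date(2024, 1, 1),
        "end_date": None, "current": True}]
dim = scd2_apply(dim, {"cust_id": 1, "tier": "PLATINUM"}, "cust_id",
                 date(2025, 6, 1))
```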

Repackaged the DataStage v11.5 software at Wells Fargo enterprise level, fixing it as a replacement for the older version.

Developed parallel jobs implementing SCD Type 2, using the Transformer stage, CDC stage, ODBC connectors, sequencers and runtime column propagation.

Migrated around 100 MS SQL Server tables to an Azure cloud database as a PoC using ADF (Azure Data Factory).

Used Python Pandas and NumPy libraries in scripts for Azure Blob file testing and data analytics.
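A small example of the style of Pandas/NumPy validation such scripts might perform on a downloaded Blob file; the columns and rules here are illustrative assumptions, not the project's actual checks.

```python
import io
import numpy as np
import pandas as pd

# Stand-in for a CSV landing file pulled from Blob storage
# (hypothetical columns).
csv_text = "acct,balance\nA1,100.50\nA2,200.25\nA3,-15.00\n"
df = pd.read_csv(io.StringIO(csv_text))

# Simple data-quality checks before the file is accepted downstream.
checks = {
    "no_null_keys": df["acct"].notna().all(),
    "balance_numeric": np.issubdtype(df["balance"].dtype, np.number),
    "row_count": len(df) == 3,
}
assert all(checks.values()), checks
```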

Created ADF pipelines and Data Flow jobs using conditional split, exists, joins, and the derived-column, window and pivot schema modifiers.

Created Technical Specification documents related to Datastage Software Install, Design, Architecture, Migration Implementation Steps.

Experience fine-tuning Teradata SQL queries by reviewing execution plans, repartitioning indexes, and bulk-loading data via BTEQ.

Implemented bulk load/BTEQ load in DataStage Teradata connectors and in the query sections of UNIX shell scripts.

Expertise in Snowflake concepts like setting up Resource monitors, RBAC controls, Scalable virtual warehouse, SQL performance tuning, zero copy clone, time travel and automating them.

Experience re-clustering data in Snowflake, with a good understanding of micro-partitions.

Experience in Migration processes to Snowflake, Azure Cloud environments from on-premises database environment.

Key achievement: partnered with 40+ teams to establish a business glossary and metadata repository using IBM InfoSphere and Ab Initio BRE components.

Implemented a data governance strategy for the EFT LoB by developing an enterprise-wide MDM system to provide consistency of data lineage and definitions across 40+ systems.

Created a centralized MDM database system with 500+ attributes per system across 5 systems to support various business units and business requirements.

Strong understanding of the principles of Data Warehousing using fact tables, dimension tables and star/snowflake schema modeling.

Worked extensively with Dimensional modeling, Data migration, Data cleansing, ETL Processes for data warehouses.

Led a team in developing new design approaches and modifying existing ones to automate routine tasks in line with the new ETL architecture direction. Used Enterprise Edition/parallel stages such as Datasets, Change Data Capture and Row Generator, among many others, in the ETL coding.

Leading the technical design team and performing code peer review and analysis.

Microsoft certified in Azure Data and Cloud Fundamentals (2022).

Created LoB level metrics using Tableau and finished multiple projects smoothly.

Excellent team player with problem-solving, conflict-handling and troubleshooting capabilities.

Proven ability to quickly learn and apply new technologies, with creativity, innovation, and the ability to work in a fast-paced environment.

Work effectively with diverse groups of people both as a team member and individual.

Trained in and a practitioner of Agile development methodologies, using tools such as Jira and Confluence in a compliance-driven environment.

Technical Skills

DWH/ETL Technologies : Azure Databricks, PySpark, ADF, IBM InfoSphere DataStage 11.7/9.3, ADLS, Python, Tableau, AWS, Snowflake, Power BI, Talend 7.3, Collibra, Hadoop, Cognos.

Databases : ADLS, Oracle, Teradata, Snowflake, MSSQL Server 2018

Languages : T- SQL, SQL, Python, Autosys, Unix Shell Scripts.

Other Tools : GitHub, Service Now, SCM, Jenkins, Jira, Confluence, Agile

Monitoring tools : Cloudera Manager, Autosys, Control M

Operating Systems : Windows, UNIX, AIX, Mac

Cloud Technologies : Azure ADF, Snowflake Cloud DWH, AWS

Reporting Tools : Tableau, Power BI, Cognos

Work Experience

Client: SMBC Bank NY USA Aug 2025 – Till Date

Role: Azure Architect & Data Engineer

Description:

SMBC Capital Markets in New York (CM Nikko) deals with different trading-platform systems to capture trade data from vendors (Tradeweb, Bloomberg, MarketAxess, ION, Wells Fargo). The Fixed Income Sales Trade division captures that trade data for fixed-income products, integrating real-time market data with SMBC's internal analytics libraries.

Responsibilities:

Built Real-time ingestion pipelines using Azure Databricks Streaming, Event Hubs to capture Trade and Market data from different vendors.

Architected real-time market data ingestion pipelines using Bloomberg BPipe API to capture pricing ticks, curves, benchmarks, and security-level reference data for Treasuries, Corporates, Swaps, FX, and Money Markets.

Implemented Delta Lake–based streaming frameworks in PySpark to process high-frequency BPipe updates with millisecond latency requirements.

Integrated ION trading feeds, Tradeweb and Market Axess to ingest real-time RFQs, quotes, order books, and trade execution messages into Azure Event Hubs, Databricks.

Onboarded data from around 10 trading platforms, including Bloomberg, Markit, ION, Wells Fargo, Tradeweb, MarketAxess and R8Fin, into legacy systems; onboarded new venues to the existing process and delivered to downstream teams.

Migrated legacy Python based pipelines to a modern Databricks PySpark framework, improving scalability, performance, and maintainability while reducing operational overhead.

Built robust PySpark pipelines for real-time and batch surveillance, covering insider trading, spoofing, layering, amended trades and other market-abuse detection scenarios.

Delivered regulatory and compliance reporting solutions including After-Hours Rates, Credit Reporting, and Cancel & Amendments Reporting, ensuring accuracy, auditability and alignment with regulatory standards, with bucketing of amendments and cancels at trader and desk level.

Architected cost-optimized Azure Databricks cluster strategy by segregating workloads into Job clusters, All-Purpose clusters, and SQL Warehouses reducing overall DBU consumption by 30%.

Designed and enforced cluster policies across Dev/QA/Prod environments restricting node types, maximum workers, auto-termination thresholds, and mandatory Photon-enabled runtimes.

Implemented Ephemeral Job clusters for production ETL pipelines, eliminating idle compute costs and reducing unnecessary DBU burn from interactive clusters.

Performed cluster right-sizing using Spark UI and Ganglia metrics by tuning executor memory, cores, shuffle partitions, and broadcast joins, reducing spill and shuffle overhead by 25%.

Configured optimized autoscaling clusters (min/max workers) for variable batch workloads, enabling dynamic scale-up during shuffle-heavy transformations and aggressive scale-down post-processing.

Enabled Spark dynamic allocation and Adaptive Query Execution (AQE) using spark.dynamicAllocation.enabled, spark.sql.adaptive.enabled, spark.sql.shuffle.partitions improving runtime stability and resource utilization.

Integrated Azure Monitor and Log Analytics alerts for DBU consumption thresholds, idle clusters, and high-memory node overprovisioning.

Optimized join-heavy and aggregation workloads using Photon vectorized execution engine, reducing CPU time and shuffle read/write metrics significantly.

Implemented Delta Lake performance optimizations using OPTIMIZE and ZORDER to mitigate small file issues and improve data skipping performance.

Report Writer Migration (multi-layered framework): led the migration of a complex Report Writer system to Azure Databricks, implementing a modular, class-driven architecture with clearly defined processing layers: Handler, Enrichment, Post-Enrichment, Filtering, Normalization, Aggregation, Summary, and Persistence.

Refactored and implemented hundreds of reusable methods and classes, improving code modularity, testability, and long term maintainability across multiple reporting domains.

Performance, governance and best practices: tuned Spark clusters, caching strategies, partitioning, and job parallelization to significantly reduce processing time for daily and intraday trading reports.

Implemented Azure Databricks Genie AI to enable governed natural language analytics over Unity Catalog tables.

Implemented cost control strategies by routing Genie workloads to Photon-enabled SQL Warehouses for improved vectorized execution.

Integrated Genie with Unity Catalog for centralized governance, ensuring row-level and column-level security enforcement during AI-generated query execution.

Collaborated with data governance teams to ensure compliance with PII masking and regulatory standards during AI-based query generation.

Designed Delta Lake multi-layer data model (Bronze/Silver/Gold) with GDPR-compliant data governance and Business Designed Transformations.

Developed resilient PySpark Structured Streaming jobs with watermarking, schema evolution, and exactly-once semantics.

Secured all vendor connections through Azure Key Vault and secret scopes in Azure Databricks.

Implemented cluster optimization, Photon runtimes, auto-scaling, and job clusters to minimize cost and improve runtime performance.

Implemented dynamic ADF pipelines using parameterized datasets, metadata-driven configurations, and dependency chains.

Technologies/Tools: Azure Data Factory, DataBricks, PySpark, Python Pandas, Numpy, Power BI, Bloomberg BPipe, Autosys, Katana, SQL Server.

Client: SMBC Bank NJ/NY USA Sept 2023 – July 2025

Role: Azure Architect & Data Engineer

Description:

The Oracle General Ledger application under Capital Markets is moving from a legacy system to the Azure cloud environment. The project involves an Azure Data Lake environment with source data extracted from legacy systems using ADF pipelines and transformed using Databricks. DataStage and Informatica were used for data and balance consolidation across the different inputs, and Tableau and Denodo views were used for reports.

Responsibilities:

Migrated DataStage jobs to the Azure cloud environment from on-premises Oracle and SQL Server database environments.

Created ADF Pipeline jobs to extract source data from different legacy systems like EBS, ELF and HORIZON (Oracle GL Application).

Created pipelines in ADF using Linked Services, Datasets and Pipelines to extract, transform and load data between different sources such as Azure SQL, Blob storage and Azure SQL Data Warehouse, including write-back in the reverse direction.

Built and managed Data Pipelines and Jobs with Databricks (Delta Live Tables, PySpark, Python, SQL) and Azure Data Factory to streamline data processing.

Experience developing Spark applications using Spark SQL/PySpark in Databricks for data extraction, transformation and aggregation across multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns.

Extracted, transformed and loaded data from source systems to Azure data storage services using a combination of Azure Data Factory, Spark SQL and U-SQL (Azure Data Lake Analytics); ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed it in an Azure Databricks Medallion architecture.

Wrote PySpark code for Trial Balance dataclasses, securities, derivatives, and functional processing such as netting, reclass, gross-up and offset rules.

Orchestrated Databricks notebooks using Azure Data Factory (ADF) pipelines for fully automated ETL workflows.

Developed PySpark DataFrames, UDFs, and Window functions to perform complex transformations and aggregations on large-scale structured and semi-structured data.

Optimized PySpark jobs using caching, partitioning, broadcast joins, coalesce and salting techniques, and dedicated clusters, reducing job execution time.

Built incremental data ingestion pipelines using Databricks Auto Loader and PySpark structured streaming.

Configured CI/CD Auto deploy for ADF Workflows, Databricks notebooks in Azure Devops with automated cluster provisioning.

Automated data processing and improved efficiency using Databricks Jobs and Data Pipelines (DLT) with Spark, enhancing overall data processing accuracy.

Designed incremental loading strategies with Change Data Capture (CDC) and Delta Lake MERGE operations in Bronze, Silver, Gold zones and catalog tables.
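The CDC-plus-MERGE pattern above might be expressed as a Delta Lake MERGE statement; the schema, table and `op` flag names are hypothetical, and `spark` is the ambient Databricks SparkSession.

```python
# Delta Lake MERGE for CDC upserts from a Bronze change feed into a
# Silver table (runs only where Delta Lake is available; names are
# hypothetical). 'op' marks the CDC operation, with 'D' for deletes.
spark.sql("""
    MERGE INTO silver.gl_balances AS tgt
    USING bronze.gl_balances_cdc AS src
      ON tgt.account_id = src.account_id
     AND tgt.period     = src.period
    WHEN MATCHED AND src.op = 'D' THEN DELETE
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED AND src.op <> 'D' THEN INSERT *
""")
```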

Deployed CI/CD pipelines using Azure DevOps with automated Databricks notebook testing and data validation.

Migrated around 100 MS SQL Server tables to an Azure cloud database as a PoC using ADF (Azure Data Factory).

Used Python Pandas and NumPy libraries in scripts for Azure Blob file (landing file) testing and data analytics.

Queried Databricks catalog tables for analytics and file extracts; demonstrated Databricks Unity Catalog data-lineage and data-quality insights to the Dev, CoE, QA and business teams.

Fine-tuned Databricks jobs per dataclass in terms of cluster customization, salting techniques to avoid data skew, repartitioning, framework redesign, joins, and control tables.

Wrote Python code for DataFrame joins, nested loops and advanced programming constructs.

Created ADF pipelines and Data Flow jobs using conditional split, exists, joins, and the derived-column, window and pivot schema modifiers.

Designed end-to-end data integration solutions using Azure Data Factory to extract, transform, and load data from on-premises and cloud sources.

Implemented data pipelines with data movement and transformation activities, utilizing Azure Data Factory's built-in transformations and custom activities.

Developed reconciliation jobs in DataStage that let financial controllers upload the latest mapping document (FINMAPP) and used it to generate the desired output files in the ADI system.

Implemented Trial Balance validation at entity level for different source systems, with reports on different parameters in Tableau.

Helped the QA team automate complex validations using Python code.

Technologies/Tools: Azure Data Factory, DataBricks, PySpark, Python Pandas, Numpy, Datastage, Collibra, Tableau, Autosys, Oracle, SQL Server, Service Now, JIRA, Power BI.

Client: Wells Fargo India Jan 2019 – July 2023

Role: Data Architect & Data Engineer

Description:

Enterprise Risk and Finance Technology is part of EFT Alignment in Wells Fargo which deals with Risk applications related to Internal Loss Data, Third Party, Capital Markets, Operational Risk and Finance technologies.

TRIMS and ILD are DWH applications on different tech stacks, built on MS SQL Server and Oracle.

Responsibilities:

Led the migration of the on-premises TRIMS (Third-Party Risk) application data warehouse from DataStage to the Azure cloud environment using ADF and Databricks PySpark.

Migrated as PoC for 100 MS SQL Server tables to Azure Cloud Database using ADF.

Designed end-to-end data integration solutions using Azure Data Factory & Databricks to extract, transform, and load data from on-premises to cloud environment.

Implemented dynamic column mapping in ADF through an Azure config table for multiple processes across different source systems, with a common flow and file deliveries.

Implemented data pipelines with data movement and transformation activities, utilizing Azure Data Factory's built-in transformations and custom activities in Data flow.

Implemented Delta Lake Framework using PYSPARK and loaded landing files into Parquet files with Medallion(Bronze/Silver/Gold) architecture transformations.

Rewrote legacy ETL SQL scripts as PySpark SQL scripts for faster performance.

Managed the ingestion of landing CSV files from different data sources into ADLS Gen2 using Azure Data Factory (self-hosted integration runtime), then mounted the storage for further processing in Databricks PySpark.

Ensured secure storage and accessibility of data in Azure Data Lake Storage with Delta Lake.

Created Databricks pipelines to handle data transformations, including extraction, cleansing, normalization and structuring of raw data, in a Delta Lake architecture involving Bronze, Silver and Gold enrichments.

Integrated Azure Data Factory with other Azure services, such as Azure Databricks and Azure SQL Database, to support advanced analytics and reporting needs.

Awarded AE4 for a PoC implementation that replaced all modules' NDM file delivery with a single ETL job integrating the NDM script.

Migrated the DataStage ILD application from an on-premises Oracle database environment to Snowflake.

Developed Snowflake streams, procedures and views to load different types of data to and from AWS S3 buckets, external stages and raw tables, as well as to transfer data to Databricks.

Experience with the Snowflake cloud data warehouse and AWS S3 buckets for integrating data from multiple source systems, including loading nested JSON-formatted data into Snowflake tables/DWH.

Experience with SnowSQL, developing stored procedures and writing queries to analyze and transform data.

Proficient in Snowflake concepts like setting up Resource monitors, RBAC controls, scalable virtual warehouse, SQL performance tuning, zero copy clone, time travel and automating them.

Experience re-clustering data in Snowflake, with a good understanding of micro-partitions.

Proficiency with Snowflake’s cloud data platform and its features such as Snowpipe, SnowStream, SnowTasks, Time Travel, and Zero-Copy Cloning.

Experience with SnowSQL, Snowflake's command-line interface, through Visual Studio Code, and the ability to write complex SQL queries for large-scale data analytics.

Designed and managed Kafka clusters with optimized partitioning, replication, and topic retention policies for high-throughput streaming.

Implemented Kafka Connect with JDBC/S3/Snowflake connectors to stream structured/unstructured data into Snowflake.

Rewrote multiple reporting projects from Excel and databases into Tableau and Power BI.

Led EFTPM metrics in Tableau and created SharePoint URLs for business use.

Led the team in the ETL tool migration from DataStage to Snowflake.

Identified ideas, prepared PoCs, and drove end-to-end delivery.

Drove meetings with BSC, QA and release management.

Project management activities: quality delivery, incident tracking, issue resolution, code walkthroughs, code reviews, and defect-prevention management.

Technologies/Tools: Azure Databricks PySpark, ADF, ADLS Gen2, Snowflake, DataStage v11.7, Matillion, T-SQL, Tableau, Autosys, Oracle, SQL Server, Service Now, JIRA

Client: Wells Fargo India Mar 2016 - Dec 2018

Role: Team Lead and Developer

Description:

The Operational Risk Utilities (1ORU), ORIS and SHRP applications, as part of Wells Fargo Enterprise Risk Management, are responsible for supporting analytics capability and providing the environment to house and maintain TPRM, ILD, CRAS, EIW and SCM data for reporting and analytics for Wells Fargo stakeholders such as the EDA, TRIMS, SHRP, SORP, EIW, CRAS and ILD teams.

Responsibilities:

Led the team in design, development, Agile sprint, testing and deployment activities.

Led the team, technically and managerially, through the DataStage migration from v9.1 to v11.5, including server setup, environment creation, NDM configuration, testing, and code fixes/rewrites.

Awarded AE4 for a PoC implementation that replaced all modules' NDM file delivery with a single ETL job integrating the NDM script.

Repackaged the DataStage v11.5 software at Wells Fargo enterprise level, fixing it as a replacement for the older version.

Created Technical Specification documents related to Datastage Software Install, Design, Architecture, Migration Implementation Steps.

Shared containers in Datastage were used for repetitive steps while creating file using stored procedure passing different parameters.

Fine-tuned DataStage jobs wherever required, with fixes such as hash partitioning of join/lookup keys, shared containers, before/after-job subroutines, Sequencer Job Activity for parallel runs, and query tuning on both the ETL and DB sides.

Performance-tuned Teradata SQL queries by reviewing execution plans in Teradata SQL Assistant, repartitioning indexes, avoiding full table scans, and using CTE expressions.

Collected Teradata table stats and dropped indexes/keys before bulk loads.

Handled PROD support and migration activities for the ORU, TRIMS, SHRP and ORIS applications smoothly, without a single PROD issue after conversion or upgrade.

Implemented SCD Types 2 and 3 in DataStage and used JSON/XML configuration to read and write files.

Led the EFT LoB metrics build-out from scratch in Tableau, covering all PROD support issues and change-request deployments.

Created multiple dashboards with multi-table joins using data blending, dual axis, blended axis, context filters, global filters, and interactive dashboards in Tableau.

Created hierarchy filters to drill the metrics down and up the managerial hierarchy at LoB level, plus year-to-date filters over dashboards and views.

Identified ideas, prepared PoCs, and drove end-to-end delivery.

Design Technical docs, prepare test cases and peer review of Code/Scripts.

Developed enterprise LoB metrics for every quarter of the year.

Led STAMP forecast and metrics activities; drove meetings with BSC, QA and release management.

Project management activities: quality delivery, incident tracking, issue resolution, code walkthroughs, code reviews, and defect-prevention management.

Technologies/Tools: DataStage v11.5, v9.1, Tableau, T-SQL, Control M, Teradata 15 DB, SQL Server, Stored Procedures, SCM, Pac2k, NDM scripts, RSA Archer reports.

Client: Wells Fargo India Aug 2012 - Mar 2016

Role: ETL Design and Developer

Description:

The Capital Markets Operational Data Store (CMODS) is an ETL and database system that contains consolidated mortgage loan data, including pipeline loans, reverse loans, and loan commitments from rate lock to settlement, using data provided by participating loan origination systems, servicing systems, and other mortgage information systems. Loan information is refreshed daily. CMODS merges the data from multiple loan origination systems into a single unified repository for analysis and provides data for downstream business-line functions. Business-line customers include Secondary Markets Accounting & Controls (SMAC), the Pipeline/Warehouse Asset Valuation (PWAV) group, Servicing Portfolio Management, Asset Sales, Structured Finance, Investment Analytics, Agency Relations, and the Trade Desk.

Responsibilities:

Developed enterprise and M&E projects every quarter.


