Upendar Charugundla
Senior Data Engineer
U.S. Citizen: No Sponsorship Required
Address: Texas 75024
Email: *************@*****.***
Technical Skills
Cloud Computing: Azure, AWS EC2, S3, DMS, Lambda, Athena, Glue, EMR, RDS, AWS Transfer Policy, Step Functions, Databricks, Snowflake
Big Data Technologies: Hadoop, Hive, Pig, YARN, Flume, Sqoop, Spark, Avro, JSON, Parquet, and Kafka
Languages: Python, SQL, PL/SQL, T-SQL, Unix/Linux Shell Programming
ETL Tools: Informatica PowerCenter, PowerExchange, ICDI, IICS, Ab-Initio, DBT
Databases: DB2, Snowflake, Teradata, Oracle, MS SQL Server
Reporting: Tableau, Cognos, Business Objects
Data Structures: Facts, Dimensions, Star Schema, Slowly Changing Dimensions
Data Modeling Tools: ERWIN
DB Utilities: SQL*Loader, SQL Developer, SQL*Plus, Toad, DbVisualizer
Version Control & DevOps: Visual SourceSafe, Jenkins, Git, Bitbucket, SVN, Terraform, Splunk
Scheduling Tools: Crontab, Control-M, Autosys, Airflow
Education
Master’s degree in computers
Bachelor’s degree in computers
Summary
Dedicated and resourceful IT professional with over 20 years of software engineering experience across multiple organizational domains, specializing in Data Engineering, Data Migration, Data Warehousing, and Data Center exit projects. Demonstrated skill in supporting and delivering multi-platform technologies, hosting a wide range of applications, and identifying new technologies for implementation. Full development life cycle expertise in delivering data warehousing and data migration projects, covering data analysis, extraction, transformation, and loading with ETL tools, combined with a deep understanding of technology and a focus on delivering business solutions. Efficient technical lead with strong communication, interpersonal, problem-solving, analytical, decision-making, and leadership capabilities. Organized and dependable, successful at managing multiple priorities with a positive attitude and a willingness to take on added responsibilities to meet team goals.
Work History
Wells Fargo, Irving, TX / Charlotte, NC Jan 2025 – Present
Senior Data Engineer
● Senior Data Engineer on the Wells Fargo ODS team, supporting the legacy Informatica-based ODS application while actively contributing to its modernization on Azure Databricks.
● Designed and developed Spark-based Databricks pipelines to replace complex legacy Informatica ETL workflows for trading and reference data, including Trades, Positions, Securities, Holdings, Accounts, Balances, Customers, and Addresses.
● Worked closely with Data Architects and stakeholders to analyze and rationalize a 20+ year ODS landscape, identifying redundant ETL processes, zero-dollar trades and positions, and obsolete database copies, resulting in significant storage reduction.
● Contributed to architecture reviews and design discussions for the Azure Databricks platform, ensuring solutions aligned with Wells Fargo cloud, security, and data standards.
● Partnered with Product Owners to translate business and regulatory requirements into JIRA stories, supporting development, testing, and deployment activities.
● Implemented data quality validations (completeness, duplication, timeliness) to ensure reliability and consistency between Informatica ODS outputs and Databricks feeds.
● Focused on performance and operational stability, optimizing Spark jobs, supporting monitoring and rerun procedures, and assisting with controlled cutover from legacy to modern pipelines.
JP Morgan Chase, Plano, TX Apr 2017 – Nov 2024
Senior Data Engineer
● Led redesign and automation of the UPD framework using Python, SQL, PL/SQL, SFTP, and Control-M, cutting end-to-end processing time by 50% while improving reliability and auditability.
Certifications
● Informatica PowerCenter Certified Professional
● Oracle Database PL/SQL Program Certified Professional
● AWS Solutions Architect Associate
Areas of Expertise
● Technical leadership with impressive success rates in Data Center Exit project execution.
● Extensive experience building ETL data pipelines to fulfill business requirements.
● Orchestrated the migration from Ab-Initio to Informatica, generating cost savings.
● In-depth knowledge of the Hadoop ecosystem: Pig, Hive, Spark, Avro, Parquet, Flume, Sqoop, and Kafka.
● Proficient in Hadoop development and components such as HDFS, Job Tracker, Data Node, Name Node, and PySpark.
● Expertise in data migration and database tools, including Informatica, Ab-Initio, Oracle, Teradata, and DB2.
● Well versed in dimensional modeling using Star Schema and Snowflake Schema.
● Extensive experience automating manual production processes.
● Played a pivotal role in leading data migration projects involving multiple terabytes of data.
● Well versed in the Oracle development tool set, including SQL*Plus, PL/SQL, SQL*Loader, PL/SQL Developer, and TOAD.
● Extensive experience fine-tuning ETL/ELT pipelines in production environments.
Languages
English, Telugu
● Owned data integrity and data quality during migrations, implementing constraints, stored procedures, triggers, and complex SQL-based validation frameworks to ensure accuracy and regulatory compliance.
● Led CCAR regulatory delivery, designing and implementing quarterly FRB changes, driving regulatory projects, and coordinating across Risk, Controllers, and Product teams.
● Built and managed scalable ETL and orchestration pipelines using Python, PySpark, and SQL across AWS (Glue, Lambda, S3), including migration of CCAR stage loads from Hadoop to AWS Data Lake (Raw, Trusted, Refined zones).
● Drove real-time and batch data pipeline development, including Spark DPL pipelines for ingesting CIB loan data from JSON into Hadoop HDFS and Oracle staging, with strong focus on performance testing and production readiness.
● Provided technical leadership across delivery and operations, leading the McCoy Data Center Exit from a data engineering and platform readiness perspective.
● Supported production releases and incident management, coordinating closely with SRE, UAT testing, the Regulatory Controllers team, and business stakeholders to ensure stable deployments and minimal operational impact.
JP Morgan Chase, Dallas, TX Sep 2010 – Apr 2017
ETL Developer
● Played a key role in ETL architecture and design, partnering with architecture teams on data mappings and participating in business requirement gathering and specification documentation.
● Designed and developed complex Informatica ETL workflows and mappings using a wide range of transformations (Joiner, Expression, Aggregator, Lookup, Rank, Router, Normalizer, Sequence Generator), including robust error handling and automated batch ID generation.
● Delivered real-time and batch Informatica solutions for trading and collateral systems, including LongBox, ACCE, GCF, and EVARE projects, ingesting Positions, Securities, Obligations, Custody and Non-Custody data into the Global Collateral Engine (GCE) repository.
● Led and mentored ETL developers on large initiatives (e.g., GCF project), while also managing Informatica PowerCenter installation, configuration, and upgrade activities in coordination with Informatica Administration teams.
● Supported production releases and operations, creating detailed implementation and rollback documents, working closely with Release Management, and developing UNIX shell scripts and MQ workflows to support end-to-end ETL processing.
CompuCom, Dallas, TX Nov 2008 – Sep 2010
ETL Consultant
PepsiCo, Plano, TX Jun 2005 – Nov 2008
Informatica ETL Developer