Srikar Gampa
AWS Certified Solutions Architect Professional **************@*****.***
AWS Certified Solutions Architect Associate mobile:703-***-****
AWS Certified Cloud Practitioner https://www.linkedin.com/in/srikargampa/
Professional Summary
17+ years of experience in the IT industry, including 9+ years of deep expertise in deploying AWS cloud services. Proven track record in designing and implementing scalable cloud architectures, leading database migrations from on-premises to AWS, and modernizing applications using microservices. Specialized in cloud-based data architecture, CI/CD pipeline automation (Bitbucket, Bamboo), and end-to-end data analytics solutions.
As part of the platform team, built reusable CloudFormation templates aligned to organization standards to spin up various AWS services such as RDS Aurora PostgreSQL databases, RDS PostgreSQL databases, Glue jobs, Redshift clusters, Step Functions, Lambda functions, SNS topics, S3 buckets, EC2 instances, and EMR clusters.
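A reusable template of this kind can be sketched in Python as a parameterized dict that serializes to CloudFormation JSON; the resource names, parameters, and tag keys below are illustrative assumptions, not the actual organization standards.

```python
import json

def rds_postgres_template(environment: str, instance_class: str = "db.t3.medium") -> dict:
    """Render a minimal, parameterized CloudFormation template for an RDS
    PostgreSQL instance. Names and tags are illustrative placeholders."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": f"Reusable RDS PostgreSQL template ({environment})",
        "Parameters": {
            "DBName": {"Type": "String", "Default": "appdb"},
        },
        "Resources": {
            "Database": {
                "Type": "AWS::RDS::DBInstance",
                "Properties": {
                    "Engine": "postgres",
                    "DBInstanceClass": instance_class,
                    "DBName": {"Ref": "DBName"},
                    "StorageEncrypted": True,  # assumed org standard: encrypt at rest
                    "Tags": [{"Key": "Environment", "Value": environment}],
                },
            }
        },
    }

# Serialize for deployment via the CLI or a pipeline step
template_json = json.dumps(rds_postgres_template("dev"), indent=2)
```

The same function can be re-invoked per environment, which is what makes the template reusable across teams.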
Created CI/CD pipelines for AWS infrastructure services using the Bitbucket and Bamboo Atlassian tools, standardized an exemplar-repository deployment process, and enabled application teams for self-service cloud adoption by sharing knowledge on how to spin up AWS infrastructure by following the deployment process and cloning the exemplar repository.
Participated in Architecture Review Board meetings to review use cases and architectures for alignment with best practices and standards, and helped onboard new AWS services, technologies, and architecture patterns.
Worked with various application teams to introduce cost optimization strategies, automatically pausing non-prod infrastructure when there is no usage and enabling Reserved Instance usage to optimize cost.
Defined the roadmap and MVP for the platform team, presented features and products to leadership and business teams, and collaborated with multiple teams across the enterprise.
Collaborated with multiple stakeholders across the organization, supported multiple teams adopting cloud, and helped resolve issues.
Designed and deployed AWS Glue jobs, S3 buckets, Step Functions, and Lambda functions using Terraform, with reusable code incorporating organizational security, governance, monitoring, and alerting standards.
Developed a parameterized AWS Glue job with concurrency and enabled a Glue ETL framework to load data from S3 into Redshift tables. Implemented AWS Step Functions to orchestrate and trigger multiple parallel executions of the same Glue job, each configured to read from a specific S3 bucket and load the corresponding Redshift table. Automated the pipeline so that when a file lands in S3, it invokes an AWS Lambda function, which starts the Step Functions workflow. Enabled CloudWatch logging and alerting mechanisms through a Lambda function that triggers a ServiceNow incident in the event of a Glue job failure.
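The S3-to-Step-Functions trigger described above might be sketched as follows; the bucket-to-table mapping and the state machine ARN are illustrative placeholders, not the actual project values, and the Step Functions client is injected so the logic can be exercised offline.

```python
import json

# Hypothetical mapping of landing buckets to target Redshift tables
BUCKET_TO_TABLE = {
    "sales-landing": "analytics.sales",
    "orders-landing": "analytics.orders",
}

def build_execution_input(s3_event: dict) -> dict:
    """Turn an S3 ObjectCreated event into input for the Step Functions
    workflow that runs the parameterized Glue job."""
    record = s3_event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    return {
        "source_bucket": bucket,
        "source_key": key,
        "target_table": BUCKET_TO_TABLE[bucket],
    }

def handler(event, context, sfn_client=None):
    payload = build_execution_input(event)
    if sfn_client is None:  # lazy import keeps the logic testable without AWS
        import boto3
        sfn_client = boto3.client("stepfunctions")
    sfn_client.start_execution(
        stateMachineArn="arn:aws:states:REGION:ACCOUNT:stateMachine:glue-load",  # placeholder
        input=json.dumps(payload),
    )
    return payload
```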
Designed and implemented an architecture pattern to migrate mainframe data from on-premises to AWS Aurora PostgreSQL using Attunity Replicate. Architected a write/read strategy where write operations continued on the mainframe (gold copy), while read traffic was served from Aurora PostgreSQL for improved performance and scalability. Performed full data loads and enabled Change Data Capture (CDC) for continuous replication of platform tables. Created detailed migration documentation and conducted training sessions, enabling multiple project teams to transition from mainframe to cloud-based microservices aligned with domain-driven design and bounded-context principles.
Built and led a team that developed and launched Customer 360 data products on a modernized AWS Redshift data warehouse platform. Designed scalable cloud data warehouse strategies involving Redshift and Aurora PostgreSQL, improving performance and reducing time to market by 40%.
Developed Python scripts for data reconciliation, automated validation, and ETL process monitoring, ensuring high data integrity
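A minimal sketch of the reconciliation logic described above, assuming row counts have already been fetched from the source (DB2) and target (Postgres) databases; the table names and counts here are illustrative.

```python
def reconcile(source_counts: dict, target_counts: dict) -> dict:
    """Return per-table status: OK when counts match, otherwise the delta."""
    report = {}
    for table in sorted(set(source_counts) | set(target_counts)):
        src = source_counts.get(table, 0)
        tgt = target_counts.get(table, 0)
        report[table] = "OK" if src == tgt else f"MISMATCH (source={src}, target={tgt})"
    return report

# Example run with made-up counts
report = reconcile(
    {"customers": 1200, "accounts": 5400},
    {"customers": 1200, "accounts": 5398},
)
```

In practice the counts would come from `SELECT COUNT(*)` queries against each database, and the report would feed the monitoring/alerting step.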
Led Cloud Cost Optimization effort, designed and implemented automated cost saving techniques, resulting in savings of $1M\YOY
Ingested data from multiple sources into the S3 data lake (raw data layer), applied transformation logic per business-driven lineage documents using Glue ETL, and saved the transformed data to a transformed S3 bucket.
Implemented Twelve-Factor App principles in microservice architecture design and in all proposed architectures for seamless integration, compliance, security, portability, scalability, and maintainability.
Good knowledge of LLMs, NLP, Amazon SageMaker JumpStart models, and Amazon Bedrock. Designed and implemented a POC for a Retrieval-Augmented Generation (RAG)-based chatbot virtual assistant using Amazon Bedrock. Uploaded enterprise documents to S3, created a Bedrock Knowledge Base to generate embeddings via Amazon Titan, and stored them in an OpenSearch Serverless vector DB. Developed a Streamlit-based chat interface that leverages the Bedrock Retrieve API to fetch contextually relevant document chunks and pass them to the Meta Llama model for context-based responses.
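The retrieval step of such a RAG chatbot can be sketched as a prompt-assembly function; the input shape mirrors the `retrievalResults` returned by the Bedrock Retrieve API, but the field contents below are made up for illustration.

```python
def build_rag_prompt(question: str, retrieval_results: list, max_chunks: int = 3) -> str:
    """Stitch retrieved document chunks into a grounded prompt for the LLM.
    retrieval_results follows the Bedrock Retrieve response shape:
    [{"content": {"text": "..."}}, ...]."""
    chunks = [r["content"]["text"] for r in retrieval_results[:max_chunks]]
    context = "\n\n".join(chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```

The assembled prompt would then be sent to the model (e.g., via Bedrock's invoke API); only the assembly logic is shown here.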
Designed GenAI-powered virtual assistants using Amazon Bedrock, Claude, LangChain, and LangGraph to enable autonomous retrieval-augmented workflows, integrated Meta LLaMA for contextual NLP-driven responses
Proficient in ITIL methodologies, Production Support process, SLA Guidelines
Familiar with other CI/CD tools such as GitHub and GitLab
Good knowledge of EKS, Kubernetes, Kafka, Kinesis, Snowflake, NoSQL, Oracle and SQL Server databases, Azure, GCP, Tableau, MySQL, Redis, Ansible, Docker, and Helm
Focused on NLP use cases for document Q&A, summarization, and intelligent search using RAG-based pipelines
Familiar with deep learning frameworks such as PyTorch and TensorFlow, SageMaker Unified Studio, and Hugging Face models
Education
● Bachelor's in Computer Science from Jawaharlal Nehru Technological University, Hyderabad, India
● General Management (EPYP) from Indian Institute of Management, Calcutta, India
● Pursuing the Chief Technology Officer program at the Wharton School, University of Pennsylvania
Professional Certifications
AWS Certified Solutions Architect Professional
IBM Certified Database Administrator DB2 V9.7 for Linux, Unix, Windows
ITIL V3 Foundation Certified through APMG International
Certificate in Data Science: Data to Insights from the Massachusetts Institute of Technology
Completed the course “Tackling the Challenges of Big Data” from MIT, USA
Project & Product Management Excellence (Indian School of Business, Hyderabad) through International Board of Project and Product Leadership
Technical Skills
Technology: AWS RDS Aurora PostgreSQL, AWS Redshift, AWS Glue ETL, Athena, EMR, Python, DB2 LUW 9.x/10.x, DB2 BLU, Cloud, EC2, S3, IAM, EBS, ELB, VPC, Route 53, CloudWatch, Lambda, data lake, data architecture, SQL, data modeling, database partitioning, data replication, CloudFormation, AMI, High Availability, on-premises-to-cloud database migration, GitHub, Bitbucket, Bamboo, Terraform, Git, Docker, EKS, DevOps CI/CD, AWS CDK, Okta SSO, QuickSight, SageMaker, GenAI, Amazon Bedrock
Database Tools: pgAdmin, SQL Workbench, SQL Developer, IBM Guardium, DB2 Connect, SQL/Q Replication, Oracle GoldenGate
Operating System: AIX, Linux, Windows, Solaris, UNIX
ITIL Tools: ServiceNow, Maximo, Remedy
Monitoring Tools: Tivoli Monitoring, Netcool, CloudWatch
Professional Experience
Vanguard Group, Malvern, PA Feb 2024 – Present
Lead Consultant
Project description:
●Designed architecture to implement Amazon Bedrock across enterprise accounts; enabled service control policies to grant required foundation model access to multiple AWS accounts used by various LOB teams. Applied responsible AI principles and implemented access guardrails and ethical-use boundaries for foundation models across enterprise accounts using SCPs, IAM roles, and Bedrock governance controls.
●Designed and implemented an AWS Lambda function to copy data from one S3 bucket to another, using CI/CD pipelines with Bitbucket for version control and Bamboo for deployment
●Adhered to organizational security standards by enforcing IAM least-privilege methodology; created IAM roles and policies, defined service control policies for enabling AWS services, created secrets using Secrets Manager, and used KMS keys for encryption of data at rest and in transit
●Proficient in IAM tools and technologies; enabled least-privilege access methodology and granted access to AWS services only to principals that need it. Defined service control policies at the organization level and enforced permission boundaries and guardrails across the organization
●Implemented least privilege access policies, SCPs, and IAM permission boundaries across AWS Organizations, enhancing multi-account security posture for enterprise workloads.
●Designed and implemented Landing Zones using AWS Control Tower, enforcing security and compliance through SCPs, IAM policies, and permission boundaries across AWS Organizations
●Designed architecture to implement Athena–Tableau–Okta integration for SSO capability, which allowed users to leverage the Athena platform instead of EMR for querying the data lake; this led to an EMR cluster decommission project with cost savings of $30M YoY
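The S3-to-S3 copy Lambda mentioned above might look like the following sketch; bucket names are placeholders, and the S3 client is injected so the logic can be exercised without AWS credentials.

```python
import os

def handler(event, context, s3=None):
    """Copy each object reported in an S3 event notification to a
    destination bucket (taken from an environment variable here)."""
    if s3 is None:  # lazy import keeps the handler testable offline
        import boto3
        s3 = boto3.client("s3")
    dest_bucket = os.environ.get("DEST_BUCKET", "destination-bucket")  # placeholder
    copied = []
    for record in event["Records"]:
        src_bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        s3.copy_object(
            Bucket=dest_bucket,
            Key=key,
            CopySource={"Bucket": src_bucket, "Key": key},
        )
        copied.append(key)
    return {"copied": copied}
```

Injecting the client also makes the function easy to unit-test in the CI/CD pipeline before Bamboo deploys it.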
Amazon Web Services July 2022 – Jan 2024
Solutions Architect
As part of the Digital Innovation program, worked with customers to identify GenAI use cases and helped implement an internal enterprise chatbot using a RAG architecture to retrieve information from documents and knowledge sources per user prompt.
Enabled adoption of Amazon Redshift by leading use case discovery sessions, conducting Redshift workshops, and developing a proof-of-concept (POC) to evaluate performance and scalability. Identified key performance optimizations during POC and successfully implemented the solution into production.
Designed and implemented S3 Cross-Region Replication (CRR) to seamlessly replicate data from a specific source S3 path to a target path in the destination region
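A prefix-scoped CRR rule of this kind corresponds to a replication configuration like the sketch below (the dict shape matches what boto3's `put_bucket_replication` expects); the bucket ARN, role ARN, and prefix are placeholders.

```python
def replication_config(prefix: str, dest_bucket_arn: str, role_arn: str) -> dict:
    """Build an S3 replication configuration that replicates only objects
    under the given source prefix to the destination bucket."""
    return {
        "Role": role_arn,  # IAM role S3 assumes to replicate
        "Rules": [
            {
                "ID": f"crr-{prefix.strip('/').replace('/', '-')}",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": prefix},  # replicate only this path
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": dest_bucket_arn},
            }
        ],
    }
```

The resulting dict would be passed as `ReplicationConfiguration` when calling `put_bucket_replication` on the source bucket (versioning must be enabled on both buckets).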
Created AWS data analytics reference architecture patterns including S3, Glue, Redshift, SageMaker, and QuickSight
Implemented an IaC CI/CD framework; integrated security tools into CI/CD pipelines to enforce secure coding and deployment practices
Created GenAI reference architecture diagrams leveraging Bedrock, SageMaker, LLMs, RAG, Kendra, and the data analytics ecosystem
Conducted 50+ workshops on Redshift, databases (RDS), data analytics services, SageMaker, and GenAI, helping customers achieve their desired skill sets, which led to new workloads in AWS and applications migrating to AWS
Led Redshift query optimization workshops, reducing query execution time by 50% through indexing, distribution key strategies, and workload management.
Conducted a migration assessment analysis for an entire data center; identified 800+ servers, databases, and 50TB of storage for migration. Designed the migration strategy and helped the customer during the migration journey, which generated $4M YoY
Analyzed EMR and all workloads, created a cost optimization strategy, and saved $1.5M YoY using Savings Plans, compute-type, multi-tenant, and Reserved Instance approaches
Graduated TFC member of GenAI and part of the GenAI Ambassador group; helped multiple customers understand GenAI and SageMaker and implement AI/ML
Graduated TFC member of RDS Db2 technology; as part of the RDS Db2 group, helped customers leverage RDS Db2 and migrate on-prem Db2 to the managed RDS Db2 version
Created Tableau dashboards on Redshift data for reporting needs
Designed and implemented Redshift–Okta integration and enabled SAML authentication through Query Editor
As an RDS Db2 Beta Buddy, participated in product development and testing, worked with the product team to deliver sessions to customers, and collaborated with customers to launch RDS Db2 beta instances and test the beta by migrating dev workloads before the product launched at re:Invent 2023
Engaged with stakeholders to understand business requirements and translated them into scalable and efficient AWS solutions
Designed strategy for migrating complex data warehouse of 100TB from existing Teradata data warehouse to Redshift and helped customer in migration journey starting with POC, training, finalizing Redshift Architecture, provisioning infra, strategy to migrate and implementation
Demonstrated ability to migrate or modernize legacy customer solutions to the cloud
Increased AWS adoption by delivering POCs for different AWS services like EMR, DataBrew, Glue ETL, Redshift, Sagemaker, Bedrock and more
Led multiple AWS Well Architected Reviews across enterprise platforms to ensure workloads aligned with best practices in security, reliability, performance, and cost optimization
Leveraged the AWS Well-Architected tool to build secure, high-performing, resilient, and efficient infrastructure
Demonstrated ability to adapt to new technologies and learn quickly
Mentored junior architects and led knowledge-sharing sessions as part of internal and external AWS events and the AWS Ambassador program.
Vanguard Group, Malvern, PA Mar 2018 – July 2022
Technology Lead
Project description:
Vanguard Group Institutional Services is migrating legacy applications (plan balances, PE, 401K) to AWS cloud technologies. As part of the platform architecture team, my role is to design a strategy to migrate applications from on-premises to cloud, introduce new technologies, perform POCs, and stabilize the production support process.
●Led, designed, and implemented AWS Redshift as a modern data warehouse platform
●Led cloud strategy and technical enablement workshops across application, platform, and data engineering teams, sharing best practices for cloud adoption and innovation at scale
●Enabled Glue infrastructure and a Glue ETL framework pipeline to run multiple business use cases
●Led security architecture reviews (SAR) and integrated IAM, KMS, and Secrets Manager controls into all layers of AWS infrastructure
●Enabled Athena as a Query tool for BI & Data Analysts
●Enabled cloud data architecture for IIG and migrated data from DB2 tables to various microservice apps and cloud databases (AWS Aurora PostgreSQL)
● Led multiple MSM teams of 20+ engineers, architects, and data engineers, providing technical guidance, delegating, and assigning JIRA stories as needed to accomplish project goals and MVPs, and conducting knowledge-sharing sessions to upskill the team in all technical and functional areas
●Designed and deployed a scalable microservices architecture using Kubernetes on AWS EKS
●Extensive experience writing AWS CloudFormation templates leveraging troposphere and boto3
●Designed CloudFormation templates for AWS RDS Aurora PostgreSQL DB clusters, primary and replica instances, and deployed Aurora clusters across environments
●Designed and implemented data lake pipelines ingesting real-time data from mainframe and transactional systems using Attunity Replicate, Kinesis, and S3; further modernized analytics ecosystems using Glue and Redshift, supporting analytics reporting needs
●Designed a high-availability solution from the AWS RDS database perspective for an application requiring 99.99% availability; designed and implemented AWS RDS, Route 53, read replicas, and a Multi-AZ solution to span AWS resources across availability zones
● Designed a Data Lake solution, to aggregate data from multiple data sources and land at S3 bucket.
●Participated in RFP evaluation proposals and presentations; performed POCs and evaluated data analytics and augmented machine learning tools such as DataRobot, Domino Data Lab, dotData, and H2O.ai
● Migrated DB2 mainframe tables to RDS PostgreSQL using Attunity Replicate; created migration tasks for full-load and CDC processes from DB2 mainframe to AWS RDS PostgreSQL for microservices
●Designed data pipelines for low latency and high throughput using Attunity and Kinesis for real-time ingestion from mainframe to S3 for the data lake and data analytics reports
● Designed a real-time data strategy to retrieve data from DB2 tables and transaction logs and load it into S3 buckets; performed data transfer and parsing using SQS, transformed SQS messages based on FIFO ordering, ran transformations on an EMR cluster, loaded the transformed data to another S3 data lake, and implemented an EMR query cluster to connect to the S3 data using Presto
●Designed Cloud Operational Support model and stabilized teams to adhere to process and guidelines
●Utilized Attunity Enterprise Manager to monitor and maintain tasks associated with various table migrations (full load and CDC)
●Defined and implemented AWS tagging strategy across multi-account architecture to improve resource tracking, cost allocation, and reporting, aligned with FinOps best practices
●Designed an AWS RDS self-provisioning solution through a CI/CD Bitbucket/Bamboo process and enabled application teams to leverage the Bitbucket project to spin up AWS RDS instances with a one-click approach
●Utilized the DataRobot AutoML tool to connect to an AWS S3 data source, perform feature engineering and data cleansing, run various machine learning models, evaluate model accuracy using AUC values, and perform model selection to deploy the model in production to predict outcomes per the business use case
●Wrote automated Python scripts for data reconciliation between DB2 and Postgres tables
●Designed CloudFormation templates for RDS (MySQL, PostgreSQL, SQL Server) databases and spun up multiple databases for multiple SI app teams for microservices and client applications
● Designed a solution to capture RDS events and trigger an incident in ServiceNow for the respective ServiceNow assignment group. The RDS instance is subscribed to an SNS topic for event subscriptions; when an RDS event occurs, a Lambda function is triggered to capture the event message and route the notification to an SNS topic, which then routes it to the Tivoli viewport and the ServiceNow assignment group, creating an incident in case of severity or otherwise a warning notification with the required details.
● Documented Business Continuity and Disaster Recovery strategies for critical AWS-hosted workloads, including multi-AZ failover, RDS backup automation, and cross-region replication
●Participated in cloud architecture governance meetings enforced standards via design reviews, architecture checklists, and reusable patterns
● Implemented infrastructure as code: designed CloudFormation code, promoted it across environments, and spun up RDS instances with a single-click deployment approach
● Designed and implemented CloudWatch alarms, configured metrics for various RDS instances, and incorporated CloudWatch metrics within the CloudFormation templates
●Delivered architecture solutions within Agile teams using Scrum methodology; collaborated across platform, security, and data engineering, Application, IAM teams to meet sprint goals and feature releases.
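The severity-routing decision inside the RDS-event Lambda described above might be sketched as follows; the severity keywords are assumptions for illustration, not the actual production rules.

```python
# Keywords that (for this sketch) mark an RDS event as severe enough
# to open a ServiceNow incident rather than a warning notification
SEVERE_KEYWORDS = ("failure", "failover", "low storage", "outage")

def route_rds_event(event_message: str) -> dict:
    """Decide whether an RDS event message becomes a ServiceNow incident
    or a plain warning notification."""
    text = event_message.lower()
    severe = any(keyword in text for keyword in SEVERE_KEYWORDS)
    return {
        "action": "create_incident" if severe else "create_warning",
        "target": "ServiceNow" if severe else "notification",
        "message": event_message,
    }
```

In the real pipeline this decision would sit between the SNS-triggered Lambda and the downstream SNS topic that fans out to Tivoli and ServiceNow.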
Freddie Mac, McLean, VA Sep 2016 – Mar 2018
Lead Consultant
Project description:
In the Freddie Mac database production support group, my role is to maintain production databases, support and fix production database issues, migrate databases from on-premises to cloud, and re-platform the CDW data warehouse.
● Worked on AWS services like EC2, S3, RDS, IAM, EBS, ELB, VPC
● Designed database migration strategy for on premise to AWS private cloud
● Migrated databases from on-premises Oracle to cloud RDS MySQL and PostgreSQL
● Led a 9-month technology program to design and rebuild the corporate data warehouse platform, improving business report performance by 40%
● Designed and implemented highly available architecture patterns for database systems, enhancing database availability for business applications and contributing to stable and smooth business operations
● Migrated a DB2 corporate data warehouse DPF database of 45+TB and 33 logical partitions from DB2 9.7 on AIX to DB2 10.5 on Linux
● Worked on EMC advanced database backup & restore mechanisms using DDBOOST
● Led, designed, and migrated 500+ databases from lower versions to higher versions to stay current and compliant
● Migrated Oracle databases from 11g to 12c
● Performed Oracle database administration activities such as export/import using Data Pump
SunTrust Bank, Atlanta, GA (Infosys, USA) Oct 2015 – Sep 2016
Technology Lead
Project description :
In the SunTrust Bank database production support group, my role is to maintain production databases, support and fix production database issues, migrate the data warehouse from ISAS to PDOA, and perform version upgrades.
●Managed a global team of 20 employees including offshore and onsite, supported and developed database systems for business applications
● Mentored over 30 consultants and managers as part of onsite and offshore practice and helped them achieve the technical and domain skills to deliver fast-paced projects
●Project-managed multiple teams of technical specialists across Database, Application, Data Governance, and Data Security; created project goals, and assigned, tracked, monitored, and delivered projects per timelines
● Experience with IBM ISAS data warehousing (DPF); managed an ISAS BI appliance database of 95TB
●Worked on Database Partitioning Features (DPF) Environment and worked on creation of range partitions and data partitions
● Worked closely with application developers, data modelers, engineering, security administrators, capacity planning & monitoring, service desk scheduling, and network administrators as needed
● Proficient in performance tuning, overseeing backups, and creating scripts for task automation
Toyota Motors, Torrance CA (Infosys, USA) May 2015 – Sep 2015
Technology Lead (Database Administrator)
Project description :
In the Toyota Motors database production support group, my role is to maintain production databases, support and fix production database issues, and migrate databases to newer versions.
● Migrated Databases from Old version to latest Versions
● Proficient in performance monitoring in DB2 and handling DB2 TSM backups
● Good Knowledge of AIX operating system and handling DB2 databases on AIX
● Expertise in using DB2 Movement Utilities like Export, Import, Load and db2move
● Experience in using DB2 Maintenance Utilities like Reorgchk, Reorgs and Runstats
● Performed SQL Query Tuning using db2explain and db2exfmt tools
● Troubleshooting database issues using db2diag, db2pd, db2top, Snapshot and event monitors
Harley-Davidson (Infosys, Hyd, IN) Aug 2011 – Apr 2015
Technology Lead -Database Administrator
● Performed Database Administration activities like Backup, Restore, Runstats and Reorg
● Managed a team of six people, provided technical guidance and knowledge transfer, and implemented projects
● Proficient in performance monitoring in DB2 and handling DB2 TSM backups
● Proficient in installing, configuring, and managing IBM DB2 in highly available clustered environments
● Worked closely with the application development team and ran data creation and data loads
● Proficient in the installation of DB2 pureScale software for a database cluster
● Migrated an existing DB2 9.7 database system to a DB2 pureScale clustered environment
● Configured database manager and database member options for a DB2 pureScale cluster
CNA Insurance (Capgemini, Hyd, IN) Sep 2010 – Aug 2011
Sr. Database Administrator
● Managed SAP and non-SAP databases: 120 instances and 140 databases
● Created and monitored users, authorities, and responsibilities, and managed passwords
● Enhanced application performance by running RUNSTATS, REORGCHK, and REORG utilities
● Proficient in performance monitoring in DB2 and handling DB2 TSM backups
● Coordinated with user departments and attended to their requirements for smooth functioning
● Upgraded instances regularly with the latest fix packs and alternate fix packs
Honda of America Manufacturing (Wipro Technologies, Hyd, IN ) Mar 2008 – Sep 2010
Database Administrator
● Created DB2 objects like table spaces, tables, indexes and views
● Tuned Database configuration parameters and enhanced performance
● Performed administration tasks such as database backup, restore, RUNSTATS, and REORG
● Improved performance of slow-running queries and jobs through query tuning
● Migrated databases from older versions to the latest version