RAM SAREDDY
Lead Database Administrator
Ph No: 571-***-****
Email: **********@*****.***
SUMMARY
20+ years of IT experience, including 8 years with MongoDB on-premises (brownfield) and on AWS EC2.
8 years of AWS experience across MongoDB Atlas, EC2, and Aurora PostgreSQL.
12 years of strong DB2 UDB DBA experience on Linux, UNIX, and Windows.
AWS Certified Solutions Architect and SysOps Administrator.
Experience in database migrations using the AWS DMS tool from MSSQL, DB2, and Sybase to AWS Aurora PostgreSQL.
Worked on AWS Aurora database builds and MongoDB Atlas databases.
Worked on MongoDB migrations from EC2 and on-premises (brownfield) environments to MongoDB Atlas.
IBM Certified DB2 UDB (v8.1 and v10.1) DBA with fifteen years of DB2 database administration experience.
Solid experience with AIX and shell scripting (ksh, Perl, awk, sed, vi).
Excellent knowledge and experience in database, application, and system tuning.
Experience in logical and physical design of OLTP and data warehouse databases.
Experience in setting up high-availability solutions such as HADR for DB2 databases.
Expertise in database performance tuning and monitoring.
Excellent analytical, communication and leadership skills.
Ability to work independently with minimum direction.
TECHNICAL SKILLS
RDBMS: DB2 (UDB and Mainframe), Oracle 12c, ParAccel/Matrix (5.1.3.4), Hadoop (2.4), and PostgreSQL
NoSQL Databases: MongoDB 3.6, 4.0, 4.4, and 5.0
Operating Systems: AIX/Linux 5.2/7.9, Red Hat Linux 7.9, MVS/ESA, OS/390
Tools and Utilities: MongoDB utility tools, MongoDB Compass, Studio 3T, AWS DMS, and Delphix
Languages: COBOL/370, COBOL85, C, FORTRAN, JCL, SQL, Shell Scripts
Protocols: TCP/IP
Freddie Mac, VA Aug 2017-Present
Title: Senior Database Administrator
Role: Database administration
Job responsibilities included:
Aurora PostgreSQL
Built RDS Aurora PostgreSQL database clusters per application team requests.
Built Aurora Global Databases and tested both planned and unplanned failovers.
Upgraded database stacks whenever CloudFormation template changes occurred.
Built database clusters through the Jenkins pipeline and the AWS CLI, as sketched below.
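A minimal AWS CLI sketch of such a build (cluster identifier, engine version, instance class, and security group are illustrative placeholders, not actual Freddie Mac values):

    # Create the Aurora PostgreSQL cluster
    aws rds create-db-cluster \
        --db-cluster-identifier app-aurora-cluster \
        --engine aurora-postgresql \
        --engine-version 13.7 \
        --master-username dbadmin \
        --master-user-password "$MASTER_PW" \
        --vpc-security-group-ids sg-0123456789abcdef0
    # Add a writer instance to the new cluster
    aws rds create-db-instance \
        --db-instance-identifier app-aurora-cluster-1 \
        --db-cluster-identifier app-aurora-cluster \
        --db-instance-class db.r5.large \
        --engine aurora-postgresql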
Performed database migrations using export/import, backup and restore, or the AWS DMS tool: MSSQL to RDS Aurora PostgreSQL, brownfield Sybase to brownfield PostgreSQL, and Aurora PostgreSQL to Aurora PostgreSQL (non-prod and production).
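For the DMS path, task creation looks roughly like this (the ARNs and the table-mapping file are placeholders; the actual endpoints and replication instance were defined per migration):

    # Full-load-plus-CDC replication task, e.g. MSSQL to Aurora PostgreSQL
    aws dms create-replication-task \
        --replication-task-identifier mssql-to-aurora \
        --source-endpoint-arn arn:aws:dms:...:endpoint:SRC \
        --target-endpoint-arn arn:aws:dms:...:endpoint:TGT \
        --replication-instance-arn arn:aws:dms:...:rep:INST \
        --migration-type full-load-and-cdc \
        --table-mappings file://table-mappings.json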
Created Aurora PostgreSQL database clusters from Aurora cluster snapshots.
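A sketch of the snapshot-based build (identifiers are placeholders; a restored cluster starts with no instances, so one is added):

    aws rds restore-db-cluster-from-snapshot \
        --db-cluster-identifier app-aurora-restored \
        --snapshot-identifier app-aurora-snap-2024-01-01 \
        --engine aurora-postgresql
    aws rds create-db-instance \
        --db-instance-identifier app-aurora-restored-1 \
        --db-cluster-identifier app-aurora-restored \
        --db-instance-class db.r5.large \
        --engine aurora-postgresql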
Deleted manual backups of RDS Aurora PostgreSQL databases to reduce costs.
Deleted read replicas in non-prod to reduce costs.
Troubleshot connectivity and performance issues.
Set up the process to restore from prod to non-prod, masking sensitive data with the Delphix tool.
Used S3 buckets to transfer files.
Trained team members and application teams on the build, migration, and database refresh processes.
Received Above and Beyond and Round of Applause awards for supporting the end-to-end migration from MSSQL Server to Aurora PostgreSQL in the goldfield (AWS) environment, plus many Fist Bump awards.
MongoDB
Built MongoDB Atlas non-sharded and sharded clusters per application team requests, including project creation, private endpoints, firewall openings, adding ports to security groups, and HashiCorp role creation.
Troubleshot issues and improved query performance.
Worked on database migrations from brownfield to goldfield.
Upgraded MongoDB Atlas clusters from 4.0 to 4.4 and from 4.4 to 5.0: around 110 non-prod and 25 production database clusters.
Converted a huge collection to a sharded collection to distribute the data across all shards and improve performance, as sketched below.
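A mongosh sketch of the conversion (database name, collection, and shard key are illustrative; the real key was chosen from the collection's access pattern):

    # Enable sharding, pre-create the hashed index, then shard the collection
    mongosh "$ATLAS_URI" --eval '
      sh.enableSharding("appdb");
      db.getSiblingDB("appdb").bigColl.createIndex({ customerId: "hashed" });
      sh.shardCollection("appdb.bigColl", { customerId: "hashed" });
    '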
Resynced the sharded cluster to reclaim fragmented space after converting non-sharded collections to sharded; reclaimed around 15 TB.
Worked on POCs: Antiunity to read oplog data, Dremio to connect to MongoDB Atlas using x509 certificates, and Online Archive to keep aged data in S3.
Implemented scripts against MongoDB Atlas to create static (local) users, generate cluster/database inventory reports, count database roles, and pause/unpause clusters during off hours.
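The pause/unpause piece, sketched against the Atlas Admin API v1.0 (project ID, cluster name, and API key pair are placeholders):

    # Pause a cluster during off hours; send { "paused": false } to resume
    curl --user "$ATLAS_PUB_KEY:$ATLAS_PRIV_KEY" --digest \
         --header "Content-Type: application/json" \
         --request PATCH \
         --data '{ "paused": true }' \
         "https://cloud.mongodb.com/api/atlas/v1.0/groups/$PROJECT_ID/clusters/$CLUSTER_NAME"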
Modified CloudFormation templates and pushed code to the S3 bucket using the Bitbucket Git repository.
Set up Online Archive for MongoDB Atlas clusters.
Encrypted MongoDB Atlas projects and clusters using AWS KMS keys, rotating the keys every three months.
Scaled cluster resources up during heavy data loads and back down when bulk loads completed.
Set up Ops Manager with backup databases and application databases.
Converted IAM user-based encryption to IAM role-based encryption for MongoDB Atlas projects and clusters with AWS KMS keys.
Worked on the ransomware project to download MongoDB Atlas database backups to EC2 and send them to S3 buckets, roughly as sketched below.
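A minimal sketch of that flow (connection URI, paths, and bucket name are placeholders):

    # Dump the Atlas cluster onto the EC2 host, then ship it to S3
    STAMP=$(date +%F)
    mongodump --uri "mongodb+srv://backupuser:$PW@cluster0.example.mongodb.net" \
              --out /backups/cluster0-$STAMP
    tar czf /backups/cluster0-$STAMP.tar.gz -C /backups cluster0-$STAMP
    aws s3 cp /backups/cluster0-$STAMP.tar.gz s3://example-db-backups/cluster0/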
Converted replica sets to sharded clusters manually and using Ops Manager.
Changed EC2 instance types, increased EBS volumes, and created EBS volumes and file systems on AWS.
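Typical commands for those changes (instance/volume IDs, instance type, and size are placeholders):

    # Resize an instance (stop it first), then start it back up
    aws ec2 stop-instances --instance-ids i-0abc123
    aws ec2 modify-instance-attribute --instance-id i-0abc123 \
        --instance-type '{"Value": "r5.2xlarge"}'
    aws ec2 start-instances --instance-ids i-0abc123
    # Grow an EBS volume online, then grow the file system on it
    aws ec2 modify-volume --volume-id vol-0def456 --size 1000
    xfs_growfs /data            # XFS; use resize2fs for ext4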
Converted ext4 file systems to XFS per MongoDB recommendations for better performance and swapped the file systems.
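A hedged sketch of one such swap (device names and mount points are placeholders; mongod stays down only for the final sync and remount):

    mkfs.xfs /dev/xvdf                          # new EBS volume gets XFS
    mkdir -p /data_xfs && mount /dev/xvdf /data_xfs
    rsync -a /data/ /data_xfs/                  # initial copy while mongod runs
    systemctl stop mongod
    rsync -a --delete /data/ /data_xfs/         # final catch-up sync
    umount /data_xfs && mount /dev/xvdf /data   # update /etc/fstab to match
    systemctl start mongod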
Created EFS file systems for MongoDB backups and set up Ops Manager backups.
Supported OS patching and decommissioned EC2 instances.
Technology Ventures Jul 2015-Jul 2017
Client: Freddie Mac VA
Title: Senior Database Administrator
Role: Database administration
Job responsibilities included:
Built MongoDB non-sharded and sharded clusters on AWS EC2 per application team requests, including cluster and database creation, firewall openings, and adding ports to security groups.
Troubleshot issues and improved query performance.
Worked on database migrations from brownfield to goldfield EC2.
Upgraded MongoDB clusters from 3.6 to 4.0.
Wrote a mongodump script that excludes the audit collection, saving significant processing time and restore space in non-prod.
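The exclusion itself is a single mongodump option (host, database, and collection name are placeholders; --excludeCollection requires --db):

    mongodump --host prod-rs0/mongo1.example.com:27017 \
              --db appdb --excludeCollection audit \
              --out /backups/appdb-$(date +%F)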
Implemented directoryPerDB.
Converted arbiter nodes to data nodes and vice versa.
Converted non-sharded clusters to sharded clusters in the brownfield environment.
Offloaded a 20 TB collection, moving the data to another cluster using mongo mirroring.
Built a workaround restore process for application deployments, to recover data without downtime in case of deployment failures.
Set up the AutoSys scheduler to take backups from production and restore them into non-prod during nightly processing.
Modified CloudFormation template code and pushed it to S3 using the Bitbucket Git repository.
Created views that exclude sensitive fields and granted access to the views instead of the base collections, as sketched below.
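A mongosh sketch of such a view (names and the excluded fields are illustrative):

    # Publish a view that hides sensitive fields; grants then target the view
    mongosh "mongodb://mongo1.example.com:27017/appdb" --eval '
      db.createView("customers_masked", "customers",
                    [ { $project: { ssn: 0, dob: 0, accountNumber: 0 } } ]);
    '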
Encrypted MongoDB database file systems using Vormetric.
Set up Ops Manager with backup and application databases.
Rotated LDAP and agent ID passwords.
Converted replica sets to sharded clusters manually and using Ops Manager.
Changed EC2 instance types, increased EBS volumes, and created EBS volumes and file systems on AWS.
Converted ext4 file systems to XFS per MongoDB recommendations for better performance and swapped the file systems.
Created EFS file systems for MongoDB backups and set up Ops Manager backups.
Supported Unix patching during my brownfield on-call rotation.
Built RDS Aurora PostgreSQL database clusters per application team requests.
Upgraded database stacks whenever CloudFormation template changes occurred.
Decommissioned RDS PostgreSQL databases.
Supported DB2 UDB (LUW) database administration.
Macy’s Systems & Technologies, Inc. Atlanta, GA Jun 2014-Jul 2015
Title: System Specialist, Database
Role: DB2 LUW Database Administration
Job responsibilities included:
Managed production and non-production Macy’s e-commerce databases. Worked as a lead database administrator with expertise in DB2 relational database systems on UNIX/Linux and Windows operating systems (v9.7, 10.1, and 10.5).
Implemented DB2 BLU databases for EDW.
Implemented database access for Active Directory users.
Developed shell scripts (Korn shell, Perl, awk, sed, vi) on Unix/Linux to monitor database health and performance, listen for the heartbeat of HA systems, and drive automatic failover through TSA; a monitoring sketch follows.
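A minimal ksh sketch of the heartbeat/health check (database name and alert address are illustrative placeholders):

    #!/bin/ksh
    # Alert the on-call DBA if HADR has fallen out of PEER state
    DB=MYDB
    STATE=$(db2pd -db $DB -hadr | awk '/HADR_STATE/ {print $3}')
    if [[ "$STATE" != "PEER" ]]; then
        echo "HADR state on $DB is ${STATE:-unknown}" | \
            mailx -s "HADR alert: $DB" dba-oncall@example.com
    fi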
Set up TSM (Tivoli Storage Manager) for disaster recovery planning, covering database and OS-level backup and restore procedures. Set up high-availability solutions such as HADR and HACMP for the databases.
Used DB2 data compression to save space (about 50%) on some tables, and worked with IBM tools; the enablement is sketched below.
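For reference, enabling compression on a table is a short sequence in the DB2 CLP (schema and table names are placeholders; the reorg rebuilds the compression dictionary):

    db2 "ALTER TABLE store.sales COMPRESS YES"
    db2 "REORG TABLE store.sales RESETDICTIONARY"
    # refresh statistics so the optimizer sees the new page counts
    db2 "RUNSTATS ON TABLE store.sales WITH DISTRIBUTION AND INDEXES ALL"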
Involved in all phases of the software development life cycle for various projects: requirements gathering, application development, database performance work, and implementation.
Worked with the network engineering teams to set up and fine-tune the network components needed for the DB2 database servers, and with the storage teams to set up the SAN storage (IBM/EMC technology) needed for the DB2 database systems.
Performed capacity planning, file system layout planning, and logical data modeling from a DB2 UDB perspective, and created instances/databases and database objects (physical modeling).
Created, configured, and tuned new and existing instance/database objects (9.7, 10.1, 10.5).
Installed InfoSphere CDC, Management Console, and Access Server. Set up CDC to replicate data across database servers after creating datastores/users, subscriptions, and table mappings.
Implemented database encryption (10.5) to protect backup images from being restored to other environments without the key and to encrypt tablespace containers, so no user sees clear data in the containers.
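A sketch of the 10.5 native encryption setup (keystore path and database name are placeholders):

    # Point the instance at a PKCS#12 keystore, then create the database encrypted
    db2 "UPDATE DBM CFG USING KEYSTORE_TYPE PKCS12 KEYSTORE_LOCATION /db2/keystore/store.p12"
    db2 "CREATE DATABASE MYDB ENCRYPT"
    # Backups of MYDB cannot be restored on a server that lacks the master key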
Worked on a PCI project requiring all developers to access databases via their RACF ID/LDAP instead of locally defined application IDs.
Set up database auditing to capture (and, if needed, prevent) unauthorized transactions at the database firewall for auditing purposes.
Set up the databases for monitoring through OPM.
Maintained and troubleshot existing shell scripts in AIX and Linux environments.
Migrated databases from AIX to Linux.
The Home Depot Inc, Atlanta, GA Sep 2010-Jun 2014
Title: Lead Database Administrator
Role: Lead UDB DB2 DBA
Job responsibilities included:
Served as team lead in enabling HADR for the Documentum infrastructure, coordinating with the Unix engineering and TSM teams to set up Tivoli System Automation and HADR on the Documentum database servers.
The Documentum project also involved moving databases from AIX servers to Linux servers. Because DB2 does not support restoring an AIX backup image on a Linux server, used the load-from-cursor technique to move the data after creating the database objects on Linux from the AIX data definitions; see the sketch below.
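The cursor technique, sketched in DB2 CLP (database alias, credentials, and table names are placeholders):

    # Declare a cursor against the remote AIX-hosted database...
    db2 "DECLARE mycurs CURSOR DATABASE AIXDB USER loaduser USING $LOAD_PW \
         FOR SELECT * FROM appschema.orders"
    # ...then load through it into the pre-created table on the Linux server
    db2 "LOAD FROM mycurs OF CURSOR INSERT INTO appschema.orders NONRECOVERABLE"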
Helped the team navigate the IRB processes (QA results, implementation plans) and approvals, and assisted the application team with TSA failover testing.
Refreshed the lower-lifecycle databases with production copies, was involved in end-to-end testing of the process, upgraded the databases to DB2 V9.7 Fix Pack 5, and upgraded many DB2 Connect instances.
Explored the DB2 High Performance Unload tool and documented its standard usage in the THD environment, conducting Lunch & Learns to walk other team members through the tool and procedure. The tool and procedure were put into action when the Market Max and Ab Initio snapshot databases got corrupted, saving around 40+ hours.
Helped design the file systems to the new standards for best performance on the ITIM servers, and built the four database servers from start to finish, including HADR and TSM setup.
Supported the AIX migration from POWER5 to POWER7. This platform includes critical applications such as My Apron and Bonus Calc; the effort replaced the existing physical and partitioned hosts with virtual machines.
Worked on many service center tickets, change requests, and IBM PMRs, and installed DBI on database servers to facilitate monitoring.
Gave presentations to the team on DB2 9.5 concepts (e.g., partitioning on DB2 UDB) and trained contractors on HD.com.
Supported EDW warehouse projects and worked on the month-end recast process for sales moved to align with the new hierarchy, for example: (a) a store moves to another district; (b) a SKU moves from one subclass to another. New members (such as new SKUs and stores) are still added with their associated sales.
Played a pivotal role in the DB purge vendor tool for ITIM (IBM Tivoli Identity Manager); the effort reclaimed space using purge/reorg strategies, improving application performance.
Yash Solutions, GA Sep 2008-Jul 2010
Client: The Home Depot Inc, Atlanta, GA
Title: Senior Database Administrator
Role: Senior UDB DB2 DBA for Enterprise Data Warehouse
Job responsibilities included:
Supported multiple UDB instances (V8.2 and V9.5), including very large databases, for the enterprise data warehouse.
Client: IBM Global Services, Atlanta, GA May 2001-Sep 2008
Title: Senior Database Administrator
Role: Sr. UDB DB2 DBA
Job Responsibilities Included:
Performed complex database performance tuning, security administration, backup/recovery strategy, and monitoring procedures to maximize database availability; this included monitoring and managing enterprise SQL replication, HADR solutions, and the general health of database servers, data center clients, and other DBMS technologies.
Supported various IBM clients and IBM internal projects.
Planned, implemented, and supported various OLTP and data warehouse projects at IBM.
Worked as a team player with sound knowledge of databases, applications, operating systems, and SAN.
EDUCATION/CERTIFICATIONS
Bachelor of Engineering (Mechanical) from Bangalore University, India
AWS Certified Solutions Architect and SysOps Administrator
IBM Certified Database Administrator - DB2 UDB v8.1 and 10.1 for Linux, UNIX, and Windows