
Google Cloud Data Architect

Location:
San Diego, CA, 92101
Posted:
February 20, 2024


Mukesh Sharma

San Diego, CA *****

Cloud Database Migration and Modernization Specialist / Big Data Architect

Professional Summary

●Experienced Oracle Cloud Architect/ERP DBA with 15+ years of experience managing, configuring, and optimizing Oracle databases and ERP systems. Proficient in designing, implementing, and managing high-performance database environments that support business-critical applications. Skilled in Oracle RAC, Data Guard, GoldenGate, and Cloud Control. Excellent problem-solving, communication, and project management skills. Results-oriented Database/Big Data Architect with extensive experience architecting large-scale database environments and mission-critical systems.

●Skilled in Google Cloud, AWS, and OCI, and in Oracle, MySQL, and PostgreSQL, with multiple certifications. Expertise in database engineering, migration, ETL/ELT, and data lake design. Programming skills in Python and shell scripting. Proven track record of successful projects and team management.

●Collaborate with engineering and product teams to resolve issues and discuss product needs. Result-oriented, self-motivated team player with the ability to work independently. Manage and mentor teams of on-site and off-site DBAs across global locations.

●Google Cloud Certified Professional Database Engineer

●Google Cloud Certified Professional Cloud Architect

●Google Cloud Certified Data Engineer

●AWS Certified Database – Specialty

●Benchmark GCP Spanner with YCSB Tools (accenture.com)

●Architecting with Google Kubernetes Engine

●Oracle Certified Professional (DBA)

●MapR Certified Professional (https://verify.skilljar.com/c/ihd75dj3wujt)

●AWS Certified Solutions Architect (Validation Number 9VTC0GD2DJQQ1M9R, http://aws.amazon.com/verification)

●MongoDB for DBAs and MongoDB Advanced Deployment and Operations

Core Technical Skills

●Database Systems: Oracle ERP, MySQL, PostgreSQL, Cassandra, MongoDB, Cloud SQL, Redshift, RDS, DynamoDB

●Oracle Skills: Oracle Grid Infrastructure 19c, 12c, 11g, 10g; Oracle Real Application Clusters (RAC); Oracle Data Guard; Oracle GoldenGate; Oracle Cloud Control; Oracle Enterprise Manager (OEM) 12c/13c

●Big Data Ecosystem: BigQuery, Bigtable, Hadoop ecosystem, Dataproc, Pub/Sub, Dataflow, Data Fusion, Dataprep, EMR, MapR

●Cloud Technologies: Google Cloud, AWS, OCI

●Languages: Shell scripting, Python

●Operating Systems: Linux, Solaris

●Orchestration: GKE, Anthos

Professional Experience

Accenture Sep 2022 - Current

Sr Manager Database Administration Architect

●Migrated MySQL databases from RDS and on-premises environments into Cloud SQL using Database Migration Service with zero downtime.

●Worked on Oracle to PostgreSQL and Oracle to MySQL database migrations into Google Cloud Platform.

●Working on securing Cloud SQL for PostgreSQL databases using customer-managed encryption keys (CMEK). Configured and tested Cloud SQL IAM database authentication.

●Configured replication and enabled point-in-time recovery for Cloud SQL for PostgreSQL.

●Worked on Cloud Spanner schema design and query plan analysis.

●Created and populated Bigtable instances; designed and queried Bigtable schemas; performed streaming and offline HBase migrations to Bigtable. Set up monitoring and managed Bigtable health and performance.

●Deployed SQL Server and MySQL databases on a Google Kubernetes Engine (GKE) cluster.

●Deployed and used Google Cloud's Oracle on Bare Metal Solution. Set up Oracle Data Guard between on-premises databases and GCP Oracle Bare Metal.

●Worked on online data migration to BigQuery using Striim, Datastream, and Data Fusion.

●Working on setting up and running a MongoDB database in Kubernetes with StatefulSets.

●Leveraged the Autoscaler tool for Cloud Spanner to achieve workload elasticity.

●Used Terraform to provision databases and other infrastructure.
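The Bigtable schema work above turns largely on row-key design. As a minimal sketch (all names hypothetical, not from any specific project), a composite key with a reversed timestamp keeps the newest rows for an entity first under Bigtable's lexicographic row ordering:

```python
# Bigtable sorts rows lexicographically by key, so a composite key of
# device_id + reversed timestamp keeps each device's newest rows first.
MAX_TS = 2**63 - 1  # assumption: timestamps fit in a signed 64-bit int


def make_row_key(device_id: str, ts: int) -> str:
    """Build a 'device#reversed_ts' row key, zero-padded so the numeric
    part stays lexicographically ordered."""
    reversed_ts = MAX_TS - ts
    return f"{device_id}#{reversed_ts:019d}"


# Sorting as Bigtable would: the key for the newest reading (ts=300) comes first.
keys = sorted(make_row_key("sensor-1", ts) for ts in (100, 200, 300))
```

With this layout, a prefix scan on `sensor-1#` returns the most recent readings first without a sort step.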

Qualcomm Jan 2020 – Sep 2022

Sr Staff Database Administrator/Architect

●Managed and administered Oracle databases and ERP systems for the company's global operations. Installed, configured, and maintained Oracle software, including Oracle RAC, Data Guard, and GoldenGate. Ensured database and application availability, reliability, and security through proactive monitoring, maintenance, and troubleshooting. Performed database tuning and optimization to ensure high performance and scalability. Implemented backup and recovery strategies to ensure business continuity and disaster recovery. Collaborated with development, infrastructure, and business teams to support database and application requirements.

●Managed and administered Oracle databases and applications using OEM for the company's global operations. Installed, configured, and maintained OEM software, including OEM 12c/13c/14c. Monitored and managed database and application performance using OEM to ensure high availability, reliability, and security. Implemented database tuning and optimization for high performance and scalability. Performed backup and recovery procedures using OEM to ensure business continuity and disaster recovery. Collaborated with development, infrastructure, and business teams to support database and application requirements. Planned and implemented database and application upgrades, patches, and migrations using OEM.

●Worked with data engineering technologies across RDBMS (Oracle, MySQL, Postgres), NoSQL (HBase, MongoDB, Cassandra, MapR-DB), and big data platforms (MapR, Cloudera), instrumenting, administering, and upgrading them to stable releases. Led many data migration projects from on-premises to cloud providers such as Google Cloud and AWS.

●Played a lead role in building and architecting multiple data pipelines and end-to-end ETL and ELT processes for data ingestion and transformation in GCP, coordinating tasks among the team. Designed and architected the various layers of the data lake. Used REST APIs with Python to ingest data from external sites into BigQuery. Built a program with Python and Apache Beam and executed it in Cloud Dataflow to run data validation between raw source files and BigQuery tables.

●Worked on data migrations across heterogeneous database platforms using AWS Database Migration Service. Hands-on experience with AWS RDS, EMR, Redshift, Kinesis streams, etc., providing optimal solutions per business needs. Experienced with cloud machine learning tools such as AutoML and pre-defined machine learning models.

●Worked on data cleansing using Cloud Dataprep and data visualization using tools like Data Studio and Tableau. Extensive experience in IT data analytics projects; hands-on experience migrating on-premises ETLs to Google Cloud Platform (GCP) using cloud-native tools such as BigQuery, Dataproc, Google Cloud Storage, and Composer.
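The Dataflow validation described above compares raw source files against loaded BigQuery tables. A simplified, single-machine sketch of the core check (a Beam pipeline would distribute the same fingerprint-and-diff logic; the file contents and column names here are hypothetical):

```python
import csv
import hashlib
import io


def row_fingerprint(row: dict) -> str:
    """Stable hash of a row, independent of column order."""
    canonical = "|".join(f"{k}={row[k]}" for k in sorted(row))
    return hashlib.sha256(canonical.encode()).hexdigest()


def validate(source_rows, target_rows):
    """Return fingerprints present on only one side (two empty sets mean a clean load)."""
    src = {row_fingerprint(r) for r in source_rows}
    tgt = {row_fingerprint(r) for r in target_rows}
    return src - tgt, tgt - src


# Hypothetical sample: a raw CSV extract vs. rows read back from the target table.
raw = list(csv.DictReader(io.StringIO("id,name\n1,alice\n2,bob\n")))
loaded = [{"id": "1", "name": "alice"}, {"id": "2", "name": "bob"}]
missing, extra = validate(raw, loaded)
```

Hashing rows rather than comparing them field by field keeps the shuffle payload small when the same diff runs at Dataflow scale.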

Tips July 2019 – Dec 2019

Big Data Admin

●Performed pre-installation and post-installation benchmarking and performance testing. Installed, migrated, and upgraded multiple MapR clusters.

●Designed and implemented the Disaster Recovery mechanism for data, eco-system tools and applications

●Orchestrated data and service High availability within and across the clusters

●Performed multiple rigorous DR testing

●Training, mentoring and supporting team members

●Experienced in moving services (re-distribution) from one host to another within the cluster to help secure the cluster and ensure high availability of the services.

●Implemented MapR Streams to facilitate real-time data ingestion to meet business needs.

●On-call availability for rotation on nights and weekends.

●Upgraded MapR from 5.1 to 6.1.0.

●Good knowledge of Hadoop cluster connectivity and security.

●Experienced in MapR-DB, Spark, Elasticsearch, and Zeppelin.

Virtusa (Etouch system Corporation) Feb 2019 – Aug 2019

GCP Data Engineer

●Worked on a project at Google Cloud implementing data pipelines using Cloud Pub/Sub and Dataflow for database migration from on-premises legacy databases to BigQuery and Bigtable.

●Set up, documented, and migrated databases from Oracle to MySQL using Oracle GoldenGate.

●Set up new real-time and batch data loads from Cloud SQL to BigQuery using Data Fusion and federated BigQuery queries.

●Worked on Dataproc (Google-managed Hadoop) clusters for on-premises cluster data migration.

●Documented and tested Oracle database migration from on-premises to GCP using Oracle Data Guard.

●Performed Oracle schema migrations using Data Pump and dbms_file_transfer.

●Experienced in installing and configuring monitoring tools.

●Capacity planning, architecting, and designing Hadoop clusters from scratch. Designed service layouts with HA enabled.
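Replication loads like the GoldenGate and Data Fusion work above reduce to applying ordered change records against a target keyed by primary key. A toy in-memory sketch of that apply step (the record shapes are assumptions for illustration, not GoldenGate's actual trail format):

```python
# Simplified sketch of applying CDC change records to a target table
# keyed by primary key -- the core operation behind replication loads.
def apply_changes(target: dict, changes: list) -> dict:
    """Apply (op, key, row) change records in order; last change wins."""
    for op, key, row in changes:
        if op in ("INSERT", "UPDATE"):
            target[key] = row          # upsert
        elif op == "DELETE":
            target.pop(key, None)      # ignore deletes for absent keys
    return target


# Hypothetical change stream against a one-row table.
table = {1: {"name": "alice"}}
changes = [
    ("INSERT", 2, {"name": "bob"}),
    ("UPDATE", 1, {"name": "alicia"}),
    ("DELETE", 2, None),
]
table = apply_changes(table, changes)
```

Applying changes in commit order is what makes the upsert-plus-delete loop safe; out-of-order application would need per-row sequence numbers.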

Iflow Inc Oct 2018 – Feb 2019

Database Architect

●Worked on a project at Google Cloud implementing data pipelines using Cloud Pub/Sub and Dataflow for database migration from on-premises legacy databases to BigQuery and Bigtable.

●Set up, documented, and migrated databases from Oracle to MySQL using Oracle GoldenGate.

●Set up new real-time and batch data loads from Cloud SQL to BigQuery using Data Fusion and federated BigQuery queries.

●Worked on Dataproc (Google-managed Hadoop) clusters for on-premises cluster data migration.

●Documented and tested Oracle database migration from on-premises to GCP using Oracle Data Guard.

●Tested and documented Oracle RMAN backup and recovery on Google Cloud Storage; the process is scripted and runs via cron job.

●Performed Oracle schema migrations using Data Pump and dbms_file_transfer.

●Experience in installing and configuring monitoring tools.

●Capacity planning, architecting, and designing Hadoop clusters from scratch.

●Designing service layout with HA enabled

●Migrated entire Oracle and Hive databases to BigQuery and used Power BI for reporting.

●Built data pipelines in Airflow on GCP for ETL jobs using different Airflow operators.

●Experienced in GCP Dataproc, GCS, Cloud Functions, and BigQuery.

●Experience in moving data between GCP and Azure using Azure Data Factory.

●Experienced in building Power BI reports on Azure Analysis Services for better performance.

●Used the Cloud Shell SDK in GCP to configure services such as Dataproc, Cloud Storage, and BigQuery.

●Coordinated with the team and developed a framework to generate daily ad-hoc reports and extracts from enterprise data in BigQuery.

●Designed and coordinated with the Data Science team in implementing advanced analytical models in the Hadoop cluster over large datasets.
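The Airflow pipelines above wire ETL tasks into a DAG; stripped of operators, the scheduling core is a topological order over task dependencies. A stand-alone sketch with hypothetical task names (uses the standard library's graphlib, Python 3.9+; a real DAG would map each task to an Airflow operator):

```python
from graphlib import TopologicalSorter

# Hypothetical ETL task graph mirroring how an Airflow DAG wires operators:
# each key lists the tasks that must finish before it may run.
deps = {
    "transform": {"extract"},
    "load": {"transform"},
    "validate": {"load"},
}

# Resolve a valid execution order; Airflow's scheduler does the same
# resolution, but runs ready tasks concurrently instead of in sequence.
order = list(TopologicalSorter(deps).static_order())
```

Because this graph is a single chain, the order is fully determined; with branching dependencies, any order respecting the edges would be valid.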

Qualcomm, San Diego, CA Jan 2008 – Sept 2018

Staff DBA/MapR/Hadoop Admin

●Performed pre-installation and post-installation benchmarking and performance testing.

●Installed, migrated, and upgraded multiple MapR clusters.

●Designed and implemented the Disaster Recovery mechanism for data, eco-system tools, and applications.

●Orchestrated data and service High availability within and across the clusters

●Performed multiple rigorous DR testing.

●Training, mentoring, and supporting team members.

●Moved services (re-distribution) from one host to another within the cluster to help secure the cluster and ensure high availability of the services.

●Worked on implementing MapR Streams to facilitate real-time data ingestion to meet business needs.

●On-call availability for rotation on nights and weekends.

●Upgraded MapR from 5.1 to 6.1.0.

●Good knowledge of Hadoop cluster connectivity and security.

●Experienced in MapR-DB, Spark, Elasticsearch, and Zeppelin.

●Worked extensively on cloud database migrations: Oracle to Oracle, Oracle to MySQL, and to Redshift.

●Highly skilled in deployment, data security, and troubleshooting of applications using AWS services.

●Experienced in creating multiple VPCs with public and private subnets as per requirements and distributing them across the availability zones of the VPC.

Education & Industry Technical Certifications

●Master's in Information Technology – University of Karnataka, India

●Bachelor's in Mathematics – Himachal Pradesh University, India


