Dnyaneshwar Pendharkar
* ***** *** **** ********/Developer
*********************@*****.***
Mob: 985*******
Professional Summary
Experienced GCP Data Engineer/Developer with 7 years of expertise in data migration, data pipeline development, and cloud-based data architecture. Proficient in:
BigQuery Migrations: Extensive experience migrating data from Oracle and SQL Server to BigQuery across various environments.
ETL Processes: Skilled in designing and implementing ETL pipelines using Data Fusion and DBT (Data Build Tool).
Data Processing: Hands-on experience with BigQuery, Dataflow pipelines, and data modeling.
Cloud Orchestration: Proficient with Google Cloud Composer and Apache Airflow for managing data workflows and DAGs.
Version Control: Experienced with GIT and Bitbucket for version control, alongside using Google Cloud Storage.
Containerization: Experience working with Docker and Kubernetes for containerization.
CI/CD Pipelines: Good understanding and experience in building CI/CD pipelines.
Additional Tools: Experienced with Cloud Functions, Cloud Scheduler, and DataStudio.
Bug Tracking: Utilizes JIRA for tracking and managing bugs.
Technical Skills
Cloud Environment: Google Cloud Platform (GCP)
Scripting & Databases: Python, SQL, PL/SQL, Oracle, RDBMS
Version Control: GIT, Bitbucket
Data Processing: BigQuery, Data Fusion, Dataflow, Pub/Sub
Orchestration: Google Cloud Composer, Apache Airflow
Containerization: Docker, Kubernetes
ETL Tools: Data Fusion, DBT
Additional Tools: Cloud Run, Cloud Function, Cloud Scheduler, Data Studio
Bug Tracking: JIRA
Technology: Cloud, Data Lake, Data Warehouse, Data Modelling
Project Experience
Letitbexai Pvt Ltd
Duration: June 2024 – Present
Client: Aetna Health
Description:
Worked on data processing using Google Cloud Storage and Cloud Data Fusion.
Ensured high durability of data through managed storage in Cloud Storage buckets.
Coordinated with the team to establish and apply appropriate branching, labeling, and naming conventions using GIT source control, and analyzed and resolved merge conflicts in the codebase.
Roles:
Designed, built, secured, operated, and monitored data processing systems on Google Cloud Platform.
Achieved data accuracy and role-based controls through digitization and a cloud data lake.
Supported the cloud journey and preparation of GCP services.
Managed and resolved pipeline issues, reducing system errors.
Designed and built GCP data-driven solutions for enterprise data warehouses and data lakes using BigQuery.
Evaluated client business challenges and worked with the team to deliver the best technology solutions.
Experience working with Agile methodologies.
Lingaro India Pvt Ltd
Duration: May 2023 – June 2024
Project: Banking
Project Overview: Led the migration of data to BigQuery from various sources, including Salesforce and SQL Server. Developed and implemented comprehensive data management solutions to improve data accessibility and deliver actionable insights.
Key Contributions:
Worked with Python to create Airflow DAG scripts and supporting Python libraries.
Experienced in data modelling and RDBMS work using PL/SQL and Oracle databases.
Developed and Implemented Software Release Management plans for applications in alignment with Agile processes.
Designed and developed BigQuery queries to extract and deliver meaningful insights to stakeholders, enhancing decision-making capabilities.
Implemented ETL processes to streamline the import of data from various sources into BigQuery, optimizing data flow and accuracy.
Worked with DBT tools for data transformation, creating and managing data pipelines to ensure efficient data processing and integration.
Managed and resolved issues related to data migration and processing, including troubleshooting, error logging, and maintenance.
Collaborated with stakeholders, including Executive, Product, Data, and Design teams, to assist with data-related technical issues and support data infrastructure needs.
Managed the data lifecycle, implementing improvements in data management techniques to enhance data accessibility and efficiency.
Initiated data democratization efforts, enabling non-technical teams to effectively leverage and utilize data.
Synechron Technology
Duration: October 2021 – May 2023
Project / Client: HDFC Bank
Key Contributions:
Spearheaded the utilization of BigQuery, leading to enhanced system performance and faster query execution.
Designed, built, and secured data processing systems on Google Cloud Platform (GCP). Worked extensively with GCP services including BigQuery, Compute Engine, Cloud Storage, Cloud SQL, Cloud Load Balancing, Stackdriver Monitoring, and Cloud Deployment Manager.
Ensured high durability of data by managing storage solutions in GCP buckets. Applied data modelling techniques and used tools like Data Fusion, Composer/Airflow, Python, and shell scripting. Designed and built GCP data-driven solutions for enterprise data warehouses and data lakes.
Managed version control with GIT and Bitbucket, coordinating branching strategies, resolving code conflicts, and conducting pull requests and code reviews.
Effectively handled and resolved production issues, maintaining system stability and ensuring timely fixes.
Collaborated with clients to translate business requirements into technical solutions. Worked with Executive, Product, Data, and Design teams to support data infrastructure needs and address technical issues.
Identified and implemented internal process improvements to optimize data management and system performance. Conducted testing, scheduling, and data validation.
Provided IT solutions and technical support, ensuring data accuracy and role-based controls through digitization and cloud data lake strategies. Supported the overall cloud journey and GCP services preparation.
Gravitas Technology
Duration: February 2020 – August 2021
Client: State Bank of India
Project Title: Queue Management System (QMS)
Key Contributions:
Developed and optimized BigQuery queries to extract and deliver actionable insights to stakeholders. Implemented ETL processes to efficiently import data from various sources into the BigQuery warehouse.
Utilized Data Fusion, Composer/Airflow, Python, and shell scripting for data modelling and management.
Applied GIT branching strategies, including feature, staging, and master branches. Managed code conflicts, performed pull requests, and conducted code reviews to ensure code quality and integration.
Created automated ETL processes to streamline data loading and ensure seamless integration into the BigQuery environment.
Managed software release processes, ensuring timely and efficient deployments according to Agile practices.
Provided technical support and solutions, leveraging a strong background in Data Fusion and BigQuery to address project needs and deliver high-quality outcomes.
Aurionpro Solution
Duration: April 2017 – July 2019
Client: Karur Vysya Bank
Project Title: Check Deposit Machine (kiosk)
Key Contributions:
Developed a real-time event data collection and storage platform to support high transaction volumes and ensure timely data processing.
Managed on-premise to cloud migrations, leveraging cloud technologies to transform IT infrastructure and enhance scalability.
Created and maintained ETL processes using Google Cloud Composer to automate data loading and integration from various sources.
Designed and managed data schemas to ensure efficient data storage and retrieval.
Contributed to cloud journey and GCP services preparation, aligning with project goals and requirements.
Ensured data accuracy and implemented role-based controls through digitization and the use of cloud data lakes, enhancing data integrity and security.
Implemented GIT branching strategies, including feature, staging, and master branches. Oversaw code integration, resolved conflicts, and performed pull requests and code reviews.
Assessed client business challenges and collaborated with the team to implement optimal technology solutions while applying Agile methodologies to manage project workflows and ensure timely delivery.
Executed data modelling tasks to support project requirements and improve data management processes.
Leveraged Google Cloud Storage and Cloud Data Fusion to enhance data processing capabilities and manage large datasets efficiently.
Ensured high durability of data through robust storage solutions in Google Cloud Storage, maintaining data integrity and availability.
Designed, built, operated, secured, and monitored data processing systems within Google Cloud Platform, ensuring the reliability and security of data systems.
Managed and resolved pipeline issues, reducing system errors and improving overall data processing efficiency.
Designed and built GCP-driven solutions for enterprise data warehouses and data lakes, utilizing BigQuery for scalable and effective data analysis.
ITTI Pvt. Ltd.
Duration: February 2016 – September 2016
Project Title: My Insurance Page
Key Contributions:
Identified, designed, and implemented internal process improvements to enhance efficiency and performance.
Achieved data accuracy and role-based controls through digitization and the use of a cloud data lake, ensuring reliable data management and security.
Improved SQL queries to enhance database management and data processing capabilities within BigQuery and data lakes.
Optimized data migration to Google Cloud Platform (GCP) using Cloud Dataflow, significantly reducing data discrepancies and improving data consistency.
Utilized Data Fusion, Composer/Airflow, Python, and shell scripting to streamline data processes and support BigQuery operations.
Applied Agile methodologies to ensure effective project management and timely delivery of solutions.
Educational Qualification
Course | University/Board | Year of Passing | Class %
M.C.A. | Pune | 2015 | 62
B.C.A. | Nashik | 2011 | 63
H.S.C. | Nashik | 2008 | 64
S.S.C. | Nashik | 2006 | 66
Personal Details
Dnyaneshwar Babasaheb Pendharkar
Address: Nashik 422304
Date of Birth: 11-06-1990
Marital Status: Married
Nationality: Indian
Sex: Male
Languages Known: English, Hindi, Marathi