Post Job Free

Data Engineer Engineering

Posted: October 15, 2025



Gopi Javvaji

Data Engineer

Email: *************@*****.*** | Phone: 913-***-**** | LinkedIn

PROFESSIONAL SUMMARY

Cloud Data Engineer with experience building scalable data platforms, real-time pipelines, and analytical solutions across AWS, Azure, and GCP. Strong background in Spark, Kafka, Snowflake, Airflow, Databricks, and large-scale data processing. Experienced in ML-ready data engineering, cloud migrations, fraud detection, and high-volume ETL/ELT pipelines.

TECHNICAL SKILLS

• Cloud Platforms: Microsoft Azure, AWS (EC2, S3, IAM, Lambda), GCP (Pub/Sub, Dataflow, Vertex AI)

• Data Engineering: Apache Spark, Hive, Kafka, Kinesis, Databricks, Flink, NiFi, Informatica

• Databases & Warehousing: PostgreSQL, MySQL, MongoDB, Cassandra, HDFS, Snowflake, Redshift, BigQuery

• Programming & Scripting: Python, SQL, R, Java, SAS, Linux

• Data Modeling: Dimensional Modeling, Star Schema, Data Vault 2.0, Medallion Architecture

• BI & Analytics: Power BI, Tableau, Azure Analysis Services, Excel, Seaborn, Pandas, Matplotlib

• DevOps & CI/CD: Azure DevOps, Git, CI/CD Pipelines, ARM Templates, GitHub Actions

• Data Governance: Azure Purview, Data Quality Frameworks, Data Lineage, Security Protocols

PROFESSIONAL EXPERIENCE

VISA INC. | Azure Data Engineer | Foster City, CA | Jan 2024 – Present

• Engineered 15+ scalable ETL/ELT pipelines using Azure Data Factory to process 2TB of daily transaction and payment authorization data, integrating sources from cross-border settlements and merchant systems.

• Implemented Azure Stream Analytics with Databricks to stream 10,000+ payment events/sec into ML pipelines for real-time fraud scoring and anomaly detection using Spark ML and Azure ML endpoints.

• Implemented Azure Synapse Analytics dedicated SQL pools to support real-time fraud detection analytics, reducing query response times from 15 minutes to 45 seconds through effective partitioning strategies.

• Developed complex PySpark transformations in Azure Databricks to cleanse and aggregate transactional data for 200+ million cardholders, improving data processing efficiency for compliance reporting by 50%.
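A minimal sketch of the cleanse-and-aggregate pattern described above; this is illustrative only (column names such as txn_id, card_id, and status are assumptions, and pandas stands in for PySpark for brevity, since the DataFrame logic is analogous):

```python
import pandas as pd

# Illustrative sample data; real pipelines read from Delta tables in Databricks.
txns = pd.DataFrame({
    "txn_id": [1, 2, 2, 3, 4],
    "card_id": ["A", "A", "A", "B", "B"],
    "amount": [10.0, 25.0, 25.0, -5.0, 40.0],
    "status": ["approved", "approved", "approved", "declined", "approved"],
})

# Cleanse: drop duplicate transactions and invalid (non-positive) amounts.
clean = txns.drop_duplicates(subset="txn_id")
clean = clean[clean["amount"] > 0]

# Aggregate: per-cardholder approved spend, e.g. for compliance reporting.
spend = (
    clean[clean["status"] == "approved"]
    .groupby("card_id", as_index=False)["amount"]
    .sum()
)
print(spend)
```

In PySpark the same steps map to `dropDuplicates`, `filter`, and `groupBy(...).agg(...)` on a Spark DataFrame.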

• Automated deployment processes using Azure DevOps CI/CD pipelines and GitHub Actions to support BI and ML teams, reducing deployment errors by 45% and cutting release time from 4 hours to 30 minutes.

• Established Azure Data Lake Storage Gen2 with a medallion architecture, organizing raw, enriched, and curated data layers to power enterprise-level customer spending analytics and merchant insights.

• Implemented comprehensive security protocols meeting PCI-DSS and SOC compliance, including Azure Active Directory authentication and role-based access controls for sensitive financial data.

• Optimized data storage and archival strategies, reducing Azure storage costs by 25% while maintaining 99.9% data availability for critical payment processing applications.

Bajaj FinServ | Data Engineer | India | Jun 2021 – Apr 2023

• Built and managed ETL processes using Airflow and Kafka that integrated loan application, customer, and credit risk data from 15+ regional business units, reducing manual data processing by 60%.

• Developed Python, PySpark, and Pandas automation scripts for processing and validating customer financial data, decreasing creditworthiness analysis time from 8 hours to 45 minutes for the underwriting team.

• Optimized SQL Server databases supporting consumer finance operations, improving query performance for predictive modeling reports by 35% through strategic indexing and query optimization.

• Migrated 12TB of customer and financial product data to AWS (S3, SageMaker, Redshift, Glue, Lambda), creating a centralized data lake to support machine learning workflows and improving data accessibility for 50+ analysts and risk modelers.

• Created Power BI dashboards visualizing key performance indicators (KPIs) for loan disbursement, portfolio health, and customer demographics, enabling management to track business growth and risk exposure.

• Implemented data validation frameworks ensuring 99.8% accuracy for financial reporting data, complying with internal audit and regulatory standards.
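A rule-based validation framework of the kind described can be sketched as follows (the rules, field names, and sample records are illustrative assumptions, not the actual production checks):

```python
# Minimal sketch of a rule-based data validation framework.
def validate(records, rules):
    """Return the fraction of records that pass every rule."""
    passed = sum(1 for r in records if all(rule(r) for rule in rules))
    return passed / len(records)

# Illustrative rules: non-null, non-negative amount; non-empty customer ID.
rules = [
    lambda r: r.get("amount") is not None and r["amount"] >= 0,
    lambda r: bool(r.get("customer_id")),
]

records = [
    {"customer_id": "C1", "amount": 120.0},
    {"customer_id": "C2", "amount": 0.0},
    {"customer_id": "", "amount": 50.0},    # fails: missing customer ID
    {"customer_id": "C4", "amount": None},  # fails: null amount
]

accuracy = validate(records, rules)
print(f"pass rate: {accuracy:.1%}")
```

In practice the pass rate would be tracked per pipeline run and alerted on when it falls below the agreed threshold (e.g. 99.8%).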

• Designed dimensional data models supporting business intelligence reporting for customer 360-degree views and cross-selling opportunity analysis.

EDUCATION & CERTIFICATIONS

• Master of Science in Computer Science University of Central Missouri Warrensburg, MO

• Bachelor of Engineering in Electronics and Communication V R Siddhartha Engineering College, India

• Microsoft Certified: Azure Data Engineer Associate

• Databricks Certified Data Engineer Associate

• AWS Certified Solutions Architect - Associate


