Post Job Free

Data Engineer

Location:
Bengaluru, Karnataka, India
Posted:
March 23, 2026

Contact this candidate

Resume:

CHANDU B L

Bangalore, Karnataka, India | +91-701******* | ******.*.****@*****.*** | http://linkedin.com/in/chandu-b-l

SUMMARY

Data Engineer experienced in cloud ETL migrations, scalable data pipeline development, and data platform standardization. Skilled in BigQuery, Databricks, Airflow, Dataflow, Dataproc, PySpark, and dbt, with hands-on experience building production-grade pipelines, reusable data engineering frameworks, and contract-driven data validation systems. Proven ability to deliver reliable data platforms and collaborate across engineering and analytics teams.

TECHNICAL SKILLS

Cloud Platforms and Services: Google Cloud Platform (GCP), Databricks

Data Processing & Orchestration: Apache Airflow, Apache Spark, PySpark, dbt

GCP Data Services: BigQuery, Dataproc, Cloud Composer, Cloud Functions, Cloud Storage, Dataflow, BQMS

Programming Languages: SQL, Python

Data Engineering: ETL Pipeline Design, Data Warehousing, Data Transformation, Data Migration, Data Pipeline Optimization

Tools: Git, Jira

PROFESSIONAL EXPERIENCE

QUANTIPHI ANALYTICS

Bangalore, India

Data Engineer Sep 2022 – Present

Data Platform Standardization & Data Contracts

● Contributed to a data platform standardization initiative that unified pipeline development and governance practices.

● Designed a CLI-based, platform-agnostic data contract management framework to standardize schema definitions, validation, and pipeline development across BigQuery and Databricks environments.

● Engineered distributed ELT pipelines on Databricks using Spark, enabling scalable contract-driven data transformations.

● Integrated data contract validation rules as dbt data tests, embedding data quality checks into transformation workflows.

● Implemented automation to generate dbt project initialization code and reusable pipeline templates from data contract specifications.

● Established a Git-based workflow to manage the data contract lifecycle across environments.

● Enabled collaboration between data engineering, analytics, and downstream consumers through contract-first data development practices.
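
The contract-to-test flow described above can be sketched in a few lines of plain Python. This is a simplified, hypothetical illustration — the contract fields (`dataset`, `columns`, `allowed_values`) and the generated dbt-style test names are illustrative, not the actual framework:

```python
import json

def contract_to_dbt_tests(contract: dict) -> dict:
    """Translate a data-contract column spec into a dbt-style
    schema.yml model entry with generated data tests."""
    columns = []
    for col in contract["columns"]:
        tests = []
        if not col.get("nullable", True):
            tests.append("not_null")          # non-nullable -> not_null test
        if col.get("unique"):
            tests.append("unique")            # uniqueness constraint -> unique test
        if "allowed_values" in col:
            # enum-style constraint -> accepted_values test
            tests.append({"accepted_values": {"values": col["allowed_values"]}})
        columns.append({"name": col["name"], "tests": tests})
    return {"models": [{"name": contract["dataset"], "columns": columns}]}

# Example contract (illustrative fields only)
contract = {
    "dataset": "orders",
    "columns": [
        {"name": "order_id", "nullable": False, "unique": True},
        {"name": "status", "allowed_values": ["open", "shipped", "cancelled"]},
    ],
}

print(json.dumps(contract_to_dbt_tests(contract), indent=2))
```

In a real setup the output dict would be serialized to YAML and dropped into the dbt project, so contract changes propagate to data tests automatically.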

Teradata to BigQuery Data Migration

● Migrated 20+ large-scale DMExpress and DataStage ETL workloads and Teradata tables to BigQuery, enabling modernization of legacy data pipelines.

● Created reusable ETL capability templates, standardizing pipeline development patterns and accelerating migration across multiple datasets.

● Designed an Airflow-based audit and monitoring framework to standardize pipeline observability and enable faster debugging of production workflows.

● Translated and optimized 50+ Teradata BTEQ scripts using BigQuery Migration Service (BQMS), accelerating migration of legacy analytics workloads to BigQuery.

● Orchestrated ETL workflows with Apache Airflow, improving scheduling, monitoring, and dependency management across pipelines.

Distributed Data Processing (Spark, Databricks, Dataproc & Dataflow)

● Built a reusable Spark ingestion framework on Dataproc with configuration-driven processing, enabling scalable ingestion and transformation of both small and large data files.

● Implemented Dataflow pipelines to ingest large datasets (100+ GB) from JDBC sources into BigQuery, enabling scalable cloud data ingestion.
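
The configuration-driven pattern behind the ingestion framework above can be illustrated with a stdlib-only sketch. This is a hypothetical stand-in — the config keys and the `ingest` helper are illustrative; the real framework would pass equivalent options to a PySpark reader on Dataproc rather than Python's `csv` module:

```python
import csv
import io

# Hypothetical pipeline config -- in the real framework this would be a
# per-source config file driving a Spark job (e.g. spark.read.options(...)).
CONFIG = {
    "source_format": "csv",
    "delimiter": ",",
    "target_table": "staging.events",
    "columns": ["event_id", "user_id", "amount"],
}

def ingest(raw: str, config: dict) -> list:
    """Parse raw text into rows according to the config, keeping only
    the columns the config declares."""
    reader = csv.DictReader(io.StringIO(raw), delimiter=config["delimiter"])
    return [{c: row[c] for c in config["columns"]} for row in reader]

raw = "event_id,user_id,amount\n1,42,9.99\n2,7,15.00\n"
rows = ingest(raw, CONFIG)
print(f"loaded {len(rows)} rows into {CONFIG['target_table']}")
```

The point of the pattern is that onboarding a new source means adding a config entry, not writing a new pipeline.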

Real-time Data Sync and Analytics

● Engineered a streaming solution that syncs Firestore data into BigQuery in near real time, keeping analytical datasets continuously up to date.

● Created and maintained dashboards using Looker Studio, providing stakeholders with real-time insights on key performance indicators (KPIs).

● Facilitated timely decision-making and improved transparency by tracking the progress of internal tool launches.

Cloud Data Pipeline Development

● Engineered scalable data pipelines to synchronize data from client servers with BigQuery using Cloud Functions and Scheduler.

● Improved data transfer efficiency, reduced latency, and enabled seamless integration with existing data infrastructure.

Cognizant

Bangalore, India

Intern Mar 2022 – Jul 2022

FACTR

Bangalore, India

Intern Aug 2021 – Jan 2022

Django API Development

● Built high-performance RESTful APIs using Django, enhancing backend capabilities and supporting seamless front-end interactions.

Analytics Report Generation

● Created comprehensive analytics reports to assess and improve student performance, leveraging data analysis techniques to provide actionable insights for educators and administrators.

Web Scraping

● Automated data collection through end-to-end web scraping projects, building rich datasets.

Text Summarization & Comparison

● Implemented AI-driven text summarization and comparison tools, streamlining information retrieval and enhancing content management.
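
The comparison side of such tooling can be illustrated with the standard library alone. This is a minimal sketch, not the production implementation (which was AI-driven); the texts and function name are illustrative:

```python
import difflib

def similarity(a: str, b: str) -> float:
    """Token-level similarity ratio in [0, 1] between two texts."""
    return difflib.SequenceMatcher(None, a.split(), b.split()).ratio()

doc_a = "the quick brown fox jumps over the lazy dog"
doc_b = "the quick brown fox leaped over a lazy dog"
print(round(similarity(doc_a, doc_b), 2))
```

A ratio like this is a cheap first pass for flagging near-duplicate or heavily revised documents before running heavier semantic comparison.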

Question Generation & AI Collaboration

● Contributed to the development of an automated question generation system, improving educational content delivery through innovative AI solutions.

CERTIFICATIONS

● Databricks Certified Data Engineer Professional

● Databricks Certified Data Engineer Associate

● Google Cloud Certified Associate Cloud Engineer

● Google Cloud Professional Database Engineer

● Astronomer Certification for Apache Airflow 3 Fundamentals

EDUCATION

VISVESVARAYA TECHNOLOGICAL UNIVERSITY

BE, Computer Science and Engineering, 2022

AWARDS & RECOGNITION

● Think Tank Innovation Award - Quantiphi Analytics


