Murali Chavvakula
Determined, Ambitious, Organized & Goal-Oriented.
SUMMARY:
● 20 years of industry experience in Data Engineering & designing data solutions on cloud (Azure) & on-premise platforms.
● Expertise in building & implementing architectures for Lakehouses, Data Lakes & Data Warehouses.
● Proven skills in Big Data, Building Data Pipelines, Cloud Migrations, and leading high-volume data transformation projects across industries like Retail, BFSI, and Telecom.
● Experience in designing & implementing complex ETL & ELT processes using tools like Azure Data Factory, Databricks & Synapse to integrate data from diverse sources.
● Have a good understanding of Databricks Medallion Architecture & Delta Live Tables.
● Experienced in developing Spark applications using PySpark & Spark SQL in Databricks for data extraction, transformation, and aggregation across multiple file formats (a minimal sketch appears after this summary).
● Hands-on experience in Azure DevOps/GitHub for version control, code maintenance and deployments.
● Expertise in optimizing data pipelines, storage, processing & database queries.
● In-depth knowledge of the Hadoop ecosystem, including MapReduce, HDFS, YARN, Hive, Spark, Kafka & Sqoop.
● Have a good understanding of data governance & data security principles and practices.
● Familiarity with tools & platforms for business intelligence and reporting, such as Power BI.
● Strong communication skills to collaborate with stakeholders, project teams, and articulate technical solutions to non-technical audiences.
● Ability to learn and adapt quickly to emerging technologies.
● Planning & executing data projects effectively, including resource allocation, timeline management, and risk assessment.
● Proficient in creating detailed technical documentation, including solution architecture diagrams and design specifications for RFPs.
● Leadership skills to guide development teams and collaborate with stakeholders & project managers.
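Illustrative sketch of the PySpark/Medallion work described above: a minimal bronze-to-gold flow in Databricks-style PySpark. All paths, the sales dataset, and column names are hypothetical placeholders, not details from any engagement; Delta Lake support is assumed.

# Minimal PySpark sketch of a Medallion-style (bronze/silver/gold) flow.
# All paths, schemas, and column names are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw files as-is; multiple source formats are common.
raw_csv = spark.read.option("header", "true").csv("/lake/raw/sales_csv/")
raw_json = spark.read.json("/lake/raw/sales_json/")
bronze = raw_csv.unionByName(raw_json, allowMissingColumns=True)
bronze.write.format("delta").mode("append").save("/lake/bronze/sales")

# Silver: cleanse and conform types, dropping obvious bad records.
silver = (
    spark.read.format("delta").load("/lake/bronze/sales")
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("sale_date", F.to_date("sale_date"))
    .dropna(subset=["order_id"])
    .dropDuplicates(["order_id"])
)
silver.write.format("delta").mode("overwrite").save("/lake/silver/sales")

# Gold: business-level aggregation, e.g. daily revenue.
gold = silver.groupBy("sale_date").agg(F.sum("amount").alias("daily_revenue"))
gold.write.format("delta").mode("overwrite").save("/lake/gold/daily_revenue")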
******.*.**********@*****.***
SKILLS:
Azure Data Services:
● Microsoft Fabric
● Azure Data Factory
● Azure Synapse Analytics
● Azure Databricks
● Azure Data Lake
● Azure SQL Database
Big Data Technologies:
● Apache Spark
● Apache Hadoop
● Hive
● Sqoop
● Kafka
● Impala
Scripting and Programming:
● PySpark
● T-SQL
● Shell Scripting
CI/CD Tools:
● Azure DevOps
● GitHub
● Jira
● Jules/Jenkins
● BitBucket
Data Engineering Tools:
● Talend
● Informatica
● SSIS
Data Storage/Databases:
● Relational DBs
● Non-relational DBs
● Data lakes
● Distributed file systems
PROJECT EXPERIENCE
HCL, Princeton, NJ — Sr. Solution Architect
Sep 2021 - Present
Role: Lead Data Engineer / Solution Architect.
● Designed and implemented a scalable Azure Data Lake & Data Warehouse architecture, sized for projected data growth, for multiple customers.
● Designed a self-service analytics platform for retail and supply chain clients using Microsoft Fabric.
● Hands-on experience building optimized data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse.
● Identified and resolved issues on Azure data platforms, analyzed bottlenecks, and recommended improvements to enhance efficiency and cost-effectiveness.
● Performed data migration from the existing legacy system to the new platform.
● Involved in planning, designing & architecting solutions, along with proposal submissions across diversified business domains.
Cognizant, Bangalore — Architect
Mar 2011 - Sep 2021
Client : JP Morgan Chase & Co, India.
Role: Data Engineer.
● Designed, built, and maintained an Enterprise Data Hub on the Cloudera platform using PySpark, Sqoop, Hive, Python, and Shell scripting, adhering to Agile and DevOps methodologies.
● Developed and optimized data ingestion, cleansing, and transformation workflows for structured and semi-structured datasets.
● Built scalable data pipelines to handle batch and near real-time data processing needs, improving data availability for critical business operations (see the streaming sketch below).
● Re-architected the data platform, leveraging services like Azure Data Factory, Azure Databricks, and Azure Synapse Analytics, modernizing the architecture and enhancing scalability, performance, and cost efficiency.
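Illustrative sketch of the near real-time pattern referenced above: a minimal Structured Streaming flow from Kafka into a Delta table. The broker address, topic, and paths are hypothetical placeholders, and the sketch assumes the Spark-Kafka connector package is available on the cluster.

# Minimal Structured Streaming sketch: Kafka -> Delta, near real-time.
# Broker address, topic, and paths are hypothetical placeholders;
# requires the spark-sql-kafka connector on the cluster.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("nrt-ingest-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
    .select(
        F.col("key").cast("string").alias("key"),
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp"),
    )
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/lake/_checkpoints/orders")
    .outputMode("append")
    .start("/lake/bronze/orders")
)
query.awaitTermination()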
Client : Australia & New Zealand Bank, India.
Role: Data Engineer.
● Collaborated in an Agile environment to migrate an existing Oracle Data Warehouse (DWH) to a Hadoop/Cloudera ecosystem, ensuring seamless integration and performance optimization.
● Engaged in requirement gathering, impact analysis, design, development, deployment, and maintenance of the new data platform.
● Created multiple reusable Shell & PySpark scripts to streamline and automate ingestion activities, reducing manual intervention and improving operational efficiency.
● Designed and managed external tables in Hive, optimizing performance by implementing dynamic partitioning and bucketing strategies (see the sketch below).
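Illustrative sketch of the Hive pattern referenced in the bullet above, assuming a Hive-enabled Spark session: an external table with dynamic partitioning and bucketing. The table, columns, location, and the staging_txns source are hypothetical placeholders.

# Minimal sketch: external Hive table with dynamic partitioning and
# bucketing, run from a Hive-enabled Spark session. All names and
# paths are hypothetical examples.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("hive-ddl-sketch")
    .enableHiveSupport()
    .getOrCreate()
)

spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS txn_history (
        account_id STRING,
        amount     DECIMAL(18,2)
    )
    PARTITIONED BY (txn_date STRING)
    CLUSTERED BY (account_id) INTO 16 BUCKETS
    STORED AS PARQUET
    LOCATION '/warehouse/external/txn_history'
""")

# Dynamic partitioning: partition values are derived from the data.
spark.sql("SET hive.exec.dynamic.partition=true")
spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
spark.sql("""
    INSERT OVERWRITE TABLE txn_history PARTITION (txn_date)
    SELECT account_id, amount, txn_date
    FROM staging_txns
""")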
Client : Smith & Nephew plc, UK.
Role: Data Engineer.
● Contributed to architecture design, conducted code reviews, and ensured successful project delivery for data integration initiatives.
● Scoped and estimated tasks for data extraction from SAP ECC-CRM to ExaSol using the Talend Data Integration tool, ensuring timely and accurate delivery.
● Conducted impact analysis, designed, developed, and fine-tuned existing ETL systems to enhance performance and scalability, reducing processing times and improving reliability.
Client : Barclays Bank, UK.
Role: ETL Architect.
● Designed and developed Informatica workflows to populate Cross-Channel Tableau Dashboards, providing insights into customer journeys.
● Built a Mortgage Digital Accelerator to streamline and expedite the mortgage loan process.
CERTIFICATIONS:
● Azure Data Fundamentals
● Azure Data Engineer
● Databricks Lakehouse Fundamentals
Client : Sainsbury’s Bank, UK.
Role: ETL Architect.
● Collaborated with business stakeholders to understand KPIs and dashboard requirements, translating them into technical solutions for effective reporting and analytics.
● Customized Financial Services Logical Data Model (FSLDM) sources and attributes to fit the EDW architecture, enhancing data organization and accessibility.
Client : J.P. Morgan & Chase, UK.
Role: ETL Developer.
● Collaborated closely with Business Analysts and external vendors to design and implement Informatica-based solutions, tailored to support the re-architected EMEA Transfer Agency business processes.
● Conducted comprehensive impact assessments and effort estimations based on business requirements, ensuring feasibility and resource optimization.
Client : RBC Capital Markets, UK.
Role: ETL Developer.
● Developed and implemented the UK Regulatory Reporting (UKRR) framework for financial instruments traded by RBC, ensuring compliance with regulatory standards.
● Actively participated in code migration to Informatica, deployment processes, and housekeeping activities within production environments to ensure system efficiency and stability.
Client : BSkyB, UK.
Role: ETL Developer.
● Contributed to the data integration of Data Warehouse (DW) applications supporting Sky’s Corporate and Domestic Customers, ensuring seamless data flow across systems.
● Developed, tested, and optimized Informatica code, Netezza SQL, and Shell scripts to meet the requirements of new and ongoing projects.
IBM, Bangalore — DWH Consultant
Oct 2009 - Mar 2011
Client : KCI, India.
Role: ETL Developer.
● Developed and customized Informatica jobs for an Oracle Business Analytics solution, merging regional platforms into a global solution.
Accenture, Bangalore — Software Engineer
Nov 2005 - Oct 2009
Client : Telenor, India.
Role: ETL Developer.
● Developed Informatica jobs for data mart population & data pipeline optimization.
EDUCATION
Bachelor of Engineering, India
Aug 1999 - Sep 2003