
Data Engineer Azure

Location: Cincinnati, OH
Posted: August 10, 2024


Kiran Chalamala

Cincinnati, OH, USA | +1-513-***-**** | ***************@*****.*** | LinkedIn

Summary

Experienced Data Engineer with expertise in optimizing data pipelines, executing complex ETL processes, and leveraging Azure for enhanced performance. Skilled in data integration, Python automation, and microservices architecture, driving significant gains in efficiency and scalability. Proven track record of reducing operational costs, improving data processing, and implementing advanced data solutions. Certified in Azure Data Engineering and Pega System Architecture, and committed to continuous learning and innovation in data management.

Technical Skills

Languages: Python, Java, C, Scala, R.

Front-end: HTML, CSS, JavaScript, Node.js, React.

Back-end: Azure Data Factory, Azure Data Lake Storage, Azure Storage Accounts, Azure Synapse, Azure Cosmos DB, Data Pipelines, ETL/ELT, Informatica, MySQL, PostgreSQL, MongoDB, RDBMS, PL/SQL, CI/CD, SSIS.

Data Processing: Hadoop, Spark, Flink, Kafka, Airflow, Snowflake, Redshift.

Other Skills: Data Ingestion, Data Modeling, Data Vault, MATLAB, Terraform, Tableau, Power BI, Agile Methodology, SDLC, Waterfall Model.

Professional Experience

Azure Data Engineer, JPMorgan Chase & Co. Feb 2024 – Present

Applied advanced data partitioning techniques on Azure Databricks to decrease data processing time, deployed scalable big data solutions with Azure HDInsight to expand processing capacity for large-scale data sets, and engineered robust ETL pipelines in Azure Data Factory to improve data availability and reliability.
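
As a purely illustrative sketch of the partitioning technique described above (not the actual project code), the following PySpark snippet writes a dataset partitioned by a hypothetical transaction_date column; the mount paths and column names are assumed:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("partitioned-write-sketch").getOrCreate()

    # Hypothetical raw data landed in the data lake.
    df = spark.read.parquet("/mnt/raw/transactions")

    # Repartition on the partition key to avoid many small files per partition,
    # then write partitioned by date so downstream queries can prune by date.
    (df.repartition("transaction_date")
       .write
       .mode("overwrite")
       .partitionBy("transaction_date")
       .parquet("/mnt/curated/transactions"))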

Managed Azure SQL Database for efficient data storage, implementing indexing and query optimization techniques to maintain performance, and used Azure Data Lake Storage to enforce data retention policies, improving data access reliability through partitioning and data organization strategies.

Collaborated in developing efficient ETL pipelines using Azure Data Factory, leveraging Git for version control and CI/CD pipelines for automated deployment, while implementing data quality checks and validation processes to ensure data accuracy and consistency.
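
A minimal sketch of the kind of data quality checks mentioned above, written with pandas; the column names and rules are invented for the example:

    import pandas as pd

    def validate(df: pd.DataFrame) -> list[str]:
        """Return a list of data-quality problems found in the frame."""
        problems = []
        if df["customer_id"].isna().any():
            problems.append("null customer_id values")
        if df.duplicated(subset=["customer_id", "order_id"]).any():
            problems.append("duplicate customer_id/order_id pairs")
        if (df["amount"] < 0).any():
            problems.append("negative order amounts")
        return problems

    if __name__ == "__main__":
        sample = pd.DataFrame({"customer_id": [1, 1, None],
                               "order_id": [10, 10, 11],
                               "amount": [5.0, 5.0, -2.0]})
        print(validate(sample))  # all three checks fire for this sample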

Participated in managing Azure SQL Database, ensuring near-perfect uptime and consistent data availability through regular backup and recovery strategies, while contributing to big data processing on platforms such as Azure Databricks and implementing optimizations such as caching and parallel processing for faster data processing.

Implemented monitoring solutions for data pipelines and databases using Azure Monitor and Log Analytics, enabling proactive performance optimization.
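
A hedged sketch of how such a Log Analytics check might look from Python using the azure-monitor-query package; the workspace ID is a placeholder and the ADFPipelineRun table assumes Data Factory diagnostics are routed to the workspace:

    from datetime import timedelta

    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient

    client = LogsQueryClient(DefaultAzureCredential())

    # Hypothetical KQL: count failed pipeline runs per pipeline over the last day.
    query = """
    ADFPipelineRun
    | where Status == 'Failed'
    | summarize failures = count() by PipelineName
    """

    response = client.query_workspace(
        workspace_id="<log-analytics-workspace-id>",
        query=query,
        timespan=timedelta(days=1),
    )
    for table in response.tables:
        for row in table.rows:
            print(row)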

Ensured data compliance and security measures were implemented in data solutions, including encryption, access controls, and data masking techniques.

Developed interactive and insightful data visualizations using Power BI, enabling stakeholders to make informed decisions based on real-time data insights and trends.

Designed and implemented dynamic dashboards in Tableau, providing intuitive data presentations and drill-down capabilities for detailed analysis and reporting.

Implemented data processing and analysis tasks using Apache Hadoop, optimizing MapReduce jobs for efficient handling of large-scale data sets and enhancing data processing capabilities.
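
For illustration only, a Hadoop Streaming-style mapper/reducer pair in Python showing the shape of such a MapReduce aggregation; the input layout (category, id, amount CSV rows) is assumed:

    # mapreduce_sketch.py -- used twice via Hadoop Streaming:
    #   -mapper  "python mapreduce_sketch.py map"
    #   -reducer "python mapreduce_sketch.py reduce"
    import sys

    def mapper(stream):
        # Emit category<TAB>amount pairs from CSV rows of category,id,amount.
        for line in stream:
            fields = line.rstrip("\n").split(",")
            if len(fields) >= 3:
                print(f"{fields[0]}\t{fields[2]}")

    def reducer(stream):
        # Sum amounts per category; Hadoop delivers mapper output sorted by key.
        current_key, total = None, 0.0
        for line in stream:
            key, value = line.rstrip("\n").split("\t")
            if current_key is not None and key != current_key:
                print(f"{current_key}\t{total}")
                total = 0.0
            current_key = key
            total += float(value)
        if current_key is not None:
            print(f"{current_key}\t{total}")

    if __name__ == "__main__":
        (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)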

Utilized Python for scripting and automation tasks in data processing workflows, integrating machine learning models and data transformations to enhance data quality and predictive analytics capabilities.
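
A small sketch of that pattern, pairing a pandas transformation with a simple scikit-learn model; the data and feature names are fabricated for the example:

    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # Stand-in for a cleaned feature set produced by an upstream pipeline step.
    df = pd.DataFrame({"units": [10, 20, 30, 40],
                       "unit_price": [2.0, 2.0, 1.5, 1.5],
                       "revenue": [20.0, 40.0, 45.0, 60.0]})

    # Simple derived feature representing the transformation step.
    df["discounted"] = (df["unit_price"] < 2.0).astype(int)

    model = LinearRegression()
    model.fit(df[["units", "unit_price", "discounted"]], df["revenue"])
    print(model.predict(pd.DataFrame({"units": [50], "unit_price": [1.5], "discounted": [1]})))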

Data Engineer, Tata Consultancy Services Jul 2021 – Dec 2022

Directed the design and implementation of Azure-based microservices architecture, improving application response time by 30% and boosting user engagement metrics by 20%.

Led the strategic initiative to transition over 10 legacy applications to Azure Cloud, achieving a 40% reduction in operational costs and a 50% enhancement in performance and scalability.

Spearheaded the design of Azure-based microservices, resulting in a 30% faster application response time and a 20% increase in user engagement.

Managed large-scale migrations to Azure, ensuring seamless transitions and optimizing performance, leading to a 70% increase in efficiency.

Collaborated within project frameworks to identify enhancements and develop new domain roadmaps, driving continuous improvement and achieving a 90% increase in capability.

Orchestrated the enhancement and streamlining of ETL processes using cutting-edge tools to extract, clean, and store large data volumes, resulting in a 30% improvement in processing efficiency.

Designed and implemented database components, including tables, functions, stored procedures, and triggers, aligning with project specifications, and contributing to a 15% increase in overall system efficiency.

Developed and deployed Azure Data Factory (ADF) ETL processes for seamless data transfer from various sources (flat files, Excel, Oracle, SQL Server, XML, JSON), enhancing data processing efficiency by 40%.
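
To illustrate the multi-source pattern outside of ADF itself, a hedged pandas sketch that reads the file-based formats listed above and stages them in a SQL table; the paths, sheet name, and connection string are placeholders:

    import pandas as pd
    from sqlalchemy import create_engine

    frames = [
        pd.read_csv("landing/customers.csv"),                          # flat file
        pd.read_excel("landing/customers.xlsx", sheet_name="Export"),  # Excel
        pd.read_json("landing/customers.json"),                        # JSON
    ]

    combined = pd.concat(frames, ignore_index=True).drop_duplicates(subset=["customer_id"])

    # In practice the engine URL would point at Azure SQL / SQL Server.
    engine = create_engine("mssql+pyodbc://<user>:<password>@<server>/<db>?driver=ODBC+Driver+18+for+SQL+Server")
    combined.to_sql("stg_customers", engine, if_exists="replace", index=False)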

Implemented Power BI for data visualization and reporting, enabling stakeholders to gain actionable insights from real-time data, resulting in improved decision-making and strategic planning.

Designed and developed interactive dashboards in Tableau, facilitating intuitive data exploration and analysis for business users, enhancing data-driven decision-making processes.

Integrated Hadoop for big data processing, optimizing data storage and analysis capabilities to handle large-scale datasets efficiently, improving data processing speeds and scalability.

Leveraged Python for scripting and automation in data analysis workflows, implementing statistical models and machine learning algorithms to extract valuable insights from complex datasets, enhancing predictive analytics capabilities.

Jr. Data Engineer, KL University Jan 2019 – June 2021

Python Automation: Implemented Python automation using libraries such as NumPy and Pandas, leading to a significant enhancement in data processing efficiency.
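
A tiny NumPy/pandas sketch of that style of automation; the readings and thresholds are invented:

    import numpy as np
    import pandas as pd

    raw = pd.DataFrame({"reading": [3.1, np.nan, 2.9, 250.0, 3.0]})

    # Fill missing values with the median and clip obvious outliers.
    cleaned = raw.assign(
        reading=raw["reading"].fillna(raw["reading"].median()).clip(upper=10.0)
    )
    print(cleaned)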

PySpark Optimization: Optimized PySpark jobs for scalability and performance, resulting in a notable reduction in processing time.
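
An illustrative PySpark tuning sketch in the same spirit: caching a reused DataFrame and broadcasting a small dimension table to avoid a shuffle; the paths, join key, and column names are assumed:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import broadcast

    spark = SparkSession.builder.appName("pyspark-tuning-sketch").getOrCreate()

    facts = spark.read.parquet("/data/facts")  # large fact table
    dims = spark.read.parquet("/data/dims")    # small dimension table

    # Cache the fact table because it feeds several aggregations,
    # and broadcast the small side so the join avoids shuffling the large one.
    facts.cache()
    joined = facts.join(broadcast(dims), on="dim_id", how="left")
    joined.groupBy("dim_name").count().show()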

ETL Workflow Enhancement: Enhanced data integration accuracy by creating ETL workflows in Azure Data Factory, seamlessly integrating data from multiple sources.

Query Optimization: Improved query response time by optimizing and tuning data warehouses in Azure Synapse for faster analytics.

Cost-Effective Data Solutions: Implemented cost-efficient data solutions on Azure, reducing infrastructure expenses while managing large volumes of data.

Project Collaboration: Successfully delivered multiple data projects in collaboration with teams, demonstrating ongoing learning and development in the field.

Developed interactive reports and dashboards in Power BI, providing stakeholders with real-time insights into key performance metrics, enhancing decision-making capabilities.

Created visually compelling data visualizations and dashboards in Tableau, enabling business users to explore and interpret complex data sets intuitively, driving actionable insights.

Implemented Hadoop for distributed data processing and storage, optimizing data management and analysis capabilities for handling large-scale datasets efficiently, improving overall data processing performance.

Education

Master of Science in Computer Science with emphasis on Data Science, University of Cincinnati, Cincinnati, OH (GPA: 4.0/4.0), Jan 2023 – Apr 2024

Bachelor of Technology in Electronics and Communication Engineering with emphasis on Computer Science, KL University, Guntur, India (CGPA: 3.8/4.0), Jun 2017 – Jul 2021

Certifications

Microsoft: Azure Fundamentals, Azure Data Engineer.

Other: Pega Certified System Architect, Certified Senior System Architect, CLAD, Python Fundamentals (Coursera).



