
SQL Developer & Data Engineer with 7+ Years Experience

Location:
Hounslow, Greater London, United Kingdom
Posted:
December 12, 2025


Ahmad Kakar

Email: *****@*****.**

LinkedIn: https://www.linkedin.com/in/ahmad-kakar-5b0284120/

Phone: +44-796*******

Address: Uxbridge, London, United Kingdom

Professional Summary

Results-driven SQL Developer, Database Developer, and Data Engineer with 7+ years of experience in database development, query optimization, ETL processes, and business intelligence. Proficient in SQL Server, MySQL, PostgreSQL, and MongoDB, with expertise in performance tuning, data modeling, and automation. Developed ETL pipelines that reduced data processing time by 50% and optimized SQL queries that improved execution speed by 40%. Adept at building scalable database solutions, ensuring data integrity, and leveraging BI tools such as Power BI and SSRS to drive actionable insights. Seeking to contribute expertise in SQL development and data engineering to a fast-paced, data-driven organization.

Work Experience

SQL Developer April 2025 – Present

BromCom UK – London

• Successfully managed daily school data migrations, ensuring accuracy, consistency, and timely delivery of data across multiple education systems.

• Performed complex data validation, fixing, and dependency resolution tasks, reducing migration errors and improving data reliability for end-users.

• Supported and conducted User Acceptance Testing (UAT) for migrated data, identifying discrepancies early and ensuring seamless go-live transitions.

• Designed, optimized, and maintained complex SQL queries, stored procedures, functions, and triggers, reducing execution times by up to 40%.

• Enhanced performance of legacy systems through indexing, partitioning, and query tuning, improving reporting speed and system stability.

• Automated data migration and ETL workflows using SSIS and Azure Data Factory, reducing manual intervention by 60% and ensuring data integrity.

• Collaborated with product teams, QA, and stakeholders to translate business requirements into effective database solutions, supporting critical school operations.

• Worked within Agile/DevOps environments, leveraging Git for version control and participating in CI/CD pipelines for database deployments.

• Contributed to data governance initiatives, standardizing processes for school data management and improving long-term maintainability.

Senior SQL Developer April 2024 – March 2025

Hello World Academy UK – London

• Designed and optimized enterprise-level SQL databases, reducing query execution time by 45% through indexing and partitioning.

• Developed and maintained 20+ ETL pipelines using SSIS and Azure Data Factory, reducing manual data integration efforts by 60%.

• Created Power BI and SSRS dashboards, improving real-time data visibility for stakeholders and increasing decision-making efficiency.

• Automated database backup, recovery, and security policies, ensuring 99.9% system uptime and compliance with data governance standards.

• Implemented CI/CD pipelines for database deployments, reducing deployment failures by 50%.

• Successfully optimized Oracle PL/SQL queries, reducing processing time by 40%, enhancing system performance and efficiency.

• Implemented a Snowflake data warehouse solution, streamlining data pipelines and improving data processing scalability, speed, and integration efficiency.

• Developed PySpark scripts to automate and optimize data transformations, which enhanced the processing speed by 50% for big data sets.
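
The kind of record-level transformation described above can be illustrated with a minimal, self-contained sketch; the schema and rows here are hypothetical, and Python's standard library stands in for PySpark:

```python
import csv
import io

# Hypothetical raw extract: one row per student, with a possibly-missing
# enrolment date that must be validated before loading.
RAW = """student_id,enrolled,score
1001,2024-03-01,87
1002,,65
1003,2024-05-12,91
"""

def transform(reader):
    """Drop rows that fail validation and cast fields for downstream use."""
    for row in reader:
        if not row["enrolled"]:           # reject rows with no enrolment date
            continue
        row["score"] = int(row["score"])  # cast text to int for aggregation
        yield row

rows = list(transform(csv.DictReader(io.StringIO(RAW))))
print(len(rows))  # 2 — the row with a missing date is filtered out
```

The same filter-and-cast logic maps directly onto a PySpark `filter`/`withColumn` chain when run at scale.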

SQL Database Developer Oct 2022 – Jan 2023

TiBX USA – Virginia

• Optimized 200+ SQL queries, improving execution speed by 40% and reducing system load.

• Implemented indexing and partitioning strategies, reducing query response time by 30%.

• Developed 5+ ETL pipelines, automating data ingestion and reducing manual processing by 50%.

• Built 20+ stored procedures, improving data consistency and business logic execution.

• Strengthened database security, implementing encryption and role-based access control (RBAC).

• Led the successful integration of Snowflake data solutions, optimizing the data architecture and enhancing performance by 40%.

• Created and managed extensive data lakes, ensuring efficient data storage and accessibility, which supported advanced analytics and data science projects.

• Designed data processing pipelines using Apache Spark, which facilitated real-time data analysis and supported the company's shift towards more dynamic, predictive analytics models.

• Engineered robust data integration solutions with Teradata, enhancing data warehousing capabilities and supporting complex data workflows.

• Developed predictive models in Python, leveraging statistical methods and machine learning to drive business decisions, significantly impacting revenue growth.

SQL Developer June 2019 – Sept 2022

WEERDP (World Bank Project) Kabul – Afghanistan

• Optimized SQL queries for financial transactions, reducing report generation time from 15 minutes to 30 seconds, improving system efficiency.

• Developed and fine-tuned stored procedures, triggers, and indexing strategies, increasing database performance by 50%.

• Designed and implemented an automated payroll system, reducing manual processing errors by 90% and ensuring real-time data accuracy.

• Integrated advanced data validation, error-handling mechanisms, and ETL workflows, ensuring 100% data consistency and compliance.

• Created 10+ BI dashboards in Power BI, enabling senior management to make data-driven decisions.
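
A minimal sketch of the indexing approach behind the query-time reductions above, using SQLite in place of the production engine (the table and column names are hypothetical):

```python
import sqlite3

# Hypothetical payroll-style table, queried heavily by employee id.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE payments (emp_id INTEGER, amount REAL, paid_on TEXT)")
con.executemany("INSERT INTO payments VALUES (?, ?, ?)",
                [(i % 100, 100.0 + i, "2021-01-01") for i in range(10_000)])

QUERY = "SELECT SUM(amount) FROM payments WHERE emp_id = 7"

# Before indexing, the predicate forces a full table scan.
before = con.execute("EXPLAIN QUERY PLAN " + QUERY).fetchone()[-1]

# An index on the filter column turns the scan into an index search.
con.execute("CREATE INDEX ix_payments_emp ON payments(emp_id)")
after = con.execute("EXPLAIN QUERY PLAN " + QUERY).fetchone()[-1]

print(before)  # plan detail mentions a table SCAN
print(after)   # plan detail now uses INDEX ix_payments_emp
```

The same before/after plan comparison (via `SET STATISTICS` or execution plans in SQL Server) is how index candidates are verified in practice.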

Data Engineer Aug 2017 – June 2019

Asan Khedmat (Ministry of Communication and IT) Kabul – Afghanistan

• Developed an optimized Online Passport System database, reducing query execution time by 60%.

• Designed a Document Management System, streamlining document retrieval with faster indexing techniques.

• Engineered an ETL framework, automating data migration for multiple government entities.

• Created high-performance database schemas, reducing redundancy and improving maintainability.

Database Programming Language Instructor March 2017 – March 2018 (Part Time)

Qalam University Kabul – Afghanistan

• Conducted workshops on query optimization and database security, increasing student competency.

• Designed 10+ real-world projects, helping students gain practical database development experience.

Database Officer March 2015 – July 2017

Guide Star ICT Solution Kabul – Afghanistan

• Developed custom database applications, reducing manual data processing by 30%.

• Automated reporting and data entry, saving 10+ hours of manual work per week.

Database Developer July 2014 – Feb 2015

Code Zone Techno Company Kabul – Afghanistan

• Designed 5+ optimized database architectures, improving query performance.

• Developed complex stored procedures, reducing execution time by 25%.

MIS Assistant July 2014 – Feb 2015

Ministry of Rural Rehabilitation and Development (MRRD) Kabul – Afghanistan

• Assisted in database management and maintenance, ensuring smooth system operations.

• Provided technical support for MIS-related tasks, troubleshooting database errors.

Education

Master’s Degree in Software Engineering

Heriot-Watt University, UK 2022 – 2023

Bachelor’s Degree in Computer Science

Kabul Polytechnic University, Afghanistan 2013 – 2016

Core Skills & Technologies

Database Development & Optimization

• Databases: Proficient in SQL Server, MySQL, PostgreSQL, MongoDB, Oracle PL/SQL, Snowflake.

• Data Modeling & Warehousing: Experienced in using Kimball/Inmon methodologies for data warehouse design.

• Performance Tuning & Indexing: Expert in optimizing queries and designing indexes to improve database performance.

• Apache Spark: Advanced proficiency in using Spark for large-scale data processing tasks.

ETL & Data Engineering

• ETL Tools: Skilled in SSIS, Azure Data Factory, and PySpark for robust ETL pipeline development.

• Cloud Data Engineering: Extensive experience with AWS Glue, Azure Data Factory, and Azure Databricks.

• Automation & Integration: Competent in automating data workflows and integrating complex data systems across platforms.

• Data Lakes: Skilled in the design and management of data lakes for scalable data storage and access.

• Teradata: Experienced with Teradata for large-scale data warehousing and analytics solutions.

• PySpark: High-level expertise in processing big data with PySpark, optimizing performance in distributed computing environments.
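
The extract-transform-load pattern behind the tools above can be sketched end to end with nothing but the standard library; the source and target schemas here are hypothetical:

```python
import sqlite3

# Extract: a hypothetical source system with raw transactional rows.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE sales (region TEXT, amount REAL)")
src.executemany("INSERT INTO sales VALUES (?, ?)",
                [("north", 120.0), ("north", 80.0), ("south", 200.0)])

# Load target: a reporting table keyed by region.
tgt = sqlite3.connect(":memory:")
tgt.execute("CREATE TABLE sales_by_region (region TEXT PRIMARY KEY, total REAL)")

# Transform: aggregate in the source engine, then bulk-load the result.
rows = src.execute("SELECT region, SUM(amount) FROM sales GROUP BY region").fetchall()
tgt.executemany("INSERT INTO sales_by_region VALUES (?, ?)", rows)

totals = dict(tgt.execute("SELECT * FROM sales_by_region"))
print(totals)  # north totals 200.0, south totals 200.0
```

In SSIS or Azure Data Factory the same three stages become a source, a transformation activity, and a sink, with the orchestrator handling scheduling and retries.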

Business Intelligence & Reporting

• BI Tools: Advanced skills in Power BI, SSRS, and Crystal Reports for dynamic reporting and dashboard creation.

• Data Visualization: Expertise in turning complex data sets into actionable insights through sophisticated visualization techniques.

Programming & Scripting

• Languages: Proficient in Python, SQL, Java, JavaScript, R, Bash, and PowerShell.

• Automation: Experienced in scripting for database and system automation to enhance operational efficiencies.

Cloud & DevOps

• Microsoft Azure: In-depth knowledge of Azure SQL, Azure Data Factory, Azure Synapse, and Azure Databricks.

• Cloud Platforms: Hands-on experience with AWS RDS, Google Cloud BigQuery, and managing cloud-based infrastructure using Kubernetes and Docker.

• DevOps Tools: Proficient in using CI/CD pipelines, Terraform, and Ansible for efficient deployment and management of IT resources.

Projects & Achievements

• Optimized Payroll Processing System – Reduced execution time from 15 minutes to 30 seconds.

• Developed ETL Pipelines – Automated data migration, saving 20+ hours per week in manual work.

• Business Intelligence Dashboard – Designed a real-time analytics dashboard, improving reporting accuracy.

• Developed and Optimized Data Lakes – Successfully engineered and optimized data lakes to enhance data integration and analytics capabilities, achieving a more efficient data architecture that supports cross-channel data integration and advanced analytics initiatives.

• Advanced Data Pipeline Architectures – Designed and built robust data processing pipelines using PySpark and Apache Spark, significantly improving data flow and system performance, which drove new insights and facilitated the development of predictive and prescriptive models.

• Implemented Scalable Data Science Platforms – Created scalable platforms for data experimentation and model development, leveraging PySpark and Teradata to enhance predictive analytics capabilities, directly contributing to revenue growth and strategic decision-making.

• Revenue Growth through Data Initiatives – Initiated and led data science programs that contributed to significant revenue growth by enabling more precise market targeting and customer segmentation.

• Optimized SQL Queries – Increased database efficiency by 40% with advanced indexing techniques.

• Designed Scalable Database Schemas – Reduced redundancy, improving data integrity & performance.
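
As a small illustration of the schema-design point above, redundancy is reduced by normalising repeated values into a lookup table (all table, column, and row names here are hypothetical, with SQLite standing in for the production engine):

```python
import sqlite3

# Normalised design: department names live once in a lookup table,
# and employees reference them by key instead of repeating the text.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE employees (
    emp_id  INTEGER PRIMARY KEY,
    name    TEXT,
    dept_id INTEGER REFERENCES departments(dept_id)
);
""")
con.execute("INSERT INTO departments (name) VALUES ('Finance'), ('IT')")
con.executemany("INSERT INTO employees (name, dept_id) VALUES (?, ?)",
                [("Asma", 1), ("Omar", 2), ("Sara", 2)])

# A join recovers the denormalised view without storing names repeatedly.
rows = con.execute("""
    SELECT e.name, d.name FROM employees e
    JOIN departments d ON d.dept_id = e.dept_id
    ORDER BY e.emp_id
""").fetchall()
print(rows)  # [('Asma', 'Finance'), ('Omar', 'IT'), ('Sara', 'IT')]
```

A rename in `departments` now touches one row, and the `UNIQUE` constraint prevents the duplicate spellings that flat tables accumulate.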

Languages

• English (Fluent)

• Dari (Fluent)

• Pashto (Native)


