
Data Engineer Financial

Location:
Kansas City, MO
Salary:
$70,000
Posted:
October 15, 2025

Contact this candidate

Resume:

AHAMED ABDULLAH MOHAMMED

+1-913-***-**** | ***********************@*****.*** | linkedin.com/in/ahamed-abdullah-md

SUMMARY

Data Engineer with over five years of experience architecting robust, scalable data solutions across AWS, Azure, and GCP. Expertise in designing and optimizing streaming data pipelines and ETL processes using serverless architectures. Proven ability to improve data quality with Apache Griffin and to automate deployments through CI/CD best practices, driving business intelligence and operational efficiency.

EDUCATION

University of Missouri-Kansas City, Kansas City, MO
Master's in Computer Science

EXPERIENCE

Azure Data Engineer, Mastercard, O'Fallon, Missouri (08/2024 - Current)

Engineered Azure Data Lake solutions for petabyte-scale financial data, utilizing Azure Synapse Analytics and Databricks for efficient data flow management.

Developed real-time financial data processing with Azure Stream Analytics, enabling faster decision-making and supporting critical business operations.

Leveraged Power BI for interactive dashboards, providing real-time financial analysis and performance tracking for stakeholders.

Automated deployment and testing with Azure DevOps, streamlining CI/CD pipelines and improving release cadence for finance applications.

Optimized Azure SQL Databases for high-performance analytics, ensuring scalable solutions for complex financial queries.

Configured Azure Cosmos DB for globally distributed, low-latency data storage, guaranteeing financial data availability and resilience.

Implemented Apache Hadoop and Spark clusters on Azure HDInsight for distributed big data processing, enhancing analytical capabilities.

Ensured data integrity and quality across complex financial datasets by implementing robust validation strategies.

GCP Data Engineer, Wipro, Bangalore, Karnataka, India (09/2021 - 07/2023)

Designed and implemented scalable data pipelines on GCP using Dataflow and Pub/Sub for real-time data processing and streaming, supporting critical insurance analytics.

Utilized Google Cloud Dataproc and Apache Spark for processing and analyzing large data volumes in batch and real time, enhancing predictive modeling.

Automated deployment and testing of data pipelines using Google Cloud Build and Terraform, adhering to CI/CD best practices.

Developed robust streaming data pipelines that processed over 1TB of daily transactional data.

Data Engineer, GE Aerospace, Bangalore, Karnataka, India (06/2019 - 08/2021)

Designed and implemented scalable data pipelines for processing large volumes of financial data using Apache Spark and Hadoop, driving efficiency.

Optimized data integration workflows using SQL Server, PostgreSQL, and cloud solutions such as Azure Synapse Analytics and Google BigQuery.

Developed real-time streaming solutions using Apache Kafka and Azure Stream Analytics for financial transaction processing, reducing latency.

Utilized Python, PySpark, and Scala to develop ETL pipelines for data transformation and loading from heterogeneous sources.

Incorporated data quality checks using Apache Griffin to ensure accuracy and reliability in upstream data feeds.

SKILLS

Programming Languages: Proficient in Python, Java, Scala, SQL, and Bash for scripting, application development, and complex data analysis.

Big Data Technologies: Expert in Hadoop, Apache Spark, Hive, HDFS, Presto, Pig, and Flink for large-scale data processing.

DevOps & CI/CD: Skilled in Jenkins, GitLab CI/CD, Azure DevOps, AWS CodePipeline, Docker, Kubernetes, Terraform, CloudFormation, Ansible, and GitHub Actions for automated deployment and infrastructure management.

Cloud Platforms: Extensive experience with AWS, Azure, and GCP for building scalable data solutions, including serverless architectures.

Data Integration & Orchestration: Adept at Apache Airflow, AWS Step Functions, Azure Logic Apps, and GCP Data Fusion for managing data workflows.

Streaming & Real-Time Processing: Expertise in Apache Kafka, AWS Kinesis, Azure Event Hubs, Google Pub/Sub, and Spark Streaming, including the development of streaming data pipelines.

Database Systems: Skilled in PostgreSQL, MySQL, Oracle, SQL Server, MongoDB, Cassandra, DynamoDB, Cosmos DB, and HBase for diverse data storage needs.

Data Quality & Governance: Experienced with Apache Griffin for ensuring data quality and maintaining governance standards.
