
Senior Machine Learning Engineer GenAI LLMs MLOps

Location:
Baltimore, MD
Salary:
130
Posted:
December 03, 2025


Resume:

Name: Sanjida Akter Sharna

Email: *******.*.******@*****.***

Phone: 931-***-****

Professional Summary:

Over 6 years of comprehensive experience in machine learning engineering, federated learning systems, and data engineering, specializing in graph neural networks and privacy-preserving AI solutions.

Expert in federated graph learning architectures with proven track record of developing novel GNN frameworks (GIN, GraphSAGE) that significantly outperform existing models while reducing communication overhead.

Extensive experience in large language models (LLMs) and natural language processing, developing production-ready meeting notes generation systems and AI-powered code review tools using advanced transformer architectures including GPT-4, LLaMA, BERT, and RoBERTa.

Hands-on experience developing Generative AI models including GANs, VAEs, and RNNs, applied in anomaly detection, time-series forecasting, and synthetic data generation.

Proven track record of designing client-facing architectures and scalable microservices (FastAPI, gRPC, Kubernetes, Docker) for deploying generative and conversational AI systems in production.

Expertise in integrating Generative AI models into chatbots, virtual assistants, and content generation platforms, driving innovation across fintech, e-commerce, and compliance domains.

Skilled in experiment design, model optimization (parallelism, quantization, distillation), and human-in-the-loop evaluation for improving reliability of generative outputs.

Proficient in Python for data processing, ML model development, TensorFlow, PyTorch, and Transformer-based models for various business applications.

Experienced in building and fine-tuning large language models including GPT, BERT, and LLaMA for text classification, sentiment analysis, and code generation tasks.

Proven expertise in end-to-end ML pipeline development and deployment using AWS (SageMaker, EC2, S3, Glue, Lambda), Docker, and Kubernetes for scalable cloud-native solutions.

Extensive experience with Apache Spark, PySpark, and Kafka for real-time data processing and streaming analytics, handling TB-scale datasets.

Experience with MLflow for experiment tracking, model registry, and deployment management in production environments.

Skilled in designing and implementing federated learning systems and distributed machine learning architectures for privacy-preserving AI.

Strong expertise in recommendation systems and collaborative filtering, achieving 95.6% accuracy in product sales forecasting using ensemble methods.

Experienced in developing named entity recognition (NER), text classification, sentiment analysis, and time-series forecasting models using advanced machine learning algorithms.

Hands-on experience building microservices using FastAPI, gRPC, and containerization technologies for scalable ML model deployments.

Delivered real-time machine learning solutions for critical business problems using distributed computing and streaming technologies.

Strong understanding of deep learning frameworks including CNNs, GANs, Autoencoders, and Graph Neural Networks for complex pattern recognition tasks.

Developed and deployed conversational AI systems and chatbot solutions using transformer-based architectures and intent recognition.

Adept at building predictive models for business intelligence, risk assessment, and behavioral analytics with measurable impacts on business KPIs.

Experience in end-to-end data engineering workflows, from ETL pipeline design to model validation and deployment, with focus on data quality and governance.

Established MLOps best practices including CI/CD pipelines, A/B testing, and automated model monitoring, reducing deployment time through streamlined workflows.

Implemented efficient data versioning and experiment tracking using DVC and MLflow, ensuring reproducibility and traceability.

Strong problem-solving abilities in debugging, refactoring, and optimizing ML applications for enhanced reliability and performance.

Expertise in hyperparameter tuning, model compression techniques, and memory-efficient training for large-scale models.

Collaborated with data scientists, engineers, and business stakeholders to align ML solutions with organizational goals.

Research-focused professional with published work in federated learning and graph anomaly detection, contributing to peer-reviewed academic publications.

Committed to continuous learning in AI/ML technologies, demonstrated through conference participation and staying current with emerging trends.

Technical Skills:

Programming Languages: Python, Java, C++, R, JavaScript, SQL

Frameworks & Libraries: TensorFlow, PyTorch, Scikit-Learn, Hugging Face, BERT, RoBERTa, LLaMA, GPT, Transformers, OpenNLP, LangChain, Llama Index, PyTorch Geometric, DGL

Web Technologies: FastAPI, Flask, RESTful APIs, MLflow, Kubeflow, React.js, Node.js

API Development: RESTful API design, gRPC, Protobuf, OAuth2 Security, WebSocket, Microservices

Containerization & Orchestration: Docker, Kubernetes, Container Orchestration

Database Management: PostgreSQL, MongoDB, MySQL, Redis, PostGIS, NoSQL, Vector DBs

Cloud Platforms: AWS (S3, EC2, SageMaker, Glue, Lambda, ECR), GCP, Azure, Terraform, CloudFormation

Data Processing: Apache Spark, PySpark, Apache Kafka, Apache Airflow, Apache Flink, Hadoop HDFS, Pandas, NumPy

Testing Tools: Pytest, JUnit, CI/CD Pipelines, Jenkins, GitHub Actions, Postman

Version Control: Git, GitHub, GitLab, DVC, Git LFS

DevOps & MLOps Tools: Jenkins, Docker, Kubernetes, MLflow, Weights & Biases, Neptune

IDEs & Development Tools: Jupyter Notebook, PyCharm, Visual Studio Code, IntelliJ IDEA

Deep Learning: CNNs, RNNs, LSTMs, GANs, Autoencoders, Graph Neural Networks, Attention Mechanisms, Transformers

Machine Learning: Regression, Classification, Clustering, Ensemble Methods, Federated Learning, Reinforcement Learning, Time-Series Forecasting

Natural Language Processing: SpaCy, NLTK, Transformers, Tokenizers, Named Entity Recognition (NER), Sentiment Analysis, Text Classification

Data Visualization: Matplotlib, Seaborn, Plotly, PowerBI, Apache Superset, Streamlit

Big Data & Streaming: Apache Spark, Kafka Streams, Apache Storm, Real-time Analytics

ML Optimization: Hyperparameter Tuning, Model Compression, Quantization, Knowledge Distillation, CUDA Programming

PROFESSIONAL EXPERIENCE:

BNY Mellon, Remote, USA March 2025 – Present

Senior GenAI Engineer / Machine Learning Engineer

Project Description:

A global investments company dedicated to helping clients manage and service their financial assets throughout the investment lifecycle. Focused on leveraging AI and machine learning to enhance operational efficiency, mitigate risk, and deliver data-driven insights for institutional, corporate, and wealth management clients.

Responsibilities:

•Built an end-to-end intelligent document processing system using large language models (GPT-4, LLaMA, FinBERT) to automate the extraction and analysis of data from complex prospectuses, custody agreements, and corporate action notices, significantly reducing manual processing time for asset servicing teams.

•Architected scalable microservices using FastAPI, gRPC, and Docker to deliver real-time transaction monitoring and anomaly detection scores, securing high-volume payment and settlement flows with enterprise-grade uptime.
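
For illustration, a minimal sketch of the kind of FastAPI scoring endpoint described above; the model artifact, feature schema, and route name are assumptions for this sketch, not details of the actual system.

    from fastapi import FastAPI
    from pydantic import BaseModel
    import joblib  # assumed: a pre-trained anomaly model serialized with joblib

    app = FastAPI(title="transaction-anomaly-service")
    model = joblib.load("anomaly_model.joblib")  # illustrative artifact path

    class Transaction(BaseModel):
        amount: float
        counterparty_risk: float
        settlement_lag_days: int

    @app.post("/score")
    def score_transaction(txn: Transaction) -> dict:
        # decision_function returns an anomaly score; lower means more anomalous.
        features = [[txn.amount, txn.counterparty_risk, txn.settlement_lag_days]]
        score = float(model.decision_function(features)[0])
        return {"anomaly_score": score, "flagged": score < 0}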

•Optimized ML inference pipelines for counterparty risk assessment models with Redis caching and batch processing, accelerating credit evaluation for institutional clients while maintaining high standards of predictive accuracy.

•Designed merchant-focused AI agent prototypes to support sales/financial decision-making workflows using LangGraph + MCP.

•Developed automated evaluation pipelines to test AI agent responses for accuracy, bias, and reliability in merchant-facing scenarios.

•Integrated and fine-tuned Generative AI models (GPT-4, LLaMA, Gemma prototypes, GANs, and Autoencoders) with RAG pipelines for real-time financial document analysis and content generation.

•Implemented comprehensive MLOps practices, including CI/CD for deploying risk models, A/B testing frameworks for settlement-failure prediction algorithms, and real-time performance monitoring using MLflow and internal data visualization platforms.

•Built a distributed model serving infrastructure using Kubernetes and NVIDIA Triton Inference Server to support concurrent risk simulations and NAV (Net Asset Value) validation calculations for large investment portfolios.

•Fine-tuned GPT-4 and LLaMA models using LoRA and QLoRA techniques for financial-domain adaptation.
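
A minimal LoRA fine-tuning sketch using Hugging Face peft on an open LLaMA-family checkpoint; the model name, target modules, and ranks are illustrative assumptions (closed models such as GPT-4 are adapted through vendor fine-tuning APIs rather than local LoRA).

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "meta-llama/Llama-2-7b-hf"  # illustrative open checkpoint
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

    # Attach low-rank adapters to the attention projections; only these weights train.
    lora_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                          target_modules=["q_proj", "v_proj"],
                          task_type="CAUSAL_LM")
    model = get_peft_model(model, lora_cfg)
    model.print_trainable_parameters()  # typically well under 1% of the base weights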

•Implemented RAG evaluation with RAGAS and DeepEval to benchmark retrieval accuracy and response faithfulness.

•Collaborated with global delivery teams across multiple time zones to align AI solution strategies with business and compliance requirements.

•Participated in Agile sprint planning and cross-functional refinements, ensuring smooth coordination between AI engineers, business teams, and stakeholders.

•Designed AI deployment architecture adaptable for hybrid and multi-cloud environments including AWS and potentially OCI-based setups.

•Built GraphQL BFF layer to aggregate microservices and serve financial insights to AI agents.

•Deployed prompt versioning and cost tracking dashboards using MLflow and custom token usage monitoring.

•Integrated AutoGen-based agentic workflows for autonomous financial report summarization and anomaly detection.

•Applied best practices to design and deploy secure, enterprise-grade AI agents leveraging MS Copilot, LangGraph, and Ollama, ensuring scalable implementations that complied with strict financial and compliance regulations.

•Designed systematic prompt validation pipelines, combining automated semantic checks with operations analyst feedback loops to ensure reliable model outputs in financial reporting.

•Implemented tensor/model parallelism strategies across multi-GPU clusters, optimizing LLM training efficiency and reducing GPU memory overhead by 30%.

•Applied CUDA kernels and PyTorch distributed data parallel (DDP) for large-scale risk model training, ensuring optimal GPU throughput.

•Designed and implemented client-facing architectures using FastAPI, gRPC, and Kubernetes, enabling scalable deployment of LLM-powered assistants for regulatory and compliance workflows.

•Conducted experiments with tensor/model parallelism and applied knowledge distillation & quantization to optimize generative model performance while reducing GPU memory overhead.

•Extended risk simulation pipelines with C++ backend modules for high-performance computation, accelerating NAV validation and settlement simulations.

•Established feedback-driven refinement cycles, where settlement and compliance teams validated LLM-based outputs, improving adoption and trust in AI systems.

•Integrated and fine-tuned Generative AI models (GPT-4, LLaMA, Gemma prototypes) with Retrieval-Augmented Generation (RAG) pipelines for real-time financial document analysis and query resolution.

•Built multi-agent orchestration pipelines with LangGraph + MCP, allowing generative models to collaborate with compliance APIs for clause extraction, anomaly detection, and conversational query resolution.

•Researched and applied federated generative learning approaches for privacy-preserving cross-institutional fraud analysis.

•Built multilingual text processing pipelines supporting English, Spanish, and Asian-language datasets, enabling cross-border compliance monitoring.


•Collaborated with legal/compliance professionals to integrate AI-powered document review workflows, reducing manual review time and improving accuracy in contract validation and knowledge management.

•Implemented advanced LangGraph-based orchestration to coordinate multi-agent workflows for parsing, clause extraction, and compliance validation across heterogeneous financial datasets.

•Integrated proprietary compliance APIs and settlement monitoring tools with MCP (Model Context Protocol) to enable secure, standardized agent-to-tool communication inside enterprise workflows.

•Developed a real-time regulatory intelligence pipeline using the Whisper API and WebSocket connections to process audio from central bank announcements and policy meetings, providing immediate insights to compliance and investment management teams.

•Implemented advanced prompt engineering techniques, including few-shot learning, chain-of-thought, and Retrieval-Augmented Generation (RAG) to accurately classify and extract covenants and clauses from unstructured legal and financial documents.
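
As a sketch of the few-shot, retrieval-grounded extraction pattern described here, using the OpenAI Python client; the model name, clause schema, and example covenant are placeholders.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    FEW_SHOT = [
        {"role": "system",
         "content": "Extract covenant clauses as JSON: {clause_type, text, obligor}."},
        {"role": "user",
         "content": "The Borrower shall maintain a leverage ratio below 3.0x."},
        {"role": "assistant",
         "content": '{"clause_type": "financial_covenant", "text": "...", "obligor": "Borrower"}'},
    ]

    def extract_clauses(passage: str, retrieved_context: str) -> str:
        # Retrieved context (the RAG step) is prepended so the model grounds its answer.
        messages = FEW_SHOT + [
            {"role": "user",
             "content": f"Context:\n{retrieved_context}\n\nDocument:\n{passage}"}
        ]
        resp = client.chat.completions.create(model="gpt-4o", messages=messages,
                                              temperature=0)
        return resp.choices[0].message.content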

•Created an automated model versioning and deployment system using DVC, Git LFS, and GitOps workflows, ensuring complete reproducibility and auditable rollback capabilities for critical risk and compliance models.

•Built a custom evaluation framework for financial language model outputs using semantic similarity with internal financial lexicons and human-in-the-loop feedback from operations analysts to establish quality thresholds for automating client reporting.

•Designed a fault-tolerant trade processing and settlement system with a Redis-based job queue, Apache Kafka for streaming market and settlement data, and automatic retry mechanisms to achieve enterprise-grade reliability for critical custody operations.
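
A simplified sketch of the queue-and-retry pattern behind this design, assuming kafka-python and redis-py with illustrative topic, key, and queue names.

    import json
    import redis
    from kafka import KafkaConsumer

    r = redis.Redis(host="localhost", port=6379)                # illustrative connection
    consumer = KafkaConsumer("settlement-events",               # illustrative topic
                             bootstrap_servers="localhost:9092",
                             value_deserializer=lambda b: json.loads(b.decode("utf-8")))
    MAX_RETRIES = 3

    def process(event: dict) -> None:
        ...  # settlement validation logic omitted in this sketch

    for msg in consumer:
        event = msg.value
        try:
            process(event)
        except Exception:
            attempts = r.hincrby("retry:attempts", event["trade_id"], 1)
            # Re-enqueue up to MAX_RETRIES, then park the event for manual review.
            target = "queue:dead_letter" if attempts >= MAX_RETRIES else "queue:retry"
            r.rpush(target, json.dumps(event))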

•Implemented model compression techniques, including knowledge distillation, pruning, and 4-bit quantization on proprietary risk models, significantly reducing their infrastructure footprint while preserving performance.
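
A minimal 4-bit quantization sketch with transformers and bitsandbytes; the open checkpoint stands in for the proprietary risk models, which cannot be shown.

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    # NF4 4-bit weights with bfloat16 compute, a common bitsandbytes configuration.
    bnb_cfg = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf",   # illustrative open model
        quantization_config=bnb_cfg,
        device_map="auto",
    )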

•Conducted research on federated learning applications for privacy-preserving analysis of cross-institutional transaction data, developing novel algorithms to enhance fraud detection without compromising client confidentiality.

•Mentored junior engineers on transformer architectures for financial document understanding, designing distributed systems for high-throughput data processing, and best practices for deploying robust ML solutions in a highly regulated banking environment.

Environment: Python, PyTorch, TensorFlow, Transformers, GPT-4, LLaMA, FinBERT, Whisper, FastAPI, Docker, LangGraph, MCP, FastMCP, Kubernetes, Redis, Apache Kafka, MLflow, Weights & Biases, AWS SageMaker, NVIDIA Triton, DVC, Git LFS, WebSocket, gRPC, Protobuf.

ICEL TECH LLC, St. Petersburg, FL February 2022 – February 2025

AI Machine Learning Engineer II

Project Description:

Technology consulting firm specializing in enterprise AI solutions, recommendation systems, and data engineering platforms for e-commerce and fintech clients.

Responsibilities:

Engineered enterprise-scale recommendation system implementing collaborative filtering with Apache Spark MLlib ALS matrix factorization, deep learning embeddings using TensorFlow neural collaborative filtering, and ensemble methods with Random Forest, SVM, and XGBoost delivering highly accurate predictions.
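
A condensed sketch of the ALS collaborative-filtering piece with Spark MLlib; the input path, column names, and hyperparameters are assumptions.

    from pyspark.sql import SparkSession
    from pyspark.ml.recommendation import ALS
    from pyspark.ml.evaluation import RegressionEvaluator

    spark = SparkSession.builder.appName("als-recs").getOrCreate()
    ratings = spark.read.parquet("s3://bucket/ratings/")  # columns: userId, itemId, rating

    als = ALS(userCol="userId", itemCol="itemId", ratingCol="rating",
              rank=64, regParam=0.05, coldStartStrategy="drop")
    train, test = ratings.randomSplit([0.8, 0.2], seed=42)
    model = als.fit(train)

    rmse = RegressionEvaluator(metricName="rmse", labelCol="rating",
                               predictionCol="prediction").evaluate(model.transform(test))
    top_k = model.recommendForAllUsers(10)  # top-10 item recommendations per user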

Built robust real-time ETL pipelines utilizing Apache Airflow for workflow orchestration, AWS Glue for serverless data processing, Apache Kafka for streaming ingestion, implementing data validation with Great Expectations and PySpark transformations handling enterprise-scale daily data volumes.

Developed Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) for real-time anomaly detection and synthetic data generation, improving fraud detection models in fintech use cases.

Developed LoRA/QLoRA-based fine-tuning pipelines for multilingual chatbots.

Built hybrid retrieval and semantic chunking pipeline with vector databases for context-aware RAG systems.

Applied FMeval framework to evaluate generative response faithfulness and user relevance metrics.

Implemented LLMOps stack with prompt caching, experiment versioning (Weights & Biases), and model cost tracking.

Designed agentic multi-agent workflow using Crew AI for cross-functional data retrieval and decision automation.

Built multilingual chatbots and recommendation AI agents using LangChain, LangGraph, and transformer architectures, ensuring consistent and high-quality conversational content across regions.

Developed pipeline-parallel and mixed-precision PyTorch training workflows integrated with Ray Tune and Optuna for distributed hyperparameter optimization across GPU clusters.

Published reusable open-source PyTorch extensions in C++ to accelerate latency-critical anomaly detection tasks, adopted internally and shared publicly via GitHub.

Implemented sales performance analysis agents leveraging LangChain + FastMCP, enabling merchants to query and retrieve real-time revenue insights.

Built decision-support chatbot systems for e-commerce merchants, providing dynamic recommendations and sales trend forecasting.

Designed and deployed agentic AI solutions using MS Copilot and Ollama alongside LangChain-based frameworks, ensuring secure, scalable, and maintainable implementations for enterprise clients.

Designed pipeline-parallel and mixed-precision training architectures for deep generative models, achieving scalable training on distributed GPUs with optimized memory usage.

Conducted A/B testing and prompt reliability analysis for generative chatbots, refining responses to improve customer engagement.

Directed technical roadmap for enterprise-scale recommendation and anomaly detection systems, providing technical guidance to engineers and establishing code review and testing standards that improved maintainability.

Developed reusable API components with clear versioning and backward compatibility, enabling smooth integration between ML pipelines and enterprise systems while reducing deployment issues.

Integrated AI services in an Agile development model working with distributed teams including client-side technical counterparts.

Built scalable AI and data engineering pipelines following DevOps best practices, designed to extend toward hybrid-cloud or OCI migration strategy.

Enhanced Python-based AI microservices with full-stack integrations using React and Node.js for visualization and operational monitoring.

Collaborated with cross-functional teams to embed generative AI models into recommendation systems, content generation platforms, and anomaly detection workflows.

Implemented continuous feedback loops from e-commerce merchants to refine recommendation and anomaly detection AI agents built with LangChain + LangGraph.

Built prompt reliability testing modules that benchmarked model responses across multilingual chatbot deployments, ensuring consistent customer engagement.

Developed sophisticated behavioral analytics platform implementing customer segmentation using scikit-learn clustering algorithms, feature engineering with Spark Feature Store, dimensionality reduction using PCA and t-SNE, and cohort analysis for business intelligence.

Designed pipeline-parallel training architectures for recommendation systems, enabling large-batch training on distributed GPUs.

Integrated mixed precision training (FP16/BF16) and memory checkpointing to optimize GPU memory usage in TensorFlow and PyTorch models.

Developed C++ extensions for PyTorch to improve latency-critical components in anomaly detection systems.

Developed behavioral analytics platforms and forecasting models with advanced clustering, dimensionality reduction, and LSTM-based time-series predictions.

Built AI agent workflows with LangChain + LangGraph, dynamically selecting between recommendation, anomaly detection, and NLP pipelines for e-commerce and fintech applications.

Directed a cross-functional team to design, test, and deploy real-time recommendation systems at scale, coordinating data engineers and ML developers across multiple regions.

Established coding standards, code review practices, and automated testing frameworks for ML systems, improving maintainability and reliability across deployments.

Designed FastMCP adapters to connect LLM-based agents with enterprise APIs, monitoring dashboards, and third-party data sources, enabling interoperability and reducing integration overhead.

Created distributed hyperparameter optimization with Ray Tune and Optuna, integrating with MLflow for experiment tracking and model registry.
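
A single-process sketch of an Optuna objective with MLflow tracking; in the distributed setup the same objective would run under Ray Tune's Optuna search, and the estimator, search space, and data here are placeholders.

    import mlflow
    import optuna
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_score

    X, y = make_regression(n_samples=2000, n_features=20, random_state=0)  # stand-in data

    def objective(trial: optuna.Trial) -> float:
        params = {
            "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
            "max_depth": trial.suggest_int("max_depth", 2, 8),
            "n_estimators": trial.suggest_int("n_estimators", 100, 600),
        }
        score = cross_val_score(GradientBoostingRegressor(**params), X, y, cv=3,
                                scoring="neg_root_mean_squared_error").mean()
        with mlflow.start_run(nested=True):       # each trial logged as a nested run
            mlflow.log_params(params)
            mlflow.log_metric("rmse", -score)
        return -score                             # Optuna minimizes RMSE

    study = optuna.create_study(direction="minimize",
                                sampler=optuna.samplers.TPESampler())
    study.optimize(objective, n_trials=50)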

Designed containerized ML deployment architecture using Docker, Kubernetes, and Helm charts, enabling seamless scaling and rolling updates for production systems.

Created comprehensive A/B testing framework utilizing SciPy.stats and statsmodels for hypothesis testing, implementing Bayesian A/B testing with PyMC3, multi-armed bandit algorithms for dynamic allocation, and statistical power analysis.
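
A minimal frequentist A/B check with SciPy on made-up conversion counts, shown only to illustrate the hypothesis-testing step.

    import numpy as np
    from scipy import stats

    # Conversions and sample sizes for control (A) and treatment (B); counts are illustrative.
    conv_a, n_a = 412, 10_000
    conv_b, n_b = 468, 10_000

    # Two-proportion test via a 2x2 chi-square contingency test.
    table = np.array([[conv_a, n_a - conv_a],
                      [conv_b, n_b - conv_b]])
    chi2, p_value, _, _ = stats.chi2_contingency(table, correction=False)

    lift = conv_b / n_b - conv_a / n_a
    print(f"absolute lift = {lift:.4f}, p-value = {p_value:.4f}")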

Designed and deployed multi-lingual NLP pipelines for customer feedback and chatbots, covering English and regional languages, improving global user adoption.

Experimented with next-gen LLMs (Qwen, Phi series) in recommendation and anomaly detection use cases, benchmarking their performance against GPT-based models.

Engineered low-latency recommendation APIs optimized for sub-50ms response times, ensuring scalable performance at millions of daily requests.

Implemented advanced time-series forecasting models using ARIMA, Prophet, and LSTM networks for revenue prediction and demand forecasting, achieving significant improvement over baseline models.
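
A brief seasonal ARIMA sketch with statsmodels; the CSV path, column names, and model orders are assumptions for illustration.

    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Monthly revenue series indexed by month (illustrative file and columns).
    revenue = pd.read_csv("monthly_revenue.csv", parse_dates=["month"],
                          index_col="month")["revenue"]

    model = ARIMA(revenue, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12))
    fitted = model.fit()
    forecast = fitted.get_forecast(steps=6)
    print(forecast.predicted_mean)           # point forecasts for the next 6 months
    print(forecast.conf_int(alpha=0.05))     # 95% prediction intervals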

Built business intelligence platform using Apache Superset for interactive dashboards, PowerBI with custom DAX calculations, real-time streaming analytics with Kafka and Apache Storm, and custom REST APIs using Flask and FastAPI.

Developed distributed hyperparameter optimization utilizing Ray Tune for scalable tuning, Optuna for TPE and CMA-ES algorithms, early stopping with successive halving and Hyperband, automated model selection with cross-validation.

Created automated MLOps pipeline implementing model retraining with Airflow and MLflow, drift detection using Evidently AI and alibi-detect, automated hyperparameter optimization, and continuous integration for model deployment.

Implemented explainable AI systems using SHAP values for feature importance, LIME for local explanations, attention visualization for transformer models, counterfactual explanations, and interactive dashboards using Streamlit and Plotly Dash.
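
A short SHAP example on a synthetic tree model, standing in for the production classifiers; the dataset and model are placeholders.

    import shap
    import xgboost as xgb
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=1000, n_features=15, random_state=0)  # stand-in data
    model = xgb.XGBClassifier(n_estimators=200, max_depth=4).fit(X, y)

    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)
    shap.summary_plot(shap_values, X)   # global feature-importance view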

Built customer lifetime value modeling implementing survival analysis with Cox proportional hazards using lifelines library, churn prediction with ensemble methods, imbalanced data handling using SMOTE and class weighting techniques.
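
A toy Cox proportional hazards fit with lifelines; the tiny DataFrame is fabricated solely to show the API shape.

    import pandas as pd
    from lifelines import CoxPHFitter

    # Customer tenure in months, churn event flag, and one covariate (illustrative rows).
    df = pd.DataFrame({
        "tenure_months": [3, 12, 24, 7, 36, 15],
        "churned":       [1,  0,  0, 1,  0,  1],
        "monthly_spend": [20, 55, 80, 25, 95, 30],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="tenure_months", event_col="churned")
    cph.print_summary()   # hazard ratios per covariate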

Designed real-time feature pipeline utilizing Kafka Streams for stream processing, Apache Flink for complex event processing, Spark Structured Streaming for transformations, and feature store implementation with Feast and Redis.

Developed graph-based recommendation systems using Neo4j and PyTorch Geometric, implementing GraphSAGE and Graph Attention Networks for capturing complex user-item relationships and social network effects.
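
A compact GraphSAGE encoder in PyTorch Geometric of the kind described; the dimensions and toy graph are illustrative.

    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import SAGEConv

    class SAGERecommender(torch.nn.Module):
        """Two-layer GraphSAGE encoder over a user-item interaction graph."""
        def __init__(self, in_dim: int, hidden_dim: int, out_dim: int):
            super().__init__()
            self.conv1 = SAGEConv(in_dim, hidden_dim)
            self.conv2 = SAGEConv(hidden_dim, out_dim)

        def forward(self, x, edge_index):
            h = F.relu(self.conv1(x, edge_index))
            return self.conv2(h, edge_index)  # node embeddings; score pairs via dot product

    # Toy graph: 4 nodes with 16-dim features and a couple of undirected edges.
    x = torch.randn(4, 16)
    edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])
    embeddings = SAGERecommender(16, 32, 8)(x, edge_index)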

Implemented natural language processing pipeline for customer feedback analysis using BERT, RoBERTa, and sentiment analysis models, processing large volumes of customer reviews monthly with high accuracy.

Created automated data quality monitoring system significantly reducing data inconsistencies and establishing governance practices across multiple data sources using Apache Griffin and custom validation rules.

Built real-time anomaly detection system using Isolation Forest, One-Class SVM, and Autoencoder networks for fraud detection, delivering high precision and recall on financial transaction data.
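
A minimal Isolation Forest sketch on synthetic transaction amounts, illustrating only the unsupervised scoring step.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    normal_txns = rng.normal(loc=100, scale=20, size=(5000, 1))   # typical amounts
    fraud_txns = rng.normal(loc=900, scale=50, size=(25, 1))      # injected outliers
    X = np.vstack([normal_txns, fraud_txns])

    iso = IsolationForest(n_estimators=200, contamination=0.005, random_state=0).fit(X)
    labels = iso.predict(X)                  # -1 = anomaly, 1 = normal
    print(int((labels == -1).sum()), "transactions flagged")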

Designed containerized ML deployment architecture using Docker, Kubernetes, and Helm charts, enabling seamless scaling and rolling updates for production models serving millions of requests daily.

Environment: Python, R, Apache Spark, PySpark, Apache Kafka, Apache Airflow, TensorFlow, PyTorch, Scikit-learn, MLflow, LangGraph, MCP, FastMCP, Ray Tune, Optuna, AWS (S3, EC2, Glue, SageMaker), Docker, Kubernetes, PostgreSQL, MongoDB, Redis, Neo4j, Apache Superset, PowerBI, Streamlit, FastAPI, Flask.

Data Soft Systems, Bangladesh July 2018 – July 2021

Data Scientist & ML Engineer

Project Description:

Technology consulting firm specializing in fintech solutions, real estate analytics, and data-driven business intelligence platforms for emerging markets.

Responsibilities:

Deployed scalable ETL pipelines using AWS services (Glue, Lambda, S3, EC2) processing enterprise-scale daily data from multiple heterogeneous sources, significantly reducing processing time through optimized data transformation workflows.

Developed sophisticated predictive models for property investment risk assessment using ensemble methods (Random Forest, XGBoost, Gradient Boosting) delivering substantial improvement in accuracy over baseline models, managing multi-million-dollar portfolio decisions.

Implemented comprehensive automated data quality monitoring system using Apache Griffin and custom validation rules, substantially reducing data inconsistencies and establishing governance practices across diverse data sources.

Built real-time analytics platform using Python, FastAPI, and PostgreSQL with PostGIS extensions, enabling data-driven investment decisions for substantial real estate portfolios through interactive dashboards and alerts.

Created ML-powered property valuation system using advanced regression techniques, ensemble methods, and geospatial analysis, delivering highly accurate price predictions across diverse property types and locations.

Implemented sophisticated time-series forecasting models using ARIMA, Prophet, LSTM networks, and seasonal decomposition for market trend prediction, significantly outperforming industry baseline models.

Introduced early RAG prototype integrating LLMs with PDF and HTML parsing for automated contract clause extraction.

Applied bias mitigation techniques and explainability methods (SHAP, counterfactual analysis) for model transparency.

Established Terraform CI/CD pipelines to automate LLM deployment and monitoring across AWS and Azure clouds.

Applied statistical modeling grounded in linear algebra and probability theory to design risk scoring algorithms for fintech applications.

Implemented custom gradient optimization routines in C++/Python hybrid workflows for property valuation models, improving computational efficiency.

Built automated data ingestion system handling diverse data formats and sources including APIs, databases, and flat files with exceptional success rate using Apache Airflow orchestration and robust error handling.

Developed AI-powered contract analysis system using autoencoders and ensemble generative models to extract clauses, obligations, and risk indicators from financial/legal contracts.

Built multilingual generative sentiment and NER models to analyze legal/financial documents across multiple jurisdictions, improving knowledge management for compliance and legal review teams.

Applied RNNs and LSTM-based sequence models to time-series forecasting and market trend generation, improving prediction accuracy in fintech and real estate analytics.

Designed AI-powered document analysis systems that used Autoencoders and ensemble generative models to extract clauses, terms, and features from contracts and financial records.

Built multilingual generative sentiment models to analyze voice/text data in real estate and fintech platforms, improving user experience and content insights.

Conducted experiments with ensemble generative techniques (GANs + classical ML) to reduce error in property valuation models.

Built hybrid C++/Python data loaders for time-series forecasting and geospatial analysis, ensuring memory-efficient large dataset handling in PyTorch.

Contributed to open-source ML libraries by submitting enhancements for geospatial analytics and model explainability, extending community adoption.

Developed multilingual sentiment analysis and classification models to process voice and text feedback in real estate and fintech applications.

Implemented entity recognition systems to extract contract clauses, financial terms, and location-specific attributes, improving automation accuracy.

Developed comprehensive geospatial analysis pipeline using PostGIS, GeoPandas, and Folium, processing location-based data for extensive property datasets with spatial indexing and proximity-based feature engineering.
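
A small GeoPandas sketch of the proximity-feature step; file names, layers, and the projected CRS are assumptions.

    import geopandas as gpd

    # Property points and transit-stop points, reprojected to a metric CRS.
    props = gpd.read_file("properties.geojson").to_crs(epsg=3857)
    stops = gpd.read_file("transit_stops.geojson").to_crs(epsg=3857)

    # Nearest-neighbor spatial join adds the distance (in meters) to the closest stop.
    joined = gpd.sjoin_nearest(props, stops, how="left", distance_col="dist_to_transit_m")
    print(joined[["geometry", "dist_to_transit_m"]].head())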

Created sophisticated risk scoring algorithms using ensemble methods, advanced feature engineering, and statistical modeling techniques, substantially reducing default prediction error and improving loan approval processes.

Implemented end-to-end data lineage tracking system using Apache Atlas and custom metadata management, ensuring compliance and auditability across all data transformations and model predictions.

Built real-time market anomaly alert system using Apache Kafka, Apache Storm for stream processing, and custom REST APIs, providing rapid notification to stakeholders of significant market changes.

Designed automated report generation system using Python, Pandas, LaTeX, and Matplotlib, producing comprehensive weekly executive reports with dynamic charts and statistical summaries.


