
SHREYAMSHA DEBBATA

Sr. Gen AI/ Data Scientist

***********@*****.*** 607-***-**** www.linkedin.com/in/debbata

Professional Summary:

11+ years of experience delivering enterprise AI, Generative AI, NLP, MLOps, and analytics solutions across the finance, healthcare, government, telecom, retail, and IT services domains, driving innovation and scalable adoption of data-driven systems.

Experience in Python, ML/LLM engineering, and data engineering. Skilled in designing scalable ML pipelines, integrating Google Cloud LLMs, Google Vector Search, and hybrid search methodologies. Expert in AWS SageMaker, Docker, Django, Flask, and distributed team collaboration. Proven ability to deploy production-grade ML solutions and optimize MLOps workflows for large-scale applications.

Experienced in integrating Google Cloud LLMs and building search pipelines using Google Vector Search and Hybrid Search methodologies.

Proven experience collaborating with data scientists, data engineers, and DevOps to build and deploy ML models using AWS SageMaker in hybrid environments. Skilled in SageMaker Pipelines, model monitoring, fine-tuning LLMs, and implementing CI/CD workflows for robust MLOps. Experience in Explainable AI (XAI) and production-ready LLM customization for agentic systems.

Strong hands-on knowledge in Python, PyTorch, TensorFlow, Keras, and NLP frameworks like SpaCy, NLTK, and Hugging Face Transformers, with deep implementation experience in GPT, LLaMA, Claude, BERT, RoBERTa, and LangChain. Skilled in working with Vector DBs, LoRA, RAG, PEFT, and Knowledge Graphs.

Well-versed in AWS cloud ecosystem (SageMaker, EC2, Lambda, EMR, S3), with robust experience in Jupyter Notebooks, agentic programming, and building AI solutions such as GraphRAG, Chain of Thought (CoT), and Tree of Thought (ToT). Expert in processing structured/unstructured data, deploying models in production, and delivering high-impact AI/ML solutions using advanced data science toolkits.

Proficient in SQL, Spark, Linux, Git/GitHub, and tools such as Tableau, QuickSight, and Kibana. Adept at translating ambiguous business problems into data-driven solutions and working in collaborative, fast-paced environments.

8+ years of hands-on experience and comprehensive industry knowledge of Artificial Intelligence/ Machine Learning, Statistical Modelling, Deep Learning, Data Analytics, Data Modelling, Data Analysis, Data Mining, Text Mining & Natural Language Processing (NLP), and Business Intelligence.

Good experience with analytics models such as Decision Trees and Linear & Logistic Regression, and with tools including Hadoop (Hive, Pig), R, Python, Spark, Scala, MS Excel, SQL, PostgreSQL, and Erwin.

Strong knowledge of all phases of the SDLC (Software Development Life Cycle) under Agile/Scrum, from analysis and design through development, testing, implementation, and maintenance.

Strong leadership in the fields of Data Cleansing, Web Scraping, Data Manipulation, Predictive Modeling with R and Python, and Power BI & Tableau for data visualization.

Experienced in Data Modeling techniques employing Data Warehousing concepts like Star/ Snowflake schema and Extended Star.

Hands-on experience across the entire Data Science project life cycle, including data extraction, data cleaning, statistical modeling, and data visualization on large structured and unstructured data sets, as well as creating ER diagrams and schemas.

Excellent knowledge of Artificial Intelligence/ Machine Learning, Mathematical Modeling, and Operations Research. Comfortable with R, Python, SAS, Weka, MATLAB, and relational databases, with a deep understanding of and exposure to the Big Data ecosystem.

Expertise in Data Analysis, Data Migration, Data Profiling, Data Cleansing, Transformation, Integration, and Data Import/Export using ETL tools such as Informatica PowerCenter.

Experience working in the AWS environment using S3, Athena, Lambda, AWS SageMaker, AWS Lex, AWS Aurora, QuickSight, CloudFormation, CloudWatch, IAM, Glacier, EC2, EMR, Rekognition, and API Gateway.

Proficient in Artificial Intelligence/ Machine Learning, Data/ Text Mining, Statistical Analysis & Predictive Modeling.

Applied parameter-efficient tuning approaches such as LoRA, PEFT, and QLoRA for domain adaptation while minimizing compute and memory footprint.
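
For illustration, a minimal sketch of what LoRA-style adaptation can look like with the Hugging Face peft library; the base checkpoint, rank, and target modules below are assumptions rather than the exact production configuration.

    # Illustrative LoRA setup with Hugging Face peft (assumed, not the production config).
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, TaskType, get_peft_model

    base_model = "meta-llama/Llama-2-7b-hf"                  # hypothetical base checkpoint
    tokenizer = AutoTokenizer.from_pretrained(base_model)    # used by the downstream training loop
    model = AutoModelForCausalLM.from_pretrained(base_model)

    # Low-rank adapters on the attention projections keep the trainable footprint small.
    lora_cfg = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=16,                                # adapter rank
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],
    )
    model = get_peft_model(model, lora_cfg)
    model.print_trainable_parameters()       # typically well under 1% of the base weights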

Built deep learning and transformer pipelines with PyTorch, TensorFlow, and Hugging Face Transformers for classification, summarization, and generation tasks.

Designed clinical and regulatory NLP workflows using BioBERT, ClinicalBERT, BERT, T5, and BART for entity extraction, de-identification, and summarization.

Engineered data platforms and feature pipelines using Apache Spark, PySpark, Databricks, Airflow, and Kafka to support large-scale model training and streaming inference.

Implemented ETL and warehousing solutions on BigQuery, Snowflake, Azure Data Lake, and AWS S3 for scalable training datasets and analytics.

Established end-to-end MLOps with MLflow, Kubeflow, AWS SageMaker, Azure ML, and GCP Vertex AI for reproducible experiments, model registry, and automated retraining.

Containerized and orchestrated workloads via Docker, Kubernetes, and Helm to enable resilient deployments across hybrid clouds.

Built monitoring and observability using Prometheus, Stackdriver, ELK Stack, and Grafana to track model drift, latency, and infra health.

Developed business reporting and dashboards using Tableau, Power BI, and Looker, converting model outputs into actionable KPIs.

Applied graph analytics with Neo4j for link analysis, fraud detection, and relationship-based recommendations.

Designed multimodal solutions combining text, images, and tabular data using OpenCV, transformer encoders, and CNNs for richer business signals.

Used traditional and modern ML algorithms such as XGBoost, LightGBM, Random Forest, SVM, and deep LSTM/Transformer models depending on problem fit.

Implemented secure data practices: IAM, RBAC, KMS encryption, and compliance controls for HIPAA, GDPR, and SOC2 contexts.

Applied prompt engineering, instruction tuning, and few/zero-shot techniques to improve LLM utility for non-technical users.

Led cross-functional collaboration with product, engineering, compliance, and operations teams to deliver production-grade AI initiatives.

Mentored teams on GenAI best practices, model explainability (SHAP/LIME), and responsible AI governance.

Technical Skills:

Generative AI & LLMs

GPT-4, LLaMA-3, Falcon, Claude, LangChain, LlamaIndex, RAG, LoRA, PEFT, QLoRA, Hugging Face Transformers

NLP & Speech

BERT, RoBERTa, BioBERT, ClinicalBERT, T5, BART, spaCy, NLTK, Wav2Vec2

ML & DL Frameworks

PyTorch, TensorFlow, Keras, Scikit-learn, XGBoost, LightGBM

Vector DB & Semantic Search

Pinecone, FAISS, ElasticSearch

Data Engineering & Streaming

Apache Spark, PySpark, Databricks, Apache Kafka, Airflow, Hadoop, Hive

Data Warehouse / Storage

BigQuery, Snowflake, PostgreSQL, MySQL, SQL Server, Oracle, MongoDB

Graph & Relationship Analytics

Neo4j, Graph ML

MLOps & Deployment

MLflow, Kubeflow, AWS SageMaker, GCP Vertex AI, Azure ML, Docker, Kubernetes, Helm

Cloud Platforms & Tools

AWS, GCP, Azure, S3, GCS, Azure Data Lake

Visualization & BI

Tableau, Power BI, Looker, Matplotlib, Seaborn, Plotly

CI/CD & Infrastructure

GitHub Actions, Jenkins, Azure DevOps, Terraform, CloudFormation

Security & Compliance

IAM, RBAC, KMS, HIPAA, GDPR, SOC2

Programming & APIs

Python, R, SQL, FastAPI, Flask

Other Tools

OpenCV, DVC

Education:

Bachelor's in Computer Science, JNTUH, Hyderabad, India, 2013

Professional Experience:

Client: Morgan Stanley, New York, NY May 2024 - Present

Role: Sr. Gen AI/ML / Prompt Engineer

Responsibilities:

Involved in requirement analysis, application development, application migration, and maintenance using Software Development Lifecycle (SDLC) and Python technologies.

Developed MLOps pipelines on AWS using SageMaker Pipelines, Lambda, and Step Functions to orchestrate training, tuning, and deployment.

Adapted existing Retrieval-Augmented Generation (RAG) pipelines to leverage Google Vertex AI and Google LLM APIs for scalable LLM deployments.

Designed hybrid search strategies combining vector similarity search and keyword-based search using Google Vector Search for improving document retrieval accuracy.
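
As an illustration of the fusion step, the sketch below blends normalized dense and keyword scores with a weighted sum; dense_search and keyword_search are hypothetical stand-ins for the vector and keyword retrievers, not a specific Google Vector Search client.

    # Hybrid retrieval sketch: min-max normalize each score set, then blend with weight alpha.
    def hybrid_search(query, dense_search, keyword_search, alpha=0.6, top_k=10):
        dense_hits = dense_search(query)      # {doc_id: vector similarity}
        keyword_hits = keyword_search(query)  # {doc_id: BM25-style score}

        def normalize(scores):
            if not scores:
                return {}
            lo, hi = min(scores.values()), max(scores.values())
            span = (hi - lo) or 1.0
            return {doc: (s - lo) / span for doc, s in scores.items()}

        dense_hits, keyword_hits = normalize(dense_hits), normalize(keyword_hits)
        fused = {
            doc: alpha * dense_hits.get(doc, 0.0) + (1 - alpha) * keyword_hits.get(doc, 0.0)
            for doc in set(dense_hits) | set(keyword_hits)
        }
        return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)[:top_k]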

Set up model monitoring and alerting workflows to ensure ongoing model performance using CloudWatch and SageMaker Model Monitor.
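
One way such an alert can be wired up is a CloudWatch alarm on a drift metric via boto3; the namespace, metric name, and SNS topic below are placeholders, not the actual resources.

    # Sketch: raise an alarm when a monitoring job's drift metric crosses a threshold.
    import boto3

    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName="llm-endpoint-feature-drift",
        Namespace="Custom/ModelMonitor",          # assumed namespace for monitor metrics
        MetricName="feature_baseline_drift",      # assumed metric emitted by the monitoring job
        Statistic="Average",
        Period=3600,
        EvaluationPeriods=1,
        Threshold=0.2,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ml-alerts"],  # placeholder SNS topic
    )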

Deployed LLM-based workflows for call transcript summarization using GPT-4, LangChain, and vector databases (RAG architecture).

Fine-tuned domain-specific LLMs using LoRA/PEFT and created performance benchmarks using accuracy, BLEU, perplexity, and human feedback.
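
A compact sketch of how such benchmarks can be computed: corpus BLEU via sacrebleu and perplexity from the model's token-level loss. The model, tokenizer, and evaluation texts are assumed to come from the fine-tuning run.

    import math
    import sacrebleu
    import torch

    def corpus_bleu(hypotheses, references):
        # hypotheses and references are aligned lists of strings
        return sacrebleu.corpus_bleu(hypotheses, [references]).score

    def perplexity(model, tokenizer, texts, device="cuda"):
        total_loss, total_tokens = 0.0, 0
        model.eval()
        with torch.no_grad():
            for text in texts:
                enc = tokenizer(text, return_tensors="pt").to(device)
                out = model(**enc, labels=enc["input_ids"])
                n_tokens = enc["input_ids"].numel()
                total_loss += out.loss.item() * n_tokens
                total_tokens += n_tokens
        return math.exp(total_loss / total_tokens)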

Developed end-to-end ML pipelines using AWS SageMaker Pipelines, Step Functions, and Lambda.

Integrated Google LLMs and Google Vector Search into NLP workflows for intelligent document search and hybrid search solutions.

Built RAG (Retrieval Augmented Generation) pipelines utilizing LangChain and custom embedding models.
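
A minimal RAG sketch in the classic LangChain style; import paths vary across LangChain versions, and the embedding model, LLM, and documents variable are assumptions for illustration.

    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain.embeddings import HuggingFaceEmbeddings
    from langchain.vectorstores import FAISS
    from langchain.chains import RetrievalQA
    from langchain.llms import OpenAI

    # Chunk pre-loaded documents, embed them, and index the chunks in FAISS.
    splitter = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=100)
    chunks = splitter.split_documents(documents)          # `documents` loaded upstream

    embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
    store = FAISS.from_documents(chunks, embeddings)

    # Retrieval-augmented QA: top-k chunks are stuffed into the prompt at query time.
    qa = RetrievalQA.from_chain_type(
        llm=OpenAI(temperature=0),
        retriever=store.as_retriever(search_kwargs={"k": 4}),
    )
    answer = qa.run("Summarize the key commitments in this agreement.")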

Deployed and monitored LLM fine-tuned models (GPT-4, Claude) using LoRA and PEFT techniques.

Built REST APIs using Flask and deployed on Docker containers in hybrid AWS/Google Cloud environments.
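
A bare-bones sketch of an inference endpoint of this shape; the model artifact and request schema are illustrative assumptions.

    from flask import Flask, jsonify, request
    import joblib

    app = Flask(__name__)
    model = joblib.load("model.joblib")        # hypothetical serialized model artifact

    @app.route("/predict", methods=["POST"])
    def predict():
        payload = request.get_json(force=True)
        features = payload["features"]         # expected: list of numeric feature vectors
        preds = model.predict(features).tolist()
        return jsonify({"predictions": preds})

    if __name__ == "__main__":
        # Inside the container this would normally sit behind gunicorn rather than the dev server.
        app.run(host="0.0.0.0", port=8080)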

Optimized model training costs through advanced instance management and parallel processing.

Designed, implemented, and monitored ML solutions ensuring high performance and low latency.

Built Support Vector Machine models to detect fraudulent and dishonest customer behavior using Python packages including Scikit-learn, NumPy, SciPy, Matplotlib, and Pandas.
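
A simplified sketch of this kind of SVM fraud classifier; the input file and label column are placeholders.

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.metrics import classification_report

    df = pd.read_csv("transactions.csv")                     # hypothetical transaction features
    X, y = df.drop(columns=["is_fraud"]), df["is_fraud"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

    # Scale features, then fit an RBF-kernel SVM weighted toward the rare fraud class.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))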

Used AWS S3, DynamoDB, AWS Lambda, and AWS EC2 for data storage and model deployment. Worked extensively with AWS services such as SageMaker, Lambda, Lex, EMR, S3, and Redshift.

Used AWS Transcribe to obtain call transcripts and performed text processing (cleaning, tokenization, and lemmatization).

Performed feature engineering such as feature-interaction generation, feature normalization, and label encoding using Scikit-learn preprocessing.

Designed the data marts in dimensional data modeling using Snowflake schemas.

Generated executive summary dashboards to display performance monitoring measures with Power BI.

Developed and implemented predictive models using Artificial Intelligence/ Machine Learning algorithms such as linear regression, classification, multivariate regression, Naive Bayes, RandomForest, K-means clustering, KNN, PCA and regularization for Data Analysis.

Leveraged AWS SageMaker to build, train, tune, and deploy state-of-the-art Artificial Intelligence/ Machine Learning and Deep Learning models.

Built classification models including Logistic Regression, SVM, Decision Tree, and Random Forest.

Used the Pandas API to organize data in time-series and tabular formats for easy timestamp-based manipulation and retrieval.

Created ETL specification documents, flowcharts, process workflows, and data flow diagrams.

Designed both 3NF data models for OLTP systems and dimensional data models using star and snowflake Schemas.

Worked on snowflaking the dimensions to remove redundancy.

Created reports utilizing Excel services and Power BI.

Applied Deep Learning (RNN) to find the optimum route for guiding the tree-trim crew.

Used the XGBoost algorithm to predict storms under different weather conditions and Deep Learning to analyze the severity of post-storm effects on power lines and circuits.

Worked with Snowflake SaaS for cost-effective data warehouse implementation in the cloud.

Developed Data Mapping, Transformation, and Cleansing rules for the Master Data Management architecture involving OLTP, ODS, and OLAP.

Produced A/B test readouts to drive launch decisions for search algorithms including query refinement, topic modeling, and signal boosting and machine-learned weights for ranking signals.

Implemented an image recognition (CNN + SVM) anomaly detector and convolutional neural networks to identify fraudulent purchase patterns.

Designed and developed Power BI graphical and visualization solutions from business requirement documents, including plans for creating interactive dashboards.

Environment: Python, PyTorch, Hugging Face, GPT-4, LLaMA-3, LangChain, Pinecone, BigQuery, GCP Vertex AI, Airflow, Spark, Docker, Kubernetes, MLflow, GitHub Actions, Prometheus, Stackdriver, Looker, AWS RDS, KMS, SDLC, Scikit-learn, NumPy, SciPy, Matplotlib, Pandas, AWS S3, DynamoDB, AWS Lambda, AWS EC2, SageMaker, Lex, EMR, Redshift, Snowflake, RNN, Machine Learning, Deep Learning, OLAP, ODS, OLTP, 3NF, Naive Bayes, Random Forest, K-means clustering, KNN, PCA, Power BI.

Client: Florida Department of Health, Tallahassee, FL Feb 2023 - Apr 2024

Role: Gen-AI/AI Engineer

Responsibilities:

Involved in Data Analysis, Data Validation, Data Cleansing, Data Verification and identifying data mismatch. Performed data imputation using Scikit-learn package in Python.

Built advanced GenAI capabilities leveraging Hugging Face Transformers and LLaMA for multi-label classification on architectural documentation.

Integrated NLP pipelines with LangChain and vector search components for document similarity scoring, semantic search, and clustering.

Applied fine-tuning techniques like LoRA and PEFT to improve model accuracy on specialized architectural text datasets and implemented GraphRAG for structured output generation.

Built several predictive models using machine learning algorithms such as Logistic Regression, Linear Regression, Lasso Regression, K-Means, Decision Tree, Random Forest, Naïve Bayes, Social Network Analysis, Cluster Analysis, Neural Networks, XGBoost, KNN, and SVM.

Built detection and classification models using Python, TensorFlow, Keras, and scikit-learn.

Worked with Amazon Web Services provisioning and AWS services such as EC2, S3, Redshift, Glacier, Bamboo, API Gateway, ELB (Load Balancers), RDS, SNS, SWF, and EBS.

Integrated NLP pipelines with LangChain and Google Vector Search for semantic document clustering and hybrid retrieval.

Built fine-tuned models on architectural datasets using Hugging Face Transformers and Google Cloud Vertex AI.

Implemented Flask-based microservices deployed on Docker for scalable ML model inference.

Developed monitoring dashboards using AWS CloudWatch, Google Cloud Monitoring, and QuickSight.

Led initiatives enhancing communication across distributed teams using agile methodologies.

Developed the required data warehouse model using a Snowflake schema for the generalized model.

Worked on processing the collected data using Python Pandas and Numpy packages for statistical analysis.

Used cognitive science in Artificial Intelligence/ Machine Learning for neurofeedback training, which is essential for intentional control of brain rhythms.

Worked on data cleaning and ensured Data Quality, consistency, integrity using Pandas, and Numpy.

Developed Star and Snowflake schemas based dimensional model to develop the data warehouse.

Used Numpy, Scipy, Pandas, NLTK (Natural Language Processing Toolkit), and Matplotlib to build models.

Involved in text analytics, generating data visualizations using Python and creating dashboards using tools like Power BI.

Performed Naïve Bayes, KNN, Logistic Regression, Random Forest, SVM, and XGBoost to identify whether a design will default or not.

Managed database design and implemented a comprehensive Snowflake schema with shared dimensions.

Applied various Artificial Intelligence (AI)/ Machine Learning algorithms and statistical modeling techniques, including decision trees, text analytics, natural language processing (NLP), supervised and unsupervised learning, and regression models.

Implemented an ensemble of Ridge Regression, Lasso Regression, and XGBoost to predict the potential loan default loss.

Performed data cleaning and feature selection using the MLlib package in PySpark and worked with deep learning frameworks.

Involved in scheduling refresh of Power BI reports, hourly and on-demand.

Tracked experiments, versioned models, and maintained audit readiness using MLflow in clinical ML pipelines.

Integrated and normalized data following FHIR standards, building ETL transformations to convert disparate EHR exports into uniform model-ready datasets for training and inference.

Provisioned secure model endpoints on AWS SageMaker, implementing role-based access, VPC isolation, and audit logging to meet HIPAA controls for inference services.

Implemented PHI de-identification using spaCy pipelines combined with deterministic regex and token masking strategies to enable safe model training on clinical text.
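
A small sketch of the masking approach, assuming spaCy NER plus deterministic regex; the patterns and entity labels shown are illustrative, not the full production rule set.

    import re
    import spacy

    nlp = spacy.load("en_core_web_sm")
    SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
    PHONE_RE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

    def deidentify(text):
        doc = nlp(text)
        # Replace entities most likely to carry PHI, iterating in reverse so offsets stay valid.
        for ent in reversed(doc.ents):
            if ent.label_ in {"PERSON", "GPE", "ORG", "DATE"}:
                text = text[:ent.start_char] + f"[{ent.label_}]" + text[ent.end_char:]
        text = SSN_RE.sub("[SSN]", text)
        return PHONE_RE.sub("[PHONE]", text)

    print(deidentify("John Smith, SSN 123-45-6789, seen March 3rd at the Tallahassee clinic."))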

Designed time-series models combining Prophet and LSTM ensembles to forecast patient vital trends and near-term deterioration risk for remote monitoring programs.

Fine-tuned LLaMA-2 and Falcon models with LoRA adapters on de-identified conversational logs to improve triage and patient messaging accuracy.

Modeled drug interaction graphs in Neo4j, using graph embeddings and traversal algorithms to detect potential drug–drug interactions.

Built dashboards in Power BI that integrate predicted risk scores, recommended actions, and evidence snippets for decision support.

Environment: SDLC, Python, Scikit-learn, NumPy, SciPy, Matplotlib, Pandas, AWS S3, DynamoDB, AWS Lambda, AWS EC2, SageMaker, NLTK, Lex, EMR, Redshift, Machine Learning, Deep Learning, Snowflake, OLAP, OLTP, Naive Bayes, Random Forest, K-means clustering, KNN, PCA, PySpark, XGBoost, TensorFlow, Keras, Hugging Face, BioBERT, ClinicalBERT, Falcon, LLaMA-2, HIPAA, Azure DevOps, Jenkins, Kafka, Docker, Kubernetes, Neo4j, Power BI.

Client: State of Colorado, Denver, CO Aug 2021 - Jan 2023

Role: AI/ML Consultant

Responsibilities:

Facilitated agile team ceremonies including Daily Stand-up, Backlog Grooming, Sprint Review, Sprint Planning etc.

Collaborated with data engineers and operation team to implement ETL process, wrote and optimized SQL queries to perform data extraction to fit the analytical requirements.

Involved in building database models, APIs, and views utilizing Python in order to build an interactive web-based solution.

Performed univariate and multivariate analysis on the data to identify any underlying pattern in the data and associations between the variables.

Used Pandas, Numpy, Seaborn, Matplotlib, Scikit-Learn, Scipy, and NLTK in Python for developing various Artificial Intelligence/ Machine Learning algorithms like XGBOOST.

Built machine learning models (XGBoost, Random Forest, SVM) to optimize financial risk and credit scoring.
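
For illustration, a compact version of such a credit-risk model with XGBoost; the file name, features, and hyperparameters are assumptions.

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score
    from xgboost import XGBClassifier

    df = pd.read_csv("credit_features.csv")                  # hypothetical feature table
    X, y = df.drop(columns=["default_flag"]), df["default_flag"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=7)

    model = XGBClassifier(n_estimators=400, max_depth=5, learning_rate=0.05, subsample=0.8)
    model.fit(X_train, y_train)
    print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))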

Designed Snowflake schema data models and implemented scalable ETL pipelines for ML feature readiness.

Automated deployment pipelines with Docker and integrated model hosting on AWS SageMaker.

Assisted in implementing hybrid search strategies combining structured SQL queries with semantic search models.

Built and Developed an End-to End Data Engineering pipeline with automated data ingestion using Snowflake and AWS (S3, and RDS).

Analyzed technical and economic feasibility for clients, performing requirements gathering to optimize and reduce project expenses by up to 60%.

Performed data imputation using Scikit-learn package in Python.

Ensured that the models had a low false positive rate; performed text classification and sentiment analysis on unstructured and semi-structured data.

Curated SEO-optimized solutions for business enterprises to boost sales and internet presence by 70%.

Worked with the data analytics team to develop time-series models and optimizations.

Developed an end-to-end multilingual E-Learning Management System (SCORM compliant) based on Articulate 360 and Redwood Web Authoring Tools.

Created Logical and Physical data models with Star and Snowflake schema techniques using Erwin in Data warehouse as well as in Data Mart.

Utilized Power Query in Power BI to Pivot and Un-pivot the data model for data cleansing and data massaging.

Performed ad-hoc requests for clients using SQL queries to extract and format requested information.

Involved in Data Analysis, Data Validation, Data Cleansing, Data Verification and identifying data mismatch.

Designed data model, analyzed data for online transactional processing (OLTP) and Online Analytical Processing (OLAP) systems.

Worked with normalization and de-normalization concepts and design methodologies.

Wrote and executed customized SQL code for ad-hoc reporting duties and used other tools for routine reporting.

Created customized reports in Power BI for data visualization.

Implemented automated release pipelines via Azure DevOps, gating deployments with unit and integration tests, data quality checks, and canary rollout strategies.

Designed resource allocation forecasting with XGBoost and Prophet to help plan staffing levels across seasonal demand cycles.

Configured observability for AI services using the ELK Stack and Grafana, instrumenting logs, metrics, and traces to speed incident response.

Ran hands-on workshops with staff to demonstrate GenAI capabilities, governance needs, and safe prompt patterns for public sector use.

Secured sensitive assets with IAM and granular RBAC rules, and implemented dataset access controls for cross-agency collaborations.

Deployed semantic search using FAISS embeddings for rapid retrieval across archived cases and historic records.
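
A minimal sketch of FAISS-backed semantic retrieval; the sentence-transformers encoder and toy documents are stand-ins for the actual embedding pipeline and case archive.

    import faiss
    from sentence_transformers import SentenceTransformer

    encoder = SentenceTransformer("all-MiniLM-L6-v2")         # assumed embedding model
    docs = ["case A summary ...", "case B summary ...", "historic record C ..."]

    vectors = encoder.encode(docs, normalize_embeddings=True).astype("float32")
    index = faiss.IndexFlatIP(vectors.shape[1])   # inner product == cosine on normalized vectors
    index.add(vectors)

    query = encoder.encode(["benefits eligibility appeal"], normalize_embeddings=True).astype("float32")
    scores, ids = index.search(query, 3)
    for rank, (i, score) in enumerate(zip(ids[0], scores[0]), start=1):
        print(rank, round(float(score), 3), docs[i])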

Integrated LIME and SHAP explainability views into dashboards to support auditors and policy reviewers.
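
A self-contained sketch of the SHAP view behind those dashboard panels; the synthetic data and model here stand in for the production risk models.

    import shap
    from sklearn.datasets import make_classification
    from xgboost import XGBClassifier

    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    model = XGBClassifier(n_estimators=100).fit(X, y)

    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)
    shap.summary_plot(shap_values, X, show=False)   # global importance view embedded in the dashboard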

Environment: ER/Studio, SQL, Python, APIs, OLAP, OLTP, PL/SQL, Oracle, Teradata, Power BI, ETL, Redshift, Pandas, NumPy, Seaborn, Matplotlib, Scikit-Learn, SciPy, NLTK, XGBoost, Tableau, Power Query, Snowflake, AWS, S3, RDS, Erwin, Hugging Face, Azure ML, GCP Vertex AI, Databricks, Spark SQL, FAISS, ELK Stack, Grafana, Azure DevOps, LIME, SHAP, spaCy, Looker.

Client: T-Mobile, Herndon, VA Apr 2019 - Jul 2021

Role: Senior Data Scientist

Responsibilities:

Implemented recommendation systems combining collaborative filtering and XGBoost ranking to personalize plan offers and promotions across customer segments.

Built churn prediction pipelines using Random Forest and gradient-boosted trees to surface at-risk subscribers and power retention workflows within CRM.

Engineered streaming analytics with Spark Streaming to process CDRs and network events for near real-time alerting on quality degradations and outages.
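
A sketch of that streaming shape with Spark Structured Streaming; broker addresses, topic, schema, and the drop-rate threshold are illustrative assumptions.

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

    spark = SparkSession.builder.appName("cdr-quality-alerts").getOrCreate()

    schema = StructType([
        StructField("cell_id", StringType()),
        StructField("event_time", TimestampType()),
        StructField("drop_rate", DoubleType()),
    ])

    events = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")   # placeholder brokers
              .option("subscribe", "network-events")              # placeholder topic
              .load()
              .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
              .select("e.*"))

    # Five-minute windowed drop-rate averages feed the alerting consumer downstream.
    alerts = (events
              .withWatermark("event_time", "10 minutes")
              .groupBy(F.window("event_time", "5 minutes"), "cell_id")
              .agg(F.avg("drop_rate").alias("avg_drop_rate"))
              .filter(F.col("avg_drop_rate") > 0.05))

    query = alerts.writeStream.outputMode("update").format("console").start()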

Developed transcript summarization using Transformer models to extract key call insights and surface root causes for support teams.

Performed customer segmentation using K-Means and density-based clustering to drive targeted campaigns and product experiments.

Packaged inference as REST endpoints via Flask, containerized in Docker, and deployed behind autoscaling groups for reliable throughput.

Integrated ML outputs into CRM via secure REST APIs to drive agent recommendations and next-best-action prompts.

Built interactive Tableau dashboards to monitor churn signals, NPS trends, and agent performance across regions.

Forecasted call volumes using ARIMA and LSTM ensembles, optimizing staffing and capacity planning for contact centers.

Applied Wav2Vec2 based pipelines to convert voice into text for automated quality analysis and sentiment scoring.

Implemented anomaly detection using Isolation Forest and statistical baselines to flag billing irregularities and usage fraud.
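
A simplified sketch of the Isolation Forest pass; the input file, feature columns, and contamination rate are placeholders.

    import pandas as pd
    from sklearn.ensemble import IsolationForest

    usage = pd.read_csv("billing_usage.csv")        # hypothetical per-account usage/charge features
    features = usage[["monthly_charge", "data_gb", "intl_minutes", "late_payments"]]

    detector = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
    usage["anomaly"] = detector.fit_predict(features)   # -1 marks likely irregular accounts
    flagged = usage[usage["anomaly"] == -1]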

Instrumented observability with AWS CloudWatch and ELK dashboards to track service performance and pipeline health.

Environment: Python, R, Spark Streaming, XGBoost, Random Forest, Wav2Vec2, Flask, Docker, AWS EC2, AWS Lambda, CloudWatch, Tableau, ELK Stack.

Client: Kroger, Glen Allen, VA Jan 2016 - Mar 2019

Role: Data Analyst

Responsibilities:

Developed robust SQL ETL procedures and stored procedures to ingest and transform high-volume POS and e-commerce transaction feeds for analytics and reporting.

Performed exploratory analysis in Python (Pandas, NumPy, Matplotlib) to uncover seasonality, SKU-level demand anomalies, and promotion lift effects for merchandising decisions.

Built executive and operational dashboards in Tableau and Power BI highlighting daily sales, inventory health, and margin trends, with drilldowns for store and SKU-level action.

Implemented product similarity analysis with Word2Vec and FastText embeddings to improve search relevance and assist merchandising for cross-sell opportunities.
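
As a toy illustration of the embedding approach, a gensim Word2Vec sketch over tokenized baskets; the corpus below is a stand-in for the actual product title and basket data.

    from gensim.models import Word2Vec

    baskets = [
        ["organic", "bananas", "peanut", "butter"],
        ["peanut", "butter", "grape", "jelly", "bread"],
        ["bread", "butter", "eggs", "milk"],
    ]

    model = Word2Vec(sentences=baskets, vector_size=64, window=3, min_count=1, workers=2, epochs=20)
    print(model.wv.most_similar("butter", topn=3))   # nearest tokens as cross-sell candidates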

Created sales forecasting pipelines using ARIMA, Prophet, and baseline regression models to support replenishment and allocation planning.

Implemented Hadoop/Spark ETL jobs to scale historical data processing, including partitioning strategies and incremental loads.

Automated recurring Excel and SQL reporting tasks to reduce manual effort and improve delivery cadence of business insights.

Conducted customer segmentation and cohort analyses using clustering approaches to drive targeted marketing and loyalty programs.

Documented data definitions, KPI calculations, and reporting SLAs to maintain consistent analytics across teams.

Supported ad-hoc analytical requests for marketing, finance, and operations with actionable visualizations and SQL-backed analyses.

Environment: SQL Server, Oracle, Hadoop, Hive, Spark, Tableau, Power BI, Python (Pandas, NumPy, Matplotlib), Word2Vec, FastText, Excel.

Client: Value Momentum, Hyderabad, India May 2013 – March 2015

Role: Data Analyst

Responsibilities:

Delivered cross-client reporting solutions by building interactive Tableau dashboards and automated Excel reports to track financial, HR, and operational KPIs.

Wrote complex SQL queries and joins to extract, aggregate, and reconcile data from Oracle and SQL Server sources for daily and weekly reporting.

Created calculated fields, parameters, and dynamic filters in Tableau to enable flexible drill-down analysis and self-service exploration for stakeholders.

Migrated legacy Excel/Access reports into maintainable Tableau workbooks, enabling centralized publishing and scheduled distribution.

Partnered with ETL and data engineering teams to document data lineage and ensure consistent feeds for analytics consumption.

Performed QA and validation of dashboards against source systems to ensure accuracy for regulatory and executive reporting.

Conducted end-user training and created documentation to increase dashboard adoption and reduce ad-hoc reporting requests.

Implemented role-based access controls on Tableau Server to safeguard sensitive reports and enable controlled sharing.

Maintained repository of report metadata and business rules to support knowledge transfer and audits.

Supported operational analytics with timely data extracts and visualization for service delivery teams.

Environment: Tableau Desktop, Tableau Server, SQL Server, Oracle, Excel, Access, ETL Pipelines, Data Warehousing.


