
Senior AI/ML Engineer and Python Expert

Location: Southfield, MI
Posted: December 17, 2025


LOHITH AMARAGANI

AI/ML Engineer | Senior Python Developer

+1-989-***-**** *****************@*****.***

PROFESSIONAL SUMMARY

Senior Python Software Engineer and AI/ML professional with 9+ years of IT experience delivering enterprise-grade software solutions.

Involved in all phases of the Software Development Life Cycle (SDLC), including requirement gathering, system design, development, testing, deployment, and maintenance of AI/ML and software applications.

Strong experience building backend services and supporting full-stack applications using Python, JavaScript, REST APIs, and modern web frameworks.

Proven expertise in designing, developing, and deploying end-to-end machine learning and deep learning solutions, including NLP systems, computer vision models, and predictive analytics using frameworks like TensorFlow, PyTorch, and Hugging Face Transformers.

Strong backend development skills using Python and frameworks such as Django, Flask, and FastAPI to build scalable RESTful APIs and microservices, supporting high-performance web and AI applications.

Skilled in applying NLP techniques, fine-tuning large language models (BERT, GPT, LLaMA), and building Retrieval-Augmented Generation (RAG) pipelines for domain-specific tasks such as summarization, sentiment analysis, entity extraction, and intelligent automation.

Familiar with using Kafka and Spark Structured Streaming for building low-latency systems capable of handling high-throughput event-driven architectures.

Hands-on experience in developing and deploying AI-powered systems on major cloud platforms (AWS, Azure, GCP).

Proficient in building CI/CD pipelines, containerizing ML workflows with Docker and Kubernetes, and leveraging infrastructure-as-code for reliable model deployment.

Excellent communication, analytical, business, and interpersonal skills; comfortable working independently or as part of a team.

Adept at working in agile, cross-functional environments, collaborating with global stakeholders to deliver high-quality, scalable solutions aligned with business objectives.

Collaborated with internal and external teams to address issues, provide feedback, and document project progress and challenges accurately.

Strong experience designing high-level software architectures, defining APIs, and selecting appropriate technology stacks and deployment strategies.

Familiar with test-driven development, CI/CD practices, application performance optimization, and secure software development principles.

TECHNICAL SKILLS

Programming Languages

Python (FastAPI, Flask, Django), R, SQL, JavaScript, HTML, CSS

Databases

PostgreSQL, MySQL, MongoDB, Neo4j

Cloud Platforms

AWS (Lambda, S3, EC2, SageMaker, Bedrock), GCP (Vertex AI), Azure

DevOps & CI/CD

Docker, Kubernetes (EKS/GKE), Terraform, GitHub Actions, Jenkins, REST API Design, Unit Testing, Agile, Scrum

Data Science Libraries

Pandas, NumPy, SciPy, Scikit-Learn, Seaborn, Matplotlib, NLTK

Business Intelligence

Tableau, Power BI

Data Engineering Tools

PySpark, ETL Tools, Data Warehousing, Databricks, Apache Airflow

Version Control

Git, GitHub

Machine Learning

Supervised Learning: Linear/Logistic Regression, Decision Trees, Support Vector Machines (SVM), K-Nearest Neighbors (KNN), Naïve Bayes, XGBoost, Random Forest.

Unsupervised Learning: K-Means Clustering, Principal Component Analysis (PCA), Singular Value Decomposition (SVD), Hierarchical Clustering.

Deep Learning & NLP

Frameworks & Libraries: PyTorch, TensorFlow, Hugging Face Transformers. NLP Techniques: Tokenization, Encoding, Sentiment Analysis, Text Summarization, Text Classification. Architectures & Models: CNN, RNN, Transformers (BERT, GPT, LLaMA). LLM Platforms: Ollama, OpenAI, Google Gemini.

Generative AI, LLMs, Agentic AI

Prompt engineering, fine-tuning techniques (LoRA, PEFT), RAG, Vector DBs (FAISS, Pinecone), LangChain, LangGraph, MCP, Knowledge Graphs, Ontology Design, OWL, RDF, SPARQL, JSON-LD, Semantic Reasoning, AutoGen, CrewAI.

PROFESSIONAL EXPERIENCE

Client: Summit Radiology, Fort Wayne, IN | Jan 2023 – Present

Project: AI-Driven Clinical Knowledge Automation & Medical Imaging Intelligence Platform

Designed and deployed advanced ML, CNN-based medical imaging models for disease detection and treatment monitoring using PyTorch and TensorFlow.
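
A toy PyTorch sketch in the spirit of the CNN imaging models described above; the layer sizes, input resolution, and two-class head are illustrative assumptions, not the deployed architecture.

import torch
import torch.nn as nn

class TinyScanNet(nn.Module):
    """Illustrative two-block CNN for single-channel scan slices."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)  # assumes 64x64 inputs

    def forward(self, x):
        x = self.features(x)          # (B, 32, 16, 16) for 64x64 inputs
        return self.head(x.flatten(1))

model = TinyScanNet()
logits = model(torch.randn(4, 1, 64, 64))  # batch of 4 single-channel slices
print(logits.shape)  # torch.Size([4, 2])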

Engaged directly with healthcare stakeholders to understand business needs and translate them into AI and Generative AI solution approaches.

Supported AI solution walkthroughs and technical discussions during proposal and pre-implementation phases.

Utilized imaging data (MRI, CT), clinical, and genomic data to develop AI-powered solutions that support data-driven drug development and biomarker discovery.

Standardized and normalized medical image data from MRI and CT scans, reducing equipment-related inconsistencies and ensuring data consistency, which improved model accuracy by 20%.

Defined high-level AI system architecture covering data pipelines, model lifecycle, deployment strategy, and enterprise integration.

Implemented a RAG pipeline by indexing clinical trial documents with FAISS and integrating GPT-3 for answer generation, enabling accurate, real-time Q&A for researchers.
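
A minimal sketch of the FAISS indexing and retrieval step described above; the sample documents, encoder model, and retrieve helper are placeholder assumptions, and the generation call to the LLM is left as a comment.

import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Trial NCT-001 evaluated drug X for stage II patients.",
    "Trial NCT-002 reported biomarker Y response rates.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works here
embeddings = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product = cosine on normalized vectors
index.add(np.asarray(embeddings, dtype="float32"))

def retrieve(query: str, k: int = 2) -> list[str]:
    q = encoder.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [docs[i] for i in ids[0]]

# Retrieved passages are then packed into the LLM prompt for answer generation.
context = "\n".join(retrieve("Which trial studied drug X?"))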

Built graph-based data pipelines to ingest structured and unstructured healthcare data into Knowledge Graphs for downstream GenAI applications.

Leveraged graph traversal and relationship modeling to enhance contextual retrieval and entity linking for AI-driven knowledge systems.

Architected Generative AI solutions using Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) for clinical knowledge automation.

Designed and implemented Knowledge Graphs using Neo4j to model relationships across clinical, imaging, and genomic datasets, enabling context-aware AI reasoning.
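
An illustrative sketch of loading linked clinical and imaging records into Neo4j with the official Python driver; the node labels, properties, and credentials are assumptions, not the real schema.

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def link_patient_scan(tx, patient_id: str, scan_id: str, modality: str):
    # MERGE keeps the load idempotent: nodes/edges are created only if missing.
    tx.run(
        "MERGE (p:Patient {id: $pid}) "
        "MERGE (s:Scan {id: $sid, modality: $modality}) "
        "MERGE (p)-[:HAS_SCAN]->(s)",
        pid=patient_id, sid=scan_id, modality=modality,
    )

with driver.session() as session:
    session.execute_write(link_patient_scan, "P-001", "S-101", "MRI")
driver.close()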

Designed and deployed Agentic AI systems capable of autonomous decision-making, goal setting, and multi-step task execution.

Architected RAG pipelines combining vector search and Knowledge Graph–based retrieval to improve contextual grounding and semantic accuracy of LLM responses.

Considered performance, scalability, and cost efficiency while designing and deploying AI solutions in cloud environments.

Utilized PySpark for distributed data processing to handle large-scale clinical and genomic datasets, improving data preprocessing time by 30%.

Developed RDF and JSON-LD based ingestion pipelines to populate and maintain enterprise knowledge graphs.
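
A sketch of the JSON-LD to RDF ingestion step using rdflib; the sample document and vocabulary are placeholders for the enterprise ontology.

from rdflib import Graph

jsonld_doc = """
{
  "@context": {"name": "http://schema.org/name"},
  "@id": "http://example.org/trial/NCT-001",
  "name": "Phase II biomarker study"
}
"""

g = Graph()
g.parse(data=jsonld_doc, format="json-ld")  # JSON-LD support is built into rdflib >= 6.0

for subj, pred, obj in g:  # triples are now queryable via SPARQL as well
    print(subj, pred, obj)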

Optimized model performance through hyperparameter tuning using techniques like Grid Search and Random Search, improving model accuracy by 15%.
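
A minimal sketch of the grid-search tuning pattern; the estimator, synthetic data, and parameter ranges are examples, not the project's actual settings.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=42)

param_grid = {"n_estimators": [100, 300], "max_depth": [5, 10, None]}
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    cv=5,                 # 5-fold cross-validation per parameter combination
    scoring="accuracy",
)
search.fit(X, y)
print(search.best_params_, search.best_score_)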

Developed and deployed an XGBoost model for predicting patient outcomes using clinical, genomic, and imaging data, enhancing prediction accuracy by 30%.

Optimized SQL queries and managed data pipelines for clinical, imaging, and genomic data stored in PostgreSQL, supporting efficient AI model development.

Enhanced RAG pipelines by combining graph-based context expansion with LLM inference for domain-aware responses.

Fine-tuned BERT-based models using Hugging Face Transformers for entity extraction and summarization of medical documents, improving accuracy and reducing annotation overhead.

Contributed to performance tuning across modules, helping reduce average page-load times by around 35% during peak seasons.

Developed and fine-tuned LLMs like GPT-3 for medical text generation and document summarization, enhancing the accuracy and relevance of insights extracted from clinical reports.

Researched and developed AI-driven solutions for pharmaceutical research, clinical trials, and disease modeling.

Built AI workflows with MLflow, Docker, and Kubernetes to ensure reproducibility, scalability, and compliance in R&D environments.
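
A hedged sketch of the MLflow tracking pattern behind the reproducibility claim above; the experiment name, parameters, and metric are illustrative.

import mlflow

mlflow.set_experiment("imaging-models")

with mlflow.start_run(run_name="cnn-baseline"):
    mlflow.log_param("lr", 1e-3)
    mlflow.log_param("epochs", 10)
    mlflow.log_metric("val_auc", 0.91)
    # mlflow.pytorch.log_model(model, "model")  # when a trained model is in scope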

Collaborated with cloud architects to build reusable, knowledge-enabled AI components across enterprise platforms.

Implemented Model Context Protocol (MCP) clients to enable external orchestration and control of AI agents, which streamlined complex R&D workflows and reduced deployment time by 30%.

Leveraged AWS Bedrock to integrate and orchestrate foundation models (LLMs) for secure, scalable GenAI applications, enabling RAG and agent-based workflows without managing model infrastructure.

Deployed models on AWS (EC2, S3, SageMaker), achieving scalable and cost-effective solutions for medical data processing and inference.

Environment: Python, PyTorch, TensorFlow, scikit-learn, Hugging Face Transformers, BERT, XGBoost, AWS (EC2, S3, SageMaker), FastAPI, Flask, Docker, Kubernetes, MLflow, Apache Airflow, GitHub Actions, Jenkins, GitLab CI/CD, PostgreSQL, MySQL, SQL, Spark, Git, Jupyter, PyCharm, VS Code, REST APIs.

Client: Wells Fargo, Charlotte, NC | Data Science (AI/ML) Developer | Sep 2020 – Nov 2022

Project: Intelligent Reconciliation Automation & Predictive Financial Analytics Platform

Designed, developed, and deployed AI/ML-driven solutions to automate financial reconciliation processes across global business units, ensuring compliance with SLAs.

Developed Python-based POCs for anomaly detection, forecasting, and intelligent automation to validate model feasibility before production rollout.

Integrated LLM-style NLP preprocessing pipelines for classification, summarization, and rule-based automation tasks.

Built resilient, scalable Python applications to support critical financial operations with improved system reliability and fault tolerance.

Integrated solutions with SmartStream TLM, enabling automated exception handling and enhanced visibility into the reconciliation lifecycle.

Architected end-to-end ML systems for time-series forecasting using Prophet and LSTM models, improving accuracy of financial trend prediction by 35%.
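
A minimal Prophet forecasting sketch covering the Prophet side of the bullet above; the synthetic daily series stands in for the real financial trend data.

import pandas as pd
from prophet import Prophet

dates = pd.date_range("2021-01-01", periods=120, freq="D")
df = pd.DataFrame({"ds": dates, "y": range(120)})  # Prophet expects ds/y columns

m = Prophet()
m.fit(df)

future = m.make_future_dataframe(periods=30)  # extend 30 days past the history
forecast = m.predict(future)
print(forecast[["ds", "yhat"]].tail())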

Employed scikit-learn Isolation Forest to detect transaction anomalies, integrating SMOTE for class balancing and optimizing hyperparameters via grid search to enhance detection accuracy.
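
A minimal sketch of the Isolation Forest anomaly-detection step; the synthetic two-feature transactions stand in for the real reconciliation data, and the SMOTE/grid-search stages are omitted for brevity.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=100.0, scale=10.0, size=(1000, 2))  # typical transactions
outliers = rng.normal(loc=300.0, scale=5.0, size=(10, 2))   # injected anomalies
X = np.vstack([normal, outliers])

model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
labels = model.fit_predict(X)  # -1 = anomaly, 1 = normal

print("flagged:", int((labels == -1).sum()))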

Developed and maintained robust ML pipelines using Apache Airflow, Apache Spark, and Kafka/Spark Structured Streaming to support both batch and real-time workflows.
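
A hedged sketch of the Kafka to Spark Structured Streaming hookup; the topic name, broker address, and message schema are assumptions, and running it requires the spark-sql-kafka connector package on the classpath.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("recon-stream").getOrCreate()

schema = StructType([
    StructField("txn_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read the Kafka topic as an unbounded stream and parse JSON payloads.
stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "transactions")
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("txn"))
          .select("txn.*"))

query = (stream.writeStream.format("console")
         .outputMode("append")
         .start())
query.awaitTermination()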

Designed and optimized ETL/ELT workflows using SQL and PostgreSQL, integrating data from diverse financial systems for analytics and ML applications.

Integrated observability stacks with AWS CloudWatch, Prometheus, and Grafana for AI pipelines, enabling proactive alerting and reducing downtime by 30%.

Implemented monitoring and alerting systems to detect model drift, performance degradation, and data anomalies, ensuring high model uptime and decision integrity.

Designed and maintained low-latency inference systems capable of serving real-time predictions in production environments with high transaction volumes.

Automated data validation and schema enforcement across the ML lifecycle to prevent silent failures and ensure data consistency in production systems.

Contributed to technical debt reduction by identifying manual reconciliation steps and converting them into robust automation pipelines.

Drove feature engineering optimization for distributed training environments, cutting model training time by 40% and improving system throughput.

Standardized MLOps practices using MLflow, Docker, and CI/CD tools to streamline model deployment, monitoring, and version control.

Environment: Python, SQL, scikit-learn, XGBoost, MLflow, Apache Airflow, Apache Spark, Kafka, Spark Structured Streaming, FastAPI, Flask, Docker, Kubernetes, Git, CI/CD tools, AWS (EC2, S3, CloudWatch), Prometheus, Grafana, Relational Databases (PostgreSQL, MySQL), JSON, REST APIs, Linux.

Client: CBRE, Dallas, TX | Sr. Python Developer | Apr 2019 – Aug 2020

Project: Real-Estate Analytics Automation & Predictive Reporting Platform

Developed scalable RESTful APIs using Python and Django, ensuring secure and efficient communication across services.

Used JavaScript and AngularJS to build interactive UI components that consumed backend REST APIs, improving data visualization and user experience.

Implemented client-side validations and asynchronous API calls using JavaScript to reduce server load and improve response times.

Collaborated with frontend teams to debug JavaScript issues, optimize API integration, and ensure smooth end-to-end data flow.

Developed visually dynamic, full-stack web applications using Django and AngularJS, enhancing UI responsiveness and achieving a 40% increase in user engagement and satisfaction.

Analyzed business requirements for unified reporting and identified multiple data sources, including MySQL and Excel-based spreadsheets, that required automated ingestion.

Built machine learning models with scikit-learn/XGBoost for property usage forecasting, improving reporting accuracy by 30%.

Automated feature engineering and integrated ML outputs, reducing manual analysis by 40% and delivering real-time insights.

Architected and automated data ingestion pipelines using Python and Pandas to unify and process data from MySQL and Excel-based spreadsheets, reducing manual reconciliation by 60% and enabling real-time updates.

Utilized Pandas for data cleaning, transformation like type casting, missing value imputation and merging logic, enabling consistent schema alignment across disparate formats.
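
An illustrative sketch of the pandas unification-and-cleaning pattern from the two bullets above; the table, file name, columns, and connection string are placeholders, not the real sources.

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://user:pass@localhost/realestate")

db_df = pd.read_sql("SELECT property_id, sqft, occupancy FROM properties", engine)
xl_df = pd.read_excel("leases.xlsx", dtype={"property_id": str})  # property_id, lease_end

# Cast join keys to a common type and impute gaps so schemas align before merging.
db_df["property_id"] = db_df["property_id"].astype(str)
merged = db_df.merge(xl_df, on="property_id", how="outer")
merged["occupancy"] = merged["occupancy"].fillna(0.0)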

Used libraries like logging to capture and alert on failures, such as missing files or database connection errors.

Managed datasets using Pandas DataFrames and MySQL, querying databases from Python using the Python-MySQL connector and MySQLdb packages.

Streamlined ETL pipelines by optimizing data ingestion from on-premises systems into Azure Data Factory (ADF), reducing load times by 40% and lowering storage costs by 25%.

Developed Power BI dashboards with Python scripts for automated data processing and actionable insights.

Automated data workflows and deployment pipelines using AWS (EC2, S3) and Azure, supporting scalable and resilient cloud-based systems.

Orchestrated Python applications with Docker and Kubernetes, ensuring smooth deployment and scaling within a reliable runtime environment.

Troubleshot and deployed bug fixes for critical applications, continuously improving functionality and ensuring a smooth user experience for both customers and internal teams.

Boosted the performance and scalability of web services through asynchronous programming and Docker-based microservices, improving throughput by 20%.

Integrated automated CI/CD pipelines using Jenkins and GitHub Actions, enforcing test coverage thresholds and accelerating deployment cycles by 50%.

Environment: Python, Django, Angular, HTML5, CSS, JavaScript, MySQL, Pandas, AWS (EC2, S3), Azure, Jenkins, GitHub Actions, Docker, Kubernetes, Apache.

Client: State of NY, New York City, NY | Python Developer | Nov 2017 – Feb 2019

Project: Enterprise Data Processing Automation & Digital Services Modernization Platform

Engineered high-performance multi-threaded Python wrappers to accelerate data ingestion and processing pipelines, improving runtime efficiency by over 50%.
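
A minimal sketch of the multi-threaded ingestion-wrapper idea; fetch_file is a hypothetical I/O-bound task standing in for the real ingestion step.

from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def fetch_file(name: str) -> str:
    time.sleep(0.1)  # simulate network / disk latency
    return f"{name}: done"

files = [f"batch_{i}.csv" for i in range(8)]

# Threads pay off here because the work is I/O-bound, not CPU-bound.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(fetch_file, f) for f in files]
    for fut in as_completed(futures):
        print(fut.result())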

Designed and maintained scalable web applications using Django and Flask, integrating with dynamic front-end interfaces powered by AngularJS.

Enhanced frontend interactions using JavaScript, jQuery, and AJAX to enable asynchronous data exchange between UI components and backend services.

Implemented Model-View-Controller (MVC) architecture in server-side web applications using Django and Flask.

Implemented dynamic front-end interactions using jQuery and AJAX, enabling asynchronous transmission of JSON data between UI components and backend controllers for smoother user experiences.

Migrated legacy databases from SQLite to MySQL and PostgreSQL by writing custom schema migration scripts using raw SQL, accurately mapping data types and ensuring seamless data transformation and portability.

Designed and implemented RESTful APIs for efficient communication between services, facilitating data exchange and system interoperability.

Contributed to web development and engineering tasks using Django, WSGI, SQL-based admin tools, and Behavior-Driven Development (BDD) practices.

Led database migration and optimization tasks, including schema refactoring, stored procedures, and CRUD-based feature enhancements, resulting in a 25% improvement in data-access latency.

Performed data transformations using Apache Spark (PySpark) on Databricks, writing outputs to AWS S3 for downstream analysis.

Used AWS Elastic Beanstalk with EC2 to deploy the project to AWS, gaining strong experience with AWS storage services (S3).

Utilized Python SDKs to interact with Azure Blob Storage, managing unstructured lab experiment files and improving data availability in cloud-based analytics platforms.

Used MongoDB to store data in JSON format and developed dashboard features using Python, Bootstrap, CSS, and JavaScript, ensuring clean code separation and a maintainable application structure.

Developed and maintained robust unit and integration tests using PyTest and UnitTest, with code coverage enforcement and automated build triggers, enhancing code reliability and reducing release bugs.
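
An illustrative PyTest unit for a small helper in the style described above; normalize_amount is a hypothetical function, not code from the project.

import pytest

def normalize_amount(raw: str) -> float:
    """Strip thousands separators and round to cents."""
    return round(float(raw.replace(",", "")), 2)

def test_normalize_amount_strips_commas():
    assert normalize_amount("1,234.567") == 1234.57

def test_normalize_amount_rejects_garbage():
    with pytest.raises(ValueError):
        normalize_amount("not-a-number")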

Designed and enforced API security using JWT tokens, role-based access control, and CORS policies for external partner integrations.
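
A hedged sketch of the JWT issue/verify pattern using PyJWT; the secret, claims, and role check are simplified illustrations, not the production policy.

import time
import jwt  # PyJWT

SECRET = "change-me"  # in practice, loaded from a secrets manager

def issue_token(user_id: str, role: str) -> str:
    payload = {"sub": user_id, "role": role, "exp": int(time.time()) + 3600}
    return jwt.encode(payload, SECRET, algorithm="HS256")

def require_role(token: str, role: str) -> bool:
    # decode() raises if the signature is bad or the token has expired
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    return claims.get("role") == role

token = issue_token("u-42", "partner")
assert require_role(token, "partner")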

Environment: Python, Django, Flask, PyTest, UnitTest, PySpark, Databricks, AWS (EC2, S3, Elastic Beanstalk, Lambda), Azure (Blob Storage, File Share), Docker, Git, MySQL, PostgreSQL, MongoDB, HTML5, CSS3, JavaScript, jQuery, AJAX, AngularJS, Bootstrap, JSON, Pandas, NumPy, WSGI, JWT, REST APIs, BDD, Jenkins, Ansible, Linux, Windows

Client: Serendio Inc., India | Junior Python Developer | Jul 2016 – Jun 2017

Project: Web Services & API Automation Framework for Enterprise Applications

Contributed to the development of RESTful APIs and backend components using Django and Flask under senior developer supervision.

Developed secure APIs with JWT-based authentication, role-based access control, and CORS policies for external partner integrations.

Built and maintained dynamic web templates using Django views and templating language.

Developed views and templates using Python and Django’s templating framework to create a user-friendly website interface.

Contributed to scalable backend architectures, optimizing performance and maintainability of core services built with Python.

Used database models and schema migrations for PostgreSQL and MySQL, ensuring data consistency and query efficiency.

Assisted in designing PostgreSQL and MySQL schemas by modeling normalized tables, writing DDL scripts, and testing basic queries, which improved data access efficiency and reduced query time by 30%.

Participated in integration tasks involving legacy SOAP services and XML payloads.

Collaborated closely with front-end teams to design APIs tailored for dynamic user interfaces, while also implementing core UI components using HTML, CSS, and JavaScript when needed.

Created a Git repository and added the project to GitHub.

Participated in weekly release meetings with Technology stakeholders to identify and mitigate potential risks associated with the releases.

Participated in Agile ceremonies including daily standups, sprint reviews, and release planning meetings, contributing to risk identification and test planning.

Collaborated with QA, frontend, and DevOps teams via Jira and daily stand-ups to identify, debug, and resolve staging issues, reducing page load time by 40% and closing 95% of critical bugs pre-release.

Environment: Python, Django, Flask, PostgreSQL, MySQL, HTML, CSS, JavaScript, Git, Jira, SOAP, XML.

EDUCATION

Bachelor of Technology (B.Tech) in Computer Science and Engineering

Mahatma Gandhi Institute of Technology, Hyderabad


