CH. VENKATA SAI KARTHIK
SOFTWARE DEVELOPMENT ENGINEER 2 (MLE-II)
SUMMARY
6+ years of work experience in AI/ML. Proficient in Python, ML-Ops, ML, and DL; adept at scaling products for optimal performance. Passionate about leveraging data science to drive business excellence.
PERSONAL INFORMATION
Email: *******************@*****.***
Mobile: +91-735*******
Total work experience: 6 Years 3 Months
Github: https://github.com/saikarthikcheedella
Linkedin: https://www.linkedin.com/in/sai-karthik-cheedella/
Blog: https://medium.com/@saikarthik_81304
KEY SKILLS
• PYTHON
• PYSPARK
• MACHINE LEARNING
• DEEP LEARNING
• GEN-AI
• ML-OPS
• NLP
• DOCKER, KUBERNETES, HELM
• ARGO: WORKFLOWS, CD
• CI-CD, GITHUB ACTIONS
• SQL
• GCP, AZURE
EXPERIENCE
Kinaxis India: (Chennai)
ML Engineer, SDE-2
May 2022 – Present (2.3+ yrs)
Working with the ML Product team, contributing to Kinaxis's AI-based supply chain demand forecasting tool.
Quantiphi Analytics Pvt Ltd: (Bangalore)
ML Engineer
Mar 2021 – Apr 2022 (1.1 yr)
Worked with the Applied-AI delivery team.
Tata Consultancy Services: (Chennai)
System Engineer, MLE
June 2018 – Dec 2020 (2.6 yrs)
Worked with the Machine Learning Centre of Excellence (MLCoE) team on the Retail strategic initiative.
PROJECTS
1. Kinaxis product:
Collaborating with the ML product team to build scalable retail demand sensing (ML) product capabilities using Python, Kubernetes (K8s), Helm, and Argo Workflows.
Developed and integrated a customized logging module and a memory-profiling utility to enhance application performance.
Developed health-check applications for the ML ecosystem using Flask.
Crafted demand forecasting solutions tailored to different customers, including new feature generation and training processes.
Created Helm charts for deploying applications.
Maintained the codebase to clean standards, adding unit tests and integration tests for comprehensive code coverage.
Contributed to spike projects such as Gen-AI.
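The memory-profiling utility mentioned above can be sketched with Python's stdlib tracemalloc; the decorator name, log format, and sample workload are illustrative, not the actual Kinaxis module.

```python
import functools
import logging
import tracemalloc

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("mem_profile")

def profile_memory(func):
    """Log current and peak memory allocated while `func` runs."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        tracemalloc.start()
        try:
            result = func(*args, **kwargs)
        finally:
            current, peak = tracemalloc.get_traced_memory()
            tracemalloc.stop()
            logger.info("%s: current=%.1f KiB, peak=%.1f KiB",
                        func.__name__, current / 1024, peak / 1024)
        return result
    return wrapper

@profile_memory
def build_features(n):
    # Stand-in for a feature-generation step.
    return [i * i for i in range(n)]

features = build_features(100_000)
```

Wrapping a function this way adds one log line per call without touching the function body, which is why a decorator is a natural shape for such a utility.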
2. Hershey's Merch Builder: (client – Hershey's)
Situation: Proposed forecasting and merch-unit/planogram solutions for Hershey's customers based on sales patterns, considering financial and business constraints.
Task: Clustered store-level data for all customers, predicted product demand at the store level, and determined the optimal product mix for allocation in planogram structures.
Action: Leveraged PySpark for data processing and Spark ML's K-means algorithm for clustering. Implemented statistical models (e.g., Facebook's Prophet) and regression-based forecasting to predict product demand. Used the OR-Tools solver for constraint-based optimization, identifying the optimal product mix and generating feasible planograms that fit the business rules.
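The store-clustering step above used Spark ML's K-means; the same idea can be illustrated with a minimal pure-Python Lloyd's algorithm on toy store-level features (the feature names and data points are invented for illustration, not Hershey's data).

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's algorithm: assign each point to its nearest
    centroid, then recompute centroids; repeat a fixed number of times."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[idx].append(p)
        for i, cluster in enumerate(clusters):
            if cluster:  # keep the old centroid if a cluster went empty
                centroids[i] = tuple(sum(c) / len(cluster)
                                     for c in zip(*cluster))
    return centroids, clusters

# Toy store-level features: (weekly sales, footfall index) -- invented data.
stores = [(1.0, 1.2), (0.9, 1.0), (1.1, 1.1),
          (8.0, 7.9), (8.2, 8.1), (7.9, 8.0)]
centroids, clusters = kmeans(stores, k=2)
```

In production this runs distributed via `pyspark.ml.clustering.KMeans`; the sketch shows only the algorithmic core.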
3. Knome Analytics: (client – Tata Consultancy Services)
Situation: Classify all blog posts and comments of employees on the TCS Knome platform and create dashboards of employee interactions.
Task: Identify the number of business categories needed for segregation; perform multi-level classification of interactions and sentiment analysis; derive business insights for each category and visualize the results.
Action: Built a logistic regression model for multi-class classification with TF-IDF-weighted Word2Vec embeddings. Also used NLTK and a regex parser to derive insights from text.
4. Mavis Recommendation Engine: (client – Delta Airlines)
Situation: Generate food recommendations for airline passengers and recommend items for duty-free shopping at airports.
Task: Build a similarity engine to recommend relevant food items to passengers and suggest shoppable items at their destination airports.
Action: Created content-based similarity filters for both food items and passengers. As item-passenger interactions weren't available, achieved collaborative behaviour by mapping similar items to passengers and vice versa based on similarity scores.
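The content-based filter above can be sketched as cosine similarity over item feature vectors; the item names and feature axes below are invented for illustration, not Delta's catalogue.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(target, items, top_n=2):
    """Rank all other items by feature similarity to `target`."""
    scores = [(name, cosine(items[target], vec))
              for name, vec in items.items() if name != target]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_n]

# Toy food-item features: (spicy, sweet, vegetarian) -- invented data.
items = {
    "veg_curry":     (0.9, 0.1, 1.0),
    "fruit_salad":   (0.0, 0.9, 1.0),
    "chicken_tikka": (0.8, 0.1, 0.0),
    "paneer_wrap":   (0.7, 0.2, 1.0),
}
top = recommend("veg_curry", items)
```

With no interaction history, scoring passengers and items in the same feature space is what lets item-to-passenger and passenger-to-item mappings stand in for collaborative signals.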
5. THD search term improvement: (client – The Home Depot)
Task: Improve the quality of existing THD search engine results.
Action: Enhanced search capability by extracting multiple combinations of entities (e.g., adjectives, cardinals, nouns) from product descriptions. Used NLTK's chunking and chinking with a regex-based grammar to parse the text, BigQuery to analyse large datasets, and PySpark for processing.
6. Tata CLiQ imperfect order:
Situation: Classify orders on the e-commerce website Tata CLiQ as perfect or imperfect.
Task: Provide the reasons for an order being imperfect.
Action: Handled the imperfect-order details module, preserving interpretability from the model's trees, and prepared a documented analysis by automating the process and generating report sheets.
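Preserving interpretability from tree models amounts to walking each root-to-leaf decision path and emitting it as a readable rule. Below is a stdlib sketch over a hand-built tree; the feature names and thresholds are invented for illustration, not the production model.

```python
def extract_rules(node, path=None):
    """Recursively collect human-readable root-to-leaf rules.

    A node is either {"feature", "threshold", "left", "right"}
    for a split, or {"label"} for a leaf.
    """
    path = path or []
    if "label" in node:  # leaf: emit the accumulated condition chain
        return [(" AND ".join(path) or "always", node["label"])]
    f, t = node["feature"], node["threshold"]
    rules = extract_rules(node["left"], path + [f"{f} <= {t}"])
    rules += extract_rules(node["right"], path + [f"{f} > {t}"])
    return rules

# Toy tree deciding whether an order is imperfect -- invented thresholds.
tree = {
    "feature": "delivery_delay_days", "threshold": 2,
    "left": {"label": "perfect"},
    "right": {
        "feature": "damaged_items", "threshold": 0,
        "left": {"label": "imperfect (late delivery)"},
        "right": {"label": "imperfect (late + damaged)"},
    },
}
rules = extract_rules(tree)
```

Each extracted rule doubles as the "reason" attached to an order's classification, which is what makes the documented analysis possible to automate.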
Also worked on data cleaning and pre-processing.
DECLARATION
All the information mentioned in this resume is true to the best of my knowledge.
Place: Chennai