
Senior Software & Data Engineer

Location: Rowland Heights, CA, 91748
Salary: 65000
Posted: January 12, 2026


Resume:

Ashish Katuri

Software Engineer

Location: USA | Phone: +1-626-***-**** | Email: **************@*****.*** | LinkedIn

Professional Summary

Software Engineer & Data Engineer with 3+ years of experience across McKesson, JPMorgan Chase, KPMG, and Dell Technologies. Skilled in Python backend development, data pipeline automation, and cloud-native solutions using AWS, PySpark, Kafka, and Snowflake. Strong expertise in API design, CI/CD, and containerization with FastAPI, Docker, and Kubernetes. Adept at integrating machine learning models and optimizing systems that process terabytes of data with high reliability and compliance in healthcare and financial domains.

Experience

Software Engineer | McKesson, USA | Apr 2025 – Present

Architected and deployed scalable RESTful APIs using Python (FastAPI) and PostgreSQL, enabling seamless integration between 10+ healthcare systems and improving real-time patient data exchange across compliance-sensitive environments.
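A minimal sketch of a FastAPI endpoint backed by PostgreSQL in the spirit of this bullet; the patients table, columns, and connection string are hypothetical.

    # Illustrative sketch only: a FastAPI endpoint reading from PostgreSQL via asyncpg.
    # The table name, columns, and DSN are placeholders.
    import asyncpg
    from fastapi import FastAPI, HTTPException

    app = FastAPI()
    DSN = "postgresql://user:password@localhost:5432/healthcare"  # placeholder

    @app.on_event("startup")
    async def startup() -> None:
        # Create a shared connection pool once at application startup.
        app.state.pool = await asyncpg.create_pool(DSN)

    @app.get("/patients/{patient_id}")
    async def get_patient(patient_id: int):
        # Fetch a single patient record; return 404 if it does not exist.
        async with app.state.pool.acquire() as conn:
            row = await conn.fetchrow(
                "SELECT id, name, date_of_birth FROM patients WHERE id = $1",
                patient_id,
            )
        if row is None:
            raise HTTPException(status_code=404, detail="Patient not found")
        return dict(row)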

Optimized backend performance by implementing asynchronous I/O operations and caching with Redis, reducing API response time from 2.4 seconds to 0.9 seconds and improving service reliability for 50,000+ daily transactions.
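A minimal sketch of the async Redis read-through caching pattern referenced above, assuming redis-py's asyncio client; the key format, TTL, and fetch_record_from_db helper are hypothetical.

    # Illustrative sketch only: read-through caching with redis.asyncio.
    # fetch_record_from_db, the key format, and the 300-second TTL are assumptions.
    import json
    import redis.asyncio as redis

    cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

    async def fetch_record_from_db(record_id: int) -> dict:
        # Placeholder for the real database query.
        return {"id": record_id, "status": "ok"}

    async def get_record(record_id: int) -> dict:
        key = f"record:{record_id}"
        cached = await cache.get(key)          # cache hit: skip the database
        if cached is not None:
            return json.loads(cached)
        record = await fetch_record_from_db(record_id)
        await cache.set(key, json.dumps(record), ex=300)  # cache for 5 minutes
        return record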

Engineered automated data pipelines with Python, Pandas, and AWS Lambda, processing 4TB of medical transaction data weekly, enhancing accuracy in prescription and inventory reconciliation workflows.
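A minimal sketch of an S3-triggered AWS Lambda handler that cleans a CSV with Pandas, in the spirit of this bullet; bucket layout and column names are hypothetical.

    # Illustrative sketch only: an S3-triggered Lambda that deduplicates a CSV with Pandas.
    # Bucket names, column names, and the output prefix are placeholders.
    import io
    import boto3
    import pandas as pd

    s3 = boto3.client("s3")

    def handler(event, context):
        # Read the object that triggered the event.
        record = event["Records"][0]["s3"]
        bucket, key = record["bucket"]["name"], record["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # Basic cleanup: drop duplicate rows and rows missing a transaction id.
        df = pd.read_csv(io.BytesIO(body))
        df = df.drop_duplicates().dropna(subset=["transaction_id"])

        # Write the cleaned file under a processed/ prefix in the same bucket.
        out = io.StringIO()
        df.to_csv(out, index=False)
        s3.put_object(Bucket=bucket, Key=f"processed/{key}", Body=out.getvalue())
        return {"rows": len(df)}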

Coordinated with cross-functional product, DevOps, and QA teams to containerize backend applications using Docker and Kubernetes, ensuring zero downtime deployments across 8 production clusters.

Spearheaded code quality and maintainability improvements by implementing unit testing (PyTest) and continuous integration (GitHub Actions), leading to 300+ successful automated builds with minimal rollback incidents.
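A minimal sketch of the PyTest style referenced above, using FastAPI's TestClient against a hypothetical /health route.

    # Illustrative sketch only: a PyTest unit test using FastAPI's TestClient.
    # The /health endpoint stands in for the real application routes.
    from fastapi import FastAPI
    from fastapi.testclient import TestClient

    app = FastAPI()

    @app.get("/health")
    def health():
        return {"status": "ok"}

    client = TestClient(app)

    def test_health_returns_ok():
        response = client.get("/health")
        assert response.status_code == 200
        assert response.json() == {"status": "ok"}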

Data Engineer | JPMorgan Chase (JPMC), USA | Oct 2024 – Apr 2025

Built enterprise-grade data pipelines with PySpark and AWS Glue, processing approximately 1.8 TB of structured and semi-structured data daily, improving downstream analytics delivery by 2.5 business hours across risk and finance teams.
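A minimal sketch of a PySpark batch transformation of the kind described above; the S3 paths, schema, and aggregation are hypothetical.

    # Illustrative sketch only: a PySpark batch job that filters, aggregates, and
    # writes partitioned parquet. Paths and column names are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("risk_daily_batch").getOrCreate()

    raw = spark.read.parquet("s3://example-bucket/raw/trades/")   # placeholder path

    daily = (
        raw.filter(F.col("status") == "SETTLED")
           .groupBy("trade_date", "desk")
           .agg(F.sum("notional").alias("total_notional"),
                F.count("*").alias("trade_count"))
    )

    daily.write.mode("overwrite").partitionBy("trade_date").parquet(
        "s3://example-bucket/curated/daily_exposure/"             # placeholder path
    )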

Created Kafka-based ingestion streams and Snowflake staging layers, handling an average of 8.2 million transactions per day to support real-time credit exposure monitoring across multiple trading systems.
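A minimal sketch of a Kafka-to-Snowflake staging step in the spirit of this bullet, using kafka-python and the Snowflake Python connector; topic, table, and credentials are placeholders.

    # Illustrative sketch only: consume JSON transactions from Kafka and insert them
    # into a Snowflake staging table in small batches. All names are placeholders.
    import json
    from kafka import KafkaConsumer          # kafka-python
    import snowflake.connector

    consumer = KafkaConsumer(
        "transactions",                      # placeholder topic
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    conn = snowflake.connector.connect(
        account="example_account", user="example_user", password="...",
        warehouse="LOAD_WH", database="STAGING", schema="PUBLIC",
    )
    cur = conn.cursor()

    batch = []
    for message in consumer:
        txn = message.value
        batch.append((txn["txn_id"], txn["amount"], txn["ts"]))
        if len(batch) >= 500:                # flush every 500 messages
            cur.executemany(
                "INSERT INTO stg_transactions (txn_id, amount, ts) VALUES (%s, %s, %s)",
                batch,
            )
            conn.commit()
            batch.clear()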

Optimized SQL-based ETL queries and indexing strategies, reducing batch processing time from 11 minutes to under 3 minutes, accelerating data warehouse refresh cycles for 12+ critical compliance dashboards.

Automated archival workflows via AWS Lambda and S3 Glacier, securely migrating 4.6 billion rows of trade history, saving approximately $42,000 annually in cold storage management costs.
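A minimal sketch of an explicit boto3 archival pass to a Glacier-class storage tier; the bucket and prefix are placeholders, and an S3 lifecycle rule would be the more common production route.

    # Illustrative sketch only: move aged objects to a Glacier storage class with boto3
    # by copying each object onto itself with a new StorageClass.
    import boto3

    s3 = boto3.client("s3")

    def archive_prefix(bucket: str, prefix: str) -> int:
        archived = 0
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
            for obj in page.get("Contents", []):
                s3.copy_object(
                    Bucket=bucket,
                    Key=obj["Key"],
                    CopySource={"Bucket": bucket, "Key": obj["Key"]},
                    StorageClass="DEEP_ARCHIVE",
                )
                archived += 1
        return archived

    print(archive_prefix("example-trade-archive", "trades/2018/"))  # placeholder names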

Partnered with global DevOps and data governance teams to design validation checks and schema enforcement for 14 production data feeds, ensuring zero integrity breaches during quarterly audits.
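A minimal sketch of lightweight schema and integrity checks for an incoming feed, using plain Pandas; the expected columns and rules are assumptions.

    # Illustrative sketch only: schema enforcement and basic integrity checks for a feed.
    # Column names, dtypes, and rules are assumptions.
    import pandas as pd

    EXPECTED_COLUMNS = {"trade_id": "int64", "notional": "float64", "currency": "object"}

    def validate_feed(df: pd.DataFrame) -> list[str]:
        errors = []
        # Schema enforcement: required columns with the expected dtypes.
        for column, dtype in EXPECTED_COLUMNS.items():
            if column not in df.columns:
                errors.append(f"missing column: {column}")
            elif str(df[column].dtype) != dtype:
                errors.append(f"{column}: expected {dtype}, got {df[column].dtype}")
        # Integrity rules: no duplicate keys, no negative notionals.
        if "trade_id" in df.columns and df["trade_id"].duplicated().any():
            errors.append("duplicate trade_id values")
        if "notional" in df.columns and (df["notional"] < 0).any():
            errors.append("negative notional values")
        return errors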

Mentored 3 associate data engineers in Spark optimization, SQL tuning, and data modeling best practices, enabling delivery of 2 full-scale migration projects within the six-month assignment.

Software Engineer | KPMG, India | Oct 2020 – May 2022

Developed internal automation tools using Python, SQL, and Power BI APIs to generate real-time audit reports, reducing manual report compilation by 22 staff hours weekly across consulting teams.

Engineered data ingestion pipelines with Azure Data Factory and ETL scripts to consolidate 4.2 million financial transactions, improving data accessibility for compliance and risk audit analysis.

Designed backend logic in Java and C# (.NET Core) for client-facing applications, ensuring 99.9% service uptime while optimizing query response times under 200ms in production environments.

Integrated APIs with Salesforce and Oracle systems to synchronize audit data and engagement metrics across 6 client portfolios, eliminating redundant data entry and ensuring consistency.

Optimized database queries and schema structures in PostgreSQL and MS SQL Server, increasing data retrieval performance by 1.6x for audit analytics dashboards.

Collaborated with business analysts and finance teams to translate technical requirements into scalable code modules, ensuring alignment with audit workflows and regulatory standards.

Coordinated code reviews, sprint retrospectives, and version control using Git, JIRA, and Azure DevOps, facilitating smooth release cycles across a 7-member engineering team.

Presented data integrity findings and process automation results to senior partners, enabling informed decision-making and achieving on-time delivery for 9 audit technology projects.

Software Engineer Intern | Dell Technologies, India | Dec 2019 – Aug 2020

Developed internal automation tools using Python and Flask, reducing manual test execution time by 3 hours per release cycle and improving engineering team throughput across 2 product lines.
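A minimal sketch of a small Flask endpoint that triggers a test run, in the spirit of this bullet; the route and pytest invocation are hypothetical.

    # Illustrative sketch only: a Flask service that runs a test suite on demand
    # and reports pass/fail. The route and command are placeholders.
    import subprocess
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/run-tests", methods=["POST"])
    def run_tests():
        # Run the suite in a subprocess and return the result plus captured output.
        result = subprocess.run(
            ["pytest", "-q", "tests/"], capture_output=True, text=True
        )
        return jsonify({
            "passed": result.returncode == 0,
            "output": result.stdout[-2000:],   # tail of the log only
        })

    if __name__ == "__main__":
        app.run(port=5000)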

Integrated backend modules with RESTful APIs and validated data consistency using Postman and SQL, ensuring error-free communication between 5 microservices in Dell’s enterprise product environment.

Collaborated with senior developers to implement and test CI/CD workflows using Jenkins and Git, enabling over 20 successful automated deployments with minimal rollback incidents.

Analyzed and optimized existing Python-based scripts for log parsing and data extraction, cutting system analysis time from 25 minutes to under 8 minutes, while strengthening team problem-solving and debugging efficiency.
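A minimal sketch of a streaming, precompiled-regex log parser of the kind this bullet describes; the log format is an assumption.

    # Illustrative sketch only: stream a large log file and count errors per module
    # with a precompiled regex instead of loading the whole file. Log format is assumed.
    import re
    from collections import Counter

    LINE_RE = re.compile(r"^(?P<ts>\S+) (?P<level>ERROR|WARN|INFO) (?P<module>\S+)")

    def error_counts(path: str) -> Counter:
        counts = Counter()
        with open(path, encoding="utf-8", errors="replace") as handle:
            for line in handle:                      # streaming: constant memory
                match = LINE_RE.match(line)
                if match and match.group("level") == "ERROR":
                    counts[match.group("module")] += 1
        return counts

    print(error_counts("system.log").most_common(5))  # placeholder file name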

Technical Skills

Languages: Python, Java, C++, SQL, JavaScript, TypeScript, Bash, R

AI & ML: TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost, LightGBM, LangChain, Hugging Face

Deep Learning: CNN, RNN, LSTM, Transformer models, NLP, Computer Vision

Data Engineering: Spark, Hadoop, Kafka, Airflow, ETL pipelines, Snowflake, BigQuery, Redshift

Backend Development: Node.js, Express.js, FastAPI, Flask, Django, RESTful APIs, GraphQL, Swagger

Frontend Development: React.js, Next.js, Angular, HTML5, CSS3, Bootstrap, Tailwind CSS, Redux

Databases: MySQL, PostgreSQL, MongoDB, Cassandra, Redis, Elasticsearch

Analytics & Visualization: Pandas, NumPy, Tableau, Power BI, Google Data Studio, Plotly

Cloud & MLOps: AWS (S3, EC2, Lambda, SageMaker), GCP (BigQuery, Vertex AI), Azure, Docker, Kubernetes, MLflow, Jenkins, CI/CD pipelines

Big Data Tools: Databricks, Hive, HDFS, Delta Lake, Presto

Software Engineering: OOP, Agile/Scrum, Microservices, Unit Testing, Design Patterns, API Integration

AI Applications: Chatbots, Recommendation Systems, Predictive Analytics, Intelligent Automation

Education

California State University, Los Angeles (CSULA), CA, USA
Master of Science in Computer Science, GPA: 3.8 / 4.0, May 2024

Karunya Institute of Technology and Sciences, India
Bachelor of Science in Computer Science, GPA: 3.3 / 4.0, Jul 2020

Projects

Tour Package Prediction

Engineered a machine learning model using Python and Google Colab to predict customer interest in Wellness Tourism Packages, analyzing 25,000+ customer profiles to support targeted marketing efforts.
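A minimal sketch of a scikit-learn propensity model for this kind of prediction task; the dataset path, feature columns, and target are assumptions.

    # Illustrative sketch only: a scikit-learn classifier for package-purchase propensity.
    # The CSV path, feature columns, and target name are placeholders.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    df = pd.read_csv("tour_customers.csv")                          # placeholder dataset
    features = ["age", "monthly_income", "num_trips", "passport"]   # assumed columns
    X, y = df[features], df["purchased_package"]                    # assumed binary target

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y
    )

    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    probs = model.predict_proba(X_test)[:, 1]
    print("ROC-AUC:", round(roc_auc_score(y_test, probs), 3))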

Visualized demographic and behavioral insights through Tableau dashboards, empowering marketing and policy teams to optimize campaign reach and improve lead conversion.

Real-Time Sign Language Translator

Developed a computer vision-based translator leveraging CNNs and OpenCV to recognize 26+ hand sign gestures in real time using live camera input.
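A minimal sketch of the webcam inference loop for a trained Keras CNN of the kind described above; the model file, input size, and A–Z label set are assumptions.

    # Illustrative sketch only: classify hand-sign frames from a webcam with a trained
    # Keras CNN. The model file and 64x64 grayscale input are assumptions.
    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # assumed A-Z gestures
    model = load_model("sign_cnn.h5")                          # placeholder model file

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Preprocess the frame to match the assumed training input.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        patch = cv2.resize(gray, (64, 64)).astype("float32") / 255.0
        pred = model.predict(patch.reshape(1, 64, 64, 1), verbose=0)
        letter = LABELS[int(np.argmax(pred))]

        cv2.putText(frame, letter, (30, 60), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
        cv2.imshow("sign-translator", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to quit
            break

    cap.release()
    cv2.destroyAllWindows()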

Implemented a language mapping algorithm to form basic sentences, enabling smoother human–computer communication for individuals with hearing impairments.

IoT and Machine Learning-Based Health Prediction System

Designed a mobile application using Java and TensorFlow Lite to estimate calorie content from food images, achieving accurate recognition across 100+ food categories.

Integrated a personalized diet planner powered by predictive modeling to recommend daily calorie intake and exercise goals, enhancing user engagement and fitness tracking.


