Esakiraj S
India
+91-709******* ***********@*****.***
https://www.linkedin.com/in/esakiraj-serman-104a4b169
Objective
Python Developer & AI Engineer with 3 years 11 months of experience building AI-driven solutions, scalable APIs, and automated data pipelines. Proficient in Python, PySpark, FastAPI, LangChain, LangGraph, SQL, Neo4j, and Azure. Experienced in legacy modernization, Databricks workflows, and agent-based systems using ChatGPT, LLaMA, and Semantic Kernel. Focused on delivering efficient, cloud-ready, and intelligent systems.
Education
KCG College of Technology, Chennai, India
Bachelor of Engineering
Technical Proficiencies
Python, Agentic AI, AI Agents, ChatGPT, LLaMA, Python Semantic Kernel, LangChain & LangGraph
FastAPI, Flask, RESTful APIs, Streamlit
PySpark, Pandas, NumPy, Matplotlib, Seaborn, Plotly, SQLAlchemy, Databricks
Neo4j, SQL, MongoDB, Elasticsearch
Azure (Storage, Data Processing), AWS EC2
Linux, Docker, Jira, Confluence, Jupyter Notebook, Google Colab, PuTTY, Git
Certifications
Agentic AI with LangGraph and LangChain, Udemy
Python - GUVI Geek Network, IITM Research Park
Data Analysis with Python, freeCodeCamp
Azure Fundamentals, Microsoft
Advanced Prompt Engineering Techniques, LinkedIn
Generative AI - Technical AI Advisor Curriculum, NVIDIA
MongoDB, GeeksforGeeks
Professional Experience
Kumaran Systems Private Limited
Programmer (Software Engineer)
Nov **** - Present
1. Informatica to Databricks Migration – Agentic AI
Built an AI-driven system using LangChain and LangGraph to automate the migration of Informatica ETL workflows to Databricks.
Parsed Informatica XML into hierarchical JSON (as sketched below) and visualized workflows using Neo4j knowledge graphs. Leveraged graph traversal to generate PySpark code via a code generation agent, with validation handled by a reflection agent and a human-in-the-loop correction mechanism.
LangGraph was used to orchestrate agent collaboration, including A2A (Agent-to-Agent) communication and MCP (Model Context Protocol) agent-based control flow. Stored validated code in Azure Blob Storage and published both PySpark code and workflow orchestration logic to Databricks using instance tokens.
Maintained complete run status in SQL Server and exposed asynchronous FastAPI endpoints to orchestrate the full migration lifecycle. Enabled seamless deployment using Docker and CI/CD pipelines.
Key Contributions:
Automated 95% of Informatica-to-Databricks migration using LangChain-based AI agents.
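A minimal sketch of the XML-to-JSON parsing step, assuming a simplified Informatica mapping; the tag names and structure are illustrative, not the project's actual schema:

import json
import xml.etree.ElementTree as ET

def element_to_dict(elem):
    """Recursively convert an XML element into a hierarchical dict."""
    node = {"tag": elem.tag, "attributes": dict(elem.attrib)}
    children = [element_to_dict(child) for child in elem]
    if children:
        node["children"] = children
    text = (elem.text or "").strip()
    if text:
        node["text"] = text
    return node

# Hypothetical fragment standing in for an exported Informatica mapping.
xml_source = """<MAPPING NAME="m_load_orders">
  <TRANSFORMATION NAME="SQ_ORDERS" TYPE="Source Qualifier"/>
  <TRANSFORMATION NAME="EXP_CLEAN" TYPE="Expression"/>
</MAPPING>"""

root = ET.fromstring(xml_source)
print(json.dumps(element_to_dict(root), indent=2))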
2. Mainframe to Modern Stack – AI Migration Agent
Developed an AI-powered agent to accelerate COBOL-to-modern migration using Python, Semantic Kernel, and LangChain.
Extracted business logic and metadata from legacy COBOL codebases using prompt engineering. Initially integrated with ChatGPT, later upgraded to the LLaMA model for improved control. Exposed endpoints via FastAPI for orchestration, maintained run status in SQL Server, and stored all extracted data in Azure Storage.
Key Contributions:
Designed a Neo4j-based knowledge graph to map and analyze complex COBOL logic for re-engineering (see the sketch below).
Result: Reduced manual code analysis effort by 70% and accelerated migration timelines by 40%.
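A minimal sketch of loading COBOL call relationships into Neo4j, assuming a local instance; the connection details, node label, and relationship name are illustrative assumptions, not the project's actual graph model:

from neo4j import GraphDatabase

# Hypothetical caller/callee pairs a COBOL analyzer might extract.
calls = [("MAIN-PARA", "READ-CUSTOMER"), ("MAIN-PARA", "WRITE-REPORT")]

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    for caller, callee in calls:
        # MERGE keeps paragraph nodes unique and the load idempotent.
        session.run(
            "MERGE (a:Paragraph {name: $caller}) "
            "MERGE (b:Paragraph {name: $callee}) "
            "MERGE (a)-[:CALLS]->(b)",
            caller=caller, callee=callee,
        )
driver.close()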
Tata Consultancy Services
Software Engineer
Feb 2023 - Oct 2024
1. AAH Modeling
Contributed to a PySpark-based system designed to process and analyze large-scale datasets from multiple sources, including Hive. The system identified critical data points, triggered real-time alerts, and generated dynamic business reports using Pandas, Matplotlib, and Plotly to support data-driven decision-making.
Key Contribution:
Reduced pipeline runtime by 30% through optimization and implemented reusable logic for alerting and reporting (see the sketch below). Collaborated with analysts to ensure data insights aligned with evolving business rules.
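A minimal PySpark sketch of the aggregate-then-alert pattern; the column names and threshold are illustrative, not the project's actual business rules:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("alerting-sketch").getOrCreate()

# Hypothetical stand-in for data read from Hive and other sources.
df = spark.createDataFrame(
    [("feed_a", 120.0), ("feed_a", 95.0), ("feed_b", 310.0)],
    ["source", "value"],
)

# Aggregate per source, then flag rows breaching an assumed threshold.
summary = df.groupBy("source").agg(F.sum("value").alias("total"))
alerts = summary.filter(F.col("total") > 200.0)
alerts.show()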
2. MIP Analytics & Job Scheduler
Developed a Python-based platform to enhance credit risk analytics and portfolio management for BSAM. The platform streamlined data processing, generated insightful visualizations for risk assessment, and provided real-time access to critical data from databases using Pandas and NumPy. Collaborated with a team of 5 to create a Python-based job scheduler using APIs to automate model execution.
Key Contribution:
Developed high-performance data pipelines using Pandas, Polars, and SQL, and exposed automated model execution workflows through FastAPI and Flask for seamless risk analytics delivery (see the sketch below).
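A minimal sketch of exposing model execution through FastAPI; the endpoint path, payload field, and run_model stub are illustrative assumptions, not the platform's actual API:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class RunRequest(BaseModel):
    job_name: str

def run_model(job_name: str) -> str:
    # Stand-in for the real Pandas/Polars execution pipeline.
    return f"{job_name} completed"

@app.post("/runs")
async def trigger_run(request: RunRequest):
    # Kick off the hypothetical model run and report its status.
    return {"status": run_model(request.job_name)}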
3. FMR – Alerting System
Designed and implemented a real-time monitoring system using Elasticsearch to detect suspicious debit card transactions. The system continuously analyzes transaction patterns and triggers alerts for potential fraud based on predefined anomaly rules.
Key Contribution:
Delivered real-time alerts on 100% of threshold-rule matches for debit card transactions using Elasticsearch alerting (see the sketch below).
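A minimal sketch of a threshold rule expressed as an Elasticsearch query (8.x client); the index name, field names, and threshold are assumptions, not the production rules:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Find debit card transactions above an assumed amount threshold in the
# last five minutes; matches would feed the alerting pipeline.
response = es.search(
    index="transactions",
    query={
        "bool": {
            "filter": [
                {"range": {"amount": {"gt": 5000}}},
                {"range": {"@timestamp": {"gte": "now-5m"}}},
            ]
        }
    },
)
for hit in response["hits"]["hits"]:
    print("ALERT:", hit["_source"])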
Tech Mahindra
Software Engineer
Sept 2021 - Dec 2022
1. Code Converter Tool
Developed a Python-based tool utilizing ANTLR to automate the conversion of COBOL code into Java equivalents. This tool was designed to expedite legacy system modernization by streamlining the migration process.
Key Contribution:
Parsed COBOL source code using ANTLR grammar in Python, walked the syntax tree, and mapped nodes to equivalent Java templates using Jinja for automated code conversion (see the sketch below).
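A minimal sketch of the Jinja template-mapping step only; a real run would first walk the ANTLR parse tree to extract these fields, and the template and field names here are illustrative:

from jinja2 import Template

# Hypothetical Java-method template a converted COBOL paragraph maps onto.
java_method = Template(
    "public void {{ name }}() {\n"
    "    {{ body }}\n"
    "}"
)

# Fields a tree-walker listener might extract from one COBOL paragraph.
node = {"name": "computeTotals", "body": "total = price * quantity;"}
print(java_method.render(name=node["name"], body=node["body"]))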
2. Legacy Code Analysis Tool
Developed a Python-based tool for in-depth analysis of legacy codebases to support modernization and maintenance efforts. The tool analyzes code structure, dependencies, and risks, capturing comment and executable line counts, dynamic call chains, variable usage, and CRUD operations, and generating rule-based reports similar to SonarQube. End-to-end analysis endpoints were built using Flask, and extracted insights were stored in MongoDB for scalable access and reporting.
Key Contribution:
Built end-to-end Flask APIs to analyze legacy code and store structural insights, call chains, and rule-based metrics in MongoDB for scalable reporting (see the sketch below).
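A minimal sketch of the analyze-and-store flow, assuming a local MongoDB; the route, payload shape, database/collection names, and the line-counting metric are illustrative stand-ins for the tool's real analyses:

from flask import Flask, jsonify, request
from pymongo import MongoClient

app = Flask(__name__)
collection = MongoClient("mongodb://localhost:27017")["legacy"]["insights"]

@app.route("/analyze", methods=["POST"])
def analyze():
    source = request.get_json()["source"]
    lines = source.splitlines()
    # Stand-in metric: split comment vs. executable lines (COBOL-style
    # comments assumed to start with "*").
    insight = {
        "comment_lines": sum(1 for l in lines if l.lstrip().startswith("*")),
        "executable_lines": sum(
            1 for l in lines if l.strip() and not l.lstrip().startswith("*")
        ),
    }
    # Insert a copy so Mongo's generated _id does not leak into the response.
    collection.insert_one(dict(insight))
    return jsonify(insight)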