Name: Simun Yugashakti
Phone: +1-531-***-****
Email: ***********@*****.***
Professional Summary:
With over 14 years of extensive experience in the IT industry, I have honed my expertise across various facets of the Software Development Life Cycle (SDLC). My professional journey is marked by a commitment to quality, efficiency, and continuous improvement through Test-Driven Development (TDD) and Object-Oriented Programming (OOP) methodologies. I have successfully implemented enterprise standards and best practices, delivering consistent and reliable applications.
As a Backend Developer, I possess substantial working experience in Python, Django, AWS, database management, and UNIX technologies.
Extensive experience in utilizing Python for backend development, including core language features such as variables, control structures, loops, exceptions, and functions.
Proficient in Python data structures (lists, linked lists, sets, dictionaries, arrays, and tuples) and object-oriented concepts (polymorphism, inheritance, encapsulation).
Adept at using Python collections (Counter, defaultdict, deque, ChainMap, heapq, namedtuple) and multithreading concepts (threads, synchronization, locks, barriers).
Experienced with Python libraries such as datetime and pytz for managing time zones, python-docx and PyPDF2 for processing Word and PDF files, and requests for handling HTTP methods.
Skilled in data integration using Python libraries for JSON, CSV, XML formats, and data processing with pandas.
Expertise in numerical operations using NumPy, advanced text processing with regular expressions (re), and configuration management using libraries like keyboard, subprocess, configparser.
Proficient in using Django Object-Relational Mapping (ORM) for schema management and data retrieval, and Django Rest Framework for developing robust RESTful APIs.
Experienced in setting up API authentication and user registration with Djoser, and performing CRUD operations with libraries like pyodbc, psycopg2, pymysql.
Capable of implementing complex database operations (joins, group by, having, analytical functions) and performing effective unit and integration testing using pytest and unittest.
Over 5 years of experience in JavaScript, HTML, and CSS, ensuring the development of dynamic and responsive web applications.
Proficient in Agile ceremonies, ensuring project alignment, fostering continuous improvement, and maintaining effective collaboration within teams.
Well-versed in AWS services including S3 for storage, RDS for database management, and CloudWatch for monitoring.
Skilled in automating AWS infrastructure management tasks using AWS Boto3 SDK and AWS CLI.
Proficient in version control systems like GitHub and collaboration tools like JIRA, facilitating efficient project management and teamwork.
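The collection types listed above can be illustrated with a short standard-library sketch (the data is purely illustrative):

```python
from collections import Counter, defaultdict, deque, namedtuple

# Counter: tally occurrences in an iterable
word_counts = Counter(["api", "db", "api", "cache", "api"])
assert word_counts["api"] == 3

# defaultdict: group values without key-existence checks
groups = defaultdict(list)
for name, team in [("ada", "backend"), ("bo", "backend"), ("cy", "infra")]:
    groups[team].append(name)
assert groups["backend"] == ["ada", "bo"]

# deque: O(1) appends/pops at both ends, handy for sliding windows
window = deque(maxlen=3)
for i in range(5):
    window.append(i)
assert list(window) == [2, 3, 4]

# namedtuple: lightweight immutable records
Point = namedtuple("Point", ["x", "y"])
p = Point(x=1, y=2)
assert p.x + p.y == 3
```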
Technical Skills:
Programming Languages
Python, JavaScript, HTML, CSS, Angular
Python Libraries
Pandas, NumPy, Matplotlib, BeautifulSoup, pytesseract, pywinauto, pyautogui, Selenium, keyring, Django, djangorestframework, google-cloud-bigquery, google-cloud-storage
Operating Systems
Ubuntu, Red Hat Linux
Database
DynamoDB, MySQL, Oracle, Postgres, SQL Server
Cloud
AWS, GCP
Web Services
REST
Version Controls
SVN, GitHub
Educational Qualification:
Bachelor of Computer Science and Engineering, Biju Patnaik University, India, 2010
Professional Experience:
Client: Charles Schwab, Texas, USA August 2024 – Present
Role: Project Lead
Project Overview: As a Project Lead, I played a pivotal role in leading the architecture, design, and development of a full-stack application, coordinating with peers on the critical DTF (Dart Testing Framework) Tool. My responsibilities encompassed leading a team of development professionals, designing and developing backend applications, and integrating various technologies to optimize system performance and efficiency.
Project-1: Data Job Execution feature in DTF Tool
Roles & Responsibilities:
Led a team of 8 developers to design and implement a full-stack solution using Angular, .NET, and Python.
Designed the metadata table in SQL Server to store execution-related and historical data, ensuring efficient tracking and reporting.
Established a data journey mechanism to migrate production data to lower environments for validation and testing purposes.
Implemented a data job lineage feature, enabling users to visualize and modify lineage dynamically during execution.
Developed a previous job configuration selection feature, allowing users to reuse existing configurations instead of starting from scratch.
Integrated Google Cloud (google.cloud library) to store logs in BigQuery and created custom views for business insights.
Provided API development guidelines, including pagination strategies, to enhance performance and optimize data access.
Deployed new features in PCF (Pivotal Cloud Foundry) environments, ensuring seamless integration and stability across different deployment stages.
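The pagination guidance mentioned above can be sketched in plain Python; the function and parameter names below are illustrative, not the tool's actual API:

```python
def paginate(items, page=1, page_size=25):
    """Return one page of results plus metadata, clamping out-of-range pages.

    Offset/limit pagination keeps response payloads small and lets clients
    walk large result sets predictably.
    """
    total = len(items)
    pages = max(1, -(-total // page_size))  # ceiling division
    page = min(max(page, 1), pages)
    start = (page - 1) * page_size
    return {
        "page": page,
        "pages": pages,
        "total": total,
        "results": items[start:start + page_size],
    }

# Example: 103 records, 25 per page -> 5 pages, last page holds 3 items
resp = paginate(list(range(103)), page=5)
assert resp["pages"] == 5 and len(resp["results"]) == 3
```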
Project-2: Account Number Validation in IICS Extracted Files
Roles & Responsibilities:
Developed a Python-based solution to validate account number-related strings from IICS extracted ZIP files.
Used the Google Cloud Storage module to download ZIP files locally before processing.
Utilized the zipfile module to flatten and extract metadata, storing it in BigQuery tables for further analysis.
Implemented a validation check to ensure account numbers are less than 9 digits, triggering user alerts for discrepancies.
Extended the validation to view_definitions in the Information Schema, identifying improper use of CAST and TRIM operations and alerting the team for corrections.
Successfully promoted the validation script to higher environments (Pre-Prod and Prod) to maintain data integrity across deployments.
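The "fewer than 9 digits" account-number check described above can be sketched with the standard re module (the pattern and function names are hypothetical):

```python
import re

ACCOUNT_RE = re.compile(r"\b\d+\b")  # digit runs that may be account numbers

def find_invalid_accounts(text, max_digits=8):
    """Return digit strings that violate the 'fewer than 9 digits' rule."""
    return [m for m in ACCOUNT_RE.findall(text) if len(m) > max_digits]

line = "acct=12345678, bad=123456789, ok=42"
assert find_invalid_accounts(line) == ["123456789"]
```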
Client: Union Pacific Railroad, Omaha, USA November 2017 – August 2024
Role: Lead Python AWS Developer
Project Overview: As a Python AWS Developer, I played a pivotal role in developing and implementing end-to-end software/automation solutions for a critical Positive Train Control (PTC) system. My responsibilities encompassed leading a team of automation professionals, designing and developing backend applications, and integrating various technologies to optimize system performance and efficiency.
Project-1: Train data management backend API
Roles & Responsibilities:
Designed and implemented endpoints for train data backend application code using REST API principles.
Developed RESTful APIs using Django and Django Rest Framework (DRF) to facilitate asset data management.
Created DRF serializers to convert complex data structures into JSON format for easy client-side consumption.
Implemented data validation within DRF serializers to ensure consistency across API endpoints.
Utilized multithreading in Python scripts to optimize I/O operations and improve performance.
Configured URL routing using Django's URL patterns to map views to specific endpoints for train data management modules.
Used the drf-yasg module for API documentation.
Leveraged Django middleware to handle request/response processing, user authentication, and data validation.
Implemented authentication and authorization services using JWT Authentication and Session Authentication.
Implemented caching methods using the Redis library.
Participated in the design and implementation of AWS CodePipeline, streamlining the deployment process.
Created Airflow scheduling scripts in Python to automate Sqoop ingestion of a wide range of data sets.
Designed and implemented data pipelines/DAG using Apache Airflow, automating ETL processes for improved data accuracy and efficiency.
Developed custom DAGs and operators to integrate with train data sources and systems.
Utilized external monitoring and the email_on_failure parameter in Airflow for the alerting system.
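The multithreaded I/O optimization mentioned above can be sketched with the standard library; fetch_record is a stand-in for a real HTTP or database call:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fetch_record(record_id):
    """Stand-in for an I/O-bound call (HTTP request, DB query)."""
    time.sleep(0.05)  # simulate network latency
    return {"id": record_id, "status": "ok"}

def fetch_all(record_ids, max_workers=8):
    # Threads overlap the wait time of I/O-bound calls; the GIL is released
    # while each worker waits, so total wall time shrinks.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch_record, record_ids))

results = fetch_all(range(8))
assert len(results) == 8 and all(r["status"] == "ok" for r in results)
```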
Project-2: Operational Train Data Analysis and Visualization
Roles & Responsibilities:
Conducted in-depth analysis on sync events, initialization events, and locomotive out-of-order timeouts within the "Train Data Analysis" project using Python-Django Rest Framework.
Designed ETLs to load, validate, and transform data files in JSON and XML formats for downstream processing.
Developed a Python framework for Redmine-JIRA integration of reported defects and enhancements using Python, JSON, REST API.
Designed a JIRA dashboard to showcase listed defects and enhancements using filters.
Developed a dashboard to anticipate locomotive states for the crew using back-office server data, leveraging Tableau for comprehensive reporting and visualization.
Designed and developed the PTC Issues Analysis visualization reports framework using Python-Matplotlib, maintaining data from Hadoop DB to Oracle tables.
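The JSON/XML load-and-validate step of such ETLs can be sketched with the standard library (the field names are illustrative):

```python
import json
import xml.etree.ElementTree as ET

REQUIRED = {"train_id", "event"}

def load_events(payload, fmt):
    """Parse a JSON or XML payload into a list of event dicts."""
    if fmt == "json":
        records = json.loads(payload)
    else:  # XML: one <event> element per record
        records = [dict(el.attrib) for el in ET.fromstring(payload).iter("event")]
    # Validate: every record must carry the required fields
    for rec in records:
        missing = REQUIRED - rec.keys()
        if missing:
            raise ValueError(f"record missing fields: {missing}")
    return records

json_rows = load_events('[{"train_id": "UP100", "event": "sync"}]', "json")
xml_rows = load_events('<events><event train_id="UP101" event="init"/></events>', "xml")
assert json_rows[0]["event"] == "sync" and xml_rows[0]["train_id"] == "UP101"
```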
Project-3: End to End Automation for PTC software
Roles & Responsibilities:
Led the development of various end-to-end automation solutions for a Positive Train Control (PTC) system, managing a team of 5 automation professionals.
Collaborated with cross-functional teams and stakeholders to ensure the automation solutions met specific requirements and delivered the desired results.
Integrated Tesseract OCR software for effective image reading from the PTC software.
Utilized advanced Python modules for desktop application handling and OCR solutions, such as pywinauto, pyautogui, and pytesseract.
Utilized an on-premises Linux server for application interaction and used Git's branching strategy for source code management.
Project-4: End to End Automation framework for Remote Bulletin/Authorities
Roles & Responsibilities:
Developed an automation framework for issuing remote bulletins and authorities using Python-Selenium Module.
Designed and implemented CI/CD pipelines using Jenkins and Git webhooks to streamline the deployment process.
Generated detailed test reports with pytest frameworks, providing insights into test results and coverage.
Implemented logging mechanisms to capture and analyze test execution details and debug issues efficiently.
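A minimal sketch of such a logging mechanism, using only the standard logging module (logger and message names are illustrative):

```python
import logging

def get_test_logger(name="automation", level=logging.DEBUG):
    """Configure a logger that timestamps each test step for later debugging."""
    logger = logging.getLogger(name)
    logger.setLevel(level)
    if not logger.handlers:  # avoid duplicate handlers on re-import
        handler = logging.StreamHandler()
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
        )
        logger.addHandler(handler)
    return logger

log = get_test_logger()
log.info("bulletin issued: id=%s", "B-1042")   # illustrative test step
log.debug("response time: %.2fs", 0.31)
```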
Environment: Python, AWS, Git, CI/CD, Jira, Django, MySQL, XML, JSON, RESTful API, Tesseract, Tableau, Matplotlib, Hadoop, Oracle, Selenium, Airflow, DAG
Client: Qualcomm, India December 2014 - November 2017
Role: Python Developer
Project Overview: As a Python Developer at Qualcomm, I played a pivotal role in designing, developing, and deploying a comprehensive test automation framework, Lincase. The framework aimed to streamline testing processes, enhance test efficiency, and provide valuable insights through data visualization and analysis.
Project-1: Lincase Framework
Roles & Responsibilities:
Utilized Python, Django, HTML, CSS, JavaScript, and MySQL to create a robust framework for testing and automation.
Incorporated Object-Oriented Programming (OOP) concepts and Test-Driven Development (TDD) practices to ensure high code quality and maintainability.
Developed and integrated user-friendly RESTful APIs using Python and Django Rest Framework (DRF) to facilitate test case scheduling, progress monitoring, and results visualization.
Implemented authentication and authorization services using the Base Permission Class for a custom user interface, enhancing security and user management.
Designed interactive features like bar and pie charts to provide intuitive data visualization, improving insights for testing teams.
Integrated export functionalities to enable seamless data extraction in Excel or CSV formats, promoting data portability and analysis flexibility.
Maintained and developed data pipelines for ingesting data from various sources using S3, AWS RDS, and Python.
Utilized Python DB module libraries like psycopg2 and pandas to_sql to transfer data into diverse data repositories.
Created SQL queries, stored procedures, functions, packages, tables, and views to retrieve and manipulate data effectively, and translated them into Django ORM constructs for use within the framework.
Leveraged RabbitMQ server for efficient event analysis and management, ensuring real-time updates and streamlined communication across the testing environment.
Conducted thorough analysis of downtime occurrences in the Lincase software, identifying key areas for improvement and implementing proactive measures to minimize downtime.
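The query-and-view pattern described above can be sketched with sqlite3 standing in for MySQL (table and column names are hypothetical):

```python
import sqlite3

# In-memory stand-in for the MySQL backing store
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE runs (suite TEXT, status TEXT, secs REAL)")
conn.executemany(
    "INSERT INTO runs VALUES (?, ?, ?)",
    [("smoke", "pass", 1.2), ("smoke", "fail", 3.4), ("full", "pass", 9.9)],
)

# A view that aggregates per-suite results, queried by the dashboard layer
conn.execute(
    """CREATE VIEW suite_summary AS
       SELECT suite, COUNT(*) AS total,
              SUM(status = 'pass') AS passed
       FROM runs GROUP BY suite"""
)
rows = {
    suite: (total, passed)
    for suite, total, passed in conn.execute(
        "SELECT suite, total, passed FROM suite_summary"
    )
}
assert rows["smoke"] == (2, 1)
```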
Project-2: Lab Device Health Monitor Status Dashboard
Roles & Responsibilities:
Employed the Python urllib module to fetch device health record information from private web APIs.
Developed a comprehensive dashboard using Django representing device health status by interacting with the server using shell commands.
Read, processed, and stored device data in a central location using Python libraries like JSON, CSV, and requests.
Used Python scripts and libraries such as jsonschema and csvvalidator to validate JSON and CSV files for compliance.
Gathered end-user feedback, incorporating their suggestions and requirements into the framework design to enhance usability.
Designed and developed unit and integration test cases using pytest to ensure the functionality and reliability of the system.
Provided comprehensive documentation for the framework, including usage guidelines and best practices to ensure ease of adoption by end-users.
Developed and maintained Splunk dashboards and reports to monitor application performance and security events.
Implemented advanced search queries and alerts, enhancing incident response times by 40%.
Conducted data onboarding and normalization, ensuring comprehensive log analysis and compliance.
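A standard-library sketch of such JSON/CSV compliance checks (plain json and csv stand in here for jsonschema and csvvalidator; the field names are illustrative):

```python
import csv
import io
import json

def validate_json(text, required=("device_id", "status")):
    """Parse JSON and return any records missing the required fields."""
    records = json.loads(text)
    return [r for r in records if not all(k in r for k in required)]

def validate_csv(text, required=("device_id", "status")):
    """Check the CSV header covers the required columns; return missing ones."""
    reader = csv.DictReader(io.StringIO(text))
    return sorted(set(required) - set(reader.fieldnames or []))

assert validate_json('[{"device_id": "d1", "status": "up"}]') == []
assert validate_csv("device_id,uptime\nd1,99.9\n") == ["status"]
```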
Environment: Python, Django, HTML, CSS, JavaScript, MySQL, REST API, JSON, Pandas, AWS (S3, RDS), SQL, RabbitMQ.
Tools and Libraries: jsonschema, csvvalidator, urllib, requests, psycopg2.
Client: Intel Corporation, India May 2013 - June 2014
Role: Python Automation Developer
Project Overview: Played a critical role in validating the Ultrabook platform for the Shark Bay (Haswell Processor) project. My responsibilities encompassed end-to-end automation, defect tracking, and performance optimization, leveraging Python and various tools and libraries.
Roles & Responsibilities:
Spearheaded the validation for the Shark Bay (Haswell Processor) project, conducting end-to-end Python automation, defect tracking, and debugging.
Implemented multi-threading to facilitate parallel processing of inventory-related tasks, significantly enhancing performance.
Leveraged the requests library to integrate external APIs, including supply chain management and ERP systems for inventory forecasting.
Utilized libraries such as pytz and dateutil to ensure accurate timestamping during inventory tracking and reporting processes.
Efficiently managed inventory data using Python data structures like lists and dictionaries for tracking products and categories.
Used the os and subprocess modules for system-level operations, including file management and running external commands/scripts.
Managed data interchange and configuration storage by leveraging JSON and XML serialization/deserialization.
Developed Python scripts for functionality testing of BIOS versions in Windows and Android operating systems.
Conducted root cause analysis and implemented corrective measures to resolve defects and improve platform performance.
Collaborated with cross-functional teams, providing technical expertise and support during the validation process.
Prepared comprehensive monthly reports, presenting progress, results, and recommendations to stakeholders, ensuring they were informed and updated.
Expertly used tools such as DDMS, TERATERM, Kernel Tuner, System Tuner, HSD, and JAMA counter to streamline the platform validation process.
Used DDMS to monitor and debug the Android platform and applications, improving efficiency and performance.
Employed Python testing frameworks like pytest and PyUnit to automate unit test execution, improving code quality.
Utilized Docker to package and deploy applications and their dependencies consistently across various environments.
Leveraged Jenkins CI/CD pipelines to automate building, testing, and deploying applications.
Utilized Jira as a project management tool to plan, track, and manage tasks, user stories, and project milestones.
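The system-level os/subprocess usage mentioned above can be sketched as follows (the invoked command is a stand-in for the real external scripts):

```python
import os
import subprocess
import sys
import tempfile

def run_command(args):
    """Run an external command, capturing output; raises on non-zero exit."""
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return result.stdout.strip()

# Use the current interpreter so the sketch runs anywhere Python does
out = run_command([sys.executable, "-c", "print('bios check ok')"])
assert out == "bios check ok"

# os-level file management: stage a scratch directory for validation logs
scratch = tempfile.mkdtemp(prefix="validation_")
assert os.path.isdir(scratch)
```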
Environment: Python, SQL, JSON, XML, Docker, Jenkins, Jira.
Tools and Libraries: requests, pytz, dateutil, os, subprocess, pytest, PyUnit, DDMS, TERATERM, Kernel Tuner, System Tuner, HSD, JAMA counter.
Client: ST-Ericsson, India May 2011 - January 2013
Role: Python Automation Developer
Project Overview: As a Trainee Consultant on the Mont Blanc (U8500) platform validation project, I focused on ensuring the platform's reliability and performance through extensive multimedia testing and automation. My role involved in-depth log analysis, defect resolution, and the development of Python scripts for various testing aspects.
Roles & Responsibilities:
Contributed to the Mont Blanc (U8500) platform validation project, focusing on extensive multimedia testing to ensure platform reliability and performance.
Conducted in-depth log debugging and defect analysis, identifying and resolving issues to improve platform functionality.
Performed beat analysis and pixel analysis for audio and camera testing, ensuring high-quality audio and image output using Python scripts.
Developed Python scripts to verify the captured image features, quality of thumbnails stored in memory, and log analysis of all multimedia testing logs from Android Studio.
Conducted thorough stress testing and ad-hoc testing, as well as multiple rounds of exploratory testing to identify potential issues and ensure the robustness of the platform.
Leveraged Python CSV and Excel libraries to facilitate the export and import of inventory data in various formats.
Utilized database adapters such as psycopg2 to establish connections, enabling seamless interaction with Oracle and PostgreSQL databases.
Developed SQL queries, stored procedures, functions, packages, and views to retrieve and manipulate data efficiently.
Proficiently used Python and shell scripting for diverse automation tasks, enhancing efficiency and scalability across projects.
Involved in developing automated solutions to streamline repetitive tasks, optimize workflows, and accelerate development cycles.
Continuously explored emerging automation technologies to stay updated with industry trends and drive continuous improvement in automation techniques.
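The CSV export/import of inventory data mentioned above can be sketched with the standard csv module (the record fields are illustrative):

```python
import csv
import io

FIELDS = ["part_id", "name", "qty"]

def export_inventory(rows):
    """Serialize inventory records to CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def import_inventory(text):
    """Parse CSV text back into records, restoring integer quantities."""
    return [
        {**row, "qty": int(row["qty"])}
        for row in csv.DictReader(io.StringIO(text))
    ]

rows = [{"part_id": "U8500-01", "name": "camera module", "qty": 12}]
assert import_inventory(export_inventory(rows)) == rows
```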
Environment: Python, SQL, JSON, Oracle, PostgreSQL.
Tools and Libraries: psycopg2, Python CSV and Excel libraries, Android Studio.