Suma Latha
Email id: ad17ag@r.postjobfree.com
Mobile No: 763-***-****
https://www.linkedin.com/in/suma-latha-68826b135/
11 years of industry experience spanning DevOps, Software Configuration Management, build, release, and deployment, and cloud (AWS) environments. Proven track record of delivering cloud projects on time and within budget while ensuring alignment with business objectives. Passionate about innovation and continuous learning, and committed to staying abreast of emerging cloud trends and best practices. Exceptional communicator and collaborator with a strong ability to work cross-functionally; proactively identifies areas for improvement within the team.
PROFESSIONAL SUMMARY
Experience with AWS Cloud services such as EC2, VPC, ELB, Auto Scaling, Security Groups, Route53, IAM, EBS, AMI, EFS, RDS, S3, SNS, SES, CloudWatch, CloudFormation, CloudFront, CloudTrail, and Lambda.
Good working knowledge on Amazon AWS IAM service: IAM Policies, Roles, Users, Groups, AWS Access keys.
Experienced in all phases of the software development life-cycle (SDLC) with specific focus on the build and release of quality software. Experienced in Waterfall, Agile/Scrum, Lean and most recently Continuous Integration (CI) and Continuous Deployment (CD) practices.
Experience in installation, configuration, tuning, security, backup, recovery, and operating system upgrades on Red Hat Linux (RHEL 5, 6), Unix, CentOS, and Ubuntu.
Experience in implementing Python web frameworks such as Django, Flask, Pylons, Web2py, and Python Servlet Engine (PSE).
Experienced in working with various Python Integrated Development Environments like IDLE, PyCharm and Sublime Text.
In-depth understanding of the principles and best practices of Software Configuration Management (SCM).
Expertise in migrating key systems from on premise hosting to Amazon Web Services (AWS).
Set up private networks and sub-networks using Virtual Private Cloud (VPC), created security groups to associate with the networks, and set up scalability for application servers using the command-line interface.
Configured and managed AWS Glacier to move old data to archives, based on retention policy of database/applications.
Experience in Python and Shell scripting to automate infrastructure related manual tasks.
Wrote and implemented CloudFormation Templates (CFTs) for creating, updating, and deleting stacks on AWS.
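As an illustration of the kind of template behind this bullet, here is a minimal sketch that renders a CloudFormation template for a single S3 bucket from Python. The stack layout, resource name, and bucket name are hypothetical placeholders, not taken from any actual project:

```python
import json

def make_s3_stack(bucket_name: str) -> str:
    """Render a minimal CloudFormation template (JSON) for one S3 bucket.

    Illustrative only: the logical resource name "ArchiveBucket" and the
    bucket name are placeholders.
    """
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Minimal example stack: one versioned S3 bucket",
        "Resources": {
            "ArchiveBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": bucket_name,
                    "VersioningConfiguration": {"Status": "Enabled"},
                },
            }
        },
        "Outputs": {
            # Expose the bucket ARN so other stacks can import it
            "BucketArn": {"Value": {"Fn::GetAtt": ["ArchiveBucket", "Arn"]}}
        },
    }
    return json.dumps(template, indent=2)

print(make_s3_stack("example-archive-bucket"))
```

The rendered JSON can be passed directly to `aws cloudformation create-stack` or `update-stack`; deleting the stack tears the bucket definition down again.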
Automated infrastructure provisioning on AWS using Terraform and Ansible. Experienced in installing and configuring an Ansible management node to deploy configurations to end-user nodes, and in writing Ansible playbooks to push configurations to production servers.
Worked with container technologies such as Docker and Kubernetes: managed containers with Docker by writing Dockerfiles, set up automated builds on Docker Hub, and installed and configured Kubernetes.
Hands-on with Kubernetes cluster management and administration: created pods and managed them by updating resources as requirements changed.
Extensively worked on Jenkins/Hudson: installed, configured, and maintained it for continuous integration (CI) and for end-to-end automation of all builds and deployments. Good experience with continuous integration using the Pipeline view.
Experience in writing Bash Shell Scripts to automate the administrative tasks and management using Cron Jobs.
Knowledge of network protocols such as FTP, SFTP, SSH, HTTP, and HTTPS; coordinated with offshore and onshore teams for production releases.
Excellent knowledge of Python collections and multi-threading.
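A small sketch of the collections-plus-threading combination this bullet refers to: counting status codes from several log "shards" concurrently with `collections.Counter` and a thread pool. The shard contents are made up for illustration:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
import threading

# Hypothetical log shards; each inner list is one shard's status codes.
shards = [
    ["200", "200", "500"],
    ["404", "200"],
    ["500", "200", "301"],
]

totals = Counter()
lock = threading.Lock()  # Counter.update on a shared Counter needs guarding

def count_shard(shard):
    local = Counter(shard)   # count within the shard, lock-free
    with lock:
        totals.update(local)  # merge into the shared total under the lock

with ThreadPoolExecutor(max_workers=3) as pool:
    list(pool.map(count_shard, shards))  # drain the iterator; pool waits on exit

print(totals["200"])  # 4
```

Counting locally per shard and merging under a short-lived lock keeps contention low compared to locking around every increment.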
Experience working with various Python packages such as NumPy, SQLAlchemy, and matplotlib.
Hands on experience in upgrading servers using RPM, Apt-get and Yum package installer.
Extensive experience using Maven and Ant as build tools to produce deployable artifacts (jar, war, and ear) from source code for the Tomcat server.
Good knowledge of checking code quality using SonarQube and generating code-coverage metrics.
Good knowledge of the Nexus artifact repository.
Performance testing for complex systems. Build, Release and environment planning, designing, implementation for projects across different technologies.
Experience in AWS like Amazon EC2, Amazon S3, Amazon Redshift, Amazon EMR and Amazon SQS.
Excellent experience with Python development under Linux OS and Mac OS.
Troubleshoot the application related issues in production & non-production environments.
Worked with Engineers, QA and other teams to ensure automated test efforts are tightly integrated with the build system and in fixing the error while doing the deployment and building.
Experience using bug-tracking systems such as Jira and Bugzilla for hotfixes and bug fixes.
Experience in Branching, Merging, Tagging and maintaining the version across the environments using SCM tools like Subversion (SVN), GIT (GitHub, GitLab). Deployed and configured GIT repositories with branching, forks, tagging, merge requests, and notifications.
Wrote custom monitoring and integrated monitoring methods into deployment processes to develop self-healing solutions, configuring alerts using Splunk and CloudWatch.
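One building block of a self-healing setup like the one described above is the decision of when to act on failed health checks. The sketch below is a hypothetical policy (restart after three consecutive failures), not the actual implementation from any project:

```python
class HealthTracker:
    """Decide when a self-healing restart should fire.

    Illustrative policy: trigger after `threshold` consecutive failed
    health checks, then reset so the action does not re-fire immediately.
    """

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.consecutive_failures = 0

    def record(self, healthy: bool) -> bool:
        """Record one health-check result; return True if a restart
        should be triggered now."""
        if healthy:
            self.consecutive_failures = 0
            return False
        self.consecutive_failures += 1
        if self.consecutive_failures >= self.threshold:
            self.consecutive_failures = 0  # reset after firing
            return True
        return False

tracker = HealthTracker(threshold=3)
checks = [True, False, False, False, True, False]
results = [tracker.record(h) for h in checks]
print(results)  # the restart fires on the third consecutive failure
```

In practice `record()` would be fed by a periodic probe (an HTTP ping, a CloudWatch alarm state, a Splunk search), and a `True` result would invoke the restart or scaling action.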
Comprehensive problem-solving capability and good communication skills. A quick learner, hard worker, and consistent team player; highly flexible in adapting to changing environments and eager to learn new things. Comfortable presenting and communicating to all kinds of audiences, one-on-one or in groups.
ACADEMIC PROFILE
Master of Computer Applications, Andhra University, 2006-2009
Bachelor of Computer Applications, Acharya Nagarjuna University, 2002-2005
TECHNICAL SKILLS
Operating systems
Windows, Linux (Red Hat 5/6), Ubuntu, CentOS
Build Tools
Maven, Gradle, and Ant
Application server
Apache Tomcat
Cloud Platforms
AWS, Azure, Google Cloud
Software Development Methodologies
SDLC, Agile/Scrum, Waterfall
Issue Tracking Tools
JIRA, ServiceNow, Bugzilla, Confluence
Version Control
Bitbucket, GitHub, SVN, GIT, GitLab
Configuration Management
Ansible
Continuous Integration
Jenkins
Containerization and Orchestration
Docker, Kubernetes
Infrastructure Provisioning Tools
Terraform, CloudFormation
Logging & Monitoring Tools
CloudWatch, Splunk, ELK, SolarWinds
Package Management
Nexus, JFrog Artifactory
Code Quality
SonarQube
PROJECT/WORK EXPERIENCE
Client: Sherwin-Williams
Alpharetta, GA
Role: Sr. Lead DevOps Infrastructure Engineer / Principal Cloud Developer
Duration: June 2023 – Present
Responsibilities:
Designed, deployed, and maintained application servers on AWS infrastructure using services such as EC2, S3, VPC, SNS, Lambda, AWS Glue, Batch, AWS Connect, WAF, IAM, RDS, EKS, ECS, CloudFormation, AWS APIs, Redshift, DynamoDB, Aurora, and Route53, along with HashiCorp Terraform, JSON, and Troposphere. Worked almost exclusively in AWS environments, with over 8 years of experience.
Configured S3 bucket policies to manage data, and maintained data backups and archives using Glacier.
Infrastructure-focused throughout my career, with extensive experience building out and managing CI/CD pipelines.
Hands-on experience with DevOps tasks and CI/CD for cloud-based environments, in particular AWS.
Experience designing, building, and implementing AWS APIs.
Hands-on experience developing AWS Lambda functions.
Hands-on experience with Terraform IaC, building and provisioning infrastructure services on AWS.
Managed Kubernetes charts using Helm: created reproducible builds of Kubernetes applications, managed Kubernetes manifest files, and managed releases of Helm packages.
Hands-on experience with Infrastructure as Code, using HashiCorp Terraform to make cloud migrations more efficient.
Experience with HashiCorp Vault.
Experience building and maintaining CI/CD infrastructure with tools like GitHub Actions, Jenkins, Argo CD
Good experience with Python modules such as re (regex), os, argparse, and time.
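A minimal sketch combining two of the modules named above, `argparse` and `re`: a hypothetical CLI that pulls EC2-style instance IDs out of a log line. The flag name and the ID pattern are illustrative assumptions, not a real tool's interface:

```python
import argparse
import re

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical CLI: scan one line of log text for instance IDs.
    p = argparse.ArgumentParser(description="Extract EC2 instance IDs from text")
    p.add_argument("--text", required=True, help="line of log text to scan")
    return p

# EC2 instance IDs look like i- followed by 8 or 17 hex characters;
# the range below accepts both old- and new-style IDs.
INSTANCE_ID = re.compile(r"\bi-[0-9a-f]{8,17}\b")

def extract_ids(text: str):
    return INSTANCE_ID.findall(text)

# Parse an explicit argv list instead of sys.argv, so this runs anywhere.
args = build_parser().parse_args(
    ["--text", "terminated i-0abc12345def67890 and i-00000000"]
)
print(extract_ids(args.text))  # ['i-0abc12345def67890', 'i-00000000']
```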
Configured and managed CI/CD pipelines using GitLab, using Terraform modules for template creation.
Hands-on experience with Terraform to provision AWS infrastructure.
Wrote Python scripts to copy S3 buckets, install packages, and create EC2 instances.
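A sketch of the S3-copy side of such a script, assuming boto3 (the usual AWS SDK for Python, not bundled here) and AWS credentials in the environment. The URI parsing is a pure helper; the bucket and key names are placeholders:

```python
from urllib.parse import urlparse

def split_s3_uri(uri: str):
    """Split 's3://bucket/key' into (bucket, key). Pure helper, no AWS calls."""
    parsed = urlparse(uri)
    if parsed.scheme != "s3" or not parsed.netloc:
        raise ValueError(f"not an s3:// URI: {uri}")
    return parsed.netloc, parsed.path.lstrip("/")

def copy_object(src_uri: str, dst_uri: str):
    """Copy one object between buckets with boto3.

    Requires AWS credentials; boto3 is an assumption of this sketch and
    is imported lazily so the parser above stays testable offline.
    """
    import boto3
    src_bucket, src_key = split_s3_uri(src_uri)
    dst_bucket, dst_key = split_s3_uri(dst_uri)
    s3 = boto3.client("s3")
    s3.copy_object(
        Bucket=dst_bucket,
        Key=dst_key,
        CopySource={"Bucket": src_bucket, "Key": src_key},
    )

print(split_s3_uri("s3://my-bucket/backups/db.dump"))
```

Copying a whole bucket would just wrap `copy_object` in a loop over `list_objects_v2` pages.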
Experience with the provisioning tools Terraform and Packer.
Extensive hands-on experience working in the AWS cloud.
Experience with automation and configuration management (Terraform, Ansible, Chef, etc.)
Managed Ansible playbooks with Ansible roles, group variables, and inventory files; copied and removed files on remote systems using the file module.
Experience with Ansible Tower to manage multiple nodes and inventories for different environments.
Experience with container-based deployments using Docker, working with Docker images, Docker Hub and Docker-registries and Kubernetes.
Migrated on-premises Java applications to the AWS Cloud.
Assisted team members in writing Splunk queries according to best practices. The team is beginning to use Terraform for infrastructure and plans to use Splunk for unified logging (aggregation/correlation).
Hands-on migration experience: created an IAM user for the AWS Replication Agent, created the Replication Settings template in the AWS MGN console, configured the Launch Settings in the AWS MGN console, and tested the instances.
Monitored infrastructure and services using Splunk, Prometheus, Datadog, and Grafana.
Worked extensively on cloud infrastructure automation for AWS using Terraform and Packer.
Performed branching, tagging, and release activities on GitLab and GitHub Actions.
Hands-on experience with the Elasticsearch, Logstash, and Kibana (ELK) log-monitoring stack.
Set up a Chef configuration environment comprising a Chef server and a workstation to manage nodes in the cloud and on premises, deploying cookbooks and recipes from the Chef workstation to the Chef server using the knife tool.
Client: Cognizant (McDonald's), Bangalore, India
Role: DevOps Lead
Duration: Dec 2021 – May 2023
Responsibilities:
Create & maintain environment & tools to automate build/release activities.
Designed and implemented CI/CD pipelines using Jenkins, including building, testing, and deploying applications, ensuring optimal performance and scalability.
Supported 250+ AWS Cloud instances and used the AWS CLI to manage and configure AWS products. Also experienced in setting up databases using AWS services and configuring instance backups to an S3 bucket.
Managed AWS EC2 instances, utilizing S3 and Glacier for data archiving and long-term backup, plus UAT environments and infrastructure servers for Git.
Worked with solutions architects to modify the AWS architecture to include CloudFormation along with Auto Scaling so that it serves as a failover.
Created EC2 instance on AWS, managed security groups, administered Amazon VPCs. Configured security groups at an instance level and Network ACLs at subnet level in VPC.
Maintaining an in-house ticketing system using a Python/Django backend with a Django REST Framework based API, using Angular.js for the web frontend.
Utilized CloudWatch to monitor resources such as EC2 CPU and memory, Amazon RDS, and EBS volumes; set alarms for notifications or automated actions, and monitored logs for a better understanding and operation of the system.
Developed and maintained Jenkins automation scripts using Groovy, to automate build and deployment processes, and reduce deployment time.
Wrote Python code to produce HTTP GET requests, handling JSON and XML payloads and parsing HTML data from websites.
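A sketch of the GET-and-parse pattern behind this bullet, using only the standard library. The sample payloads stand in for real HTTP responses so the parsing part runs offline; `fetch` shows the live-request side but is not exercised here:

```python
import json
import xml.etree.ElementTree as ET
from urllib.request import urlopen

def fetch(url: str):
    """Issue a real HTTP GET (not exercised offline).

    Returns (body, content_type) for parse_payload below.
    """
    with urlopen(url) as resp:
        return resp.read().decode(), resp.headers.get_content_type()

def parse_payload(body: str, content_type: str):
    """Normalize a JSON or XML response body into a dict of top-level fields."""
    if "json" in content_type:
        return json.loads(body)
    if "xml" in content_type:
        root = ET.fromstring(body)
        return {child.tag: child.text for child in root}
    raise ValueError(f"unsupported content type: {content_type}")

# Sample payloads stand in for real responses.
print(parse_payload('{"status": "ok", "count": 2}', "application/json"))
print(parse_payload("<r><status>ok</status></r>", "text/xml"))
```

Splitting fetching from parsing keeps the parsing logic unit-testable without a network.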
Wrote Ansible playbooks, the entry point for Ansible provisioning, in which the automation is defined through tasks in YAML format, to set up the continuous-delivery pipeline; ran Ansible scripts to provision dev servers.
Wrote Ansible playbooks to automate tasks, and used Ansible to configure web apps and deploy them on servers.
Used Ansible and Ansible Tower as configuration management tools to automate repetitive tasks, deploy critical applications quickly, and proactively manage change.
Integrated Jenkins with other DevOps tools, such as Git, Docker, or Kubernetes, to build hybrid and multi-cloud infrastructures.
Designed and implemented Jenkins-based solutions, including Jenkins Shared Libraries, Jenkins Agents, and Jenkins Workflow, to automate complex tasks and workflows.
Performing both manual and automation builds using Maven and Jenkins.
Involved in the release process and deployed applications (WAR, EAR and JAR) to the Tomcat.
Notified the appropriate teams and team members of broken builds and helped restore them to a successful state.
Configured and deployed applications in various work environments like Development, Test, and Production.
Good understanding of Amazon Web Services (AWS), Creating EC2 Instances, S3, Auto scaling, ELB and configuring all necessary services
Strong knowledge of Python Web Frameworks such as Django and Flask.
Monitoring daily builds using continuous integration tool Jenkins.
Responsible for maintaining and administering the Git version control tool; created branches in Git and granted access permissions to developers on a need basis.
Application/Web Server Log files analysis to troubleshoot application problems on application and web server side.
Assisting development team in resolving issues with build, environment, SCM and tools.
Configuring the Docker containers and creating Docker files for different environments.
Creating container and deploying it to production server
Working closely with development team to identify and resolve build or deployment problems and support.
Troubleshooting of Performance and Stability Issues, Support software environments, build, release, deployment and operational issues.
Client: Virtusa (Cloud Management Services), Bangalore, India
Role: Sr. DevOps Engineer
Duration: Apr 2020 to Dec 2021
Responsibilities:
Worked with an Agile development team to deliver an end-to-end continuous integration/continuous delivery product in an open-source environment using Jenkins to get the task done
Used the continuous Integration tools such as Jenkins for automating the build processes
Understanding of Maven and Java for easy setting up of CI jobs on Jenkins.
Designed, Installed and Implemented CI/ CD automation system
Created and updated Bash scripts and modules, files, and packages
Responsible for Continuous Integration (CI) and Continuous Delivery (CD) process implementation using Jenkins along with Shell scripts to automate routine jobs
Configure Continuous Integration from source control, setting up build definition within Visual Studio Team Services (VSTS)
Involved in development of Test environment on Docker containers and configuring the Docker containers using Kubernetes
Analyzed all aspects of server upgrade deployment for 12,500 workstations nationwide to ensure a smooth migration
Design of Cloud architectures for customers looking to migrate or develop new PaaS, IaaS or hybrid solutions
Documented workflows and executed comprehensive training plan to team.
As part of the Infrastructure Team implemented SSO (Single Sign On) and integrated the applications to the Load Balancer
Installed Docker Registry for local upload and download of Docker images and even from Docker hub
Managed Docker orchestration using Docker Swarm
Authored IQ/OQ summaries and assisted in overall project documentation
Setup Jenkins tool to integrate the JAVA project and maintained Jenkins with continuous integration and deployment
Created alarms in CloudWatch service for monitoring the server’s performance, CPU Utilization, disk usage and have insight in the monitoring tool namely Nagios
Built a new CI pipeline Testing and deployment automation with Docker and Jenkins
Used MAVEN as build tools on Java projects for the development of build artifacts on the source code
Configured Nagios to monitor EC2 Linux instances with Ansible automation
Used various services in AWS such as Cloud formation, S3, VPC, IAM.
Involved in Release Management Cycle of various applications.
Involved in planning, deploying and releasing of applications.
Automated previously manual, time-consuming processes to drive efficiency gains.
Created release pipelines using GIT for automatic workflow
Created pipelines in Jenkins incorporating continuous integration tools such as Maven, Git, SonarQube, and Nexus, with continuous deployment using Git.
Proposed branching strategies for using Version Control Systems like GIT, Clear Case, Stash, GitHub & Subversion
Created branches, performed merges in version control systems GIT, GitHub
Good knowledge of Operation Management Technologies - Log Aggregation, Server Monitoring, Process Monitoring, Application Monitoring
Used Kubernetes for container operations and used Kubernetes clusters for networking and load balancing; Kubernetes is also well suited to running web applications in a clustered way, and was used across multiple services by creating images and reducing footprint.
Automated setting up server infrastructure for the DevOps services, using Ansible, shell and python scripts
Advanced along a Python career path through professional development while working on Python projects.
Monitored and secured systems, implemented disaster recovery plans and implemented in-house tools. Docker, Jenkins, GitHub were tools that were used for continuous integration and deployment
Expertise in configuring the monitoring and alerting tools according to the requirement like Prometheus and Grafana, setting up alerts and deployed multiple dashboards for individual applications in Kubernetes
Implemented Slack Notification plugin in Jenkins Using Groovy Scripts
Created jobs for all projects in the Dev, QA, and Prod environments using Groovy scripts.
Client: HCL (T-Mobile Enterprise Delivery Pipeline), Bangalore, India
Role: Sr. DevOps Engineer
Duration: Jul 2018 - Nov 2019
Responsibilities:
Worked with various AWS services: EC2, ELB, Route53, S3, CloudFront, SNS, RDS, IAM, CloudWatch, CloudFormation, Elastic Beanstalk, Lambda, and CloudTrail.
Worked on and fixed DBA-related issues; worked on MongoDB Ops Manager configuration.
Experience building new OpenStack deployments through Puppet and managing them in a production environment.
Managing keys by creating the keys and attaching them to library & Variable Groups with the help of Key Vault
Responsible to Manage IAM Policies, providing access to different AWS resources, design and refine the workflows used to grant access
Development using Object-Oriented Python, NumPy, SciPy, IPython, Pandas and in-house libraries, Web interface development, Data management on a terabyte scale
Using Ansible, created multiple playbooks for machine creation and for SQL Server, cluster server, and MySQL installations.
Integrated Ansible dynamic inventory for VirtualBox, OpenStack, Amazon AWS EC2, and Docker for fully automated deployment across all environments, supporting the required scaling.
Utilized AWS Lambda platform to upload data into AWS S3 buckets and to trigger other Lambda functions
Maintained and developed infrastructure using Ansible, Jenkins, and multiple AWS tools
Used Docker to run different programs on a single VM; built Docker images, including setting entry points and volumes; and worked on installing Docker and creating, tagging, and pushing Docker container images.
Worked on Docker Registry, Machine, and Hub; created, attached, and networked Docker containers; orchestrated containers using Kubernetes for clustering, load balancing, scaling, and service discovery using selectors, nodes, and pods.
Strong ability to build procedures to ETL data into a data warehouse from a variety of data sources, including flat files and database links (MySQL and Oracle).
Maintaining Jenkins in various multiple environments by installing packages on Jenkins master and slaves and perform regular security updates for Jenkins
Developed a fully automated continuous integration system using GIT, Jenkins and custom tools developed in Python and Bash
Configured build tool Maven for building deployable artifacts such as jar, war, and ear from source code and Artifactory Repository like Nexus for Maven and ANT builds to upload artifacts using Jenkins
Managed and monitored the server and network infrastructure using Splunk; applied blackouts for outages and pulled reports to provide to the client.
Created Maven POM files for Java projects and installed the applications on AWS EC2 AMIs (Linux), RedHat, and Ubuntu.
Worked on ECS and EKS deployment to an application on a Tomcat server hosted in containers using Terraform to create Infrastructure as Code
Worked on SonarQube for continuous inspection of code quality and automatic code reviews to detect bugs; automated Nagios alerts and email notifications using Python scripts.
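A sketch of the notification side of such a script: composing a Nagios-style alert email with the standard library. The addresses and alert fields are placeholders, and the actual send step (e.g. via `smtplib`) is omitted:

```python
from email.message import EmailMessage

def build_alert_email(host: str, service: str, state: str, detail: str):
    """Compose a Nagios-style alert email.

    Placeholder addresses; sending (smtplib.SMTP(...).send_message(msg))
    is left out of this sketch.
    """
    msg = EmailMessage()
    msg["From"] = "nagios@example.com"
    msg["To"] = "oncall@example.com"
    msg["Subject"] = f"** {state} alert: {service} on {host} **"
    msg.set_content(
        f"Host: {host}\nService: {service}\nState: {state}\n\n{detail}"
    )
    return msg

msg = build_alert_email("web01", "HTTP", "CRITICAL", "Connection refused")
print(msg["Subject"])
```

In a Nagios setup, a script like this would typically be wired in as a notification command, with the host/service/state fields filled from Nagios macros.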
Involved in creating Jenkins CI pipelines and automated most pipeline build-related tasks by deploying and validating the automated builds using pipeline Groovy scripts.
Used Python and Django for JSON processing, data exchange and business logic implementation.
Worked on Python OpenStack APIs.
Used several Python libraries such as wxPython, NumPy, and matplotlib.
Client: HCL (DevOps Automation), Bangalore, India
Role: Sr. DevOps Engineer
Duration: Apr 2017 - Jul 2018
Responsibilities:
Involved in designing and deploying multitude applications, focusing on high-availability, fault tolerance, and auto-scaling
Involved in Remediation and patching of Unix/Linux Servers
Used Google Cloud DNS to manage DNS zones and assign public DNS names to Elastic Load Balancer IPs.
Debugged application Jenkins builds whenever something was breaking.
Deploy and maintain an enterprise class security, network and high-performance applications within an AWS infrastructure
Design and develop AWS architecture for both Internet facing and Internal facing applications
Solution design for client opportunity in one or more AWS Competencies or general cloud managed services
Designed and maintained databases using Python and developed Python based API (RESTful Web Service) using Flask, SQLAlchemy and PostgreSQL.
Configure AWS Web Application Firewall (WAF) to help protect web applications from common web exploits and DDoS attacks
Worked on Splunk for analyzing deployment failures, App event/logs for Micro services
Maintain Git source code repository and local mirrors perform branching, tagging, merging, and maintenance tasks
Wrote Jenkinsfiles using Groovy scripts to support Docker image builds and Kubernetes deployment automation.
Wrote and debugged Dockerfiles to build application Docker images, and deployed them to Kubernetes by writing YAML files and using the kubectl CLI.
Wrote Kubernetes YAML files for replication controllers and services.
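Manifests like the ones above are often generated rather than hand-written. A minimal sketch that builds a Kubernetes Deployment manifest from Python (the app name, image, and port are placeholders; since JSON is a subset of YAML, the output can be fed straight to `kubectl apply -f -`):

```python
import json

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    """Build a minimal Kubernetes apps/v1 Deployment manifest as a dict.

    Placeholder name/image/port for illustration only.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # selector must match the pod template's labels
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {
                            "name": name,
                            "image": image,
                            "ports": [{"containerPort": 8080}],
                        }
                    ]
                },
            },
        },
    }

print(json.dumps(deployment_manifest("demo-app", "demo:1.0"), indent=2))
```

Generating manifests this way keeps labels and selectors consistent by construction, which is a common source of hand-edited YAML errors.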
Implemented process and quality improvement through task automation and generated infrastructure as a code using AWS cloud formation automation
Work with Chef Automation to create infrastructure and deploy application code changes autonomously
Used Chef to automate the deployment workflow of Java applications on WebSphere and Oracle WebLogic servers.
Provide highly durable and available data by using Cloud-Storage data store, versioning, lifecycle policies, and create AMIs for mission critical production servers for backup
Used GitHub for Python source code version control, and Jenkins for automating Docker container builds and deploying to Kubernetes.
Responsible for managing the chef client nodes and upload the cookbooks to chef-server from workstation
Worked on Managing the Private Cloud Environment using Chef
Worked on build tasks using Maven, Ant, Gradle and GNU make files and worked with development team to migrate Ant scripts to Maven
Worked on Jenkins as CI/CD tool, Salt Stack as configuration management tool, Kubernetes Cluster, SonarQube as code quality tool, JFrog Artifactory
Performed SVN to Bitbucket migration and managed branching strategies using Bitbucket workflow. Managed User access control, Triggers, workflows, hooks, security, repository control in Bitbucket
Performed integrating, JUnit and code quality Tests as a part of build process
Used Kubernetes to deploy, load balance, scale, and manage Docker containers across multiple namespaces.
Enhanced existing automated Build/Deploy process and architect the next generation centralized deployment processes using Octopus
Implemented JUnit framework to write test cases for different modules and resolved the test findings
Deployed the Docker containers in Kubernetes cluster and AWS ECS for the different projects
Worked with Amazon AWS/EC2 and Docker based Cluster Management environment Kubernetes
Worked on the OpenStack Magnum API service for container orchestration using Docker Swarm.
Worked with Docker and Kubernetes on multiple cloud providers, from helping developers build and containerize their application (CI/CD) to deploying either public or private cloud
Managed Git repositories for branching, merging and tagging and developing Shell/Groovy Scripts for automation purpose
Automated and tested the build and deployment of the CRM product in the Smoke and DEV environments using PowerShell and the Octopus tool.
Used Kubernetes to orchestrate the deployment, scaling, management of Docker Containers
Client: Infosys (AON), Bangalore, India
Role: Sr. Process Executive
Duration: May 2010 - Jun 2013
Responsibilities:
Worked with DevOps team to create, launch, configure EC2 instances. Used Amazon Web Services to provide a large computing capacity.
Build more than 50 new RHEL servers in an existing cluster.
Monitored trouble ticket queue to attend user and system calls
Closely worked with the DevOps team to provide support on branching, tagging, and release activities on version control (Git).
Experienced on working with JIRA for bug tracking.
Providing the backup and recovery of the files and data, troubleshooting the computer related problems.
Taking care of major incidents and coordinating with the resolver support teams for resolution.
Responsible for managing the process to restore normal service operation as quickly as possible to minimize the impact to business operations.
Handling level 2 escalation calls of end user support and act as single point of contact and escalate to support groups.
Also worked as technical support representative for the account and handled 24x7 operations.
Coordinated with the experienced professionals on the team in migrating to a cost-effective service that reduced operating costs by over 70% once the newly migrated service was running.
Took part in the scheduled meeting to recommend relevant modifications and project improvements.
Responsible for collecting proof PDFs and, after comparison with the live sheet, submitting the evaluated forms to the stakeholders.