
DevOps Engineer

Location:
Vasant Nagar, Karnataka, India
Salary:
12L
Posted:
June 19, 2020


Resume:

Sharath Rameshkumar

Email: addw1b@r.postjobfree.com

Mob: +91-875*******/975*******

Career Objective:

I aspire to a professional career in the IT field that offers growth and opportunity, suitably rewards my skills, and gives me the chance to constantly add value to the company and to society at large by working at the cutting edge of technology.

Professional Summary:

Total Experience: 5 years 7 months

Operating Systems: Linux, Windows

Databases and Storage: MSSQL, EMC and Symantec backup tools

Cloud: AWS, Azure, and VMware Integrated OpenStack

DevOps and CI/CD: Puppet, Docker, Ansible, Terraform, and GoCD

Educational Profile:

B.E. in Computer Science and Engineering from Hindustan Institute of Technology, April 2014, with a CGPA of 8.2.

Professional Experience:

Working as a Senior DevOps Engineer and Consultant at Capgemini Technology Services, Chennai, since December 2018.

Worked as a Cloud Solutions Architect and Administrator at Cognizant Technology Solutions, Chennai, from November 2014 to December 2018.

I am an AWS Certified Solutions Architect – Professional and an Azure Certified Infrastructure Solutions Associate.

Trained on Docker, DevOps, and Google Cloud Platform services; I will pursue the corresponding certifications shortly.

Project Summary at Capgemini: Bank of Ireland

Description:

We are part of the Automation Team, which works with the Immediate framework. Application automation is done with Puppet, and infrastructure automation with Terraform. We use the AWS provider for development and VMware Integrated OpenStack for the production environment. For CI/CD, we use GoCD pipelines to build application images and deploy the application servers. T24 is the core banking application at Bank of Ireland.

Responsibilities:

For infrastructure automation, we use Terraform to deploy the application servers, load balancers, and security groups.

We write Terraform code as resources and modules to keep it reusable, and we use interpolation functions for iteration.

To build the application images we use Packer, with the provisioning code written in Puppet. We develop the code in Git Bash and push it to Bitbucket.

Building an application image requires the application and supporting binaries, so these artifacts are stored in Nexus and fetched during the build.

We use Consul as a key-value store for resource outputs, which are fetched wherever required during deployment.
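
As an illustration of this key-value pattern, here is a minimal Python sketch against Consul's HTTP KV API; the agent address and the deployments/t24 key path are assumptions, not values from the project.

```python
import base64
import requests

CONSUL = "http://localhost:8500"   # assumed local Consul agent

def put_kv(key: str, value: str) -> None:
    """Store a resource output (e.g. a load balancer DNS name) under a KV path."""
    requests.put(f"{CONSUL}/v1/kv/{key}", data=value).raise_for_status()

def get_kv(key: str) -> str:
    """Fetch a value back; Consul returns it base64-encoded."""
    resp = requests.get(f"{CONSUL}/v1/kv/{key}")
    resp.raise_for_status()
    return base64.b64decode(resp.json()[0]["Value"]).decode()

# hypothetical key layout for a deployment
put_kv("deployments/t24/app_lb_dns", "internal-app-lb.example.com")
print(get_kv("deployments/t24/app_lb_dns"))
```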

I have written abstract Puppet modules for Java, JBoss, JBoss Vault, PKI, and AppDynamics. These abstract modules are invoked as a class, function, or profile during the respective application builds.

I have worked on the following applications: Sumerian, Abris Gamma, UXP Edge Connect, ELK, and AppDynamics.

I have knowledge of and exposure to banking payments applications such as GPP-SP.

In Puppet, we use defined resource types to keep code reusable; a defined type can be declared multiple times with different parameter values.

In Terraform, we initially used user data to run the application bootstrap scripts. Later we switched to provisioners, which overcome the 16 KB user-data script size limit.

We have also implemented pipeline-as-code for GoCD instead of GUI-defined pipelines; pipelines are created automatically from the pipeline-as-code definitions and complete the deployment.

We have implemented single-stack deployment of the application, covering both application and infrastructure, using the upstream_pipeline and downstream_pipeline concepts.

In Terraform we use interpolation functions for iteration; element, list, join, format, and length are some of the main functions used in our code.

I have also worked with OpenStack CLI commands to build infrastructure such as servers, load balancers, security groups, and flavors.

We have built operational pipelines for application automation using Ansible.

Environment: AWS, OpenStack, Packer, Puppet, Terraform, Bitbucket, GoCD, Nexus, Consul.

Project Summary at Cognizant:

Description:

Cognizant Cloud Services (public cloud) provides end-to-end service, from provisioning virtual machines through managing and decommissioning them. We also ensure high performance for applications hosted in the cloud environment using various cloud services and APIs. Customers adopt cloud technology mainly to run their business with no downtime and high availability, without any disruption of services, at a feasible cost under a pay-per-use billing model.

Multiple clients: Celgene, HealthFirst, Sothebys, KPMG, Herc Rentals, National Life Insurance, Abbott, Orica, Farmers Insurance, etc.

Cloud Responsibilities:

Migrated from classic VPN to AWS VPN by establishing a new VPN connection between the AWS virtual private gateway and the customer gateway. VPC peering is established between regions.
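
A rough boto3 sketch of the inter-region VPC peering step; the regions, VPC IDs, route table ID, and CIDR block are placeholders.

```python
import boto3

# Request peering from the us-east-1 side towards a VPC in eu-west-1.
ec2_east = boto3.client("ec2", region_name="us-east-1")
peering = ec2_east.create_vpc_peering_connection(
    VpcId="vpc-0aaa111122223333a",        # requester VPC (placeholder)
    PeerVpcId="vpc-0bbb444455556666b",    # accepter VPC (placeholder)
    PeerRegion="eu-west-1",
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Accept the request from the peer region (in practice, wait until the
# request reaches the pending-acceptance state before accepting).
ec2_west = boto3.client("ec2", region_name="eu-west-1")
ec2_west.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route traffic for the peer CIDR through the peering connection (needed on both sides).
ec2_east.create_route(RouteTableId="rtb-0123456789abcdef0",
                      DestinationCidrBlock="10.20.0.0/16",
                      VpcPeeringConnectionId=pcx_id)
```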

Established Direct Connect by creating new virtual interfaces, providing dedicated bandwidth for accessing resources with minimal network latency.

To prevent attacks such as DDoS and SQL injection, we block blacklisted IPs at the NACL (blocking access at the subnet level). We allow only specific routes under the VPN connection for accessing AWS resources from on-premises.

We have created security groups to restrict inbound access at various levels, such as EC2, ELB, and RDS.
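
The two previous points could look roughly like this with boto3; the region, ACL ID, security group ID, and CIDR ranges are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Deny a blacklisted IP range at the subnet level (NACL rule).
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=90,            # evaluated before the default allow rules
    Protocol="-1",            # all protocols
    RuleAction="deny",
    Egress=False,
    CidrBlock="198.51.100.0/24",
)

# Restrict inbound access on an instance-level security group:
# allow HTTPS only from an internal range (placeholder CIDR).
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.0.0/8", "Description": "VPN only"}],
    }],
)
```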

We have deployed AWS CloudFront, a content delivery network, to serve static or frequently accessed website pages from edge locations, ensuring high availability.

Implemented AWS Elastic Load Balancing (Classic or Application Load Balancer) to route traffic between instances for good performance and high availability.

Configured Route 53 to map website domain names (such as A and AAAA records) to the appropriate load balancer using the best routing policy.
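
A boto3 sketch of mapping a domain name to a load balancer via an alias record; the hosted zone IDs, domain, and ELB DNS name are placeholders.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # hosted zone of example.com (placeholder)
    ChangeBatch={
        "Comment": "Point the site at the application load balancer",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "AliasTarget": {
                    # Zone ID of the load balancer itself, not of the hosted zone
                    "HostedZoneId": "Z35SXDOTRQ7X7K",
                    "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        }],
    },
)
```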

We have deployed AWS CloudFormation templates to patch the WAF instances by updating the stack, and created Auto Scaling groups from launch configurations containing the updated application image, assuring high availability for high-end applications.

Implemented AWS Lambda functions for scheduled maintenance of instances during off-business hours to save cost, and for automatically creating and deleting image backups according to the retention policies.
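
A minimal sketch of such a scheduled-maintenance Lambda handler; the Schedule tag, the backup- naming convention, and the 7-day retention window are assumptions (associated snapshots would also need cleanup in practice).

```python
import datetime
import boto3

ec2 = boto3.client("ec2")
RETENTION_DAYS = 7  # assumed retention window

def lambda_handler(event, context):
    # Stop tagged instances outside business hours to save cost.
    instances = ec2.describe_instances(Filters=[
        {"Name": "tag:Schedule", "Values": ["off-hours-stop"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    ids = [i["InstanceId"]
           for r in instances["Reservations"] for i in r["Instances"]]
    if ids:
        ec2.stop_instances(InstanceIds=ids)

    # Create dated AMI backups for the stopped instances.
    today = datetime.date.today()
    for instance_id in ids:
        ec2.create_image(InstanceId=instance_id,
                         Name=f"backup-{instance_id}-{today}",
                         NoReboot=True)

    # Deregister backup images older than the retention window.
    cutoff = today - datetime.timedelta(days=RETENTION_DAYS)
    for image in ec2.describe_images(Owners=["self"])["Images"]:
        created = datetime.date.fromisoformat(image["CreationDate"][:10])
        if image["Name"].startswith("backup-") and created < cutoff:
            ec2.deregister_image(ImageId=image["ImageId"])

    return {"stopped": ids}
```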

We have created AWS Elastic File System (EFS) volumes and mounted them on the instances, which lets us access shared data without storage performance issues; alternatively, S3 can be mounted as a filesystem with similar benefits.

AWS Simple Storage Service (S3) lets us create buckets with unlimited storage and objects of up to 5 TB each. It works much like Google Drive: files can be accessed from anywhere over the internet, and public access to S3 can be restricted to ensure security.
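
For example, locking down public access and sharing an object through a time-limited link could be done with boto3 roughly as follows; the bucket and object names are placeholders.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-reports-bucket"  # placeholder bucket name

# Block all public access so objects are reachable only by authorised principals.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Upload a file, then hand out a time-limited link instead of making it public.
s3.upload_file("monthly-report.pdf", bucket, "reports/monthly-report.pdf")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": bucket, "Key": "reports/monthly-report.pdf"},
    ExpiresIn=3600,  # valid for one hour
)
print(url)
```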

Managing EC2 VMs includes provisioning, restricting access to the VMs, terminating VMs, and migrating a VM from one physical host to another during maintenance.

We have also restored VMs from AMI backups in the event of data loss or application inconsistency. To provide better performance at the instance level, we upgrade the instance type and expand the disk depending on business requirements.
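
A boto3 sketch of the restore and resize operations mentioned above; all IDs and sizes are placeholders, and the instance must be stopped before its type can be changed.

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"   # placeholder

# Restore a replacement VM from an AMI backup (placeholder image ID).
ec2.run_instances(ImageId="ami-0123456789abcdef0", InstanceType="m5.large",
                  MinCount=1, MaxCount=1)

# Upgrade an existing instance's type: stop, modify, start again.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceType={"Value": "m5.xlarge"})
ec2.start_instances(InstanceIds=[instance_id])

# Expand the disk: grow the EBS volume, then extend the filesystem inside the OS.
ec2.modify_volume(VolumeId="vol-0123456789abcdef0", Size=200)  # size in GiB
```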

Amazon RDS endpoints are created for various DB engines to achieve high I/O throughput for all the critical applications. AWS Identity and Access Management (IAM) is used to restrict roles and permissions for users of AWS services.

We have used Docker to create containers for modified applications on the same host, resolving application compatibility issues and optimizing resource usage.

Basic Responsibilities:

We perform patching on a regular basis, applying security KBs to protect machines from malicious attacks, using WSUS, Chef, and Ansible playbooks.

For cloud administration, we use the ServiceNow ticketing tool as the frontend, while Zenoss is used at the backend for collecting data from cloud resources.

We use Password Manager Pro for managing the vault, and the Detrans tool for enabling and disabling instance monitoring.

Eyeshare automation is implemented to proactively notify customers of alerts and to resolve basic issues such as disk, CPU, and memory.

On the AWS side, CloudWatch is used to monitor resources and CloudTrail is used to record user events on AWS services.
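
A small boto3 sketch of both sides: a CloudWatch CPU alarm and a CloudTrail lookup of recent user events; the instance ID, threshold, and event name are illustrative.

```python
import datetime
import boto3

# CloudWatch: alarm on sustained high CPU for a given instance (placeholder ID and threshold).
cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-i-0123456789abcdef0",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=85.0,
    ComparisonOperator="GreaterThanThreshold",
)

# CloudTrail: look up who terminated instances in the last day.
cloudtrail = boto3.client("cloudtrail")
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName",
                       "AttributeValue": "TerminateInstances"}],
    StartTime=datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=1),
)
for event in events["Events"]:
    print(event.get("Username"), event["EventTime"])
```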

Second Project Summary:

Description:

Cognizant Cloud Services (private cloud) provides traditional infrastructure services above the VM hypervisor, with support for storage and backup, OS, DB, network, VMware and Citrix, and middleware. In the private cloud, everything below the hypervisor is maintained either by third-party private cloud service providers such as CenturyLink or by our own proprietary Cognizant Managed Cloud Infrastructure (CMCI).

Multiple Clients: Marine Harvest, Voya, SourceNet, Zoetis, AstraZeneca.

Responsibilities:

We performed Active Directory activities such as granting domain access to users, creating groups, adding users to groups, and creating OUs.

The activities also include registry settings, creating forward and reverse lookup zones in DNS, and managing local users and groups.

In Linux, we handled user and group management, permission control for users and folders, and the creation of new filesystems and LVM volumes.

Set up various authentication methods in Linux, such as key-based authentication with and without a passphrase, to remote machines and between different systems.

Database activities include taking native backups, shrinking the database if the transaction logs are not truncated, checking the health and data consistency of the DB, and finding long-running jobs and blocking sessions.

Performing DB failover when DB instances are clustered, during patching activity or deployments, and monitoring that replication between primary and secondary instances is happening properly.

Backups are managed using EMC and Symantec tools. The activities include creating jobs for filesystem backups and SQL backups (incremental, transaction log, and full backups).

Monitoring the jobs: if a job fails, we analyze the reason for the failure and rerun the job in the test group.

Adding servers to the domain and configuring the page file on critical application machines during high memory usage.

Creating and scheduling cron jobs on the Linux boxes and the equivalent on Windows using Task Scheduler; system performance is monitored by configuring the Perfmon tool.

We add routes locally on the machines to establish connectivity between them, monitor services, ports, and firewalls if any problem occurs, and perform patching on all the virtual machines at regular intervals.

Declaration:

I hereby declare that the above written particulars are true to the best of my knowledge and belief.

Date:

Place: (Sharath R)
