
Devops Engineer Configuration Management

Location:
Frisco, TX
Salary:
140000
Posted:
February 06, 2024

Contact this candidate

Resume:

Sarada Ganta

Current Location: Dallas, TX

Work Authorization: Green Card

E-mail: ad3ezd@r.postjobfree.com Mobile: 503-***-****

Professional Summary:

• 8+ years of experience in Build and Release / Software Configuration Management / Deployment Engineer roles with outstanding organizations, working on challenging Big Data and Java-based applications.

• Designed and deployed GCP and AWS solutions using Google Cloud services (Compute Engine, Cloud Storage buckets, Persistent Disks, Cloud Load Balancing, autoscaling groups, Cloud IAM) and AWS services (EC2, S3, Elastic Load Balancing).

• Implemented and managed DevOps operations (configuration management, build, deploy, and continuous integration) over cloud and on-premises environments using Ansible, Kubernetes, and Docker.

• Developed automated drift-detection scripts in Python to test Terraform infrastructure deployments.
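A drift check of this kind can be sketched as a thin Python wrapper around `terraform plan -detailed-exitcode` (a minimal sketch; the directory layout and CI wiring are hypothetical):

```python
import subprocess

def classify_plan_exit(code: int) -> str:
    # Contract of `terraform plan -detailed-exitcode`:
    #   0 = no changes, 1 = error, 2 = pending changes (drift)
    return {0: "in-sync", 1: "error", 2: "drift"}.get(code, "unknown")

def detect_drift(workdir: str) -> str:
    # Assumes `terraform` is on PATH and the directory is already init-ed.
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    return classify_plan_exit(result.returncode)
```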

• Conducted a POC and deployed production-scale HashiCorp Vault clusters using Terraform and Ansible, and onboarded several applications onto them.

• Experience installing and developing on the ELK stack (Elasticsearch, Logstash, Kibana).

• Worked on AWS EKS Cluster configuration.

• Worked with Docker and Kubernetes on multiple cloud providers, from helping developers build and containerize their applications (CI/CD) to deploying them on public or private clouds.

• Configured Jenkins jobs with Maven scripts for various deployments of Scala enterprise applications.

• Responsible for Continuous Integration (CI) and Continuous Delivery (CD) process implementation using Jenkins, along with shell scripts to automate jobs.

• Experienced in Configuration Management/Deployment Engineer and Linux system administrator roles, including continuous integration, with outstanding organizations.

• Understanding of Big Data infrastructure and its operation across various environments.

• End-to-end support of ETL pipelines implemented in Spark.

• Experience creating alerts, reports, and dashboards in Splunk.

• Understanding of cluster maintenance, monitoring, commissioning and decommissioning of data nodes, troubleshooting, cluster planning, and managing and reviewing data backups and log files.

• Assessed current-state IT processes to determine DevOps maturity, define DevOps capabilities, and design a future roadmap.

• Developed POCs on GCP; experience in performance tuning and ETL, Agile software development, and team building and leadership.

• Worked on WebLogic-to-Tomcat migrations for more than three applications.

• Performed as a DevOps enabler, specializing in Agile, Continuous Integration (CI), Continuous Delivery (CD), cloud, Infrastructure as Code, infrastructure provisioning, orchestration, monitoring, alerting, and service-level dashboards.

• Good experience in WebLogic and Tomcat application server deployments.

• Good experience in build tools like Ant and Maven, and in process creation and control using Subversion/Git.

• Hands on experience in using Continuous Integration tools like Jenkins, Bamboo.

• Supporting scheduled builds using scripts and tools.

• Good understanding of the processes in Software Development Life Cycle.

• Good debugging, root cause analysis and problem-solving skills.

• Customer focused, organized, detail oriented with the ability to meet deadlines.

• Hands-on experience with creating and managing the Git repositories including branching, merging, forking and tagging across the environment.

• Wrote several Ansible playbooks defining automation tasks in YAML and ran them to provision Dev servers.
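A playbook of the kind described might look like the following (a minimal sketch; the inventory group, package list, and service name are hypothetical):

```yaml
---
- name: Provision dev servers
  hosts: dev            # hypothetical inventory group
  become: true
  tasks:
    - name: Install base packages
      ansible.builtin.yum:
        name: [git, java-11-openjdk, nginx]   # illustrative package set
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```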

• Deployed Java applications to cloud platforms such as GCP.

• Performed monitoring and logging for Java applications.

• Built and deployed Java applications.

Skill Set:

• Build Tools

• Ant, Maven, Gradle.

• Big Data/Cloud

• Hadoop, Hive, Spark, Kafka, GCP, AWS, Terraform, Docker, Kubernetes, kubectl, Nginx.

• Continuous Integration

• Jenkins, Screwdriver, Artifactory

• Web/Application Servers

• Apache Tomcat, Jetty, WebLogic, Load Balancer, CloudWatch, Auto Scaling.

• Operating Systems

• Red Hat Linux

• Database/Datawarehouse

• Oracle, MySQL, BigQuery

• Scripting tools

• Shell Script, Python, SQL

• Orchestration/Scheduler

• Oozie, Airflow, etc.

Professional Experiences:

Verizon, Irving, TX. Oct 2021 - Present

Cloud DevOps Engineer

Responsibilities:

• Designed and deployed GCP and AWS solutions using Google Cloud services (Compute Engine, Cloud Storage buckets, Persistent Disks, Cloud Load Balancing, autoscaling groups, Cloud IAM) and AWS services (EC2, S3, Elastic Load Balancing).

• Implemented and managed DevOps operations (configuration management, build, deploy, and continuous integration) over cloud and on-premises environments using Ansible, Kubernetes, and Docker.

• Developed automated drift-detection scripts in Python to test Terraform infrastructure deployments.

• Conducted a POC and deployed production-scale HashiCorp Vault clusters using Terraform and Ansible, and onboarded several applications onto them.

• Experience installing and developing on the ELK stack (Elasticsearch, Logstash, Kibana).

• Worked on AWS EKS Cluster configuration.

• Worked with Docker and Kubernetes on multiple cloud providers, from helping developers build and containerize their applications (CI/CD) to deploying them on public or private clouds.

• Configured Jenkins jobs with Maven scripts for various deployments of Java applications.

• Responsible for Continuous Integration (CI) and Continuous Delivery (CD) process implementation using Jenkins, along with shell scripts to automate jobs.

• Supported Oath tooling: Screwdriver, Doppler, Athens, and Calypso.

• Became the point of contact (POC) for one of the applications and deployed a couple of releases.

• Automated the process of downloading/uploading files to a GCP bucket and integrated it with Jenkins.
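An upload step of this kind might be sketched as follows (bucket name, prefix, and helper names are hypothetical; the `google-cloud-storage` client library and service-account credentials are assumed on the Jenkins agent):

```python
from pathlib import PurePosixPath

def object_key(prefix: str, local_path: str) -> str:
    # Pure helper: map a local file to its destination object key in the bucket.
    return str(PurePosixPath(prefix) / PurePosixPath(local_path).name)

def upload_file(bucket_name: str, prefix: str, local_path: str) -> str:
    # Assumes google-cloud-storage is installed and Application Default
    # Credentials are available (e.g. a service account on the CI agent).
    from google.cloud import storage  # deferred so the pure helper stays importable
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(object_key(prefix, local_path))
    blob.upload_from_filename(local_path)
    return blob.name
```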

• Automated the certificate renewal process for all jobs.
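The renewal check behind such automation can be sketched with the standard library alone (a sketch; the 30-day window is an assumed policy, not the actual one):

```python
import ssl
import datetime

RENEW_WINDOW_DAYS = 30  # assumed policy threshold

def needs_renewal(not_after, now=None):
    # `not_after` uses the format found in TLS certs,
    # e.g. "Jun  1 12:00:00 2025 GMT".
    expiry = datetime.datetime.fromtimestamp(
        ssl.cert_time_to_seconds(not_after), tz=datetime.timezone.utc
    )
    now = now or datetime.datetime.now(datetime.timezone.utc)
    return (expiry - now).days < RENEW_WINDOW_DAYS
```

A job built on this would iterate over each application's certificate and trigger the renewal workflow whenever the check returns true.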

• Created multiple Jenkins jobs for Java applications.

• Supported the data ingestion framework required for file processing, hashing, and storing data into GCP.

• Created tables and performed backup activity using SQL in Hive.
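The create-plus-backup pattern can be sketched in HiveQL (table and column names here are purely illustrative, not the actual schema):

```sql
-- Illustrative schema; real table and column names differ.
CREATE TABLE IF NOT EXISTS ingest_events (
  event_id STRING,
  event_ts TIMESTAMP,
  payload  STRING
)
STORED AS ORC;

-- Point-in-time backup via CTAS before risky changes.
CREATE TABLE ingest_events_bak_20240201 AS
SELECT * FROM ingest_events;
```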

• Performed validations against data and table structures in BigQuery after loading data from Hadoop/Hive, using SQL.

• Created databases and performed backup activity using SQL in Hive.

• Ran ad hoc Spark queries and provided the results to users.

• Designed and implemented zero-click continuous delivery and orchestration of code/configuration promotion, standardizing a CI/CD workflow that includes code coverage, unit test cases, functional test cases, and auto-promotion of code across multiple environments based on test results.
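The auto-promotion gate in such a workflow can be sketched as a small pure function (a sketch; the stage names and the 80% coverage threshold are assumptions, not the actual pipeline's values):

```python
def can_promote(results, min_coverage=80.0):
    # Promote a build to the next environment only when every gate passes:
    # unit tests, functional tests, and a minimum coverage threshold.
    return (
        results.get("unit") == "pass"
        and results.get("functional") == "pass"
        and results.get("coverage", 0.0) >= min_coverage
    )
```

In a zero-click setup, the CI job evaluates this after each stage's test run and triggers the next environment's deployment automatically on success.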

• Worked on migrating pipelines from Oozie to Airflow (Cloud Composer).

• Scheduled and monitored Airflow pipelines.

• Worked on providing solutions for auto-deployment and auto-scaling of clusters across various environments.

• Integrated various DevOps tools (SVN/Git, Jenkins, and Artifactory) to provide end-to-end continuous delivery solutions.

• Performed code merges regularly to integrate source code from various branches.

• Automated the CI/CD process using Jenkins, the Build Pipeline plugin, Maven, and Git, and was involved in Software Configuration Management (SCM), build, and deployment management.

• Deployed Java applications to cloud platforms such as GCP.

• Performed monitoring and logging for Java applications.

• Built and deployed Java applications.

Yahoo, Sunnyvale, CA. Jan 2020 - Aug 2021

BigData DevOps Engineer

Responsibilities:

• Handled the end-to-end pipeline of the Location pipeline applications.

• Supported Oath tooling: Screwdriver, Doppler, Athens, and Calypso.

• Became the point of contact (POC) for one of the applications and deployed a couple of releases.

• Automated the certificate renewal process for all jobs.

• Created multiple Screwdriver jobs for multiple applications.

• Supported the data ingestion framework required for file processing, hashing, and storing data into HDFS.

• Worked on Hadoop infrastructure such as Hortonworks (HDP 2.3), MapReduce, Hive, HBase, YARN, Scala, Spark, Oozie, and Airflow.

• Monitored Hadoop pipelines after deployments.

• Worked with the QA team after Hadoop deployments to validate the data.

• Cleaned up databases using SQL.

• Ran ad hoc Spark queries and provided the results to users.

• Designed and implemented zero-click continuous delivery and orchestration of code/configuration promotion, standardizing a CI/CD workflow that includes code coverage, unit test cases, functional test cases, and auto-promotion of code across multiple environments based on test results.

• Worked on providing solutions for auto-deployment and auto-scaling of nodes across various environments, configured through Chef.

• Integrated various DevOps tools (SVN/Git, Screwdriver, and Chef) to provide end-to-end continuous delivery solutions.

• Performed code merges regularly to integrate source code from various branches.

• Automated the build/deploy process using continuous integration tools such as Screwdriver, along with shell scripts and Python.

• Implemented Chef cookbooks as part of automation.

Yahoo, Sunnyvale, CA. Sep 2017 - Dec 2019

BigData DevOps Engineer

Roles and Responsibilities:

• As a DevOps Engineer, responsible for supporting Business Banking Group applications: resolving issues faced by developers and coordinating with the network team to keep the infrastructure stable.

• Performed build, deploy, installation, and configuration from lower environments through PROD and BCP environments.

• Worked on WebLogic-to-Tomcat migrations.

• Worked on quarterly patching of applications on Red Hat Enterprise Linux 6.9 servers.

• Supported highly critical production applications on Linux platforms.

• Implemented various automated scripts to ease daily DevOps activities.

• Responsible for Production, Test, and Development application support, including technical support, technology refresh, application changes, problem troubleshooting, validation, and release.

• Provided root cause analysis for reported incidents and closed incidents within SLA time.

• Performed the annual BCP event with Development and Operations partners on Java-based applications.

• Collaborated with leads from DEV, QA, DevOps, and other functional groups to implement Agile SDLC and Continuous Integration (Build -> Test -> Deploy -> Report).

• Responsible for monitoring and alerting on applications to check for errors or failures.

• Used Splunk tool for creating dashboards & application log monitoring.

• Performed emergency break fix deployments and provided post deployment support.

• Worked on outage triage calls with Product owners, database and development teams.

• Worked on monthly deployments for application enhancements.

• Analyzed application capacity and performed assessments on application availability in production environments.

• Worked on Autosys for deployments, job monitoring and scheduling.

• Worked on Venafi for SSL certification renewals for production applications.

• Built up a very strong customer support and communication structure.

• Developed scripts to clean up log files and take backups of files.
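A cleanup-and-backup script of that kind can be sketched with the standard library (a sketch; the `*.log` pattern and 7-day retention window are assumptions):

```python
import gzip
import shutil
import time
from pathlib import Path

def archive_old_logs(log_dir, max_age_days=7):
    """Gzip *.log files older than max_age_days, then delete the originals.

    Returns the names of the archives created, in sorted order.
    """
    cutoff = time.time() - max_age_days * 86400
    archived = []
    for log in sorted(Path(log_dir).glob("*.log")):
        if log.stat().st_mtime < cutoff:
            gz_path = log.parent / (log.name + ".gz")
            with log.open("rb") as src, gzip.open(gz_path, "wb") as dst:
                shutil.copyfileobj(src, dst)  # back up the contents compressed
            log.unlink()                      # then remove the original
            archived.append(gz_path.name)
    return archived
```

Run from cron (or an equivalent scheduler) against each application's log directory.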

• Contributed to re-imaging operating systems to Red Hat 7.2.

• Converted Chef recipes to Ansible playbooks during migration to the new environment.

• Worked on AppDynamics for application health monitoring.

SGR Institutions of Technology, Bangalore, India. Mar 2015 - Dec 2016

System Administrator

Roles and Responsibilities:

As a System Administrator, I was responsible for the daily administration of Linux and Unix servers in a business application environment. This included general system administration tasks, software and hardware support, system configuration, and system monitoring. I can express thoughts clearly and am capable of working in a team or as a sole contributor. I am highly self-motivated with very good communication skills. I was primarily responsible for the overall operability, resiliency, performance, and capacity of owned production services.

• Set up and maintained new infrastructure.

• Coordinated with the hardware team and performed root cause analysis for L2 and L3 issues.

• Installed and configured operating systems and applications.

• Provided Tier 2 and Tier 3 problem identification, diagnosis, and resolution, with subsequent documentation of resolutions.

• Microsoft Office client deployment/configuration, profiles, and troubleshooting.

• Experience administering Microsoft Windows Server, Active Directory, and Group Policy.

• Self-starter, self-motivated to learn new things.

• Eagerness to learn new technologies.

Educational Qualification:

• BTech (Computer Science) - JNTU Kakinada, India - 2014.
