DevOps Engineer Cloud Services

Location:
Princeton, TX
Posted:
April 21, 2024


Bala Jogi Reddy TirumalaReddy

Sr. DevOps Engineer | ad46fd@r.postjobfree.com

Ph: +1-945-***-****

LinkedIn: https://www.linkedin.com/in/bala-jogi-reddy-tirumalareddy-33072a1a2/

Professional Summary:

12 years of experience as a Cloud, DevOps, and Site Reliability Engineer (SRE), with additional expertise as a Linux Administrator; covers SCM, build and release management, CI/CD as an iterative process, and infrastructure automation using a range of tools and cloud services across Azure, AWS, and GCP.

Deployed AWS infrastructure and resources through automation and pull-request workflows using Terraform and Jenkins. Familiar with the Amazon SDKs, including the CLI and Python.

Experience working with Docker, Kubernetes, and ECS container services; successfully deployed images to cloud environments for managing applications.

Strong experience migrating other databases to Snowflake.

Experience migrating on-premises storage to Microsoft Azure using Azure Site Recovery and Azure Backup; deployed Azure IaaS virtual machines (VMs) and cloud services (PaaS role instances) into secure VNets and subnets.

Experience providing diagnostics and troubleshooting for the Apigee Edge platform.

Hands-on experience creating API proxies in Apigee Edge using Python scripts as well as out-of-the-box policies.

Configured and managed OpenShift Container Storage (OCS)/OpenShift Data Foundation (ODF) for workloads and data services across the platform, including setting up persistent storage and storage classes and managing the lifecycle of storage volumes.

Kept OpenShift clusters up to date by performing version upgrades, and optimized cluster performance by fine-tuning configurations and resolving performance bottlenecks.

Additional experience with EBS, security groups, DevOps, DevSecOps, Kubernetes, DynamoDB, and Redshift.

Implemented Azure discovery in ServiceNow to discover and assess servers running in VMware environments; enabled the VMware guest-operations permissions required for discovery of installed applications and agentless dependency analysis.

Designed and implemented a scalable enterprise monitoring system by applying continuous integration/continuous delivery concepts; performed maintenance and troubleshooting of enterprise Red Hat OpenShift systems.

Hands-on experience in Azure development: worked on Azure Web Applications, Azure SQL Database, Virtual Machines, Azure Active Directory, Azure Kubernetes Service (AKS), Service Fabric, App Services, and Notification Hubs; experienced in using Azure Service Fabric to package, deploy, and manage reliable microservices.

Good understanding of Snowflake cloud technology.

Experience with the Snowflake cloud data warehouse and AWS S3 buckets for integrating data from multiple source systems, including loading nested JSON-formatted data into Snowflake tables.

Implemented guardrails using AWS Lambda to manage resource compliance and cost.
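
One such guardrail can be a scheduled Lambda that flags or stops resources missing required cost tags. A minimal sketch in Python (boto3), where the tag key and the stop action are illustrative assumptions, not details from the resume:

```python
# Hypothetical guardrail Lambda: stop running EC2 instances that are
# missing a required cost-allocation tag. Tag key and the stop action
# are illustrative choices.
import boto3

REQUIRED_TAG = "cost-center"  # assumed tag key

def lambda_handler(event, context):
    ec2 = boto3.client("ec2")
    offenders = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    offenders.append(instance["InstanceId"])
    if offenders:
        ec2.stop_instances(InstanceIds=offenders)  # enforce the guardrail
    return {"stopped": offenders}
```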

Prepared workload models based on Dynatrace and Sumo Logic data.

Extensive work experience with DevOps processes and tools such as Jenkins, GitHub, Slack, and VersionOne.

Experience with a push-to-talk application for group messaging, grouping message transcripts with an easy-to-use interface.

Knowledge of the Trino catalog for configuring Trino clusters.

Push-to-talk lets users turn their devices into instant-communication tools, much like walkie-talkies.

Expert in building and operating container, microservice, and serverless environments on AWS with a focus on cost, performance, observability, and security.

Set up integration functional test automation in the continuous integration and delivery (CI/CD) pipeline using Jenkins, GitHub, and Postman.

Designed processes, process templates, applications, and build environments for various teams using the UrbanCode Deploy (UCD) deployment tool.

Set up integration performance test automation in the CI/CD pipeline using Jenkins, GitHub, Dynatrace, Sumo Logic, and New Relic.

Good understanding of working with SAP Cloud Platform (Web IDE) as well as Eclipse.

Created frameworks with Maven, JUnit, TestNG, Cucumber, and REST Assured.

Used Akamai CDN for both delivery and security configurations.

Experience architecting and securing infrastructure on AWS using EC2, EBS, S3, EKS, Athena, VPC, CloudFront, Route 53, AWS firewalls (security groups and NACLs), DynamoDB, Redshift, RDS, KMS, IAM, ECS, ELB, CloudFormation, CloudTrail, CloudWatch, and SNS.

Expertise in Terraform key features such as Infrastructure as Code, execution plans, resource graphs, and change automation; extensively used Auto Scaling launch configuration templates for launching Amazon EC2 instances while deploying microservices.

Implemented a serverless architecture using API Gateway, Lambda, and DynamoDB, and deployed AWS Lambda code from Amazon S3 buckets.
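
A minimal sketch of the Lambda side of such an architecture, assuming an API Gateway proxy event and an illustrative DynamoDB table name:

```python
# Hypothetical API Gateway-backed Lambda that persists a JSON payload
# to DynamoDB. Table name and item shape are illustrative.
import json
import boto3

TABLE_NAME = "orders"  # assumed table name

def lambda_handler(event, context):
    table = boto3.resource("dynamodb").Table(TABLE_NAME)
    body = json.loads(event.get("body") or "{}")
    table.put_item(Item={"id": body["id"], "payload": body})
    return {"statusCode": 201, "body": json.dumps({"id": body["id"]})}
```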

Implemented cluster services using Kubernetes and Docker to manage local deployments, building a self-hosted Azure Kubernetes Service (AKS) cluster with Terraform and Ansible and deploying application containers.

Deployed machine learning models with Amazon SageMaker.

Worked in a highly collaborative operations team to streamline the implementation of security in the Confidential Azure cloud environment and introduced best practices for remediation.

Hands-on experience with build tools like Hudson/Jenkins, Argo CD, Concourse, Ant, and Maven, and product tools like Bamboo, Jira, and Bitbucket for building and deploying artifacts.

Experience working with Docker Hub: created Docker images and handled multiple images, used Dockerfiles and ran docker build to build custom images, and used named volumes and bind mounts to map host files into containers.

Worked with the Red Hat OpenShift v4 Container Platform for Docker and Kubernetes: used Kubernetes to manage containerized applications via nodes, ConfigMaps, node selectors, and Services, and deployed application containers as Pods.

Created an automated solution using Windows PowerShell to manage backups of the primary file server and programmatically notify the administrator by email of each backup's success or failure.

Extensive experience with containerization and related technologies such as Docker, Kubernetes, and OpenShift, from creating initial development pipelines through to production.

Good experience with Atlassian products such as Jira, Confluence, Bitbucket, and Bamboo.

Automated OpenStack and AWS deployments using Ansible, Chef, and Terraform.

Experienced in developing continuous integration and continuous deployment systems with Jenkins on Kubernetes container environments, using Kubernetes and Jenkins as the CI/CD runtime to build, test, and deploy.

Experience configuring and managing source code with Git: resolved merge conflicts in collaboration with application developers and provided a consistent environment for continuous integration using Jenkins and Git.

Good knowledge of and experience using Elasticsearch, Kibana, Fluentd, CloudWatch, Splunk, Prometheus, and Grafana for logging and monitoring.

Hands-on experience using SonarQube.

Experience managing software artifacts required for development using repository managers like Nexus and Artifactory; published snapshot and release versioned artifacts to the Nexus repository.

Over 2 years of experience with Go (Golang), developing RESTful web services and microservices.

Interested in learning and applying new technologies and concepts, and stays up to date with technology tools and trends in the industry.

Education:

MCA (Master of Computer Applications), Loyola Degree and PG College, 2009.

BSc (Bachelor of Science), Maitreyi Degree College, 2006.

Technical Skills:

Cloud Platforms: AWS, Azure, GCP, Google App Engine, Google Cloud Compute Engine

Operating Systems: Linux (Red Hat 5/6/7, CentOS), Windows Server 2008/2012, Ubuntu, openSUSE

Configuration Management Tools: Ansible, Chef

CI/CD Tools: Jenkins, Harness.io, Bamboo, Argo CD

Build Tools: Maven, Ant, npm, Gradle

Containerization Tools: Docker, Kubernetes, EKS

Version Control Tools: Git, GitLab, Bitbucket, SVN

Logging & Monitoring Tools: Nagios, Splunk, ELK, CloudWatch, Azure Monitor, Prometheus, Grafana, Dynatrace, Sumo Logic, New Relic, Sync, Cloud Logging, Cloud Monitoring, OpenTelemetry

Azure Stack: Azure Databricks

Scripting & Programming Languages: Python, Bash/Shell, PowerShell, Java, Groovy

Databases: Oracle 10g, MySQL, NoSQL (MongoDB, DynamoDB, Cassandra), PostgreSQL

API Management Tools: Apigee Edge

Application/Web Servers: Apache Tomcat, NGINX, JBoss, WebSphere, WebLogic

Web Services: SOAP, REST

Operating Systems: Unix, Linux, Windows

Virtualization Platforms: Oracle VirtualBox, VMware Workstation

Bug Tracking Tools: JIRA, Bugzilla

PROFESSIONAL EXPERIENCE:

Client: Google

Implementation: HCL

Duration: August 2023 – Present

Office Location: Austin, Texas

Role: DevOps/SRE Engineer

Worked on the migration of services from Conga to Pods.

Collected all prerequisites for the Pod migration.

Configured all service-related flags to match their pre-migration state.

Trained and mentored a team of 5 junior DevOps engineers, resulting in a 40% improvement in team productivity and a 15% reduction in production issues.

While Pod migrations were in progress, whenever a service was not up and running, checked the logs and fixed the issues by creating a CL (changelist).

Used Python scripts for service configuration.

Led the POC, adoption, and deployment of IAST (interactive application security testing) for application security and threat modeling; defined port and firewall rules, gathered technical information for prospective applications, and recommended code installation procedures.

Created a CL whenever a service ran into issues.

Experience with container-based deployment using Docker: worked with Docker images, Docker Hub, Docker registries, ECR, EKS, and Kubernetes.

Wrote build (Maven) and deployment scripts to automate building and deploying applications.

Set up logging and monitoring for the services.

Developed stored procedures and views in Snowflake and used them in Talend for loading dimensions and facts.

Kept all services up and running across the dev, staging, and prod environments.

Created reports in Looker based on Snowflake connections.

Wrote Ansible playbooks for configuring systems.

Used Terraform to provision infrastructure for the dev, QA, and prod environments.

Deployed configuration changes to Akamai and Comcast CDNs.

Monitored, configured, and provisioned servers and services (Nginx).

Provided server administration, shell scripting, software installation, and Linux server configuration using LAMP.

Managed multiple application installations through Helm charts on GKE clusters in GCP.

Built a Terragrunt project to keep Terraform configuration files DRY while working with multiple Terraform modules; worked with Terraform templates to automate AWS IaaS virtual machines using Terraform modules and deployed virtual machine scale sets into production on Red Hat Enterprise Linux 5/6/7.

Fixed the underlying issues behind incidents, recording their impact and the actions taken to mitigate or resolve them; analyzed root causes and drove follow-up actions to prevent recurrence.

Monitored all services, checking QPS and query execution, to confirm everything worked as expected after the migration.

Once everything was verified working, migrated traffic from Conga to Pods.

Cleaned up the Conga environment, deleting files no longer related to the service.

Client: Verizon

Implementation: Infovision

Duration: October 2022 – July 2023

Location: Irving Texas

Role: DevOps/SRE Engineer

Worked on GCP (Google Cloud Platform) services such as Compute Engine, Cloud Load Balancing, Cloud Storage, Cloud SQL, Stackdriver Monitoring, and Cloud Deployment Manager.

Managed multiple application installations through Helm charts on GKE clusters in GCP.

Built a Terragrunt project to keep Terraform configuration files DRY while working with multiple Terraform modules; worked with Terraform templates to automate AWS IaaS virtual machines using Terraform modules and deployed virtual machine scale sets into production on Red Hat Enterprise Linux 5/6/7.

Experience monitoring the availability and performance of Red Hat Linux servers using tools such as mpstat, vmstat, iostat, netstat, and nfsstat.

Created application components and their properties, and attached components to existing applications, in IBM UrbanCode Deploy (UCD).

Created PySpark data frames to bring data from DB2 to Amazon S3.

Provided guidance to the development team working on PySpark as an ETL platform.

Optimized PySpark jobs to run on Kubernetes clusters for data processing.
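
A minimal sketch of such a DB2-to-S3 job in PySpark; the JDBC URL, credentials, table, and bucket are placeholders, not details from the resume:

```python
# Illustrative PySpark job: read a DB2 table over JDBC and land it in S3
# as Parquet. Host, credentials, table, and bucket are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("db2-to-s3").getOrCreate()

df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:db2://db2-host:50000/SAMPLE")  # placeholder
    .option("driver", "com.ibm.db2.jcc.DB2Driver")
    .option("dbtable", "SCHEMA.SOURCE_TABLE")           # placeholder
    .option("user", "db2user")
    .option("password", "db2password")
    .load()
)

df.write.mode("overwrite").parquet("s3a://my-bucket/landing/source_table/")
```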

Worked on Ansible playbooks for installing and updating UCD deploy agents, updating different OS types, load balancers, and more.

Created Azure services using ARM templates (JSON) and ensured no changes to the existing infrastructure while performing incremental deployments.

Responsible for database build and release configuration in UrbanCode Deploy.

Tracked issues with the ticketing system and followed through to resolution.

Utilized monitoring tools to proactively identify issues and trends.

Escalated significant issues to service, network, or other operating engineers.

Implemented the application on Google App Engine.

Worked with Docker containerization and deployed applications in Docker containers.

Followed DevOps culture and a CI/CD workflow using Jenkins with Git and GitHub version control.

In partnership with marketing, used machine learning to improve customer retention and product deepening across all financial products, including mortgages.

Learned a broad stack of technologies (Python, Docker, AWS, Airflow, Amazon SageMaker) to reveal the insights hidden within huge volumes of numeric and textual data.

Deployed machine learning models with Amazon SageMaker.
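
A minimal sketch of a SageMaker deployment using the SageMaker Python SDK, assuming a pre-trained scikit-learn artifact; the S3 path, IAM role, and entry point are placeholders:

```python
# Illustrative SageMaker hosting: deploy a trained scikit-learn model
# artifact behind a real-time endpoint. Artifact path, role, and entry
# point are placeholders.
from sagemaker.sklearn.model import SKLearnModel

model = SKLearnModel(
    model_data="s3://my-bucket/models/model.tar.gz",      # placeholder
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    entry_point="inference.py",                           # assumed script
    framework_version="1.2-1",
)

predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")
print(predictor.endpoint_name)
```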

Acted as build and release engineer, deploying services via VSTS (Azure DevOps) pipelines; created and maintained pipelines to manage IaC for all applications.

Utilized Argo CD for post-sync operations and implemented GitOps practices leveraging Argo CD, GitHub Actions, and Kubernetes.

Worked on implementing new OCR solutions with Spring Boot, OpenShift, and microservices, as a member of a group developing containerized applications with Docker, Spring Boot, Kubernetes, and OpenShift.

Knowledge of Splunk Enterprise deployments; enabled continuous integration as part of configuration management (props.conf, transforms.conf, inputs.conf, outputs.conf, and deploymentclient.conf).

Worked on Oracle databases, Redshift, and Snowflake.

Built logical and physical data models for Snowflake to accommodate required changes.

Established infrastructure and service monitoring using Prometheus and Grafana.

Knowledge of log parsing and complex Splunk searches, including external table lookups, as well as Splunk data flow, components, features, and product capabilities.

Used REST APIs with Python to ingest data from source systems into BigQuery.
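
A minimal sketch of that ingestion pattern, assuming a JSON REST endpoint and a streaming insert into BigQuery; the URL and table id are placeholders:

```python
# Illustrative ingestion: pull JSON records from a REST endpoint and
# stream them into BigQuery. Endpoint URL and table id are placeholders.
import requests
from google.cloud import bigquery

SOURCE_URL = "https://api.example.com/records"  # placeholder endpoint
TABLE_ID = "my-project.analytics.raw_records"   # placeholder table

def ingest():
    rows = requests.get(SOURCE_URL, timeout=30).json()
    client = bigquery.Client()
    errors = client.insert_rows_json(TABLE_ID, rows)  # streaming insert
    if errors:
        raise RuntimeError(f"BigQuery insert errors: {errors}")

if __name__ == "__main__":
    ingest()
```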

Knowledge of setting up alerts and monitoring recipes based on machine-generated data.

Good exposure to cloud services such as Cloud Logging and Cloud Monitoring.

Able to set up Cloud Monitoring dashboards to monitor services.

Able to set up Slack and email alerts for issues raised in Cloud Logging.

Applied agile methodology to shorten cycle time and achieve target margins.

Experienced in systems administration in multi-platform environments; installed and configured software and hardware, and installed, configured, and maintained servers for disaster recovery management.

Provided Fortify API migration support for application teams.

Experience with Nagios monitoring.

Understood configuration management with the Ansible tool.

Experienced with Docker containers and the Docker Hub registry.

Built images and shipped them to the hub registry, then deployed them to production servers using Docker.

Used Apigee management APIs for creation operations.

Worked on POCs of high-end Apigee innovation proxies and onboarding APIs.

Configured application health checks, tuning liveness and readiness probe parameters to monitor container health.

Performed rolling updates when new source code versions were released.

Managed source code version tagging and prepared release notes when handing over to service delivery teams.

Wrote Jenkins Pipeline code and automated builds and deployments to Kubernetes using Helm.

Analyzed Jenkins shared libraries and integrated them into Jenkins Pipelines for application deployments.

Wrote custom pipeline code for project-specific builds and deployment namespaces for the dev, QA, and prod environments.

Defined virtual warehouse sizing in Snowflake for different types of workloads.

Prepared, built, and deployed to non-prod environments, and provided release notes to the service delivery team for production deployments.

Configured and set up database parameters while deploying into specific namespaces.

Troubleshot build issues and coordinated with dev teams to fix them.

Defined an automated process for deployments.

Deployed the application on Google Cloud Compute Engine.

Deployed 3-tier architecture infrastructure on the AWS cloud using Terraform (IaC).

Migrated the IaC base from Terraform 0.11 to the latest 0.12.x version.

Using Node.js and Python, created a DBaaS that lets developers self-service provision and manage database types from a single pane of glass.

Automated instance configuration tasks using Ansible playbooks.

Experienced with the IaaS cloud computing delivery model on AWS (EC2, EBS, S3, IAM, CloudWatch, CloudTrail, Elastic Load Balancing, Auto Scaling, Route 53, CloudFront, VPC, RDS).

Developed and adopted a DevOps model and CI/CD workflow integrating Git, Bitbucket, Maven, Sonar, JFrog, Tomcat, Ansible, Docker, and Kubernetes clusters into automated deployments.

Audited servers using Sumo Logic logs to ensure proper mobile compliance.

Extensive experience with performance engineering/testing tools such as LoadRunner, Performance Center, JMeter, NeoLoad, GitLab, Jenkins, Postman, New Relic, Sumo Logic, and Dynatrace.

Automated the resulting scripts and workflows using Apache Airflow and shell scripting to ensure daily execution in production.
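
A minimal Airflow DAG along those lines, wrapping an existing shell script for a daily production run; the script path and schedule are assumptions:

```python
# Illustrative Airflow DAG: run an existing shell script once per day.
# Script path and schedule are assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_extract",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_extract = BashOperator(
        task_id="run_extract",
        # Trailing space keeps Airflow from treating the .sh path as a
        # Jinja template file.
        bash_command="/opt/scripts/extract.sh ",  # placeholder script
    )
```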

Developed stored procedures and views in Snowflake and used them in Talend for loading dimensions and facts.

Fixed production support issues and data issues, and ensured data integrity.

Client: Walmart

Implementation: Infinite Blue

Duration: July 2021 – Sept 2022

Role: DevOps/SRE Engineer

Worked with the AWS Logs agent and created metric filters for CloudWatch log groups.

Experienced in the management and deployment of scaling solutions on the AWS cloud platform.

Deployed AWS EC2 instances behind Elastic Load Balancing with Auto Scaling, ensuring high availability by deploying instances across multiple AZs.

Administered the Network File System using the automounter, and administered user and OS data files in NIS and NFS environments on Red Hat Linux.

Expert in building and operating container, microservice, and serverless environments on GCP with a focus on cost, performance, observability, and security.

Focused on automation and development tools in Node.js and Python that serve as productivity multipliers for operations engineering.

Set up and maintained logging and monitoring subsystems using tools such as Elasticsearch, Fluentd, Kibana, Prometheus, Grafana, and Alertmanager.

Deployed RDS instances, provided endpoint details to database teams, and managed database snapshot methods.

Designed, developed, configured, and troubleshot APIs and policies using Apigee OPDK and Apigee Hybrid.

Created PySpark data frames to bring data from DB2 to Amazon S3.

Provided guidance to the development team working on PySpark as an ETL platform.

Optimized PySpark jobs to run on Kubernetes clusters for data processing.

Installed and configured Apigee Hybrid on a multi-cloud platform.

Took part in major platform activities such as Cassandra and runtime scale-ups.

Defined virtual warehouse sizing in Snowflake for different types of workloads.

Kept up with the latest features introduced by Microsoft Azure (Azure DevOps, OMS, NSG rules, etc.) and utilized them for existing business applications.

Created, validated, and reviewed solutions and effort estimates for converting existing workloads from classic to ARM-based Azure cloud environments.

Installed Red Hat Linux using Kickstart and applied security policies to harden servers based on company policy.

Developed a custom PowerShell script to tie into the .NET FileSystemWatcher class and mimic the functionality of a document management solution, managing documents and complex permissions that are typically not possible with NTFS permissions alone.

Good exposure to cloud services such as Cloud Monitoring and Cloud Logging.

Performed REST API and serverless deployments using Node.js on AWS Lambda, SQS, SNS, SES, and API Gateway, with AWS automation in Python (boto3).

Developed Docker images to support development and testing teams and their pipelines, including distributed Jenkins, Selenium, and JMeter images.

Responsible for implementing the DevOps transformation by working with Agile teams to migrate applications to the AWS platform.

Provisioned EBS volumes and attached them to EC2 instances; created file systems on EBS and attached volumes as business needs required.

Ran Fortify scans for code vulnerabilities and emailed the results to the appropriate teams.

Scheduled EBS auto-snapshot backups using CloudWatch Events.

Configured IAM roles and attached them to AWS services such as EC2 and S3.

Used a VPC architecture and deployed AWS services within the VPC; managed security using security groups, NAT instances, and NAT gateways.

Experienced in writing Python scripts and configuring Lambda functions to run on a daily basis to maintain the EC2 AMI backup retention period.

Wrote a Python script to perform cross-region backups of EC2 AMIs.
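
A minimal sketch of such a script, assuming AMIs are selected by a backup tag; regions and the tag filter are illustrative:

```python
# Illustrative cross-region AMI backup: copy AMIs tagged for backup from
# us-east-1 to us-west-2. Regions and tag filter are assumptions.
import boto3

SOURCE_REGION = "us-east-1"
DEST_REGION = "us-west-2"

def backup_amis():
    src = boto3.client("ec2", region_name=SOURCE_REGION)
    dst = boto3.client("ec2", region_name=DEST_REGION)
    images = src.describe_images(
        Owners=["self"],
        Filters=[{"Name": "tag:Backup", "Values": ["true"]}],  # assumed tag
    )["Images"]
    for image in images:
        copy = dst.copy_image(
            Name=f"{image['Name']}-dr",
            SourceImageId=image["ImageId"],
            SourceRegion=SOURCE_REGION,
        )
        print(f"Copied {image['ImageId']} -> {copy['ImageId']}")

if __name__ == "__main__":
    backup_amis()
```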

Understood organizational requirements in terms of infrastructure and made procurement decisions accordingly for workstations and EC2 instance deployments.

Migrated services from one EC2 instance to another to achieve optimal utilization of computing resources.

Handled production deployments and conducted performance impact analysis with the quality team.

Provided a solution to configure Auto Scaling for production extraction services and wrote a Python script to edit security group inbound rules automatically during scale-in and scale-out operations.
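
A minimal sketch of the rule-editing logic, assuming the handler receives the instance id from an Auto Scaling lifecycle notification; the group id and port are placeholders:

```python
# Illustrative scale-event handler: authorize or revoke an inbound rule
# for the instance's private IP on a target security group. Group id,
# port, and event shape are assumptions.
import boto3

SECURITY_GROUP_ID = "sg-0123456789abcdef0"  # placeholder group
PORT = 8080                                 # assumed service port

def handle_scale_event(instance_id: str, scale_out: bool) -> None:
    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(InstanceIds=[instance_id])
    ip = reservations["Reservations"][0]["Instances"][0]["PrivateIpAddress"]
    permission = {
        "IpProtocol": "tcp",
        "FromPort": PORT,
        "ToPort": PORT,
        "IpRanges": [{"CidrIp": f"{ip}/32"}],
    }
    if scale_out:
        ec2.authorize_security_group_ingress(
            GroupId=SECURITY_GROUP_ID, IpPermissions=[permission]
        )
    else:
        ec2.revoke_security_group_ingress(
            GroupId=SECURITY_GROUP_ID, IpPermissions=[permission]
        )
```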

Experienced in configuring build pipeline jobs in Jenkins and managing builds and releases.

Identified gaps in the market to spot opportunities for creating value propositions.

Worked with the cloud architect to generate assessments and to develop and implement actionable recommendations based on results and reviews.

Used metrics to monitor application and infrastructure performance.

Partnered with infrastructure teams on evaluation and feasibility assessments of new systems and technologies.

Installed and configured a private Docker registry, authored Dockerfiles to run apps in containerized environments, and used Kubernetes to deploy, scale, load-balance, and manage Docker containers across multiple namespaces.

Used Docker containers and Docker consoles for managing the application lifecycle; set up automated builds on Docker Hub and deployed CoreOS Kubernetes clusters to manage Docker containers with lightweight Docker images as base files.

Managed Kubernetes charts using Helm: created reproducible builds of Kubernetes applications, templatized Kubernetes manifests, provided a set of configuration parameters to customize deployments, and managed releases of Helm packages.

Experience developing Spark applications using Spark SQL in Databricks for data extraction and transformation.

Responsible for estimating cluster size and for monitoring and troubleshooting the Spark Databricks cluster.

Worked on complete Jenkins plugin administration using Groovy scripting: set up CI for new branches, build automation, plugin management, securing Jenkins, and master/slave configurations. Deployed and configured Git repositories with branching, forks, tagging, and notifications.

Wrote Groovy scripts for multi-branch pipeline projects in Jenkins, configured to client requirements.

Created shell scripts for build and deployment automation.

Created AMIs using Packer for production use as part of a continuous delivery pipeline.

Configured and maintained a Red Hat OpenShift version 4 PaaS environment.

Managed the OpenShift cluster, including scaling the AWS app nodes up and down.

Extensive experience with containerization and related technologies such as Docker, Kubernetes, and OpenShift, from creating initial development pipelines through to production.

Automated the ingestion of AWS platform logging (CloudWatch, CloudTrail) and application logs into a centralized AWS Elasticsearch domain.

Used Apigee management APIs for creation operations.

Worked on POCs of high-end Apigee innovation proxies and onboarding APIs.

Implemented messaging solutions to automate data device sync with the client database, utilizing OpenShift applications deployed on AWS.

Defined virtual warehouse sizing in Snowflake for different types of workloads.

Managed and operationalized continuous delivery pipeline applications, tools, and infrastructure such as Jenkins, Nexus, Artifactory, and SonarQube.

Worked on Git branching for applications by creating release branches and development branches, ensuring the integrity of the applications.

Installed and configured Apache Airflow for the S3 bucket and Snowflake data warehouse, and created DAGs to run in Airflow.
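
A minimal sketch of the load step such a DAG might execute, using the Snowflake Python connector to COPY from an external S3 stage; account, credentials, stage, and table names are placeholders:

```python
# Illustrative S3-to-Snowflake load, as an Airflow task might run it:
# COPY INTO a table from an external stage. All names are placeholders;
# a real DAG would pull credentials from a secrets backend.
import snowflake.connector

def load_from_s3():
    conn = snowflake.connector.connect(
        account="my_account",  # placeholder
        user="etl_user",       # placeholder
        password="***",        # placeholder
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    try:
        conn.cursor().execute(
            "COPY INTO RAW.EVENTS FROM @S3_STAGE/events/ "
            "FILE_FORMAT = (TYPE = 'JSON')"
        )
    finally:
        conn.close()

if __name__ == "__main__":
    load_from_s3()
```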

Performed all necessary day-to-day Git support for different projects; responsible for the design and maintenance of Git repositories and access control strategies.

Installed and configured Docker to create Docker containers for Node.js and Ruby apps.

Performed production support and cloud operations; used ServiceNow as the ticketing tool.

Worked with administrators to ensure Splunk was actively and accurately running and monitoring the current infrastructure.

Created Datadog dashboards for various applications and monitored real-time and historical metrics.

Client: SAP

Implementation: Innominds Software (Hyderabad)

Duration: Feb 2018 – June 2021

Role: DevOps Architect

Worked on Google Cloud Platform (GCP) services such as Compute Engine, Cloud Load Balancing, Cloud Storage, and Cloud SQL.

Ran Black Duck scans for code vulnerabilities and emailed or otherwise notified the appropriate teams or team members.

Responsible for creating products in Apigee so that they could be consumed by consumers.

Understood the various components within the Apigee platform so that issues could be resolved when needed.

Installed and configured Hadoop MapReduce on HDFS.

Involved in loading data from Unix file systems into Hadoop clusters.

Knowledge of performance troubleshooting and tuning of Hadoop clusters.

Managed resources for Hadoop clusters, including adding/removing cluster nodes for maintenance and capacity needs.

Implemented a nine-node CDH3 Hadoop cluster on Red Hat Linux.

Created PySpark data frames to bring data from DB2 to Amazon S3.

Provided guidance to the development team working on PySpark as an ETL platform.

Optimized PySpark jobs to run on Kubernetes clusters for data processing.

Implemented services in a modeling analytics platform using Grails and Groovy to expose RESTful web services consumed by the UI layer.

Created pipelines for deploying code from GitHub to Kubernetes clusters in the form of Docker containers using the Spinnaker platform.

Shielded the scrum team from external interference for optimal productivity and success of the Agile process.

Knowledge of OpenTelemetry as used to monitor resource usage of shared systems.

Collected OpenTelemetry data, capturing traces, logs, and metrics.
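
A minimal sketch of that collection setup with the OpenTelemetry Python SDK, using a console exporter for illustration (a shared-systems deployment would swap in an OTLP exporter):

```python
# Illustrative OpenTelemetry tracing setup: configure a tracer provider
# and emit one span. The console exporter is an illustrative choice.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("shared-system-monitor")  # illustrative name

with tracer.start_as_current_span("collect-usage"):
    # Resource-usage collection for the shared system would happen here.
    pass
```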

Deployed J2EE applications to application servers in an agile continuous integration environment and automated the whole process.

Automated performance monitoring and alerting, and tuned the J2EE mid-tier code base and platform using JMeter.

Expert in building and operating container and microservice environments on GCP with a focus on cost, performance, observability, and security.

Developed the back end using Groovy and Grails, value objects, and DAOs.

Expert in installing and using Splunk apps for Unix and Linux.

Handled web redirects from the Akamai CDN instead of redirecting at the origin.

Implemented the application following J2EE best practices and patterns such as Singleton, Factory, Session Facade, and MVC.

The pipeline incorporated HashiCorp Vault as central, secure storage for Jenkins configuration and job builds.

Installed and implemented the Splunk App for Enterprise Security, documented best practices for the installation, and performed knowledge transfer on the process.

Created Google Compute Engine instances and deployed the application to UAT environments for testing purposes.

Created IAM credentials for the GCP projects.

In-depth knowledge of security and IAM.


