
Cloud Architect

Location:
Hyderabad, Telangana, India
Posted:
March 13, 2020


Resume:

Anil Kumar Koduri

https://www.linkedin.com/in/anil-kumar-koduri-0872b8158/

Mobile: +91-995*******

Email: adcach@r.postjobfree.com

Profile & Achievements

Overall 8 years of experience in the IT industry; implemented the organization's DevOps strategy across various Linux and Windows server environments and adopted cloud strategies based on Google Cloud and Amazon Web Services.

4+ years of experience in handling SAN & NAS technologies, including EMC/Hitachi/HPE 3PAR/NetApp storage arrays and Cisco/Brocade SAN switches.

Good knowledge and hands-on experience in designing high-level and low-level architectures for implementing various applications on the Google Cloud and AWS platforms.

Experience in migrating the Active Directory users to Google Cloud Platform using Google Cloud Directory Sync.

Good experience in setting up VPN tunnels from on-premises data centers & AWS to Google Cloud Platform using policy-based, route-based and dynamic (BGP-based) routing.
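As an illustration of the dynamic (BGP) variant, a Cloud VPN setup in Terraform might be sketched as follows; all names, ASNs and IPs are placeholders rather than values from any real project, and the BGP session (router interface and peer) is omitted for brevity:

```hcl
# Hypothetical HA VPN gateway, Cloud Router and tunnel on GCP.
resource "google_compute_ha_vpn_gateway" "gw" {
  name    = "onprem-vpn-gw"
  network = "my-vpc" # placeholder VPC
}

resource "google_compute_router" "router" {
  name    = "vpn-router"
  network = "my-vpc"
  bgp {
    asn = 64514 # placeholder private ASN for the Cloud Router
  }
}

resource "google_compute_external_vpn_gateway" "peer" {
  name            = "onprem-peer-gw"
  redundancy_type = "SINGLE_IP_INTERNALLY_REDUNDANT"
  interface {
    id         = 0
    ip_address = "203.0.113.10" # placeholder on-prem gateway IP
  }
}

resource "google_compute_vpn_tunnel" "tunnel1" {
  name                            = "tunnel-1"
  vpn_gateway                     = google_compute_ha_vpn_gateway.gw.id
  peer_external_gateway           = google_compute_external_vpn_gateway.peer.id
  peer_external_gateway_interface = 0
  shared_secret                   = var.vpn_shared_secret
  router                          = google_compute_router.router.id
  vpn_gateway_interface           = 0
}
```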

Experience in coordinating with on-premises infrastructure, Network, Security, Application teams in successful deployment of infrastructure and microservices to Google Cloud Platform.

Knowledge and experience in implementing the entire cloud infrastructure using Terraform.

Experience in migrating Linux/Windows on-premises VMs and physical servers to Google Cloud Platform using Velostrata (Migrate for Compute Engine).

Good experience in Google Cloud Stackdriver Monitoring, Logging, Error Reporting & Debugging; integrating Stackdriver with Splunk & Grafana; and using an ELK cluster for logging and visualization.

Good Experience in working with Google Kubernetes Engine deployment and HPA.
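A typical HPA for a GKE workload can be sketched as below; the Deployment name, replica bounds and CPU target are illustrative only:

```yaml
# Hypothetical HorizontalPodAutoscaler for a GKE Deployment.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web # placeholder Deployment name
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70 # scale out above 70% average CPU
```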

Experience in using Jenkins for continuous integration and Spinnaker for continuous deployment to deploy the microservices on to GKE.

Experience in deploying Istio on GKE and monitoring applications using Prometheus, Grafana, Jaeger and Kiali.

Experience in creating cloud data pipelines using Cloud Pub/Sub, Apache Kafka, Cloud Dataflow & Apache Beam, Google Cloud Storage and Google Data Studio.

Experience with database technologies such as Cloud SQL, Cloud Spanner, Bigtable and Cloud Datastore.

Experience in automating the development and test automation processes through CI/CD pipeline (Git, Jenkins, SonarQube, Artifactory, Docker containers)

Experience in writing Dockerfiles and Jenkinsfiles as per the requirements.
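A representative Dockerfile along these lines, here a multi-stage build for a Go service (the module layout, image tags and binary name are placeholders):

```dockerfile
# Hypothetical multi-stage build: compile in a Go image, ship a
# minimal runtime image containing only the static binary.
FROM golang:1.21 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```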

Provisioning and managing AWS infrastructure (EC2, VPC, load balancers, Redshift, RDS, CloudFront, firewalls) and enforcing identity-based IAM policies via Terraform.

Experience in S3 bucket administration (ACLs, policies, lifecycle rules, replication).
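A lifecycle rule of this kind might look as follows in Terraform (AWS provider v4+ syntax; the bucket name and transition windows are illustrative):

```hcl
# Hypothetical lifecycle rule: tier logs to cheaper storage classes,
# then expire them after a year.
resource "aws_s3_bucket_lifecycle_configuration" "logs" {
  bucket = "example-log-bucket" # placeholder bucket name

  rule {
    id     = "archive-then-expire"
    status = "Enabled"
    filter {} # apply to every object in the bucket

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }
    transition {
      days          = 90
      storage_class = "GLACIER"
    }
    expiration {
      days = 365
    }
  }
}
```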

Managing backups of instances and EBS storage via scripted snapshots.

Utilized CloudWatch to monitor resources such as EC2 CPU and memory, Amazon RDS DB services, DynamoDB tables and EBS volumes; to set alarms for notifications or automated actions; and to monitor logs for a better understanding and operation of the system.
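One such alarm, expressed in Terraform for illustration (the instance ID, threshold and SNS topic ARN are placeholders):

```hcl
# Hypothetical CloudWatch alarm: notify an SNS topic when average
# EC2 CPU stays above 80% for two consecutive 5-minute periods.
resource "aws_cloudwatch_metric_alarm" "cpu_high" {
  alarm_name          = "ec2-cpu-high"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  period              = 300
  evaluation_periods  = 2
  threshold           = 80
  comparison_operator = "GreaterThanThreshold"

  dimensions = {
    InstanceId = "i-0123456789abcdef0" # placeholder instance
  }

  alarm_actions = ["arn:aws:sns:us-east-1:123456789012:ops-alerts"]
}
```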

Migrating and Managing DNS entries in Route 53 and various DNS providers.

Good knowledge of GKE and Anthos.

Migrating on-prem workloads such as VMs into containers on GKE.

Good knowledge of Azure AKS clusters.

Basic knowledge of the Oracle Cloud environment.

Good knowledge of the Go language.

Certifications:

Google Certified Professional Data Engineer

Google Certified Professional Cloud Architect

AWS Certified SysOps Administrator – Associate

AWS Certified Solutions Architect – Associate

ITILv3 Foundation Certified

Technical Skills:

Programming Languages: Go

Cloud Technologies: GCP, AWS and OCP

DevOps Tools: Git, Jenkins, Docker, Kubernetes, Istio, Spinnaker, Ansible & Terraform

Operating Systems: Linux & Windows

Databases: Amazon RDS, Cloud SQL, Cloud BigTable, Cloud Spanner, Cloud Datastore

Monitoring Tools: Prometheus, Grafana, Splunk and ELK Stack

Scripting Tools: Shell scripting, basic Python scripting

Ticketing Tools: Atlassian Jira, Slack, BMC Remedy and ServiceNow

Professional Experience:

Employer: Fariz Infosolutions Pvt Limited (www.fisclouds.com)

Project: Media Project on GCP

Role: Cloud Architect

Duration: May 2019 – Present

Responsibilities:

Designing the cloud architecture to meet the requirements for migrating from AWS to the Google Cloud Platform environment.

Coordinating with the client to understand their AWS environment and how it functions.

Discussion with Google Cloud Professionals and Client team in finalizing the architecture design.

Setting up the VPN Tunnel from AWS to Google Cloud Platform using Terraform.

Creating the required firewall rules and routes with proper network tags for secured communication using VPC.

Deploying the GCE, GKE, GCS using Terraform.

Setting up Jenkins on GKE using Helm as the continuous integration environment for building and pushing Docker images to GCR.
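A Helm-based Jenkins install of this kind is typically driven by a small values file; the keys below follow the community Jenkins chart (older stable/jenkins releases used `master:` instead of `controller:`), and the plugin list and storage size are illustrative:

```yaml
# Hypothetical values.yaml for the Jenkins Helm chart.
controller:
  installPlugins:
    - kubernetes          # run build agents as GKE pods
    - workflow-aggregator # pipeline support
    - git
persistence:
  size: 20Gi              # placeholder PVC size for Jenkins home
```

Applied with something like `helm install jenkins jenkins/jenkins -f values.yaml`.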

Creating Cloud Pub/Sub notifications to trigger Spinnaker pipelines that deploy the GCR images onto the GKE cluster.

Deploying Istio onto the GKE cluster using Helm for monitoring, visualization and tracing with Prometheus, Grafana, Kiali and Jaeger.

Deploying the GraphQL and CockroachDB on to GKE cluster and setting up the HPA.

Deploying the Locust load-test application using Helm onto the GKE cluster to test the staging environment with 20k concurrent users and the production environment with 150k concurrent users.

Observing the Horizontal Pod Autoscaler and monitoring Prometheus and Grafana to compare the performance of the Google Cloud Platform and AWS environments.

Tracking work in Atlassian Jira and working according to the sprint.

Providing quick updates to Google and the client via the Slack channel, escalating for further details when needed.

Organizing daily standup calls with internal team and weekly meetings with clients for sync-up & report submission.

Preparing the documentation report and submitting it to Google and the client.

Employer: Fariz Infosolutions Pvt Limited (www.fisclouds.com)

Project: Banking Project on GCP

Role: Cloud Architect

Duration: May 2019 – Present

Responsibilities:

Actively involved in the high-level and low-level architecture design for migrating the on-premises development, test, staging and production environments to Google Cloud Platform.

Prepared the Terraform code for creating the organization, folders, Shared VPC project, service projects, Cloud VPN, IAM, GCE, GKE and Cloud SQL.
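The folder and Shared VPC portion of such code can be sketched as follows; the organization ID and project IDs are placeholders:

```hcl
# Hypothetical folder plus Shared VPC host/service project wiring.
resource "google_folder" "prod" {
  display_name = "production"
  parent       = "organizations/123456789012" # placeholder org ID
}

resource "google_compute_shared_vpc_host_project" "host" {
  project = "net-host-project" # placeholder host project
}

resource "google_compute_shared_vpc_service_project" "svc" {
  host_project    = google_compute_shared_vpc_host_project.host.project
  service_project = "app-service-project" # placeholder service project
}
```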

Creating the required firewall rules and routes with proper network tags for secured communication using VPC.

Synchronization of Active Directory users from on-prem environment to GCP using Google Cloud Directory Sync and setting up ADFS & SSO.

Deploying Jenkins and Spinnaker using Helm on to GKE private cluster for continuous integration and continuous deployment of different microservices and web apps.

Setup Monitoring using Istio on GKE and Stackdriver on GCP console.

Created various dashboards for monitoring in Grafana and Stackdriver as per the requirement.

Setup different alerting on the various resources based on thresholds in Stackdriver and Grafana.

Migration of on-prem VMs from VMware vCenter to Google Cloud Platform using Velostrata (Migrate for Compute Engine).

Set up Locust load testing for web-facing applications.

Tracking work in Atlassian Jira and working according to the sprint.

Organizing daily standup calls with internal team and weekly meetings with clients for sync-up & report submission.

Employer: Fariz Infosolutions Pvt Limited (www.fisclouds.com)

Project: Logistics Project on GCP

Role: Cloud Architect

Duration: May 2019 – Present

Responsibilities:

Actively involved in the high-level and low-level architecture design for migrating the on-premises development, test, staging and production environments to Google Cloud Platform.

Prepared the Terraform code for creating the organization, folders, VPC, Subnets, Cloud VPN, IAM, GCE, GKE and Cloud SQL.

Creating the required firewall rules and routes with proper network tags for secured communication using VPC.

Deploying Jenkins and Spinnaker using Helm on to GKE cluster for continuous integration and continuous deployment.

Setup Monitoring using Istio on GKE and Stackdriver on GCP console.

Created various dashboards for monitoring in Grafana and Stackdriver as per the requirement.

Setup different alerting on the various resources based on thresholds in Stackdriver and Grafana.

Migration of on-prem VMs from VMware vCenter to Google Cloud Platform using Velostrata (Migrate for Compute Engine).

Set up Locust load testing for web-facing applications.

Tracking work in Atlassian Jira and working according to the sprint.

Organizing daily standup calls with internal team and weekly meetings with clients for sync-up & report submission.

Employer: L&T Technology Services Limited

Project: UbiqWeise IoT on Google Cloud Platform

Role: Tech Lead – Google Cloud Platform

Duration: Nov 2018 – April 2019

Responsibilities:

Designing the high-level IoT architecture on Google Cloud Platform for migrating the on-premises environment.

Analyze prospect requirements and establish parameters to ensure client receives the right solution, in conjunction with the Sales Representative.

Resolve any client or sales concerns and revise proposals if necessary.

Involved in creating RFPs and POCs for cloud migrations.

Managing and training the team on various GCP components related to compute, storage, databases and big data.

Creating IoT cloud environments based on CI/CD, microservices and containers to implement a DevOps culture.

Training the team on various IaC tools such as Terraform and Ansible.

Successfully configured Apache Kafka for message queuing and installed the VerneMQ MQTT broker for streaming IoT sensor data.

Configured ELK stack for monitoring & logging for the cloud environment.

Successfully deployed the GCP cloud environment (VPC, HTTPS load balancer, Google Kubernetes Engine, autoscaled instances, Cloud Spanner, Cloud Bigtable, BigQuery, Cloud Storage, App Engine and Jenkins servers) using the Terraform IaC automation tool.

Coordinating with team on reviewing the pre-implementation plans for successful implementation on GCP with best practices.

Coordinating the various teams (Application, Embedded, Big Data & Google Cloud) for successful deployment of the IoT product onto the GCP platform.

SPOC for coordinating with the Google Cloud partner engineer and the Unicorn Scout engineer to deploy the solution onto GCP.

Reporting to the Manager on design related aspects for the effective implementation in GCP platform.

Employer: Vodafone India Services Pvt Ltd

Project: Vodafone UK & Vodafone Italy

Role: Assistant Manager – Cloud & DevOps Engineer (IaaS)

Duration: Apr 2016 – Nov 2018

Responsibilities:

Established the VPN connection between AWS and the on-premises environment.

Configured security groups, VPCs and subnets, and blocked suspicious IPs via network ACLs.

Set up and launched Amazon Linux, Ubuntu, RHEL and Windows EC2 instances with network interfaces and Elastic IPs.

Created AMI images of mission-critical EC2 instances as backups using the AWS CLI and console.

Creating/managing AMIs, snapshots and volumes; upgrading/downgrading AWS resources (CPU, memory, EBS).

Configured and managed ALBs and Auto Scaling.

Creating/managing S3 buckets (via CLI) to store DB and log backups, with restrictive bucket policies.

Managed Amazon Redshift clusters, including launching clusters and specifying node types.

Configured CloudWatch monitoring and alarms.

Configured Git with Jenkins and SonarQube, and scheduled jobs using pipelines and webhooks.
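A representative declarative pipeline for such a setup (the SonarQube server name and Maven goals are illustrative; `withSonarQubeEnv` comes from the SonarQube Scanner plugin):

```groovy
// Hypothetical Jenkinsfile: checkout, build, then static analysis.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }
        }
        stage('Build') {
            steps { sh 'mvn -B clean package' }
        }
        stage('SonarQube Analysis') {
            steps {
                withSonarQubeEnv('sonar-server') { // placeholder server name
                    sh 'mvn sonar:sonar'
                }
            }
        }
    }
}
```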

Developed an automated, continuous build process that reviews the source code, identifies build errors and notifies the appropriate parties to expedite synchronization to the latest build.

Created support cases with AWS for critical issues and followed up.

Good experience with monitoring tools such as LogicMonitor and the Nagios console.

Employer: Vodafone India Services Pvt Ltd

Project: Vodafone New Zealand

Role: Assistant Manager – Cloud & DevOps Engineer (IaaS)

Duration: Apr 2016 – Nov 2018

Responsibilities:

Preparation of storage migration plans to deploy into GCP environment via Terraform.

Assisted senior architects in preparing the high-level design architecture and the detailed infrastructure design.

Experience in Google Cloud Virtual Network, Cloud Load Balancing, Cloud CDN, Cloud VPN, Cloud DNS.

Experience in Cloud Storage, Compute Engine, Kubernetes and App Engine.

Good Hands-on experience on Google Cloud shell to deploy the infrastructure.

Experience with storage technologies such as Cloud Storage and BigQuery.

Experience with database technologies such as Cloud SQL, Spanner and Bigtable.

Knowledge and hands-on experience with Stackdriver, Pub/Sub, VPCs, subnets, route tables, load balancers, firewall rules, etc.

Administered all aspects of Git and troubleshot merge conflicts; performed weekly merges for different branches and resolved conflicts as part of release activities.

Configured and maintained Jenkins to implement the CI/CD process and integrated the tool with Maven to schedule the builds.

Created GCR images for successfully tested applications.

Working with development/testing, deployment, systems/infrastructure and project teams to ensure continuous operation of build and test system.

Creating support cases with Google Cloud and following up in case of any issue.

Employer: Profound Infotech Pvt Ltd

Role: Senior Storage Administrator

Duration: Nov 2011 – Apr 2016

Responsibilities:

Day-to-day administration of the SAN & NAS environment.

Administration and management of EMC VNX, CLARiiON CX4, VMAX, Hitachi and HPE 3PAR storage arrays and NetApp 7-Mode filers.

Created aliases and zones, and added zones across the fabrics of Brocade and Cisco switches.

Provisioned and managed aggregates, volumes, qtrees and LUNs on NetApp ONTAP; LUN creation and deployment on Windows and Linux servers using FC and iSCSI protocols.
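On 7-Mode systems, that provisioning flow looks roughly like the following (the volume, LUN and igroup names, sizes, and the initiator WWPN are all placeholders):

```
# Hypothetical ONTAP 7-Mode provisioning sequence.
vol create vol_app aggr1 500g
lun create -s 100g -t windows /vol/vol_app/lun0
igroup create -f -t windows ig_app 10:00:00:00:c9:aa:bb:cc
lun map /vol/vol_app/lun0 ig_app
```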

Creating quota reports based on resource usage and managing share quotas. Managing NFS and CIFS for UNIX and Windows clients.

Supported client storage configurations, including issues with presenting new devices as well as with existing presented devices/LUNs.

Coordinated with the team in replicating data between two sites using SnapMirror. Performed local replication operations such as SnapView (Snapshot/Clone) and TimeFinder (Mirror/Clone/Snap) per client requests. Supported the team on remote replication operations with SRDF (Sync, Async/AR) and MirrorView (Sync/Async).

Coordinating with the SAN Vendor support team for any hardware failures and critical issues and collecting logs.

Working on Daily Incident / Service Requests. Performance Monitoring & health checks.

Provide documentation support and maintenance. 24x7 operations support and on-call support responsibilities.

Education:

Bachelor of Technology from JNTU.

(Anil Kumar Koduri)


