Naveen Vajja
• Dallas, TX
• +1-857-***-**** • *******.***@*****.***
PROFESSIONAL SUMMARY:
• 6+ years of experience in the IT industry automating, configuring, and deploying instances on AWS and GCP Cloud. Experience in DevOps automation using Jenkins, Git, Bitbucket, Ansible, Docker, Kubernetes, and Terraform, plus 2+ years of experience as a system administrator on Linux and Windows servers.
• Experience supporting 24x7 production computing environments. Experience providing on-call and weekend support.
• Experienced in various AWS Cloud Platform features for environment configuration and management.
• Experience and thorough understanding of automated implementation and deployment of a cloud-based infrastructure (Web Applications, firewalls, load balancers, DNS, security, storage and management).
• Expertise in development and implementation of Continuous Integration (CI) and Continuous Deployment (CD) pipelines involving Jenkins, Ansible, Terraform, and Docker containers to automate the path from commit to deployment.
• Developed Jenkinsfiles as Groovy scripted pipelines to build Docker images, push them to the Docker registry, and perform rolling deployments onto Kubernetes clusters.
• Worked with Ansible playbooks for provisioning, configuration management, patching, and software deployment. Built and deployed various Ansible playbooks and terraform modules in the dev, QA, pre-prod and prod environments.
• Installed and configured Zookeeper & Kafka with 3 nodes and connected them to Kubernetes-deployed applications. Experience in using package managers like Helm and deploying charts to Kubernetes environments.
• Configured Prometheus and Grafana to monitor microservices, including Pod and node CPU utilization, Pod deployments, and performance. Experience in creating Grafana dashboards by importing data from Prometheus.
• Expertise in installing and configuring Splunk components (indexers, search heads, heavy forwarders, and universal forwarders) and the Splunk license model. Configured Splunk alerts and created dashboards.
• Extensive experience in monitoring tools like Nagios, Splunk, Grafana, Prometheus and CloudWatch.
• Skilled DevOps/Build & Release Engineer with hands-on experience in managing artifact lifecycle using JFrog Artifactory.
• Proficient in configuring, administering, and integrating Artifactory with CI/CD pipelines and build tools to streamline artifact management. Strong background in repository management, artifact promotion strategies, security controls, and optimizing storage usage for enterprise-scale software delivery.
• Experience in developing various applications using object-oriented programming; proficient in Java, Python, C, and C++.
• Application Deployments & Environment configuration using Chef, Ansible, Puppet.
• Used Agile practices and Test-Driven Development (TDD) techniques to provide reliable, working software early and often.
• Extensively worked on Jenkins, Bamboo, GitLab for continuous integration and for End-to-End automation for all build and deployments.
• Installed, configured, and managed the servers (AWS, Linux, Tomcat, Apache, MySQL, MongoDB, Groovy/Grails, Hudson/Jenkins, Jira, Git, JUnit).
• AWS Services Experience: EC2, ECS, EKS, ECR, SNS, SQS, Route 53, IAM, VPC, EBS, AMI, APIs, CloudTrail, CloudFront, RDS, CloudWatch, S3, API Gateway, Auto Scaling, ALB, NLB, Lambda, CloudFormation, AWS Glue, Elastic Beanstalk, DynamoDB, Amazon Aurora DB.
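The CloudFormation and S3 work listed above can be sketched with a short Python snippet that generates a minimal template for a versioned, encrypted bucket; the logical ID and property choices are illustrative, not taken from a specific project:

```python
import json

def make_bucket_template(bucket_logical_id: str) -> dict:
    """Build a minimal CloudFormation template with a versioned, encrypted S3 bucket."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            bucket_logical_id: {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    # Versioning lets deleted/overwritten objects be restored
                    "VersioningConfiguration": {"Status": "Enabled"},
                    # Default server-side encryption for data at rest
                    "BucketEncryption": {
                        "ServerSideEncryptionConfiguration": [
                            {"ServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
                        ]
                    },
                },
            }
        },
    }

template = make_bucket_template("ArtifactBucket")
print(json.dumps(template, indent=2))
```

Generating templates programmatically like this keeps the JSON consistent across environments and makes the resource definitions easy to review in version control.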
PROFESSIONAL EXPERIENCE:
DevOps Engineer - Client: UHG, Horsham, PA Jan 2024 – Present
Responsibilities:
• Worked on all phases of the Software Development Life Cycle. Well-versed with SDLC methodologies and principles, including Agile, Scrum, & Waterfall, as well as SCM best practices.
• Experience in dealing with GCP IaaS - VPC, Compute Engine, Cloud Functions, Resource Manager, Cloud VPN, Cloud Load Balancing, Cloud Armor, Cloud Auto-scaling, & Traffic Director.
• Worked with Ansible playbooks for provisioning, configuration management, patching, & software deployment. Built and deployed various Ansible playbooks and Terraform modules in dev, QA, pre-prod, and prod environments.
• Installed and configured Zookeeper & Kafka with 3 nodes and connected them to Kubernetes-deployed applications.
• Configured Prometheus and Grafana for monitoring microservices, including Pod and node CPU utilizations, Pod deployments, and performance metrics. Created dashboards in Grafana by importing data from Prometheus.
• Worked in the installation and configuration of Splunk components (indexer, forwarder, search heads), heavy forwarders, and universal forwarders. Configured Splunk alerts and created dashboards.
• Built automated CI/CD pipelines with Google Cloud Build, Jenkins, and Google Cloud Deployment Manager.
• Architected and deployed scalable, secure, and highly available solutions on Google Cloud Platform, leveraging services such as GKE (Google Kubernetes Engine), BigQuery, and Cloud Pub/Sub. Optimized cloud resources to reduce operational costs by 20% while maintaining high performance. Led the migration of on-premises applications to GCP, ensuring a seamless transition with zero downtime and improved system resilience.
• Automated the creation, modification, and deletion of user IDs and service accounts using PowerShell, enhancing security and efficiency in Active Directory and Google Cloud Identity environments.
• Developed secure PowerShell scripts to automate password resets for user and service accounts, reducing support ticket volume and improving compliance with password policies.
• Automated the process of joining desktops to Google Cloud Identity via PowerShell, ensuring seamless integration and compliance with organizational policies.
• Created and maintained comprehensive PowerShell scripts to manage identity and access tasks, streamline administrative processes, and enhance overall system security and efficiency.
• Used Terraform and Golang to automate the deployment of GCP infrastructure across multiple regions, ensuring consistency and compliance with company standards.
• Automated security compliance checks on GCP using Golang scripts, covering IAM roles, network security, and data encryption.
• Designed and implemented microservices on Google Kubernetes Engine (GKE) using Golang to handle financial transactions.
• Set up Kubernetes (k8s) clusters for running microservices and deployed them into production on Kubernetes-backed infrastructure. Automated Kubernetes cluster provisioning via Ansible playbooks.
• Involved in designing and deploying a multitude of applications utilizing almost all of the AWS stack, including EC2, Route 53, S3, RDS, DynamoDB, SNS, SQS, Lambda, and Redshift, focusing on high availability, fault tolerance, and auto-scaling via AWS CloudFormation.
• Provided day-to-day production BAU (Business As Usual) support for critical systems, ensuring 24x7 availability and stability.
• Installed, configured, and maintained JFrog Artifactory (both OSS and Enterprise editions) to manage binary artifacts across development, staging, and production environments.
• Created and managed local, remote, and virtual repositories for Maven, npm, Docker, and other artifact formats.
• Set up artifact promotion pipelines using custom scripts or REST APIs to ensure secure and traceable releases.
• Monitored Artifactory logs and performance metrics; performed upgrades, patches, and backups to ensure system availability and reliability.
• Installed and configured Terraform and built infrastructure using Terraform configuration files.
• Implemented AWS solutions using EC2, S3, Elastic Load Balancer, PowerShell, Lambda, Auto Scaling groups, and optimized volumes.
• Participated in planning and implementation of Systems Operations Engineering initiatives, contributing to documentation and compliance-focused tasks.
• Optimized data transformations and processing performance in Glue jobs to meet SLAs and minimize costs.
• Involved in AWS EC2/VPC/S3/SQS/SNS based automation through Terraform, Ansible, Python, and Bash
• Developed efficient ETL jobs using AWS Glue to extract data from Amazon S3, databases, and streaming sources.
• Implemented data quality checks and validation mechanisms within AWS Glue jobs to identify and handle erroneous data effectively.
• Performed data quality checks on the extracted data using PySpark via AWS Glue.
• Worked with XML, PSV, TSV, JSON, and CSV formats to perform data validations through PySpark.
• Read and interpreted Python scripts to aid in debugging, automation, and root cause analysis of operational issues.
• Utilized AWS Glue's error handling features to capture and manage data processing exceptions, ensuring data integrity and reliability.
• Integrated AWS Glue jobs with other AWS services like AWS Lambda and AWS StepFunctions to create end-to-end data workflows.
• Worked proactively on monitoring production and non-production microservice-based environments to ensure stability and performance throughout the entire application lifecycle.
• Triaged and troubleshot operational issues to identify root causes, and implemented long-term solutions for identified issues in conjunction with development and operations teams.
• Created detailed technical documentation on GitHub branching strategies and build/release processes and procedures.
• Deployed and monitored scalable infrastructure on Amazon Web Services (AWS) and handled configuration management.
Environment: Kubernetes, Terraform, Docker, Ansible, Splunk, Nginx, AWS, EC2, GCP, AWS Lambda, S3, CloudFormation, Nexus, JFrog, Spinnaker
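The Glue data-quality checks described above can be sketched in pure Python as a stand-in for the PySpark version; the field names and validation rules here are illustrative only:

```python
import csv
import io

REQUIRED = ("id", "amount")  # illustrative required columns

def quality_check(rows):
    """Split records into (good, bad): required fields present and amount numeric."""
    good, bad = [], []
    for row in rows:
        try:
            # Reject records with missing or empty required fields
            if any(not row.get(k) for k in REQUIRED):
                raise ValueError("missing required field")
            # Reject records whose amount is not a valid number
            row["amount"] = float(row["amount"])
            good.append(row)
        except (ValueError, TypeError):
            bad.append(row)
    return good, bad

sample = "id,amount\n1,10.5\n2,\n3,abc\n"
rows = list(csv.DictReader(io.StringIO(sample)))
good, bad = quality_check(rows)
```

In a Glue job the same split would typically route the bad partition to a quarantine location (e.g., a separate S3 prefix) so erroneous data can be inspected without breaking the downstream pipeline.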
DevOps Engineer/Cloud Engineer - Client: Byte Alpha Solutions, Hyderabad, India July 2017 – July 2022
Responsibilities:
• Set up continuous integration and continuous delivery (CI/CD) pipelines to automate build, integration, test, and deploy processes. This frictionless approach enables continuous integration of various technology stacks, automated testing, and automated deployment capabilities that allow software to be developed and deployed rapidly, reliably, and repeatedly with minimal human intervention.
• Responsible for application Build & Release process which includes Code Compilation, Packaging, Security Scanning and Code quality scanning, Deployment Methodology and Application Configurations.
• Configured Jenkins jobs and pipelines using Git, Gradle, Maven, MSBuild, Jenkins, SonarQube, and JFrog Artifactory, including build and deployment of Java applications to WAS servers and .NET applications to IIS servers.
• Defining Release Process & Policy for projects early in SDLC and responsible for source code build, analysis and deploy configuration.
• Extensively worked on Jenkins for continuous integration and for End-to-End automation for all build and deployments. Implement CI-CD tools Upgrade, Plugin Management, Backup, Restore, LDAP and SSL setup.
• Working closely with Development, Operations team and project management to create build and Deploy jobs across multiple environments.
• Created pipelines from scratch and wrote Jenkinsfiles using Groovy scripts.
• Used Kubernetes to deploy, scale, load balance, and manage Docker containers.
• Wrote CloudFormation templates (CFTs) to automate the services used for application deployment.
• Automated database deployments as well as Windows deployments using Docker images.
• Used Docker images in Jenkins for database and Windows automation instead of physical build agents.
• Used JFrog Artifactory for storing all Docker images and JFrog Xray for scanning Docker images across all client servers.
• Utilized AWS CloudWatch to monitor environment instances for operational and performance metrics during load testing. Ran many microservices from local workstations on both Linux and Windows machines (Windows 10 only).
• Created and utilized CloudWatch to monitor resources such as EC2 CPU and memory, Amazon RDS DB services, DynamoDB tables, EBS volumes, and Lambda functions. Encrypted EBS volumes to make sure data at rest is secured and protected.
• Implemented a serverless architecture using Lambda and deployed AWS Lambda code from Amazon S3 buckets. Created a Lambda deployment function and configured it to receive events from an S3 bucket using CloudWatch Events.
• Created AWS Lambda functions to invoke AWS Glue jobs and send notifications to SNS.
• Creating the AWS Glue catalog and inserting data into the catalog DB, table.
• Worked on Docker to test the AWS Glue and AWS lambda local testing.
• Experience in building artifacts from Git using Maven/Gradle, uploading them to the Nexus artifact repository, and deploying to higher environments using Jenkinsfiles/Jenkins.
• Created and Managed various Splunk dashboards to view all the system generated metrics as a consolidated dashboard and integrate all the servers logging to Splunk.
• Implemented microservices using the Pivotal Cloud Foundry platform built upon Spring Boot services.
• Created Jenkins jobs to build AWS infrastructure from Git repos containing Terraform code. Implemented and worked on Terraform scripts to create infrastructure on AWS.
• Created and managed S3 buckets; enabled logging on S3 buckets to track requests and who is accessing the data, enabled versioning to restore deleted files, and created IAM roles for delegated access.
• Created snapshots and Amazon Machine Images (AMIs) of instances for backup, and created Identity and Access Management (IAM) policies for delegated administration within AWS.
• Migrated SVN to Bitbucket for version control and Jenkins to Bamboo for building and deploying.
• Supported an Agile CI/CD environment as a DevOps Engineer, administering Atlassian tools (Jira and Bitbucket) and providing Layer 3 support on these tools when issues arise.
• Experience with cloud automation technologies such as CloudFormation and Terraform, including using Terraform for building, changing, and managing existing cloud infrastructure as well as custom in-house solutions.
• Proficient in writing templates for AWS IaC using Terraform to build staging and production environments.
• Coordinated with and assisted developers in establishing and applying appropriate branching and labeling/naming conventions using Git source control, and analyzed and resolved conflicts related to merging of source code in Git.
• Environment: Bitbucket, Git, Ant, Maven, Jenkins, Docker, Kubernetes, Chef, Nexus, AWS, MongoDB, CouchDB, DB2, Tomcat, Shell, Ruby, Perl, Groovy, Jira, AngularJS.
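The Artifactory promotion scripting described above can be sketched in Python; the endpoint path follows JFrog's documented Docker promotion REST API, while the server URL, repository names, and image tag are hypothetical:

```python
import json

def docker_promote_request(base_url, source_repo, target_repo, image, tag):
    """Build the URL and JSON body for an Artifactory Docker promotion call.

    Repository and image names are illustrative; the actual call would be a
    POST with this body, authenticated against the Artifactory server.
    """
    url = f"{base_url}/api/docker/{source_repo}/v2/promote"
    body = {
        "targetRepo": target_repo,       # e.g., the prod-facing Docker repo
        "dockerRepository": image,       # image path within the repo
        "tag": tag,
        "copy": False,                   # move (not copy) the image on promotion
    }
    return url, json.dumps(body)

url, body = docker_promote_request(
    "https://artifactory.example.com/artifactory",  # hypothetical server
    "docker-dev-local", "docker-prod-local",
    "payments/api", "1.4.2")
```

Driving promotion through the REST API (rather than re-building or re-pushing images) keeps releases traceable: the same immutable image digest moves from the dev repository to the prod repository.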
TECHNICAL SUMMARY:
• Operating System: Linux, UNIX, Windows
• Cloud Environment: Google Cloud (GCP), Amazon Web Services (AWS)
• Build Tools/Artifacts: Maven, Ant, Gradle; WAR, JAR, TAR, APK, Xamarin
• Version Control System: Subversion (SVN), GIT
• Configuration Management: Chef, Puppet, Ansible
• Containerized Tool: Docker, Kubernetes
• Monitoring/Continuous Inspection Tools: CloudWatch, Nagios, Splunk, Grafana, SonarQube
• Tools/Middleware: WebSphere Application Server 3.5/4.0, WebLogic, JBoss, Apache, Tomcat, Nagios, Kafka, RabbitMQ, Logstash, Spark
• Web Servers: IIS 6.0/7.5/8.0, ASP.NET, Web Services on Windows
• Programming/Scripting Languages: C, C++, Core Java, UNIX Shell Scripting, Perl Scripting, Python, Groovy
• Databases: IBM DB2, Mainframes, Informatica, Couchbase, Oracle 9i/10g/12c, MS SQL Server 2005/2008, MySQL, SQLite, MongoDB, Cassandra
• Servers: WebLogic, IBM WebSphere 8.5, Apache Tomcat, JBoss
• Web Technologies: HTML, HTML5, CSS, JavaScript, AngularJS, jQuery, ASP.NET, RESTful Web Services
EDUCATION DETAILS:
• Master of Science in Computer Science: University of Bridgeport, Bridgeport, CT.