
Engineering Intern Cloud Engineer

Location:
Centreville, VA
Salary:
$83/hour
Posted:
April 04, 2023

Contact this candidate

Resume:

Smitha Rajathi Katta

DevOps Cloud Engineer Professional

https://www.linkedin.com/in/katta-smitha-5b678959/ adwcmh@r.postjobfree.com +1-443-***-****

Overall Experience: 7+ Years

SYNOPSIS

• Over 7 years of experience in the Information Technology industry as a Lead Azure Security Engineer across Agile operations, Build & Release engineering, Software Configuration Management, release deployments to various environments, and cloud management.

• Skilled in software development life cycles and Agile/Waterfall methodologies; researches the latest technologies and applies them to drive organizational growth efficiently.

• Experience as a Cloud Administrator on Microsoft Azure, configuring virtual machines, storage accounts, resource groups, Function Apps, Application Insights, Service Bus, VM scale sets with custom scripts, App Service deployments, and Azure SQL Server.

• Experience migrating SQL databases to Azure Data Lake, Azure Data Lake Analytics, Azure SQL Database, Databricks, and Azure SQL Data Warehouse; controlling and granting database access; and migrating on-premises databases to Azure Data Lake Store using Azure Data Factory.

• Experience developing Spark applications using Spark SQL in Databricks for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns.

• Good understanding of Spark Architecture including Spark Core, Spark SQL, Data Frames, Spark Streaming, Driver Node, Worker Node, Stages, Executors and Tasks.

• Experience in Microsoft enterprise environments across multiple infrastructure upgrades, including Azure Cloud. Azure skills: ARM templates, Azure Web App/API, Azure Search, Azure Functions, Azure Backup Vault and Recovery Services Vault with backup/restore, SQL DB, Azure Data Factory, Azure Data Lake Storage, Azure Monitor, database monitoring and optimization, Terraform, Spark, and cloud IaaS, PaaS, and SaaS.

• Experience in Linux administration (installation, configuration, tuning, and upgrades) of Red Hat and Oracle Linux.

• Experience configuring public/private cloud infrastructure on Amazon Web Services (AWS), including EC2, Elastic Load Balancers, Elastic Container Service (Docker containers), S3, CloudFront, RDS, DynamoDB, VPC, Direct Connect, Route 53, CloudWatch, CloudFormation, and IAM.

• Experienced in all phases of the software development life cycle (SDLC) with specific focus on the build and release of quality software. Experienced in Waterfall, Agile/Scrum, Lean and most recently Continuous Integration (CI) and Continuous Deployment (CD) practices.

• Custom integrations and plugin development for CI/CD tools such as Git, Jenkins, Artifactory, SonarQube, JaCoCo, Spark, PIT mutation testing, Checkstyle, Cobertura, Splunk, Bitbucket, AKS, and Helm charts; built reporting and dashboards using APIs and other techniques.

• Defined portlets, workflows, and packages to automate ERP deployments through IT Governance, Risk & Compliance (GRC).

• Helped design, install, and deploy cloud-based Big Data solutions on Azure and AWS, including Redshift scaling and database migrations (Liquibase and Flyway) for clients.

• Good experience in Azure Cloud, Docker, Kubernetes Cluster, Identity Access Management (IAM) and Information Security Metrics.

• In-depth understanding of the principles and best practices of Software Configuration Management (SCM) in Agile/Scrum and Waterfall methodologies, including Stash migrations.

• Expertise in implementing configuration management tools like Spark, AKS, Chef, Puppet, and Ansible.

• Experience in working with Terraform key features such as Infrastructure as code, Execution plans, Resource Graphs, Change Automation.

• Creation, installation, and administration of Red Hat virtual machines in a VMware 5.x environment.

• Worked on infrastructure with Docker containerization and Terraform; collaborated with development and support teams to set up a continuous delivery environment using Docker.

• Good experience writing release notes in Azure Wiki.

• Experience with monitoring tools Splunk, PagerDuty, Spark and New Relic.

• Ability to execute XML, Ant scripts, shell scripts, Perl scripts, and JavaScript.

• Good development experience in HTML, JavaScript, XML, C#, Java/JSP, and Python.

SKILLS

• Cloud services: AWS (EC2/SQS/SNS/S3/IAM/ELB), Azure, GCP

• SCM Tools: Bitbucket, GIT, CVS, AWS, and Stash.

• CI & Provisioning Tools: Hudson, Jenkins, Azure DevOps, Puppet.

• Monitoring Tools: Spark, PagerDuty, New Relic, Splunk, AWS cloud monitoring

• Build Tools: ANT, MAVEN, Gradle, CMake

• Operating System: Windows, UNIX, IOS, Red Hat LINUX, Ubuntu, Fedora.

• Bug Tracker & Testing Tools: JIRA, JaCoCo, Cobertura, Checkmarx, PIT mutation testing, Fortify on Demand, SonarQube, JUnit, FindBugs, Selenium, Cucumber

• Analytics: SAS Studio, AWS EMR

• Tools: JFrog Artifactory, Nexus, Atlassian Confluence.

• SDLC: Agile, Scrum, Waterfall methodologies.

• Container Orchestration Tools: Docker, EC2 Container Services, Terraform, Kubernetes

• Scripts: Shell Script, Batch Script, Groovy, Perl Script, PowerShell Script.

• Web Technologies: Servlets, JDBC, JSP, HTML, Java Script, XML, JSON.

• Web/App servers: WebLogic, WebSphere, Apache Tomcat, RHEL, JBoss.

WORK EXPERIENCE

Client: Swift, US

Role: Python Developer / Linux Administrator / Grafana Developer

Duration: Oct’22 – Present

Responsibilities:

• Assist in the development of long-term solutions for managed services, hosting, training, and other associated support.

• Built an API-driven publishing service that handled projects’ git-linguist data using InfluxDB, JSON, and jQuery.

• Built a Grafana dashboard connecting different panels that generates high-level reports of the languages used across repositories at the organization level.

• Involved in the design and development of the application using Python 3.x.

• Created and wrote result reports in different formats such as TXT, CSV, and JSON while working on CSV files, taking input from Docker images built from git commits to ease company processes and ensure continuity.

• Implemented development automation using Docker, Jenkins CI, CloudBees, and other tools, maintaining a streamlined process.

• Worked on ETL scripts in Python to get data from one database table and load the results into another, automating the loading process.
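For illustration, the table-to-table ETL step described above can be sketched as below. This is a minimal stand-alone example using SQLite from the standard library in place of the actual production databases; the `orders`/`order_totals` schema and column names are hypothetical.

```python
import sqlite3

def copy_aggregated(conn: sqlite3.Connection) -> int:
    """Extract rows from a source table, aggregate them, and load into a target table."""
    cur = conn.cursor()
    # Extract + transform: total order amount per customer (illustrative schema)
    rows = cur.execute(
        "SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id"
    ).fetchall()
    # Load: upsert the aggregated results into the summary table
    cur.executemany(
        "INSERT OR REPLACE INTO order_totals (customer_id, total) VALUES (?, ?)",
        rows,
    )
    conn.commit()
    return len(rows)

# Demo with an in-memory database
conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    CREATE TABLE order_totals (customer_id INTEGER PRIMARY KEY, total REAL);
    INSERT INTO orders VALUES (1, 10.0), (1, 5.0), (2, 7.5);
    """
)
n = copy_aggregated(conn)
totals = dict(conn.execute("SELECT customer_id, total FROM order_totals"))
```

In a real deployment the same extract/load pattern would run against the production database drivers on a schedule, rather than an in-memory SQLite instance.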

Environment: Kubernetes, OpenShift, CloudBees, Jenkins CI, Grafana, Python, Chef, JIRA, Confluence, Maven, Artifactory, GitHub, PyCharm, VS Code, Ansible, Linux RHEL 5.x

Client: Bank of America, US

Role: Build Engineer / Linux Administrator

Duration: Feb’22 – Sep’22

Responsibilities:

• Involved in defining, documenting, negotiating and maintaining Product/Application Release Roadmap. Creation of Application Release Plan (Release Scope Planning & defining Milestones).

• Involved in various Web Application Servers (WAS) administration and troubleshooting

• Establish technical standards for the technical framework

• Conduct, assist with, and/or manage unit and system tests

• Review technical transaction design specifications created by developers

• Develop and/or manage technical aspects of application software, user interfaces, and third-party components

• Troubleshoot issues by performing thorough analysis; develop well-thought-out resolutions and options

• Provide knowledge transfer to the technical project teams

• Involved in migration (Liquibase) activities of Java scripts and database scripts from Oracle, MS SQL Server, and MySQL into different environments like Development, QA, UAT, and Production on Red Hat Enterprise Linux (RHEL) infrastructure.

• Implemented Git metadata including elements, labels, attributes, triggers, and hyperlinks. Implemented and maintained the branching and build/release strategies utilizing Git.

• Building post-install scripts using shell scripting on Linux servers.

Environment: Java, J2EE, Python, ANT, Maven, Jenkins, Tomcat, Git, GitHub, Bash, Puppet, VMware, Linux, CentOS, Rational ClearQuest, Deploy, Nexus, Oracle, MS SQL Server

Client: ACI Worldwide (Bangalore, India)

Role: Cloud AWS / Build and Release Engineer

Duration: Oct’19 - Jul’21

Responsibilities:

• Strong hands-on experience committing and pushing code to Git repositories, installing Git plug-ins, and creating builds in Jenkins.

• Experience with AWS cloud services like EC2, S3, RDS, ELB, EBS, VPC, Route 53, Auto Scaling groups, CloudWatch, and CloudFront, plus IAM, for installing, configuring, and troubleshooting various Amazon images for server migration from physical to cloud.

• Created IAM policies to grant granular permissions to specific AWS Users, Groups, and Roles.
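As an illustration of the kind of granular IAM policy described above, the sketch below builds a least-privilege read-only S3 policy document in Python. The bucket name is hypothetical, and in practice the document would be attached to a user, group, or role via the IAM console, CLI, or infrastructure-as-code.

```python
import json

def s3_read_only_policy(bucket: str) -> str:
    """Return an IAM policy JSON granting read-only access to a single bucket."""
    doc = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListBucket",
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                # Bucket-level action applies to the bucket ARN itself
                "Resource": [f"arn:aws:s3:::{bucket}"],
            },
            {
                "Sid": "ReadObjects",
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                # Object-level action applies to keys inside the bucket
                "Resource": [f"arn:aws:s3:::{bucket}/*"],
            },
        ],
    }
    return json.dumps(doc, indent=2)

# Hypothetical bucket name, for illustration only
policy = s3_read_only_policy("example-bucket")
```

Splitting bucket-level and object-level actions into separate statements like this is what keeps the grant granular: each statement allows only the actions valid for its resource type.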

• Automated AWS deployment and configuration tasks using Lambda. Implemented CloudWatch alarms and Lambda functions for automatic scaling, fault tolerance, and self-healing.

• Building pipelines with Elastic Beanstalk deployment.

• Worked as a TFS admin: installed TFS 2013 and set up different TFS user groups for the project team.

• Modify source code to reduce build dependencies and increase build efficiencies.

• Created and modified the build definitions as per the projects.

• Worked on Cloud automation using AWS Cloud Formation templates.

• Automated continuous build and deploy scripts for the Jenkins Continuous Integration tool.

• Used Splunk Cloud as a log management solution, shipping log files off-host into an index that can be searched, with graphs and alerts built from the data.

• Integrating various provisioning and monitoring modules into a single platform.

• Performed Branching, Tagging, and Release Activities on Version Control Tools: SVN, GIT.

• Developed PowerShell scripts for automation of the build and release process.

• Responsible for the Plugin Management, User Management, Build/Deploy Pipeline Setup and End-End Job Setup of all the projects.

• JIRA is used as ticket tracking, change management and Agile/SCRUM tool.

• Strong hands-on experience resolving issues for development and QA groups.

• Upgrading the application on Windows and Red hat Linux systems and creating automated builds and deploys using scripts.

• Developed build and deployment scripts using ANT and MAVEN as build tools in Jenkins to move artifacts from one environment to another, and wrote Maven and Ant builds for application-layer modules.

• Automate provisioning and repetitive tasks using Terraform and Python, Docker container, Service Orchestration.

• Installed and configured SSH, Telnet, FTP, DHCP, DNS, NFS, NIS, TCP/IP, and ZFS, and troubleshot network issues using traceroute, netstat, ifconfig, snoop, Telnet, and SSH.

• Installation and configuration of JBoss, WebSphere, Apache, WebLogic, LDAP, and mail servers. Created multiple configurations for websites in Apache, Nginx, and Passenger.

• Deployed Java/.NET applications and provided integration support between frontend and backend developers.

• Installed and configured the monitoring tools Datadog and Nagios for server-level alerts; wrote Python scripts to fetch logs from servers and push them to Datadog to monitor every log; wrote rsync scripts; and created load balancers for all the production servers.

• Administration of Red Hat, which included Jumpstarting, performing live upgrades of Linux operating systems, and Kickstarting.

• Administration and support of Unix Servers including Solaris, & Red Hat Linux and CentOS.

• Patch management using native commands on Red Hat Linux, following the change control procedures.

• Performance tuning and troubleshooting of the applications and resolution of issues arising out of the ticketing systems in Remedy.

• Upgrading and configuration of operating systems such as Linux, AIX, and Windows 2K/XP/Vista/Server 2003.

Environment: AWS services, Kubernetes, IAM, Auto Scaling, Linux, Lambda, Chef, Jenkins, Cloud Foundry, New Relic, PagerDuty, Spark, Python, JIRA, Confluence, Maven, Artifactory, GitHub

Responsibilities:

• Created AWS S3 buckets, managed policies for S3 buckets and Setting up databases in AWS using RDS and configuring instance backups to S3 bucket. Managed cloud trail logs and objects by storing them in S3 buckets.

• Configured S3 to host static web content, Elastic Load Balancers with EC2 Auto scaling groups and deployed Cloud Stack using AWS Ops Works. Also designed roles and groups for users and resources using AWS IAM.

• Implemented an additional layer of security for S3 buckets by defining custom bucket policies, enabling multi-factor authentication against accidental deletion, and enabling CloudTrail to track API calls for auditing all AWS resources.

• Worked on Docker hub, creating Docker images, and handling multiple images primarily for middleware installations and domain configuration.

• Configured the Jenkins master with the necessary plugins and slaves to support scalability and agility; configured Jenkins to run nightly builds on a daily basis and generated a change log to include daily changes.

• Integrated Jenkins CI with GIT version control and implemented continuous build based on check-in for various cross functional applications and created GitHub Web Hooks to set up triggers for commit, push, merge and pull request events.

• Responsible for Continuous Integration (CI) and Continuous Delivery (CD) process implementation-using Jenkins along with Python and Shell scripts to automate routine jobs.

• Installed Chef Server on the workstation and bootstrapped the nodes using Knife and involved in writing Chef Cookbooks and recipes to automate the deployment process.

• Involved in integrating chef cookbooks into Jenkins jobs for CD framework, and worked with various custom resources, created roles & environments, and using chef handlers for different auto Kickoff Requirement Jobs.

• Configured and setup ELK stack (Elasticsearch, Logstash and Kibana) to collect, search and analyze log files across the servers and monitored the servers using Cloud Watch and ELK for Network and Log Monitoring.

• Developed Python and shell scripts for automation of the build and release process, and developed custom scripts to monitor repositories and server storage.
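A minimal sketch of the kind of server-storage monitoring script described above, using only the Python standard library; the path, threshold, and alerting behavior are illustrative assumptions, not the original scripts.

```python
import shutil

def check_disk(path: str = "/", threshold_pct: float = 90.0) -> dict:
    """Report disk usage for a path and flag it if usage exceeds the threshold."""
    usage = shutil.disk_usage(path)  # returns (total, used, free) in bytes
    used_pct = usage.used / usage.total * 100
    return {
        "path": path,
        "used_pct": round(used_pct, 1),
        # In a real setup, a True alert would notify on-call (e.g. via PagerDuty)
        "alert": used_pct >= threshold_pct,
    }

report = check_disk("/", threshold_pct=90.0)
```

Run from cron or a CI job, a script like this covers the "monitor server storage" case; the repository-monitoring scripts would follow the same report-and-flag pattern against VCS metadata instead of `shutil.disk_usage`.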

• Created pre-commit hooks in Python/shell/bash that check for a JIRA pattern ID when committing code to SVN, limiting file size and file type and restricting the development team from checking in non-compliant code.
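The JIRA-pattern and file-policy checks in such a hook can be sketched as below. The project-key regex, size limit, and blocked extensions are hypothetical; a real SVN pre-commit hook would obtain the commit message and changed files via `svnlook` and reject the transaction when violations are found.

```python
import re

JIRA_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # e.g. PROJ-1234 (illustrative)
MAX_FILE_SIZE = 10 * 1024 * 1024  # 10 MB limit (illustrative)
BLOCKED_EXTENSIONS = (".exe", ".zip")  # illustrative type restriction

def validate_commit(message: str, file_sizes: dict[str, int]) -> list[str]:
    """Return a list of policy violations; an empty list means the commit may proceed."""
    errors = []
    if not JIRA_PATTERN.search(message):
        errors.append("commit message must reference a JIRA issue (e.g. PROJ-1234)")
    for path, size in file_sizes.items():
        if size > MAX_FILE_SIZE:
            errors.append(f"{path}: exceeds {MAX_FILE_SIZE} bytes")
        if path.endswith(BLOCKED_EXTENSIONS):
            errors.append(f"{path}: file type not allowed")
    return errors

ok = validate_commit("PROJ-42 fix login bug", {"src/app.py": 1024})
bad = validate_commit("fix stuff", {"build/tool.exe": 1024})
```

Returning all violations at once (rather than failing on the first) lets the hook print the full list to the developer before rejecting the commit.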

• Deployed and configured JIRA, both hosted and local instances for issue tracking, workflow collaboration, and tool- chain automation.

Environment: AWS, Jenkins, Chef, Docker, Maven, Git, Ant, ELK, EC2, S3, RDS, EBS, Elastic Load Balancer, Auto Scaling, Shell, JIRA, Python, Nginx, Apache Tomcat.

Client: STAPLES (Bangalore, India)

Role: Build and Release Engineer

Duration: Jan’16 – Mar’18

Responsibilities:

• Worked on a large data center migration project: migrated Linux/Unix servers from one data center to another with minimal downtime. Involved in P2P, P2V, and V2V migrations.

• Handled security issues like stale UNIX account cleanups, 90-day password changes, setting max and min password age, and creating a list of umask permissions for various users.

• Experience in Shell scripting (ksh, bash) to automate system administration jobs

• Experience in working with various infrastructures– compute, networking, storage, infrastructure security.

• Worked with multiple development teams to troubleshoot and resolve issues

• Worked on Perforce by Syncing data from Depot and submitting them.

• Experience in using Perforce plugin to synchronize files to the Jenkins workspace.

• Automated builds to sync to the latest revision when new changelists are found in Perforce, which also triggers a new build; whenever a build is triggered manually in Jenkins, the plugin syncs to the latest revision in the Perforce depot.

• Monitoring system performance and doing kernel tuning to enhance system performance.

• Applied Operating System updates, patches, and configuration changes.

• Worked with various teams to ensure system configurations are in compliance with corporate policies and control standards.

• Backup and restore of file systems using Veritas Netbackup.

• Worked on Remedy project and resolved Remedy tickets as assigned to the individual or team

• Participated in projects as directed including planning and the implementation of new applications and projects infrastructure.

• Involved in migration of projects from one flavor to another one.

• Involved in development, user acceptance, and performance testing, and production & disaster recovery servers.

Environment: Puppet, Git, PuTTY, Windows, Java/J2EE, Ruby, Eclipse, Ant, Jenkins, Maven, JIRA, JUnit, Linux, Tomcat, Apache application server

Client: Honda (Bangalore, India)

Role: Cloud DevOps Engineer

Duration: Apr’18 – Oct’19

Client: Edukinect (HYD, India)

Role: Associate Engineer

College Internship, India

Jun’15 – Dec’15

Edukinect evolved from a Microsoft innovation and incubation center in 2010. The company builds IT solutions and products for education and healthcare; its main motive is to foster academic engagement and connect student talent with industry demands through student-and-business connect initiatives. My short run as a Software Engineering Intern involved developing an online fast-food system, which serves customers who want quick food service while giving privileges to the admin and staff so they can manage it more efficiently.


