AWS Engineer

Location:
Arlington Heights, IL
Posted:
June 30, 2020


Bhanu Prakash

add8mb@r.postjobfree.com

678-***-****

linkedin.com/in/bhanuprakash91

EXECUTIVE SUMMARY

A cloud enthusiast with around 6 years of experience in Information Technology. As an Azure- and AWS-certified professional, I am pursuing a career in cloud-related fields, working with a variety of tools and technology services within an agile SDLC framework. I'm glad you read this far; let's discuss how my skills match the opportunity you have at hand. Thanks.

PROFESSIONAL SKILLS AND INTERESTS

Scripting Languages: Shell, Bash, Python (Basic)

Version & Source Control: Git, GitHub, Nexus, Terraform

Cloud Environment: AWS, Google Cloud (Basic), Azure (Intermediate)

Collaboration & Tracking: JIRA, Trello, Slack, Team management

Monitoring: Nagios, CloudWatch, Grafana

Azure: Azure Web Roles, Worker Roles, VM Role, Azure SQL, Azure Storage, Azure AD Licenses, Virtual Machine Backup, Vault, Azure Log Analytics, Azure Resource Manager (ARM), Virtual Machine Scale Sets (VMSS), Azure Virtual Networks (VNets), etc.

AWS: CloudFormation, Auto Scaling, CloudTrail, EC2, ELB, VPC, CloudWatch, Lambda, IAM, Route53, EMR, SNS, SQS, RDS, DynamoDB, S3, CodePipeline, etc.

Virtualization: Oracle VirtualBox (RHEL, Linux, CentOS & Ubuntu), VMware

Configuration Management: Ansible, Chef, Terraform (Basic)

Container Orchestration: Docker, AWS ECS

Web & Application Servers: Apache Tomcat, Nginx

Build Tools: Maven, Ant (Basic)

Continuous Integration: Jenkins, AWS CodeBuild

CERTIFICATION

• Microsoft Azure Architect Technologies (In Progress)

• AWS Certified SysOps Administrator Associate

• AWS Certified Developer Associate

• Linux Foundation LFS101x Certified

PROFESSIONAL SKILLS DEMONSTRATED

Role: DevOps Engineer October 2019 - Current

Client: Kohls

Roles & Responsibilities:

• Extensive use of configuration management tools, including automation scripting (Bash/Shell, YAML, Groovy, Python), architecture proposals, and laying out SDLC best practices.

• Identify and quantify value enhancement opportunities such as SaaS, PaaS, and IaaS Cloud Infrastructure.

• Installed and Configured Nexus to manage the artifacts in different Repositories.

• Define, drive automation & CI/CD roadmaps to migrate applications to DevOps standards.

• Configured S3 lifecycle of Applications & DB logs, including delete / archive logs based on retention policy of Apps and DB.
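A retention setup like the one above can be sketched as the lifecycle configuration dict that boto3's `put_bucket_lifecycle_configuration` accepts. The prefix, 30-day Glacier transition, and 365-day expiry below are illustrative placeholders, not the actual retention policy from this role:

```python
def log_lifecycle_rules(prefix, archive_after_days=30, delete_after_days=365):
    """Build an S3 lifecycle configuration for a log prefix.

    The returned dict matches the shape boto3's
    put_bucket_lifecycle_configuration expects under
    LifecycleConfiguration; all values here are hypothetical.
    """
    return {
        "Rules": [
            {
                "ID": "retain-" + prefix.strip("/"),
                "Filter": {"Prefix": prefix},
                "Status": "Enabled",
                # Archive to Glacier after the retention window...
                "Transitions": [
                    {"Days": archive_after_days, "StorageClass": "GLACIER"}
                ],
                # ...and delete outright once the policy expires.
                "Expiration": {"Days": delete_after_days},
            }
        ]
    }

config = log_lifecycle_rules("app-logs/")
```

Separate rules with different prefixes would let application and database logs carry different retention windows, as the bullet describes.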

• Configured AWS Identity and Access Management (IAM) to manage AWS users & groups with access policies to AWS services.

• Build custom Amazon Machine Images (AMI) & deploy to multiple regions to launch EC2 / store snapshots.

• Install and configure Splunk components such as indexers and search heads; create new dashboards to track store deployments.

• Used AWS API Gateway to make REST API calls to DynamoDB.
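A minimal sketch of that API Gateway-to-DynamoDB pattern, written as a Lambda proxy-style handler. The table object is injected so a stub can stand in for a real boto3 Table here; the key name, field names, and handler shape are assumptions for illustration:

```python
import json

def make_handler(table):
    """Return an API Gateway proxy handler that reads one item from a
    DynamoDB-like table. `table` only needs a get_item(Key=...) method,
    so a fake works in place of boto3 for this sketch."""
    def handler(event, context=None):
        key = event.get("pathParameters", {}).get("id")
        item = table.get_item(Key={"id": key}).get("Item")
        if item is None:
            return {"statusCode": 404,
                    "body": json.dumps({"error": "not found"})}
        return {"statusCode": 200, "body": json.dumps(item)}
    return handler

class FakeTable:
    """Stand-in for a boto3 DynamoDB Table resource."""
    def __init__(self, items):
        self.items = items
    def get_item(self, Key):
        item = self.items.get(Key["id"])
        return {"Item": item} if item else {}

handler = make_handler(FakeTable({"42": {"id": "42", "sku": "A-1"}}))
response = handler({"pathParameters": {"id": "42"}})
```

In a deployed setup, API Gateway's proxy integration would supply the `event` and the real table would come from `boto3.resource("dynamodb").Table(...)`.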

• Develop and maintain deploy jobs for application code deployment across all environments using a wide range of Automated tools (Jenkins, GitHub, Nexus, SonarQube, Check Marx, Ansible, Puppet, Vault, Docker, K8S, & Consul).

• Generate reports with SonarQube to cover Code quality for potential bugs, code coverage, coding rules.

• Managed Azure infrastructure: Azure Web Roles, Worker Roles, VM Role, Azure SQL, Azure Storage, Azure AD Licenses, Virtual Machine Backup, and recovery from Recovery Services Vault using Cloud Shell and the Azure Portal.

• Used Azure cloud services like AKS, Function app, Logic app, Azure DNS, Cloud Storage, and SaaS, PaaS, & IaaS concepts of Cloud computing architecture and implementation using Azure.

• Expertise in Azure Scalability, Availability, Build VM availability sets using Azure portal to provide resilient IaaS solution, Virtual Machine Scale Sets (VMSS) & Azure Resource Manager (ARM) to manage network traffic load.

• Define, drive automation & CI/CD roadmaps to migrate applications to DevOps standards & practices

• As a DevOps Engineer at Kohls, my day-to-day responsibilities include builds and deployments to various environments

• Create Jenkins Job DSL scripts to remove manual overhead and auto-generate Jenkins jobs to perform day to day processes

• Automate Jenkins to perform shell commands by remotely connecting to instances, then continue with Ansible tasks

• Use scripts to perform massive deployments on to Corp and Store environments, Provide production-level deployment release and support across 1200 stores and apply fix packs as needed

• Integrated Maven with Jenkins to generate dependencies for the Sterling applications

• Built artifacts are promoted & maintained after uploading to Nexus for repository management

• Extensive use of Git in propagating feature branch changes to merge and make changes available to the enterprise team

• Proactively lead and complete JDK 1.8 Enterprise upgrade to resolve many build issues / obsolete dependencies

• Automating register-less sanity testing to evaluate transactions reaching various levels (Tibco, COSA, etc. from storefront)

• Monitor the health status of Corp, Store, and network using Splunk dashboards to support Peak to Holiday readiness

• Lead requirement-gathering & analysis sessions to address automation pain-points, provide task deliverables to Engineering

• Reduced build times for OMS by plug-and-play of the delta jars into the build package

Skills exercised: AWS services, Azure Cloud Services, Ansible, Jenkins, Git, Ant, Maven, Nexus, SonarQube, Tomcat, Docker, Checkmarx, Consul, Vault, Splunk, Prometheus, Grafana, Shell, Groovy, Ruby, Python, JIRA

Role: AWS – Cloud Engineer / DevOps Engineer December 2018 - July 2019

Client: CAC inc

Roles & Responsibilities:

• Participate in design & architect discussions to adapt suitable AWS services & seamlessly integrate with 3rd party services

• Connected AWS account services with Site24x7 to surface performance metrics for EC2 instances, S3, load balancers, and RDS instances

• Configure user, sub-user account dashboards, setup monitoring alerts on Twilio-SendGrid, 250ok, postman, etc.

• Create metric alerts of the deployed campaigns and analyze the deliverability of the results on 250ok via API gateways

• Provision, support AWS serverless services like DynamoDB, Athena, Glue jobs, crawlers, EMR, data-firehose API gateways

• Create, evaluate & administer JIRA ticketing. Discuss roadblocks at daily agile-scrum to achieve sprint deliverables in time

• Experience creating task definitions, which specify the tasks, resource allocation (Fargate), services, and the Docker image the application is built on, for Elastic Container Service and ALB.

• The uploaded application code is used to build a Docker image that runs in Fargate, the AWS serverless container service.

• Once the task is defined with the tagged image from ECS, we build or update the Fargate service with the new task definition.

• Build the docker image, upload the new docker image to the ECR repository, deploy docker image as a Fargate task in AWS
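The task-definition step in the flow above can be sketched as the dict that ECS's `register_task_definition` call takes for Fargate. The family name, image URI, port, and CPU/memory sizes below are hypothetical, not the client's actual service:

```python
def fargate_task_definition(family, image, cpu="256", memory="512"):
    """Build the kwargs for an ECS register_task_definition call
    targeting Fargate. All names and sizes are illustrative."""
    return {
        "family": family,
        # Fargate requires the awsvpc network mode and task-level sizing.
        "networkMode": "awsvpc",
        "requiresCompatibilities": ["FARGATE"],
        "cpu": cpu,
        "memory": memory,
        "containerDefinitions": [
            {
                "name": family,
                "image": image,  # the tagged image pushed to ECR
                "essential": True,
                "portMappings": [
                    {"containerPort": 8080, "protocol": "tcp"}
                ],
            }
        ],
    }

taskdef = fargate_task_definition(
    "campaign-api",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/campaign-api:latest",
)
```

Updating the Fargate service with a new revision of this definition is what rolls out the freshly pushed ECR image, matching the build-upload-deploy sequence in the bullets above.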

• Experience in practicing Kanban dashboards, Scrum, Scaling practices & DevOps standards CI/CD integration for better results

• Equip developers with access control to push code with Git, run CodeBuild for continuous integration with CodePipeline, and deploy the application via CodeDeploy onto Amplify to complete the CI/CD pipeline

• Requirement Analysis: Brainstorming with clients to design next-generation IT architectures to meet their business needs

• POC: Building Proof of Concept for new or emerging technologies at a client site

• Design, plan and execute enterprise resource planning (ERP) with Trusted Advisor, with a cloud-first approach on AWS

• Automate EC2, RDS instances to start/stop on defined schedules with CloudFormation based on resource tagging
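One way the scheduled start/stop above might be wired up, assuming a tag-filtering Lambda already exists: a CloudFormation resource fragment for an EventBridge (CloudWatch Events) rule firing on a cron schedule. The resource name, cron expression, and function ARN are illustrative:

```python
def schedule_rule(name, cron, lambda_arn):
    """Build a CloudFormation resource fragment: an AWS::Events::Rule
    that invokes a start/stop Lambda on a cron schedule. The Lambda
    itself (not shown) would filter EC2/RDS instances by tag."""
    return {
        name: {
            "Type": "AWS::Events::Rule",
            "Properties": {
                "ScheduleExpression": "cron({})".format(cron),
                "State": "ENABLED",
                "Targets": [
                    {"Arn": lambda_arn, "Id": name + "Target"}
                ],
            },
        }
    }

resources = {}
# Hypothetical nightly stop at 01:00 UTC.
resources.update(schedule_rule(
    "StopNightly",
    "0 1 * * ? *",
    "arn:aws:lambda:us-east-1:123456789012:function:stop-tagged",
))
```

A paired morning rule pointing at a start function would complete the schedule; both fragments drop into a template's `Resources` section.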

• Define Cognito user pools to provision application-specific access to client-specific users by integrating user Active directories

• Integrate on-premise Microsoft Active Directory (AD) using Oauth to provide single sign-on for the organization

• Import data from Athena, S3 to Amazon quick-sight, create a dashboard to analyze and take informed decisions

• Extensive use of CloudWatch alarms and graphed metrics to trigger serverless services that perform defined actions at a given time

• Create process flows using Step Functions to run conditional statements that process raw data and produce refined & usable data
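A conditional flow like that can be sketched in Amazon States Language: a Choice state routes records that need refining through a task, and the rest straight to completion. The field name and Lambda ARN are placeholders:

```python
import json

def refine_state_machine():
    """Return an Amazon States Language definition with one Choice
    branch; the task ARN and $.needsRefining field are hypothetical."""
    return {
        "StartAt": "CheckRaw",
        "States": {
            "CheckRaw": {
                "Type": "Choice",
                "Choices": [
                    {
                        "Variable": "$.needsRefining",
                        "BooleanEquals": True,
                        "Next": "Refine",
                    }
                ],
                "Default": "Done",
            },
            "Refine": {
                "Type": "Task",
                "Resource": ("arn:aws:lambda:us-east-1:"
                             "123456789012:function:refine-data"),
                "Next": "Done",
            },
            "Done": {"Type": "Succeed"},
        },
    }

# Step Functions takes the definition as a JSON string.
definition = json.dumps(refine_state_machine())
```

The serialized definition is what a `create_state_machine` call (or a CloudFormation `AWS::StepFunctions::StateMachine` resource) would consume.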

• Maintain a requirements checklist to keep code, database, and configuration in sync across test and prod environments

• Perform audits; use Trusted Advisor to reduce cloud infrastructure costs by defining threshold usage & SNS notifications

• Set up and configure a complete Qlik dev environment from scratch, from user AD integration to Route53

• Create users, streams. Configure security & access rules. Import & export apps, assign license & tokens to generate reports

• Configure DynamoDB Streams API endpoints to write changes in Parquet format to S3 via Kinesis, for querying with Athena.

• Led an offshore team of 8 in developing a 3-million-revenue customer management platform that helped deliver campaigns & maintain historic audience data to market proper loyalty offers

Skills exercised: EC2, DynamoDB Streams, Kinesis Data Firehose, QuickSight, Prometheus, Grafana, SNS, Git, Route53, API Gateway, RDS, Athena, Glue jobs, Workspaces, crawlers, CodePipeline, CodeDeploy, Qlik, Trusted Advisor, Step Functions

Role: IT - Systems Administrator January 2016 – May 2018

Client: Georgia State University

Roles & Responsibilities:

• Web Infrastructure: Install, configure and support web infrastructure and application platforms

• Worked on building and deploying software artifacts across multiple engineering environments (Prod, Test, Staging).

• Worked with VSTS API for customizing check-in policies and alert mechanism.

• Pipeline: Used AWS and built a CodePipeline DevOps proof of concept to demonstrate a real-world IT scenario

• Scrum: Introduced scrum methodology & sprints to improve standards & procedures of meetings at Panther Hackers(PH)

• Tableau: Developed a Tableau dashboard to help PH get funds. Analyzed volunteer expertise to improve content partnerships

• PHP Scripts: Created PHP scripts to automate verification and keep track of sponsors and registered conference attendees

• HTML, CSS & JavaScript: Designed, enhanced & created a highly scalable responsive website to meet the user requirements

• Deployment: Deploy applications with the help of Active Directory, SCCM, GSU NetBoot2.0 on on-premise machines in VPC

• Security: Develop bash/shell programs to automate updates, patches & build releases for the applications & secure machines

Skills exercised: AWS (CodePipeline, CodeDeploy, Elastic Beanstalk), Git, GitHub, Jenkins, SonarQube, Maven, OS Essentials, Microsoft Access DB, SharePoint (Collaboration), Web App, Excel, Word, Publisher, AREMOS

Role: DevOps/Cloud Engineer (AWS) July 2015 - December 2015

Client: Green Buds Software Technologies

Roles & Responsibilities:

• Created blueprints to structure AWS VPCs utilizing Terraform. Used Elastic-Search to extract & analyze Cloud Watch logs.

• Created/Configured AWS Virtual Private Cloud, Availability Zones, EC2 instances, Subnets, routing tables, Network Access Control List (ACL), NAT and NAT Gateway, Route 53 DNS, users, Security Groups, Roles, Policies, Auto-Scaling, Elastic Load Balancer (ELB), Cloud Front, Cloud Watch, Simple Storage Service (S3), Elastic Block Store (EBS), and Glacier.

• Worked with developers & laid out an end-to-end streamlined Implementation process of CI/CD

• Administer, deploy & automate through various Version Control Systems like Git, GitHub, and SVN (Basic)

• Utilized JSON and YAML to write Ansible Playbooks & AWS configuration management to provision cloud for 60+ clients.

• Automated application deployment workflow with AWS ECS containers, Docker, Vagrant, Node.js, CloudFormation, OpsWorks, CodeDeploy, CodePipeline, CodeBuild, Jenkins, and GitHub (branch, label, tagging & merge).

• Created an automated event/alert driven notification service utilizing SNS, SQS, Lambda, and CloudWatch.
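A sketch of one link in an alarm-driven notification chain like the one above: a Lambda handler subscribed to a CloudWatch-alarm SNS topic that republishes a formatted summary. The SNS client is injected so a stub can stand in for boto3 here, and the topic and alarm names are assumptions:

```python
import json

def alert_forwarder(sns_client, topic_arn):
    """Return a handler for SNS-delivered CloudWatch alarm events that
    publishes a short summary to another topic. sns_client only needs
    a publish(**kwargs) method, so a fake replaces boto3 in this sketch."""
    def handler(event, context=None):
        # CloudWatch alarms arrive as a JSON string inside the SNS record.
        alarm = json.loads(event["Records"][0]["Sns"]["Message"])
        subject = "{}: {}".format(alarm["AlarmName"], alarm["NewStateValue"])
        sns_client.publish(TopicArn=topic_arn, Subject=subject,
                           Message=json.dumps(alarm))
        return subject
    return handler

class FakeSnsClient:
    """Stand-in for a boto3 SNS client; records publish calls."""
    def __init__(self):
        self.published = []
    def publish(self, **kwargs):
        self.published.append(kwargs)

sns = FakeSnsClient()
handler = alert_forwarder(
    sns, "arn:aws:sns:us-east-1:123456789012:ops-alerts")
event = {"Records": [{"Sns": {"Message": json.dumps(
    {"AlarmName": "HighCPU", "NewStateValue": "ALARM"})}}]}
summary = handler(event)
```

In a deployed version the downstream topic could fan out to SQS queues or email subscriptions, completing the SNS/SQS/Lambda/CloudWatch chain the bullet names.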

• Migrated a MySQL database to Aurora with the help of Amazon Database Migration Service and cut errors by 50%.

• Utilized AWS Route 53 to provide DNS failover: a fault-tolerant, highly available architecture whose health checks help us understand how our end-users are routed to the web applications.

• Created Users, Groups, and implemented IAM policies. Configured AWS Lambda to log the changes in AWS resources.

• Created and launched instances of EC2 Red Hat Linux with EBS storage. Configured Trusted Advisor to stay within budget.

• Created infrastructure snapshots according to Recovery Time Objective & Recovery Point Objective guidelines.

• Created and renewed SSL certificates (issued by Symantec) to secure existing DNS application endpoints & RHEL servers.

• Docker: Installed Docker containers & consoles for managing application lifecycles.

• Monitoring: Installed and configured Nagios, CloudWatch & Grafana to create resource usage dashboards for production servers, using Graphite with collectl as the metric sender to monitor network bandwidth, memory usage, and hard drives.

• Installed, configured Database on LDAP (Server & Client) & User Management (creating admin, User migration) on LDAP server

• Worked on 3-tier web hosting architecture on web servers like Apache Tomcat and Nginx

• Took advantage of Maven & POMs to automate the build process while integrating with 3rd-party tools SonarQube & Nexus.

Skills exercised: GitHub, JIRA, YAML, collectl, AWS, IaaS, PaaS (Basic), Jenkins, Maven, UNIX, Nexus

Role: Assistant Systems Engineer May 2013 – June 2015

Client: Green Buds Software Technologies

Roles & Responsibilities:

• Release Management: Adapted Software release management strategies over various applications in the agile environment.

• Git: Worked with developers to administer Git source-code at enterprise level by supporting branching, merging, and tagging.

• JIRA: Used JIRA to track potential backlogs and progress of tasks in workflow collaboration and toolchain automation.

• Integration: Provisioned automated build processes with DevOps tools like Jenkins for Integration with Git, Maven, Nexus to setup a staging environment to deploy code & test applications. Used Maven tool to manage dependencies

• Automated the build and release management process including monitoring changes between releases.

• Experience working with Agile/Scrum methods involving 2-week sprints.

• Initiated responsibility for administering the SVN and GIT servers which included install, upgrade, backup, adding users, creating repository/branches, merging, writing hooks scripts, performance tuning, troubleshooting issues and maintenance.

• Comfortable and Flexible with Installing, updating & configuring various flavors of UNIX, Linux, and Windows

• Good Knowledge in developing Shell and Python (Basic) scripts & also worked on Nexus to deploy Artifacts

• Setup Clustered Environment with Tomcat server to deploy web applications integrated with Database Software

• Support: 24x7 production support in rotational shifts as needed to maintain 99.99% uptime of services

Skills exercised: Git, JIRA, Jenkins, Maven, UNIX, Nexus, SDLC (Software Development Life Cycle)

Role: System Administration Intern August 2012 – December 2012

Client: Green Buds Software Technologies

Roles & Responsibilities:

• Infrastructure: Supported infrastructure environments comprising RHEL 3, 4, Linux, Centos, UNIX machines.

• LVM: Extensive use of Logical Volume Manager (LVM) with EXT 3/4, creating Volume Groups, Logical volumes.

• Data Migration: Used R-sync utility for customized data migration from UNIX to Linux systems.

• GCM Ticketing & Troubleshooting: Day-to-day troubleshooting and support for end-users newly migrated to Linux. Managed system administration incident tickets, requests, and change requests using ServiceNow and the GCM ticketing tool.

Environment: HP ProLiant DL 360 G7, DL 380 G6, DL 580 G7, Dell PowerEdge R series; Red Hat Linux 5.x, 6.x

EDUCATION

Georgia State University (Master of Computer Science) May 2018
