
AWS Cloud Developer

Location:
Aldie, VA
Posted:
January 10, 2024

PROFESSIONAL SUMMARY

●AWS Cloud Developer with more than 6 years of experience, skilled at understanding business requirements, translating them into development work, and writing test cases for end-to-end testing.

●Expertise in AWS cloud services including S3, IAM, SQS, SNS, EC2, EBS, ELB, Lambda, Step Functions, CloudWatch, CloudFormation, VPC, VPC Flow Logs, DynamoDB, EKS, ECS, and ECR.

●Strong hands-on experience in Python.

●Hands-on experience with Docker.

●Hands-on experience with Kubernetes.

●Expertise using Git repositories.

●Actively participated in Sprint Planning and Backlog Grooming sessions to break down Sprint Backlog items and provide estimates and effort sizing.

●Strong hands-on experience building REST APIs using Flask.

●Experience in the healthcare and financial domains.

●Strong hands-on experience with Python modules such as pandas, datetime, sys, math, numpy, Onelake, os, logging, and json.

●Strong hands-on experience with AWS service APIs via boto3 (a brief sketch appears at the end of this summary).

●Expertise across the entire Software Development Life Cycle (SDLC) under Waterfall and Agile Scrum methodologies.

●Excellent knowledge of PI Planning, Sprint Planning, and Jira boards.

●Strong background in Web Services.

●Comfortable using different code editors and IDEs, including VS Code, PyCharm, Eclipse, and Jupyter Notebook.

●Highly motivated, dedicated, hardworking team player.
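
As a brief illustration of the boto3 usage noted above, here is a minimal sketch; the bucket and key names are hypothetical placeholders, not real resources:

import boto3

# Hypothetical bucket/key names used only for illustration.
BUCKET = "example-reports-bucket"
KEY = "reports/2024/01/summary.csv"

s3 = boto3.client("s3")

# Upload a locally generated report to S3.
s3.upload_file("summary.csv", BUCKET, KEY)

# List the objects under the reports/ prefix to confirm the upload.
response = s3.list_objects_v2(Bucket=BUCKET, Prefix="reports/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])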

PROFESSIONAL EXPERIENCE

Client – Capital One

Role – Platform Engineer and AWS Cloud Developer

Project – DataKata October 2022 – Present

Technology and Tools: Jira board for user story tracking, Git repository for source code, Jenkins to deploy builds to the AWS cloud, Onelake Exchange (internal tool) for registering datasets, PyCharm editor, Opssite (internal tool) for tracking vulnerabilities, Postman.

AWS services – S3, CloudWatch, CloudFormation, EC2, SNS, Step Functions, Lambda, VPC Flow Logs, IAM, ECS, EKS, DynamoDB, Docker

Overview: The DataKata application is an ETL platform. At the beginning of every month we run monthly jobs for bank loans and credit cards. It is a five-day process in which these jobs pick up data from a Onelake location, transform and validate it (bank loan and credit card data) against our datasets, and generate reports in S3 buckets that we share with the business FTP teams.

We publish messages to an SNS topic for different use cases; each message triggers an event that is processed by Lambda functions. The Lambdas contain all the business logic and eventually generate reports in S3. We use Jenkins to deploy code to the cloud and create infrastructure using CloudFormation.
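
A minimal sketch of this flow is shown below; the bucket name, key layout, and report body are hypothetical stand-ins for the actual business logic:

import json
import boto3

s3 = boto3.client("s3")
REPORT_BUCKET = "example-datakata-reports"  # hypothetical bucket name

def handler(event, context):
    # Each SNS record carries a JSON message describing the use case.
    for record in event.get("Records", []):
        message = json.loads(record["Sns"]["Message"])
        use_case = message.get("use_case", "unknown")

        # Placeholder for the transformation/validation business logic.
        report_body = f"report for {use_case}\n"

        # Write the generated report to S3 for the business FTP teams.
        s3.put_object(
            Bucket=REPORT_BUCKET,
            Key=f"reports/{use_case}.txt",
            Body=report_body.encode("utf-8"),
        )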

Roles and Responsibilities: I am responsible for running a microservices job that picks up files from the Onelake location and downloads them into S3. I then run another microservices job that picks up the files from S3, checks dependencies, validates and processes them, performs the ETL, and eventually generates output files and reports in S3. These reports are ultimately used by the business teams.

For any new customer, I am also responsible for modifying code, creating datasets, and registering them in different environments using the Onelake Exchange.

I also manage the hydration process, which addresses resource vulnerabilities, and I make sure that any cloud resources we use are compliant with Capital One policies. We use the Opssite tool to track those vulnerabilities.

I am also involved in daily Scrum stand-ups, requirements-gathering sessions for new customers, PI and Sprint Planning meetings, and grooming sessions, as well as creating stories based on use cases and requirements from business teams, creating and registering datasets, coding, testing in lower environments and production, pushing code to the master branch, and deploying with Jenkins.

I am also part of the DataKata decommissioning project. The older version of DataKata takes a lot of time and human effort: there are multiple jobs such as UK Credit Card, US Credit Card, FDR, and Bank Loans, each of which takes around 3-4 hours, and if a process fails we have to re-run it from scratch, which costs even more time. So we had to decommission and modernize it. As part of the modernization project I wrote code using EMP notebooks, set up the environments, created the OneStream pipeline, and set up several AWS services for the project, such as the SNS topic, Lambdas, and IAM roles.

Client – Capital One

Role – Platform Engineer and AWS Cloud Developer

Project – PolyPaths April 2022 – September 2022

Technology and Tools: Jira board for user story tracking, Git repository for source code, Jenkins to deploy to the AWS cloud, PyCharm editor, Opssite tool for tracking vulnerabilities

AWS services – S3, CloudWatch, CloudFormation, EC2, SNS, Lambda, IAM

Overview: PolyPaths comprises multiple applications, such as Appport, Enterprise, and the Batch Calc APIs, all of which are Windows platform-based applications. Different sets of users work with these applications, such as the “TAV Team”, “MDE Team”, “FTP Team”, and “Intehsity Team”. These applications run individual jobs on a daily and/or monthly basis, such as “Daily IP Analytics”, “Daily Derivatives”, “Back Testing”, “Daily Pricing”, “MR File Creations”, “Book Income”, “IR Monthly”, and “CCAR”. Files are created by business users and kept on a centralized Windows server. We pick up those files and move them to our applications’ Windows servers, then process each of them, based on the type of job, using the “Arrow” tool. The Arrow tool places the processed files into their respective locations and generates reports, which are used by the business users.

Roles and Responsibilities: All the daily jobs run in an automated way, and I was responsible for monitoring each of them and troubleshooting any that failed. As part of my job, I was also responsible for finding solutions, fixing job failures, and rerunning the jobs using the Arrow tool. For the monthly jobs, I was responsible for running the BD process and performing rehydration in QA and PROD. I was also responsible for the rehydration process for resources and services related to S3, Lambda, EC2, and SNS, and for patch work on the Windows applications.

The set of tools we used: Remote Desktop, Arrow for running jobs, AWS Scheduler, Cloud Sanity for AWS connectivity to DEV and PROD, Jira, Opssite for tracking vulnerabilities, Jenkins, Git repository, and VS Code as an editor.

During the rehydration process, we monitor the health of each EC2 instance; if an instance is in a bad state, I was responsible for switching to another one with the same volume, using the old server's snapshot, and making sure the new server had all the required patches, security groups, attached IAM roles, and the required hardware. I was also responsible for fixing vulnerabilities in the AWS resources, setting up S3 bucket lifecycle rules, configuring cross-region replication between S3 buckets, and making sure SNS topics were encrypted with our KMS keys.
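
A minimal boto3 sketch of two of these rehydration tasks (an S3 lifecycle rule and an SNS encryption check), assuming hypothetical bucket, topic, and key names:

import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

# Hypothetical resource names used only for illustration.
BUCKET = "example-polypaths-bucket"
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:example-topic"
KMS_KEY_ID = "alias/example-key"

# Apply a lifecycle rule that expires objects under reports/ after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-reports",
                "Filter": {"Prefix": "reports/"},
                "Status": "Enabled",
                "Expiration": {"Days": 90},
            }
        ]
    },
)

# Make sure the SNS topic is encrypted with our KMS key.
attrs = sns.get_topic_attributes(TopicArn=TOPIC_ARN)["Attributes"]
if not attrs.get("KmsMasterKeyId"):
    sns.set_topic_attributes(
        TopicArn=TOPIC_ARN,
        AttributeName="KmsMasterKeyId",
        AttributeValue=KMS_KEY_ID,
    )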

Client – Blue Cross Blue Shield Association (BCBSA)

Role – Backend Cloud Developer

Project – Digital Experience (DX) Program September 2021 – March 2022

Technology and Tools: Jira board for user story tracking, Git repository for source code, Jenkins, AWS services

AWS services – S3, EC2, SNS, Lambda, IAM

Overview: The BCBSA MyBlue application integrates with multiple partners such as FEPOC, WebMD, Chip Rewards, and CVS, which offer users different programs such as health assessments, claims submission, online health goals, the Pregnancy Care Incentive Program, DMIP, and Blue Focus programs like Fitbit. Any MyBlue user can take part in these programs. The MyBlue portal integrates with these partners, which provide different programs and/or incentives and rewards to FEP users.

Roles and Responsibilities: As part of the daily jobs, we get files from the partners containing all the data for MyBlue application users' enrollments, registrations, terminations, member ID processing, benefits for e-services, BHA submissions, OHC goals, reward-based programs, PCIP incentives, and Fitbit rewards. The system team is responsible for collecting all of the above data, running jobs, generating reports, and returning them to the partners. The entire process was built and deployed on on-premises servers.

The final goal was to move the entire process to the AWS cloud. I was part of the system team doing proofs of concept to implement the same process in AWS, mimicking the entire pipeline with AWS resources. One of the biggest challenges was managing all the historical data from these partners and moving it to cloud storage. As part of the POCs, I also worked with the team to set up S3 buckets and bucket policies, create Lambdas, set up EC2 servers, and provision other resources in the AWS cloud.
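
A minimal sketch of the kind of bucket setup done during these POCs, assuming hypothetical bucket and role names:

import json
import boto3

s3 = boto3.client("s3")

# Hypothetical names used only for illustration.
BUCKET = "example-dx-partner-data"
PARTNER_ROLE_ARN = "arn:aws:iam::123456789012:role/example-partner-role"

# Create the bucket that will hold partner files.
s3.create_bucket(Bucket=BUCKET)

# Attach a bucket policy allowing the partner role to write files.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": PARTNER_ROLE_ARN},
            "Action": ["s3:PutObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))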

Client – Blue Cross Blue Shield Association (BCBSA)

Role – Python Backend Developer

Project – MyBlue Portal/Mobile Application October 2018 – August 2021

Technology and Tools: Git, Python, Jenkins, Jira, Flask framework, Tomcat

Overview: The MyBlue portal is an application where users log in and participate in several programs managed by BCBSA and various other partners. BCBSA provides insurance benefits to customers under different plans. Customers use the portal application for their healthcare benefits and plans.

The portal has customer information coming from different partners such as WebMD, MyBlue, Chip Rewards, and FEPOC. The MyBlue application can also be accessed by customers on iOS and Android mobile devices.

Roles and Responsibilities: MyBlue is an n-tier application that integrates with multiple partners. I was part of the backend development team responsible for creating web services (REST APIs) for different partners using Python and the Flask framework. The application's data layer calls our REST APIs to fetch partner data. I created multiple REST APIs to handle registration steps, user enrollment, Fitbit integration, several e-Services, and integrations with multiple third-party vendors; a minimal sketch of this pattern follows.

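A minimal Flask sketch of this pattern; the route, payload fields, and response are hypothetical illustrations rather than the actual MyBlue APIs:

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/v1/enrollment", methods=["POST"])
def enroll_user():
    # Hypothetical payload fields; the real APIs validated partner-specific data.
    payload = request.get_json(force=True)
    member_id = payload.get("member_id")
    if not member_id:
        return jsonify({"error": "member_id is required"}), 400

    # Placeholder for the partner integration / enrollment business logic.
    return jsonify({"member_id": member_id, "status": "enrolled"}), 201

if __name__ == "__main__":
    app.run(debug=True)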

Client – North America Medical Technology Group (NAMTG) April 2017 – June 2018

Role – Automation Engineer (Intern)

Project – Manage My Patient (MMP)

Manage My Patient (MMP) was healthcare management software built for mid-sized providers and healthcare facilities. MMP is an online portal for patients and healthcare providers. It is a web-based application built on an Oracle database and uses web services/APIs and RESTful services. Tools used for testing were SoapUI and Jira.

●Key member of the MMP product development team, responsible for developing automation scripts for the MMP application using Selenium; understood business requirements, contributed to test plans, and designed and developed test cases and test automation for functional and regression testing.

●Developed a test automation framework using Selenium WebDriver, Java, Cucumber, Gherkin, and Maven for application UI testing.

●Performed regression testing of existing features in each release.

●Performed back-end testing using SQL queries to validate the data ingestion into the Oracle database.

●Participated in Sprint Planning, Daily Scrum, Retrospective and Release Planning meetings.

●Used Postman tool to test REST APIs.

●Used ALM as the test repository.

●Used Jira as bug tracking tool.

●Used SQL Developer for database validations.

●Used Rally for user stories.

●Used Jira as the user story tracking tool.

●Used SharePoint for documentation.


QUALIFICATIONS

Master's degree from UPTU, India.

Bachelor of Science (BS) with Honors.

Work Authorization Status – Permanent Resident (Green Card)


