
Customer Support Cloud Engineer

Location:
San Diego, CA
Posted:
April 26, 2023


J

**** **** ** # ** • San Diego, CA ***** • adwrd7@r.postjobfree.com • 858-***-****

Education

San Diego Miramar College San Diego, CA

Bachelor's, Computer Science. GPA: 3.1. Graduation date: 2022

Relevant Coursework: Java, Python, C#, Ruby on Rails

AWS Academy Graduate San Diego, CA

AWS Cloud Foundations Graduation date: 2022

Relevant Coursework: AWS Lambda, AWS S3, AWS EC2

Booker T. Washington Miami, FL

GPA: 2.9. Graduation date: 2012

Experience

Fiverr San Diego, CA / Remote

Freelance Software Engineer / Cloud Architect Sept 2012 – Present

Developed and maintained a complex e-commerce website for a client using Ruby on Rails, AWS, and Java. Responsibilities included designing and implementing new features, improving site performance, and troubleshooting issues.

Collaborated with a team of developers on a SaaS platform that used AWS services such as EC2, S3, and RDS. I was responsible for building and integrating new features using Java, and ensuring the platform was scalable and reliable.

Worked on a project to migrate an existing Ruby on Rails application to AWS. This involved configuring AWS services such as Elastic Beanstalk, RDS, and CloudFront, and optimizing the application for deployment in a cloud environment.

Designed and built a custom content management system for a client using Ruby on Rails. This involved creating a flexible data model, implementing user authentication and authorization, and integrating with third-party APIs.

Designed and implemented hybrid cloud solutions for clients, using Azure and other cloud technologies to integrate on-premises infrastructure with cloud services. This involved configuring and optimizing virtual networks, implementing VPN and ExpressRoute connectivity, and ensuring secure communication between the hybrid components.

Developed automation scripts and tools for deploying and managing cloud resources using PowerShell, Azure CLI, and Terraform, keeping deployment and management processes consistent, repeatable, and efficient.

Provided technical guidance and support to clients, helping them identify and implement best practices for hybrid cloud infrastructure, and provided training and mentorship to junior engineers on the team.

Contributed to the development of hybrid cloud solutions for internal projects, working with cross-functional teams of developers, QA engineers, and project managers to define requirements and ensure that the hybrid components met the needs of each project.

Maintained up-to-date knowledge of Azure and other cloud technologies through conferences, training sessions, and online communities, applying that knowledge to hybrid cloud projects and to identify opportunities for innovation and optimization.
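The Terraform-based provisioning described above can be sketched as a minimal configuration for an Azure hub virtual network with a gateway subnet. Resource names, region, and address ranges are illustrative assumptions, not taken from any real deployment:

```hcl
# Hypothetical sketch: provision a virtual network for a hybrid deployment.
# Names, region, and CIDR ranges are illustrative.
resource "azurerm_resource_group" "hybrid" {
  name     = "rg-hybrid-example"
  location = "westus2"
}

resource "azurerm_virtual_network" "hub" {
  name                = "vnet-hub-example"
  location            = azurerm_resource_group.hybrid.location
  resource_group_name = azurerm_resource_group.hybrid.name
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "gateway" {
  # "GatewaySubnet" is the reserved name Azure requires for
  # VPN and ExpressRoute gateways.
  name                 = "GatewaySubnet"
  resource_group_name  = azurerm_resource_group.hybrid.name
  virtual_network_name = azurerm_virtual_network.hub.name
  address_prefixes     = ["10.0.255.0/27"]
}
```

Keeping the network definition in code like this is what makes the deployments repeatable across environments.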

As a software engineer at Pubix company, I was responsible for designing and implementing a real-time data processing system for the client. The system needed to handle a high volume of data streaming from various sources and make it available for analysis and reporting in near real-time. After researching various technologies, we chose Apache Kafka as the backbone of the system.

One project I worked on was a dashboard for the client's marketing team to track the performance of their social media campaigns. I used Kafka to ingest data from various social media APIs, including Twitter, Facebook, and Instagram, then transformed and enriched the data using Kafka Streams before storing it in a PostgreSQL database. To ensure the data was always up-to-date, I implemented a Kafka Connect connector to continuously poll the social media APIs for new data, and used Kafka's partitioning and replication features for fault tolerance and high availability.

Overall, Kafka allowed us to build a scalable and reliable system that could handle large volumes of data and provide near real-time insights to the client's marketing team.
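The partitioning mentioned above works by keying each record so that all events for one entity (say, one campaign) land on the same partition, preserving per-key ordering. A minimal sketch of that idea in Python, with a SHA-1-based hash substituted for Kafka's actual murmur2 partitioner so the example has no dependencies:

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Map a record key to a partition deterministically.

    Kafka's default partitioner uses murmur2; a stable SHA-1-based
    hash is used here purely so this sketch is self-contained.
    """
    digest = hashlib.sha1(key.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:4], "big")
    return bucket % num_partitions

# All events for the same campaign hash to the same partition,
# so consumers see that campaign's events in order.
p1 = partition_for("campaign-42", 6)
p2 = partition_for("campaign-42", 6)
```

Replication then keeps copies of each partition on multiple brokers, which is where the fault tolerance comes from.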

Proficient in developing web applications using Angular framework and related technologies such as TypeScript, RxJS, and Angular Material.

Experienced in building responsive and accessible user interfaces using Angular's template-driven and reactive forms.

Skilled in integrating Angular applications with RESTful APIs and backend services using Angular's HttpClient and other techniques.

Familiar with Angular's dependency injection system and experienced in creating and using custom services and providers.

Knowledgeable about Angular's component-based architecture and experienced in creating reusable components and directives.

Experienced in debugging and optimizing Angular applications using tools such as the Angular CLI, Webpack, and Chrome DevTools.

Familiar with Angular's testing frameworks such as Jasmine and Karma, and experienced in writing unit tests and end-to-end tests for Angular applications.

Experienced in using version control systems such as Git and tools such as GitHub and GitLab for collaborative development of Angular projects.

Google Remote

Lead Software Engineer June 2019 – Nov 2022

Led a team of software engineers responsible for developing a new feature for Google Search. This involved designing the feature architecture, coordinating with product managers and designers to define requirements, and implementing the feature using a variety of technologies such as Java, Python, and Go.

Contributed to the development of Google's machine learning platform, helping to build scalable and efficient data processing pipelines and developing tools for data analysis and model training. I also worked on integrating the platform with Google's cloud infrastructure, enabling customers to easily deploy and scale their machine learning models.

Developed and maintained large-scale distributed systems for Google Cloud, working on core infrastructure components such as load balancers, distributed databases, and security systems. I was responsible for designing and implementing these systems with a focus on scalability, reliability, and security.

Mentored junior engineers and participated in recruiting efforts for the software engineering team at Google. I conducted technical interviews, provided feedback and guidance to candidates, and helped to create a welcoming and inclusive culture for the team.

As a Cloud Solutions Architect at Google, I was responsible for designing and implementing cloud solutions using Google Cloud Platform (GCP), working closely with clients to understand their business needs and developing customized solutions that met their requirements.

One project involved migrating a client's on-premises infrastructure to GCP. The client was experiencing performance issues with their existing infrastructure and wanted to move to a more scalable and cost-efficient solution. I conducted a thorough assessment of their current infrastructure and identified areas where GCP could provide significant improvements. Based on that assessment, I designed a solution that moved their workloads to GCP's Compute Engine and leveraged GCP's load balancing to distribute traffic across multiple instances. I also recommended GCP managed services such as Cloud SQL for their database needs and Cloud Storage for their file storage needs, which reduced the client's infrastructure costs.

During the implementation phase, I worked closely with the client's IT team to ensure a smooth migration: setting up the necessary infrastructure in GCP, configuring the network and security settings, and migrating the workloads. The result, built from a combination of Compute Engine, Cloud Storage, and Cloud SQL, was a highly available, fault-tolerant infrastructure that addressed the client's performance issues and gave them a more scalable, cost-effective platform.

AIO (All in One) Shoe-bot Remote

Software Engineer April 2018 – Feb 2020

Used C# libraries and frameworks such as Selenium, HtmlAgilityPack, and Bot Framework to develop the bot's modules.

Developed an all-in-one bot in C# that automated tasks for purchasing hyped items such as shoes, clothes, and accessories, including data management, customer support, and automation of financial processes.

Conducted thorough testing to ensure that the bot performed as expected and was free of errors.

Documented the bot's architecture, user interface, and data flow, and provided training to users on how to use the bot.

Deployed the bot to the appropriate platforms and servers and ensured that it was available and reliable.

Big Fish Games Remote

DevOps Engineer June 2018 – Aug 2021

Developed a popular puzzle game for iOS and Android, working on all aspects of the game from initial concept to final release. This involved programming the game mechanics and user interface, designing the game levels and puzzles, and working with artists and sound designers to create a cohesive and engaging experience for players.

Collaborated with a team of developers on a mobile action game, responsible for programming the game's combat mechanics, character animations, and AI. I worked closely with the game designers to ensure that the gameplay was challenging, rewarding, and fun, while also optimizing performance for mobile devices.

Contributed to the development of a location-based augmented reality game, integrating the game mechanics with GPS data and real-world landmarks. I worked on the game's user interface, map system, and multiplayer functionality, ensuring that the game was both immersive and accessible for players.

Developed a mobile game engine using Unity, allowing for rapid prototyping and development of new game ideas. I created reusable code modules for common game mechanics such as physics, input, and UI, enabling faster iteration and easier collaboration among developers.

Developed multiple games using Unity and C#, including EverMerge.

Automated game development tasks, including testing and deployment, using tools such as Docker and Kubernetes.

I developed games using Unity and C# programming language. I collaborated with the design team to develop game mechanics, create gameplay features, and optimize game performance. I ensured that the game was stable, bug-free, and met all quality standards.

Ickler Electric Corporation Remote

Sr. Cloud Architect April 2018 – Nov 2022

Designed and implemented a large-scale microservices-based application architecture using AWS services such as EC2, ECS, Lambda, and API Gateway. I was responsible for designing the overall system architecture, ensuring scalability and fault tolerance, and optimizing performance.

Designed and deployed Azure-based hybrid cloud solutions for enterprise clients, integrating on-premises infrastructure with cloud services. This involved configuring and optimizing virtual networks, implementing VPN and ExpressRoute connectivity, and ensuring secure communication between the hybrid components.

Developed automation scripts and tools for deploying and managing cloud resources using PowerShell, Azure CLI, and ARM templates, keeping deployment and management processes consistent, repeatable, and efficient while adhering to best practices for security and compliance.

Implemented disaster recovery and business continuity solutions for clients, ensuring that critical applications and data were protected and recoverable in the event of a failure, and conducted regular disaster recovery testing and validation to improve the overall resiliency of the hybrid infrastructure.

Provided technical guidance and support to clients as a subject matter expert on hybrid cloud infrastructure, networking, and security, translating client needs and requirements into recommendations and solutions.

Participated in the development of hybrid cloud solutions for internal projects, collaborating with cross-functional teams of developers, QA engineers, and project managers to ensure that the hybrid components met project needs and were delivered on time and within budget.

Led a team of developers on a project to migrate an existing Ruby on Rails application to the cloud using AWS and Azure. This involved designing and implementing a hybrid cloud architecture, integrating with third-party services, and ensuring the application was scalable and reliable.

Worked with a client to develop a multi-cloud strategy, leveraging AWS, Azure, and Google Cloud Platform services. I was responsible for designing and implementing hybrid cloud architecture, ensuring high availability and disaster recovery, and optimizing costs.

Designed and implemented a serverless data processing pipeline using AWS services such as S3, Glue, Athena, and Lambda. I was responsible for designing the pipeline architecture, optimizing performance and cost, and ensuring data security and compliance.
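The Lambda stage of a serverless pipeline like the one above can be sketched as a minimal S3-triggered handler. The event shape follows the documented S3 notification format; the bucket and key names, and the idea of simply collecting records rather than invoking Glue, are illustrative simplifications:

```python
import json
import urllib.parse

def handler(event, context):
    """Minimal sketch of an S3-triggered Lambda in a serverless pipeline.

    Extracts the bucket and object key from each S3 event record; a real
    handler would then start a Glue job or write Athena-queryable output.
    """
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 URL-encodes object keys in event payloads
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        processed.append({"bucket": bucket, "key": key})
    return {"statusCode": 200, "body": json.dumps(processed)}

# Exercise the handler with a synthetic S3 event (names are hypothetical)
fake_event = {
    "Records": [
        {"s3": {"bucket": {"name": "raw-data-example"},
                "object": {"key": "logs/2023/04/26/part-0001.json"}}}
    ]
}
result = handler(fake_event, None)
```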

I encountered an issue where some containers could not connect to external resources due to incorrect configuration, which led to downtime and affected the user experience. To fix it, I used AWS Systems Manager Parameter Store to hold the containers' configuration settings and created a custom Docker image that retrieves those settings from the Parameter Store during container start-up.

To secure the configuration settings, I used AWS Identity and Access Management (IAM) roles to grant the ECS task definitions access to the Parameter Store, and encrypted the settings with AWS Key Management Service (KMS). After implementing this solution, the containers retrieved the correct configuration settings at start-up, which eliminated the connectivity issues and the associated downtime.
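A start-up routine along these lines can be sketched in Python. The in-memory FakeParameterStore below stands in for the real boto3 SSM client (which would call get_parameter with WithDecryption=True for SecureString values) so the sketch is self-contained; the parameter names are hypothetical:

```python
import os

class FakeParameterStore:
    """Stand-in for AWS SSM Parameter Store, used so this sketch runs
    without AWS credentials or boto3. Real code would call the SSM API."""

    def __init__(self, params):
        self._params = params

    def get_parameter(self, name):
        return self._params[name]

def load_config_into_env(store, prefix, keys):
    """At container start-up, pull each parameter and export it as an
    environment variable so the application reads config the usual way."""
    for key in keys:
        value = store.get_parameter(f"{prefix}/{key}")
        os.environ[key.upper()] = value

# Hypothetical parameters for an application in a "prod" environment
store = FakeParameterStore({
    "/myapp/prod/db_host": "db.internal.example",
    "/myapp/prod/db_port": "5432",
})
load_config_into_env(store, "/myapp/prod", ["db_host", "db_port"])
```

Running a loader like this as the container entrypoint is what lets the same image pick up the right settings in each environment.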

As a Cloud Engineer at Ickler Electric, I was responsible for designing, deploying, and managing cloud solutions using Microsoft Azure and AWS, translating business requirements into customized solutions. One project involved deploying a cloud-based data analytics solution for a client in the healthcare industry, whose large volume of data needed to be analyzed in real time to improve patient outcomes. I worked closely with the client to understand their requirements, designed a customized solution, and implemented automation tools such as Terraform and Ansible to streamline the deployment process and ensure consistency across the client's environments. The migration was completed successfully, and the client achieved significant cost savings and improved performance.

United States Marine Corps San Diego, CA

Data Systems Administrator Sept 2012 – Mar 2018

Installing and configuring software, hardware and networks

Monitoring system performance and troubleshooting issues

Ensuring security and efficiency of IT infrastructure

Installing and configuring software and hardware

Managing network servers and technology tools

Setting up accounts and workstations

Monitoring performance and maintaining systems according to requirements

Troubleshooting issues and outages

Ensuring security through access controls, backups and firewalls

Upgrading systems with new releases and models

Developing expertise to train staff on new technologies

Building an internal wiki with technical documentation, manuals and IT policies

Managing Windows, Linux, or Mac systems

Upgrading, installing, and configuring application software and computer hardware

Troubleshooting and providing technical support to employees

Creating and managing system permissions and user accounts

Performing regular security tests and security monitoring

Maintaining networks and network file systems

Developed SharePoint Online custom applications using SharePoint Framework, .NET Core APIs, React, Office Fabric UI, Graph API, and sp-pnp-js

Executed all development aspects of projects from solution development through unit testing, code documentation, and solution support.

Experienced in configuring Azure resources and services such as Azure Key Vault, Azure Container Registry, Azure Kubernetes Service, and Application Insights. Created DevOps pipelines for continuous integration and deployment using Azure DevOps.
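A CI/CD pipeline of the kind described can be sketched as a minimal azure-pipelines.yml. The trigger branch, agent image, and build command are illustrative assumptions, not taken from any actual project:

```yaml
# Hypothetical minimal Azure DevOps pipeline: build on every push to main,
# then publish the build output as a pipeline artifact.
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - script: dotnet build --configuration Release
    displayName: 'Build'

  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'drop'
```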

Knowledgeable in ETL tools, Agile implementation, and sprint planning.

Involved in all phases of the Software Development Life Cycle (SDLC), including requirement gathering, analysis, design, development, deployment (to Dev, Staging, QA, and Production), administration, and support and maintenance of all applications.

Installed important security and functionality patches to maintain optimal protection against intrusion and preserve system reliability.

Designed proactive preventive maintenance schedules to prevent unnecessary downtime and hardware faults.

Established network specifications and analyzed workflow, access, information and security requirements.

Adopted cost-effective, useful solutions to implement into current systems.

Installing, configuring, and maintaining software applications, databases, and servers.

Monitoring network and system performance, identifying and resolving technical issues, and ensuring system availability and reliability.

Managing user accounts and permissions, ensuring data security, and maintaining data backups and recovery procedures.

Providing technical support to users, troubleshooting issues, and resolving help desk tickets in a timely and efficient manner.

Developing and implementing IT policies, procedures, and best practices to improve system performance and security.

Conducting regular system audits, identifying vulnerabilities, and recommending security enhancements.

Managing IT procurement, including hardware, software, and licenses, and ensuring compliance with procurement regulations and policies.

Skills & Interests

Technical:

C#

Python

IaC

AWS

GCP

Java

Microsoft Office

C++

Linux

Git

Salesforce

Docker

Node.js

GitHub

Spring Boot

Cloud Services

PaaS

IaaS

AWS CloudFormation

SaaS

Oracle

Amazon Web Services

Ruby on Rails

Microsoft Azure

Apache Kafka

Amazon EC2

IAM

DynamoDB

Terraform

Languages: English, Spanish
