DevOps Engineer - AWS Cloud

Location:
Atlanta, GA, 39901
Salary:
$70/hr on C2C
Posted:
May 16, 2025

Resume:

ANIKET

Phone: +1-623-***-****

Email: ******.*****@*****.***

PROFESSIONAL SUMMARY:

Senior Cloud & DevOps Engineer with 10+ years of experience in designing, automating, and optimizing cloud-based infrastructures using AWS, Kubernetes, Terraform, and CI/CD pipelines.

Expertise in AWS services such as EC2, S3, RDS, Lambda, DynamoDB, API Gateway, VPC, and EKS, ensuring high availability, cost efficiency, and scalability.

Infrastructure as Code (IaC) specialist, proficient in Terraform, AWS CloudFormation, and Ansible, reducing infrastructure provisioning time by 40%.

Strong background in CI/CD automation, leveraging Jenkins, GitHub Actions, AWS CodePipeline, and GitOps workflows to enable continuous integration and deployment.

Experience in multi-cloud environments, including AWS, Azure, and Databricks, designing secure and scalable cloud solutions.

Expert in containerization and orchestration, utilizing Docker, Kubernetes (EKS), and Helm charts for microservices deployment and management.

Led cloud migration projects, transitioning legacy applications to AWS, resulting in a 30% cost reduction and improved system resilience.

Security-focused DevOps Engineer, implementing AWS IAM policies, GuardDuty, AWS KMS, TLS/SSL encryption, and multi-account AWS Organizations for enhanced compliance.

Automated log aggregation and analysis using AWS CloudTrail, CloudWatch Logs, Splunk, and the ELK Stack (Elasticsearch, Logstash, Kibana) for real-time monitoring and security audits.

Developed serverless applications using AWS Lambda, API Gateway, DynamoDB, and Step Functions, improving system performance and scalability.

Optimized monitoring and observability using Datadog, Splunk, Grafana, Prometheus, and Nagios, ensuring 99.99% system uptime.

Managed Hadoop & Big Data clusters, working with CDH, HDP, HDFS, YARN, Hive, Impala, Spark, and NiFi, enabling large-scale data processing.

Integrated Hortonworks NiFi for real-time data ingestion and processing, improving ETL pipeline efficiency.

Automated Hadoop cluster monitoring with Splunk Hadoop Connect and Grafana dashboards, enhancing visibility into performance metrics.

Strong scripting and automation skills in Python, Bash, and Ansible, reducing manual operational overhead by 50%.
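
Illustrative sketch (not taken from any specific project below): a boto3 housekeeping script of the kind this automation work involves; the auto-delete tag filter and 30-day retention window are assumed values.

    # cleanup_snapshots.py - illustrative housekeeping sketch (assumed retention policy).
    # Deletes self-owned EBS snapshots older than RETENTION_DAYS that carry an
    # "auto-delete=true" tag; intended to run from cron or a scheduled CI job.
    from datetime import datetime, timedelta, timezone

    import boto3

    RETENTION_DAYS = 30  # assumed retention window

    ec2 = boto3.client("ec2")
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)

    paginator = ec2.get_paginator("describe_snapshots")
    pages = paginator.paginate(
        OwnerIds=["self"],
        Filters=[{"Name": "tag:auto-delete", "Values": ["true"]}],  # assumed tag convention
    )
    for page in pages:
        for snap in page["Snapshots"]:
            if snap["StartTime"] < cutoff:
                print(f"Deleting {snap['SnapshotId']} from {snap['StartTime']:%Y-%m-%d}")
                ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])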

Implemented security hardening strategies using Kerberos authentication, Ranger/Sentry role-based access control, and TLS/SSL encryption for Hadoop and cloud environments.

Configured and maintained high-availability MySQL, PostgreSQL, and DynamoDB databases, ensuring low-latency and high-throughput transactions.

Architected and deployed AI-powered solutions on AWS, integrating with data lakes and optimizing compute/storage costs.

Designed disaster recovery and backup strategies using AWS Backup, S3 versioning, RDS snapshots, and Cross-Region Replication for business continuity.

Migrated existing AWS CDK automation to Terraform.

Implemented a multi-account AWS Organizations structure, enforcing service control policies (SCPs) and consolidated billing to enhance governance.

Developed FinOps strategies to optimize AWS cost management, leveraging AWS Cost Explorer, Auto Scaling, and Reserved Instances, reducing cloud expenses by 30%.
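
Illustrative sketch of a FinOps reporting query of the kind described above, assuming a month-to-date window and per-service grouping; the parameters are not taken from the actual engagements.

    # cost_report.py - illustrative FinOps reporting sketch (assumed month-to-date window).
    # Prints month-to-date unblended cost per AWS service via Cost Explorer.
    # Note: the End date is exclusive, so this assumes the script runs after the 1st.
    from datetime import date

    import boto3

    ce = boto3.client("ce")
    today = date.today()
    start = today.replace(day=1)

    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": today.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )

    for group in resp["ResultsByTime"][0]["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{service:45s} ${amount:,.2f}")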

Experience in handling production outages and incident response, quickly diagnosing issues, and restoring services with minimal downtime.

Strong understanding of POSIX/Extended ACLs, file system permissions, and Linux security hardening techniques for compliance.

Excellent leadership and mentoring skills, guiding junior engineers on AWS best practices, DevOps automation, and cloud security.

TECHNICAL SKILLS:

Cloud Platforms - AWS:

EC2, S3, RDS, Lambda, DynamoDB, API Gateway, VPC, EKS, ECS, CloudFront, Direct Connect, Route 53

Azure:

Azure Compute, Storage, Networking, Azure DevOps

Databricks:

Data processing, ML models, Spark optimization

DevOps & Automation - CI/CD Tools:

Git, GitHub Actions, Jenkins, AWS CodePipeline

Infrastructure as Code (IaC):

Terraform, AWS CloudFormation, Ansible

Configuration Management:

Ansible, Chef, Puppet

Containerization & Orchestration:

Docker, Kubernetes (EKS, Helm)

Scripting:

Python, Bash

Monitoring & Logging:

AWS CloudWatch, Splunk, Datadog, Grafana, Prometheus, Nagios, ELK Stack (Elasticsearch, Logstash, Kibana)

Security & Compliance:

IAM, AWS GuardDuty, AWS KMS, Security Groups, AWS Config, Kerberos, TLS/SSL, Ranger/Sentry, POSIX/Extended ACLs

Big Data & Hadoop Ecosystem:

CDP/CDH & HDP Admin, HDFS, YARN, Zookeeper, Hive, Impala, Spark, NiFi, Oozie

Databases & Storage:

MySQL, PostgreSQL, DynamoDB, AWS Glue, Athena

TRAINING & CERTIFICATIONS:

AWS Solutions Architect.

AWS Databricks Platform Architect

AWS AI Practitioner

Microsoft Certified: Azure Fundamentals

Hortonworks Certified Administrator.

Red Hat Certified Engineer

PROFESSIONAL EXPERIENCE:

Client: Gilead Sciences, Phoenix, AZ Sep ’22 – Present

Role: Sr. DevOps Engineer

Responsibilities:

Architected and implemented generative AI solutions using AWS services, integrating AI/ML models with a data lake architecture for scalable data processing.

Designed and automated CI/CD pipelines leveraging GitHub Actions, Terraform, and AWS CodePipeline, ensuring smooth and efficient deployments.

Migrated legacy applications to AWS, utilizing EC2, S3, RDS, Lambda, and ECS to enhance system performance and reduce infrastructure costs by 30%.

Developed Infrastructure as Code (IaC) solutions using Terraform and AWS CloudFormation, enabling consistent and automated provisioning of cloud resources.

Led containerization efforts by implementing Docker and Kubernetes (EKS) to enhance application scalability and portability.

Implemented DevSecOps best practices, integrating security tools like AWS GuardDuty, AWS Config, IAM policies, and HashiCorp Vault to ensure compliance.

Configured and maintained monitoring & logging solutions using Datadog, Prometheus, Grafana, AWS CloudWatch, and the ELK stack (Elasticsearch, Logstash, Kibana) for proactive issue resolution.

Automated cloud infrastructure provisioning using Ansible and Terraform, reducing manual deployment efforts by 60%.

Designed and implemented AWS networking solutions, including VPC, Route 53, API Gateway, ALB/ELB, and CloudFront for optimized traffic routing and performance.

Developed Lambda functions in Python and Bash scripts to automate operational workflows and improve infrastructure efficiency.
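
A minimal sketch of an operational Lambda of this kind; the EventBridge schedule trigger and the env=dev tag convention are assumptions, not the actual production function.

    # stop_dev_instances.py - illustrative Lambda handler (assumed EventBridge schedule
    # and an "env=dev" tagging convention). Stops running dev instances out of hours.
    import boto3

    ec2 = boto3.client("ec2")

    def lambda_handler(event, context):
        resp = ec2.describe_instances(
            Filters=[
                {"Name": "tag:env", "Values": ["dev"]},
                {"Name": "instance-state-name", "Values": ["running"]},
            ]
        )
        instance_ids = [
            inst["InstanceId"]
            for reservation in resp["Reservations"]
            for inst in reservation["Instances"]
        ]
        if instance_ids:
            ec2.stop_instances(InstanceIds=instance_ids)
        return {"stopped": instance_ids}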

Integrated AWS Step Functions to orchestrate serverless workflows and improve automation in cloud environments.

Troubleshot and fixed errors in existing CDK pipelines.

Implemented AWS Backup and disaster recovery strategies, ensuring high availability and resilience for mission-critical applications.

Worked on AWS cost optimization strategies, leveraging AWS Compute Savings Plans, S3 lifecycle policies, and AWS Trusted Advisor, reducing operational costs by 30%.
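
A minimal sketch of an S3 lifecycle policy like the ones referenced above; the bucket name, prefix, and day thresholds are illustrative assumptions.

    # s3_lifecycle.py - illustrative lifecycle policy (bucket name and day counts assumed).
    # Transitions objects to cheaper storage classes and expires them after a year.
    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="example-log-archive-bucket",  # hypothetical bucket
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-and-expire",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "logs/"},
                    "Transitions": [
                        {"Days": 30, "StorageClass": "STANDARD_IA"},
                        {"Days": 90, "StorageClass": "GLACIER"},
                    ],
                    "Expiration": {"Days": 365},
                }
            ]
        },
    )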

Managed Kubernetes clusters (EKS) with Helm charts, ensuring efficient deployment, scaling, and lifecycle management of microservices.

Deployed API gateways and managed microservices using AWS API Gateway, Lambda, and ECS Fargate to support scalable architectures.

Developed and deployed cloud-native applications using AWS Lambda, Step Functions, and DynamoDB to support event-driven architectures.

Designed IAM roles, policies, and least privilege access controls, ensuring secure authentication and authorization across AWS services.
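
A minimal sketch of a least-privilege policy created with boto3; the bucket ARN and policy name are hypothetical and shown only to illustrate the scoping approach.

    # create_readonly_policy.py - illustrative least-privilege policy (bucket and
    # policy name are hypothetical). Grants read-only access to a single S3 prefix.
    import json

    import boto3

    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    "arn:aws:s3:::example-data-bucket",
                    "arn:aws:s3:::example-data-bucket/reports/*",
                ],
            }
        ],
    }

    iam = boto3.client("iam")
    iam.create_policy(
        PolicyName="example-reports-readonly",
        PolicyDocument=json.dumps(policy_document),
    )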

Implemented centralized logging and auditing solutions using AWS CloudTrail and AWS Config to enhance compliance and security posture.

Led SRE (Site Reliability Engineering) initiatives, improving system reliability by implementing auto-healing mechanisms and automated failover strategies.

Worked on AWS FinOps strategies, optimizing resource usage and implementing budget monitoring using AWS Cost Explorer & AWS Budgets.

Orchestrated serverless computing models, leveraging AWS Fargate, Lambda, and DynamoDB Streams for scalable, event-driven applications.

Integrated DevOps practices in data pipelines, automating ETL workflows using AWS Glue, Step Functions, and Kinesis for efficient data streaming.

Configured security scanning tools, such as SonarQube, Snyk, and AWS Inspector, to identify vulnerabilities and enforce secure coding standards.

Implemented CI/CD for big data applications, optimizing Apache Spark and EMR clusters to process large-scale datasets efficiently.

Developed automated compliance checks using AWS Security Hub and AWS Config Rules, ensuring adherence to enterprise security standards.

Environment: AWS (EC2, S3, RDS, DynamoDB, Lambda, CloudFront, API Gateway, CloudTrail, GuardDuty, Step Functions), Python, Bash, YAML, JSON, GitHub Actions, Jenkins, Terraform, Ansible, CloudFormation, Docker, Kubernetes (EKS), Helm, IAM, AWS Config, AWS Shield, KMS, HashiCorp Vault, TLS/SSL, Datadog, Prometheus, Grafana, AWS CloudWatch, ELK Stack (Elasticsearch, Logstash, Kibana), Apache Spark, AWS Glue, EMR, Kinesis, Athena, VPC, Route 53, ALB/ELB, NAT Gateway.

Client: PayPal, India Oct ’20 – Aug ’22

Role: Sr. DevOps Engineer

Responsibilities:

Designed and implemented large-scale data analytics solutions on Hortonworks Hadoop, processing 10–20 TB of daily downstream transaction data.

Optimized Hadoop cluster performance by configuring YARN, HDFS, Hive, and Impala, ensuring high availability and fault tolerance.

Automated cluster monitoring and log analysis using Splunk Hadoop Connect, Grafana, and ELK stack, improving system observability.

Deployed CI/CD pipelines using Jenkins, Git, and Ansible, automating the deployment of Hadoop ecosystem services.

Managed containerized workloads using Docker and Kubernetes (EKS/OpenShift) to enhance data processing efficiency.

Implemented security best practices for Hadoop using Kerberos authentication, Ranger/Sentry authorization, and TLS/SSL encryption to safeguard data integrity.

Designed Infrastructure as Code (IaC) using Terraform and AWS CloudFormation for automated provisioning of cloud-based big data clusters.

Configured and maintained cluster resource allocation using YARN and Apache Ambari, optimizing compute performance and workload balancing.

Led Hadoop ecosystem upgrades (HDFS, MapReduce, Hive, Spark, Zookeeper), ensuring seamless transitions with zero downtime.

Implemented real-time monitoring and alerting using Nagios, Prometheus, and AWS CloudWatch, reducing downtime by 40%.

Optimized ETL workflows using Apache NiFi and Apache Spark, streamlining data ingestion pipelines for PayPal's transaction processing.

Managed AWS cloud environments for Hadoop workloads, utilizing EC2, S3, RDS, Lambda, and DynamoDB to enhance data processing scalability.

Orchestrated log analysis solutions using Elasticsearch, Logstash, and Kibana (ELK stack), enabling real-time troubleshooting of transaction data.

Developed Python and Shell scripts to automate system health checks, log rotation, and cluster maintenance tasks.
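
Illustrative sketch of a lightweight health-check script of this kind; the hostnames are hypothetical and the ports are common Hadoop defaults, not the actual cluster layout.

    # hadoop_health_check.py - illustrative health probe (hosts and ports are assumptions
    # based on common Hadoop defaults). Reports which core daemons are reachable.
    import socket

    # Hypothetical endpoints; defaults: NameNode RPC 8020, ResourceManager UI 8088, HiveServer2 10000.
    SERVICES = {
        "namenode": ("namenode.example.internal", 8020),
        "resourcemanager": ("rm.example.internal", 8088),
        "hiveserver2": ("hive.example.internal", 10000),
    }

    def is_reachable(host, port, timeout=3):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        for name, (host, port) in SERVICES.items():
            status = "OK" if is_reachable(host, port) else "DOWN"
            print(f"{name:16s} {host}:{port} {status}")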

Implemented cost optimization strategies for cloud-based big data environments, reducing infrastructure costs by 30%.

Ensured data security compliance with PCI-DSS and GDPR standards, enforcing encryption and access control measures across all data processing workflows.

Collaborated with cross-functional teams to troubleshoot and optimize big data queries, improving response time for analytics by 50%.

Environment: Hortonworks Hadoop, HDFS, YARN, Hive, Impala, Spark, Zookeeper, Apache NiFi, Jenkins, Git, Ansible, Terraform, CloudFormation, Docker, Kubernetes (EKS/OpenShift), Splunk, Grafana, Prometheus, Nagios, ELK Stack (Elasticsearch, Logstash, Kibana), AWS (EC2, S3, RDS, Lambda, DynamoDB), On-Prem Hadoop Clusters, Kerberos, Ranger/Sentry, TLS/SSL, PCI-DSS, GDPR, Python, Shell Script, Bash, SQL.

Client: LimeLight, India Nov ’19 – Oct ’20

Role: DevOps Engineer

Responsibilities:

Designed and deployed Cloudera clusters in pre-production and production environments, ensuring high availability, fault tolerance, and scalability.

Optimized cluster performance by benchmarking YARN, HDFS, Impala, and Spark, identifying bottlenecks, and fine-tuning resource allocation.

Implemented Infrastructure as Code (IaC) using Terraform and Ansible, automating the provisioning and configuration of Cloudera Hadoop clusters.

Secured data in transit and at rest using TLS/SSL encryption, Kerberos authentication, and Apache Ranger/Sentry for access control.

Integrated monitoring and observability using Cloudera Observability, Grafana, Prometheus, and Splunk, improving proactive issue resolution.

Automated CI/CD pipelines with Jenkins, Git, and Docker, streamlining deployments of Hadoop ecosystem components.

Managed log aggregation and real-time monitoring using Elasticsearch, Logstash, and Kibana (ELK stack) to improve debugging and root cause analysis.

Deployed containerized workloads on Kubernetes (EKS/OpenShift) to enhance the scalability of data pipelines.

Developed Python and Shell scripts for automated cluster maintenance, health checks, and log management.

Implemented automated disaster recovery strategies using AWS S3 for data backup, Cloudera BDR, and snapshot management.
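
A minimal sketch of the S3 backup step only, assuming a local directory of nightly export archives and a hypothetical bucket; the broader recovery design also relied on Cloudera BDR and snapshot management.

    # backup_to_s3.py - illustrative backup upload (bucket and paths are assumptions).
    # Copies nightly cluster configuration/metadata exports to S3 with a dated prefix.
    from datetime import date
    from pathlib import Path

    import boto3

    BUCKET = "example-cluster-backups"          # hypothetical bucket
    BACKUP_DIR = Path("/var/backups/cluster")   # hypothetical local export directory

    s3 = boto3.client("s3")
    prefix = f"daily/{date.today():%Y-%m-%d}"

    for path in BACKUP_DIR.glob("*.tar.gz"):
        key = f"{prefix}/{path.name}"
        print(f"Uploading {path} -> s3://{BUCKET}/{key}")
        s3.upload_file(str(path), BUCKET, key)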

Optimized ETL data ingestion pipelines using Apache NiFi and Spark, reducing processing time for large datasets by 40%.

Configured and fine-tuned security policies to ensure GDPR, HIPAA, and PCI-DSS compliance, securing sensitive customer data.

Designed and maintained cost-effective AWS infrastructure for Hadoop workloads, optimizing usage of EC2, S3, RDS, and Lambda.

Collaborated with cross-functional teams to improve cluster performance, resulting in a 25% reduction in query execution time for analytics.

Led performance tuning and optimization efforts for Cloudera cluster components, achieving 30% improved resource utilization and higher efficiency in computing and storage.

Environment: Cloudera Hadoop, HDFS, YARN, Hive, Impala, Spark, Zookeeper, Apache NiFi, Jenkins, Git, Ansible, Terraform, Docker, Kubernetes (EKS/OpenShift), Cloudera Observability, Grafana, Prometheus, Splunk, ELK Stack (Elasticsearch, Logstash, Kibana), AWS (EC2, S3, RDS, Lambda), On-Prem Cloudera Clusters, Kerberos, Ranger/Sentry, TLS/SSL, GDPR, HIPAA, PCI-DSS, Python, Shell Script, Bash, SQL.

Client: Medacist, India Jan ’18 – Nov ’19

Role: Hadoop Administrator

Responsibilities:

Deployed and configured Hortonworks NiFi clusters for real-time data ingestion, transformation, and routing, ensuring seamless data flow from multiple sources.

Integrated NiFi with Oozie to automate workflow execution for downstream Hadoop jobs, improving processing efficiency.

Automated job scheduling using Apache Oozie and Crontab, optimizing workflow execution and reducing manual intervention.

Implemented a high-availability MySQL relational database cluster (Active-Standby mode) to support backend applications and ensure data resilience.

Secured data in transit and at rest using TLS/SSL encryption, Kerberos authentication, and Apache Ranger/Sentry for role-based access control.

Optimized Hadoop cluster performance by tuning YARN, HDFS, and MapReduce, reducing job execution time by 35%.

Integrated Grafana and Nagios for real-time monitoring and alerting, enabling proactive detection and resolution of system issues.

Automated cluster maintenance tasks such as log rotation, service restarts, and storage cleanup using Shell scripting and Python.
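
Illustrative sketch of a log-cleanup script of this kind, intended for a daily cron entry; the log directory and 14-day retention are assumed values.

    # purge_old_logs.py - illustrative log cleanup (directory and retention are assumptions).
    # Removes service logs older than RETENTION_DAYS.
    import time
    from pathlib import Path

    LOG_DIR = Path("/var/log/hadoop")  # hypothetical log directory
    RETENTION_DAYS = 14                # assumed retention window

    cutoff = time.time() - RETENTION_DAYS * 86400

    for log_file in LOG_DIR.rglob("*.log*"):
        if log_file.is_file() and log_file.stat().st_mtime < cutoff:
            print(f"Removing {log_file}")
            log_file.unlink()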

Performed version upgrades and patch management for Hortonworks Data Platform (HDP), ensuring the latest security and performance improvements.

Implemented disaster recovery strategies using HDFS snapshots, MySQL replication, and AWS S3 backups, minimizing downtime risk.

Conducted security audits and compliance checks, ensuring adherence to GDPR, HIPAA, and PCI-DSS standards.

Led root cause analysis (RCA) and performance tuning efforts, reducing system outages and improving uptime to 99.9%.

Collaborated with cross-functional teams to optimize ETL workflows and improve data pipeline efficiency, enhancing data availability for analytics.

Environment: Hortonworks Data Platform (HDP), HDFS, YARN, MapReduce, Apache NiFi, Apache Oozie, Shell Scripting, Python, Crontab, Grafana, Nagios, Ambari Metrics, Kerberos, Apache Ranger/Sentry, TLS/SSL, GDPR, HIPAA, PCI-DSS, MySQL (Active-Standby Cluster), AWS S3, AWS EC2, RDS, On-Prem Hadoop Cluster.

Client: SanDisk, India Aug ’14 – Jan ’18

Role: AWS Cloud Engineer

Responsibilities:

Designed, deployed, and managed AWS cloud infrastructure using EC2, S3, RDS, Lambda, and VPC, ensuring high availability and scalability.

Automated infrastructure provisioning and configuration management using Terraform and AWS CloudFormation, reducing deployment time by 40%.

Implemented CI/CD pipelines using Jenkins, GitHub Actions, and AWS CodePipeline, accelerating application deployments with zero downtime.

Optimized cost and resource utilization using AWS Cost Explorer, Trusted Advisor, and Auto Scaling, reducing cloud expenses by 30%.

Developed and deployed serverless applications using AWS Lambda, API Gateway, and DynamoDB, improving system performance and reducing operational overhead.

Ensured security best practices by implementing IAM roles, AWS KMS for encryption, GuardDuty, and AWS Config for compliance monitoring.

Monitored system health and performance using AWS CloudWatch, Splunk, and Grafana, proactively addressing system bottlenecks and failures.

Configured and managed Kubernetes clusters using Amazon EKS and Helm charts, enabling containerized application deployments.

Deployed and maintained Docker containers using ECS and EKS, improving application scalability and resource efficiency.

Designed disaster recovery and backup strategies using AWS Backup, S3 versioning, and RDS snapshots, achieving 99.99% system availability.
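
A minimal sketch of on-demand RDS snapshot automation of this kind; the DB instance identifier is hypothetical.

    # rds_snapshot.py - illustrative manual snapshot sketch (instance identifier is hypothetical).
    # Takes an on-demand RDS snapshot and waits until it is available.
    from datetime import datetime, timezone

    import boto3

    DB_INSTANCE = "example-orders-db"  # hypothetical RDS instance identifier

    rds = boto3.client("rds")
    snapshot_id = f"{DB_INSTANCE}-manual-{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"

    rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceIdentifier=DB_INSTANCE,
    )
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=snapshot_id)
    print(f"Snapshot {snapshot_id} is available")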

Implemented log aggregation and analysis using AWS CloudTrail, CloudWatch Logs, and ELK Stack (Elasticsearch, Logstash, Kibana) for enhanced security monitoring.

Optimized database performance and management using Amazon RDS (MySQL, PostgreSQL) and DynamoDB, ensuring low-latency transactions.

Automated security patching and OS updates using AWS Systems Manager Patch Manager and Ansible, reducing security vulnerabilities.
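
Illustrative sketch of triggering a patch run through Systems Manager; the PatchGroup tag value is an assumption, while AWS-RunPatchBaseline is the standard SSM patching document.

    # patch_instances.py - illustrative patch run (tag key/value are assumptions).
    # Triggers the standard AWS-RunPatchBaseline document against tagged instances.
    import boto3

    ssm = boto3.client("ssm")

    resp = ssm.send_command(
        Targets=[{"Key": "tag:PatchGroup", "Values": ["linux-prod"]}],  # hypothetical patch group
        DocumentName="AWS-RunPatchBaseline",
        Parameters={"Operation": ["Install"]},
        Comment="Scheduled monthly patching",
    )
    print("Command ID:", resp["Command"]["CommandId"])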

Led cloud migration initiatives by refactoring and re-platforming on-prem applications to AWS, reducing infrastructure costs by 40%.

Implemented a multi-account AWS Organizations structure with consolidated billing and service control policies (SCPs) for enhanced security and governance.

Conducted regular security audits and compliance checks to ensure adherence to ISO 27001, SOC 2, and GDPR standards.

Developed and optimized S3-based data lake architecture, integrating with Athena and Glue for scalable data analytics.

Collaborated with DevOps teams to automate application deployments and improve cloud infrastructure reliability using GitOps workflows.

Provided technical mentorship and training to junior engineers on AWS best practices, automation, and security strategies.

Environment: AWS (EC2, S3, RDS, Lambda, DynamoDB, API Gateway, VPC, EKS, ECS, CloudFront), Terraform, AWS CloudFormation, Ansible, Jenkins, GitHub Actions, AWS CodePipeline, Docker, Kubernetes (EKS), IAM, AWS KMS, GuardDuty, AWS Config, Security Groups, VPC Peering, AWS Shield, GDPR, SOC 2, ISO 27001, AWS CloudWatch, Splunk, Grafana, ELK Stack (Elasticsearch, Logstash, Kibana), CloudTrail, Amazon RDS (MySQL, PostgreSQL), AWS Glue, Athena, Python, Bash, AWS Lambda, Route 53, ALB/NLB, Direct Connect, VPN, AWS Backup, S3 Versioning, RDS Snapshots, Cross-Region Replication.

EDUCATION:

Master of Engineering - Information Technology from MIT College of Engineering, Pune, India - 2013

Bachelor of Technology - Information Technology from Government College of Engineering, Amravati, India - 2010


