
Big Data Azure DevOps

Location: Irving, TX
Salary: 140000
Posted: May 18, 2025


Professional Overview:

** years of IT experience with strong expertise in DevOps, cloud infrastructure, and Big Data Engineering.

8+ years of hands-on experience in Azure DevOps, DevSecOps, and managing Azure infrastructure (AKS, ADF, APIM, Purview, Entra ID, etc.) and AWS infrastructure (EKS, API Gateway, S3, IAM, Lambda, AWS Glue, etc.).

Proficient in containerization and orchestration using Docker and Kubernetes (AKS/EKS).

Designed and automated CI/CD pipelines using tools such as Azure DevOps, Jenkins, and GitHub.

Skilled in implementing Infrastructure as Code (IaC) using Terraform.

6+ years of experience in Big Data technologies including Apache Spark (Scala), Hadoop, Hive, Pig, Kafka, Sqoop, Flume, and Oozie.

Experience in real-time data processing using Kafka and Spark Streaming, with processed data stored in a Cassandra NoSQL database.

Worked across multiple domains: Retail, Telecom, Financial Services, Manufacturing, CPG, and Aerospace.

Proven ability to lead cross-functional teams, architect cloud-native solutions, and drive enterprise-scale data modernization initiatives.

Experienced in monitoring and maintaining containerized applications with advanced troubleshooting and observability tools in Kubernetes environments.

Successfully secured CI/CD pipelines by integrating static and dynamic code analysis tools (SAST/DAST) and remediating vulnerabilities across the DevOps lifecycle.

Led migration of legacy applications and data workloads to cloud-native architectures on Azure and AWS, ensuring scalability, reliability, and cost optimization.

Adept at working in Agile/Scrum environments, promoting DevOps culture and continuous improvement through automation, collaboration, and DevSecOps best practices.

Certifications:

Microsoft Certified: DevOps Engineer Expert (AZ-400)

Credential ID: BC594CC628F8F8FA Valid: 24th April 2025 – 24th April 2026

Verify Credential: AZ-400

Microsoft Certified: Azure Administrator Associate (AZ-104)

Credential ID: 14B8925CB35CC61 Valid: 3rd Sept 2024 – 3rd Sept 2025

Verify Credential: AZ-104

Education:

B. Tech (Computer Science) from C. R. Reddy College of Engineering, affiliated with Andhra University, 2006–2010, with 75.80%.

Technical Skills:

Azure Infrastructure: AKS, APIM, Azure SQL, ADF, Managed Airflow on ADF, Purview, Azure AD, Blob & File Storage, Azure Networks
AWS Infrastructure: EKS, API Gateway, VPC, EBS, S3, IAM, CloudWatch, CloudTrail, Lambda, AWS Config, Billing & Costing, AWS KMS, AWS Glue
DevOps & DevSecOps: Azure DevOps, Jenkins, SonarQube, Fortify SCA, Web Inspect, Fortify SSC, Snyk, OWASP ZAP
Infrastructure as Code: Terraform
Containerization & K8s: Docker, Kubernetes (AKS, EKS), Kubectl, Helm
Version Control: GitHub, Bitbucket, Azure Repositories
Big Data Ecosystem: Spark, Hadoop, HDFS, MapReduce, Pig, Hive, Sqoop, Oozie, YARN-MR, Flume, Apache Kafka
Databases: Oracle 9i/10g, MySQL, Teradata
Programming Languages: Scala, Java, C++, C
Application Servers: WebLogic, Tomcat, Nginx
IDEs: Eclipse, NetBeans, Visual Studio, IntelliJ IDEA
Operating Systems: Windows XP/7/8/10/11, Linux
Other Tools: SQL Developer, Toad, SVN, Git, Ant, Maven

Professional Experience:

Client: DCN, Kansas, USA

Project: IndustryX Digital Thread Components Jul 2020 – Present

Role: Lead DevOps Engineer

Roles and Responsibilities:

Architected and implemented CI/CD pipelines using Azure DevOps and Kubernetes for end-to-end deployment automation.

Deployed and managed containerized microservices to Azure Kubernetes Service (AKS) using Helm charts and Kubernetes Operators.

Conducted proactive monitoring of Kubernetes clusters and applications; resolved performance and reliability issues.

Integrated SonarQube for code quality enforcement, ensuring 95%+ unit test coverage and code standard compliance.

Performed continuous container vulnerability scanning and remediation to ensure secure deployment environments.

Managed Azure infrastructure with Infrastructure as Code (Terraform) for scalable and repeatable resource provisioning.

Collaborated with cross-functional teams to promote DevOps best practices and improve release velocity and reliability.

Enabled knowledge sharing and process standardization across teams to support agile product development cycles.

Led the setup and deployment of CI/CD pipelines in Jenkins, supporting diverse client use cases across multiple environments.

Deployed microservices to Amazon EKS (Elastic Kubernetes Service) and orchestrated container workloads.

Integrated SonarQube for static code analysis, driving compliance with quality gates and coding standards.

Embedded DevSecOps practices by integrating SAST (Fortify SCA), DAST (Web Inspect), and OWASP checks into pipelines.

Administered cloud infrastructure on AWS, including resource provisioning, security, and container registries (ACR).

Monitored and resolved infrastructure and application security vulnerabilities.

Drove collaboration with product teams to adopt modern DevOps practices and improve system reliability and performance.

Promoted inner-sourcing and documentation for long-term maintainability of CI/CD solutions.

Environment: Azure Cloud, Azure DevOps, Jenkins, DevSecOps, Kubernetes (AKS), AWS

Client: Visa INC, California, USA

Project: ADPBI (Analytics Data Platform Business Intelligence) Jun 2017 – Jun 2020

Role: DevOps & Big Data Engineer

Roles and Responsibilities:

Designed and implemented automated Hive ETL workflows using Jenkins and GitHub, enabling seamless integration, version control, and auditability for large-scale transaction data processing.

Built Jenkins pipelines with file availability checks, load validations, retries, and marker-based idempotent execution logic, automating end-to-end data processing and ensuring reliable execution.

Enabled CI/CD for Hive-based workflows by integrating GitHub with Jenkins, automating deployment and version control of HiveQL scripts, and promoting efficient, traceable deployments.

Migrated legacy Oozie workflows to Jenkins-based DevOps pipelines, modernizing the workflow orchestration and improving deployment speed and reliability.

Created modular shell scripts for orchestrating Hive actions, file validation, and archival processes on HDFS, enabling automation and error-free processing of data files.

Optimized analytical dashboards (e.g., Visa Direct Global/Regional, Co-brand Performance, Merchant Analytics) by enhancing performance and streamlining the data pipeline.

Implemented resource-aware scheduling and retries for long-running Hive jobs, improving performance and robustness of the data processing pipeline.

Maintained checkpointing and safe reload mechanisms using HDFS marker files to prevent duplicate processing and ensure consistency across jobs (a brief sketch follows this section).

Configured and managed Hadoop ecosystem components (Hive, HDFS, YARN) using Apache Ambari, integrating service health checks and automated configuration management into Jenkins pipelines.

Monitored cluster performance and set up alerting mechanisms through Ambari, enabling proactive issue resolution and smooth integration with Jenkins for real-time feedback on pipeline health.

Developed a Global Scorecard to project potential issuer growth, aligning data-driven insights with strategic business goals for revenue forecasting.

Devised database-layer optimization strategies, pushing complex table joins and transformations down to the database layer for better performance and scalability.

Collaborated with cross-functional teams to define and implement DevOps and Big Data best practices, embedding operational efficiency and security into the data pipeline.

Environment: Hadoop, Hive, Jenkins, GitHub, Oozie, Apache Ambari, Tableau, Unix.
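
Illustrative sketch (marker-based idempotent processing, referenced above): a minimal Scala example of the HDFS marker-file check described in this section, written against the Hadoop FileSystem API. The production logic lived in shell scripts driven by Jenkins; the marker directory, batch-ID convention, and the runHiveLoad placeholder below are illustrative assumptions, not the original implementation.

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object MarkerFileGuard {
  // Hypothetical marker directory; real paths were project-specific.
  private val markerDir = "/data/etl/markers"

  // A batch is considered done when its _SUCCESS marker exists on HDFS.
  def alreadyProcessed(fs: FileSystem, batchId: String): Boolean =
    fs.exists(new Path(s"$markerDir/$batchId._SUCCESS"))

  // Write an empty marker file after a successful load.
  def markProcessed(fs: FileSystem, batchId: String): Unit =
    fs.create(new Path(s"$markerDir/$batchId._SUCCESS")).close()

  def main(args: Array[String]): Unit = {
    val batchId = args.headOption.getOrElse(sys.exit(1))
    val fs = FileSystem.get(new Configuration())
    if (alreadyProcessed(fs, batchId)) {
      println(s"Batch $batchId already loaded; skipping to avoid duplicate processing.")
    } else {
      // runHiveLoad(batchId)  // placeholder for the actual Hive action
      markProcessed(fs, batchId)
      println(s"Batch $batchId loaded and marker written.")
    }
  }
}

Re-running the same batch is then a no-op, which is what makes retries in the Jenkins pipeline safe.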

Client: AT&T, Texas, USA

Project: Wi-Fi Services Jun 2015 – May 2017

Role: Big Data Engineer

Roles and Responsibilities:

Built Spark Streaming jobs to process real-time Kafka data (a brief sketch follows this section).

Stored processed data in a Cassandra NoSQL database.

Proposed and implemented migration from Cron to Oozie for scheduling.

Enhanced system performance by rewriting poorly optimized jobs.

Developed reusable Oozie workflows and shell scripts.

Evaluated and introduced new tools via proofs of concept (spikes).

Environment: Spark, Kafka, Cassandra, Oozie
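
Illustrative sketch (Kafka to Spark Streaming to Cassandra, referenced above): a minimal Scala example of the real-time flow described in this section, using the spark-streaming-kafka-0-10 integration and the DataStax Spark-Cassandra connector. The topic name, broker address, pipe-delimited record layout, and the wifi_ks.events keyspace/table are illustrative assumptions; the actual jobs handled project-specific schemas, offsets, and error handling.

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import com.datastax.spark.connector._
import com.datastax.spark.connector.streaming._

// Hypothetical event record; the real schema was project-specific.
case class WifiEvent(deviceId: String, eventTime: String, payload: String)

object WifiEventStream {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("WifiEventStream")
      .set("spark.cassandra.connection.host", "127.0.0.1") // assumed Cassandra host
    val ssc = new StreamingContext(conf, Seconds(10))       // 10-second micro-batches

    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "localhost:9092",             // assumed broker
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "wifi-events",
      "auto.offset.reset"  -> "latest"
    )

    // Consume raw messages from Kafka as a direct stream.
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Seq("wifi-events"), kafkaParams))

    // Parse pipe-delimited records and persist each micro-batch to Cassandra.
    stream
      .map(_.value.split('|'))
      .filter(_.length >= 3)
      .map(f => WifiEvent(f(0), f(1), f(2)))
      .saveToCassandra("wifi_ks", "events")

    ssc.start()
    ssc.awaitTermination()
  }
}

The direct stream keeps Kafka offset tracking with Spark, and writing each micro-batch through the connector is how processed data lands in Cassandra as it arrives.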

Client: Walmart, California, USA

Project: Walmart Web Intelligence Feb 2011 – Apr 2015

Role: Big Data Engineer

Roles and Responsibilities:

Migrated large-scale crawl data from MySQL to Hadoop HDFS for scalability.

Set up Hadoop cluster and configured Hive metastore with MySQL.

Developed Apache Pig scripts and external Hive tables for reporting.

Scheduled workflows using Oozie, replacing ad-hoc job execution.

Created Sqoop jobs to synchronize data between Hadoop and MySQL.

Configured NFS for the NameNode and set up passwordless SSH across the Hadoop cluster.

Automated cleanup of Hadoop logs and temporary files.

Environment: Hadoop, Pig, Hive, Sqoop, Oozie, Java, MySQL, Linux

Umamaheswararao Kambala

Email: *********.*******.****@*****.***

Mobile: +1-469-***-****

LinkedIn: www.linkedin.com/in/umamahesh-kambala


