Data Engineer Cloud

Location:
Aubrey, TX
Salary:
100000
Posted:
September 25, 2025

Resume:

Sr. Cloud Engineer (DATA AI/ML)

SHILPA.G

Phone No: 469-***-****

Email : ****************@*****.***

Around 9 years of experience in the IT industry spanning development, systems administration, and infrastructure management, including installation, configuration, tuning, and upgrades of Linux (Red Hat and Oracle Linux). Software Configuration Management (SCM) experience includes build/release management and change/incident management, covering monitoring, automation, deployment, documentation, support, and troubleshooting, along with Amazon Web Services and cloud implementations.

Results-driven AI/ML and Cloud Data Engineer with experience designing and delivering scalable, cloud-native data and machine learning solutions across AWS, Azure, and GCP. Strong background in application development, data engineering, and AI/ML integration using Python, Java, Scala, and SQL.

Designed and deployed AI/ML workflows using Spark MLlib, TensorFlow, Azure ML, and GCP Vertex AI, focusing on predictive analytics, classification, and LLM-based applications including Retrieval-Augmented Generation (RAG) and feature extraction.

Experienced Azure Data Engineer specializing in AI/ML integration, with additional experience designing and developing scalable big data pipelines and cloud-native analytics platforms.

Experience working with AWS services such as VPC for scalable infrastructure; knowledge of transferring petabytes of data between on-premises data centers and S3 using Snowball; wrote CloudFormation templates to provision and configure resources and services running on AWS EC2.

Implemented RabbitMQ monitoring using CloudWatch, Prometheus, and Grafana for performance tracking

Experience with AWS services such as EC2; managed Docker containers on a cluster hosted on serverless infrastructure using ECS and distributed application traffic with ELB; used CloudFront to distribute content to edge locations and CloudWatch to set alarms and notifications. Also worked with Glacier storage, IAM policies for different users, RDS, and Route 53.

Designed filesystems to install three Jenkins instances on a single server on different port numbers, deployed the Jenkins WAR file onto Apache Tomcat on different ports, and integrated them with a load balancer to perform round-robin distribution of Jenkins builds.

Experienced in branching, tagging, and maintaining versions across environments using SCM tools like Git, Subversion (SVN), and TFS on Linux and Windows platforms.

Implemented build/deploy automation servers utilizing CI technologies like Jenkins/Hudson, Subversion, Maven, Ant, Nexus, JIRA, and Selenium for both .NET and J2EE applications on mixed operating systems (Windows/Linux/Unix).

Extensively worked on scheduling, deploying, and managing container replicas onto a node cluster using Kubernetes; experienced in creating Kubernetes clusters where multiple frameworks share the same cluster resources. Proficient with Mesos for fine-grained resource allocation to pods across nodes in a cluster.

Worked on creating Docker containers and images, tagging and pushing images, and using Docker consoles to manage the application life cycle; deployed Docker Engine on virtualized platforms to containerize multiple applications.

Extensive experience with the command-line utility Vagrant; configured Vagrantfiles in Ruby, created Vagrant boxes, and set up synced folders to sync with the host machine. Installed plugins to extend Vagrant with stable APIs that withstand major version upgrades.

Wrote Chef cookbooks and recipes to automate the deployment process and integrated Chef cookbooks into Jenkins jobs for a continuous delivery framework. Developed Chef recipes alongside Terraform scripts to perform deployments onto application servers like Tomcat and Nginx.

Worked on web servers like Apache and Nginx and application servers like WebLogic, Tomcat, WebSphere, JBoss, and IIS to deploy code.

Experience maintaining and analyzing log archives using monitoring tools like Nagios, Splunk, CloudWatch, ELK Stack, Dynatrace, New Relic, Prometheus, and AppDynamics.

Worked on connectivity and firewall issues during tool installation and integration.

Experience with Agile development methodology (Scrum) and Waterfall.

TECHNICAL SKILLS:

Cloud Environments

AWS, Microsoft Azure, GCP, PCF, OpenStack

Operating Systems

RHEL/CentOS 5.x/6.x/7, Ubuntu/Debian/Fedora, Sun Solaris 7/8/9/10, Windows Server 2003/2008/2012, macOS

AWS Services

EC2, ELB, ECS, EBS, AMI, IAM, VPC, Route 53, SNS, RDS, SQS, CloudWatch, CloudTrail, CloudFormation, Snowball, Lambda, DynamoDB, Aurora, RedShift, X-ray, VM import/export, Auto scaling

Version Control Tools

Git, SVN, Bitbucket, TFS, GitHub

Build Tools

Ant, Maven, Gradle

Containerization Tools

Docker, Kubernetes, Mesos, Marathon

CI Tools

Jenkins/Hudson, Anthill Pro, deploy, Bamboo

Bug Tracker and Testing

JIRA, HP ALM, TFS

Repositories

Nexus, JFrog Artifactory, NuGet, MyGet

Scripting Languages

Shell, Bash, Perl, Python, Ruby, YAML, PowerShell

Web Servers/App Servers

Apache, Nginx, IBM HTTP Server, JBoss, WebLogic 11g, Tomcat

Databases

MySQL, Oracle DB, MongoDB, Cassandra, Kafka, PostgreSQL, SQL Server, NoSQL, MariaDB, Hadoop, Big data

Web Technologies/Programming Languages

Servlets, JDBC, JSP, XML, HTML, CSS, C, C++

Web/Application servers

WebLogic, WebSphere, Apache, Tomcat, IIS, JBoss

Networking protocols

TCP/IP, SMTP, SOAP, REST, HTTP/HTTPS, DNS

Monitoring and Profiling tools

Splunk, Nagios, Zabbix

Configuration Management tools

Chef (Data Bags), Puppet, Ansible, SaltStack

Web Technologies

XML, HTML5, XHTML, CSS3, jQuery, JavaScript, AngularJS, NodeJS, Bootstrap

PROFESSIONAL EXPERIENCE

Client: M&T Bank, Buffalo, NY    June 2025 - Present

Role: Software Engineer / Data

Responsibilities:

•Designed and implemented Azure DevOps CI/CD pipelines using YAML for building, testing, and deploying .NET Core and Angular applications

•Automated infrastructure provisioning and application deployment using Terraform scripts within CI/CD workflows.

•Designed and implemented CI/CD pipelines in Azure DevOps for .NET Core (C#) backend APIs and Angular front-end applications, enabling automated build, test, and deployment across development, staging, and production environments.

•Designed, implemented, and maintained Identity and Access Management (IAM) solutions using Azure AD, Entra ID, and OAuth 2.0/OpenID Connect for secure authentication and authorization.

•Orchestrated and deployed data processing applications, ETL (Extract, Transform, Load) jobs, or data pipelines in Kubernetes using containerization technologies like Docker.

•Integrated machine learning models into data pipelines using Azure Databricks, Spark MLlib, and TensorFlow, enabling large-scale predictive analytics for customer behavior and operational forecasting

•Created and deployed RESTful endpoints for serving AI models in production using Azure Kubernetes Service (AKS) and Docker containers, enabling scalable and secure inference (see the model-serving sketch after this list).

•Integrated Angular UI components with RESTful APIs for IAM dashboards, enabling administrators to monitor authentication logs, access requests, and security alerts.

•Developed and maintained Azure Infrastructure as Code using Terraform and ARM templates to provision and manage Azure App Services, Azure SQL Database, Storage Accounts, and Networking resources.

•Developed Spark applications using PySpark and Spark SQL for data extraction, transformation, and aggregation across multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns (see the PySpark sketch after this list).

•Integrated Terraform with Azure DevOps pipelines to enable fully automated provisioning and configuration of cloud resources.

•Leveraged Azure Functions and .NET microservices for real-time identity event processing, access request approvals, and audit logging.

•Performed data visualization and designed dashboards with Tableau, and generated complex reports including charts, summaries, and graphs to present findings to the team and stakeholders.

•Provisioned, configured, and managed Azure infrastructure using HashiCorp Terraform, ensuring infrastructure-as-code best practices for scalability and maintainability.

•Coordinated with cross-functional teams in Agile Scrum environments to deliver iterative improvements to cloud-hosted web applications, ensuring alignment with business goals.

•Managed Azure DevOps Repos with branching strategies (GitFlow), pull request reviews, and merge policies to ensure secure and stable code releases.
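
For context on the model-serving work above, a minimal sketch of such a REST inference endpoint (Flask is shown only for brevity; the model artifact, route, and field names are illustrative, and the real service was packaged in Docker and run on AKS):

    # Minimal model-serving endpoint sketch (hypothetical model file and request fields).
    import pickle

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Load a serialized model once at startup; "model.pkl" is a placeholder artifact.
    with open("model.pkl", "rb") as fh:
        model = pickle.load(fh)

    @app.route("/predict", methods=["POST"])
    def predict():
        payload = request.get_json(force=True)        # e.g. {"features": [[1.0, 2.0, 3.0]]}
        preds = model.predict(payload["features"])    # scikit-learn style predict()
        return jsonify({"predictions": [float(p) for p in preds]})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)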

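A minimal PySpark sketch of the extraction and aggregation pattern described above (paths, column names, and the usage dataset are hypothetical):

    # Read multiple file formats, join, and aggregate with Spark SQL functions.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("usage-aggregation").getOrCreate()

    events = spark.read.json("/mnt/datalake/raw/usage_events/")
    customers = spark.read.parquet("/mnt/datalake/curated/customers/")

    daily_usage = (
        events.join(customers, on="customer_id", how="inner")
              .withColumn("event_date", F.to_date("event_ts"))
              .groupBy("customer_id", "event_date")
              .agg(F.count("*").alias("event_count"),
                   F.sum("duration_sec").alias("total_duration_sec"))
    )

    daily_usage.write.mode("overwrite").parquet("/mnt/datalake/marts/daily_usage/")
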
Environment: Microsoft Azure (including ASP, VNets, Web, Resource Groups, Key Vault, Azure SQL, CouchDB, RabbitMQ), Docker, Kubernetes, Jenkins, Terraform, .NET, Jira, DynamoDB, PowerShell, DevOps, NoSQL, YARN, MapReduce, Hive, Sqoop, Flume, Oozie, Qlik Replicate, Azure Data Factory, HBase, Kafka, Impala, Spark SQL, Spark Streaming, Eclipse, Scala, JSON, Oracle, Teradata, CI/CD, PL/SQL, UNIX shell scripting, Cloudera

Client: Truist Bank, Atlanta, GA    April 2024 - May 2025

Role: Sr. Cloud Engineer (Data AI/ML)

Responsibilities:

•Implemented scalable and fault-tolerant machine learning infrastructure on Kubernetes, leveraging Docker containers to streamline deployments and orchestration processes.

•Deployed instances on AWS EC2 and used EBS volumes for persistent storage; configured and supported storage-level settings.

•Worked with Docker components such as Docker Engine, Hub, Machine, Compose, and Registry to create container replicas under high traffic or load, recover from crashed or shut-down containers, and store images in Docker Hub.

•Automated Snowflake infrastructure provisioning using Terraform/CloudFormation for scalable and reproducible environments.

•Implemented data ingestion pipelines using Snowpipe for seamless and automated loading of data into Snowflake, enhancing overall data processing efficiency.

•Collaborated with development teams to integrate CI/CD practices, improving software release cycles.

•Deployed and managed Snowflake on AWS, optimizing data storage, compute resources, and performance tuning.

•Developed and maintained CloudFormation JSON templates and automated cloud deployments using Chef.

•Created data models for AWS Redshift and Hive from dimensional data models.

•Built data pipelines to feed structured and unstructured data into AI workloads, incorporating text analytics and NLP techniques to parse customer support logs and generate insights.

•Integrated machine learning models into data pipelines using AWS SageMaker, Spark MLlib, and TensorFlow, enabling large-scale predictive analytics for customer behavior and operational forecasting

•Automated CI/CD workflows using GoLang utilities for infrastructure-as-code deployments with Terraform and CloudFormation.

•Built and maintained access control mechanisms using IAM policies and role-based access to restrict PHI access.

•Extracted, transformed, and loaded (ETL) data from multiple federated data sources (JSON, relational databases, etc.) using DataFrames in Spark.

•Developed Spark API queries against tables in the enterprise data warehouse (Synapse Analytics) using table partitions.

•Worked on scheduled refresh in Power BI Service based on the timings provided for all types of source datasets.

•Integrated Python with big data technologies such as Apache Spark or Hadoop for scalable and distributed data processing.

•Transformed and copied data from JSON files stored in Data Lake Storage into a Synapse Analytics table using Databricks.

•Integrated security scanning into CI/CD pipelines (SAST/DAST, container image scanning) to enforce DevSecOps practices.

•Designed and deployed scalable, highly available, and fault-tolerant infrastructure on AWS.

•Deployed dbt models and transformations to production environments using version control systems and continuous integration/continuous deployment (CI/CD) pipelines, enabling seamless and reliable deployment of data artifacts.

•Worked on SQL Server Integration Services (SSIS) to integrate and analyze data from multiple heterogeneous information sources.

•Implemented Infrastructure as Code (IaC) using Terraform and AWS CloudFormation for automated deployments.

•Automated infrastructure provisioning using Terraform for OpenShift cluster setup and management.

•Migrated Corillian banking workloads from on-premises to AWS using Terraform and Ansible, improving availability and reducing infrastructure costs.

•Developed and maintained Python automation scripts to provision, configure, and manage AWS resources including EC2, S3, RDS, IAM, and CloudWatch, reducing manual intervention by 70% (see the boto3 sketch after this list).

•Worked on Amazon AWS concepts like EMR and EC2 web services for fast and efficient processing of Big Data.

•Designed, deployed, and managed OpenShift Container Platform (OCP) clusters across hybrid and cloud environments.

•Built Boto3-based Python utilities for AWS service integration, enabling seamless automation of infrastructure provisioning, data backups, and resource cleanup.

•Developed and maintained CI/CD pipelines using GitLab and AWS CodePipeline, reducing deployment times by 40%.

•Strong experience deploying and managing RabbitMQ on AWS using EC2, ECS, EKS, and AWS Managed Services.

•Proficient in high availability (HA), clustering, and federated RabbitMQ configurations.

•Automated infrastructure provisioning and management using Terraform and GitLab and managed AWS services including EC2, S3, RDS, Lambda, and VPC.

•Built highly available and scalable VPC/VNET architectures with subnets, peering, and private endpoints for AWS workloads.

•Enforced least privilege access policies using Terraform Sentinel, IAM policies, and network security groups (NSGs).

•Deployed RabbitMQ on AWS using EC2 with Auto Scaling Groups for fault tolerance.

•Implemented monitoring solutions with ELK Stack and AWS CloudWatch for proactive issue detection.

•Enhanced cloud security by setting up IAM policies, security groups, and encryption mechanisms.

•Designed serverless automation workflows using AWS Lambda (Python), S3 event triggers, and CloudWatch Events to handle log processing, file transformations, and scheduled tasks (see the Lambda sketch after this list).
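
A small boto3 sketch of the kind of AWS automation utility referenced above (region, tag filter, and bucket names are illustrative; error handling is trimmed):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    s3 = boto3.client("s3")

    def stop_idle_instances(tag_key="Environment", tag_value="dev"):
        """Stop running EC2 instances that carry a given tag."""
        reservations = ec2.describe_instances(
            Filters=[
                {"Name": f"tag:{tag_key}", "Values": [tag_value]},
                {"Name": "instance-state-name", "Values": ["running"]},
            ]
        )["Reservations"]
        ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
        if ids:
            ec2.stop_instances(InstanceIds=ids)
        return ids

    def backup_object(bucket, key, backup_bucket):
        """Copy an S3 object into a backup bucket before cleanup."""
        s3.copy_object(Bucket=backup_bucket, Key=key,
                       CopySource={"Bucket": bucket, "Key": key})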

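A minimal AWS Lambda handler sketch for the S3-triggered log processing described above (bucket layout and summary format are hypothetical):

    import gzip
    import json

    import boto3

    s3 = boto3.client("s3")

    def lambda_handler(event, context):
        """For each newly created log object, count its lines and write a small summary."""
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]

            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            text = gzip.decompress(body).decode() if key.endswith(".gz") else body.decode()

            summary = {"source_key": key, "line_count": len(text.splitlines())}
            s3.put_object(Bucket=bucket,
                          Key=f"processed/{key}.summary.json",
                          Body=json.dumps(summary).encode())
        return {"status": "ok"}
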
Environment: Red Hat Linux, AWS, S3, EBS, Ant, Gradle, Kubernetes, Elastic Load Balancer (ELB), Docker, Octopus, VPC, IAM, Perl, Shell, Impala, Spark SQL, Spark Streaming, Eclipse, Jira, Scala, JSON, Oracle, Teradata, CI/CD, PL/SQL, UNIX shell scripting, Cloudera, CloudWatch, Glacier, Terraform, Jenkins/Hudson, Hadoop, Bash scripts, Git, GitLab, Splunk, Rally, Chef, Ansible.

Client: Dell RR2E, Round Rock, TX    May 2023 - Mar 2024

Role: Sr. Cloud Engineer / Data

Responsibilities:

•Created, maintained, and customized complex JIRA project configurations, including workflows, custom fields, and permissions.

•Installed and configured the Splunk monitoring server and installed Splunk forwarders on all the nodes in the environment.

•Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT to deploy web applications.

•Deployed instances in AWS EC2 and used EBS stores for persistent storage; configured & supported storage level.

•Used CloudFront to deliver content from AWS edge locations to users, further reducing load on front-end servers.

•Developed and deployed scalable .NET Core APIs and microservices to Azure App Services and AWS ECS, ensuring high availability and seamless CI/CD integration.

•Developed Chef cookbooks with Test Kitchen and ChefSpec; refactored Chef and OpsWorks in the AWS cloud environment.

•Implemented and maintained branching and build/release strategies using Subversion/Git; managed web application configuration and deployed to AWS cloud servers through Chef.

•Migrated VMware Virtual Machines to AWS and managed Services like EC2, S3, Cloud Formation, Route53, ELB, RDS, and VPC.

•Experience in working with healthcare datasets while ensuring compliance with regulatory requirements such as HIPAA.

• Designed and implemented data pipelines to ingest, process, and analyze healthcare claims data from multiple sources, ensuring data accuracy, integrity, and compliance with industry standards

•Leveraged PL/SQL for complex data analysis tasks, supporting the generation of insightful reports and analytics.

•Utilized Java frameworks for building scalable ETL (Extract, Transform, Load) pipelines, ensuring smooth data flow and transformation

•Extensive experience with NoSQL databases like HBase and Cassandra for scalable data storage solutions.

•Proficient in optimizing ETL processes for performance and scalability using ODI performance tuning

•Designed and developed Spring Boot microservices for efficient data processing and transformation.

•Involved in migrating the on-premises Hadoop system to GCP (Google Cloud Platform).

•Worked on DirectQuery in Power BI to compare legacy data with current data, and generated reports and saved dashboards.

•Built multiple data pipelines and end-to-end ETL and ELT processes for data ingestion and transformation in GCP, and coordinated tasks among the team (see the BigQuery sketch after this list).

•Configured and maintained routers, switches, and load balancers (Cisco, Juniper, F5, Palo Alto) to optimize traffic flow and resiliency.

•Automated network configuration management with Ansible and Terraform, reducing manual changes by 70%.

•Troubleshot and resolved complex network latency, packet loss, and DNS routing issues impacting global users.

•Collaborated with data scientists to operationalize machine learning models, ensuring a smooth transition from development to production environments.

•Implemented infrastructure-as-code practices to automate the provisioning and configuration of Hadoop clusters, reducing deployment time and minimizing configuration drift.

•Designed and implemented scalable and reliable infrastructure solutions for model deployment, including containerization using technologies like Docker and Kubernetes.

•Implemented Kubeflow Pipelines for seamless integration of data preprocessing, model training, and model serving, resulting in improved collaboration and reproducibility across the data science team.

•Built and maintained CI/CD pipelines for automated model training, testing and deployment, ensuring reproducibility and version control .

•Managed and optimized data storage systems for Big data, ensuring data availability, integrity, and accessibility while optimizing storage costs.

•Implemented security measures and access controls to protect sensitive data and models, ensuring compliance with relevant regulations (GDPR, HIPAA).

•Collaborated with cross-functional teams, including data scientists, software engineers, and IT operations, to align MLOps processes with organizational goals and objectives.

•Automated RabbitMQ infrastructure setup using Terraform and CloudFormation.

•Configured and administered Jenkins to manage the weekly build, test, and deploy chain, with Git for Dev/QA/Prod.

•Configured Ansible Control Machine and wrote Ansible Playbooks with Ansible roles. Used file module in Ansible Playbook to copy and remove files on EC2 instances.

•Used Ansible for setup and teardown of the ELK stack (Elasticsearch, Logstash, Kibana), troubleshooting build issues with ELK and working toward solutions.

•Created inventory in Ansible to automate continuous deployment and wrote playbooks using YAML scripting.

•Installed, configured, and administered application servers like WebSphere Application Server, web servers like Apache 2.2 and IIS, and Oracle DB in environments such as Dev, QA, and Prod on RHEL.
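
An illustrative GCP ingestion-and-transformation step of the kind referenced above, loading files from Cloud Storage into BigQuery and then running an ELT query (project, dataset, table, and bucket names are placeholders):

    from google.cloud import bigquery

    client = bigquery.Client(project="example-project")

    # Load: ingest newline-delimited JSON from GCS into a staging table.
    load_job = client.load_table_from_uri(
        "gs://example-bucket/raw/events/*.json",
        "example-project.staging.events",
        job_config=bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
            autodetect=True,
            write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
        ),
    )
    load_job.result()  # wait for the load to finish

    # Transform: aggregate the staged data into a reporting table inside BigQuery (ELT).
    client.query(
        """
        CREATE OR REPLACE TABLE `example-project.marts.daily_events` AS
        SELECT DATE(event_ts) AS event_date, COUNT(*) AS event_count
        FROM `example-project.staging.events`
        GROUP BY event_date
        """
    ).result()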

Environment: Red Hat Linux, AWS, S3, EBS, Ant, Gradle, Kubernetes, Elastic Load Balancer (ELB), Docker, Octopus, VPC, IAM, Perl, Shell, CloudWatch, Glacier, .NET, Terraform, Azure, Jenkins/Hudson, Hadoop, Maven, Bash, Nagios, Bash scripts, Git, Splunk, Jira, Chef, Ansible.

Client: Kohl’s, Milwaukee, WI    Oct 2021 - Apr 2023

Role: DevOps Engineer / Data

Responsibilities:

•Performed automated installations of Operating System using kickstart for Linux.

•Installation, maintenance, administration, and troubleshooting of Linux and AIX operating systems.

•Developed POCs for the migration of the DR project from on-premises to AWS.

•Wrote Bash scripts for the change of IPs from on-premises to AWS.

•Drew Gliffy diagrams for the POCs of the DR project migration to AWS.

•Managed user access for AWS instances using Jenkins and created security groups and instance profiles.

•Supported an AWS cloud environment with 600+ AWS instances and configured Elastic IPs and elastic storage.

•Hands-on experience in Kafka Clustering, High Availability, and Disaster Recovery strategies.

•Migrated legacy messaging systems (RabbitMQ/WebSphere MQ) to Kafka for better scalability.

•Provided highest-level technical support in the production environment.

•Worked with AWS instances, EBS and S3 storage, and IAM.

•Launched and configured Amazon EC2 cloud servers using AMIs (Linux/Ubuntu).

•Installed Red Hat Linux using Kickstart and applied security policies for hardening the servers based on company policies.

•Installed, configured, and managed IBM MQ Queue Managers in a high-availability environment.

•Worked with multiple storage formats (Avro, Parquet) and databases (Hive, Impala, Kudu).

•Developed Star and Snowflake schema-based dimensional models to build the data warehouse.

•Built machine learning models to showcase big data capabilities using PySpark and MLlib (see the MLlib sketch after this list).

•Worked on Amazon AWS concepts like EMR and EC2 web services for fast and efficient processing of Big Data.

•Worked on loading data into the HBase NoSQL database.

•Building, Managing and scheduling Oozie workflows for end-to-end job processing.

•Worked on the Hortonworks HDP 2.5 distribution.

•Responsible for building scalable distributed data solutions using Hadoop.

•Built PL/SQL procedures, functions, triggers, and packages to summarize data into summary tables used for generating reports with improved performance.

•Involved in importing data from MS SQL Server, MySQL and Teradata into HDFS using Sqoop.

•Played a key role in dynamic partitioning and Bucketing of the data stored in Hive Metadata.

•Wrote HiveQL queries integrating different tables and creating views to produce result sets.

•Collected the log data from Web Servers and integrated into HDFS using Flume.

•Refreshed Linux servers, including new hardware, OS upgrades, application installation, and testing.

•Installed and configured Red Hat servers, automating Java code compilation, debugging, and placing builds into the Maven repository.
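
A short PySpark MLlib sketch of the kind of model referenced above (the source table and column names are hypothetical):

    from pyspark.ml import Pipeline
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.feature import VectorAssembler
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("mllib-demo").enableHiveSupport().getOrCreate()

    df = spark.table("analytics.customer_features")   # columns: f1, f2, f3, label

    assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
    lr = LogisticRegression(featuresCol="features", labelCol="label")

    train, test = df.randomSplit([0.8, 0.2], seed=42)
    model = Pipeline(stages=[assembler, lr]).fit(train)

    model.transform(test).select("label", "prediction", "probability").show(5)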

Environment: RHEL 6/7, CentOS, Windows 2008/2012, VMware, WebLogic, Oracle DB, Apache, LVM, WebSphere, Git, Maven, Jenkins, Nexus, SonarQube, Chef, Ansible, Docker, Selenium, AWS, Kubernetes, Splunk, JIRA, Python, Bash, and YAML scripting.

Client: Sparc Technologies, India    Apr 2015 - Feb 2019

Role: Software Engineer (Cloud / Data)

Responsibilities:

•Created S3 buckets in AWS and stored files; enabled versioning and security for stored files. Managed AWS EC2 instances utilizing Auto Scaling, Elastic Load Balancing, and Glacier for QA and UAT environments as well as infrastructure servers for Git and Chef.

•Performed AWS Cloud deployments for web applications running on AWS Elastic Beanstalk with monitoring using CloudWatch and VPC to manage network configurations.

•Installed Jenkins and plugins for the Git repository, set up SCM polling for immediate builds with Maven and a Maven repository (Nexus Artifactory), and deployed apps using custom Ruby modules through CI/CD.

•Worked on writing Jenkins build pipelines with Gradle scripts and the Groovy DSL (Domain-Specific Language), and integrated Ant/Maven build scripts with Gradle for continuous builds.

•Created GCP BigQuery authorized views for row-level security and for exposing data to other teams.

•Used ETL to implement Slowly Changing Dimension transformations to maintain historical data in the data warehouse.

•Performed ETL testing activities such as running jobs, extracting data from the database using the necessary queries, transforming it, and uploading it into the data warehouse servers.

•Implemented data monitoring solutions within Spring Boot applications.

•Designed SSIS packages to extract, transform, and load (ETL) existing data into SQL Server from different environments for SSAS cubes (OLAP) and SQL Server Reporting Services (SSRS).

•Processed and loaded bounded and unbounded data from Google Pub/Sub topics to BigQuery using Cloud Dataflow with Python (see the Dataflow sketch after this list).

• Implemented end-to-end data pipelines on Databricks to collect, process, and store large volumes of data from diverse sources, resulting in improved data accessibility and reliability

• Implemented data integration solutions using Java to seamlessly connect diverse data sources and formats.

•Involved in migrating on-premises Hadoop systems to GCP.

•Designed and implemented data solutions on GCP, leveraging services such as BigQuery, Dataflow, and Cloud Storage to enable scalable and efficient data processing.

•Performed SQL joins among Hive tables to produce input for the Spark batch process (see the Spark SQL sketch after this list).

•Worked with a data science team to build statistical models with Spark MLlib and PySpark.

•Administered and Engineered Jenkins for managing weekly Build, Test and Deploy chain as a CI/CD process, SVN/GIT with Dev/Test/Prod Branching Model for weekly releases.

•Coordinated with and assisted developers in establishing and applying appropriate branching and labelling/naming conventions using Subversion (SVN) and Git source control.

•Deployed Puppet, Puppet Dashboard, and Puppet DB for configuration management to existing infrastructure.

•Containerized applications and services using Docker, CoreOS, and virtualization technologies; implemented the tooling needed to support the products; automated deployments, scaling, and operations of application containers across clusters of hosts; and provided container-centric infrastructure with Kubernetes.
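
A sketch of the streaming Dataflow pipeline described above, written with the Apache Beam Python SDK (topic, table, and schema are placeholders; runner, project, and region options are passed on the command line):

    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(streaming=True)

    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic="projects/example/topics/events")
            # Parsed dicts are expected to match the BigQuery schema fields below.
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-project:analytics.events",
                schema="event_id:STRING,event_ts:TIMESTAMP,payload:STRING",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )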

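An illustrative Spark SQL join across Hive tables feeding the batch process mentioned above (database and table names are hypothetical):

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("hive-join-batch")
             .enableHiveSupport()
             .getOrCreate())

    joined = spark.sql("""
        SELECT o.order_id, o.order_ts, c.segment, o.amount
        FROM   warehouse.orders o
        JOIN   warehouse.customers c
          ON   o.customer_id = c.customer_id
        WHERE  o.order_ts >= date_sub(current_date(), 30)
    """)

    # The joined result becomes the input for the downstream Spark batch job.
    joined.write.mode("overwrite").saveAsTable("warehouse.recent_orders_enriched")
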
Environment: AWS (EC2, VPC, Subnets, ELB, NAT, CDN, Elastic Beanstalk, CloudTrail, S3, RDS, CloudWatch, DynamoDB), Jira, Chef, Ant, Terraform, Tomcat, Git, Groovy scripts, Nexus, Bamboo.

Client: IVY Comptech, India    Jan 2014 - Mar 2015

Role: Software Engineer

Responsibilities:

•Deployed WARs and EARs using the WebLogic Admin Console as well as scripts.

•Focal point for project design and architecture for WebLogic Application server layout, which includes Internet and Intranet Web Sites.

•Configured Node Manager to start and stop servers from the admin console. Configured JDBC connection pools and data sources for the applications.

•Involved in monitoring and tuning performance metrics such as the JVM, execute threads, and JDBC connections. Configured and managed the LAMP tech stack (Linux, Node.js, Apache, MySQL, Tomcat, and PHP).

•Performed database administration and management (MySQL). Developed scripts for regular BEA WebLogic Application Server administration tasks.

•Configured the Web Server interfaces, session management, virtual hosts and transports for BEA WebLogic Application Servers.

•Created data sources and connection pools and tested connections to verify database connectivity.

•Managed Artifactory and deployed apps using custom Ruby modules through Puppet as part of the CI/CD process.

Environment: Microsoft Azure (including ASP, VNets, Web & Mobile, Blobs, Resource Groups, Key Vault, Azure SQL, CouchDB, RabbitMQ), Bitbucket, Chef, Docker, Kubernetes, Jenkins, Maven, JFrog, Terraform, .NET, Ruby, Oracle WebLogic, Rally, Nagios, DynamoDB, PowerShell.

Education: Kakatiya University, Bachelor's in Computer Science, May 2011.
