
Data Engineer Machine Learning

Location:
Creve Coeur, MO, 63141
Posted:
June 16, 2025

Resume:

ASHOK REDDY MADHIRE

**************@*****.***

314-***-****

Professional Summary

Results-oriented Data Engineer with 12 years of experience delivering scalable data solutions in banking, healthcare, insurance, and financial domains. Proficient in designing data pipelines, cloud-native architectures, and analytics platforms using AWS, Azure, GCP, and modern big data technologies.

Core Skills:

Cloud Platforms: AWS (Glue, S3, Lambda, Redshift), Azure (Data Factory, Synapse, Databricks), GCP

Big Data: Hadoop, Spark, Hive, Kafka, Flink

Programming: Python, Scala, SQL, PySpark

Databases: Snowflake, MongoDB, Cassandra, PostgreSQL, MySQL, Azure Cosmos DB

DevOps: CI/CD, Jenkins, Kubernetes, Terraform

Visualization: Tableau, Power BI

Professional History

Senior Data Engineer

East West Bank, Houston, TX [Remote], December 2022 to Present

Roles and responsibilities:

Design, develop, and maintain scalable and resilient ETL/ELT pipelines using AWS Glue, Apache Flink, Step Functions, and Lambda for real-time and batch processing.

Developed and optimized ETL/ELT pipelines using AWS Glue to automate data ingestion from both on-premises and SaaS systems.

Designed and maintained robust CI/CD pipelines in Jenkins to automate testing, deployment, and validation of data workflows, cutting release cycles by 60%.

Built and deployed scalable machine learning pipelines using Python libraries like pandas, NumPy, scikit-learn, and PyTorch to power predictive analytics across multi-terabyte datasets.

Created numerous pipelines in Azure Data Factory v2 to ingest data from disparate sources using activities such as Move & Transform, Copy, Filter, ForEach, and Databricks.

Built pipelines aligned with the Medallion Architecture (Bronze, Silver, and Gold layers) to ensure clean, optimized, and production-ready datasets for ML workflows.

Seamlessly integrated Jenkins with tools like Git, Docker, Kubernetes, and BigQuery to enable smooth, scalable end-to-end deployments of data pipelines.

Containerized complex data engineering workflows using Docker, enabling consistent environments across development, testing, and production.

Spearheaded the design and rollout of scalable ETL pipelines using Informatica PowerCenter, enabling efficient monthly processing of over 10TB of data with an error rate under 2%.

Led the design and rollout of a scalable data lake on S3, enabling the team to efficiently manage petabytes of structured and semi-structured data for analytics and reporting.

Designed and built scalable, cloud-native data platforms using Snowflake, powering analytics and machine learning use cases across multiple business domains.

Implemented data lakes on S3 with a strong understanding of storage tiers, encryption, and cost-optimized data management, utilizing AWS Lake Formation, Glue Catalog, Crawlers, and Apache Hudi for governance and efficient querying.

Designed, set up, maintained, and administered Azure SQL Database, Azure Analysis Services, Azure SQL Data Warehouse, and Azure Data Factory.

Developed and deployed machine learning models using tools like Scikit-learn, XGBoost, and LightGBM to solve real-world data problems efficiently.

Created reusable Jenkins pipeline templates to standardize ETL job deployment, making it easier and faster for teams to onboard new data projects.

Cut storage costs by 35% by strategically using S3 storage classes like Glacier and Intelligent-Tiering for long-term and infrequently accessed data.

Implemented CI/CD pipelines using Jenkins and Terraform to seamlessly deploy ETL workflows and ML models.

Conducted capacity planning for the Redshift cluster, optimizing storage and query performance.

Partnered with data analysts to create business intelligence dashboards using Tableau.

Led a cross-functional initiative to modernize legacy ETL systems by migrating to Informatica IICS, resulting in a 40% reduction in processing time and enhanced system flexibility.

Implemented a hybrid cloud strategy to balance cost and performance, reducing infrastructure expenses by 25%.

Hands-on data analytics experience with Databricks on AWS, PySpark, and Python to process large datasets efficiently.

Built Docker images for ETL pipelines and analytics jobs, significantly reducing onboarding time and deployment inconsistencies.

Mentor junior engineers through code reviews, technical guidance, and training sessions on modern tools like Databricks, Snowflake, AWS Glue, etc.

Automated near real-time data ingestion and transformation workflows using Snowpipe, Streams, and Tasks, improving data freshness and reducing manual overhead.

Designed and managed end-to-end ML pipelines using Apache Airflow, Kubeflow, and MLflow, ensuring scalable and reproducible workflows.

Set up automated Jenkins triggers to deploy pipelines directly from code commits, streamlining the release process and reducing the need for manual operations.

Built reliable, event-driven data ingestion pipelines using S3 triggers, Lambda, and Step Functions—handling both real-time and batch workloads seamlessly.

Designed and managed robust ETL and data ingestion workflows with Apache Airflow, Spark, and Python, enabling real-time scoring with embedded AI/ML models.

Collaborate cross-functionally with product owners, data scientists, business stakeholders, and DevOps teams to deliver data-driven solutions.

Developed and maintained a library of reusable mappings, sessions, and workflows, significantly boosting development efficiency and reducing redundancy across the team by 30%.

Streamlined deployment processes by integrating Dockerized applications with CI/CD pipelines (Jenkins/GitHub Actions), improving reliability and speed of releases.

Manage sprint planning, task assignments, and progress tracking in Agile/Scrum environments.

Migrate on-premises databases (like Oracle, SQL Server) to Amazon Aurora using AWS Database Migration Service (DMS) with minimal downtime.

Tuned complex SQL queries and materialized views to accelerate dashboard performance, cutting processing time by over 50% for key business stakeholders.

Incorporated automated data validation into Jenkins pipelines using pytest, Great Expectations, and custom scripts, significantly enhancing data accuracy and trust.

Strengthened data security by implementing granular IAM and S3 bucket policies, ensuring compliance with internal data governance and access control standards.

Develop efficient data models in PostgreSQL, RDS, and Athena. Optimize queries and partition strategies for large-scale analytical workloads.

Implemented granular access control and dynamic data masking in Snowflake to ensure enterprise compliance with GDPR, HIPAA, and internal security policies.

Built and optimized Spark-based ETL jobs in Java to process terabytes of structured and unstructured data efficiently.

Managed Jenkins infrastructure at scale, including master-agent configuration, plugin updates, and performance tuning to support a high-performing data engineering team.

Designed and implemented Infrastructure as Code (IaC) using Terraform to automate the provisioning of cloud resources across Azure, AWS, and GCP.

Developed and maintained custom Dockerfiles for Python-based data services, optimizing for performance, image size, and reproducibility.

Orchestrated seamless integration between Informatica, AWS Redshift, and Snowflake to support a real-time analytics platform for enterprise-wide data access.

Partnered closely with data scientists to streamline model training workflows, refactoring Python code to reduce training times by 40%.

Led the migration of large-scale legacy data systems (Oracle, Teradata) to Snowflake, achieving a 60% cost reduction and significantly faster query performance.

Build ETL/ELT pipelines using AWS Glue, Lambda, and Step Functions to automate data ingestion and transformation for P&C insurance workflows.

Develop and optimize SQL scripts, stored procedures, and views for efficient data retrieval and complex analytical queries on Aurora.

Integrated S3 with services like Glue, Redshift Spectrum, and EMR to enable flexible, serverless data processing and large-scale querying across data stored in Parquet format.

Built real-time streaming pipelines using Structured Streaming in Databricks, consuming from Kafka and Event Hubs for fraud detection and monitoring.

Developed automated feature engineering systems in Python, significantly speeding up model development cycles and minimizing manual intervention.

Applied deep learning frameworks such as TensorFlow and PyTorch to build high-performance models for complex data applications.

Standardized Docker usage across the team, creating best practices for container structure, image versioning, and local development.

Optimize queries and data structures in databases (Snowflake, BigQuery, Redshift) to improve the efficiency of forecasting pipelines.

Design, develop, and optimize data platforms and pipelines in a distributed cloud environment, with a strong focus on Data Lakehouse architecture using open table formats like Apache Hudi or Iceberg.

Worked with Apache Kafka, including Schema Registry, for event-driven data pipelines and near real-time analytics.

Set up monitoring and audit trails using CloudTrail and S3 access logs, giving the team visibility into data usage patterns and enhancing our governance framework.

Crafted advanced transformation logic and dynamic, parameterized mappings to accommodate evolving business rules across diverse data domains.

Created various pipelines to load data from Azure Data Lake into a staging SQL DB and then into Azure SQL DB.

Knowledgeable in retrieving, analyzing, and presenting data using Azure Data Explorer (Kusto).

Built and maintained modular, testable data transformation pipelines using dbt, promoting code reuse, version control, and reliable deployments.

Develop and manage RESTful APIs using API Gateway and Lambda, ensuring secure and scalable data access.

Designed and optimized pretraining, fine-tuning, and augmentation workflows for large language models (LLMs) in AI-driven applications.

Optimized performance of long-running jobs through partitioning, pushdown optimization, and session-level tuning, reducing execution time and improving throughput.

Created reusable Terraform modules for consistent deployment of Databricks workspaces, VNETs, AKS clusters, and storage accounts.

Orchestrated both scheduled and event-driven data workflows in Jenkins, supporting batch and near-real-time data processing use cases.

Created high-performance data processing modules with NumPy and Dask, supporting distributed AI model training on large-scale datasets.

Developed data quality frameworks and implemented unit testing in Databricks notebooks for reliable data workflows.

Deploy and manage containerized applications on EKS using Kubernetes, Helm, and best practices in microservices architecture.

Build and manage data visualizations and dashboards using Amazon QuickSight, enabling self-service analytics and business intelligence across the organization.

Senior Data Engineer

NJM Insurance Group, Texas City, TX, Oct 2019 to Nov 2022

Roles and responsibilities:

Designed scalable ETL pipelines using Scala and Azure Databricks for claims data ingestion and transformation.

Develop microservices and serverless applications using Python, Java, Lambda, Step Functions, and API Gateway to support data ingestion, transformation, and analytics.

Enabled real-time analytics by implementing Scala and Azure Event Hubs for customer queries.

Secure data assets through robust IAM policies, encryption techniques, secrets management (Vault, AWS Secrets Manager), and auditing via CloudTrail and CloudWatch.

Established a comprehensive data quality framework using Informatica IDQ, enhancing data trustworthiness and consistency across key business systems.

Implemented comprehensive logging, alerting, and rollback capabilities in Jenkins to increase pipeline reliability and improve monitoring and troubleshooting.

Tuned BigQuery datasets at terabyte scale by applying best practices like partitioning, clustering, and materialized views to keep performance high and costs low.

Built and maintained scalable ETL pipelines using AWS Glue, enabling efficient data movement across systems in both batch and near real-time.

Drove a successful migration from on-prem storage to AWS S3, increasing data availability and simplifying pipeline orchestration.

Standardized Docker usage across the team, creating best practices for container structure, image versioning, and local development.

Used Snowflake’s Time Travel and Zero-Copy Cloning features to support efficient debugging, backup, and dev/test environment provisioning without storage overhead.

Integrated ML deployment pipelines into CI/CD workflows using Docker, FastAPI, and Python, ensuring smooth and repeatable transitions from development to production.

Standardized and optimized feature engineering workflows by integrating Feast and Tecton as part of a centralized feature store strategy.

Develop efficient data models in PostgreSQL, RDS, and Athena. Optimize queries and partition strategies for large-scale analytical workloads.

Managed Terraform state securely using S3 with DynamoDB locking, ensuring collaboration across teams.

Create QuickSight datasets, analyses, and dashboards, integrating with Redshift, Athena, and Snowflake for real-time or scheduled insights.

Wrote complex PySpark scripts within Glue to transform and clean large volumes of structured and semi-structured data.

Introduced S3 lifecycle rules to automate archival and cleanup of stale data, helping enforce data retention policies with minimal manual effort.

Leveraged frameworks like TensorFlow and PyTorch to support deep learning initiatives, optimizing GPU utilization through efficient Python scripting.

Build ETL/ELT pipelines using AWS Glue, Lambda, and Step Functions to automate data ingestion and transformation for P&C insurance workflows.

Partnered with product and analytics teams to develop event-driven data models in BigQuery, enabling near real-time insights for decision-makers.

Led the transition from legacy script-based jobs to modular Jenkinsfiles written in Groovy, improving maintainability and scalability and reducing build failures.

Deployed a containerized microservices architecture using Docker and Kubernetes for claims automation.

Integrated predictive analytics models into workflows using Azure ML, reducing claim processing time by 20%.

Automated workflow scheduling and dependency tracking with Apache Airflow.

Played an extensive role in migrating on-premises mid-tier applications to the cloud via lift-and-shift to AWS infrastructure.

Deployed Docker containers in cloud-native environments (Kubernetes, GKE, AWS ECS) as part of scalable data infrastructure solutions.

Optimized warehouse sizing, auto-scaling, and workload management to deliver consistent performance while maintaining cost efficiency.

Integrated Snowflake with tools across the modern data stack—Airflow, Kafka, S3, Power BI, and Tableau—to enable seamless, end-to-end analytics pipelines.

Managed Glue Data Catalogs to ensure consistent metadata, making data more discoverable and easier to work with for downstream teams.

Implement and automate CI/CD pipelines using GitHub Actions and Terraform Enterprise for infrastructure provisioning and code deployment.

Led model deployment efforts across platforms like AWS SageMaker, Google Vertex AI, and Azure ML Studio, enabling robust production ML systems.

Automated routine reporting tasks by integrating BigQuery with Data Studio and Looker, freeing up hours of manual work and enabling real-time dashboards.

Engineered custom connectors and leveraged Informatica APIs to facilitate smooth integration with both internal systems and third-party platforms.

Built robust monitoring systems to detect data drift and model decay, using Python libraries such as Evidently and Prometheus for alerting and observability.

Enforce fine-grained access controls using IAM, Vault, and AWS Secrets Manager to manage sensitive credentials and secrets.

Established best practices around object versioning, encryption (SSE-S3 and SSE-KMS), and cross-region replication to improve data resilience and disaster recovery readiness.

Tuned Glue jobs for better performance and lower costs — optimizing partitioning strategies, job bookmarks, and execution parameters.

Extensive experience with Databricks, including writing Scala and Spark SQL notebooks for complex data aggregations, transformations, and schema operations. Expertise in Databricks Delta and DataFrame concepts.

Used Terraform Enterprise for managing cloud infrastructure. Develop modules, work with providers, and write/debug Terraform scripts for resource provisioning (EKS, S3, IAM, RDS, etc.).

Led the migration of legacy analytics code to clean, modular Python packages—improving scalability, maintainability, and team productivity.

Diagnosed and resolved issues related to container performance, networking, and memory usage, improving pipeline stability and runtime efficiency.

Collaborated closely with DevOps and cloud engineering teams to containerize data pipelines and deploy them via Jenkins into cloud platforms like GCP and AWS.

Enhanced the data warehouse with time-series analysis capabilities for trend monitoring.

Created linked services in Azure Data Factory to land data from different sources.

Maintained version control over datasets and models using DVC, promoting reproducibility and collaboration across ML projects.

Implemented robust data quality checks and monitoring using scheduled queries and Cloud Logging, helping maintain 99.9% data accuracy across key datasets.

Set up and scheduled Glue Crawlers to automatically detect schema changes and catalog new data sources.

Built servers using AWS: imported volumes, launched EC2 and RDS instances, and created security groups, auto-scaling groups, and load balancers (ELBs) within the defined virtual private cloud (VPC).

Streamlined job monitoring and alerting through Informatica and scheduling tools such as Control-M and Autosys, leading to a 95% improvement in SLA compliance.

Integrated Terraform into CI/CD pipelines using GitHub Actions and Jenkins to automate deployments and approvals across environments.

Mentored junior engineers in Python, ML system design, and data engineering best practices, helping build a culture of quality and continuous learning.

Configured Databricks jobs and refactored ETL Databricks notebooks.

Created various pipelines to load data from Azure Data Lake into a staging SQL DB and then into Azure SQL DB.

Knowledgeable in retrieving, analyzing, and presenting data using Azure Data Explorer (Kusto).

Established and enforced scalable data governance standards using metadata tagging, access policies, and lineage tracking within Snowflake.

Created pipelines to load data from Lake to Databricks and Databricks to Azure SQL DB.

Integrated Snowflake with BI tools like Tableau and Power BI to enable interactive dashboards and reporting.

Migrated on-premises SQL Server databases to Azure SQL Database with zero downtime.

Monitor infrastructure and applications using CloudWatch, CloudTrail, and implement alerting strategies.

Created dynamic dashboards in Power BI for executive reporting on claim trends.

Reduced cloud costs by 30% through serverless computing using Azure Functions.

Design and manage cloud-based data architectures on AWS to store and process large datasets used in forecasting.

Senior Azure Engineer

Syneos Health, Inc., Framingham, MA, May 2018 to Aug 2019

Roles and responsibilities:

Built a scalable data lake architecture on Hadoop and GCP BigQuery for patient data storage and analytics.

Implementing new environments in the Azure cloud.

Led training sessions and internal workshops to share best practices for Snowflake performance tuning, architecture patterns, and cost optimization strategies.

Worked closely with analysts and data scientists to build a metadata-driven ingestion framework, leveraging Glue Data Catalog to onboard over 100 diverse data sources with minimal friction.

Created real-time patient monitoring dashboards using Tableau and BigQuery views.

Developed Azure Databricks notebooks to apply business transformations and perform data cleansing operations. Used Spark, Python, and Delta Lake to read data from Parquet files in ADLS into DataFrames and write it to destination ADLS locations. Partitioned data and handled schema drift using a metadata JSON file before writing data to the destination location in ADLS.

Led the design and optimization of scalable data pipelines in BigQuery, improving query efficiency and cutting compute costs by 30% through smarter partitioning and clustering.

Designed and launched a real-time inference platform using Python, Kafka, and Redis, delivering AI recommendations with sub-second latency.

Led internal workshops and documentation efforts to upskill team members on Docker and container-based development practices.

Automated training and evaluation processes with tools like MLflow and Weights & Biases, streamlining model experimentation and tracking.

Worked closely with AWS services like S3, Athena, Redshift, Lambda, and Step Functions to create seamless end-to-end data workflows.

Led a data migration initiative, moving data assets to the cloud with a GenAI-based automation approach to ensure scalability and performance optimization.

Work with BI tools (Power BI, Tableau) to create dashboards that provide insights into forecast accuracy and trends.

Developed optimized data models (Star/Snowflake schema) and partitioning strategies for improved query performance.

Managed end-to-end CI/CD processes for Snowflake development using GitLab CI and Terraform, enabling reproducible and automated deployments.

Partnered with data quality and product teams to implement validation rules and alerting on top of Snowflake, increasing confidence in reported metrics.

Spearheaded the migration of legacy on-premises data warehouses to Google BigQuery, boosting query performance fivefold and significantly reducing infrastructure costs.

Mentored junior developers and led code reviews, fostering a collaborative environment and ensuring adherence to best practices in Informatica development.

Implement CI/CD workflows using GitHub Actions and Git-based workflows to automate deployments across environments.

Monitored and debugged Glue jobs using CloudWatch logs and metrics, proactively identifying bottlenecks or failures.

Developed data models to enable data-driven insights for business solutions, enhancing decision-making processes.

Extensive experience with Databricks, including writing Java notebooks for complex data aggregations, transformations, and schema operations. Expertise in Databricks Delta and DataFrame concepts.

Extensive experience with Java and Apache Spark, leveraging Spark SQL and Java-based transformations for large-scale data processing.

Collaborated with DevOps and platform teams to ensure Docker images met security and compliance standards across all environments.

Set up monitoring systems using Evidently AI and Fiddler AI to detect data drift and maintain model performance in production.

Collaborated with data scientists to integrate time-series forecasting models (ARIMA, Prophet, LSTMs) into production environments.

Developed CI/CD pipelines for Snowflake deployments using Terraform, GitHub Actions, and Jenkins.

Integrated semi-structured data (JSON, Avro, Parquet) with Snowflake using VARIANT columns and Snowflake functions.

Migrated data from on-premises to AWS, leveraging AWS Glue, S3, Redshift, and Lambda to optimize data ingestion and processing.

Built reusable SQL templates and custom UDFs in BigQuery to ensure consistent analytics logic across teams and reduce duplication of effort.

Established data versioning pipelines using DVC and Python, enabling full traceability and reproducibility for all ML experiments.

Designed and implemented Glue Workflows and job triggers to handle complex data dependencies and orchestrate multi-step pipelines.

Experience with Redshift, Aurora, EMR, and Iceberg for large-scale warehousing and querying.

Automated the deployment of big data applications using Kubernetes and Helm charts.

Built custom logging and performance profiling tools in Python to monitor model behavior and production system health.

Developed advanced, multi-stage feature pipelines for AI-driven fraud detection, reducing false positive rates by 25% through targeted optimizations.

Built scalable ML pipelines with Apache Spark MLlib and Databricks, supporting large-scale data transformations and model training.

Applied data quality checks and governance practices to ensure data consistency, accuracy, and security across the board.

Developed patient trend analysis reports, enabling better decision-making for healthcare providers.

Monitored pipeline performance with Google Cloud Monitoring and set up automated alerts for anomalies.

Migrated legacy ETL processes to serverless workflows using Cloud Functions and Pub/Sub.

Reduced query execution times by 40% by optimizing BigQuery schemas and indexes.

Data Engineer

Value Labs, Hyderabad, India, Aug 2014 to Oct 2017

Roles and responsibilities:

Developed big data solutions for risk modeling using HDFS, Hive, and Scala.

Wrote efficient MapReduce jobs to process transactional data, reducing processing time by 30%.

Created real-time analytics dashboards for credit risk scoring using Tableau and Kafka streams.

Automated ETL workflows using Talend, enabling seamless data integration across departments.

Ensured data quality by implementing validation scripts in Python and SQL.

Integrated BigQuery with Airflow (via Cloud Composer) to orchestrate complex ETL workflows, reducing operational load and making pipelines more reliable.

Encrypted data in Databricks using server-side encryption.

Applied LLMs and NLP models for real-time extraction and classification of unstructured data, including customer support content and internal documents.

Partnered with data scientists, analysts, and product teams to understand data needs and deliver solutions that drive insights and decision-making.

Conducted exploratory analysis and iterative model development in Jupyter Notebooks, VS Code, and Google Colab, accelerating experimentation cycles.

Implement Azure Data Factory operations and deployments into Azure to move data from on-premises into the cloud.

Implement governance strategies using Glue, Lake Formation, and auditing tools like CloudTrail.

Deployed disaster recovery strategies using AWS RDS snapshots and Redshift backups.

Integrated batch processing jobs with Apache Oozie for scheduling and workflow automation.

Conducted data profiling and cleansing to ensure accuracy for regulatory reporting.

Implemented access controls and audit trails to comply with PCI-DSS requirements.

Provided training sessions to the analytics team on HiveQL and Scala.

Created well-structured data marts and semantic layers on top of BigQuery to support self-service analytics and improve query speeds for business users.

Worked with the Azure cloud platform (HDInsight, Databricks, Data Lake, Blob Storage, Data Factory, Synapse, SQL DB, and SQL DWH).

Big-Data Engineer

Capgemini, Hyderabad, India, Mar 2012 to June 2014

Roles and responsibilities:

Built pipelines to take data from various telemetry streams and data sources to craft a unified data model for analytics and reporting.

Created numerous pipelines in Azure Data Factory v2 to ingest data from disparate sources using activities such as Move & Transform, Copy, Filter, ForEach, and Databricks.

Extensively used Databricks notebooks for interactive analysis with Spark APIs.

Created temporary views and loaded curated data into destination tables.

Led the adoption of CI/CD practices for BigQuery code using dbt and GitHub Actions, streamlining deployment workflows and improving code quality across the team.

Kept up with the latest AWS Glue features and regularly introduced improvements to streamline development and optimize processing time.

Configured Databricks jobs and refactored ETL Databricks notebooks.

Created various pipelines to load data from Azure Data Lake into a staging SQL DB and then into Azure SQL DB.

Knowledgeable in retrieving, analyzing, and presenting data using Azure Data Explorer (Kusto).

Created pipelines to load data from Lake to Databricks and Databricks to Azure SQL DB.

Education:

Bachelor of Science in Computer Science

Lakireddy Bali Reddy College of Engineering, April 2011.


