Name: Revanth
Email: *******************@*****.***
Phone No: +1-774-***-****
Python Developer
PROFESSIONAL SUMMARY:
Highly motivated IT professional with 7+ years of experience as a Python Developer. Experienced in designing data-intensive applications with Python, Django, Flask, and Pyramid, building client-server applications with RESTful/SOAP web services, and skilled in the Hadoop ecosystem, cloud data engineering, and data visualization.
Experienced in implementing object-oriented programming, hash tables (dictionaries), and multithreading.
Proficient in working with Django, MySQL, exception handling, and collections in Python.
Experienced in developing web applications and implementing Model-View-Controller (MVC) architectures using server-side frameworks such as Django, Flask, and Pyramid.
Experience in applying design patterns such as MVC and Singleton, and in working with frameworks like Django.
Worked extensively with Flask and front-end technologies such as HTML, CSS, XML, JavaScript, and Bootstrap.
Skilled in using the Django ORM (Object-Relational Mapper) and SQLAlchemy for efficient database interactions.
Extensive hands-on experience in developing Spark applications utilizing Spark Core, Spark MLlib, Spark Streaming, and RDD transformations.
Proficient in importing streaming data into HDFS using Flume sources, sinks, and interceptors.
Experienced in implementing both WAMP and LAMP architectures in Python environments.
Knowledgeable in working with common Airflow operators such as PythonOperator, BashOperator, and the Google Cloud Storage operators (a minimal DAG sketch follows this summary).
Strong expertise in software development using Python, with libraries such as Beautiful Soup, NumPy, SciPy, Matplotlib, python-twitter, pandas DataFrames, urllib2, and MySQLdb for database connectivity.
Processed large datasets using PySpark and Scala and created RESTful APIs with Node.js and React.js.
Developed Python Mapper and Reducer scripts and implemented them using Hadoop Streaming.
Strong knowledge in developing enterprise-level solutions on Hadoop, leveraging key components such as Apache Spark, Airflow, MapReduce, HDFS, Sqoop, Pig, and Hive.
Strong understanding of data warehousing concepts, with hands-on experience in data modeling, OLTP and OLAP database system design using ER diagrams, ETL processing, and data marts.
Skilled in designing Star and Snowflake schemas, creating fact and dimension tables, and performing conceptual, physical, and logical data modeling.
Proficiency in SQL Programming, Teradata, Oracle PL/SQL, Debugging, Performance tuning, and Shell Scripting.
Demonstrated expertise in implementing AWS cloud services including EC2, EBS, S3, VPC, RDS, SES, ELB, EMR, ECS, CloudFront, CloudFormation, CloudWatch, RedShift, Lambda, SNS, and DynamoDB.
Expertise in data import and export using Sqoop, facilitating seamless data movement between Hadoop Distributed File System (HDFS) and Relational Database Systems (RDBMS) such as Teradata.
Skilled in working with various database platforms, including both NoSQL and RDBMS tools such as MySQL, Oracle, SQL Server, PostgreSQL, DB2, DynamoDB, and MongoDB.
Proficient in scheduling, deploying, and managing container replicas on node clusters using Kubernetes.
Extensive experience with Microsoft SQL Server Integration Services (SSIS), Reporting Services (SSRS), and Analysis Services (SSAS).
Hands-on experience with Docker components including Docker Engine, Hub, Machine, Docker Compose, and Docker Registry.
Adept in developing microservices, REST APIs, data pipelines, and automation solutions using modern frameworks and cloud platforms.
Strong in backend development, DevOps practices, containerization, and integration with machine learning systems.
Proficient in creating various types of visualizations using Power BI and Tableau.
Extensive expertise in backend and UI development using Java, including multithreading, JEE (Java Enterprise Edition), and popular Java frameworks like Spring and Hibernate.
Well-versed in all stages of the software development life cycle (SDLC), including Agile and Waterfall methodologies.
Strong experience in creating CI/CD pipelines using tools like Ant, Maven, Git, Bitbucket, Hudson/Jenkins.
Hands-on experience with version control and project-tracking tools such as GitHub, SVN, GitLab, and JIRA.
Expertise in testing Python applications using automation tools such as Selenium and PyTest.
Excellent communication, interpersonal, and analytical skills; a highly motivated team player who also works effectively independently.
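A minimal Airflow DAG sketch corresponding to the operator usage noted above; the DAG id, schedule, and task bodies are illustrative assumptions rather than any specific production workflow.
```python
# Illustrative Airflow 2.x DAG using PythonOperator and BashOperator.
# The DAG id, schedule, and task logic are assumptions for demonstration only.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def extract_batch():
    # Placeholder: a real task would pull a batch from the source system.
    print("extracting batch...")


with DAG(
    dag_id="example_ingest",              # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_batch)
    archive = BashOperator(task_id="archive", bash_command="echo 'archiving batch'")
    extract >> archive
```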
TECHNICAL SKILLS:
Languages: Python, Java, J2EE
Python Libraries: NumPy, SciPy, Matplotlib, NLTK, statsmodels, scikit-learn (sklearn), SOAP
Python Frameworks: Django, Flask, FastAPI, pandas, NumPy, PySpark
Python IDEs: Sublime Text 3, Eclipse, Jupyter Notebook, Vim, PyCharm
Web Technologies: CSS, JavaScript, XML, AJAX, jQuery, Bootstrap, AWS, RESTful Web Services
Big Data Tools: Apache Spark, Hadoop, Kafka, Airflow, Hive
Machine Learning: scikit-learn, TensorFlow, MLflow, DVC
Cloud & DevOps: AWS (Lambda, S3, EC2, RDS), Azure, Docker, Kubernetes, Terraform, GitHub Actions
Databases: PostgreSQL, MySQL, MongoDB, Redis, Cassandra
CI/CD: Jenkins, GitLab CI/CD, GitHub Actions
Other Tools: JIRA, Confluence, VS Code, PyCharm, SwaggerHub
Operating Systems: Windows, macOS, Unix/Linux
PROFESSIONAL EXPERIENCE:
Client: Comerica Bank, Auburn Hills, MI Nov 2023 – Present
Role: Python Developer
Responsibilities:
Designed microservices using FastAPI to expose risk analytics as REST APIs, enhancing modularity and performance across services (illustrative sketch at the end of this section).
Built ETL pipelines using Apache Airflow and PySpark to ingest and transform high-volume data from internal financial systems and third-party providers.
Implemented data validation and profiling logic using Pandera and Great Expectations to ensure data accuracy and integrity.
Developed and deployed machine learning models for credit score predictions using Scikit-learn, integrated via REST API endpoints.
Developed asynchronous services using Python asyncio and aiohttp to handle high-frequency financial transactions and assessments.
Created interactive dashboards and KPIs for compliance and business risk using Plotly Dash and Tableau Embedded.
Integrated the system with Comerica's Azure Data Lake, using Azure SDK for Python and Blob Storage APIs.
Authenticated microservices using OAuth2 and JWT, implementing security best practices across endpoints.
Automated deployment pipelines using GitHub Actions and Terraform, managing environments on AWS and Azure hybrid cloud.
Implemented containerized services with Docker and orchestrated them using Kubernetes (EKS & AKS).
Maintained high-performance database queries using PostgreSQL, MongoDB, and Redis, supporting concurrent read/write operations.
Integrated logging and monitoring with Prometheus, Grafana, and ELK stack for observability and troubleshooting.
Designed CI pipelines that automatically run unit/integration tests using PyTest, code analysis via SonarQube, and deploy using Helm Charts.
Collaborated with Data Science and Business teams to define use cases and KPIs for machine learning integration.
Built batch automation scripts for reconciliation and audit logs using Pandas, NumPy, and openpyxl.
Applied data anonymization and masking techniques to comply with GLBA and FFIEC regulations on sensitive data.
Led code reviews, design walkthroughs, and mentored junior developers on Pythonic practices and system design.
Designed GraphQL gateway using Ariadne for consolidated querying across multiple data microservices.
Converted legacy Java-based risk modules to Python-based microservices, reducing latency by 40%.
Used MLflow for tracking model experiments and deploying models into production as Dockerized REST services.
Created RESTful APIs and backend logic for customer risk scoring calculators used by relationship managers.
Coordinated UAT testing and production cutover, maintaining strong documentation via SwaggerHub and Confluence.
Environment: Python 3.11, FastAPI, Flask, Airflow, PySpark, Azure Data Lake, AWS Lambda, PostgreSQL, MongoDB, Redis, GitHub Actions, Docker, Kubernetes, Terraform, Tableau, Plotly Dash, Prometheus, Grafana, Jenkins, MLflow, Scikit-learn, PyTest, SwaggerHub, Linux
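A minimal sketch of the kind of FastAPI risk-scoring microservice described in this section; the endpoint path, the request/response models, and the scoring rule are illustrative assumptions, not Comerica's actual implementation.
```python
# Illustrative FastAPI risk-scoring microservice (names and logic are assumptions).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="risk-analytics")


class RiskRequest(BaseModel):
    customer_id: str
    utilization: float      # revolving credit utilization, 0.0-1.0
    missed_payments: int    # count over the trailing 12 months


class RiskResponse(BaseModel):
    customer_id: str
    risk_score: float


@app.post("/v1/risk-score", response_model=RiskResponse)
async def risk_score(req: RiskRequest) -> RiskResponse:
    # Placeholder scoring rule; in the described system the trained
    # scikit-learn model would sit behind this endpoint.
    score = min(1.0, 0.6 * req.utilization + 0.1 * req.missed_payments)
    return RiskResponse(customer_id=req.customer_id, risk_score=round(score, 3))
```
Run locally with, e.g., `uvicorn app:app --reload`, assuming the module is saved as app.py.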
Client: Sherwin Williams, Cleveland, OH Oct 2021 – Oct 2023
Role: Python Developer
Responsibilities:
Developed RESTful APIs using Django REST Framework for managing inventory updates, shipment statuses, and warehouse data exchange.
Implemented a real-time tracking system integrated with IoT sensors using MQTT protocol and Paho-MQTT Python client.
Built a predictive demand forecasting model using Time Series algorithms (ARIMA, Prophet) and deployed using Flask APIs.
Designed and executed ETL pipelines using Apache Airflow, extracting data from SAP HANA and external vendor APIs.
Integrated with Sherwin-Williams’ ERP system using OData and SAP Python connectors for product, vendor, and shipment data.
Optimized backend processes using Celery with RabbitMQ to run scheduled and async tasks such as auto-reordering and shipment reminders (sketched after this section).
Built data visualization dashboards using Plotly Dash and Grafana to provide real-time insights into stock levels, logistics KPIs, and fulfillment delays.
Wrote unit, integration, and load tests using PyTest and Locust, achieving 85%+ code coverage across modules.
Designed and implemented role-based access control (RBAC) and audit logging using Django’s built-in auth system and custom middleware.
Used Docker Compose to containerize services for development, QA, and UAT environments.
Orchestrated deployments using Kubernetes clusters hosted on Azure Kubernetes Service (AKS).
Automated CI/CD pipelines using GitLab CI, integrating with SonarQube for static code analysis and quality gates.
Connected APIs to Azure Cosmos DB and PostgreSQL for hybrid data storage strategy – NoSQL for IoT event logs and relational for core business data.
Used Redis caching for frequently accessed product/warehouse data to reduce API response latency by 30%.
Developed data validation pipelines using Great Expectations to ensure supply chain data quality before downstream analytics.
Implemented SFTP-based secure data exchange with third-party logistics vendors and validated payload schemas using Cerberus.
Led sprint planning and demos using Agile/Scrum, coordinated with Product Managers, QA, and DevOps teams.
Created automated job alerts and SLA violation triggers using Python Alerting Service (custom-built) and PagerDuty API.
Integrated OpenTelemetry for tracing and Prometheus for metrics to improve observability of microservices.
Developed CLI tools using Click for ops teams to run batch reports, health checks, and log diagnostics easily.
Participated in data governance initiatives by documenting pipelines, setting up DQ checks, and maintaining lineage in Azure Purview.
Environment: Python 3.10, Django, Flask, Django REST Framework, Apache Airflow, ARIMA/Prophet, Azure Cosmos DB, PostgreSQL, Redis, MQTT, SAP HANA, Azure Blob Storage, Docker, Kubernetes (AKS), GitLab CI/CD, Prometheus, Grafana, Celery, RabbitMQ, PyTest, Plotly Dash, Great Expectations, Azure DevOps, VS Code, Linux, Swagger
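A minimal Celery sketch in the spirit of the scheduled/async supply-chain tasks described above; the broker URL, module name, reorder threshold, and task body are illustrative assumptions.
```python
# Illustrative Celery app with a daily beat schedule (assumed to live in a
# module named inventory_tasks.py; broker URL and logic are placeholders).
from celery import Celery

app = Celery("inventory_tasks", broker="amqp://guest:guest@localhost:5672//")

app.conf.beat_schedule = {
    "auto-reorder-nightly": {
        "task": "inventory_tasks.auto_reorder",
        "schedule": 24 * 60 * 60,   # once a day, in seconds
        "args": (100,),             # hypothetical reorder threshold
    },
}


@app.task
def auto_reorder(threshold):
    # Placeholder: a real task would query the inventory database and raise
    # purchase orders for SKUs whose stock fell below the threshold.
    low_stock = [("SKU-123", 40), ("SKU-456", 250)]
    return [sku for sku, qty in low_stock if qty < threshold]
```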
Client: Caliber Technologies Pvt Ltd, India June 2020 – Aug 2021
Role: Python Developer/Data Engineer
Responsibilities:
Gained hands-on experience with Amazon Web Services (AWS), including Amazon Redshift, Amazon S3 (Simple Storage Service), AWS Lambda, AWS Glue, AWS Data Pipeline, Amazon EMR, and Amazon CloudWatch.
Used AWS CloudShell to configure services such as Amazon EMR, Amazon S3, and Amazon Redshift.
Developed Power BI reports on top of Amazon Redshift, leveraging Azure Analysis Services to optimize performance and enhance data analysis capabilities.
Utilized PyUnit as the unit test framework for all Python applications and accessed database objects using Django Database APIs.
Worked with Django, AngularJS, HTML, CSS, XML, JavaScript, jQuery, and Bootstrap to design, code, and develop Python-based applications using the Django MVC (MVT) pattern.
Implemented unit and functional testing for Python applications using frameworks such as unittest, unittest2, mock, and custom frameworks, aligned with Agile software development methodologies.
Created and modified web services and stored procedures using Python.
Exposed services as RESTful APIs in JSON format for an Admin UI developed with Django.
Developed REST APIs using Django to fetch data from a NoSQL Amazon DynamoDB database.
Streamlined data analysis, algorithm development, and model optimization workflows by writing Python, Golang, and Linux shell scripts.
Implemented a fully automated multi-cloud infrastructure platform based on client requests, with a focus on developing APIs using Python and Django for provisioning VMs and Amazon DocumentDB databases.
Utilized the Pandas API to manipulate and retrieve data in both time series and tabular formats.
Used Bash and Python (Boto3) for additional automation tasks alongside Ansible and Terraform.
Implemented the MVC Architectural Pattern using the Spring Framework, including JSP, Servlets, and Action classes.
Developed streaming applications using PySpark to read data from Apache Kafka and persist it in NoSQL stores such as HBase and Cassandra (see the streaming sketch after this section).
Performed monitoring of YARN applications and effectively resolved cluster-related system problems.
Created shell scripts to parameterize Hive actions in Apache Oozie workflows and schedule jobs.
Populated Amazon S3 and Cassandra with large volumes of data using Apache Kafka.
Played a key role on the team that developed the initial prototype of a NiFi big data pipeline, showcasing a complete end-to-end data ingestion and processing scenario.
Utilized Apache NiFi to process high-volume data streams, enabling ingestion, processing, and low-latency provisioning using Hadoop ecosystems such as Hive, Pig, Sqoop, Kafka, Python, Spark, Scala, NoSQL, and Druid.
Developed product profiles using Pig and commodity UDFs, and wrote Hive scripts in HiveQL to de-normalize and aggregate data for analysis.
Developed secure data streaming solutions using Apache NiFi, Amazon Kinesis, and Apache Kafka for delivering highly sensitive data with low latency to multiple teams while maintaining confidentiality.
Conducted data analysis and design activities, as well as building and managing extensive and sophisticated logical and physical data models, along with metadata repositories using ERWIN and AWS Glue Data Catalog.
Involved in deployment and release management, effectively handling the deployment of production code and managing weekly releases using Git/GitLab.
Environment: AWS, Power BI, PySpark, Kafka, HBase, Cassandra, Hive, Pig, Sqoop, Python, Spark, Scala, NoSQL, Druid, GitLab, ERWIN, Linux, MongoDB, Golang, NiFi, Spring.
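A minimal PySpark Structured Streaming sketch of the Kafka ingestion pattern referenced above; the topic, bootstrap servers, and parquet/S3 sink are illustrative assumptions (the original work persisted to NoSQL stores), and the spark-sql-kafka connector package is assumed to be available.
```python
# Illustrative PySpark Structured Streaming job reading from Kafka.
# Topic, servers, and the parquet/S3 sink are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")          # hypothetical topic name
    .load()
    .select(col("key").cast("string"), col("value").cast("string"))
)

query = (
    events.writeStream.format("parquet")    # stand-in for the NoSQL sink
    .option("path", "s3a://example-bucket/events/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
    .start()
)
query.awaitTermination()
```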
Client: Metaminds, Hyderabad, India May 2019 – May 2020
Role: Data Analyst
Responsibilities:
Wrote several Teradata SQL Queries using Teradata SQL Assistant for ad-hoc Data Pull requests.
Developed Python programs for manipulating data from various Teradata sources and converting them into consolidated CSV files (sketched after this section).
Performed statistical data analysis and data visualization using Python and R.
Worked on creating filters, parameters and calculated sets for preparing dashboards and worksheets in Tableau.
Created data models in Splunk using pivot tables by analyzing vast amounts of data and extracting key information to suit various business requirements.
Collaborated with data scientists to architect custom solutions for data visualization using tools like Tableau, R, and R-Shiny.
Maintained large datasets by combining data from various sources, including Excel, SAS Grid, SAS Enterprise, Access, and SQL queries.
Analyzed datasets using SAS programming, R, and Excel.
Gained experience in performing Tableau administration by using Tableau admin commands.
Developed normalized Logical and Physical database models for designing an OLTP application.
Wrote Teradata SQL scripts using OLAP functions such as RANK and RANK() OVER to improve query performance when pulling data from large tables.
Created complex SQL queries using joins and OLAP functions like CSUM, COUNT, and RANK.
Involved in extensive routine operational reporting, ad-hoc reporting, and data manipulation to produce routine metrics and dashboards for management.
Created action filters, parameters, and calculated sets for preparing dashboards and worksheets in Tableau.
Built and published customized interactive reports and dashboards, including report scheduling using Tableau Server.
Worked on Spark SQL and DataFrames for faster execution of Hive queries using Spark SQL Context.
Designed and developed ETL processes using the Informatica ETL tool for dimension and fact file creation.
Developed and automated solutions for a new billing and membership Enterprise Data Warehouse, including ETL routines, tables, maps, materialized views, and stored procedures using Informatica and Oracle PL/SQL toolsets.
Environment: SQL/Server, Oracle 11g, MS-Office, Teradata, Informatica, ER Studio, XML, Hive, HDFS, Flume, Sqoop, R connector, Python, R, Tableau
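A minimal pandas sketch of consolidating several extracts into a single CSV, in the spirit of the Teradata-to-CSV work above; the file names and lineage column are illustrative assumptions.
```python
# Illustrative consolidation of multiple extracts into one CSV with pandas.
# File names are placeholders; the real inputs came from Teradata pulls.
import pandas as pd

SOURCES = ["members_extract.csv", "claims_extract.csv"]  # hypothetical extracts

frames = []
for path in SOURCES:
    df = pd.read_csv(path)
    df["source_file"] = path  # keep lineage of where each row came from
    frames.append(df)

consolidated = pd.concat(frames, ignore_index=True)
consolidated.to_csv("consolidated_extract.csv", index=False)
```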
Client: Pramati Technologies Pvt Ltd, Hyderabad, India Jan 2018 – April 2019
Role: Web Application Developer
Responsibilities:
Responsible for analyzing various cross-functional, multi-platform application systems, enforcing Python best practices, and providing guidance on long-term architectural design decisions.
Coded model-level validation using Python with an emphasis on web security.
Handled business logic through backend Python programming to achieve optimal results.
Created jobs for automated deployments and wrote and onboarded Python scripts for deployments, JMS configurations, JDBC configurations, etc.
Created and supported MQ Queue Managers, Remote Queues, Local Queues, Queue Aliases, Channels, Clusters, Transmission Queues, Performance Events, Windows, Triggers, Processes, and MQ error-trapping applications.
Developed web services using RESTful technology to support JSON and XML (sketched after this section).
Created new connections through applications for better access to the MySQL database and wrote SQL and PL/SQL stored procedures, functions, sequences, triggers, cursors, object types, etc.
Developed Ant scripts to deploy EAR and WAR files to the server.
Used XML to develop a tool for user interface customization.
Deployed enterprise applications using WebSphere Application Server.
Maintained multiple enterprise applications in the WebSphere production environment.
Successfully executed all the test cases and fixed any bugs/issues identified during the test cycles.
Environment: Python 2.7, Flask, CentOS, CSS, HTML, XML, XSD, Bootstrap, JavaScript, AJAX, MySQL, RESTful API, Linux, COBOL, WebSphere MQ, SCRUM, Agile, CVS, PL/SQL
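A minimal Flask sketch of a RESTful JSON endpoint of the kind described in this section; the route, payload, and in-memory store are illustrative assumptions (written against Python 3 and current Flask for clarity, although the original project used Python 2.7).
```python
# Illustrative Flask REST endpoint returning JSON (route and data are placeholders).
from flask import Flask, jsonify

app = Flask(__name__)

ORDERS = {"1001": {"status": "SHIPPED"}}   # hypothetical in-memory store


@app.route("/api/orders/<order_id>", methods=["GET"])
def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify({"error": "order not found"}), 404
    payload = dict(order)
    payload["order_id"] = order_id
    return jsonify(payload)


if __name__ == "__main__":
    app.run(debug=True)
```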