SRILATHA BALA
PYTHON DEVELOPER
USA +1-614-***-**** **************@*****.*** LinkedIn
SUMMARY:
Python developer with 5+ years of experience in backend development and data engineering, designing and delivering scalable Python applications and cloud-based data solutions. Skilled in building RESTful APIs, ETL pipelines, and high-performance web applications using Python, Django, and Flask. Hands-on experience with AWS serverless services, including Lambda, DynamoDB, S3, Glue, Step Functions, and API Gateway. Proficient in relational and NoSQL databases, data modeling, and analytics using Pandas, NumPy, and visualization tools such as Tableau and Power BI. Strong background in Linux/UNIX environments, Docker, Vagrant, and agile development practices. Experienced in architecting high-availability, real-time systems and implementing secure, efficient cloud-based solutions.
SKILLS:
Programming Languages: Python, SQL, Core Java, Bash/Shell Scripting
Frameworks & Technologies: Django, Flask, FastAPI, Apache Airflow, Apache Spark, PySpark, Kafka
Data Processing & Big Data: pandas, NumPy, PySpark, Apache Spark, Hive, HDFS, Delta Lake, MapReduce
Data Warehousing & ETL: dbt, Airbyte, AWS Glue, Azure Data Factory, Google Dataflow
Databases: MySQL, PostgreSQL, MongoDB, DynamoDB, Redshift, Snowflake, BigQuery
Cloud Platforms & Services: AWS (S3, Lambda, Glue, Redshift, EMR, Athena, Step Functions), Azure (Synapse, Data Lake, Databricks, SQL Database), GCP (BigQuery, Cloud Storage, Composer)
Data Visualization & BI Tools: Tableau, Power BI, Looker
DevOps & Tools: Docker, Kubernetes, Terraform, Jenkins, Git/GitHub/GitLab, RESTful & GraphQL APIs, CI/CD
Core Skills: Data Architecture, ETL/ELT Pipelines, Real-Time Data Processing, Data Modeling, Data Governance, Agile Methodologies, Problem Solving, Team Collaboration, Cloud Architecture
EDUCATION:
Campbellsville University, USA Jun 2024
Master of Science in Computer Science
Kakatiya University, India Aug 2021
Bachelor of Science in Computer Science
PROFESSIONAL EXPERIENCE:
Python Developer Apr 2023 - Present
Intel Corporation
Developed and optimized ETL/ELT pipelines with Python, Airflow, dbt, and AWS Glue, reducing data processing time by 30% and improving pipeline reliability.
Built and maintained real-time data ingestion systems in Python leveraging Kafka, Spark, and PySpark, increasing data throughput by 40% for analytics workloads.
Enhanced SQL query performance across Snowflake, Redshift, and BigQuery by implementing Python-based workload tuning scripts, reducing execution costs by 25%.
Partnered with data science teams to integrate ML model deployment frameworks in Python, cutting time-to-insight by 35%.
Automated pipeline orchestration and monitoring with Python and Bash, reducing manual intervention by 40% and improving operational efficiency.
Python Developer Jan 2020 - Jul 2022
NVIDIA Corporation
Designed Python applications to re-architect storage layers on Azure Data Lake and AWS S3, lowering storage costs by 30% while maintaining scalability and compliance.
Automated infrastructure provisioning and deployment using Python with GitLab CI/CD, Jenkins, and Terraform, reducing release cycles by 50%.
Built Python-based frameworks for data modeling in Snowflake and BigQuery, simplifying query complexity and improving execution time by 35%.
Developed scalable ETL workflows in Python using Azure Data Factory, Databricks, and PySpark, increasing processing efficiency by 40%.
Contributed to Python-driven data governance and validation tools, boosting analytics data quality and trust across teams.
Associate Python Developer Feb 2019 - Jan 2020
Oracle Corporation by Nefroverse
Optimized SQL and PL/SQL scripts with Python-based automation, enhancing enterprise transaction processing efficiency by 40%.
Integrated GraphQL APIs with Python services to enable faster and more flexible data access, reducing response times by 35%.
Built Python-based streaming and monitoring solutions with Oracle GoldenGate and Kafka, cutting event latency by 30%.
Automated ingestion workflows using Python for OCI Object Storage, reducing manual effort by 60% and improving data availability.