GNANESHWARI NALLA
**** ****** *** ******, ********, VA. LinkedIn: www.linkedin.com/in/gnaneshwarinalla
Cell Phone: 513-***-****
Email: *******@****.**.***
EDUCATION:
University of Cincinnati, Cincinnati, Ohio, United States April 2021
Master of Engineering in Computer Science; GPA: 3.83/4.0
(Courses: Advanced Algorithms, Data Analysis, Database Theory, Cloud Computing)
VNR VJIET – Hyderabad, India Sep 2014 to May 2018
Bachelor's in Computer Science and Engineering; GPA: 3.78/4.0
(Courses: Object-Oriented Programming Using Python, Relational Databases, Big Data)
SKILLS
Data Management: SQL Server 2012 • Oracle/MySQL • Snowflake
Hadoop Ecosystem: MapReduce, Hive, Pig, Oozie, Sqoop, Flume, EMR
Apache Spark: Spark SQL, Spark Streaming (with Kafka and Flume), Hive
Data Analysis: OLAP • Jupyter Notebook
Business Intelligence: Tableau • Power BI
Programming Skills: Object-Oriented Programming • HTML • Java • Python Scripting • SQL
File Formats: Parquet, ORC, RC, Avro, SequenceFile, TextFile
Cloud Computing: AWS (EC2, S3, RDS, Kinesis, VPC, Route 53, Redshift, CloudWatch)
Streaming Analytics: Kafka, Flume, Kinesis
EXPERIENCE:
Software Developer at Core Compete Private Limited July 2018 – July 2019
Tools Used: Cloudera, HDFS, Tableau, AWS, Python, SQL
Roles and Responsibilities:
Analyzed data and built pipelines, generating visualization reports using Python, Spark, and Tableau.
Cleansed, prepared, and structured harvested data for final analysis.
Developed and optimized dashboards around KPIs and ad hoc queries from business users and the business/product team.
Performed incremental updates to the data on a regular basis by scheduling cron jobs.
Consumed and combined data from web APIs and AWS RDS and loaded it into HDFS with partitions.
Utilized SAS DI for the design, construction, execution, and maintenance of data integration processes.
Extracted, transformed, and ingested data from various sources and SQL procedures into a SAS data mart using SAS ETL.
Transformed Spark DataFrames, Datasets, and Hive tables using Python.
Implemented end-to-end data warehousing, from reading data in S3 to writing Snowflake tables and creating dashboards.
Used Spark's built-in APIs to process terabytes of data at scale.
Handled large volumes of data for cleansing, scrubbing, and quality assurance.
Continuously fetched ever-changing metadata from AWS RDS and read data from HBase to write into Hive tables.
Conducted root cause analysis (RCA) to detect and correct anomalies and ensure data quality.
Practiced Agile methodology using Scrum within an iterative, incremental software development lifecycle.
Applied Six Sigma quality methodologies.
KEY ENGINEERING PROJECTS:
Twitter Streaming Sentiment Analysis April 2020
Tools used: Twitter API, Spark, Python, Spark Streaming, Tableau.
Analyzed and assessed sentiment around a company's stock price using tweets from Twitter and created visual dashboards.
Defined and implemented a function that returns tweets for a chosen hashtag.
Extracted and parsed values from tweets returned in JSON format, then sent them to Spark Streaming.
Created a Spark Streaming job that polls the socket connection to the Twitter app every two seconds.
Performed Spark DataFrame transformations: removed stop words, tokenized tweets, and created a word-count dictionary.
Streamed the results continuously to Tableau Server for visual analysis of what's trending about the stock on Twitter.
Student Tracker – Spring, Spring Boot, JSP, Servlet MVC: December 2019
Designed an application to perform CRUD operations – add, update, delete, and search student profiles in a university database.
Worked with Spring Framework modules – Spring IoC, MVC, Hibernate ORM, Security, and Spring REST – along with design patterns (AOP, SOA) and JSP & JSTL.
Used the Spring AOP module to implement logging, and set up build automation for the Java-based project using Maven.
Designed and developed the endpoints (controllers) and DAO layer using Hibernate/JPA and JdbcTemplate, with Spring IoC and CDI.
Developed the persistence layer using Hibernate; created POJOs and mapped them using Hibernate annotations and transaction management.
Used AWS Identity and Access Management (IAM) to create groups and permissions so users could work collaboratively.
Used AWS S3 for file storage accessed via the S3 REST API and gained experience with containerization using Docker.
LEADERSHIP ROLES:
Member and Volunteer for Hyderabad Youth Assembly, a program under Street Cause, NGO.
One of 130 members recruited from over 6,000 applicants, after a two-stage interview process, to co-lead projects.
Led a team of 25 volunteers for the 'Clean Water and Sanitation' project under the UN Millennium Development Goals.
Organized 'Run for a Cause 3.0,' a fundraiser to adopt a tribal village and improve access to safe water.
Core Member, VJ TEATRO, a club that provides a platform to promote short films, music videos and creative content.