Sanjana Kiran
315-***-**** *******.*****@*****.***
https://www.linkedin.com/in/sanjana-kiran/ https://github.com/sanjanakiran
EDUCATION
Master of Science in Computer Science, Syracuse University, Syracuse, NY May 2018
Relevant Coursework: Algorithm Design, Object Oriented Design (OOD), Operating Systems, Software Engineering
Bachelor of Engineering in Computer Science, RNS Institute of Technology, Bangalore, India May 2016
TECHNICAL SKILLS
Languages: Python, R, Java, Scala, HTML5, CSS3, JavaScript
Databases: SQL Server 2016/2014/2012, Vertica, DB2, Google BigQuery
Data Modeling: ER-win, MS Visio
Database Programming: T-SQL, Dynamic SQL
BI Tools: Informatica Power Center 9.x, Power BI, MS Excel, Tableau
Others: AWS, Google Cloud Analytics, Adobe Analytics, Jira, GitHub, SharePoint, TFS, Apache Spark, Hadoop, Airflow, Jenkins (CI/CD)
PROFESSIONAL DEVELOPMENT
Apache Spark 2.0 with Scala - Udemy, Certificate no. UC-VLKUJ6ZQ
Java Spring Framework - Udemy, Certificate no. UC-1HENAVAX
Well Building 2050 International Competition (Technologies: Python, Google BigQuery, Tableau, GitHub)
• Created interactive Tableau dashboards to deliver actionable insights.
• Interfaced with data miners to Extract, Transform and Load data (ETL) from various data sources using BigQuery.
• Developed Machine Learning classification models to identify the most influential features and improved accuracy to 80%.
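A minimal sketch of the kind of classification workflow described above, assuming the BigQuery results were exported to a flat file; the file name, column names, and the choice of a RandomForestClassifier with feature importances are illustrative assumptions, not the project's actual code.

    # Illustrative sketch only: the file name, column names, and model choice are assumptions.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("building_metrics.csv")   # hypothetical export of the BigQuery results
    X = df.drop(columns=["target"])            # candidate feature columns
    y = df["target"]                           # class label to predict

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

    # Rank features by importance to surface the most influential ones
    importances = pd.Series(model.feature_importances_, index=X.columns)
    print(importances.sort_values(ascending=False).head(10))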
EXPERIENCE
Software Developer - IBM Watson Health Nov 2018 - Jan 2019
Environment: Python, Vertica, IBM DB2, DB Visualizer, Apache Airflow, Kubernetes, Jenkins, Jira, Confluence, GitHub
Responsibilities:
• Worked with the Engineering team to Extract, Transform and Load (ETL) healthcare data.
• Migrated features built on Vertica to the project's IBM DB2 database per client requirements.
• Performed DDL, DML, DCL operations and incorporated Dynamic SQL to develop customizable queries
• Performed workflow management and built data pipelines using Apache Airflow on Kubernetes (a minimal DAG sketch follows this role's bullets).
• Participated in the full life cycle of large feature development through definition, design, implementation and unit testing.
• Participated in Joint Application Development sessions with Clients and Stakeholders to meet business requirements.
• Reviewed and incorporated changing client business requirements using Agile Scrum methodologies throughout the Software Development Life Cycle (SDLC) to resolve open issues.
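A minimal sketch of an Airflow ETL pipeline of the kind described in this role; the DAG id, schedule, and task bodies are illustrative assumptions, not the production pipeline.

    # Illustrative sketch only: the DAG id, schedule, and task bodies are assumptions.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python_operator import PythonOperator

    def extract():
        pass  # pull source healthcare records (placeholder)

    def transform():
        pass  # clean and reshape the extracted records (placeholder)

    def load():
        pass  # write the transformed records to the target database (placeholder)

    default_args = {"owner": "etl", "start_date": datetime(2018, 11, 1)}

    with DAG("healthcare_etl", default_args=default_args, schedule_interval="@daily") as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)

        # Run extract, then transform, then load
        t_extract >> t_transform >> t_load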
Developer - iConsult, Syracuse, NY May 2017 - May 2018
Environment: Java 1.7, J2EE, REST, Servlet, Spring, SQL Server, JDBC, XML, JUnit 4, GitHub, Jenkins, Maven, Bootstrap, Eclipse IDE, Agile
Responsibilities:
• Designed and developed the web application using Java, J2EE, Spring MVC and web services.
• Applied Model-View-Controller (MVC) design pattern for designing the application.
• Used Apache Tomcat Application Server for deploying the web application.
• Used Spring Security for authenticating and authorizing users to secure the application.
• Performed code reviews using Crucible, identified refactoring opportunities and maintained the codebase.
• Used Maven as Build and dependency management tool for the application.
• Used Jenkins for continuous integration and continuous deployment of the application.
• Used Log4j for logging and debugging.
• Debugged software problems and resolved technical and performance issues.
• Attended daily/weekly customer calls to provide status updates.
• Utilized Agile Methodology.
Research Assistant at Syracuse University, NY Nov 2016 - May 2017
NOAA Weather Data Analysis (Technologies: Python, Apache Spark, Scala, SparkSQL, Apache Hadoop)
• Prepared over 35 GB of data for evaluation in a Hadoop cluster using MapReduce and Spark, creating Parquet files.
• Performed statistical analysis for identifying seasonal trends and developed forecasting models using Machine Learning.
• Improved application performance by 50% by converting RDDs to Spark DataFrames.
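A minimal PySpark sketch of the RDD-to-DataFrame conversion described above; the input path, field layout, and aggregation are illustrative assumptions, not the project's actual code.

    # Illustrative PySpark sketch only: paths, field layout, and parsing are assumptions.
    from pyspark.sql import Row, SparkSession

    spark = SparkSession.builder.appName("noaa-weather").getOrCreate()

    # Raw station readings parsed into an RDD of Rows (placeholder field layout)
    raw_rdd = spark.sparkContext.textFile("hdfs:///data/noaa/*.csv")
    rows = raw_rdd.map(lambda line: line.split(",")) \
                  .map(lambda f: Row(station=f[0], date=f[1], temp=float(f[2])))

    # Converting the RDD to a DataFrame lets Catalyst optimize the queries
    # and allows writing columnar Parquet output
    df = spark.createDataFrame(rows)
    df.write.mode("overwrite").parquet("hdfs:///data/noaa_parquet")
    df.groupBy("station").avg("temp").show()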
Developer Intern - Sakhatech Information System Pvt Ltd, Bangalore, India Feb 2016 - Apr 2016
Environment: SQL Server 2016, ER-win, MS Visio, Python, JIRA, TFS
Responsibilities:
• Developed a user application that tracks clients' finances and helps them manage income and expenditure.
• Performed data modeling; designed and built relational databases to capture day-to-day business transactions.
• Utilized ER-Win to design ER diagrams and create the physical database instance per business requirements.
• Wrote a multitude of complex stored procedures to reduce network traffic and improve query execution time.
• Performed DBCC consistency checks and fixed data corruption in user databases.
• Normalized and de-normalized existing tables to handle insert, update, and delete anomalies.
• Designed trigger logic to track DDL operations at server and database levels on OLTP servers.
• Wrote regular and recursive CTEs to query hierarchical data structures instead of using self-joins, for better functionality and performance (see the sketch after this list).
• Participated in Joint Application Development sessions with Clients and Stakeholders to meet business requirements.
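A minimal sketch of a recursive CTE of the kind mentioned above, shown as a T-SQL string executed from Python with pyodbc; the connection string and the Employees/EmployeeID/ManagerID hierarchy are hypothetical names, not the client's schema.

    # Illustrative sketch only: the connection string, table, and column names are assumptions.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
        "DATABASE=Finance;Trusted_Connection=yes;"
    )

    # Recursive CTE walks a manager/report hierarchy without chained self-joins
    sql = """
    WITH OrgChart AS (
        SELECT EmployeeID, ManagerID, Name, 0 AS Depth
        FROM Employees
        WHERE ManagerID IS NULL
        UNION ALL
        SELECT e.EmployeeID, e.ManagerID, e.Name, oc.Depth + 1
        FROM Employees AS e
        JOIN OrgChart AS oc ON e.ManagerID = oc.EmployeeID
    )
    SELECT EmployeeID, Name, Depth FROM OrgChart ORDER BY Depth;
    """

    for row in conn.cursor().execute(sql):
        print(row.EmployeeID, row.Name, row.Depth)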
PROJECTS
NoSQL Database (Technologies: Object Oriented Design, C++, Visual Studio)
• Developed an in-memory database that supports Data Manipulation Language (DML) operations using key/value pairs.
Remote Code Publisher (Technologies: Object Oriented Design, C++, HTML5, CSS, JavaScript, Visual Studio)
• Developed a Client-Server model that communicates over the HTTP protocol.
• Given the source files, extracted lexical content, type information, dependency information and strong-component details.
Amazon Product Reviews (Technologies: Python with NumPy, Pandas, scikit-learn, NLTK, Matplotlib; Amazon API)
• Accessed the Amazon API for data; performed statistical analysis and used an RDBMS for pre-processing the product details.
• Built statistical models using scikit-learn and predicted the sentiment of customer reviews with 76% accuracy.
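A minimal sketch of the kind of sentiment-classification workflow described above; the file name, column names, and the TF-IDF plus LogisticRegression pipeline are illustrative assumptions, not the project's actual code.

    # Illustrative sketch only: the file name, column names, and model choice are assumptions.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    reviews = pd.read_csv("amazon_reviews.csv")                   # hypothetical export of the review data
    reviews["label"] = (reviews["star_rating"] >= 4).astype(int)  # 1 = positive review

    X_train, X_test, y_train, y_test = train_test_split(
        reviews["review_body"], reviews["label"], test_size=0.2, random_state=42
    )

    # TF-IDF features feeding a linear classifier
    model = make_pipeline(
        TfidfVectorizer(stop_words="english", max_features=20000),
        LogisticRegression(max_iter=1000),
    )
    model.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))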