Sangeeta Patnaik ************@*****.*** 845-***-**** Seattle, WA
LinkedIn - https://www.linkedin.com/in/sangeeta-patnaik-a52339133
PROFESSIONAL SUMMARY
Experienced software professional with strong analytical and problem-solving skills. Committed to achieving goals and meeting deadlines. Worked on backend applications, involved in development, unit testing, and integration testing. Worked on both Cassandra (NoSQL) and MySQL databases. Good knowledge of Java, Spring Boot, Hibernate, Python, REST, OOP, and microservices. Involved in full-cycle software implementation including requirements elicitation, design, development, testing, deployment, and maintenance. Experienced in both Agile Scrum and Waterfall projects. Good knowledge of retail platforms, master data management, and analytics.
SKILLS
•Development in Java/Python
•Worked with REST APIs, Hibernate, Web Services, Microservices, Kubernetes, Jenkins
•Databases: Cassandra, MySQL, Oracle
•Testing: JUnit to create unit tests
•DevOps: Maven, Gradle, Docker, GitLab (CI/CD), New Relic, Datadog, Splunk, Terraform, JIRA, Confluence
•Streaming: Kafka
•Proficient with various data structures and algorithms
•Familiar with machine learning, NLP concepts, and Tableau
•Demonstrated interest in learning AWS
WORK EXPERIENCE
Software Developer, Nordstrom March 2020 – Present
Worked on Nordstrom's POAVS application to develop features and carry out project tasks.
Gained an in-depth understanding of the retail platform through development, implementation, and support.
Worked in an Agile environment; involved in requirements gathering, analysis, and system design.
Developed backend features for the application in Java and used Hibernate for ORM mapping to the database.
Developed an interactive Tableau dashboard to analyze Vendor Portal data at different levels.
Worked with the business to understand requirements and translated them into technical/functional specifications for product simplification.
Proactively worked with the business to conduct data cleanup on large datasets for engineering improvements.
Designed a vendor compliance model and provided a detailed structure for the best use of existing resources and new development.
Maintained detailed project documentation on Confluence.
Worked with database components in Oracle and MySQL.
Developed an application to consume JSON files received via REST API calls.
Implemented continuous deployment using Jenkins.
Followed Test-Driven Development (TDD), creating detailed JUnit tests.
Used Splunk, Datadog, and New Relic to check the health of the VMs and to monitor logs.
Worked with DevOps tools such as Docker, JIRA, Gradle, and Git (CI/CD) for code repository and versioning.
Worked with streaming services such as Kafka and AWS cloud services (EC2, S3, SQS, SNS, RDS).
Software Developer, AT&T Labs July 2019 – Feb 2020
Worked at AT&T Labs to develop features for the WFM and TFDE applications.
Developed backend features for the WFM application in Java.
Used Hibernate for ORM mapping to MySQL database tables
Developed an application to consume JSON files received via REST API calls.
Used Jenkins to configure monitoring jobs to check the health of the VMs (DEV/IST/E2E).
Performed MySQL database development, including creation of new tables, stored procedures, functions, views, indexes, constraints, and triggers.
Followed Test-Driven Development (TDD), writing detailed JUnit tests for each piece of functionality before implementing it.
Good knowledge of microservices, data structures, and multithreading.
Used Git and SVN for code repository, versioning, and branching.
Experience in all aspects of the Software Development Life Cycle (SDLC), including requirements analysis, design specification, code development, code integration, testing, and deployment using Object-Oriented Analysis.
Experience in Agile methodologies such as Scrum, Test-Driven Development (TDD), incremental and iterative development, Agile development and testing across the SDLC, and DDD.
Worked on migration of the application database from Cassandra to MySQL; involved in database design covering tables, foreign keys, unique keys, and indexes, as well as creation of entity classes and DAOs for the tables.
Research Assistant, Mind Wandering and Machine Learning (NLP, RL) Aug 2018 – May 2019
Master's research project investigating how human thoughts vary over time to determine the level of ADHD in people.
Designed a model that takes a high volume of students' thought records as input data; the text is processed using topic modelling, cosine similarity, and Word2Vec, pipelined per the design to reduce it to a state transition matrix.
A Markov chain model is then used to identify the topics most visited by students with high and low ADHD.
Implemented word clouds, sensitivity analysis, bigram collocations, and term co-occurrence for better visualization.
Software Developer, TCS, Mumbai, India Jan 2011 – Feb 2016
Worked for client General Electric (GE) for project Fleet Data Management.
Designed and implemented a solution to handle customer delivery information.
Responsible for code reviews of the entire development program.
Performed time and cost estimation per business requirements. Involved in documentation, including requirements documents, technical design documents, and high-level design documents.
Worked on DevOps – Oracle Primavera P6 solution
Used PL/SQL programming, Java, and UNIX shell scripting.
MASTER'S PROJECTS
Privacy Leak (R, Python, R Shiny, Statistics)
The project analyzed the threat of providing personal information to social networks and retail markets, which can lead to privacy leaks.
Designed a model demonstrating how non-sensitive data can be manipulated using joint probability, entropy, regression, and probability distributions to approximate sensitive data.
Designed a decision tree used to analyze the sequence of questions that could lead to a privacy leak.
Entity Extraction from Knowledge Database, Clustering, Classification and Evaluation (Java, Python)
The project aimed at text classification, entity extraction from a knowledge database, and document ranking.
Analyzed the importance of document ranking in search engines.
Developed a model that extracts entities from knowledge databases using DBpedia Spotlight and the TagMe API.
These entities were clustered and classified using K-Means and Random Forest for a given document to obtain a score.
Concept weighting was used to rank each document and decide the order of relevance in the search engine.
BM25 similarity was used to compute the similarity score; scores were evaluated using the TREC-CAR evaluation.
EDUCATION
University of New Hampshire, Durham, NH January 2017 – May 2019
Master of Science (MS) in Computer Science
Positions Held: Research Assistant/ Grader/ Member of Women in Computing @ UNH
Biju Patnaik University of Technology, India August 2006 – May 2010
Bachelor of Technology in Applied Electronics and Instrumentation