Data Java Developer

Location: United States
Posted: October 21, 2020

Resume:

Sugandha

Graceville, Florida 850-***-**** adg6ou@r.postjobfree.com

Profile Summary

·Expert in the design, development, implementation, and testing of web-based, client/server, and database applications.

·Worked on Java 8 features such as lambda expressions and the Stream API for bulk operations on collections, improving application performance (a brief illustrative sketch follows this list). Good knowledge of Spring Core, Spring Batch, Spring Security, Spring Boot, Spring MVC, and Spring's integration with Hibernate and Struts.

·Hands-on experience with big data integration and analytics based on Hadoop, Solr, Spark, Kafka, Storm, and webMethods, and with major components of the Hadoop ecosystem such as MapReduce, HDFS, Hive, Pig, HBase, ZooKeeper, Oozie, and Flume.

·Experience in managing and reviewing Hadoop log files, and in processing big data on the Apache Hadoop framework using MapReduce programs.

·Proficient in implementing multi-threaded applications and in applying design patterns such as Factory Method, Abstract Factory, Singleton, Builder, MVC, and DAO in software design (a minimal Factory Method sketch follows this list).

·Expertise in developing and implementing enterprise applications using Java/J2EE technologies, including Core Java, JDBC, Hibernate, Spring, JSP, Servlets, JavaBeans, and EJB.

·Experience with build/deploy tools such as Jenkins, Docker, and OpenShift for continuous integration and deployment of microservices.

·Hands-on experience integrating Apache Spark with Kafka, Elasticsearch, Cassandra, MongoDB, file-system sources (CSV, XML, JSON), and RDBMS sources (Oracle, DB2).

·Proficient with container systems such as Docker and container orchestration platforms such as Amazon EC2 Container Service and Kubernetes; have also worked with Terraform.

·Strong experience with essential DevOps tools such as Chef, Puppet, Ansible, Docker, Kubernetes, Subversion (SVN), Git, Hudson, Jenkins, Ant, and Maven.

·Experienced in using Selenium Grid, Sauce Labs, and Docker for cross-platform and cross-browser testing by running test scripts on various virtual machines.

·Extensively used Java OOP concepts to develop automation frameworks using Eclipse, Maven, Selenium WebDriver, and TestNG.

·Extensive experience with Amazon Web Services (Amazon EC2, Amazon S3, Amazon SimpleDB, Amazon RDS, Elastic Load Balancing, Amazon SQS, AWS Identity and Access Management, AWS CloudWatch, Amazon EBS, and Amazon CloudFront).

·Expert in designing ETL data flows, creating mappings/workflows to extract data from SQL Server, and performing data migration and transformation from Oracle, Access, and Excel sheets using SQL Server SSIS.

·Experience in writing build scripts using shell scripts, Ant, and Maven; using continuous integration (CI) tools such as Continuum, Jenkins, and Hudson; and writing JUnit test cases with mocking frameworks such as Mockito and JMock.
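
As a brief illustration of the Java 8 bulk-collection work mentioned above, here is a minimal, self-contained sketch; the Order type and the figures in it are hypothetical, for illustration only.

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class OrderStats {
    // Hypothetical value class, not taken from any project on this resume.
    static class Order {
        final String region;
        final double amount;
        Order(String region, double amount) { this.region = region; this.amount = amount; }
    }

    public static void main(String[] args) {
        List<Order> orders = Arrays.asList(
                new Order("EAST", 120.0), new Order("WEST", 80.0), new Order("EAST", 40.0));

        // Bulk operation on a collection: filter with a lambda, then aggregate per group.
        Map<String, Double> totalByRegion = orders.stream()
                .filter(o -> o.amount > 50.0)
                .collect(Collectors.groupingBy(o -> o.region,
                         Collectors.summingDouble(o -> o.amount)));

        System.out.println(totalByRegion); // prints {EAST=120.0, WEST=80.0} (map order may vary)
    }
}

And for the design-pattern bullet, a minimal Factory Method sketch; the Notifier types are likewise hypothetical.

interface Notifier { void send(String message); }

class EmailNotifier implements Notifier {
    public void send(String message) { System.out.println("email: " + message); }
}

class SmsNotifier implements Notifier {
    public void send(String message) { System.out.println("sms: " + message); }
}

class NotifierFactory {
    // Single creation point that hides the concrete classes from callers.
    static Notifier create(String channel) {
        switch (channel) {
            case "email": return new EmailNotifier();
            case "sms":   return new SmsNotifier();
            default: throw new IllegalArgumentException("unknown channel: " + channel);
        }
    }
}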

Experience

SR. JAVA DEVELOPER

7-ELEVEN JUN 2020 – PRESENT

·Developed a Single Page Application (SPA) with Angular as the front end, Spring Boot as the back end, and MySQL for database support.

·Constructed the four Spring Boot layers: model, DAO, service, and controller. Manipulated the database with Hibernate and created entity classes to interact with the persistence layer.

·Used a microservice architecture, with Spring Boot-based services interacting through a combination of REST calls and Apache Kafka message brokers.

·Implemented Spring MVC controllers with annotations and validations, using model attributes to pass requests from the presentation layer to helper classes in Java.

·Implemented REST microservices using Spring Boot (a minimal controller sketch follows this section). Generated metrics with method-level granularity and persistence using Spring AOP and Spring Boot Actuator.

·Processed data into HDFS by developing solutions, analyzed the data using MapReduce, Pig, and Hive, and produced summary results from Hadoop for downstream systems.

·Wrote Spark SQL and embedded the SQL in Scala files to generate JAR files for submission to the Hadoop cluster (an equivalent Java sketch follows this section).

·Developed microservices based on RESTful web services using Akka Actors and the Akka HTTP framework in Scala, handling high concurrency and high traffic volumes.

·Developed a REST-based Scala service to pull data from an Elasticsearch/Lucene dashboard, Splunk, and Atlassian Jira.

·Developed algorithms and scripts in Hadoop to import data from source systems and persist it in HDFS (Hadoop Distributed File System) for staging purposes.

·Involved in writing a Java API for AWS Lambda to manage some of the AWS services. Used Spring Boot to develop microservices, REST to retrieve data from the client side in a microservice architecture, and Pivotal Cloud Foundry (PCF) to deploy microservices.

·Migrated an existing on-premises application to AWS. Used AWS services such as EC2 and S3 for small-data-set processing and storage; experienced in maintaining the Hadoop cluster on AWS EMR.

·Converted all Hadoop jobs to run on EMR by configuring the cluster according to the data size.

·Monitored and troubleshot Hadoop jobs using the YARN Resource Manager, and EMR job logs using Genie and Kibana. Developed a data ingestion application to bring data from source systems into HBase using Spark Streaming and Kafka.

·Involved in extracting customers' big data from various sources into Hadoop HDFS, including data from mainframes and databases as well as server logs.

·Wrote Spark transformation and action jobs to get data from source databases and log files and migrate it to the destination Cassandra database.

·Configured and maintained Jenkins to implement the CI process and integrated it with Ant and Maven to schedule builds.

·Used Jenkins pipelines to drive all microservice builds out to the Docker registry and then deploy them to Kubernetes; created and managed pods using Kubernetes.

·Virtualized servers in Docker according to test- and dev-environment requirements and configured automation using Docker containers.

·Used streams and lambda expressions, available as part of Java 8, to store and process data. Experienced with event-driven and scheduled AWS Lambda functions that trigger various AWS resources (a minimal handler sketch follows this section). Loaded data into Amazon Redshift and used AWS CloudWatch to collect and monitor AWS RDS instances.
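
Three short sketches of the techniques described in this section follow. First, a minimal Spring Boot REST controller of the kind referenced above; the resource path and payload are hypothetical, and spring-boot-starter-web is assumed on the classpath.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
@RequestMapping("/api/items") // hypothetical resource path
public class ItemService {

    @GetMapping("/{id}")
    public String getItem(@PathVariable String id) {
        // A real service would delegate to the service/DAO layers here.
        return "item-" + id;
    }

    public static void main(String[] args) {
        SpringApplication.run(ItemService.class, args);
    }
}

Second, the Spark SQL work above was done in Scala; this equivalent sketch uses Spark's Java API with a hypothetical table and query. Packaged as a JAR, it would be run with spark-submit on the cluster.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class DailySummaryJob {
    public static void main(String[] args) {
        // Master URL comes from the spark-submit command; Hive support is
        // enabled so the (hypothetical) events table can be resolved.
        SparkSession spark = SparkSession.builder()
                .appName("DailySummaryJob")
                .enableHiveSupport()
                .getOrCreate();

        Dataset<Row> summary = spark.sql(
                "SELECT region, COUNT(*) AS cnt FROM events GROUP BY region");

        summary.show();
        spark.stop();
    }
}

Third, a skeleton of an event-driven AWS Lambda handler in Java; the input/output types and the body are placeholders.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class ResourceTrigger implements RequestHandler<Object, String> {
    @Override
    public String handleRequest(Object event, Context context) {
        context.getLogger().log("triggered: " + event);
        // A real handler would call other AWS services via the SDK here.
        return "ok";
    }
}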

JAVA DEVELOPER

LIBERTY MUTUAL NOV 2019 – MAR 2020

·Worked on MongoDB database concepts such as locking, transactions, indexes, sharding, replication, and schema design. Involved in performance tuning and indexing strategies (a brief index sketch follows this section) using Mongo utilities such as mongostat and mongotop.

·Involved in the complete end-to-end CI/CD process, building, deploying, and releasing the application on Apache servers using Jenkins.

·Developed RESTful endpoints to cache application-specific data in in-memory data clusters such as Redis, exposing the cached data through those endpoints.

·Used Selenium WebDriver to handle various web-page controls such as text boxes, buttons, dropdowns, checkboxes, radio buttons, and labels using XPath and other locators (a short sketch follows this section).

·Tested the service and data-access tiers using JUnit, following the Agile Scrum methodology and test-driven development (TDD).
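
Two short sketches of the work described in this section follow. First, index creation with the MongoDB Java driver, of the kind used in the indexing strategies above; the database, collection, and field names are hypothetical.

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Indexes;
import org.bson.Document;

public class IndexSetup {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> orders =
                    client.getDatabase("shop").getCollection("orders");
            // Compound index supporting queries that filter on customerId
            // and sort by createdAt.
            orders.createIndex(Indexes.compoundIndex(
                    Indexes.ascending("customerId"), Indexes.descending("createdAt")));
        }
    }
}

Second, a minimal Selenium WebDriver interaction using XPath and other locators; the page URL, element locators, and credentials are placeholders, and a chromedriver binary is assumed on the PATH.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login");                // placeholder URL
            driver.findElement(By.id("username")).sendKeys("user"); // textbox by id
            driver.findElement(By.xpath("//input[@type='password']"))
                  .sendKeys("secret");                              // textbox by XPath
            driver.findElement(By.xpath("//button[text()='Sign in']")).click(); // button
        } finally {
            driver.quit();
        }
    }
}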

JAVA DEVELOPER

M DOC JAN 2017 – JUL 2018

·Designed the application by implementing the JSF framework on an MVC architecture, with EJBs and simple JavaBeans as the model, JSP, JSF, and PrimeFaces components as the view, and the Faces Servlet as the controller.

·Used the Spring framework to develop business beans and to interact with the Hibernate ORM tool. Created various types of indexes on different collections to get good performance from the Mongo database.

·Worked with AWS to implement client-side encryption, as DynamoDB did not support encryption at rest at the time.

·Used Jenkins plugins for code coverage and to run all tests before generating the WAR file. Set up and implemented a continuous integration and continuous delivery (CI/CD) process stack using AWS, GitHub/Git, Jenkins, and Chef.

·Worked with Git configuration, branching and merging, resolving conflicts, pushing changes, etc.

·Worked on deployment automation for all the microservices, pulling images from the private Docker registry and deploying to a Docker Swarm cluster using Ansible.

·Developed Unix shell scripts to perform Hadoop ETL functions such as running Sqoop, creating external/internal Hive tables, and initiating HQL scripts.

·Performed end-to-end performance tuning of Hadoop clusters and Hadoop MapReduce routines against very large data sets.

·Created and maintained technical documentation for launching Hadoop clusters and for executing Hive queries and Pig scripts. Created user accounts and gave users access to the Hadoop cluster.

·Loaded and transformed large sets of structured, semi-structured, and unstructured data using Hadoop/big data concepts.

·Used multithreading to write the collector, parser, and distributor processes, which received real-time data in JSON format from the Zacks API (a minimal producer/consumer sketch follows this section).

·Managed Kubernetes charts using Helm. Created reproducible builds of the Kubernetes applications, managed Kubernetes manifest files, and managed releases of Helm packages.

·Handled importing of data from various data sources, performed transformations using Hive, Pig, and Spark, and loaded the data into HDFS.

·Implemented the docker-maven-plugin in the Maven POM to build Docker images for all microservices, and later used a Dockerfile to build the Docker images from the Java JAR files.

·Wrote scripts and an indexing strategy for a migration from SQL Server and MySQL databases to Confidential Redshift.

·Worked on AWS Data Pipeline to configure data loads from S3 into Redshift. Used a JSON schema to define table and column mappings from the S3 data to Redshift, created various performance metrics, and configured notifications using CloudWatch and SNS.

·Implemented Kafka producer and consumer applications on a Kafka cluster set up with the help of ZooKeeper (a minimal producer sketch follows this section). Used Spring Kafka API calls to process messages smoothly on the Kafka cluster.

·Implemented functionality to send and receive AMQP messages on RabbitMQ synchronously and asynchronously, and to send JMS messages to Apache ActiveMQ on the edge device.

·Worked with multiple business/reporting teams and developed dynamic SQL and complex SQL queries using joins, sub-queries, and inline views to address reporting requirements.
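
Two short sketches of the techniques described in this section follow. First, a stripped-down version of the multi-threaded collector/parser hand-off described above, using a shared BlockingQueue; the JSON payload is a placeholder rather than real Zacks API data.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class Pipeline {
    public static void main(String[] args) {
        BlockingQueue<String> rawJson = new LinkedBlockingQueue<>(1000);

        // Collector thread: the real collector polled the Zacks API;
        // this one just enqueues one placeholder message.
        Thread collector = new Thread(() -> {
            try {
                rawJson.put("{\"ticker\":\"ABC\",\"price\":10.5}");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Parser/distributor thread: takes messages off the shared queue.
        Thread parser = new Thread(() -> {
            try {
                String message = rawJson.take();
                System.out.println("parsed: " + message); // real code would parse the JSON
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        collector.start();
        parser.start();
    }
}

Second, a minimal Kafka producer; the resume mentions Spring Kafka, but this sketch uses the lower-level plain Kafka client API. The broker address, topic, and payload are placeholders.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Placeholder topic, key, and message.
            producer.send(new ProducerRecord<>("events", "key-1", "hello"));
        }
    }
}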

JAVA DEVELOPER

IBM AUG 2014 – DEC 2016

·Worked on core Java development for different components. Developed the application using Core Java, multithreading, Spring Core, Spring Beans, JDBC, Spring Transaction and Batch, Oracle, and Maven.

·Worked in a team to create the structure of a management system using current front-end technologies such as HTML, CSS, and Bootstrap.

·Used Spring MVC (Model-View-Controller) to handle and intercept user requests, with various controllers delegating the request flow to the back-end tier of the application.

·Configured Spring to manage actions as beans, set their dependencies in a context file, and integrated the middle tier with Hibernate.

·Wrote the Hibernate configuration file and Hibernate mapping files, and defined persistence classes to persist the data into the SQL database (an annotated-entity sketch follows this section).

·Troubleshot backup and restore problems and performed day-to-day troubleshooting for end users on Solaris- and Linux-based servers.

·Utilized Hibernate for object/relational mapping to achieve transparent persistence onto the Oracle database.

·Used Spring Core for dependency injection/inversion of control (IoC) and integrated frameworks such as Struts and Hibernate.

·Worked closely with the Oracle database on the back end, interconnecting with user interfaces via native complex SQL queries.

·Developed REST and SOAP test suites for testing APIs using the SoapUI and Postman tools. Used AWS Elastic Beanstalk with Amazon EC2 to deploy the project to AWS.

·Built servers using AWS: imported volumes, launched EC2 and RDS instances, and created security groups, auto-scaling groups, and load balancers (ELBs) in the defined virtual private cloud (VPC).

·Wrote Hibernate configuration files to enable data transactions between POJOs and the Oracle database. Developed simple to complex MapReduce streaming jobs in Java for processing and validating data.
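
A short sketch of the kind of persistence class referenced in this section follows; the work above used Hibernate XML mapping files, while this equivalent uses JPA annotations, and the table and column names are hypothetical.

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "customers") // hypothetical table name
public class Customer {
    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "full_name", nullable = false)
    private String fullName;

    protected Customer() { } // no-arg constructor required by Hibernate

    public Customer(String fullName) { this.fullName = fullName; }

    public Long getId() { return id; }
    public String getFullName() { return fullName; }
}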

Education

MASTER’S IN COMPUTER SCIENCE MAY 2020

UNIVERSITY OF FLORIDA

Academic Projects

1. P2P File Sharing System: Java: (Computer Networks) Developed a multi-threaded network for P2P file sharing, with client-server protocol features such as the choking/unchoking mechanism and handshake and interested message constraints.

2. Keyword Counter: C++: (Advanced Data Structures) Output the 'n' most popular search words for a search engine using Fibonacci heaps and hash tables.

3. Vehicle Detection using TensorFlow: Python: (Machine Learning) Developed a project using TensorFlow and the COCO API, applying the Faster R-CNN and MobileNet-SSD algorithms to detect vehicles in given images.

4. Tapestry Algorithm: (Distributed Operating Systems) Using Elixir, coded the algorithm and a simple object-access service to demonstrate its usefulness as presented in the assigned research paper. http://bit.ly/36qJRuo

5. Twitter Clone Simulation: (Distributed Operating Systems) Using Elixir and WebSockets, developed the system architecture of Twitter and then its full functionality through WebSockets. A Phoenix-based client and a JSON-based API were developed to provide this functionality; JavaScript was used for the front end.

Achievements

1. Highest score in the university on a C/C++ certification exam by IIT Bombay.

2. Completed capstone projects on clustering, linear regression, logistic regression, SVMs, and decision-tree models as part of the Stanford Machine Learning course on Coursera.

3. JavaScript, HTML, CSS Coursera certification license: FHNDTY3GFZ7U

4. Leadership: Director of the ISTE [Indian Society for Technical Education] tech club branch at Amity School of Engineering and Technology [2017-18]; 'Operations Head' [2016-17].

Awards

1. Academic Achievement Award (scholarship) at the University of Florida

2. Scholar Badge at Delhi Public School, Mathura Road, New Delhi


