
Rajkumar Rollu

Email: adkkpj@r.postjobfree.com Phone: 720-***-****

LinkedIn

Professional Summary

Around 9 years of IT experience in project development, implementation, deployment, and maintenance using Big Data Hadoop ecosystem technologies in the Insurance, Health Care, and Retail sectors, with programming expertise in multiple languages including Java and Python.

5+ years of Hadoop Developer experience in designing and implementing complete end-to-end Hadoop Infrastructure using HDFS, MapReduce, HBase, Spark, Yarn, Kafka, Zookeeper, PIG, HIVE, Sqoop, Storm, Oozie, and Flume.

2+ years of Java programming experience in developing web-based applications and Client-Server technologies.

In-depth understanding of Hadoop architecture and its various components such as JobTracker, TaskTracker, NameNode, DataNode, ResourceManager, and MapReduce concepts.

Strong experience creating real time data streaming solutions using Apache Spark Core, Spark SQL and Data Frames.

Experience in analyzing data using HiveQL, Pig Latin, and custom MapReduce programs in Java.

Experience in importing and exporting data using Sqoop from Relational Database Systems to HDFS and vice-versa.

Experience with partitioning and bucketing concepts in Hive; designed both managed and external Hive tables to optimize performance.

Collected and aggregated large amounts of log data using Apache Flume and stored the data in HDFS for further analysis.

Job workflow scheduling and monitoring using tools like Oozie.

Experience in designing both time driven and data driven automated workflows using Oozie.

Comprehensive experience in building web-based applications using J2EE frameworks like Spring, Hibernate, Struts, and JMS.

Worked in complete Software Development Life Cycle (analysis, design, development, testing, implementation and support) using Agile Methodologies.

Transformed existing programs into a Lambda architecture.

Experience in setting up automated monitoring and escalation infrastructure for Hadoop Cluster using Ganglia and Nagios.

Experience in installation, configuration, support and monitoring of Hadoop clusters using Apache, Cloudera distributions and AWS.

Experience in working with various Cloudera distributions (CDH4/CDH5) and Hortonworks.

Assisted in Cluster maintenance, Cluster Monitoring, Managing and Reviewing data backups and log files.

Experience in different layers of Hadoop Framework - Storage (HDFS), Analysis (Pig and Hive), Engineering (Jobs and Workflows).

Good knowledge of AWS services such as EMR and EC2, which provide fast and efficient processing of Big Data.

Expertise in optimizing traffic across network using Combiners, joining multiple schema datasets using Joins and organizing data using Partitions and Buckets.

Hands on experience in Sequence files, RC files, Combiners, Counters, Dynamic Partitions, Bucketing for best practice and performance improvement.

Experienced in using Integrated Development environments like Eclipse, NetBeans and IntelliJ

Migrated data from different databases (e.g., Oracle, DB2, Cassandra, MongoDB) to Hadoop.

Generated ETL reports using Tableau and created statistics dashboards for Analytics.

Familiarity with common computing environment (e.g. Linux, Shell Scripting).

Familiar with Java virtual machine (JVM) and multi-threaded processing.

Detailed understanding of Software Development Life Cycle (SDLC) and sound knowledge of project implementation methodologies including Waterfall and Agile.

Development experience with Big Data/NoSQL platforms, such as MongoDB and Apache Cassandra.

Keep up to date with advances in big data technologies and run pilots to design data architectures that scale with increasing data volumes using AWS.

Strong hands-on experience with DW platforms and databases like MS SQL Servers 2012 and 2008, Oracle 11g/10g/9i, MySQL, DB2 and Teradata.

Experience in designing and coding web applications using Core Java & web Technologies- JSP, Servlets and JDBC.

Extensive experience solving analytical problems using quantitative approaches and machine learning methods in R.

Excellent knowledge in Java and SQL in application development and deployment.

Familiar with data warehousing fact and dimension tables and star schemas, combined with Google Fusion Tables for visualization.

Good working experience in PySpark and Spark Sql.

Experience in creating various database objects like tables, views, functions, and triggers using SQL.

Good team player with ability to solve problems, organize and prioritize multiple tasks.

Excellent technical, communication, analytical, and problem-solving skills, with strong troubleshooting capabilities and the ability to work well with people from cross-cultural backgrounds.

Technical Skills

Hadoop/Big Data : MapReduce, HDFS, Hive, Pig, Sqoop, Spark, Storm, Kafka, Flume, Oozie, Impala.

Amazon Web Service : EMR, S3, EC2, Athena, SageMaker, Glue, CodeBuild, SNS, Lambda

Programming Languages : Python, C, Core Java, R

Development Tools : Eclipse, IntelliJ, VS Code, SVN, Git, Maven, SOAP UI, SQL Developer, QTOAD, Snowflake

Methodologies : Agile/Scrum/Kanban, UML, Rational Unified Process

Monitoring and Reporting : Ganglia, Nagios, Custom Shell scripts.

NoSQL Technologies : Accumulo, Cassandra, MongoDB, HBase

Frameworks : MVC, Struts, Hibernate

Scripting Languages : Unix Shell Scripting, SQL

Distributed platforms : Hortonworks, Cloudera, MapR

Databases : Oracle 11g, 12c, MySQL, MS-SQL Server, Prometheus

Operating Systems : Windows XP/7/10, UNIX, Linux

Software Package : MS Office 2007/2010.

Web/Application Servers : WebLogic, WebSphere, Apache Tomcat

Visualization : Tableau, Kibana and MS Excel

Professional Experience

Client: Shaw Communications (October 2019 – Present)

Denver, CO

Role: Sr. Big Data Engineer

Description:

Advanced Analytics is a data analytics project leveraging AWS, a Big Data environment, and Data Science, with the goal of building data products and features for the Data Science ML models used for churn prediction and cross/up-sell, generating more actionable insights, engaging stakeholders to influence decision making, and running campaigns through segmentation and digital marketing.

Responsibilities:

Work in different phases of the Software Development Life Cycle to analyze, design, code, and implement high-quality, scalable solutions per business requirements.

Interface with data scientists, product managers, Architects and business stakeholders to understand data needs and help build large data products that scale across the company

Define and implement best practice approaches for processing large S3, Snowflake data sets for analytics modelling.

Develop Python scripts to automate validation, logging, and alerting for Spark applications running on AWS EMR.

Create Athena external tables for consumption by implementing bucketing and partitioning based on the date of the incremental data loads.
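
A minimal sketch of that kind of table registration, with hypothetical database, table, and S3 names (bucketing omitted for brevity); it registers a date-partitioned Parquet table through the boto3 Athena client and adds the partition for an incremental load:

    import boto3

    athena = boto3.client("athena", region_name="us-west-2")

    # Hypothetical database, table, and S3 locations, for illustration only.
    DDL = """
    CREATE EXTERNAL TABLE IF NOT EXISTS analytics.customer_events (
        customer_id string,
        event_type  string,
        event_value double
    )
    PARTITIONED BY (load_date string)
    STORED AS PARQUET
    LOCATION 's3://example-datalake/customer_events/'
    """

    def run_query(sql):
        """Submit a statement to Athena and return the query execution id."""
        resp = athena.start_query_execution(
            QueryString=sql,
            QueryExecutionContext={"Database": "analytics"},
            ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
        )
        return resp["QueryExecutionId"]

    run_query(DDL)
    # Register the partition written by the day's incremental load.
    run_query("ALTER TABLE analytics.customer_events "
              "ADD IF NOT EXISTS PARTITION (load_date='2021-03-01') "
              "LOCATION 's3://example-datalake/customer_events/load_date=2021-03-01/'")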

Collaborate with Data Scientists to implement advanced analytics algorithms that exploit our rich data sets for statistical analysis, prediction, clustering and machine learning.

Develop a Python framework to convert Hive/SQL queries into Spark transformations using Spark RDDs and DataFrames, and perform actions on datasets cached in memory.
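
As an illustration of this kind of conversion (table and column names are invented), the same aggregation can be expressed either as a HiveQL string passed to spark.sql or as equivalent DataFrame transformations, with the result cached in memory:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("hive-to-spark").enableHiveSupport().getOrCreate()

    # Original HiveQL (illustrative):
    hive_sql = """
        SELECT customer_id, SUM(amount) AS total_amount
        FROM analytics.transactions
        WHERE load_date = '2021-03-01'
        GROUP BY customer_id
    """
    sql_result = spark.sql(hive_sql)

    # Equivalent DataFrame transformations, cached for repeated actions.
    df_result = (
        spark.table("analytics.transactions")
             .filter(F.col("load_date") == "2021-03-01")
             .groupBy("customer_id")
             .agg(F.sum("amount").alias("total_amount"))
             .cache()
    )
    df_result.count()   # action that materializes the cache
    df_result.show(10)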

Develop PySpark analytical methods such as Data Modelling and Data Processing.

Develop ETL pipelines using PySpark that link various datasets and store the data in S3 as Parquet files.
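
A simplified sketch of such a pipeline, with invented dataset names and example S3 paths: two source DataFrames are joined and the result is written to S3 as date-partitioned Parquet.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("etl-to-s3").getOrCreate()

    # Illustrative source datasets already landed in the data lake.
    accounts = spark.read.parquet("s3://example-datalake/raw/accounts/")
    usage    = spark.read.parquet("s3://example-datalake/raw/usage/")

    # Link the datasets and keep only the columns needed downstream.
    curated = (
        accounts.join(usage, on="account_id", how="inner")
                .select("account_id", "plan_type", "usage_gb", "load_date")
    )

    # Store as Parquet in S3, partitioned by load date for incremental consumption.
    (curated.write
            .mode("overwrite")
            .partitionBy("load_date")
            .parquet("s3://example-datalake/curated/account_usage/"))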

Build utilities, user defined functions, libraries, and frameworks to better enable data flow patterns using tools and languages prevalent in the big data ecosystem

Implement and leverage CI/CD to rapidly build & test application code using AWS CodeBuild and CloudFormation

Develop consumption framework using Spark to ingest data from multiple sources in AWS Data Lake and store data as Parquet format in AWS S3.

Participate in ad hoc stand-up and architecture meetings to set daily priorities and track the status of work as part of a highly agile work environment.

Drive the design and build of new data models and data pipelines in production.

Build and apply analytical functions on data using PySpark DataFrames and feed the results into various ML models.

Optimize and monitor the performance of Spark applications running in production, taking corrective action and making improvements in case of failures.

Environment: Hadoop, Spark, Hive, SQL, Python, Linux, AWS EMR, Athena, S3, EC2, Glue, SNS, Lambda, CodeBuild, Shell Scripting, VS Code, Bitbucket, Snowflake

Client: UnitedHealth Group/Optum (October 2018 – September 2019)

Eden Prairie, MN

Role: Sr. Hadoop Engineer

Description:

The Risk Analytics project is the analytics engine that ingests membership and claims data from UHG's member management database into the Hadoop ecosystem, runs analytics to process claims correctly, and removes reversals and duplicates in claims (pharmacy, lab, and medical claims).

Responsibilities:

Collaborate with technology leads and architects in conceptualizing objectives and develop new Hadoop applications to process large datasets in agile environment.

Create Hive external tables for consumption and store data in HDFS partitions as ORC, Parquet and Text file formats.

Implement Sqoop scripts and create jobs to import data from RDBMS sources (Oracle, SQL) into HBase tables using incremental and full-refresh imports.
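
A hedged sketch of what one such Sqoop invocation could look like, wrapped in Python; the connection string, table names, and check column are placeholders, not the actual production values:

    import subprocess

    # Placeholder connection and table names for illustration.
    sqoop_cmd = [
        "sqoop", "import",
        "--connect", "jdbc:oracle:thin:@//oracle-host:1521/CLAIMSDB",
        "--username", "etl_user",
        "--password-file", "/user/etl/.oracle.password",
        "--table", "MEMBER_CLAIMS",
        "--hbase-table", "member_claims",
        "--column-family", "cf",
        "--hbase-row-key", "CLAIM_ID",
        "--incremental", "append",        # incremental import
        "--check-column", "CLAIM_SEQ",    # monotonically increasing column
        "--last-value", "0",
        "-m", "4",
    ]

    # Run the import; a full refresh would simply drop the incremental flags.
    subprocess.run(sqoop_cmd, check=True)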

Develop python framework to convert Hive/SQL queries into Spark transformations using Spark RDDs, Spark SQL and Datasets and perform aggregations on data stored in memory.

Design data warehouse using Hive external tables for Job Control & Data Quality to track job history and successful completion of production jobs.

Develop Python code to gather the data from HBase and implement machine learning solutions using PySpark.

Design and implement complex HiveQL scripts using CDC logic, window functions, joins, and partitions to process complex data sets per business requirements.
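
For example (with invented table and column names), a typical change-data-capture style dedup keeps only the latest record per claim with a window function, shown here as Spark SQL against the Hive table:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("claims-cdc").enableHiveSupport().getOrCreate()

    latest_claims = spark.sql("""
        SELECT *
        FROM (
            SELECT c.*,
                   ROW_NUMBER() OVER (
                       PARTITION BY claim_id
                       ORDER BY updated_ts DESC
                   ) AS rn
            FROM claims_db.claims_raw c
        ) t
        WHERE rn = 1          -- keep only the most recent version of each claim
    """)
    latest_claims.write.mode("overwrite").saveAsTable("claims_db.claims_current")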

Develop Spark SQL applications in PySpark to build analytics on incoming claims data.

Write an exporter module in Go to push YARN application and scheduler data into the Prometheus time-series database to generate reports.

Develop an exporter framework in Python to pull scheduler and job-history data from the YARN REST APIs, store it in HBase tables, and generate custom alerts to monitor SLA-bound Hadoop applications.
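
A minimal sketch of that kind of exporter, assuming a placeholder ResourceManager host and SLA threshold; it polls the YARN ResourceManager REST API for running applications and flags any that have exceeded the SLA window:

    import requests

    RM_URL = "http://resourcemanager-host:8088"   # placeholder ResourceManager address
    SLA_MINUTES = 60                              # illustrative SLA threshold

    def long_running_apps():
        """Return running YARN applications that have exceeded the SLA window."""
        resp = requests.get(f"{RM_URL}/ws/v1/cluster/apps", params={"states": "RUNNING"})
        resp.raise_for_status()
        apps = (resp.json().get("apps") or {}).get("app", [])
        return [
            {"id": app["id"], "name": app["name"], "minutes": app["elapsedTime"] / 60000}
            for app in apps
            if app["elapsedTime"] > SLA_MINUTES * 60 * 1000
        ]

    for app in long_running_apps():
        # In the real framework this would raise a custom alert instead of printing.
        print(f"SLA breach: {app['name']} ({app['id']}) running {app['minutes']:.0f} min")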

Manage and schedule jobs by defining Hive, Spark, Sqoop, and Python actions on the Hadoop cluster using Oozie workflows and the Oozie Coordinator engine.

Responsible for data extraction and data ingestion from different data sources into HDFS by creating ETL pipelines.

Perform importing and exporting data into HDFS and Hive using Sqoop.

Utilize Spark SQL to extract and process data by parsing Datasets or RDDs with transformations and actions (map, flatMap, filter, reduce, reduceByKey).
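
A small illustrative example of those RDD transformations and actions (the file path and record layout are hypothetical), counting claims per member from a delimited file:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("claims-rdd").getOrCreate()
    sc = spark.sparkContext

    # Hypothetical pipe-delimited claim records: member_id|claim_id|claim_type|amount
    lines = sc.textFile("hdfs:///data/claims/2019/03/*.txt")

    claims_per_member = (
        lines.map(lambda line: line.split("|"))
             .filter(lambda fields: len(fields) == 4)        # drop malformed rows
             .map(lambda fields: (fields[0], 1))             # (member_id, 1)
             .reduceByKey(lambda a, b: a + b)                # count claims per member
    )

    total_claims = claims_per_member.map(lambda kv: kv[1]).reduce(lambda a, b: a + b)
    print(claims_per_member.take(5), total_claims)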

Develop Oozie actions such as Hive, Spark, and Java actions to submit and schedule applications on the Hadoop cluster.

Resolve Spark and YARN resource management issues, including shuffle failures, out-of-memory and heap space errors, and schema compatibility problems.
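
Fixes for those failures usually come down to configuration; a hedged example of the kinds of settings involved (the values are illustrative, not the production ones):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("claims-analytics")
        # More shuffle partitions spreads wide aggregations and eases shuffle spills.
        .config("spark.sql.shuffle.partitions", "400")
        # Executor heap plus off-heap overhead to avoid YARN container OOM kills.
        .config("spark.executor.memory", "8g")
        .config("spark.executor.memoryOverhead", "2g")
        .config("spark.executor.cores", "4")
        .config("spark.dynamicAllocation.enabled", "true")
        # Raise or lower the broadcast threshold when small-table joins misbehave.
        .config("spark.sql.autoBroadcastJoinThreshold", str(50 * 1024 * 1024))
        .getOrCreate()
    )

    # Schema-compatibility issues with evolving Parquet data can be eased by
    # merging schemas at read time (at some cost in planning time).
    df = spark.read.option("mergeSchema", "true").parquet("hdfs:///data/claims/parquet/")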

Monitor and troubleshoot performance of applications and take corrective actions in-case of failures and evaluate possible enhancements to meet SLAs

Environment: Hadoop, MapR, Linux, Hive, MapReduce, HDFS, Spark, Hbase, Shell Scripting, Sqoop, Go, Python, Jira, Oracle Database, Grafana, Prometheus, VS Code, IntelliJ

Client: Charter Communications (February 2018 – September 2018)

Greenwood Village, CO

Role: Hadoop Developer

Description:

This project involves building a new-generation analytics platform for the Application Platform Operations team using the Hadoop Big Data ecosystem to analyze data for 26 million residential and business customers, coming from device provisioning, order events, service activation tickets, billing data, voice log data, and other sources, and providing the data to downstream applications to generate dashboards and reports for a better understanding of order flow-through and fallouts.

Responsibilities:

Consult leadership/stakeholders to share design recommendations and thoughts to identify product and technical requirements, resolve technical problems and suggest Big Data based analytical solutions.

Implement solutions for ingesting data from various sources and processing the Datasets utilizing Big Data technologies such as Hadoop, Hive, Kafka, Map Reduce Frameworks and Cassandra

Installation and maintenance of Hadoop and Spark clusters for both dev and production environments.

Analyze source systems such as Oracle RDBMS tables, perform data modelling and source-to-target mapping, and build data pipelines to ingest data into Hadoop per business requirements.

Worked in AWS environment for development and deployment of Custom Hadoop Applications.

Installing and configuring EC2 instances on Amazon Web Services (AWS) for establishing clusters on cloud

Design and develop real-time data streaming solutions using Apache Kafka and build data pipelines to store big datasets in NoSQL databases like Cassandra.

Creation of Cassandra keyspaces and tables using map collections to store JSON records.
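
A sketch of that kind of schema using the Python Cassandra driver, with placeholder keyspace, table, and host names; the map column holds the variable JSON attributes of each record:

    from cassandra.cluster import Cluster

    cluster = Cluster(["cassandra-host"])          # placeholder contact point
    session = cluster.connect()

    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS provisioning
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
    """)

    # A map<text, text> column keeps the variable JSON attributes of each event.
    session.execute("""
        CREATE TABLE IF NOT EXISTS provisioning.order_events (
            order_id   text,
            event_time timestamp,
            attributes map<text, text>,
            PRIMARY KEY (order_id, event_time)
        )
    """)

    session.execute(
        "INSERT INTO provisioning.order_events (order_id, event_time, attributes) "
        "VALUES (%s, toTimestamp(now()), %s)",
        ("ORD-1001", {"status": "ACTIVATED", "region": "CO"}),
    )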

Develop Kafka Producers and Consumers from scratch as per the business requirements.
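
A minimal producer/consumer pair sketched with the kafka-python client (the broker address, topic, and payload fields are made up for illustration; the project itself used Java):

    import json
    from kafka import KafkaProducer, KafkaConsumer

    BROKERS = "kafka-broker:9092"        # placeholder broker list
    TOPIC = "order-events"               # placeholder topic

    producer = KafkaProducer(
        bootstrap_servers=BROKERS,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send(TOPIC, {"order_id": "ORD-1001", "status": "ACTIVATED"})
    producer.flush()

    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKERS,
        group_id="order-event-loader",
        auto_offset_reset="earliest",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for message in consumer:
        # Downstream, each record would be written to Cassandra.
        print(message.value["order_id"], message.value["status"])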

Implement a POC using Apache Kafka and Spark with Java to parse Real time data coming from event logs and store into Cassandra tables to generate reports.

Develop data pipeline using Sqoop to ingest billing and order events data from Oracle tables into Hive tables.

Create Flume source and sink agents to ingest log files from an SFTP server into HDFS for analysis.

Create Hive tables per requirements, either internal or external, defined with appropriate static and dynamic partitions for efficiency.

Create visualizations of Cassandra datasets in Apache Zeppelin to study customers' order summary reports.

Develop end-to-end data processing pipelines, from receiving data via distributed messaging systems (Kafka) through persistence of the data into Cassandra.

Extraction of key-value pairs from XML files using Spark and storage of the data in Cassandra tables.

Implement Spark applications and Spark SQL responsible for creating RDDs and DataFrames from large datasets and caching them for faster querying and processing of data.

Worked on Spark SQL, creating DataFrames by loading data from Hive tables, preparing the data, and storing it in AWS S3.

Optimize the performance of Hive joins and queries by creating partitions and bucketing in Hive and using map-side joins for large data sets.
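
In Spark terms, the map-side join corresponds to broadcasting the small reference table; a brief sketch with invented table names:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import broadcast

    spark = SparkSession.builder.appName("join-optimization").enableHiveSupport().getOrCreate()

    orders = spark.table("ops.order_events")          # large, partitioned fact table
    products = spark.table("ops.product_reference")   # small dimension table

    # Broadcasting the small table gives a map-side join and avoids a shuffle.
    enriched = orders.join(broadcast(products), on="product_code", how="left")

    # Writing back partitioned and bucketed keeps later queries pruned and fast.
    (enriched.write
             .mode("overwrite")
             .partitionBy("event_date")
             .bucketBy(32, "account_id")
             .sortBy("account_id")
             .saveAsTable("ops.order_events_enriched"))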

Scheduling jobs using Oozie actions like Shell action, Spark action and Hive action.

Environment: Hadoop, Hortonworks, Linux, Hive, MapReduce, HDFS, Kafka, Spark, Cassandra, Shell Scripting, Sqoop, Java, Maven, Spring Framework, Jira, Zeppelin, Oracle Database, AWS S3, EC2, Redshift.

Client: Comcast (July 2016 – January 2018)

Englewood, CO

Role: Hadoop Developer

Description:

Comcast Corporation is a global media and technology company providing high-speed internet, cable, and mobile services to around 30 million customers. Datacast is a service for frontend applications (Xfinity, Comcast Business self-service, etc.) that displays information and offers promotions, discounts, incentives, and advertisements to Comcast customers during their logged-in sessions so that we can better support them. This project involves MELD (Massive Event Log Data processing), an Enterprise Data Warehouse application built by Comcast in a multitenant Hadoop environment. MELD utilizes many Hadoop components and related facilities to coordinate processing of large amounts of data from all domains of the business, such as digital marketing, network, billing, and usage, from various sources such as SQL Server, Oracle, Teradata, routers, switches, and video, and comprises real-time processing. Millions of records are extracted, transformed, and loaded using streaming applications and stored in Hadoop, where MapReduce jobs perform aggregations. In addition, the project requires hosting Rosetta data on GridGain caching (in-memory store) servers and providing access to the data through a REST API web service.

Responsibilities:

Perform best practices for the complete software development life cycle including analysis, design, coding standards, code reviews, source control management and build processes

Work collaboratively with all levels of business stakeholders to architect, implement and test Big Data based analytical solution from disparate sources

Coordinate with the Architects, Manager and Business team on current programming tasks.

Document and demonstrate solutions by developing documentation, flowcharts, layouts, and code comments

Producing detailed specifications and writing program code.

Testing the code in controlled, real situations before deploying into production.

Perform daily analysis on Teradata/Oracle source databases to implement ETL logic and data profiling.

Import data from different data sources like Teradata and Oracle into HDFS using Sqoop, performing transformations using Hive and MapReduce before loading the data into HDFS.

Process data ingested into HDFS using Sqoop and custom HDFS adaptors, analyze the data using Spark, Hive, and MapReduce, and deliver summary results from Hadoop to downstream systems.

Develop simple to complex MapReduce streaming jobs using Java language for processing and validating the data.
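
The validation step in such a job can be illustrated with a Hadoop Streaming mapper/reducer; this sketch is written in Python rather than the Java used on the project, and the record layout and paths are invented:

    #!/usr/bin/env python
    """Hadoop Streaming mapper/reducer for validating delimited records.
    Illustrative launch: hadoop jar hadoop-streaming.jar -file validate.py \
        -mapper 'validate.py map' -reducer 'validate.py reduce' \
        -input /data/raw -output /data/valid_counts
    """
    import sys

    def mapper():
        # Emit (account_id, 1) for records that pass a simple validation rule.
        for line in sys.stdin:
            fields = line.rstrip("\n").split("\t")
            if len(fields) == 5 and fields[0]:
                print(f"{fields[0]}\t1")

    def reducer():
        # Keys arrive grouped and sorted; sum the counts per account_id.
        current, count = None, 0
        for line in sys.stdin:
            key, value = line.rstrip("\n").split("\t")
            if current is not None and key != current:
                print(f"{current}\t{count}")
                count = 0
            current, count = key, count + int(value)
        if current is not None:
            print(f"{current}\t{count}")

    if __name__ == "__main__":
        mapper() if sys.argv[1] == "map" else reducer()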

Build applications using Maven and integrate with Continuous Integration servers like Jenkins.

Create/Modify Shell scripts for scheduling data cleansing scripts and ETL loading process.

Solved performance issues in Hive and Pig scripts with an understanding of joins, grouping, and aggregation and how they translate to MapReduce jobs.

Create Sqoop jobs with incremental load to populate Hive External tables with partitions and bucketing enabled.

Implement ETL logic, handling failover scenarios, to ingest data from external sources through SFTP servers into HDFS.

Transferred the data using Informatica tool from AWS S3 to AWS Redshift. Involved in file movements between HDFS and AWS S3.

Developed end-to-end data processing pipelines that begin with receiving data using distributed messaging systems Kafka through persistence of data into HBase.

Develop Spark applications to perform all the data transformations on User behavioral data coming from multiple sources.

Used the DataStax Spark-Cassandra connector to build a data ingest pipeline from Cassandra to the Hadoop platform.

Create Hive queries, which help market analysts to spot emerging trends and promotions by comparing fresh data with reference tables and historical metrics.

Create components like Hive UDFs for missing functionality in HIVE for analytics.

Work on Hadoop cluster with Active directory and Kerberos implemented to establish SSO connection with applications and perform various operations.

Work on various performance optimizations like using distributed cache for small datasets, Partition, Bucketing in Hive and Map Side joins.

Schedule jobs using UC4 automation framework to run daily, weekly and monthly basis.

Design and develop Map Reduce jobs to process data coming in different file formats like XML, CSV, flat files and JSON.

Implement an Elasticsearch instance to read log files from servers, generate reports, and visualize the data on a Kibana dashboard.

Monitor the applications health and performance using Nagios system. Provide production support and troubleshoot the requests from end-users.

Participate in triage calls to handle and resolve defects reported by the Admin and QA teams.

Environment: Hadoop, Hortonworks, Linux, Hive, MapReduce, HDFS, HBase, Pig, Shell Scripting, Sqoop, Hue, Ambari, Yarn, Tez, Accumulo, GridGain, AWS S3, EC2, Python, Git, Kafka, Storm, Apache Spark, Elastic Search, Kibana, Kerberos, Nagios, Toad for Oracle, Teradata SQL Developer.

Client: GEICO (October 2015 – June 2016)

Chevy Chase, MD

Role: Hadoop Developer

Description:

GEICO is one of the largest auto insurance companies, serving more than 13 million auto policies nationwide.

The core database for the interface was built using MySQL to load and query data as required. As the data started to grow (both structured and unstructured), the requirement for data analysis and data mining became apparent. This resulted in implementing a Hadoop system using a few instances for gathering and analyzing log files. The idea was to better understand the customer base, buying habits, promotional effectiveness, inventory management, buying decisions, etc. We developed a Hadoop MapReduce job that captures the data and uses the JGraph API to display it.

Responsibilities:

Devised and led the implementation of the next-generation architecture for more efficient data ingestion and processing.

Worked with highly unstructured and semi structured data of 90 TB in size with replication factor of 3.

Developed simple to complex Map Reduce streaming jobs using Java language for processing and validating the data.

Developed data pipeline using Map Reduce, Flume, Sqoop and Pig to ingest customer behavioral data into HDFS for analysis.

Extensive experience in writing Pig scripts to transform raw data from several data sources into forming baseline data.

Created Hive queries that helped market analysts spot emerging trends by comparing fresh data with EDW reference tables and historical metrics.

Enabled speedy reviews and first-mover advantages by using Oozie to automate data loading into the Hadoop Distributed File System and PIG to pre-process the data.

Handled importing data from different data sources into HDFS using Sqoop and performed transformations using Hive and MapReduce before loading the data into HDFS.

Provided design recommendations and thought leadership to sponsors/stakeholders, which improved review processes, resolved technical problems, and suggested solutions based on a Lambda architecture.

Collecting and aggregating large amounts of log data using Flume and staging data in HDFS for further analysis.

Good working knowledge of Amazon Web Service components like EC2, EMR, S3 etc.

Used Hive to analyze the partitioned and bucketed data and compute various metrics for reporting.

Solved performance issues in Hive and Pig scripts with an understanding of joins, grouping, and aggregation and how they translate to MapReduce jobs.

Created and worked Sqoop jobs with incremental load to populate Hive External tables.

Developed Hive scripts in Hive QL to de-normalize and aggregate the data.

Implemented Spark using Python (PySpark) and Spark SQL for faster testing and processing of data.

Experience in developing regression models in R for statistical analysis.

Experience in writing R or Python code using current best practices, including reproducible research and web-based data visualization.

Identify concurrent job workloads that may affect or be affected by failures or bottlenecks.

Developed some utility helper classes to get data from HBase tables.

Scheduled and executed workflows in Oozie to run Hive and Pig jobs.

Professional experience with NoSQL HBase solutions to solve real world scaling problems.

Attending daily status calls to follow scrum process to complete each user story within the timeline.

Implemented Cluster for NoSQL tools Cassandra, MongoDB as a part of POC to address HBase limitations.

Worked on Implementation of a toolkit that abstracted Solr & Elasticsearch.

Viewed various aspects of the cluster using Cloudera Manager.

Environment: Hadoop, Linux, CDH4, MapReduce, HDFS, HBase, Hive, Pig, Shell Scripting, Sqoop, Python, Java 7, MySQL, NoSQL, Eclipse, Oracle 11g, Maven, Log4j, Git, Kafka, Storm, Apache Spark, Elastic Search, Solr, Datameer.

Client: Acxiom (August 2013 – December 2014)

Conway, AR

Role: Hadoop Developer

Description:

The company wanted to retire the legacy SQL Server database due to an increasing customer base and growing data. Hadoop exactly fit the requirement: the transaction data was published to JMS with the transaction event as the payload, and our project was to listen to this JMS queue, shred the transaction events, and update reporting tables in near real time.

Responsibilities:

Loaded data from different data sources (Teradata, DB2, Oracle, and flat files) into HDFS using Sqoop and loaded it into partitioned Hive tables.

Created different Pig scripts and wrapped them as shell commands to provide aliases for common operations in the project business flow.

Implemented various Hive queries for analysis and called them from a Java client engine to run on different nodes.

Created a few Hive UDFs as well to hide or abstract complex repetitive rules.

Developed Oozie workflows for daily incremental loads, which get data from Teradata and import it into Hive tables.

Involved in End to End implementation of ETL logic.

Reviewed ETL application use cases before onboarding them to Hadoop.

Developed bash scripts to bring log files from the FTP server and process them to load into Hive tables.

Developed Map Reduce programs for applying business rules to the data.

Implemented Apache Kafka as a replacement for a more traditional message broker (JMS Solace) to reduce licensing costs, decouple processing from data producers, and buffer unprocessed messages.

Created HBase tables and column families to store the user event data.

Wrote automated HBase test cases for data quality checks using HBase command-line tools.

Implemented a receiver-based approach, working on Spark Streaming and linking with the StreamingContext using the Java API, handling proper closing and waiting for stages as well.
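
A rough PySpark equivalent of that streaming setup (the project used the Java API, and the source here is a simple socket rather than the actual receiver), showing the StreamingContext lifecycle including a graceful stop:

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    sc = SparkContext(appName="transaction-events")
    ssc = StreamingContext(sc, 10)                      # 10-second micro-batches

    # Placeholder source; the real job attached a receiver to the message queue.
    events = ssc.socketTextStream("event-host", 9999)

    counts = (events.flatMap(lambda line: line.split(","))
                    .map(lambda event_type: (event_type, 1))
                    .reduceByKey(lambda a, b: a + b))
    counts.pprint()

    ssc.start()
    # Wait for batches to run, then close down gracefully so in-flight
    # stages finish before the context stops.
    ssc.awaitTerminationOrTimeout(600)
    ssc.stop(stopSparkContext=True, stopGraceFully=True)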

Maintaining Authentication module to support Kerberos.

Experience in Implementing Rack Topology scripts to the Hadoop Cluster.

Participated with the admin team in designing and upgrading CDH 3 to HDP 4.

Developed helper classes that abstract the Cassandra cluster connection and act as a core toolkit.

Enhanced existing modules written as Python scripts.

Used dashboard tools like Tableau.

Environment: Hadoop, Linux, MapReduce, HDFS, HBase, Hive, Pig, Tableau, NoSQL, Shell Scripting, Sqoop, Java, Eclipse, Oracle 10g, Maven, open-source technologies (Apache Kafka, Apache Spark), ETL, Hazelcast, Git, Mockito, Python.

Client: McKesson (November 2012 – July 2013)

San Francisco, CA

Role: Hadoop Developer

Description:

Implemented & Re-architect an application called GRID-AC & GDDN, Introduced new flow with the help of open source tools like Hadoop, Cassandra & Hazel cast. To replaced licensed product like oracle golden gate & Solace hardware & software. It a central data movement platform.

Responsibilities:

Installed and configured Hadoop Ecosystem components and Cloudera manager using CDH distribution.

Frequent interactions with Business partners.

Designed and developed a Medicare-Medicaid claims system using Model-driven architecture on a customized framework built on Spring.

Moved data from HDFS to Cassandra using Map Reduce and BulkOutputFormat class.

Imported trading and derivatives data in Hadoop Distributed File System and Eco System (MapReduce, Pig, Hive, Sqoop).

Involved in loading and transforming large sets of Structured, Semi-Structured and Unstructured data and analyzed them by running Hive queries and Pig scripts.

Created tables in HBase and loaded data into them.

Developed scripts to load data from HBase to Hive Meta store and perform Map Reduce jobs.

Was part of an activity to set up the Hadoop ecosystem in the dev and QA environments.

Managed and reviewed Hadoop Log files.

Responsible for writing Pig scripts and Hive queries for data processing.

Running Sqoop to import data from Oracle and other databases.

Creation of shell script to collect raw logs from different machines.

Created static and dynamic partitions in Hive.

Implemented Pig Latin scripts using operators such as LOAD, STORE, DUMP, FILTER, DISTINCT, FOREACH, GENERATE, GROUP, COGROUP, ORDER, LIMIT, and UNION.

Optimized the Hive tables using optimization techniques like partitions and bucketing to provide better performance with Hive QL queries.

Defined Pig UDFs for financial functions such as swaps, hedging, speculation, and arbitrage.

Coded many MapReduce programs to process unstructured log files.

Worked on Import and export data into HDFS and Hive using Sqoop.

Used different data formats (Text format and Avro format) while loading the data into HDFS.

Used parameterized Pig scripts and optimized them using ILLUSTRATE and EXPLAIN.

Involved in configuring HA, handling Kerberos security issues, and performing NameNode failure restoration activities from time to time to maintain zero downtime.

Implemented FAIR Scheduler as well.

Environment: Hadoop, Linux, MapReduce, HDFS, HBase, Hive, Pig, Shell Scripting, Sqoop, CDH Distribution, Windows, Linux, Java 6, Eclipse, Ant, Log4j and Junit

Client: Fidelity Investments (June 2011 – October 2012)

Boston, MA

Role: Java/J2EE Developer

Description:

Action Response Solution is a robust workflow tool providing end-to-end notification response event management. This comprehensive solution creates automated notification events for eligible corporate action types using configurable templates, which are released to portfolio managers, delegates and other intended parties.

Responsibilities:

Wrote design documents based on requirements from the MMSEA user guide.

Performed requirement gathering, design, coding, testing, implementation and deployment.

Worked on modeling of Dialog process, Business Processes and coding Business Objects, Query Mapper and JUnit files.

Involved in the design and creation of Class diagrams, Sequence diagrams and Activity Diagrams using UML models

Created the Business Objects methods using Java and integrating the activity diagrams.

Involved in developing JSP


