NITHYA YOGESH
Certified Salesforce Administrator
Suwanee, GA
Email: ******.**@*****.***
Phone: 470-***-****
https://linkedin.com/in/Nithya-Yogesh
https://trailhead.salesforce.com/me/nithyayogesh
Summary:
Certified Salesforce Administrator.
Detail-oriented, responsible, and committed engineer with a get-it-done, on-time, high-quality mindset and more than 8 years of IT experience in the semiconductor and data storage industries, with an automation testing background.
Certifications: Salesforce Administrator (September 5, 2020)
Education: Master's in Astrophysics, Bangalore University (2003–2005)
Bachelor's in Computer Science, Bangalore University (2000–2003)
Skills: Salesforce administration, Salesforce integration, Data Loader, requirements gathering, business analysis, and report and dashboard building.
Software testing (integration, system, functional, regression, stress, performance), automation testing, scripting, quality assurance, test case design, and defect management.
Programming Languages: C, Perl scripting, shell, Unix/Linux, embedded C, SQL, Apex
Tools: Version control tools (SVN, VSS, Perforce), BURT, HP Quality Center, Eclipse; SQL basics
Experience:
Salesforce.com Administrator, Atlanta, GA (Current)
●Administer the Salesforce environment; responsibilities include customizing and implementing profiles, roles, security settings, sharing rules, applications, custom objects, custom fields, page layouts, workflows, validation rules, approvals, dashboards, and reports.
●Manage campaigns in Salesforce and create all sales-related campaigns in Salesforce.com. Procedures include detailed analysis of target markets, matching lists to Salesforce.com, and creating new campaign masters, accounts, members, contacts, contact roles, and opportunities, as well as building campaign-monitoring dashboard charts and reports in Salesforce.com.
●Experienced in Salesforce point-and-click configuration using workflows, flows, validation rules, sales process setup, roles and profiles, and reports and dashboards.
●Work with Salesforce developers on system extensions, customizations and integrations.
●Experienced in building complex, scalable, high-performance software systems that have been successfully delivered to customers.
●Develop strong relationships with sales management to ensure proper use of the application
●Participate in the product development lifecycle with Product Management, IT Security/Compliance, Sales, Marketing, Customer Service, Account Managers, and Development.
●Convey technical information to non-technical customers, and listen well enough to translate in the other direction.
●Familiar with agile software delivery methodologies such as Scrum.
●Motivated problem solver with high attention to detail and outstanding analytical skills.
●Strong communication and problem-solving skills to provide a high level of customer service to Salesforce users.
Automation Test Engineer, NetApp, California (Jan 2009 – 2013)
The Service Processor (SP) is a module that provides system status and assists with problem resolution as long as the system has AC power. It is a system-independent resource for monitoring and managing numerous system parameters such as voltages, fan speeds, and currents. The SP was developed by combining the features of the existing BMC (Baseboard Management Controller) and RLM (Remote LAN Management) modules, to bring forward the best of both. The module was under development at the time, and I was involved in validating all of its features on upcoming Boilermaker releases of the ONTAP operating system and on the Carnegie and Absolut series of platforms.
Responsibilities:
Analyzed the requirements and identified testable and non-testable features.
Segregated testable features into those that could be automated and manual test cases.
Prepared validation test plans for Service Processor features such as GDB, H/W assist, SP CLI, Phoenix, system watchdog, SNMP traps, environmental monitoring, and state management.
Designed every test plan to contain the scope of the test case, reference documents, a test case summary, verification items, open BURTs on the feature (if still under development), and the required libraries.
Using the test plans, manually executed every test case on upcoming ONTAP releases.
Used agile methodology for testing ONTAP.
Scripted frameworks in Perl and executed them in the NATE testing environment.
Tracked defects/BURTs effectively until they were fixed.
Uploaded logs to the QC repository.
Participated in analysis, walkthroughs, inspections, code reviews, and user group meetings.
The Platform Quality Assurance (PLQA) team assures the quality of all NetApp storage platforms as well as various features of the ONTAP operating system and the WAFL file system. The objective of the PLQA team is to perform functional and feature testing in stage 1 and regression and release testing in stage 2, both manually and through automated test cases, thereby assuring the quality of every platform and every ONTAP version released to market.
Involved mainly in designing and automating test cases, as well as creating and upgrading libraries for the QA team, using Perl along with NATE scripting.
Responsibilities:
Provided estimates for the different levels of test activities.
Created and executed front-end tests for maintenance and new feature releases.
Created front-end test automation suites using Perl and NATE scripts.
Set up the NATE environment using tharnhost and ntest files.
Wrote test plans.
Wrote and executed test cases.
Created libraries and upgraded existing libraries by adding functions.
Sent status reports on testing progress.
Participated in release management activities.
Identified bug trends and drove the resolution process.
Participated in reviews and discussions and contributed to the overall SDLC.
Worked with the development team to resolve issues found during testing.
RAID error propagation (REP) is a functionality of the WAFL file system in Data ONTAP that allows the system to avoid marking the aggregate WAFL-inconsistent and panicking when an unrecoverable error occurs on a disk; such errors could otherwise result in the loss of data stored on the disks.
The project involved creating prototypes of various situations in which WAFL is marked dirty, then checking the data stored on the disks for reliability: the data is retrieved and cross-verified against the actual data to determine whether the disks are reliable. Such test cases are run against various configurations and features, such as FAKE_LA and MIRRORED volumes, on virtual simulators as well as on real hardware (NAS/SAN storage devices).
Responsibilities:
Developed and maintained test automation scripts for test cases covering various types of RAID disks and storage resiliency features in NetApp Data ONTAP.
Helped develop entry and exit criteria and defined the pass and fail standards.
Manually tested the REP feature test cases and captured the console logs.
Tested all REP test cases under different configurations such as VSIM, FAKE_LA, AGGR, and MIRROR volumes.
Performed positive, negative, and end-to-end testing.
Uploaded logs to the QC repository.
Developed pseudocode from the console logs.
Automated these frameworks using Perl and NATE.
Executed the automated scripts on different configurations (32-bit/64-bit/FAKE_LA/VSIM).
Executed all test cases through the XANT GUI as a batch file or through a plan file.
Fixed code review comments.
Reported bugs and maintained a bug log.
Tested newly developed features as well as enhancements and bug fixes.
Executed tests on new and old products and technologies and gathered reports/results.
Worked with the development team to resolve issues found during testing.
Software Engineer, Mindteck, Bangalore (2007 – 2009)
●Prepared the Software Requirements Specification and verified software requirements against system requirements. Ported the existing firmware from the Nucleus RTOS to uClinux, with minimal modifications to the existing analyzer application, for the microprocessor board redesigned around the Freescale ColdFire MCF52277 processor. Also developed a device driver for an NVRAM card.
●Devised two sets of protocol commands, one for communication between the master and slave processors and one between the master module (CCM) and the user on the host system, and implemented the protocol as a parser module in embedded C. Designed the "Analyses module" application and developed it in C to analyze the data acquired from the gas analyzer.
Research Intern, National Aerospace Laboratories, Bangalore (2005 – 2007)
This involved modifying various modules of the meteorological code "Varsha GCM" to display computed MET variables, refine algorithms, and increase the range of predictability.
It also required graphical representation of atmospheric parameters, such as wind velocity and vorticity, at different levels from the "Varsha GCM" code, using GNU plotting in a UNIX computational environment.
Responsibilities:
Plotted atmospheric parameters obtained using the VARSHA GCM code.
Implemented the existing algorithms in C and C++ to improve VARSHA GCM.
Implemented double- and multi-precision accuracy for atmospheric parameters.