
Machine Learning / Data Science

Location:
Charlotte, NC
Salary:
75/hr
Posted:
March 07, 2024


Resume:

Padmelaxmi Viswanathan

+1-704-***-****

ad36mb@r.postjobfree.com

https://www.linkedin.com/in/padmelaxmi

EDUCATION

Birla Institute of Technology and Science, Pilani, India

Oct 2020 – Sept 2022

Master of Technology in Data Science and Engineering

Relevant Coursework: Machine Learning, Data Mining, Artificial Intelligence, Natural Language Processing, Deep Learning, Information Retrieval, Graph Networking and Processing, Big Data Systems, Stream Processing and Analytics, Data Visualization

Pondicherry Engineering College, Pondicherry, India

Jun 2004 – Apr 2008

Bachelor of Technology in Civil Engineering

Relevant Coursework: Mathematical Foundation, Statistics, C, C#, Java

PROFESSIONAL SUMMARY

Data Scientist experienced in delivering data-driven solutions, with strong knowledge of Data Analytics, Data Mining, Machine Learning (ML), Predictive Modeling, Natural Language Processing (NLP) and Graph Mining

Proficient in all aspects of the data science project life cycle, including Data Extraction, Data Pre-Processing, Feature Engineering, Dimensionality Reduction, Statistical Modeling, Model Testing & Validation, Implementation and Data Visualization

Spearheaded multiple high-profile Machine Learning and Agile projects, collaborating with stakeholders across multiple business and tech teams, vendors and senior management

Proficient in Machine Learning techniques such as Linear and Logistic Regression, Decision Trees, Random Forest, SVM, Naive Bayes, K-Nearest Neighbors, Bayesian methods, AdaBoost and XGBoost for forecasting and predictive analytics on structured and unstructured data

Expert in building Computer Vision and Deep Learning applications

Expertise in Natural Language Processing (NLP) in Python, including tokenization, lemmatization, stemming, part-of-speech tagging and sentiment analysis

Proficient in deep learning models like CNN, RNN, LSTM

Experience in using various Python packages such as Pandas, Keras, PyTorch, TensorFlow, Scikit-learn, OpenCV, NLTK, SciPy, NumPy, Matplotlib, Plotly and Seaborn

Experience in productionizing Machine Learning pipelines on Google Cloud Platform that perform data extraction, data cleaning, model training and model updates based on performance metrics

Achieved Continuous Integration and Continuous Deployment (CI/CD) for applications using Git, Subversion, Jenkins and Fast ARM

Expertise in creating data visualizations, dashboards and ad-hoc reports in Power BI, Tableau, Splunk, ELK and Microsoft Excel

Experience in building data warehouses and data marts that feed Power BI reports visualizing key business performance indicators

Well versed in programming and debugging code in Python, Unix shell scripting, COBOL and other mainframe languages

As a technical lead, ensured end-to-end on-time delivery while meeting or exceeding quality and security standards

Equipped with technical and business acumen to arrive at design-related and data-driven decisions

Excellent leadership and managerial skills, including experience driving teams across multiple geographic locations

Proficient in tracking and delivering multiple projects in parallel using Agile methodology

Strong skills in identifying process/performance improvements and automating repetitive processes by providing innovative value additions and cost saves

Experience in evaluating applications and platforms by conducting performance tests, identifying feasible alternative solutions and providing recommendations

As a Security Champion, proficient in assessing application and vendor software code for vulnerabilities and remediating findings

Strong analytical, problem-solving, verbal and written communication, presentation and interpersonal skills
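As a toy illustration of the tokenization and stemming steps named in this summary: the sketch below is a minimal pure-Python version, with invented suffix rules; a real pipeline would use NLTK's word_tokenize and PorterStemmer (or a lemmatizer) instead.

```python
import re

def tokenize(text):
    # Lowercase and split on non-alphanumeric runs (toy tokenizer;
    # NLTK's word_tokenize handles punctuation and contractions properly)
    return re.findall(r"[a-z0-9]+", text.lower())

def stem(token):
    # Naive suffix-stripping stemmer for illustration only; the suffix
    # list is hypothetical, not Porter's actual rules
    for suffix in ("ing", "ly", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

text = "The analysts reviewed trending topics"
stems = [stem(t) for t in tokenize(text)]  # e.g. "trending" -> "trend"
```

Over-stemming (e.g. "flagged" -> "flagg") is exactly why production pipelines prefer a lemmatizer over suffix stripping.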

TECHNICAL SKILLS

Programming Languages

Python, SQL, Ab Initio, UNIX shell scripting, COBOL, REXX, Easytrieve, PowerShell

Software Methodologies

Agile and Waterfall

Database Systems

Oracle, PostgreSQL, SQL Server, Snowflake, DB2, IMS

Operating Systems

Red Hat Enterprise Linux, z/OS, Windows

Cloud Technologies

AWS (EC2, S3), Microsoft Azure

Machine Learning Algorithms

Linear Regression, Logistic Regression, SVM, Decision Tree, Random Forest, Boosting, Bagging, KNN, K-means Clustering, Naïve Bayes

Python packages

Pandas, Keras, PyTorch, TensorFlow, Scikit-learn, OpenCV, NLTK, SciPy, NumPy, Matplotlib, Plotly, Seaborn

Data Modeling Tools

Snowflake, Erwin Studio, Star Schema, Fact and Dimension tables, Pivot Tables

Reporting /Visualization

ELK, Splunk, Tableau, Power BI, Crystal Reports

ETL Tools

Ab Initio Suite of Products

Graphical Development Environment, Technical Repository Management Console, Enterprise Meta>Environment (EME), Control Center, Ab Initio Web EME, Conduct>It

Big Data Technologies

HDFS, MapReduce, Spark, Storm, Kafka, Scala

Other Specialized Software/ Applications/ Tools

OpenText Exstream

IDEs (Eclipse, PyCharm, Anaconda, Jupyter, Spyder, IntelliJ)

SQL clients (Squirrel, Toad, PGAdmin, DB2Visualizer, DBeaver)

Version Control and Issue Tracking (Git, Jira)

Release management (ServiceNow, HP Service Manager, ChangeMan)

Job Schedulers (AutoSys, Control-M, Automic Scheduler)

EXPERIENCE

DATA SCIENTIST

Narvee Tech - CVS Health

Chicago, IL Jul 2023 – Present

Responsibilities

Collaborated with business stakeholders and Subject Matter Experts to gather and discuss requirements.

Applied Machine Learning algorithms such as Logistic Regression, Decision Tree and Random Forest to implement predictive modeling for various types of business use cases.

Built an ML model to perform customer analysis and increase sales through strategic email campaigns offering promotions to target customers.

Built monitoring dashboards that visualize the present and predicted values of customer centric features using Tableau.

Derive data from relational databases to perform complex data manipulations and conduct extensive data checks to ensure data quality.

Perform data wrangling to clean, transform and reshape the data utilizing the NumPy and Pandas libraries.

Productionized machine learning pipelines that gather data from BigQuery and build forecasting models to predict temperature and humidity spikes inside the warehouses

Work with datasets of varying degrees of size and complexity including both structured and unstructured data to perform Data mining, Data cleaning, Data collection, Variable selection, Feature engineering, Developing models, Validation, Visualization
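A minimal sketch of the Pandas wrangling pattern described above (coerce, de-duplicate, drop nulls, reshape); the column names, values and cleaning rules here are illustrative only, not drawn from any actual project dataset.

```python
import pandas as pd

# Hypothetical raw extract; column names are invented for illustration
raw = pd.DataFrame({
    "customer_id": [101, 102, 102, 103, 104],
    "purchase_amt": ["25.50", "n/a", "40.00", "13.75", ""],
    "region": ["East", "West", "West", None, "East"],
})

# Clean: coerce amounts to numeric, turning bad strings into NaN
raw["purchase_amt"] = pd.to_numeric(raw["purchase_amt"], errors="coerce")

# Drop exact duplicate rows and rows missing key fields
clean = raw.drop_duplicates().dropna(subset=["purchase_amt", "region"])

# Reshape: average spend per region, ready for a dashboard feed
summary = clean.groupby("region")["purchase_amt"].mean()
```

`errors="coerce"` is the design choice that lets one pass handle mixed-quality source strings without raising on each bad record.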

DATA SCIENTIST

Singular Analysts Inc. - CVS Health

Chennai, India Feb 2022 – Jun 2023

Responsibilities

Applied Machine Learning algorithms such as Logistic Regression, Decision Tree and Random Forest to implement predictive modeling for various types of business use cases.

Built an ML model to perform customer analysis and increase sales through strategic email campaigns offering promotions to target customers.

Built monitoring dashboards that visualize the present and predicted values of customer centric features using Tableau.

Derive data from relational databases to perform complex data manipulations and conduct extensive data checks to ensure data quality.

Perform data wrangling to clean, transform and reshape the data utilizing the NumPy and Pandas libraries.

Productionized machine learning pipelines that gather data from BigQuery and build forecasting models to predict temperature and humidity spikes inside the warehouses

Work with datasets of varying degrees of size and complexity including both structured and unstructured data to perform Data mining, Data cleaning, Data collection, Variable selection, Feature engineering, Developing models, Validation, Visualization

PROGRAM LEAD

Datacaliper LLC – Capital One, Munission, Mi-Corporation, Mutual Drug, Dedrone

Chennai, India Jan 2021 – Jan 2022

Responsibilities

Led multiple machine learning and data analytics projects in parallel:

o Image recognition and fingerprint matching to support Jail Management Systems and Record Management Systems

o Generate caption and detailed description for road accidents from a picture, to be stored in government records

o Identify and exclude profanity from user messages that will be posted on Federal Government websites

o Recommendation engine using Market Basket Analysis, and Inventory Restocking Analysis using time series forecasting, for a pharmaceutical wholesaler with more than 23,000 products

Provided AI/ML consultation for a high-profile air space security project

Analyze and assess quality of data flow using Python and SQL

Retrieve, gather, organize, interpret and determine the meaning of data and metadata using data profiling techniques

Build dynamic mechanism to report business data quality to key stakeholders in real time

Create visualizations, reports and dashboards to provide insights and recommendations to business partners using Power BI, Splunk, ELK and Tableau

Collaborate with autonomous cross-functional teams, review development projects and test results

Accomplishments

Trained a team of 12 in Python, SQL and Power BI

Provided internal training sessions on Machine Learning, NLP and Graph Theory

Automated Ab Initio graph-to-file data lineage lookup process using shell script

Automated data profiling process using Python and SQL
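The market-basket recommendation work above rests on association-rule metrics; below is a minimal pure-Python sketch of the two core ones, support and confidence, over toy transactions (the item names are invented, not wholesaler data).

```python
# Toy transaction baskets; a real engine would mine order history
transactions = [
    {"aspirin", "bandages", "vitamins"},
    {"aspirin", "bandages"},
    {"bandages", "gauze"},
    {"aspirin", "vitamins"},
]

def support(itemset):
    # Fraction of transactions containing every item in the set
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    # P(consequent | antecedent): how often the rule holds
    # among transactions where the antecedent appears
    return support(antecedent | consequent) / support(antecedent)

# Example rule: customers buying aspirin also buy bandages
rule_conf = confidence({"aspirin"}, {"bandages"})
```

Rules above chosen support/confidence thresholds become the candidate recommendations; algorithms like Apriori exist mainly to avoid enumerating every itemset at scale.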

LEAD DATA ANALYST

Datacaliper LLC – Capital One

Richmond, VA Oct 2018 – Dec 2020

Responsibilities

Meet with business stakeholders and Subject Matter Experts to gather and discuss requirements in pipeline

Analyze source database architecture and components to classify and categorize data elements to cater to growing business needs in target environment

Prepare new logical data model and create physical data model that suits the logical data model

Work with database administrators, business analysts to review and validate the models developed

Collect exhaustive metadata on business and technical front and complete data curation

Ensure data lineage is maintained across platforms by creating Business and Technical datasets in in-house metadata management tools

Analyze and assess quality of data that flows through the application and build dynamic mechanism to report business data quality to key stakeholders in real time

Drill down data to locate errors, share findings with upstream application and collaborate until root cause is fixed

Retrieve, gather, organize, interpret and determine the significance of data and metadata using data profiling techniques

Perform Oracle database validation using SQL Functions and Commands

Create rich visualizations, reports and dashboards to provide insights and recommendations to business partners using ELK and Splunk

Analyze Ab Initio code and help business understand legacy applications, develop solutions to modernize data applications and guide through data transformation from physical servers to cloud environment

Analyze code and arrive at end-to-end flow of data and establish data lineage

Collaborate with cross-functional teams, review development projects and test results, and ensure that data transformation and environment migration is completed without business impacts

Identify and report software defects and test findings using JIRA

Accomplishments

Identified a gap in business functionality and proposed optimization in Anti Money Laundering process

Automated Ab Initio graph-to-file data lineage lookup process using web scraping and shell script

Setup automated reporting to notify management about discrepancies in data quality on a regular basis
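The data-profiling and quality-reporting work described in this role can be sketched in a few lines; the field names and records below are hypothetical, and real profiling would add type, range and referential checks.

```python
def profile(records, field):
    """Compute simple data-quality metrics for one field across records:
    null rate and distinct-value count, the kind of checks used to flag
    discrepancies before they reach downstream reports."""
    values = [r.get(field) for r in records]
    non_null = [v for v in values if v not in (None, "")]
    return {
        "null_rate": 1 - len(non_null) / len(values),
        "distinct": len(set(non_null)),
    }

# Hypothetical records; one has a blank account_type
records = [
    {"account_id": 1, "account_type": "checking"},
    {"account_id": 2, "account_type": "savings"},
    {"account_id": 3, "account_type": ""},
    {"account_id": 4, "account_type": "checking"},
]
stats = profile(records, "account_type")
```

Thresholding metrics like these (e.g. alert when null_rate exceeds an agreed limit) is what turns ad-hoc profiling into the automated quality reporting mentioned above.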

AB INITIO TECH LEAD

Tata Consultancy Services – JP Morgan Chase

Columbus, OH Feb 2013 – Oct 2018

Responsibilities

Build, maintain and upgrade Ab Initio environments using multiple versions of Ab Initio suite of products, OpenText, Streamdiff, CVision and Splunk hosted on RHEL servers and Windows servers

Participate in vendor software license and contract negotiations alongside senior management; install software licenses and ensure software license agreements remain in compliance

Collaborate and lead the technical team in designing solutions for application functionalities, classifying and building data models for data elements.

Establish coding standards and best practices for the team to follow as part of application development

Create new or make changes to Ab Initio components as part of application development or maintenance projects

Review code changes/deliverables to make sure they adhere to standards and best practices

Work with QA team to identify test scenarios, number of cycles, testing schedule and reporting format

Review implementation documentation to make sure all standards and requirements are met before an application or code changes are deployed to production

Conduct performance tests after analyzing and observing existing Production volume and load peak, forecast trends and propose solutions to improve performance at application, infrastructure and Ab Initio configuration layers

Manage user setup and their accesses using KeON and EPV

Work closely with vendor support teams on software issues and bugs, and roll out fixes

Collaborate with server support operations for work involving problem determination and implementation of changes to hardware, software, applications or network systems

Setup password-less connectivity between various servers in Linux, Windows and Mainframes platforms

As an Application Security Champion, identify and address security concerns in the application and infrastructure

Perform Disaster Recovery testing and prove sustained resiliency

Develop lightweight Unix utilities for ad-hoc reports, data extraction and automation of BAU activities

Take part in Agile ceremonies such as daily stand up, backlog grooming, review and retrospective meetings. Keep JIRA board updated with daily status of stories/tasks.

Set clear team goals and evaluate progress along the way and enable the team to work as a collaborative unit by resolving conflicts, if any.

Accomplishments

Completely owned and delivered a Tech Refresh project that required building the Ab Initio environment from scratch, involving 10 software products across 23 versions and SFTP connectivity with 58 interfacing applications on 11 application and web servers across multiple environments

Automated Ab Initio code migration to higher environments and code migration validation process using Unix Shell scripting which saves $180,000 per annum

Automated Black Duck Code scanning compliance process using Unix Shell Scripting saving $63,000 per annum

Automated change record creation process and transformed it from highly manual to a click of the button saving $96,000 per annum

Automated OpenText Exstream database administration activities using MS DOS scripts saving 1600 person hours per annum

DEVELOPMENT TEAM LEAD

Tata Consultancy Services – JP Morgan Chase

Chennai, India Nov 2011 – Feb 2013

Responsibilities

Gather requirements, draft high level and technical design and develop programs by following best practices, established coding standards and naming conventions for a platform migration project

Drive and co-ordinate test cycles in both IST and UAT environments to test all possible scenarios

Review code changes and test results to make sure they align to the requirements

Plan implementation activities, deploy code in Production environment, verify and validate code deployment after the implementation is complete

Provide post-implementation support and support future enhancements

Delegate and coordinate tasks with the team on all the above activities

Accomplishments

Performed Proof of Concept for migrating Document Composition application from COBOL (Mainframes) to Ab Initio hosted on Linux servers

Led a team of 12 and migrated 80% of data processing (by volume) to the target environment in less than two months

PRODUCTION SUPPORT ENGINEER

Tata Consultancy Services – JP Morgan Chase

Chennai, India Apr 2011 – Nov 2011

Responsibilities

Actively monitor Production batch cycles for multiple applications in parallel to ensure they complete successfully and in compliance with Service Level Agreements (SLAs)

Triage, troubleshoot and mitigate problems with Linux, Mainframe and Windows based business applications

Acknowledge notification in case of job failures, determine cause of the issue, apply fixes as required and ensure SLA is met consistently

Identify root causes of technical issues in Production, and design and develop solutions to fix the identified issues

Ensure network, system and data availability and integrity through preventative maintenance on Linux servers

Engage subject matter experts from Development team or Operations team, as needed, in case of issues that occur due to a change introduced in the system

Interact with project teams for new releases and provide insight from technical issues of previous releases and Production batch cycle

Accomplishments

Created utility to generate customized reports from Service Manager tool to arrive at metrics and assess health of the application

Streamlined Production support process and protocols which greatly reduced response and resolution time

MAINFRAMES DEVELOPER

Tata Consultancy Services – JP Morgan Chase

Chennai, India Sep 2008 – Apr 2011

Responsibilities

Draft technical design after thoroughly understanding requirements and develop programs following best practices, established coding standards and naming conventions for a bank merger project

Test code thoroughly in both IST and UAT environments with all possible test cases and through regression testing, stress testing, positive/negative testing and boundary condition testing

Ensure code readiness for Production deployment, verify and validate code deployment after the implementation is complete and perform production simulation testing

Provide post-implementation support and support future enhancements

Document test results, observations, best practices and lessons learnt during the course of the project

Accomplishments

Developed a tool to validate correctness of Zeke scheduler events across development and QA regions
