NISHIGANDHA
+1-943-***-**** *****************@*****.***
PROFESSIONAL SUMMARY:
Results-driven Data Analyst and Data Engineer with 7+ years of experience in designing, building, and optimizing data solutions.
Proficient in SQL, Python, Java, and R for data analysis, machine learning, and predictive modeling.
Expertise in ETL processes, data warehousing, and data governance to ensure data integrity and scalability.
Skilled in creating interactive dashboards and visualizations using Tableau, Power BI, and Looker.
Experienced in big data technologies such as Apache Spark, Hadoop, and Kafka for real-time and batch data processing.
Strong background in cloud platforms like AWS, Azure, and GCP for building and managing data pipelines.
Adept at performing statistical analysis, A/B testing, and trend analysis to support data-driven decision-making.
Proven ability to collaborate with cross-functional teams to deliver actionable insights and improve business outcomes.
Expert in Agile methodologies (SCRUM, SAFe, XP) for process improvement and project management.
Hands-on experience with Test-Driven Development (TDD) and Behavior-Driven Development (BDD) to solve complex business problems, optimize operations, and drive revenue growth.
Strong knowledge of data collection, cleaning, and transformation to ensure high-quality data for analysis.
Expertise in ERP implementation (Workday, Oracle Cloud) and data migration, ensuring seamless transitions and data accuracy during system transformations.
Skilled in documentation and knowledge sharing to ensure seamless collaboration and team efficiency.
Strong background in data governance, quality assurance, and testing/validation to ensure compliance and accuracy during data migrations.
Knowledgeable in A/B testing, microservices testing, and API automation.
Adept in infrastructure automation using Docker, Kubernetes, Terraform, and OpenShift.
Experience in test reporting and monitoring using Cucumber Reports and ExtentReports.
Hands-on experience with SQL and NoSQL databases such as Snowflake, Cassandra, and MongoDB.
EDUCATION DETAILS:
Northeastern University, College of Engineering Boston, MA
Master of Science in Information Systems
University of Pune Pune, India
Bachelor of Computer Engineering
TECHNICAL SKILLS:
Languages: C, C++, Python, R, PL/SQL, Java, Scala, UNIX shell scripting, T-SQL, Excel (Advanced).
Big Data Tools: Apache Spark, Hadoop, Hive, Kafka, MapReduce.
Databases: Oracle, MS-SQL Server, MySQL, PostgreSQL, NoSQL (HBase, Cassandra, MongoDB), Teradata, BigQuery.
Data Visualization Tools: Tableau, Power BI, Looker, Matplotlib, Seaborn.
Tools: Eclipse, NetBeans, GitHub, Jupyter, Amplitude, Workday, ERP.
Statistical Analysis: Hypothesis Testing, Regression Analysis, A/B Testing.
Operating Systems: Windows XP/2000/NT, Linux, UNIX.
ETL/Data Integration: Alteryx, Talend, Apache NiFi, Informatica.
Cloud Platform: AWS (Redshift, S3, Glue, Lambda), Azure (Data Factory, Synapse), GCP (BigQuery, Dataflow).
Skills: Data Cleaning, Data Wrangling, Debugging, Troubleshooting, Production Support, Critical Thinking, Effective Communication, Problem-Solving, Portfolio prioritization.
Certification: Google Data Analytics, Python for Data Science by IBM, AWS Data Engineer.
WORK EXPERIENCE:
Client: Community Dreams Foundation Sebring, FL Dec 2024 to Present
Role: Data Analyst
Responsibilities:
Conduct statistical analysis on large datasets using Python (Pandas, NumPy) and SQL, identifying trends and improving operational efficiency by 25%.
Design interactive dashboards in Tableau and Power BI to visualize KPI metrics, enabling leadership and stakeholders to track performance and make data-driven decisions.
Optimize SQL queries to manage, test, debug, and analyze data, improving query efficiency by 25% and ensuring seamless platform operations and performance monitoring.
Utilize Python and PySpark for data cleaning, analysis, and automation of repetitive tasks, improving team efficiency by 20%.
Use Snowflake for large-scale data warehousing and analytics, enabling faster query performance and scalability.
Handle transaction management, data modeling, normalization, troubleshooting, and debugging to optimize database performance and integrity.
Collaborate with cross-functional teams on diagnostic analytics to identify root causes of business challenges and recommend data-driven solutions.
Create and schedule Unix shell scripts for automated data backups and report generation.
Reduce database query response time by 20% through query optimization and indexing strategies.
Manage multiple analytics projects simultaneously, ensuring timely delivery and alignment with business goals.
Design and execute A/B tests to optimize product features, resulting in a 15% increase in customer engagement.
Ensure data accuracy and integrity by implementing robust data validation and cleaning processes.
Environment: AWS (EC2, S3, EBS, ELB, RDS, CloudWatch), PySpark, Python, Spark Streaming, Machine Learning, Snowflake, Tableau, Power BI, NoSQL, PostgreSQL, Shell Script, Scala, GCP, Git, Advanced Excel, Amplitude.
Client: Tenet Healthcare Remote July 2024 – Nov 2024
Role: Data Analyst
Responsibilities:
Spearheaded the root cause analysis of patient outcomes data, identifying key factors influencing readmission rates and reducing readmissions by 15% through targeted interventions.
Designed and implemented interactive dashboards in Power BI and Tableau to track KPIs such as patient satisfaction and operational efficiency, enabling data-driven decision-making for senior leadership.
Collaborated with cross-functional teams to integrate EHR (Electronic Health Record) data with external datasets, improving data accuracy and completeness by 25%.
Developed and maintained PL/SQL scripts for data migration and integration between systems.
Built ETL pipelines to streamline data ingestion from multiple sources, reducing data processing time by 30%.
Ensured compliance with HIPAA regulations by implementing robust data security and privacy measures.
Maintained data quality control (QC) by implementing robust validation and cleaning processes.
Environment: Amazon Web Services, Amazon S3, EC2, Amazon Redshift, PySpark, Snowflake, Tableau, Power BI, Python, SQL, ETL, MySQL, Oracle, Git, Microsoft Visio, and Draw.io.
Client: Walmart Dallas, TX Dec 2023 – Apr 2024
Role: Data Analyst/Engineer
Responsibilities:
Conducted exploratory data analysis (EDA) using Python (Pandas, NumPy) and SQL on large-scale datasets, identifying trends and anomalies that informed strategic business decisions, resulting in a 10% improvement in operational efficiency.
Created interactive dashboards in Tableau and Power BI to track key performance metrics like sales, inventory turnover, and customer satisfaction, making it easier for stakeholders to make data-driven decisions.
Built and optimized ETL pipelines with Apache Spark and AWS Glue, cutting data processing time by 25% and ensuring more accurate reporting.
Automated repetitive reporting tasks using advanced Excel techniques like Pivot Tables, VLOOKUP, and Macros, saving the team 15 hours per week.
Conducted A/B testing to evaluate pricing strategies and promotional campaigns, which led to a 12% boost in sales revenue.
Partnered with supply chain, marketing, and finance teams to analyze customer behavior and purchasing patterns, improving inventory management by 20%.
Developed predictive analytics models in Python (Scikit-learn) to forecast demand and optimize stock levels, reducing overstock by 18%.
Improved data quality by implementing data validation and cleaning processes, increasing data accuracy by 30%.
Used Snowflake for data warehousing and BigQuery for quick, ad-hoc analysis, making data retrieval faster and more efficient.
Presented actionable insights to senior leadership, breaking down complex findings into clear, actionable recommendations that drove business growth.
Environment: Python (Pandas, NumPy, Scikit-learn), SQL, Tableau, Power BI, Apache Spark, AWS (Glue, Redshift, S3), Snowflake, BigQuery, Advanced Excel (Pivot Tables, VLOOKUP, Macros), Statistical Analysis, A/B Testing, ETL Pipelines.
Client: Amazon Pune, India May 2020 – Aug 2022
Role: Data Engineer
Responsibilities:
Designed and deployed scalable ETL pipelines using Apache Spark and AWS Glue, processing terabytes of data daily to support analytics and reporting.
Built and maintained data warehouses on AWS Redshift, enabling seamless data access and analysis for cross-functional teams.
Developed real-time data streaming pipelines using Apache Kafka, improving real-time analytics capabilities by 25%.
Optimized PL/SQL procedures and SQL queries for an e-commerce platform, reducing page load times by 15%.
Optimized SQL queries and Spark jobs, improving query performance by 40% and reducing processing time.
Partnered with product managers to design experiments and analyze results, driving product improvements and innovation.
Built and maintained Excel-based models to forecast business performance and support strategic planning.
Implemented Git for version control in project, streamlining collaboration and maintaining code integrity across data pipeline development.
Presented data-driven insights to senior leadership, effectively narrating complex stories for decision-making.
Mentored junior analysts and engineers on best practices for SQL querying, data visualization, and A/B testing, fostering a culture of continuous learning within the team.
Environment: AWS (EC2, S3, EBS, ELB, RDS, CloudWatch), Snowflake, Shell, JIRA, Jupyter, SQL, HDFS, Spark, Python, CDH, PuTTY, Scala, Apache Kafka, Git.
Client: HSBC Pune, India Aug 2017 – Apr 2020
Role: Data Analyst / Engineer
Responsibilities:
Developed and executed complex SQL queries and Excel analyses (Pivot Tables, VLOOKUP) of transactional data, customer behavior, and payment trends, enabling data-driven decisions that improved operational efficiency.
Conducted root cause analysis on payment processing failures, identifying systemic issues and recommending solutions that reduced errors by 35%.
Utilized SQL and Python to automate data cleaning and reporting processes, reducing manual effort by 30% and improving the accuracy of monthly financial reports.
Designed and maintained Power BI and Tableau dashboards to track key performance metrics, including loan approval rates, payment success rates, and customer retention, providing actionable insights to senior leadership.
Implemented DataOps principles, including CI/CD pipelines, to automate data integration, transformation, and deployment processes.
Developed and optimized PL/SQL stored procedures, functions, and packages to support business processes, improving system efficiency by 25%.
Assisted in risk analysis and compliance reporting to ensure adherence to banking regulations.
Conducted A/B testing on digital banking features, such as loan application workflows and payment processing interfaces, resulting in a 12% increase in customer conversion rates and a 20% reduction in drop-offs.
Supported strategic decision-making by delivering timely, accurate insights through ad-hoc data analysis.
Developed KPIs and dashboards to track business performance, enabling stakeholders to monitor progress toward goals in real-time.
Built Excel-based financial models to forecast loan portfolio performance, supporting risk management and strategic planning efforts.
Environment: Excel (Pivot Tables, VLOOKUP), Spark SQL, A/B Testing, Hypothesis Testing, Python, KPI, Java, Linux, MySQL, Oracle, Eclipse, Tableau, Power BI, PyCharm, GitHub, REST API, SOAP, and Agile Methodologies.