Vasudeva Bichal
[Plano, Texas] • [925-***-****] • [*******@*****.***] • [http://bit.ly/3YP6Tsd]
SUMMARY
Seasoned data professional with over a decade of proven hands-on project and product delivery. Key technical contributor to developing and overseeing Auto-Transport System operations across BNSF Railway network lines. Helped build data pipelines from various source systems, land the data on the BNSF Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. Led the development of very large, complex data applications in cloud environments, directly shaping the design, architecture, and implementation of enterprise Auto-Transport data products in areas such as revenue management, supply chain, manufacturing, marketing, federal regulations, and logistics by working closely with process owners, product owners, and business users.
CORE COMPETENCIES and SKILLS
• Data Science Strategy & Leadership
• Machine Learning & Predictive Analytics (pandas, scikit-learn, TensorFlow, MLlib, PySpark libraries; logistic regression, classification, and segmentation models)
• Team Building & Mentorship
• Cloud Data Architecture (AWS EC2, Redshift, Cloud9, S3, CLI; Azure ADLS, Databricks)
• Big Data Engineering & Streaming (Spark, Kafka, window functions, and streaming APIs)
• Executive Communication & Business Alignment (POC presentations to align with business functions)
• Statistical Modeling & Experiment Design (seeking alpha, ROC measurements)
• BI Tools (Power BI, Tableau) & Data Storytelling (time series analysis, segmentation and partitioning, data shuffling, reporting KPIs)
PROFESSIONAL EXPERIENCE
Senior Data Solutions Analyst
BNSF Logistics – Fort Worth, TX May 2022 – May 2025
Contributed to code development across projects and services in Python, Scala, and Java, including REST API features integrating with EDI, mainframe, SAP, Salesforce, FileNet, and OpenText systems.
Managed and scaled data pipelines from internal and external data sources to support new product launches and drive data quality across data products, building data marts, data lakes, delta lakes, and data warehouses within the Azure cloud ecosystem using Databricks, PySpark, Scala, Python, JSON, and Parquet files.
Built and owned automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance by creating Step Functions workflows and microservices within AWS EC2, SNS, and Redshift.
Recommended best practices for systems integration, security, performance, and data management using OIDC connectivity patterns.
Empowered and mentored business teams, creating value through increased adoption of data, data science, and business intelligence by delivering data stories from reports built with Power BI, Tableau, and Python Matplotlib, and by performing EDA and statistical analysis on business datasets with clear explanations.
Collaborated with internal clients (data science and product teams) to drive solutioning and POC discussions, listening closely and capturing subtle requirements so business tasks could be optimized and accelerated.
Evolved the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners and conducting architecture review meetings at a regular cadence.
Developed and optimized procedures to productionize data science models.
Defined and managed SLAs for data products and processes running in production.
Supported large-scale experimentation done by data scientists.
Prototyped new approaches and built solutions at scale.
Researched state-of-the-art methodologies, starting with Hadoop, Scala, Java, Informatica, and SAP BI tools.
Created documentation for learnings and knowledge transfer.
Created and audited reusable packages or libraries.
Developed and maintained metadata management, data lineage, and data glossaries using Unity Catalog, Alation, and the Informatica governance tool.
Projects: Increased the velocity of auto inspections and reduced claim liabilities for the business by 15% across about 20 geographic sites, using IoT devices, EDI data sources, and a variety of databases (Oracle, DB2, SQL Server, Teradata, Databricks) within Azure Cloud.
Consultant – Trade Analytics / Data Science Lead
Bank of America – New York, NY / Fort Worth, TX Oct 2021 – Apr 2022
Helped create and contribute to Hadoop big data systems, optimizing partitioning strategies.
Optimized the speed of data collection batch processes.
Fine-tuned complex SQL queries.
Led data engineering strategy for pre-trade analytics.
Built ingestion and transformation pipelines using Hadoop, Kafka, Spark.
Collaborated with quants to develop A/B testing and statistical models.
Refactored Python modules in Quartz, improving performance.
Delivered time series analytics tools for trading strategy decisions.
Projects: Pre-Trade Analytics, Account Reconciliation, End of Day Record keeping
ECM Consultant & Senior Data Analyst
BNSF Railway – Fort Worth, TX Dec 2006 – Sep 2021
Developed integration processes with Salesforce, SAP PI, TIBCO, and Kafka, with RESTful API development using Spring Boot, Kubernetes, Agile (VersionOne), GitHub, and Jenkins.
Designed and led analytics and IoT projects.
Developed fraud detection models improving revenue retention by 15%.
Built dashboards reducing manual reporting by 35%.
Improved data accuracy by 25%.
Led a 10% efficiency improvement through ML and big data platforms.
Projects: Portfolio of monolithic application and workflow development targeting content management systems (about 150 applications), deriving department-wise analytics and reports for Financial, Legal, Marketing, Human Resources, Contracts, and Engineering, as well as asset management systems such as SAP modules and a CRM system using Salesforce environments.
Includes hands-on contributions to the architecture, design, implementation, and sustainment of IT products: RESTful API development services, release and change management services, and web services using Spring Boot, Python, Java, Scala, RHEL, Kubernetes, Kafka, etc., along with multiple content management systems such as OpenText Content Server, OpenText Media Manager, IBM FileNet, and Content Navigator.
EDUCATION & CERTIFICATIONS
• Post Graduate Diploma in Data Science & Business Analytics – University of Texas at Austin, 2022–2023
• Certifications: Advanced Analytics & Deep Learning – UT Austin
• Certifications: International Strategic Management & Global Management – 2006
• B.Tech in Electrical & Electronics Engineering – JNTU, India
TECHNICAL SKILLS
• Languages: Python, SQL, Scala, C
• ML Libraries: Scikit-learn, TensorFlow, Keras, Statsmodels, Pandas
• Big Data: Spark, Kafka, Hadoop, Databricks
• Cloud: AWS (Glue, Lambda, S3), Azure (ADLS2)
• Visualization: Tableau, Power BI, Seaborn, Matplotlib
• Databases: Oracle, DB2, SQL Server, NoSQL
• DevOps & Workflow: Git, CI/CD, Jupyter, Streamlit
• Methodologies: Agile, Scrum, SAFe, Functional Programming
LEADERSHIP HIGHLIGHTS
• Scaled and led data teams (5–15 analysts/engineers).
• Coached junior team members in modern ML practices.
• Facilitated executive-technical communication, increasing project success by 30%.
PASSIONS & FOCUS AREAS
• Building high-impact data science teams that bridge strategy and execution.
• Driving scalable, production-ready ML and analytics solutions.
• Championing a culture of continuous learning and innovation.