Gopalakrishnan Lakshmanan
Visit: www.linkedin.com/in/gopalakrishnan-lakshmanan-12b551a
Email ID: *************@*****.***
Cell: +1-503-***-****
Location: Irving, TX
Authorization: GC EAD
Summary of Experience:
●Industry Expertise: 17+ years in IT, specializing in Oracle PL/SQL, Microsoft SQL Server, cloud-based data engineering, large-scale database management, and OSS/BSS systems in the telecom domain.
●Database Engineering/Pipeline & Optimization:
oStrong skills in Oracle PL/SQL, Teradata, and Microsoft SQL Server for data migration, ETL, conversion, and optimization.
oExtensive experience in developing complex ETL/ELT data pipelines for both batch and real-time/near real-time data ingestion and processing.
●Cloud-Based Data Engineering:
oProficient in Google Cloud Platform (GCP) using BigQuery, Dataproc, Hive (Hadoop), and Spark.
oExperienced in AWS services including S3, Glue, RDS (PostgreSQL), Lambda, Athena, and Redshift.
●Programming & Automation:
oPython, Java, PL/SQL, Unix Shell Scripting for data integration, automation, and system performance tuning.
oExpertise in Python (including API development with frameworks like FastAPI), Talend, Airflow, and CI/CD pipelines with GitLab.
●Oracle MetaSolv Solutions (MSS/M6):
oExpert in configuring, coding, and optimizing GESW migrations.
oAutomating audits, addressing system bugs/enhancements, and integrating service order workflows.
oProficient in telecom provisioning for fiber, copper, voice, and internet services.
●Telecom Systems & Data Migration:
oExtensive experience in end-to-end data migrations, OSS/BSS integrations, and service order workflow management.
oSpecialized in migrating customer, product, and network data.
●Leadership & Collaboration:
oLed onshore & offshore teams in complex telecom projects.
oManaged Agile initiatives (Jira) and provided 24x7 production support.
oLed cross-functional teams in building scalable cloud data platforms supporting 500M+ records, with 7+ years in tech leadership and delivery oversight.
Key Areas of Expertise
●Telecom Systems & Migrations
oMetaSolv M6 Inventory Management – Extensive experience in configuring, coding, and performing GESW migrations.
oTelecom Provisioning – Expertise in fiber, copper, voice, and internet service provisioning.
oService Order Workflow Management – Developing comprehensive workflows for customer, product, and network data.
oOSS/BSS Systems – Skilled in handling order processing, provisioning, and telecom network migrations.
●Data Engineering/Pipeline & Database Development
oSQL & PL/SQL Development – Advanced expertise in ETL processes, database optimization, and migration.
oData Pipeline Architecture: Designing and implementing robust ETL/ELT pipelines for batch processing and architecting data streaming solutions for real-time/near real-time data ingestion using tools like Kafka (familiarity), GCP Pub/Sub, and AWS Kinesis (conceptual).
oData Modeling: Proficient in developing conceptual, logical, and physical data models using ERDs and other modeling techniques to support data warehouse and application database design.
oCloud Data Processing – Skilled in Google Cloud Platform (GCP) using BigQuery, Dataproc, Hive (Hadoop), Spark, and Snowflake for large-scale data analytics and reporting.
oPerformance Optimization – Proficient in query tuning, indexing, partitioning, and data profiling.
oExpertise in designing multi-stage ETL/ELT data pipelines that logically separate raw, transformed, and optimized data layers, aligning with principles similar to the Medallion Architecture for enhanced data quality, governance, and consumption.
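A minimal, illustrative PySpark sketch of the layered raw-to-curated-to-consumption pattern described above; the bucket paths, "orders" dataset, and column names are hypothetical placeholders rather than project code:
```python
# Illustrative three-layer (raw -> curated -> consumption) pipeline in PySpark.
# All paths, the dataset, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("layered_etl_sketch").getOrCreate()

# Raw layer: land source files as-is.
raw = spark.read.json("gs://example-bucket/raw/orders/")  # hypothetical path

# Curated layer: deduplicate, validate keys, standardize types.
curated = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_date"))
)
curated.write.mode("overwrite").parquet("gs://example-bucket/curated/orders/")

# Consumption layer: aggregate for reporting and analytics.
daily = curated.groupBy("order_date").agg(F.count("*").alias("order_count"))
daily.write.mode("overwrite").parquet("gs://example-bucket/consumption/daily_orders/")
```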
●Cloud & Automation Technologies
oGoogle Cloud Platform (GCP) – Hands-on experience with BigQuery, Composer, and Dataproc.
oAWS – Expertise in S3, Glue, RDS (PostgreSQL), Lambda, Athena, and Redshift.
oDesigned and deployed scalable Data Lake architectures using AWS (S3, Glue, Redshift) and GCP (BigQuery, Cloud Storage) to support Data as a Service (DaaS) delivery models.
oPython, Talend, Airflow – Used for data pipeline automation and workflow optimizations.
oCI/CD – Experience with GitLab, automated deployments, and pipeline management.
oAPI Integration & Development: Experience in integrating with third-party APIs and developing custom APIs using Python, with a preference for FastAPI for building efficient and scalable services.
oEvent-Driven Systems: Experience in building and managing components of event-driven systems using Pub/Sub mechanisms (e.g., GCP Pub/Sub, AWS SNS/SQS conceptual) and processing with tools like AWS Lambda and Apache Kafka (familiarity).
oFamiliarity with diverse job scheduling and orchestration tools, adapting quickly to new platforms like ControlM/Redwood, drawing on experience with Apache Airflow (via Cloud Composer) and Cloud Scheduler.
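A minimal, illustrative Airflow DAG of the kind of scheduled orchestration described above (assuming Airflow 2.x import paths, as used with Cloud Composer); the DAG id and task callables are hypothetical placeholders:
```python
# Illustrative Airflow 2.x DAG; dag_id, tasks, and callables are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull source files")   # placeholder for an extract step


def load():
    print("load curated data")   # placeholder for a load step


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task   # extract runs before load each day
```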
●Development & System Tuning
oJava/Python Programming – Hands-on experience in telecom system conversions and database tuning.
oSystem Performance Optimization – Ensuring high availability, scalability, and telecom database performance.
●Business Intelligence & Data Visualization
oBI Tools – Expertise in Tableau, Power BI, and building interactive dashboards.
oData Insights & Analytics – Strong experience in data transformations and business analytics.
●Project & Team Leadership
oAgile & Scrum Methodologies – Managed teams using Jira & Spira.
oCross-Functional Collaboration – Worked with stakeholders, product managers, and development teams.
oProduction Support & System Troubleshooting – 24x7 support for critical telecom data operations.
Technical Skills
●Cloud Platforms: GCP (BigQuery, Dataproc, Cloud Storage, Composer, Dataflow), AWS (S3, Glue, Lambda, Athena, RDS, Redshift)
●Data Warehousing: Snowflake, Redshift, BigQuery, PostgreSQL
●Big Data Technologies: Databricks, Apache Spark, PySpark, Apache Hive, Hadoop, Apache Kafka (familiar), Dataflow, Cloud Functions
●Workflow Orchestration: Databricks Workflows (Job Scheduling), Airflow (Cloud Composer), AWS Step Functions
●Languages: Python, Java, SQL, PL/SQL, Shell Scripting, .Net, C#
●Frameworks/APIs: FastAPI (Preferred for API development), RESTful APIs
●Database Platforms: Oracle (12c, 19c), PostgreSQL, SQL Server, MySQL, MongoDB (familiarity)
●ETL Tools: Talend, AWS Glue, custom ETL (Python, SQL)
●DevOps & CI/CD: GitLab CI/CD, Jenkins, Bitbucket, Git, GitFlow
●Data Formats: Parquet, AVRO, JSON, CSV
●Project Management: JIRA, Confluence, Agile/Scrum, Remedy
●Visualization: Tableau, Power BI
Professional Experience:
Liberty Latin America Remote Dec 2024 - present
Sr. Data Engineer
Project: Data Assurance
Roles & Responsibilities:
●Define and implement a data assurance framework to validate and reconcile data between OSS/BSS applications post-migration.
●Develop and oversee data validation strategies to ensure consistency between migrated and existing data.
●Design automated workflows for real-time data integrity checks and reconciliation.
●Design and implement data validation rules within the assurance framework to support the integrity of data consumed by API-driven services and real-time analytical dashboards.
●Ensure data consistency across OSS (Operational Support Systems) and BSS (Business Support Systems).
●Implement data integrity validation mechanisms, identifying and resolving data mismatches.
●Define and enforce data mapping, transformation rules, and business logic validation.
●Develop data reconciliation reports and dashboards to track discrepancies and resolution progress.
●Design and develop interactive data visualization dashboards for real-time data insights.
●Utilize Tableau to present data trends, discrepancies, and reconciliation results.
●Analyze large datasets for anomalies, performance bottlenecks, and optimization opportunities.
●Provide trend analysis and predictive analytics to anticipate data inconsistencies before they impact operations.
●Conduct data audits to ensure compliance with regulatory and security standards.
●Assess the impact of data integrity issues on system performance and business processes.
●Implement error detection alerts and proactive issue resolution workflows.
●Work with business, IT, and data teams to align data assurance processes with business goals.
●Communicate validation results, risk assessments, and recommendations through structured reports and dashboards.
●Collaborate with solution architects, data engineers, and business analysts to enhance reporting capabilities.
●Used Databricks with Delta Lake to manage historical data for customer migration processes, supporting rollback and auditability of staged records.
●Investigate and diagnose data mismatches between legacy and target systems.
●Develop and implement remediation strategies for identified inconsistencies.
●Establish and maintain a repository of common migration issues and resolutions.
●Implement automated data validation and reconciliation scripts to streamline assurance efforts (see the sketch below).
●Develop real-time monitoring tools for proactive data quality management.
●Optimize existing data validation processes for improved accuracy and efficiency.
●Provide actionable insights based on data analysis to optimize OSS/BSS performance.
●Recommend enhancements to existing applications for better data integrity and analytics.
●Define best practices for ongoing data synchronization and future migrations.
Technologies Used: SQL, PL/SQL, Tableau, AWS Services, Salesforce, Aria, AWS Athena, AWS Glue, Snowflake, MySQL, Databricks
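An illustrative sketch of the automated reconciliation approach referenced above, not the actual assurance framework; the extract files, business key, and columns are hypothetical:
```python
# Illustrative reconciliation: compare a legacy extract against a migrated
# extract on a business key and flag mismatches. File and column names are
# hypothetical placeholders.
import pandas as pd

source = pd.read_csv("legacy_accounts.csv")      # extract from the legacy system
target = pd.read_csv("migrated_accounts.csv")    # extract from the target system

merged = source.merge(
    target,
    on="account_id",
    how="outer",
    suffixes=("_src", "_tgt"),
    indicator=True,
)

# Records present on only one side.
missing = merged[merged["_merge"] != "both"]

# Records present on both sides but with differing status values.
mismatched = merged[
    (merged["_merge"] == "both") & (merged["status_src"] != merged["status_tgt"])
]

missing.to_csv("reconciliation_missing.csv", index=False)
mismatched.to_csv("reconciliation_mismatched.csv", index=False)
```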
Verizon Irving, TX April 2024-December 2024
Data Engineer/Data Analyst
Project: C-OPS Automation
Roles & Responsibilities:
●Worked on the C-OPS Automation project for Verizon, focused on optimizing operations across 300+ enterprise applications supporting Verizon’s enterprise customers.
●The objective was to improve the efficiency of support processes and streamline issue resolution.
●Reviewed and analyzed production support workflows to identify repetitive tasks and recurring issues within individual applications and interlinked processes.
●Collaborated with support teams to map out and understand application dependencies, interlinking data flows, and common failure points.
●Developed and implemented automation scripts using Python, connecting to multiple databases such as Oracle, PostgreSQL, and various cloud interfaces, significantly reducing manual intervention and operational overhead for repetitive tasks.
●Utilized Python for developing automation solutions, including scripts for API integration and potentially leveraging frameworks like FastAPI for creating internal tools/services to streamline data retrieval and updates across systems.
●Built real-time APIs with FastAPI to serve processed data from BigQuery and PostgreSQL as reusable services (see the sketch below).
●Optimized support processes by identifying recurring patterns in issues and automating resolutions, enabling faster response times for Verizon’s enterprise customers and enhancing the reliability of applications.
●Orchestrated system integration and automated workflows for application support, resulting in streamlined cross-application communication and improved data integrity across systems.
●Monitored and maintained automation tasks, ensuring they ran reliably in production environments and refining them as needed to accommodate changes in business processes or application updates.
●Collaborated with development teams to report and track recurring issues identified through automation, facilitating code fixes and improvements to prevent future occurrences of bugs.
Technologies Used: SQL, PL/SQL, Oracle MetaSolv, Jira, Unix, Python, MySQL, PostgreSQL, Selenium, .Net, C#
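An illustrative sketch of a FastAPI endpoint serving processed data from PostgreSQL, as referenced above; the connection details, table, and route are hypothetical placeholders:
```python
# Illustrative FastAPI service over PostgreSQL; connection details, table,
# and route are hypothetical placeholders.
import psycopg2
from fastapi import FastAPI, HTTPException

app = FastAPI()


def get_connection():
    # Credentials would normally come from a secrets store, not literals.
    return psycopg2.connect(host="localhost", dbname="ops", user="app", password="app")


@app.get("/orders/{order_id}")
def read_order(order_id: int):
    conn = get_connection()
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT order_id, status FROM orders WHERE order_id = %s",
                (order_id,),
            )
            row = cur.fetchone()
    finally:
        conn.close()
    if row is None:
        raise HTTPException(status_code=404, detail="order not found")
    return {"order_id": row[0], "status": row[1]}
```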
Liberty Latin America Irving, TX April 2023 – April 2024
Data Engineer
Project: Customer, Equipment & Service Migration
Roles & Responsibilities:
●Led the seamless migration of customers from AT&T’s network to Liberty Latin America’s (LLA) network for Puerto Rico and the Virgin Islands, minimizing downtime and ensuring a smooth transition of services and billing operations.
●AWS Components: Utilized AWS S3 to receive and securely store AT&T files monthly. Processed these files using AWS Glue for efficient data extraction, transformation, and loading (ETL) into tables in Amazon RDS (PostgreSQL).
●Leveraged AWS Lambda functions and AWS Step Functions for orchestrating complex data workflows and transformations.
●Developed Python scripts within AWS Lambda for data processing, which included consuming and integrating data from various RESTful APIs (see the sketch below).
●Coordinated data migration efforts, processing critical customer product, service, and billing information from AT&T into the LLA system.
●Integrated AWS Secrets Manager for secure storage and management of sensitive data such as API keys and credentials.
●Collaborated with cross-functional teams to integrate customer data with the LLA network, meeting regulatory compliance and maintaining performance standards throughout the migration process.
●Retrieved provisioning data from AT&T using APIs provided by AT&T and executed the API calls through AWS Lambda to fetch necessary customer records. Used these records to match and validate against the LLA network, ensuring accurate provisioning and service continuity.
●Executed bulk SIM information loading processes and utilized AT&T APIs to update network configurations, followed by LLA APIs to activate customers on the LLA network. Ensured smooth transitions and activations for customers.
●Transferred final data outputs to ServiceNow for order creation, product management, and customer setup, facilitating a seamless handoff to service operations teams.
●Sent processed customer data to LLA’s billing systems for accurate customer billing and invoicing.
Technologies Used: AWS, AWS S3, AWS Glue, AWS RDS (PostgreSQL), AWS Lambda, AWS Step Functions, AWS Secrets Manager, AWS Redshift, Python, Salesforce, SQL, APIs, Snowflake
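An illustrative sketch of the Lambda-based file processing and API integration referenced above; the bucket, secret name, endpoint URL, and record fields are hypothetical placeholders:
```python
# Illustrative Lambda-style handler: read a migration file from S3, pull an
# API credential from Secrets Manager, and call a provisioning API.
# Bucket, secret name, URL, and fields are hypothetical placeholders.
import json

import boto3
import requests  # would be bundled with the deployment package or a layer

s3 = boto3.client("s3")
secrets = boto3.client("secretsmanager")


def handler(event, context):
    bucket = event["bucket"]   # e.g. passed in by a Step Functions state
    key = event["key"]

    obj = s3.get_object(Bucket=bucket, Key=key)
    records = json.loads(obj["Body"].read())

    secret = secrets.get_secret_value(SecretId="example/provisioning-api")
    api_key = json.loads(secret["SecretString"])["api_key"]

    results = []
    for record in records:
        resp = requests.get(
            "https://api.example.com/subscribers/" + record["subscriber_id"],
            headers={"x-api-key": api_key},
            timeout=30,
        )
        results.append({"subscriber_id": record["subscriber_id"],
                        "status": resp.status_code})
    return {"processed": len(results), "results": results}
```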
Verizon Irving, TX June 2021- April 2023
Data Engineer
Project: Feed Rationalization for GCH
Roles & Responsibilities:
●High-Volume Data Processing with GCP Services:
Designed and implemented a robust data processing framework using BigQuery and Dataflow for handling datasets exceeding 500 million records.
●Utilized BigQuery's partitioned tables to break down and manage large datasets for efficient parallel querying and scalable analytics.
●Developed Dataflow jobs using Apache Beam and Python to process large datasets in real-time, enabling distributed and scalable data transformations (see the sketch below).
●Designed and implemented components of a data streaming architecture using GCP Dataflow for real-time processing of high-volume data feeds.
●ETL Processing with GCP Components:
Created ETL workflows utilizing Dataflow for data ingestion, transformation, and loading. Integrated data from various sources into BigQuery staging tables, using Cloud Storage as an intermediary storage layer for raw data.
oDesigned and developed data transformation scripts using Python within Dataflow to clean, validate, and transform data before loading it into BigQuery's target tables, ensuring high data quality and performance.
●File Storage and Management using Cloud Storage:
Employed GCP Cloud Storage for secure and scalable file storage, archiving large data files, and integrating them into data pipelines. Utilized lifecycle management policies for file retention, automating archival processes to maintain compliance and data accessibility.
●Leveraged Databricks notebooks for PySpark-based data transformations and performance tuning of high-volume datasets integrated into BigQuery pipelines.
●Distributed Data Processing with DataProc and BigQuery:
Leveraged DataProc for distributed data processing and transformations involving large datasets, enabling parallelized operations and data aggregation tasks. Utilized BigQuery's native capabilities for high-speed data aggregation, partition exchange, and optimized query execution.
oFor efficient data aggregations and complex calculations, designed processes to swap partitions within BigQuery tables, minimizing downtime and maximizing throughput for high-volume operations.
●Data Orchestration with Cloud Composer:
Implemented data orchestration workflows using Cloud Composer (based on Apache Airflow) to schedule and manage complex data processing pipelines, integrating Dataflow, BigQuery, and other GCP services seamlessly. Developed workflows to handle hierarchical data relationships and streamline data transformations.
●Automation and Job Scheduling with Cloud Scheduler and Pub/Sub:
Automated data processing and archiving tasks using Cloud Scheduler to trigger data pipelines and Cloud Functions for serverless automation. Integrated Pub/Sub for real-time messaging and event-driven data processing across different stages of the workflow.
●Version Control & CI/CD Pipeline with GitLab:
Maintained all Python and configuration code in GitLab for version control and integrated deployment pipelines using GitLab CI/CD. This ensured consistent, automated deployments of Dataflow and Cloud Functions scripts across different environments, supporting continuous development and operational efficiency.
●Hierarchical Data Transformation using BigQuery:
Designed data workflows to manage hierarchical data relationships within BigQuery, leveraging recursive queries and partition-specific processing. This approach consolidated data from multiple sources into a unified structure for complex reporting and analytics.
●Query Optimization and Performance Tuning:
Optimized BigQuery SQL queries using clustering, partition pruning, and query optimization techniques to enhance data retrieval and transformation speeds. Utilized caching and materialized views to improve query performance for frequently accessed data.
●Data Validation and Monitoring with GCP Services:
Integrated data validation processes using Cloud Functions and Dataflow, with robust error-handling mechanisms. Set up monitoring and logging via Stackdriver (now Cloud Operations) to track data pipeline performance, log errors, and send alerts for proactive issue resolution.
Technologies Used: ETL, Oracle PL/SQL, BigQuery, Dataflow, DataProc, Cloud Storage, Cloud Functions, Cloud Composer (Apache Airflow), Cloud Scheduler, Pub/Sub, Stackdriver (Cloud Operations); Python, GitLab CI/CD, Databricks
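An illustrative Apache Beam sketch in the spirit of the Dataflow jobs referenced above; the feed layout, Cloud Storage path, and BigQuery table and schema are hypothetical:
```python
# Illustrative Beam pipeline: read delimited feed files, parse, write to
# BigQuery. Paths, layout, table, and schema are hypothetical placeholders.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_line(line):
    # Assumed pipe-delimited feed layout: id|name|amount
    feed_id, name, amount = line.split("|")
    return {"feed_id": feed_id, "name": name, "amount": float(amount)}


def run():
    options = PipelineOptions()  # runner/project/region set via CLI flags
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFeed" >> beam.io.ReadFromText("gs://example-bucket/feeds/*.txt")
            | "Parse" >> beam.Map(parse_line)
            | "WriteToBQ" >> beam.io.WriteToBigQuery(
                "example_project:staging.feed_records",
                schema="feed_id:STRING,name:STRING,amount:FLOAT",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```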
Verizon Irving, TX June 2020-June 2021
Senior Technical Lead
Project: Verizon UDA (Unified Data Architecture)
Roles & Responsibilities:
●Technical Coordination: Served as the primary onsite technical coordinator, collaborating with Verizon’s Global Network and Technology Business Group (AI&D) to support successful project launches utilizing WinSCP, Super Putty, and CI/CD practices with GitLab.
●Team Leadership & Project Management: Led and mentored a cross-functional team on data engineering projects, ensuring timely delivery and collaboration through Jira for agile project management.
●Data Pipeline Design: Designed data pipelines to manage the entire data lifecycle (ingestion, curation, analytics) within Verizon’s Unified Data Architecture (UDA), leveraging tools such as ETL (Talend), GCP BigQuery, Spark, and Hive.
●Data Migration & Validation: Executed data migration and validation activities to ensure integrity and reliability using Oracle PL/SQL, Teradata, and Microsoft SQL Server. Migrated legacy processes from Teradata to Hadoop-based tools like Hive and Spark to modernize workflows.
●Code Optimization & Automation: Enhanced existing tools and reports by optimizing SQL queries, automating ETL processes, and integrating new business requirements using Python 3.1, SQL Developer, and Tableau.
●Scalable Solutions Development: Developed and deployed scalable code solutions with technologies like Python 3.1, SQL, and GCP BigQuery, adhering to CI/CD practices with GitLab for continuous integration (see the sketch below).
●System Testing & User Acceptance: Conducted system testing and provided UAT support, working closely with stakeholders to verify outcomes while utilizing Unix and Super Putty for testing and deployment.
●Monitoring & Support: Monitored data servers and pipelines with WinSCP and Super Putty, implementing heartbeat monitoring and automated alerts to ensure reliability and minimize downtime. Provided ongoing support for real-time dashboards using Tableau, ensuring timely business intelligence insights for stakeholders.
Technologies Used: ETL, Oracle PL/SQL, Teradata, Microsoft SQL Server, Tableau, GCP BigQuery, Hive, Spark, Unix, WinSCP, Super Putty, CI/CD with GitLab, Python 3.1
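An illustrative sketch of running a parameterized BigQuery query from Python, in the spirit of the scalable solutions referenced above; the project, dataset, and column names are hypothetical:
```python
# Illustrative parameterized BigQuery query; project, dataset, table, and
# columns are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client()  # uses application default credentials

query = """
    SELECT region, COUNT(*) AS record_count
    FROM `example_project.uda_curated.network_events`
    WHERE event_date = @event_date
    GROUP BY region
"""

job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("event_date", "DATE", "2021-01-01"),
    ]
)

for row in client.query(query, job_config=job_config).result():
    print(row["region"], row["record_count"])
```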
Windstream Charlotte, NC September 2017 – June 2020
Senior Technical Lead
Project: EarthLink MetaSolv Data Migration
Roles & Responsibilities:
●Led the data migration project for Windstream, ensuring a seamless integration of Earthlink’s customer, service, product, equipment, and network data into Windstream's MetaSolv application.
●Analyzed source data from Earthlink’s Oracle database and flat files to understand the structure and dependencies. Developed data mapping and transformation strategies to align with the target data model.
●Designed and implemented a staging data model to standardize incoming data and efficiently handle large-scale migrations, ensuring consistency and integrity.
●Developed scripts in Oracle PL/SQL to extract, transform, and load (ETL) data from staging into the target MetaSolv Oracle database, ensuring compatibility with Windstream’s data model and enabling modifiability through the MetaSolv application.
●Conducted data validation and integrity checks at multiple stages to ensure data consistency and accuracy after the migration process.
●Created scripts and jobs to automate data loading and validation, using tools and technologies such as Unix, WinSCP, and SQL for seamless execution (see the sketch below).
●Coordinated with business stakeholders and development teams to finalize the data model and establish key relationships between data entities in the target environment.
●Managed project tracking and progress using Jira under the Agile methodology, ensuring iterative delivery and continuous alignment with business requirements.
●Provided support during User Acceptance Testing (UAT) to validate data modifications through the MetaSolv application and resolve any discrepancies.
●Delivered regular updates to stakeholders on project progress, risks, and mitigation strategies, focusing on successful project execution within timelines.
Technologies Used: Oracle PL/SQL, Unix, WinSCP, MetaSolv M6, Jira, Agile Methodology.
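An illustrative sketch of driving a staged load and validation from Python against staging and target Oracle schemas, in line with the automation referenced above; the PL/SQL package, tables, and connection details are hypothetical placeholders:
```python
# Illustrative staged-load driver: call a hypothetical PL/SQL load package,
# then compare staged vs. loaded row counts for a batch.
import oracledb

# Hypothetical connection details; real credentials would come from a vault.
conn = oracledb.connect(user="stg_user", password="example", dsn="dbhost/ORCLPDB1")
cur = conn.cursor()

# Kick off a hypothetical staged load for one batch via a PL/SQL package.
cur.callproc("stg_migration_pkg.load_customer_batch", [1001])
conn.commit()

# Simple post-load validation: compare staged vs. loaded row counts.
cur.execute("SELECT COUNT(*) FROM stg_customers WHERE batch_id = :1", [1001])
staged = cur.fetchone()[0]
cur.execute("SELECT COUNT(*) FROM migrated_customers WHERE batch_id = :1", [1001])
loaded = cur.fetchone()[0]

print(f"staged={staged}, loaded={loaded}, match={staged == loaded}")

cur.close()
conn.close()
```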
Fusion Wayne, NJ May 2016 – September 2017
Senior Technical Lead
Project: MetaSolv Solution Implementation & Migration
Roles & Responsibilities:
●Led the migration project to replace the existing COSMOS application with Oracle MetaSolv Communication Solutions (M6) for Fusion Connect, a CLEC telecom company, to streamline service delivery to enterprise customers.
●Installed and configured the MetaSolv Communication Solutions in Fusion Connect’s environment, setting up the foundation for all data and process migrations.
●Analyzed the data and integrated applications in COSMOS, identifying all relevant data entities including customer data, product data, equipment data, services data, and customizations or automations in the legacy system.
●Mapped legacy data entities and attributes in COSMOS to corresponding entities in MetaSolv M6 through comprehensive data modeling and data mapping.
●Designed and implemented a staging data model to standardize incoming data for transformation and loading into the target MetaSolv M6 database.
●Performed ETL processes to extract, transform, and load data from COSMOS to the target MetaSolv database using Oracle PL/SQL and Unix scripts, ensuring data consistency and integrity throughout the migration.
●Created and/or adapted automations to match the existing functionality in COSMOS using Oracle MetaSolv Solutions' in-built automation capabilities.
●Conducted data validation and functional testing to verify data accuracy and ensure that migrated data performed optimally within the new system.
●Carried out detailed testing of MetaSolv’s functionalities to ensure that all required business processes, such as customer and order creation, were working as expected.
●Trained end-users on using the new MetaSolv application, covering key functionalities, process automations, and order management.
●Executed new customer orders, services, and provisioning workflows to validate end-to-end business processes within MetaSolv and ensure that they aligned with business needs.
●Managed project tasks using Jira to track progress, manage tasks, and document issues, adhering to an Agile methodology for iterative development and continuous improvement.
●Coordinated with business stakeholders and technical teams to address any gaps or modifications needed during the transition from COSMOS to MetaSolv.
Technologies Used: Oracle PL/SQL, Unix, WinSCP, MetaSolv M6, Jira, Agile Methodology, Java
CenturyLink La Crosse, WI September 2013 – May 2016
Senior Technical Lead
Project: NIC Pre-Conversion MSS Cleanup
Roles & Responsibilities:
●Led the Network Inventory Cleanup Project for CenturyLink (now Lumen) as they transitioned from Oracle MetaSolv Solution to TIRKS, focusing on identifying and resolving discrepancies in network equipment and topology data.
●Analyzed and corrected discrepancies within the MetaSolv application related to equipment details, port addresses, mounting positions, cross-connect configurations, and CLR/DLR (Circuit Layout Report/Design Layout Report), ensuring data integrity before the transition.
●Reviewed reports from data centers and matched them with the existing data in MetaSolv Solution to identify discrepancies, conducting comprehensive validations to ensure all records were consistent and accurate.
●Developed Oracle PL/SQL procedures and packages to perform bulk data fixes in the MetaSolv Oracle Database, automating the correction of data discrepancies and improving efficiency in the cleanup process.
●Performed network topology corrections and updated the network assignment records, working in close alignment with business and operational teams to verify accuracy.
●Generated detailed reports on the data fixes performed, highlighting any errors or unresolved issues, and distributed them to the appropriate stakeholders for review and resolution.
●Ensured compliance with OSS/BSS application standards and maintained the overall health of the MetaSolv environment during the migration to TIRKS.
●Collaborated closely with cross-functional teams, database administrators, and stakeholders, addressing their concerns and providing necessary technical insights on the Oracle MetaSolv Solution.
Technologies Used: Oracle MetaSolv, SQL, PL/SQL
Integra Telecommunications Portland, OR January 2011 – September 2013
Senior Technical Lead
Project: MetaSolv Automation and Ancillary Application Enhancement
Roles & Responsibilities:
●Provided application support for Oracle Communication MetaSolv Solution used by Integra Telecommunications, ensuring seamless service enablement for business customers and resolving issues encountered by Integra’s business users efficiently.
●Analyzed and applied fixes in MetaSolv application to resolve user-reported issues related to service provisioning, customer management, product and order management, and network configurations, maintaining service integrity and customer satisfaction.
●Designed and developed automation processes for Circuit, Equipment, and Network provisioning to streamline routine tasks and improve operational efficiency. Leveraged .Net technology to enhance and extend custom automation scripts for both Physical and Virtual connections, resulting in faster execution of provisioning tasks.
●Performed data migrations from external applications into Integra’s MetaSolv Solution, including the integration of customer data, orders, products, equipment, and network systems, ensuring data accuracy and integrity during the migration process.
●Enhanced custom automation workflows using .Net-based tools to handle complex scenarios and meet evolving business requirements, while maintaining compatibility with MetaSolv’s core functionalities.
●Collaborated with business stakeholders and development teams to implement automation enhancements and address emerging challenges in network and equipment management.
●Conducted testing and validation to ensure that all data migrations, automations, and enhancements were performing optimally in the MetaSolv environment, adhering to business standards and operational requirements.
●Maintained detailed documentation for all fixes, automations, and enhancements, ensuring transparency and traceability in the support and development processes.
Technologies Used: PL/SQL, Oracle MetaSolv, XML, .Net
Windstream Chennai, TN, India August 2008 – January 2011
Lead Developer
Project: Production Support and MetaSolv Product Maintenance
Roles & Responsibilities:
●Supported and upgraded Oracle MetaSolv Solutions (ASAP), the primary OSS/BSS product used by Windstream, an ILEC telecom company, for end-user and enterprise services.
●Handled daily tickets and resolved issues related to the MetaSolv application, including user-reported incidents and system bugs, ensuring consistent application performance and reliability.
●Managed and executed multiple data migrations within MetaSolv, such as:
oMigrated data between MetaSolv instances for consolidation or upgrades.
oImported and integrated data from flat files into MetaSolv using intermediate data models and custom scripts.
oMigrated data from third-party applications into MetaSolv, creating necessary data models to ensure smooth integration and consistency.
●Performed data analysis, data modeling, and script development