
Big Data

Location:
Flushing, NY
Posted:
July 13, 2023


Reza Mohd Karim

Spark/Hadoop/Python Developer, Kafka work, Informatica MDM/BDM (AWS and Kafka certified). Email: adyaj2@r.postjobfree.com, Phone: +1-917-***-****, +1-646-***-****. Looking for a remote position.

EXPERIENCE WITH TOP SKILLS REQUIRED

• Over 10 years of professional IT experience as a Data Engineer / Data Center Engineer, covering user requirements gathering, analysis, design, implementation, testing, and documentation across diverse financial industry work environments.

• Extensive professional IT experience, including 6+ years in data processing and analysis, handling high-volume data and supporting various technology stacks.

• Experience as an AWS MSK / Apache Kafka / Confluent Kafka developer with a deep understanding of Kafka development.

• Well versed with Kafka and streaming internals, Java, Spring Boot and similar technologies

• Working knowledge of AWS technologies including ECS/EKS, Lambda, API Gateway, S3, Terraform, and Kafka.

• Collaborating with cross-functional teams to design scalable and highly available MSK architecture that meets the organization's streaming data requirements. This involved determining the appropriate number of brokers, configuring replication, and implementing security measures.

• Deploying MSK clusters on AWS, including selecting the appropriate instance types, storage options, and networking configurations. I was responsible for provisioning and configuring the clusters, ensuring optimal performance and reliability.

• Continuously monitoring the MSK clusters' performance and fine-tuning configurations to optimize throughput, latency, and overall stability. This involved identifying and resolving bottlenecks, implementing partitioning strategies, and adjusting consumer group configurations.

• Implementing robust security measures to protect sensitive data flowing through MSK. This included enabling encryption at rest and in transit, configuring VPC endpoints, and integrating with AWS Identity and Access Management (IAM) for access control.
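As a rough illustration of this kind of MSK provisioning with encryption and IAM authentication enabled, the boto3 sketch below uses hypothetical placeholder values for the cluster name, subnets, security group, and KMS key; it is not the actual deployment configuration.

```python
import boto3

# Hypothetical example: provision an MSK cluster with encryption at rest/in transit
# and IAM-based client authentication enabled. Subnets, security groups, and the
# KMS key ARN are placeholders.
kafka = boto3.client("kafka", region_name="us-east-1")

response = kafka.create_cluster(
    ClusterName="streaming-platform-msk",          # placeholder name
    KafkaVersion="3.5.1",
    NumberOfBrokerNodes=3,                         # one broker per AZ
    BrokerNodeGroupInfo={
        "InstanceType": "kafka.m5.large",
        "ClientSubnets": ["subnet-aaa111", "subnet-bbb222", "subnet-ccc333"],
        "SecurityGroups": ["sg-0123456789abcdef0"],
        "StorageInfo": {"EbsStorageInfo": {"VolumeSize": 500}},  # GiB per broker
    },
    EncryptionInfo={
        "EncryptionAtRest": {"DataVolumeKMSKeyId": "arn:aws:kms:us-east-1:111122223333:key/example"},
        "EncryptionInTransit": {"ClientBroker": "TLS", "InCluster": True},
    },
    ClientAuthentication={"Sasl": {"Iam": {"Enabled": True}}},
)
print(response["ClusterArn"])
```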

• Designing and implementing disaster recovery strategies for MSK clusters, ensuring data replication and failover mechanisms were in place to minimize downtime and data loss.

• Experience with Azure Databricks (ADB) and Azure Data Factory (ADF).

• Hands-on experience with Azure Data Lake, Azure SQL, and Azure Synapse.

• Knowledge of developing ETL/ELT processes between Blob storages, staging, and SQL-managed instances.

• Experienced in use-case development and in software methodologies such as Agile and Waterfall.

• Worked on designing a HIPAA-compliant data lake with a layered architecture in Snowflake.

• Developed a continuous ingestion pipeline for batch loading data to Snowflake from AWS S3 / Azure Blob Storage, along with configuration of notification integration for error handling.
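A minimal sketch of what such a continuous ingestion setup can look like through the Snowflake Python connector, assuming a hypothetical S3 stage, storage integration, error notification integration, and target table:

```python
import snowflake.connector

# Hypothetical sketch: create an auto-ingest pipe over an external S3 stage and
# attach an error notification integration. All object names are placeholders.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="LOAD_WH", database="RAW_DB", schema="LANDING",
)
cur = conn.cursor()

cur.execute("""
    CREATE STAGE IF NOT EXISTS landing_stage
      URL = 's3://example-bucket/landing/'
      STORAGE_INTEGRATION = s3_int
      FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
""")

cur.execute("""
    CREATE PIPE IF NOT EXISTS landing_pipe
      AUTO_INGEST = TRUE
      ERROR_INTEGRATION = pipe_errors_sns   -- routes load errors to a notification integration
      AS COPY INTO RAW_DB.LANDING.ORDERS
         FROM @landing_stage
""")

cur.close()
conn.close()
```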

• Designed an RBAC framework to set up a role hierarchy with well-defined domain, functional, and access roles.

• Integrated Snowflake with dbt to configure CI/CD for data modeling, transformation, and testing.

• Designed an automated data-masking framework to hide PII data from specified roles using row- and column-level masking policies.
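The sketch below illustrates the general idea with hypothetical Snowflake objects (a CUSTOMERS table and a PII_READER role); it is an assumed example, not the production framework itself.

```python
import snowflake.connector

# Hypothetical sketch of column- and row-level masking in Snowflake, applied to a
# placeholder CUSTOMERS table; role and object names are illustrative only.
conn = snowflake.connector.connect(account="my_account", user="admin", password="***")
cur = conn.cursor()

# Column-level masking: hide email addresses from everyone except privileged roles.
cur.execute("""
    CREATE MASKING POLICY IF NOT EXISTS mask_email AS (val STRING) RETURNS STRING ->
      CASE
        WHEN CURRENT_ROLE() IN ('PII_READER', 'SYSADMIN') THEN val
        ELSE '*** MASKED ***'
      END
""")
cur.execute("ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY mask_email")

# Row-level restriction via a row access policy: non-privileged roles only see
# non-restricted rows.
cur.execute("""
    CREATE ROW ACCESS POLICY IF NOT EXISTS restrict_rows AS (region STRING) RETURNS BOOLEAN ->
      CURRENT_ROLE() IN ('PII_READER', 'SYSADMIN') OR region <> 'RESTRICTED'
""")
cur.execute("ALTER TABLE customers ADD ROW ACCESS POLICY restrict_rows ON (region)")
```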

• Worked on building a continuous data ingestion pipeline to extract data from a Postgres database and load it into Snowflake using AWS Managed Workflows for Apache Airflow (MWAA), and configured audit, logging, and alerting for the same.
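A simplified Airflow DAG of the kind described, with hypothetical connection IDs, bucket, and table names; the actual pipeline's audit and alerting setup is not reproduced here beyond an email-on-failure placeholder.

```python
# Hypothetical Airflow (MWAA) DAG sketch: pull a batch from Postgres, stage it in S3,
# and load it into Snowflake; connection IDs, bucket, and table names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.postgres.hooks.postgres import PostgresHook
from airflow.providers.snowflake.hooks.snowflake import SnowflakeHook


def extract_to_s3(**_):
    pg = PostgresHook(postgres_conn_id="orders_pg")
    df = pg.get_pandas_df("SELECT * FROM public.orders WHERE updated_at::date = CURRENT_DATE")
    df.to_csv("s3://example-stage-bucket/orders/orders.csv", index=False)  # requires s3fs


def load_to_snowflake(**_):
    sf = SnowflakeHook(snowflake_conn_id="snowflake_raw")
    sf.run(
        "COPY INTO RAW_DB.LANDING.ORDERS FROM @landing_stage/orders/ "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )


with DAG(
    dag_id="postgres_to_snowflake",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"email_on_failure": True, "email": ["data-alerts@example.com"]},
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)
    extract >> load
```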

• Experience in Big Data and Hadoop Administration ecosystems including HDFS, Pig, Hive, Impala, HBase, Yarn, Sqoop, Flume, Oozie, Hue, MapReduce, and Spark

• Designed and developed various modules in Hadoop Big Data platform and processed data using MapReduce, Hive, Impala, Sqoop, flume, Zookeeper, Kafka and Oozie

• Created new mapping designs using various tools in Informatica Designer like Source Analyzer, Warehouse Designer, Mapplet and Mapping Designer.

• Developed various mappings using Mapping Designer and worked with Aggregator, Lookup, Filter, Router, Joiner, Source Qualifier, Expression, Stored Procedure, Sorter and Sequence Generator transformations.

• Develop and exercise automated infrastructure testing to ensure Kafka configuration changes or upgrades are not detrimental to Kafka-integrating applications.

• Used Data Director, Data Quality, and Data Governance tools to streamline business processes and reduce the costs associated with data administration, data cleansing, third-party data providers, and capital costs.

SKILLS:

• Spark + Scala, Python + PySpark

• Big Data Hadoop Admin/Development (Hive & Impala)

• Hadoop Distribution: CDH, HDP

• Core Java + JavaScript + Spring Boot

• Kafka Admin/Development + Ansible & Tableau

• Microservices (SOAP and RESTful APIs using XML, AJAX & JSON)

• AWS S3, Lambda, EC2, EMR, Redshift, RDS, Snowflake

• ETL/Nifi, PowerCenter ETL, Talend, Snaplogic, AWS Glue

• Informatica MDM 10.1, BDM, Powercenter ETL & IICS

• Docker Swarm, Kubernetes, Jenkins, CI/CD Integration

• Production Support and QA support

• Agile Environment Scrum Methodologies

• Some experience with the healthcare industry domain

Big Data stack: Hadoop, Sqoop, Hive, Spark, Kafka, Airflow
Programming Languages: Java, Scala, Python
Databases: MySQL, HBase
Data Warehouse: Snowflake
Cloud Technology: AWS - S3, EMR, Redshift, Athena, Glue; Azure - Data Factory, Azure Storage, Databricks, Key Vault, Active Directory, Data Flow
DevOps Tools: Git, Docker
Scripting: Linux Shell Scripting

RELEVANT SKILLS (years):

• 3 Years of experience in Spark-Scala

• 4+ yrs of experience in Python/PySpark

• 5+ yrs of Experience in Hadoop Big Data

• 4 yrs on Hadoop distribution w/CDH

• 3 yrs of Experience in Java & JavaScript

• 5 Years of Experience in Kafka, Ansible

• 2 Years of Experience in Microservices

• 3+ Years of Experience in AWS-Snowflake, S3, EMR

• 2 Years of Exp. in ETL/Nifi, Talend, AWS Glue

• 3+ yrs of experience in Informatica MDM, BDM, ETL

• 2+ years of experience in DevOps tools

• 2+ yrs of experience in QA/Production support

• 3+ years of experience in Agile methodologies

CERTIFICATIONS: AWS Solutions Architect Associate (SAA-C02) certified; AWS Data Analytics Specialty (DAS-C01) certified. Completed a Kafka certification course with the Kafka Streaming platform (hands-on project) from Edureka. In the process of obtaining the Azure AZ-303/AZ-304 Solutions Architect certifications; in the process of taking the CCDAK Confluent/Apache Kafka certification exam.

PROFESSIONAL WORK EXPERIENCE

USAA, San Antonio, TX Dec 2021 – Mar 2023

Data Engineer with Python/Python scripting, data validation, and some Kafka work

• Responsibilities – Data Modernization project as a Hadoop/Python Developer/DevOps engineer:

• Worked on the GitLab pipeline: when the pipeline is triggered in GitLab with all the required files, it starts execution from the .gitlab-ci.yml file and executes the steps based on the stages and sequence defined in it.

• Worked in an agile environment to manage and operationalize Kafka components (producers, connectors, consumers, stream processors, ZooKeeper, brokers, Control Center, REST proxies, Connect, replicators) in a multi-data-center, on-premise and public cloud environment. Below are the steps performed in sequence:

• Building and publishing the Docker images in Artifactory with added modules.

• Publishing all the Helm charts to Artifactory.

• Worked on GitLab/OpenShift connectivity to the qTest application.

• Researched and established the dbt Cloud tool to integrate into the pipeline as the ETL tool and do away with OpenShift.

• Experience in Python development using various libraries.

• Experience with REST, CI/CD, TDD, and Java/JVM-based solutions.

• Strong in data structures, algorithms, multithreading, and concurrency.

• Strong understanding of Python Memory Management and concurrency (GIL)

• In-depth knowledge and experience with Data structures and Collections

• Strong understanding of functional programming; good experience in SQL development.

• Strong understanding and hands-on programming/scripting experience in Python, UNIX shell, Perl, and JavaScript.

• Experience with Lean / Agile development methodologies

• Hands-on experience in Oracle database development, with proficiency in writing SQL and PL/SQL and tuning queries.

• Able to read and understand PL/SQL package/procedure logic written in Oracle Report Builder templates known as RDFs.

• qTest high-level capabilities include: managing test cases (manual and automated); automated test results via TRAPI (Test Results API) from any platform that supports REST APIs; story-to-test-case-to-defect mapping via a Jira-to-qTest integration; and launching test scripts (TestNG, Cucumber, Selenium, UFT, etc.) ad hoc or as scheduled tasks.

• Worked on regression testing using the Data Validation Framework (DVF), which is built and used by the P&C Data Modernization system.

• qTest is a vendor product that houses testing evidence for USAA; this data covers both manual and automated tests, and qTest has many integrations around it to support USAA’s compliance mission.

M&T Bank, Buffalo, NY Dec 2020 – Dec 2021

Project: Consumer Pods; Role: Hadoop/Data Engineer/Python Developer with Kafka work
Responsibilities – Kafka development work:

• Experience in an agile environment managing and operationalizing Kafka components (producers, connectors, consumers, stream processors, ZooKeeper, brokers, Control Center, REST proxies, Connect, replicators) in a multi-data-center (on-premise and public cloud) environment.

• Evolve and optimize enterprise-grade Kafka topologies.

• Address performance and scalability challenges posed by new or changing Kafka producers and consumers.

• Implement solutions to monitor Kafka components to proactively address any Kafka messaging issues.

• Conduct multi-environment capacity planning.

• Identify and implement best practices to support a highly-available deployment (considerations include business continuity/disaster recovery, backup and restoration, repartitioning, zero-outage upgrades, etc).

• Develop and exercise automated infrastructure testing to ensure Kafka configuration changes or upgrades are not detrimental to Kafka-integrating applications.

• Assist with development of self-service tooling to enable development to easily provision and configure Kafka.

• Author automated services to provision and instantiate Kafka clusters across multiple platforms (on premise and public cloud).

• Enforce security standards as part of overall Kafka implementation.

• Provide in-depth expertise on evolving Kafka capabilities.

• Involvement in discussions with Business Analyst to understand the user requirements.

• Working on end-to-end ETL processing using Informatica BDM and PowerCenter.

• Working experience with the Talend DI tool; worked on writing Oracle SQL queries.

• Experience in developing mappings and applying required logics based on the requirement.

• Experience with building stream-processing applications using Apache Flink, Kafka Streams.

• Experience with cloud computing platforms like Amazon AWS, Google Cloud, etc.

PYTHON DEVELOPER: Design and develop ETL integration patterns using Python on Spark

• Develop a framework for converting existing Informatica big data jobs to PySpark jobs.

• Generating Sqoop commands to bring data from Teradata to Hadoop and running PySpark scripts for data transformation.
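A short PySpark sketch of the transformation step that would follow such a Sqoop import; database, table, and column names are hypothetical.

```python
# Hypothetical PySpark sketch of the transformation step that follows a Sqoop import
# from Teradata into a Hive staging table; database/table names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("teradata_staging_transform")
    .enableHiveSupport()
    .getOrCreate()
)

# Read the Sqoop-landed staging table and apply simple cleansing/derivations.
txns = spark.table("staging.card_transactions")

curated = (
    txns
    .filter(F.col("txn_amount").isNotNull())
    .withColumn("txn_date", F.to_date("txn_timestamp"))
    .withColumn("txn_amount_usd", F.round(F.col("txn_amount"), 2))
    .dropDuplicates(["txn_id"])
)

# Write the curated output back to Hive, partitioned by transaction date.
(curated.write
    .mode("overwrite")
    .partitionBy("txn_date")
    .saveAsTable("curated.card_transactions"))
```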

• Developed a robust test suite using Python for validating bulk jobs that were upgraded from an older version to a newer one.

• Worked on graph plots using the Python matplotlib library and observed trends in data related to transactions and mobile spend summaries.
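A minimal matplotlib sketch of this kind of trend plot, with a hypothetical CSV input and column names:

```python
# Hypothetical matplotlib sketch for plotting transaction and mobile-spend trends;
# the CSV path and column names are placeholders.
import pandas as pd
import matplotlib.pyplot as plt

summary = pd.read_csv("monthly_spend_summary.csv", parse_dates=["month"])

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(summary["month"], summary["transaction_total"], label="Transactions")
ax.plot(summary["month"], summary["mobile_spend_total"], label="Mobile spend")
ax.set_xlabel("Month")
ax.set_ylabel("Amount (USD)")
ax.set_title("Transaction and mobile spend trends")
ax.legend()
fig.tight_layout()
fig.savefig("spend_trends.png")
```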

HADOOP DEVELOPER: Understanding the technical mapping documents and participating in discussions with data analysts to develop complex logic satisfying business requirements.

• Experience in working with the BAU support team and involvement in production implementation.

• Good knowledge on Big Data, Hadoop, and Spark SQL components.

• Experience in writing Unix scripts and creating CA Automic job schedulers. Also, worked with CONTROL-M job schedulers.

• Ability to be creative and take self-initiatives and execute/ manage multiple projects in parallel during time critical situations.

• Team player; self-motivated, hardworking professional with good organizational, leadership, and interpersonal skills; able to handle multiple tasks and to learn and adapt quickly to new technologies.

• Data ingestion into Hadoop: reading data from different file formats (Parquet, XML, CSV, Avro, ORC) and loading into Hadoop staging tables using the Informatica ETL tool.

• Gathering requirements from the source-system SMEs

• Cleansing and Transforming data through Informatica BDM and Talend Tools

• Obtained data certification and UAT sign-offs to promote tasks to the production environment.

• Handled production issues during the warranty period.

Primary Skills: Python, Informatica, Hadoop, Kafka. Other Skills: Spark, Hive, UNIX, Oracle, Talend.

TECHNICAL EXPERTISE:

• ETL Tool: Informatica BDM, PowerCenter, Talend

• Big Data: Hadoop, Hive, Spark

• Database: Oracle

• Shell Scripting: UNIX

TERRAFORM WORK: Extensive experience with AWS Cloud technologies and native toolsets such as Amazon EC2 instances, Amazon S3 buckets, and Amazon VPCs.

• Strong experience with automation and configuration management using Terraform, Ansible, Chef and Jenkins.

• The migration process was facilitated using Terraform and Ansible, resulting in a newly implemented DevOps pipeline on the AWS platform.

• Used Terraform and Ansible to automate infrastructure deployment and configuration management.

• Setup and monitored MongoDB and Kafka clusters.

• Worked extensively with GitHub to manage source code and build pipelines. Integrated infrastructure provisioning scripts on each of these platforms using industry-standard tools and services such as Terraform.

AWS MSK WORK: Managed Streaming for Kafka (MSK), CloudFormation (CF), Cloud Development Kit (CDK), Lambda, Fargate with Elastic Container Service (ECS)

• Extracting data from APIs and databases; writing complex transformations in Python, Pandas, and SQL; building data pipelines with Logstash; executing multi-step data workflows with dependencies using Makefiles and Luigi; using DBT for analytics engineering in SQL Server; administering topics, partitions, and consumer groups in Kafka; administering Elasticsearch indices, nodes, and clusters

• Building Python APIs with FastAPI for exposing data in Snowflake, deployed to Azure Kubernetes Service. Proof of concept with dbt for Snowflake ELT, paving the way for data engineers.
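A compact sketch of a FastAPI service in this style, reading from Snowflake via the Python connector; the endpoint, table, and credential handling are illustrative assumptions only.

```python
# Hypothetical FastAPI sketch exposing a Snowflake query as a REST endpoint; the
# account, table, and environment-variable credentials are placeholders.
import os

import snowflake.connector
from fastapi import FastAPI

app = FastAPI(title="snowflake-data-api")


def get_connection():
    return snowflake.connector.connect(
        account=os.environ["SF_ACCOUNT"],
        user=os.environ["SF_USER"],
        password=os.environ["SF_PASSWORD"],
        warehouse="QUERY_WH",
        database="ANALYTICS",
        schema="PUBLIC",
    )


@app.get("/orders/{customer_id}")
def get_orders(customer_id: str, limit: int = 100):
    conn = get_connection()
    try:
        cur = conn.cursor(snowflake.connector.DictCursor)
        cur.execute(
            "SELECT order_id, order_date, amount FROM orders "
            "WHERE customer_id = %s ORDER BY order_date DESC LIMIT %s",
            (customer_id, limit),
        )
        return {"customer_id": customer_id, "orders": cur.fetchall()}
    finally:
        conn.close()
```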

• Building developer tools and writing documentation for using Apache Beam with Apache Flink on Kubernetes (Minikube and Azure Kubernetes Service).

• Responsible for enabling developers to use services for implementing event-driven architectures in public (AWS) and private clouds, such as AWS Managed Streaming for Kafka (MSK), SNS/SQS, and EventBridge.

• Responsible for participating in third party SaaS and PaaS solution evaluation, feature comparisons, and requirements analysis.

• Automated generation and association of SASL/SCRAM principals in AWS Secrets Manager to AWS MSK clusters using the Cloud Development Kit (CDK), and usage of those secrets in Fargate services.
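The boto3 sketch below shows the general shape of that automation (the CDK version is analogous); the secret name, KMS key, credentials, and cluster ARN are placeholders. MSK does require the secret name to begin with AmazonMSK_ and the secret to be encrypted with a customer-managed KMS key.

```python
import boto3

# Hypothetical boto3 sketch: create a SASL/SCRAM secret and associate it with an MSK
# cluster. The cluster ARN, KMS key, and credentials are placeholders.
secrets = boto3.client("secretsmanager", region_name="us-east-1")
kafka = boto3.client("kafka", region_name="us-east-1")

secret = secrets.create_secret(
    Name="AmazonMSK_streaming_app_user",              # MSK requires the AmazonMSK_ prefix
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/example",
    SecretString='{"username": "streaming_app", "password": "change-me"}',
)

kafka.batch_associate_scram_secret(
    ClusterArn="arn:aws:kafka:us-east-1:111122223333:cluster/streaming-platform-msk/example",
    SecretArnList=[secret["ARN"]],
)
```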

• Built data analytics pipelines with Python, dbt, Ansible, and SQL Server for presenting MuleSoft Anypoint Platform metrics, KPIs, and transaction logs to other developers. Responsible for enabling developers to implement event-driven architectures in public (AWS) and private clouds; enabled AWS Managed Streaming for Kafka (MSK) for developers using the Cloud Development Kit (CDK) and TypeScript.

• Automated generation of keystores for client certificate authentication to Kafka using AWS Lambda and Java.

• Wrote documentation for onboarding development teams to Kafka.

CLOUD WORK: Developed cloud automation tailored to customers’ needs.

• Involved with planning, designing, and transforming environments from on-premises to cloud-based. Worked on Microsoft Azure Administrator configuring availability sets, virtual machine scale set (VMSS) with load balancers, network security group (NSG), Virtual networks, Docker, and Kubernetes.

• Worked as Cloud Administrator on Microsoft Azure, involved in configuring virtual machines, storage accounts, and resource groups.

• Remote login to Virtual Machines to troubleshoot, monitor and deploy applications

• Carried out the configuration of Microsoft DevTest Labs to migrate the virtual machines from one subscription to another subscription

• Managed day-to-day activity of the cloud environment, supporting development teams with their requirements

• Created labs and virtual machines, along with setting up policies and using formulas and custom images to deploy the network.

American Express, New York City, NY Apr 2017 – Dec 2020
Hadoop Developer w/ Spark/Scala, Kafka Developer, AWS & Informatica BDM/MDM, PowerCenter ETL

• Designed and developed various modules in Hadoop Big Data platform and processed data using MapReduce, Hive, Impala, Sqoop, flume, Zookeeper, Kafka and Oozie

• Experience in Job management using Fair scheduler and Developed job processing scripts using Oozie workflow.

• Used Spark and Hive to implement the transformations needed to join the daily ingested data to historic data.

• Used Spark-Streaming APIs to perform necessary transformations and actions on the fly for building the common learner data model which gets the data from Kafka in near real time.
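A minimal PySpark Structured Streaming sketch of this Kafka-to-storage pattern, with hypothetical brokers, topic, schema, and output paths:

```python
# Hypothetical PySpark Structured Streaming sketch of the Kafka-to-model pipeline;
# brokers, topic, schema, and paths are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("learner_model_stream").getOrCreate()

event_schema = StructType([
    StructField("learner_id", StringType()),
    StructField("event_type", StringType()),
    StructField("score", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "learner-events")
    .option("startingOffsets", "latest")
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "hdfs:///data/learner_events/")
    .option("checkpointLocation", "hdfs:///checkpoints/learner_events/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```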

• Developed Spark scripts by using Scala shell commands as per the requirement.

• Used Spark API over EMR Cluster Hadoop YARN to perform analytics on data in Hive.

• Developed Scala scripts, UDFs using both Data frames/SQL/Data sets and RDD in Spark for Data Aggregation, queries and writing data back into OLTP system through Sqoop.

• Experienced in performance tuning of Spark Applications for setting right Batch Interval time, correct level of Parallelism & memory tuning.

• Optimized existing algorithms in Hadoop using Spark Context, Spark SQL, DataFrames, and pair RDDs.

• Performed advanced procedures like text analytics and processing, using the in-memory computing capabilities of Spark.

• Developed reusable transformations to load data from flat files and other data sources to the Data Warehouse.

• Assisted operation support team for transactional data loads in developing SQL Loader & Unix scripts.

• Implemented Spark SQL queries which intermix the Hive queries with the programmatic data manipulations supported by RDDs and data frames in Scala and python.

• Implemented Python script to call the Cassandra Rest API, performed transformations and loaded the data into Hive.
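A hypothetical sketch of that flow, assuming an HTTP endpoint fronting Cassandra and placeholder field and table names:

```python
# Hypothetical sketch of the REST-to-Hive flow: call an HTTP endpoint fronting
# Cassandra, apply light transformations, and load into a Hive table. The URL,
# payload shape, and table name are assumptions.
import requests
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("cassandra_rest_to_hive")
    .enableHiveSupport()
    .getOrCreate()
)

resp = requests.get("https://data-gateway.example.com/api/v1/events", timeout=30)
resp.raise_for_status()
records = resp.json()["results"]          # assumed payload shape

df = spark.createDataFrame(records)
df = (
    df.withColumn("event_ts", F.to_timestamp("event_ts"))
      .withColumn("load_dt", F.current_date())
)

df.write.mode("append").saveAsTable("curated.cassandra_events")
```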

• Extensively worked on Python and build the custom ingest framework.

• Experienced in handling large datasets using partitions, Spark in-memory capabilities, broadcasts in Spark, effective and efficient joins, transformations, and other operations during the ingestion process itself.

• Experienced in writing live Real-time Processing using Spark Streaming with Kafka.

• Created Cassandra tables to store various data formats of data coming from different sources.

• Designed, developed data integration programs in Hadoop environment with NoSQL data store Cassandra for data access/ analysis.

• Worked extensively with Sqoop for importing metadata from Oracle.

• Implemented Partitioning, Dynamic Partitions, Buckets in HIVE.
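The PySpark sketch below illustrates the partitioning and bucketing pattern with hypothetical tables: a dynamic-partition insert into a date-partitioned table, plus a bucketed copy written with the DataFrame writer.

```python
# Hypothetical PySpark sketch of Hive-style partitioning and bucketing; database,
# table, and column names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hive_partitioning").enableHiveSupport().getOrCreate()

# Dynamic partition insert: each row lands in its txn_date partition.
spark.sql("SET hive.exec.dynamic.partition.mode = nonstrict")
spark.sql("""
    CREATE TABLE IF NOT EXISTS curated.transactions_part (
        txn_id STRING, account_id STRING, amount DOUBLE
    ) PARTITIONED BY (txn_date STRING) STORED AS ORC
""")
spark.sql("""
    INSERT OVERWRITE TABLE curated.transactions_part PARTITION (txn_date)
    SELECT txn_id, account_id, amount, txn_date FROM staging.transactions
""")

# Bucketed copy for faster joins on account_id, written via the DataFrame writer.
(spark.table("curated.transactions_part")
    .write
    .bucketBy(32, "account_id")
    .sortBy("account_id")
    .mode("overwrite")
    .saveAsTable("curated.transactions_bkt"))
```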

• Involved in file movements between HDFS and AWS S3. Extensively worked with S3 bucket in AWS.

• Used Reporting tools like Tableau to connect with Hive for generating daily reports of data.

• Collaborated with the infrastructure, network, database, application, and BI teams to ensure data quality & availability.

• Informatica Master Data management (MDM) & Big data management (BDM) version 10.2

• Integrated multiple customer contact systems & organizational reference data systems into a single multi-domain hub, giving business users full visibility across multiple client databases.

• Used Informatica Data Director (IDD), Data Quality (IDQ), and Data Governance (DG) tools for data compliance, fraud prevention, and data privacy requirements.

• Using Data Director, Data Quality, and Data Governance tools, streamlined business processes and reduced the costs associated with data administration, data cleansing, third-party data providers, and capital costs.

• Drove productivity improvements across the organization by reducing duplicate, inaccurate, and poor-quality data, helping to refocus resources on higher-value activities.

• Worked on the Informatica Intelligent Data Platform (IDP) for Financial Services to accelerate new client onboarding, improve customer engagement, identify new revenue opportunities, comply with regulations, and respond quickly to new market opportunities and threats.

• Some knowledge with Informatica MDM Service Integration Framework (SIF) and Informatica Power Exchange.

• Worked on the Informatica Big Data Management (BDM) product to build Data Quality, Data Integration, and Data Governance processes for big data platforms.

• Used the Informatica BDM tool to perform data ingestion into a Hadoop cluster, data processing on the cluster, and extraction of data from the Hadoop cluster. Worked on the Informatica BDM smart executor “Blaze” to enable seamless Informatica mappings. Worked on leveraging the BDM polyglot capability.

Informatica PowerCenter 10.1 (ETL):

• Knowledge of using PowerCenter 10 Designer to develop mappings that extract data from a source to a target, transforming it as required. Able to deploy PowerCenter transformations to format, join, segregate, and route data to appropriate targets, and to work with Workflow Monitor and Repository Manager.

• Some experience in developing Informatica PowerCenter ETL mappings using Designer and Workflow Manager.

• Extraction, transformation, and load (ETL) processing using Informatica PowerCenter to populate tables in the data warehouse and data marts. Informatica PowerCenter, a “metadata-driven integration platform,” is a data extract, transform, and load (ETL) product.

• Some knowledge of tuning techniques in Informatica PowerCenter.

• Experience with ETL tools and processes for financial industry domain in online credit card applications and mortgage/loans applications

• Used the SnapLogic ETL tool to rapidly load data into a cloud-based data warehouse such as Snowflake.

• Used Informatica PowerCenter for ETL extraction, transformation, and loading of data from heterogeneous source systems into the target database.

• Created new mapping designs using various tools in Informatica Designer like Source Analyzer, Warehouse Designer, Mapplet and Mapping Designer.

• Developed various mappings using Mapping Designer and worked with Aggregator, Lookup, Filter, Router, Joiner, Source Qualifier, Expression, Stored Procedure, Sorter and Sequence Generator transformations.

• Worked with complex mappings having an average of 15 transformations.

• Created and scheduled sessions and jobs to run on demand, on schedule, or only once.

• Monitored Workflows and Sessions using Workflow Monitor.

• Used Teradata & Netezza as a source system.

Mastercard, Purchase, NY Nov 2015 – Mar 2017

Spark/Scala Developer + Hadoop Developer & Kafka Develop/Admin

• Hands-on experience with Hadoop ecosystem components such as HDFS, MapReduce, YARN, Hive, HBase, Oozie, ZooKeeper, Sqoop, Flume, Impala, and Kafka/Kafka Connect. Good programming skills at a higher level of abstraction using Spark/Scala.

• Implemented Spark using Scala and utilizing Data frames and Spark SQL API for faster processing of data

• Converted existing MapReduce jobs into Spark transformations and actions using Spark RDDs, Data frames and Spark SQL APIs.

• Developed a data pipeline using Kafka, Spark, and Hive to ingest, transform, and analyze customer behavioral data.

• Worked on Big Data infrastructure for batch processing as well as real-time processing. Responsible for building scalable distributed data solutions using Hadoop.

• Developed real time data processing applications by using Scala and Python and implemented Apache Spark Streaming from various streaming sources like Kafka. Developed Spark jobs and Hive Jobs to summarize and transform data.

• Involved in HDFS maintenance and loading of structured and unstructured data; imported data from mainframe datasets to HDFS using Sqoop and wrote PySpark scripts to process the HDFS data.

• Extensively worked on the core and Spark SQL modules of Spark.

• Involved in Spark and Spark Streaming, creating RDDs and applying transformation and action operations.

• Created partitioned tables and loaded data using both static partition and dynamic partition method.

• Executed Hive queries on Parquet tables stored in Hive to perform data analysis to meet the business requirements.

• Ingested data from RDBMS, performed data transformations, and then exported the transformed data to HDFS as per the business requirements. Used Impala to read, write, and query the data in HDFS.

• Worked on troubleshooting Spark applications to make them more error tolerant.

• Stored the output files for export onto HDFS and later these files are picked up by downstream systems.

• Loaded data into Spark RDDs and performed in-memory computation to generate the output response. Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs and Scala.

Credit Agricole CIB, New York Jan 2013 – Oct 2015

Hadoop Developer with Spark/Scala & Kafka + AWS with Java

• Launching Amazon EC2 Cloud Instances using AWS (Linux/Ubuntu/RHEL) and configured launched instances with respect to specific applications.

• Developed Spark code using Scala and Spark-SQL/Streaming for faster testing and processing of data.

• Imported the data from different sources like HDFS/HBase into Spark RDD, developed a data pipeline using Kafka to store data into HDFS. Performed real time analysis on the incoming data

• Worked extensively with Sqoop for importing and exporting the data from data Lake HDFS to Relational Database systems like Oracle and MySQL

• Developed python scripts to collect data from source systems and store it on HDFS

• Involved in converting Hive or SQL queries into Spark transformations using Python and Scala

• Built Kafka Rest API to collect events from front end.
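A small Flask-plus-kafka-python sketch of such an event-collection endpoint; broker addresses, the topic name, and the payload shape are assumptions for illustration.

```python
# Hypothetical sketch of a small REST endpoint that accepts front-end events and
# publishes them to Kafka; broker addresses and the topic name are placeholders.
import json

from flask import Flask, jsonify, request
from kafka import KafkaProducer

app = Flask(__name__)

producer = KafkaProducer(
    bootstrap_servers=["broker1:9092", "broker2:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)


@app.route("/events", methods=["POST"])
def collect_event():
    event = request.get_json(force=True)
    # Key by user_id so events for one user stay ordered within a partition.
    producer.send(
        "frontend-events",
        key=str(event.get("user_id", "")).encode("utf-8"),
        value=event,
    )
    return jsonify({"status": "accepted"}), 202


if __name__ == "__main__":
    app.run(port=8080)
```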

• Built real time pipeline for streaming data using Kafka and Spark Streaming.

• Worked on integrating Kafka with Spark streaming processes to consume data from external sources and run custom functions

• Explored Spark and improved the performance and optimization of the existing algorithms in Hadoop using Spark Context, Spark SQL, DataFrames, and pair RDDs.

• Developed Spark jobs and Hive Jobs to summarize and transform data.

• Used the Oozie engine to create workflow and coordinator jobs that schedule and execute various Hadoop jobs such as MapReduce, Hive, and Spark, and to automate Sqoop jobs.

• Configured Oozie workflow to run multiple Hive jobs which run independently with time and data availability.

• Shared Daily Status Reports with all the team members, Team Leads, Managers.

• Environment: Hadoop, MapReduce, HDFS, Pig, HiveQL, Oozie, Flume, Impala, Cloudera, MySQL, Shell Scripting, HBase, Java, Kafka

EDUCATION: BS in Electrical Engineering, City College of New York


