Python Developer

Location:
Parsippany, NJ
Posted:
April 25, 2024


Resume:

Sahithi Kilaru

Python Developer/Data Engineer

ad493g@r.postjobfree.com

973-***-****

Professional Summary:

* ***** ** ********** ** the Analysis, Design, Development, Management, and Implementation of various stand-alone, client-server enterprise applications.

Web/Application Developer with analytical programming experience in Python, Golang, Django, Flask, AWS, RESTful services, C, C++, Java, and SQL.

Experience with Python libraries such as httplib2, urllib2, Beautiful Soup, NumPy, SciPy, Pickle, Pandas, PySide, PyTables, Matplotlib, SQLAlchemy, and PyQt.

Hands-on experience developing REST APIs for web-based applications using Python, the Django web framework, XML, Java, HTML, CSS, DHTML, JavaScript, and jQuery.

Proficient in front-end and back-end development with experience in Python, Django, AJAX, HTML5, CSS3, JavaScript, Bootstrap, jQuery, Angular 8, AngularJS, Node.js, and Express.js.

Experience in working with Amazon Web Services like EC2, Virtual private clouds (VPCs), Storage models (EBS, S3, instance storage), and Elastic Load Balancers (ELBs).

Good experience in Shell Scripting, Oracle RDBMS, SQL Server, Unix and Linux.

Experience in Amazon AWS, Azure, and Google Cloud Platform.

Integrated Python applications with various GCP services, leveraging cloud-native solutions for scalability and reliability.

AWS cloud experience using EC2, S3, EMR, RDS, Redshift, AWS Sagemaker, and Glue.

Experience with Web Development, Amazon Web Services, Python, Django, Flask, and Pyramid framework, with a good understanding of Django ORM and SQLAlchemy.

Very good experience with cloud platforms like Amazon AWS, Azure, and Google App Engine.

Managed Docker containers through pods and performed load balancing across pods using Kubernetes.

Possess a solid understanding of fundamental AI concepts, including supervised and unsupervised learning, reinforcement learning, and ensemble methods.

Experienced in MVW frameworks and libraries like Django, AngularJS, JavaScript, React.js, Backbone.js, jQuery, and Node.js. Familiar with JSON-based REST web services and Amazon Web Services.

Developed and implemented ETL processes, enhancing data integration by modifying existing Python/Django modules to deliver specific data formats.

Hands-on experience with AWS and Azure frameworks, Cloudera, the Hadoop ecosystem, Spark/PySpark/Scala, Databricks, Hive, Redshift, Snowflake, and relational databases; tools like Tableau, Airflow, dbt, and Presto/Athena; and data DevOps frameworks/pipelines, with strong programming/scripting skills in Python.

Proficient in developing high-performance APIs using FastAPI, a modern Python web framework.

Experience in developing and implementing Web Services using REST, SOAP, WSDL

Experience coding in IDEs and editors such as NetBeans, Sublime Text, Spyder, PyCharm, and Emacs, along with Scikit-learn and data visualization in Tableau.

Proficiency in multiple databases like MongoDB, Cassandra, MySQL, PostgreSQL, ORACLE, and MS SQL Server.

Experience customizing JIRA projects with various schemas, complex workflows, screen schemes, permission schemes, and notification schemes.

Hands-on experience working in WAMP (Windows, Apache, MYSQL, and Python/PHP) and LAMP (Linux, Apache, MySQL, and Python/PHP) Architecture.

Technical Skills:

Programming Languages

Python, R, SQL, Java

SDLC Methodologies

Agile/SCRUM, Waterfall

Operating Systems

Windows, Linux

Python Libraries

httplib2, Jinja2, NumPy, Matplotlib, Pickle, PySide, SciPy, wxPython, PyTables

Python Frameworks

Django, Flask, Pyramid, Pyjamas

Python IDE'S

Sublime Text, VIM, PyCharm, PyDev, NetBeans, Eclipse.

Databases

Microsoft SQL Server, Oracle, MySQL, PostgreSQL, MongoDB, Snowflake

Version Controls

Git, GitHub

CI/Build Tools

Jenkins, Maven

Web Technologies

HTML5, CSS3, JavaScript, Ajax, jQuery, Node.js, AngularJS, Bootstrap

Technologies and Tools

YANG tree configuration, Core, RESTCONF, NETCONF, YANG web services, GitHub.

Development Tools

PyCharm, VS Code, Jupyter Notebooks, Sublime Text, Notepad++, Slack.

Hadoop

HDFS, Spark, Sqoop, Hive, HBase, Oozie, Impala, Zookeeper

Professional Experience:

Client: Evernorth Health Services, Morris Plains, NJ July 2023 – Present

Role: Sr. Python Developer/ Data Engineer

Responsibilities:

Involved in defect resolution and continuous integration, in line with Agile software methodology principles.

Developed backend modules using Python, Golang on Django, and Flask Web Framework on MySQL.

Developed the user interface of the website using HTML, CSS, JavaScript, AngularJS, TypeScript, jQuery, and AJAX.

Expertise in developing consumer-facing features and applications with Python, Django, HTML, Behaviour-Driven Development (BDD), and pair programming; provided guidance to junior developers.

Working knowledge of JIRA (Agile) for project bug tracking.

Used django-celery with RabbitMQ as the message broker; used Jenkins for continuous integration (CI) and continuous deployment (CD).
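
A minimal sketch of the django-celery/RabbitMQ pattern described above; the broker URL, task name, and task body are illustrative placeholders rather than the actual project code.

    from celery import Celery

    # RabbitMQ as the message broker (placeholder connection string).
    app = Celery("tasks", broker="amqp://guest:guest@localhost:5672//")

    @app.task
    def send_notification(user_id, message):
        # Placeholder body: the real task would call an email/SMS service.
        print(f"notify {user_id}: {message}")

    # Producer side (e.g., inside a Django view): send_notification.delay(42, "hello")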

Integrated Python applications with various GCP services, leveraging cloud-native solutions for scalability and reliability.

Utilized Google Cloud SDK and APIs for seamless communication between Python code and GCP resources.

Involved in the development of web services using SOAP for sending and receiving data to and from external interfaces in XML format.

Implemented and managed Amazon Redshift clusters for high-performance data warehousing, enabling efficient processing of large datasets.

Optimized Redshift performance through schema design, query optimization, and distribution key strategies.

Utilized Docker Compose for defining and managing multi-container Docker applications.

Created Docker images for various applications, optimizing them for size and performance.

Developed and maintained Django-based web applications, utilizing the Django ORM for efficient database interactions.

Experienced in designing database schemas, tables, views, and stored procedures using PL/SQL, adhering to best practices for performance, scalability, and maintainability.

Proficient in utilizing Cucumber framework to implement BDD methodologies, translating business requirements into executable specifications for automated testing.

Implemented RESTful APIs using Flask, integrating SQLAlchemy ORM for seamless data management.
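
A hedged sketch of a Flask REST endpoint backed by the SQLAlchemy ORM (via Flask-SQLAlchemy); the Item model, routes, and SQLite URI are assumptions made only for illustration.

    from flask import Flask, jsonify, request
    from flask_sqlalchemy import SQLAlchemy

    app = Flask(__name__)
    app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///demo.db"  # placeholder URI
    db = SQLAlchemy(app)

    class Item(db.Model):
        id = db.Column(db.Integer, primary_key=True)
        name = db.Column(db.String(80), nullable=False)

    @app.route("/items", methods=["GET"])
    def list_items():
        # Query via the ORM and serialize to JSON.
        return jsonify([{"id": i.id, "name": i.name} for i in Item.query.all()])

    @app.route("/items", methods=["POST"])
    def create_item():
        item = Item(name=request.json["name"])
        db.session.add(item)
        db.session.commit()
        return jsonify({"id": item.id}), 201

    if __name__ == "__main__":
        with app.app_context():
            db.create_all()
        app.run()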

Designed and optimized data models in Django ORM, ensuring robust and scalable solutions.

Developed Python scripts for data cleansing and transformation, improving data quality and reliability.

Implemented RESTful endpoints, leveraging FastAPI's asynchronous features for optimal responsiveness.
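
A small illustration of the FastAPI asynchronous endpoint style referred to above; the Order model and routes are hypothetical examples, not the production API.

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Order(BaseModel):
        sku: str
        quantity: int

    @app.get("/health")
    async def health():
        return {"status": "ok"}

    @app.post("/orders")
    async def create_order(order: Order):
        # Async endpoint: real I/O (database, downstream HTTP) would be awaited here.
        return {"sku": order.sku, "quantity": order.quantity}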

Proficient in Linux system administration, including user management, file system navigation, permissions configuration, and troubleshooting and resolving issues related to Linux servers.

Performed routine database maintenance tasks, including patching, upgrading, and optimizing RDS database performance.

Utilized SageMaker capabilities for data preprocessing, including feature engineering and handling missing values.

Collaborated with cross-functional teams to integrate SageMaker solutions into existing workflows and applications.

Developed and implemented high-performance APIs using Google's Protocol Buffers and gRPC framework.

Designed and defined API interfaces and data structures using Protocol Buffers' interface definition language (IDL).

Implemented big data operations on the AWS cloud: created clusters using EMR, EC2 instances, IAM, and S3 buckets, and submitted Spark jobs to the EMR cluster from Airflow.

Collaborated with cross-functional teams to define infrastructure requirements and implemented solutions using Terraform.

Experienced in deploying and managing tasks and services within ECS clusters, ensuring high availability, fault tolerance, and scalability.

Used Amazon Web Services (AWS) such as EC2, S3, Elastic MapReduce (EMR), Auto Scaling, CloudWatch, and SNS.

Developed and implemented machine learning models using TensorFlow, leveraging its extensive ecosystem for deep learning applications.

Experienced in implementing serverless APIs using AWS API Gateway and AWS Lambda, including designing RESTful endpoints, handling authentication and authorization, and implementing rate limiting and caching strategies.
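
A minimal sketch of a Python Lambda handler behind API Gateway (Lambda proxy integration); the query parameter and response payload are illustrative only.

    import json

    def lambda_handler(event, context):
        # API Gateway (proxy integration) passes the HTTP request in `event`.
        name = (event.get("queryStringParameters") or {}).get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"hello {name}"}),
        }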

Developed comprehensive unit tests using Pytest to validate the functionality and reliability of FastAPI applications.
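
A brief Pytest example for a FastAPI application using the built-in TestClient; the app.main import path and the /health route are assumptions for illustration.

    from fastapi.testclient import TestClient
    from app.main import app  # hypothetical module path

    client = TestClient(app)

    def test_health_returns_ok():
        response = client.get("/health")
        assert response.status_code == 200
        assert response.json() == {"status": "ok"}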

Integrated DynamoDB seamlessly into serverless architectures using AWS Lambda, API Gateway, and other AWS services, leveraging DynamoDB's scalability and pay-per-request pricing model for cost-effective and scalable solutions.

Designed and maintained GitLab CI/CD pipelines for multiple projects, enabling automated testing and deployment, which significantly improved code quality and deployment reliability.

Utilized SparkSQL alongside HiveQL for seamless query execution, allowing for the integration of Spark's processing capabilities with Hive's structured querying language.

Applied AI techniques to healthcare data for tasks such as disease prediction, medical image analysis, and personalized medicine.

Utilized Pandas to efficiently clean, preprocess, and manipulate large datasets for analysis.

Conducted in-depth data analysis using Pandas, extracting meaningful insights and trends from complex datasets.
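
A short, hypothetical Pandas cleaning and aggregation sketch in the spirit of the bullets above; the file name and column names are placeholders.

    import pandas as pd

    df = pd.read_csv("claims.csv")  # hypothetical input file
    df = df.drop_duplicates()
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    df = df.dropna(subset=["amount", "member_id"])

    # Aggregate cleaned data to surface trends by plan type.
    summary = df.groupby("plan_type")["amount"].agg(["count", "mean", "sum"])
    print(summary)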

Developed complex SQL queries and stored procedures in DB2, optimizing data retrieval and manipulation processes.

Experienced in managing infrastructure state and dependencies with Terraform.

Explored and implemented solutions for real-time data processing with Hive, integrating with technologies like Apache Kafka.

Implemented data processing pipelines in Python using GCP's Dataflow for real-time and batch data processing.

Utilized PySpark SQL for executing SQL queries on structured data and integrated with Hive for managing metadata and accessing Hive tables.

Proficient in utilizing PySpark's DataFrame API to perform various data manipulation tasks such as filtering, selecting, joining, aggregating, and transforming large datasets.
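
An illustrative PySpark DataFrame snippet showing the kind of filtering, joining, and aggregation described above; the S3 paths and column names are hypothetical.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("claims-etl").getOrCreate()

    claims = spark.read.parquet("s3://bucket/claims/")    # placeholder path
    members = spark.read.parquet("s3://bucket/members/")  # placeholder path

    result = (
        claims.filter(F.col("status") == "PAID")
              .join(members, on="member_id", how="inner")
              .groupBy("plan_type")
              .agg(F.sum("amount").alias("total_paid"))
    )
    result.write.mode("overwrite").parquet("s3://bucket/output/paid_by_plan/")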

Experienced in integrating PySpark with diverse data sources including structured and semi-structured data formats like CSV, JSON, Parquet, and databases like MySQL, PostgreSQL, and MongoDB.

Proficient in container orchestration using Kubernetes to automate the deployment, scaling, and management of containerized applications.

Implemented SASL authentication for Kafka clusters, ensuring secure communication between brokers and clients.

Familiar with Terraform modules for reusable infrastructure components.

Skilled in writing complex SQL queries, stored procedures, triggers, and views for managing and manipulating data in relational databases.

Demonstrated ability to optimize SQL queries and database performance through query optimization, index tuning, and database parameter tuning.

Proficient in automating infrastructure management tasks using AWS Lambda and AWS CloudFormation, enabling infrastructure as code (IaC) practices for consistent and repeatable deployments.

Utilized JaaS configuration to define and enforce authentication rules for Java-based Kafka clients.

Implemented and optimized Kubernetes manifests for deploying applications in a microservices architecture.

Set up and maintained CI/CD pipelines for Python applications on GCP, using tools like Jenkins.

Used Python to extract weekly availability information from XML files using Underscore.js.

Proficient in writing Spark applications using Databricks notebooks, leveraging the power of Apache Spark for distributed data processing.

Scheduled Hive jobs using tools like Apache Airflow, ensuring timely execution and coordination with other data processing tasks.

Proficient in leveraging Redis data structures such as strings, lists, sets, hashes, and sorted sets to implement various data storage and manipulation patterns efficiently.
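
A brief redis-py sketch of the data structures mentioned above (string with TTL, hash, sorted set); keys and values are placeholders.

    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # String as a cache entry with a one-hour TTL.
    r.set("session:42", "active", ex=3600)

    # Hash for a small record/profile.
    r.hset("user:42", mapping={"name": "alice", "role": "engineer"})

    # Sorted set as a ranking structure.
    r.zadd("api_latency_ms", {"/orders": 120, "/health": 5})
    print(r.zrange("api_latency_ms", 0, -1, withscores=True))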

Demonstrated expertise in web scraping using Python and Scrapy to extract structured data from websites.

Developed and implemented Scrapy spiders to crawl and scrape data from various websites, ensuring efficient and targeted data extraction.
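
A compact Scrapy spider illustrating the crawl-and-extract pattern, using the public quotes.toscrape.com demo site rather than any project-specific target.

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = ["https://quotes.toscrape.com/"]

        def parse(self, response):
            # Extract structured items from each page.
            for quote in response.css("div.quote"):
                yield {
                    "text": quote.css("span.text::text").get(),
                    "author": quote.css("small.author::text").get(),
                }
            # Follow pagination links to crawl the whole site.
            next_page = response.css("li.next a::attr(href)").get()
            if next_page:
                yield response.follow(next_page, callback=self.parse)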

Familiar with various machine learning algorithms for classification, regression, and clustering tasks.

Knowledgeable about deep neural networks, convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their applications in computer vision and natural language processing.

Integrated Databricks with AWS cloud services for seamless data storage, processing, and analytics in a cloud environment; used the Snowflake connector to load data to and from Snowflake and analyzed it to support business goals.

Proficient in designing, developing, and managing data warehouses using Snowflake, a cloud-based data platform that offers scalability, flexibility, and performance for modern data analytics.

Experienced in designing and implementing data models in Snowflake, including schema design, table structures, and optimization for performance and scalability.

Created and maintained data visualizations using tools like Tableau, presenting complex information in an easily understandable format to support effective communication of ideas.

Experienced in utilizing Cypress for end-to-end testing of web applications, ensuring their functionality, performance, and reliability.

Integrated Cypress tests into continuous integration and continuous deployment pipelines using tools like Jenkins, Travis CI, or GitHub Actions, facilitating automated testing as part of the software delivery process.

Expertise in creating SSIS packages to efficiently move and transform data between different environments.

Implemented complex data transformations using SSIS transformations and scripting.

Performed functional testing, regression testing, integration testing, and communication testing.

Developed and implemented ETL processes, enhancing data integration by modifying existing Python/Django modules to deliver specific data formats.

Analysed Python and Django scripts/code for various MSP services and ETL applications.

Received training to analyse root causes and fix issues raised in production.

Designed and developed the framework to consume the web services hosted in Amazon EC2. Created monitors, alarms, and notifications for EC2 hosts using CloudWatch.

Automated existing scripts for performance calculations using NumPy, SciPy, and SQLAlchemy; hands-on experience with GUI toolkits like PyQt.

Wrote POCs for the ETL services using Python and migrated the services from DataStage to Python.

Environment: Python 3.8, PyCharm, Visual Studio Code, SQLite3 Studio, Postman, Windows, Python libraries, Django, Angular 8, HTML5, CSS3, Bootstrap, AWS (EC2, S3, Auto Scaling, CloudWatch, SNS), AWS SageMaker, GCP (Google Cloud Platform), Node.js, XML, JSON, PyUnit, Scala, MongoDB, Git, Agile, Linux.

Client: US Bank, Minneapolis, Minnesota Jan 2023 – Jun 2023

Role: Sr. Python Developer

Responsibilities:

Involved in defect resolution and continuous integration, in line with Agile software methodology principles.

Developed entire frontend and backend modules using Python, C++ on Django and Flask Web Framework.

Developed UI using React JS, CSS, HTML, Typescript, and JSON.

Used the Pandas API to put the data into time-series and tabular format for easy timestamp data manipulation and retrieval.

Built data pipelines and data transformation jobs using Boto, Pandas, and NumPy for Google Analytics Data and Amazon Redshift Data.

Developed Python-based API (RESTful Web Service) to track sales and perform sales analysis using Flask, SQL Alchemy, and PostgreSQL.

Deployed Scalable Hadoop cluster on AWS using S3 as an underlying file system for Hadoop.

Designed, deployed, and managed a continuous integration system with automated testing and automated notification of results using technologies like Terraform, Ansible, Packer, CloudFormation, Docker, and Serverspec.

Skilled in developing complex SQL queries and PL/SQL procedures/functions for efficient data retrieval, manipulation, and aggregation, ensuring optimal database performance.

Knowledgeable about Terraform providers and the Terraform Registry for resource provisioning.

Familiarity with version control systems like Git for managing code repositories and tracking changes collaboratively in a Unix environment.

Developed Python scripts for data cleansing and transformation, improving data quality and reliability.

Developed and executed ETL processes to load data into Redshift from various sources, ensuring data accuracy and integrity.

Designed and implemented end-to-end ETL pipelines using PySpark, encompassing data extraction, transformation, and loading tasks.

Hands-on experience in leveraging PySpark's parallel processing capabilities to distribute computations across a cluster of machines, enabling high-speed data processing for large-scale datasets.

Proficient in using PySpark's Structured Streaming API to process real-time data streams, enabling real-time analytics and insights from continuous data streams.
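
A hedged PySpark Structured Streaming sketch reading from Kafka and writing windowed counts to the console; the broker address and topic name are placeholders.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("stream-demo").getOrCreate()

    events = (
        spark.readStream.format("kafka")
             .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
             .option("subscribe", "transactions")               # placeholder topic
             .load()
    )

    # Keep the message payload and Kafka ingestion timestamp, then count per minute.
    counts = (
        events.selectExpr("CAST(value AS STRING) AS value", "timestamp")
              .groupBy(F.window(F.col("timestamp"), "1 minute"))
              .count()
    )

    query = counts.writeStream.outputMode("complete").format("console").start()
    query.awaitTermination()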

Implemented SageMaker Endpoints for real-time inference, ensuring low-latency responses for production applications.

Familiarity with Continuous Integration (CI) and Continuous Deployment (CD) pipelines, integrating Cucumber tests into build processes using tools like Jenkins to ensure the timely validation of software changes.

Implemented robust security measures and access controls for DynamoDB, including encryption at rest and in transit, IAM policies, and fine-grained access control with AWS Identity and Access Management (IAM), ensuring data confidentiality, integrity, and compliance with regulatory requirements.

Developed client-side code to interact with gRPC servers, making RPC calls and exchanging Protocol Buffers-encoded messages.

Integrated gRPC APIs into existing microservices architectures, enabling efficient communication between services.

Skilled in utilizing Redis as an in-memory data store for caching frequently accessed data, session management, and real-time analytics, improving application performance and scalability.

Proficient in designing schema-less data models for NoSQL databases like MongoDB, allowing flexibility in data structure and accommodating evolving application requirements.

Experience in working with distributed NoSQL databases for horizontally scalable data storage and retrieval, including data partitioning, replication, and consistency management.

Worked with both Tez and Spark execution engines in Hive, understanding the strengths and use cases of each to choose the most appropriate option based on project requirements.

Proficient in designing, implementing, and optimizing Sybase databases for various projects.

Worked with AWS Lambda, API Gateway, and other AWS services for seamless integration.

Skilled in container orchestration using ECS, orchestrating the deployment, scaling, and monitoring of Docker containers across ECS clusters.

Provided support for troubleshooting and debugging machine learning models on SageMaker.

Developed build and deployment scripts using Ant and Maven as build tools in Jenkins to move from one environment to other environments.

Automated ETL pipelines using Hive, enhancing data processing efficiency and reducing manual intervention in data workflows.

Advanced skills in writing complex SQL queries and stored procedures in Snowflake to extract, transform, and load (ETL) data from various sources, and perform advanced analytics and reporting tasks.

Experienced in integrating Snowflake with data lakes and big data platforms, enabling seamless data sharing and analytics across structured and semi-structured data formats using technologies like Apache Hadoop, Apache Spark, and Delta Lake.

Successfully managed DB2 databases in environments with high transaction volumes, ensuring data consistency and reliability.

Integrated Docker containers into CI/CD workflows, facilitating consistent environments across development, testing, and production stages, leading to smoother deployments and reduced configuration drift.

Integrated Keras with TensorFlow for seamless model building and training.

Proficient in Python for backend development, emphasizing web applications and ORM.

Implemented a content management system using Django ORM for extensibility and scalability.

Created interactive dashboards using Tableau to visualize key performance indicators, facilitating data-driven decision-making.

Orchestrated end-to-end data processing pipelines using Apache Hive on Apache Spark, ensuring efficient and scalable data processing.

Successfully implemented and maintained payment gateway solutions to enhance the efficiency of online payment processes.

Competent in integrating Terraform with CI/CD pipelines for automated deployments.

Proficient in writing APIs using REST frameworks to facilitate smooth communication between banking systems and payment gateways.

Leveraged Hive DataFrames in Spark to seamlessly work with Hive tables, facilitating the integration of Spark-based analytics with Hive's metastore.

Proficient in designing, implementing, and maintaining Apache Kafka-based data streaming architectures.

Integrated Docker into continuous integration/continuous deployment (CI/CD) pipelines for automated and streamlined application deployment.

Proficient in designing, configuring, and managing APIs using Amazon API Gateway.

Worked with Terraform to create AWS components like EC2, IAM, VPC, ELB, security groups.

Used Ant and Maven as build tools on Java projects to produce build artifacts from the source code.

Integrated JFrog Artifactory with build tools, such as Terraform and Ansible, to streamline the deployment of AWS components and maintain consistency across multi-tier web-based applications.

Used AWS Cloud Watch to monitor and store logging information.

Used regular expressions for faster search results in combination with Angular built-in, custom pipes, and ng2-charts for report generation.

Demonstrated expertise in using Databricks as a unified analytics platform for big data and machine learning.

Familiarity with Lua scripting in Redis for implementing complex data processing logic directly within the Redis server, improving performance and reducing network overhead.

Proficient in designing, implementing, and managing AWS Simple Queue Service (SQS) solutions.

Demonstrated ability to integrate AWS SQS with other AWS services, such as AWS Lambda, S3, and EC2, to create seamless, event-driven architectures.

Conducted database migrations to RDS, ensuring minimal downtime and data consistency.

Worked with various RDS database engines such as MySQL, PostgreSQL, or Oracle, depending on project requirements.

Implemented robust error-handling mechanisms in shell scripts, enhancing the scripts' reliability and maintainability.

Proficient in writing shell scripts to automate routine tasks, increasing productivity and reducing manual errors.

Integrated secure payment gateways into shopping cart systems, implementing industry-standard security measures to protect sensitive customer information.

Implemented job scheduling and automation in Databricks, ensuring timely execution of data processing tasks and maintaining data pipeline workflows.

Integrated SNS with other AWS services, such as Lambda functions, EC2 instances, and S3 buckets, for building scalable and event-driven architectures.

Managed resource allocation and scaling within Kubernetes clusters, optimizing resource utilization.

Worked with Postman for API testing. Developed REST APIs and worked with Node.js for server-side web application logic.

Involved in the development of web services using SOAP and REST for sending and receiving data to and from external interfaces in XML and JSON format.

Utilized PyUnit, the Python unit test framework, for all Python applications.
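
A tiny PyUnit (unittest) example of the style of test referenced above; the apply_discount function is defined inline purely to keep the sketch self-contained and is not from the original codebase.

    import unittest

    def apply_discount(amount, rate):
        # Hypothetical business rule used only to make the example runnable.
        return round(amount * (1 - rate), 2)

    class TestDiscountRules(unittest.TestCase):
        def test_flat_discount_applied(self):
            self.assertEqual(apply_discount(100.0, 0.1), 90.0)

    if __name__ == "__main__":
        unittest.main()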

Integrated data storage solutions using the Django ODM system for MongoDB.

Utilised the version control tool Git to collaborate with other team members on the same codebase in the repository.

Environment: Python, Django, React JS, HTML5, CSS3, Bootstrap, AWS (EC2, S3, Auto Scaling, CloudWatch, SNS), AWS SageMaker, Postman, Node.js, XML, JSON, PyUnit, MongoDB, Git, Agile, Unix.

Client: Coles Group, Melbourne, Australia Jun 2021 - Aug 2022

Role: Python Developer/ Data Engineer

Responsibilities:

Experienced in developing Web-based Applications using Python, CSS, HTML, JavaScript, ReactJs, Golang, and jQuery.

Created custom directives in ReactJS for re-usable components (multi-field form elements, background file uploads).

Skilled in developing RESTful APIs and microservices using Golang, leveraging its concurrency features for high-performance applications.

Experienced in implementing concurrent and parallel programming patterns in Golang, such as goroutines and channels, to optimize resource utilization and throughput.

Created Tableau scorecards and dashboards using stacked bars, bar graphs, scatter plots, geographical maps, and Gantt charts with the Show Me functionality.

Familiar with Go's standard library and ecosystem, including packages for networking, HTTP handling, database access, and testing.

Familiar with Go's built-in support for JSON and protocol buffers for efficient data serialization and communication in microservices architectures.

Capable of integrating third-party APIs and services into Golang applications, implementing authentication, error handling, and rate limiting strategies for robust API consumption.

Identified and resolved issues related to Query blocks, Deadlocks, timeout, and finding long-running queries using various SQL inbuilt tools.

Integrated Docker with ECS for container deployment, leveraging Docker Compose to package, build, and deploy container images to ECS clusters.

Strong command-line skills for navigating and managing Unix environments, including file system operations, user management, and package installation.

Experienced in developing PL/SQL scripts and procedures for data migration, extraction, transformation, and loading (ETL) processes, ensuring accurate and efficient data transfer between systems.

Built the evaluation model in the Cloudera Workbench workspace by leveraging machine learning models (NLP, logistic regression, decision trees) with Python/PySpark. Advocated for ethical AI practices in LLM deployment.

Implemented automated scaling and management of DynamoDB resources using AWS Auto Scaling and AWS CloudFormation, dynamically adjusting capacity to accommodate fluctuating workloads while minimizing costs and operational overhead.

Proficient in writing Python scripts and leveraging PySpark for automating data processing workflows, scheduling jobs with tools like Apache Airflow, and orchestrating ETL pipelines.

Used the Django template system for the front-end UI along with the OpenStack dashboard; worked on Python OpenStack APIs and used NumPy for numerical analysis.

Collaborated with cross-functional teams to design and optimize cloud-based, distributed data systems using Snowflake and Spark, achieving scalable and high-performance solutions.

Familiarity with Snowflake's architecture and capabilities for automatic scaling and elasticity, enabling on-demand provisioning of compute resources and handling fluctuating workloads efficiently.

Performed data Aggregation operations using Spark SQL queries with Scala.

Developed custom Airflow operators and sensors to integrate with various data sources, enabling seamless data extraction, transformation, and loading (ETL) processes.
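
A hedged sketch of a custom Airflow operator along the lines described above; the HttpToS3Operator name, endpoint, and bucket/key parameters are hypothetical.

    from airflow.models import BaseOperator

    class HttpToS3Operator(BaseOperator):
        # Hypothetical operator: pull a payload from an HTTP source and land it in S3.
        def __init__(self, endpoint, bucket, key, **kwargs):
            super().__init__(**kwargs)
            self.endpoint = endpoint
            self.bucket = bucket
            self.key = key

        def execute(self, context):
            import requests
            import boto3
            payload = requests.get(self.endpoint, timeout=30).content
            boto3.client("s3").put_object(Bucket=self.bucket, Key=self.key, Body=payload)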

Contributed to the establishment of best practices and guidelines for CI/CD implementation, including version control strategies, branch management, and code review processes.

Implemented Apache Airflow for orchestrating and scheduling data workflows, enhancing data pipeline orchestration and management.

Orchestrated multi-account and multi-region deployments using CDK's cross-account and cross-region support, enabling centralized management and governance of distributed AWS environments.

Used Spark SQL with Scala for creating data frames and performed transformations on data frames.

Designed, deployed, and managed a continuous integration system with automated testing and automated notification of results using technologies like Terraform, Ansible, Packer, CloudFormation, Docker, and Serverspec.

Proficient in using Cucumber's tagging feature to organize and execute subsets of scenarios based on categories such as priority, feature, or environment, enhancing test suite manageability and flexibility.

Proficient in AWS Cloud services with a focus on Machine Learning, specifically AWS SageMaker.

Integrated Pandas with visualization libraries like Matplotlib and Seaborn to create informative charts and graphs.

Collaborated with cross-functional teams to define and implement data warehousing best practices using Redshift.

Experienced in implementing security best practices with Terraform for infrastructure deployments.

Applied scikit-learn for classical machine learning tasks, including data preprocessing, feature selection, and model evaluation.

Integrated PySpark seamlessly with the broader Python ecosystem, incorporating libraries like NumPy, Pandas, and scikit-learn for advanced analytics and machine learning.

Implemented deep learning models using Torch, a scientific computing framework with a focus on deep learning.

Implemented shell scripts for log analysis, backup procedures, and software installations.

Implemented RESTful APIs using Flask, integrating SQLAlchemy ORM for seamless data management.

Wrote various Spark transformations using Scala to perform data cleansing, validation, and summarization activities on user behavioural data.


