UG Projects 2021-22

The following project descriptions and references should provide most details. The tools (such as programming languages and simulation platforms) and datasets used to realise each project depend on what is available.

For enquiry, please attend the following Q&A Teams meeting sessions.

Q&A Session - Teams Meeting Link:
1st October 2021, 9am - 9:30am (UK time), click here to join

Keywords: Classification, Computer Programming (Matlab, Python, TensorFlow), Computational Intelligence, Convolutional Neural Network, Deep Learning, eXplainable Artificial Intelligence (XAI), Fuzzy Logic, Graph Neural Network, Generative Adversarial Network, Neural Network, Machine Learning, Population-Based Search Algorithms (Genetic Algorithm, Particle Swarm Optimisation, Q-learning, etc.), Reinforcement Learning, Support Vector Machine, Self-Organizing Map

Requirements: Hardworking, self-motivated, creative, good programming skills, willing to learn.

Remarks: 

  • All projects are open-ended. They are set at UG level but can be taken to research level depending on the problem formulation and the approach taken.
  • All project topics are challenging and students will need to gain NEW knowledge for the projects.
  • The High Performance Computing (HPC) service can be used to speed up the learning process.
  • I categorise the project outcomes into three levels (basic, merit and distinction) according to 1) the level of challenge of the problem, 2) scientific and academic contributions (insights, ideas, knowledge, novelty), 3) the breadth and depth (coverage, quality, level of sophistication) of design, methodology, and results. The level of breadth and depth would give a sense of the contribution made to the project.
  • Students are encouraged to publish their contributions and achievements in a journal paper.
  • Technical papers can be downloaded from IEEE Xplore (https://ieeexplore.ieee.org/Xplore/home.jsp) and Science Direct (https://www.sciencedirect.com/) using your King's login details (through institutional login).

 

[HKL01] eXplainable Artificial Intelligence (XAI): Catheter Detection using Explainable Machine-Learning Models

A central venous catheter (CVC) is an indwelling device that is peripherally inserted into a large, central vein (most commonly the internal jugular, subclavian, or femoral), and advanced until the terminal lumen resides within the inferior vena cava, superior vena cava, or right atrium. Over the following decades, central venous access rapidly developed into an important experimental instrument for studying cardiac physiology, as well as an indispensable clinical tool in the treatment of many disease processes. However, malpositioned catheters will lead to many complications. Early recognition of malpositioned tubes is the key to preventing risky complications (even death), even more so now that millions of COVID-19 patients need different kinds of tubes and lines, including CVC [1].

Correct positioning of a CVC is verified following the insertion procedure with radiologic confirmation. Landmark determination of CVC terminal tip precise location is controversial and may be influenced by patient positioning with oblique adjustments, anatomical variation, complications of pneumothorax or other lung conditions which may impact the ability to visualise the catheter tip [2].

Therefore, automatic catheter tip detection can help clinicians efficiently verify whether catheters are placed correctly, preventing risky complications. This task concerns the tip detection of a central venous catheter.

There are two kinds of frameworks for tip detection: one-stage and two-stage. In a one-stage framework, the input is the x-ray image and the output is the catheter tip [3]. In a two-stage framework [4][5], the first stage takes the x-ray image as input and outputs the detected catheter (a bounding box or mask); the second stage takes the detected catheter as input and outputs the corresponding tip. Ideas for tackling this problem can be drawn from the latest studies on human keypoint detection.
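As a hedged sketch of the final step of either framework: keypoint-detection networks typically output a heatmap over the image, and the tip location is read off as the heatmap's peak. The function below illustrates this with NumPy only; the heatmap itself would come from whatever detection model the project chooses.

```python
import numpy as np

def tip_from_heatmap(heatmap: np.ndarray) -> tuple:
    """Return the (row, col) of the most likely tip location,
    taken as the argmax of a keypoint heatmap."""
    idx = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return int(idx[0]), int(idx[1])

# Toy example: a 5x5 heatmap with a single peak at (2, 3)
h = np.zeros((5, 5))
h[2, 3] = 1.0
print(tip_from_heatmap(h))  # (2, 3)
```

In practice the argmax is often refined to sub-pixel accuracy (e.g., by fitting a Gaussian around the peak), but the principle is the same.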

The dataset can be found in RANZCR CLiP - Catheter and Line Position Challenge | Kaggle

The eXplainable Artificial Intelligence (XAI) techniques (e.g., model-agnostic techniques: Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), Layer-Wise Relevance Propagation (LRP)) will be employed to explain the decisions/actions made by the machine learning models. An explanation interface will be created to present the explainable report to the user.
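To make the model-agnostic idea concrete, here is a minimal occlusion-based sensitivity map in the same spirit as LIME's perturbation approach: regions whose occlusion most reduces the model's score are deemed most relevant. The `predict_fn` here is a hypothetical stand-in for the trained classifier's scoring function.

```python
import numpy as np

def occlusion_map(image, predict_fn, patch=4, baseline=0.0):
    """Model-agnostic saliency: slide a patch over the image, occlude it,
    and record the drop in the model's score."""
    base_score = predict_fn(image)
    H, W = image.shape
    sal = np.zeros_like(image, dtype=float)
    for r in range(0, H, patch):
        for c in range(0, W, patch):
            perturbed = image.copy()
            perturbed[r:r + patch, c:c + patch] = baseline
            sal[r:r + patch, c:c + patch] = base_score - predict_fn(perturbed)
    return sal

# Toy "model": scores only the mean intensity of the top-left quadrant,
# so the saliency map should highlight exactly that region.
predict = lambda img: float(img[:4, :4].mean())
img = np.ones((8, 8))
sal = occlusion_map(img, predict, patch=4)
```

The actual project would use the LIME or SHAP libraries (linked in the resources below), which additionally fit a local surrogate model rather than reading off raw score drops.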

(Figure: catheter tip detection)

[1] Kolikof J, Peterson K, Baker A M. Central Venous Catheter[J]. StatPearls [Internet], 2020.

[2] Moureau N L. Vessel health and preservation: the right approach for vascular access[M]. Springer Nature, 2019.

[3] Ma H, Smal I, Daemen J, et al. Dynamic coronary roadmapping via catheter tip tracking in X-ray fluoroscopy with deep learning based Bayesian filtering[J]. Medical image analysis, 2020, 61: 101634.

[4] Li R Q, Xie X L, Zhou X H, et al. Real-Time Multi-Guidewire Endpoint Localization in Fluoroscopy Images[J]. IEEE Transactions on Medical Imaging, 2021.

[5] Zhou Y J, Xie X L, Zhou X H, et al. A Real-time Multi-functional Framework for Guidewire Morphological and Positional Analysis in Interventional X-ray Fluoroscopy[J]. IEEE Transactions on Cognitive and Developmental Systems, 2020.

Resources:

  1. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, https://arxiv.org/abs/1602.04938
  2. https://github.com/marcotcr/lime

  3. A Unified Approach to Interpreting Model Predictions, https://arxiv.org/abs/1705.07874
  4. https://github.com/slundberg/shap
  5. Interpretable Machine Learning A Guide for Making Black Box Models Explainable, https://christophm.github.io/interpretable-ml-book/

  6. A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI, https://ieeexplore.ieee.org/document/9233366
  7. InDepth: Layer-Wise Relevance Propagation, https://towardsdatascience.com/indepth-layer-wise-relevance-propagation-340f95deb1ea
  8. Layer-Wise Relevance Propagation: An Overview, https://link.springer.com/chapter/10.1007/978-3-030-28954-6_10 (The book can be downloaded through Institutional Login (Access via your institution, choose King's College London))
  9. Metrics for Explainable AI: Challenges and Prospects, https://arxiv.org/abs/1812.04608
  10. Evaluating the Quality of Machine Learning Explanations: A Survey on Methods and Metrics, https://www.mdpi.com/2079-9292/10/5/593/pdf
  11. Explaining Artificial Intelligence Demos, https://lrpserver.hhi.fraunhofer.de/
  12. Explaining Deep Learning Models for Structured Data using Layer-Wise Relevance Propagation, https://arxiv.org/abs/2011.13429
  13. Layer-Wise Relevance Propagation for Explaining Deep Neural Network Decisions in MRI-Based Alzheimer's Disease Classification, https://www.frontiersin.org/articles/10.3389/fnagi.2019.00194/full
  14. Explaining Deep Learning Models Through Rule-Based Approximation and Visualization, https://ieeexplore.ieee.org/document/9107404

 

[HKL02] Control of Articulated Robots using Reinforcement-Learning Graph-Neural-Network-Based Techniques 

Reinforcement Learning (RL) is a class of machine learning algorithms that generates an optimal policy by maximising the expectation of long-term accumulated rewards, even without knowing the dynamics of the environment. A Graph Neural Network (GNN), based on graph theory (describing an object as a graph, i.e., a structure consisting of nodes and edges), integrates information from neighbouring nodes to mine hidden features among nodes. The aim of this project is to design an RL-GNN-based controller for articulated/spider-like robots that realises body movement and accomplishes simple objectives such as position control. The performance of the controller should be evaluated both by designing reward functions with respect to the objectives and by comparing training rates with other ANN training methods.
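The reward-maximisation idea can be sketched with the tabular Q-learning update, one of the population of RL algorithms named in the keywords. This is a minimal illustration with toy state/action sizes, not the deep RL-GNN controller itself:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

Q = np.zeros((3, 2))  # toy table: 3 states, 2 actions
Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)
print(Q[0, 1])  # 0.1
```

In the project, the Q-table would be replaced by a GNN that maps the robot's joint graph to action values or a policy, as in the NerveNet reference below.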

The project can be implemented using (but is not restricted to) the following languages and platforms:

  • OpenAI Gym: provides several existing models/environments for training; relatively user-friendly.
  • ROS2: an open-source robotics middleware suite with plenty of available plugins and tools; relatively difficult to start with but more powerful.
  • MATLAB/Simulink: a proprietary multi-paradigm programming language and numeric computing environment, with plenty of downloadable powerful modules.
  • Python: an interpreted high-level general-purpose programming language.
  • TensorFlow: a free and open-source software library for machine learning.
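Whichever platform is chosen, the training code follows the standard Gym-style interaction loop. The stub environment below is a hypothetical placeholder so the loop is self-contained; a real project would instead instantiate a simulated articulated robot (e.g., via `gym.make(...)` or a ROS2 simulation).

```python
import random

class StubEnv:
    """Minimal Gym-style environment stub (hypothetical), used only to
    illustrate the reset/step interaction loop."""
    def reset(self):
        self.t = 0
        return 0.0                       # initial observation

    def step(self, action):
        self.t += 1
        obs, reward = float(self.t), 1.0
        done = self.t >= 5               # episode of 5 steps
        return obs, reward, done, {}

env = StubEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    action = random.choice([0, 1])       # placeholder policy
    obs, reward, done, info = env.step(action)
    total += reward
print(total)  # 5.0
```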


References:

  1. Graph Representation Learning (William L Hamilton)
  2. Reinforcement Learning- An Introduction
  3. NerveNet- Learning Structured Policy with Graph Neural Networks


 

[HKL03] Fall/Behaviour Detection/Prediction/Classification using Graph Neural Networks

The idea of the project is to create graph neural networks and machine-learning models for the detection/prediction/classification of falls or behaviours.
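For fall detection from pose data (see the skeleton-based references below), the human body is naturally a graph of joints. The sketch below shows, with NumPy only and toy numbers, the single message-passing step at the core of a graph convolutional layer: each joint's feature is averaged with its neighbours'. The weight `W` is a hypothetical stand-in for a learned parameter.

```python
import numpy as np

# Toy 3-joint "skeleton": head - torso - hip (a chain graph)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
A_hat = A + np.eye(3)                     # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # degree normalisation
X = np.array([[0.0], [1.0], [2.0]])       # one feature per joint (e.g., height)
W = np.array([[1.0]])                     # hypothetical learned weight

# One GCN layer with ReLU: each joint aggregates its neighbourhood
H = np.maximum(D_inv @ A_hat @ X @ W, 0)
```

Spatio-temporal models such as ST-GCN (reference 6) stack layers like this over both the skeleton graph and time.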

References:

  1. Graph Representation Learning (William L Hamilton)
  2. Martínez-Villaseñor, L., Ponce, H., Brieva, J., Moya-Albor, E., Núñez-Martínez, J. and Peñafort-Asturiano, C., 2019. UP-fall detection dataset: A multimodal approach. Sensors, 19(9), p.1988.
  3. Hemmatpour, M., Ferrero, R., Montrucchio, B. and Rebaudengo, M., 2019. A review on fall prediction and prevention system for personal devices: evaluation and experimental results. Advances in Human-Computer Interaction, 2019.
  4. Tsai, T.H. and Hsu, C.W., 2019. Implementation of fall detection system based on 3D skeleton for deep learning technique. IEEE Access, 7, pp.153049-153059.
  5. Cai, X., Liu, X., An, M. and Han, G., 2021. Vision-Based Fall Detection Using Dense Block With Multi-Channel Convolutional Fusion Strategy. IEEE Access, 9, pp.18318-18325.
  6. Keskes, O. and Noumeir, R., 2021. Vision-based fall detection using ST-GCN. IEEE Access, 9, pp.28224-28236.
  7. Chami, I., Ying, Z., Ré, C. and Leskovec, J., 2019. Hyperbolic graph convolutional neural networks. Advances in neural information processing systems, 32, pp.4868-4879.
  8. FallFree: Multiple Fall Scenario Dataset of Cane Users for Monitoring Applications Using Kinect, https://ieeexplore.ieee.org/document/8334766
  9. Realtime Multi-Person Pose Estimation, https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation
  10. CMU Perceptual Computing Lab/openposePublic, https://github.com/CMU-Perceptual-Computing-Lab/openpose


 

[HKL04] eXplainable Artificial Intelligence (XAI): Classification of Cancer Cells using Explainable Machine Learning Techniques

Machine-learning-based classifiers will be developed for detecting and classifying cancer image samples. Advanced deep-learning structured networks (e.g., convolutional neural network, long-short-term-memory network, transformer, graph neural network, generative adversarial network, deep fuzzy network) combined with state-of-the-art techniques (machine attention, data augmentation, fuzzy logic) will be employed to develop classifiers for detection and classification purposes. The eXplainable Artificial Intelligence (XAI) techniques (e.g., model-agnostic techniques: Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), Layer-Wise Relevance Propagation (LRP)) will be employed to explain the decisions/actions made by the machine learning models. An explanation interface will be created to present the explainable report to the user.
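To give a feel for the SHAP side of the project: for a linear model, Shapley values have a closed form, and they always sum to the gap between the model's output on the instance and on the background average. The numbers below are toy values chosen for illustration; the project would use the SHAP library against the trained classifier.

```python
import numpy as np

# For a linear model f(x) = w . x + b, the exact SHAP values are
# phi_i = w_i * (x_i - mean_i), summing to f(x) - f(background mean).
w, b = np.array([2.0, -1.0, 0.5]), 0.3
background = np.array([[0.0, 0.0, 0.0],
                       [2.0, 2.0, 2.0]])   # hypothetical reference samples
x = np.array([1.0, 3.0, 2.0])              # instance to explain

mean = background.mean(axis=0)              # [1, 1, 1]
phi = w * (x - mean)                        # per-feature contributions
f = lambda v: float(w @ v + b)
print(phi, f(x) - f(mean))                  # contributions sum to the output gap
```

Deep networks have no such closed form, which is why SHAP approximates the values by sampling; but the additivity property demonstrated here is exactly what the explanation report would present to the user.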

Datasets will be provided by Anita Grigoriadis from the School of Cancer and Pharmaceutical Sciences, King's College London. You may need to sign a consent form agreeing not to release the datasets to the public and to use them only for this project.

Useful Links:

  1. Object Recognition in Matlab

  2. An Intuitive Explanation of Convolutional Neural Networks

  3. CS231n: Convolutional Neural Networks for Visual Recognition

  4. A Beginner's Guide To Understanding Convolutional Neural Networks

  5. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, https://arxiv.org/abs/1602.04938
  6. https://github.com/marcotcr/lime
  7. A Unified Approach to Interpreting Model Predictions, https://arxiv.org/abs/1705.07874
  8. https://github.com/slundberg/shap
  9. Interpretable Machine Learning A Guide for Making Black Box Models Explainable, https://christophm.github.io/interpretable-ml-book/
  10. A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI, https://ieeexplore.ieee.org/document/9233366
  11. InDepth: Layer-Wise Relevance Propagation, https://towardsdatascience.com/indepth-layer-wise-relevance-propagation-340f95deb1ea
  12. Layer-Wise Relevance Propagation: An Overview, https://link.springer.com/chapter/10.1007/978-3-030-28954-6_10 (The book can be downloaded through Institutional Login (Access via your institution, choose King's College London))
  13. Metrics for Explainable AI: Challenges and Prospects, https://arxiv.org/abs/1812.04608
  14. Evaluating the Quality of Machine Learning Explanations: A Survey on Methods and Metrics, https://www.mdpi.com/2079-9292/10/5/593/pdf
  15. Explaining Artificial Intelligence Demos, https://lrpserver.hhi.fraunhofer.de/
  16. Explaining Deep Learning Models for Structured Data using Layer-Wise Relevance Propagation, https://arxiv.org/abs/2011.13429
  17. Layer-Wise Relevance Propagation for Explaining Deep Neural Network Decisions in MRI-Based Alzheimer's Disease Classification, https://www.frontiersin.org/articles/10.3389/fnagi.2019.00194/full
  18. Explaining Deep Learning Models Through Rule-Based Approximation and Visualization, https://ieeexplore.ieee.org/document/9107404

 

Potential Projects

1. Robotics Cats

Please email me for discussion if you are interested in this topic.

  1. Nybble: World's Cutest Open Source Robotic Cat, https://www.indiegogo.com/projects/nybble-world-s-cutest-open-source-robotic-cat#/
  2. OpenCat, https://www.hackster.io/petoi/opencat-845129
  3. Build Your Own Robotic Cat with An Open Source Kit by Petoi, https://www.thisiscolossal.com/2019/04/nybble/