All research projects will be associated with one or more of our themes, and teams will take a multidisciplinary approach to the challenges those themes raise, involving academics from across disciplines. Our industry and government partners have active input into project supervision. We emphasise disruptive innovation across disciplinary boundaries, but innovation that is cognisant of its potential impact on society, security and safety.
When we embed AI systems in edge devices, there is an increasing need for those systems to work together so that AI can be deployed at scale; the AI itself needs to operate in a decentralised manner. One natural way to achieve this is to build intelligent, autonomous devices (agents) that work with each other, and with humans, in a self-organised manner. Projects will investigate novel algorithmic solutions to decentralise machine learning, to enable autonomous systems to coordinate in a secure and trusted way, and to facilitate human-AI collaboration. In addition, we are interested in building new models that emerge through the fusion of novel devices with decentralised, agent-based AI algorithms, and in investigating how these could come together to enable complex, resilient systems to be developed at scale, respecting local autonomy while still providing system-level performance guarantees. Further research challenges include: optimal allocation of tasks across distributed resources under device-specific (e.g. edge versus cloud), geographic and communications constraints; managing uncertainty over user preferences and demand fluctuations; game-theoretic and incentive models to optimise social welfare; automated, decentralised responses to cyber-attacks; federated learning systems; and decentralised optimisation mechanisms.
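As a concrete illustration of decentralising machine learning, the sketch below implements federated averaging (FedAvg) in NumPy: each client trains on data that never leaves its device, and a server aggregates only model parameters. The linear model, learning rate and client sizes are illustrative assumptions, not a prescription for Centre projects.

```python
# Minimal federated averaging (FedAvg) sketch -- illustrative only.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: weight each client's model by its local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
global_w = np.zeros(2)

# Three clients hold private data that never leaves the device.
clients = []
for n in (50, 80, 30):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    clients.append((X, y))

for _ in range(20):  # communication rounds: only parameters are exchanged
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
```

After twenty rounds the shared model converges towards the common underlying weights without any client revealing its raw data.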
Current state-of-the-art AI models, particularly machine learning (ML) techniques such as deep neural networks, have very high computational requirements, making them inefficient to embed on devices. With the rise of the IoT, however, it is essential to develop new, trusted solutions that can move AI to the network edge. The goal of this theme is to investigate solutions for the efficient embedding of AI and ML techniques. In particular, projects may develop energy- and memory-efficient yet secure AI/ML algorithms, emphasising holistic design for performance and optimisation. Restricted computational capacity also makes edge devices attractive targets for cyber-attack, raising key research questions around the deployment of active defences. Similarly, the constraints imposed by embedded devices introduce challenges around how to decompose classification, identification and other tasks: identifying what can be done efficiently at the edge, with other, dependent tasks delegated to cloud infrastructures.
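One common route to energy- and memory-efficient models is post-training quantisation, in which trained weights are compressed from 32-bit floats to 8-bit integers before deployment at the edge. The sketch below shows the idea under simple symmetric, per-tensor assumptions; production toolchains typically add per-channel scales and calibration data.

```python
# Illustrative post-training quantisation of a weight matrix to int8.
import numpy as np

def quantise_int8(w):
    """Symmetric per-tensor quantisation: float32 -> int8 plus a scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantise(q, scale):
    """Recover an approximation of the original weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=(64, 64)).astype(np.float32)
q, s = quantise_int8(w)

# Storage shrinks 4x (int8 vs float32); the worst-case per-weight error
# from rounding is bounded by half the scale factor.
err = np.abs(dequantise(q, s) - w).max()
```

The same scale factor lets integer multiply-accumulate hardware carry out inference, with dequantisation applied only where full precision is needed.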
Computation for AI techniques such as machine learning is well known to rely on fundamental operations different from the standard set used in conventional computation. AI-based processing lends itself to an architecture characterised by multiply-accumulate units, on-the-fly adjustable memory, and co-located memory and computation. Conventional technologies are not optimised for such tasks, limiting performance, and so we need to explore alternatives at a fundamental materials/device level. MINDS will support research projects in the invention, fabrication, characterisation and initial optimisation of emerging technologies exploiting nanomolecular effects; for example, the resistive switching effect, in which memory is stored in a highly confined volume and just enough energy is expended to move a few atoms to reach each new memory state. Projects will develop an understanding of the electrochemistry behind these technologies and use this to develop fabrication processes and characterisation techniques. CDT students will, therefore, play a significant role in pioneering novel electronics for AI.
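To make the appeal of co-located memory and computation concrete, the idealised sketch below models a resistive crossbar performing an analogue matrix-vector multiply: conductances store the weights, input voltages drive the rows, and Kirchhoff's current law sums the multiply-accumulate results on each column. The device parameters and the differential (G+/G−) weight mapping are illustrative assumptions; real devices add non-linearity, noise and drift.

```python
# Idealised resistive-crossbar matrix-vector multiply: one MAC per memory
# cell, computed where the data is stored. All device values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
weights = rng.uniform(-1, 1, size=(4, 3))   # target signed weight matrix

# A memristive device realises only positive conductance, so signed weights
# are mapped onto a pair of arrays (G+ and G-) read out differentially.
g_max = 1e-4   # siemens, assumed on-state conductance
G_pos = np.where(weights > 0, weights, 0.0) * g_max
G_neg = np.where(weights < 0, -weights, 0.0) * g_max

V = np.array([0.2, -0.1, 0.3, 0.05])        # input voltages on the rows

# Column currents sum I = V @ G by Kirchhoff's current law; the differential
# read-out G+ minus G- recovers the signed product.
I = V @ G_pos - V @ G_neg                   # amps
result = I / g_max                          # rescale back to weight units

reference = V @ weights                     # digital reference for comparison
```

In this ideal model the analogue read-out matches the digital product exactly; research at the device level is largely about how far real electrochemistry departs from this ideal.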
The tasks that AI-based models address have varying complexity and real-time processing constraints, requiring flexible hardware acceleration, either locally on user devices or remotely in the cloud. Rather than using separate dedicated hardware for each problem, projects will explore scalable hardware, overlaid with flexible software, combining to support AI algorithms across diverse problems. We will program FPGAs (for cloud computing) and design ASICs (for user devices) to accelerate the most demanding tasks in hardware. Hardware investigated in this theme will focus on run-time scalability, allowing all resources to be dedicated to solving a single problem or split to solve multiple smaller problems simultaneously. Different parts of the hardware may adopt heterogeneous designs, optimised for different stages of the processing. Meanwhile, less demanding processing, specific to particular algorithms and problems, may be performed in software running on CPUs tightly coupled to the hardware. The hardware, software and algorithms will be holistically designed and optimised to address today’s challenges and to allow adaptation to problems that will emerge in the future.
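The run-time choice between dedicating all resources to one problem and splitting them across many can be sketched as a toy partitioning model: a fixed pool of accelerator units receives tasks, and a greedy scheduler assigns each task to the least-loaded unit. The task names, costs and unit count are illustrative assumptions.

```python
# Toy run-time resource partitioning: greedy longest-processing-time
# assignment of tasks to a fixed pool of accelerator units.
def partition(tasks, n_units):
    """Assign each (name, cost) task to the currently least-loaded unit.
    Returns the assignment and the makespan (load of the slowest unit)."""
    loads = [0.0] * n_units
    assignment = {}
    for name, cost in sorted(tasks, key=lambda t: -t[1]):  # biggest first
        unit = loads.index(min(loads))
        assignment[name] = unit
        loads[unit] += cost
    return assignment, max(loads)

# One large task occupies a single unit, leaving the rest idle...
_, makespan_big = partition([("video-inference", 8.0)], n_units=4)

# ...while eight small tasks spread evenly across the four units.
_, makespan_small = partition(
    [(f"sensor-{i}", 1.0) for i in range(8)], n_units=4)
```

A truly scalable design would instead let the large task span all four units, which is precisely the run-time reconfigurability this theme investigates.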