Week of Events
Summer internship 2024
SMST, IIT Kharagpur, Kharagpur, West Bengal, India, 721302, Virtual: https://events.vtools.ieee.org/m/421935
IEEE Volunteer Research Showcase 2024
Peer-Reviewed Research Presentation for Academia-Industry Collaboration and Membership Development. Co-sponsored by: IEEE University Program Grant of IEEE SB IIT Kharagpur, IEEE AP-MTTS SBC IIT Kharagpur, IEEE WiE AG Kharagpur Section (R10). Bldg: Nalanda Complex, IEEE Kharagpur Section Office, IIT Kharagpur, Kharagpur, West Bengal, India, 721302
IEEE SPS SBC Webinar: Interpretable Convolutional NNs and Graph CNNs (By Dr. Danilo P. Mandic)
The success of deep learning (DL) and convolutional neural networks (CNNs) has also highlighted that NN-based analysis of signals and images of large sizes poses a considerable challenge, as the number of NN weights increases exponentially with data volume – the so-called Curse of Dimensionality. In addition, the largely ad hoc fashion of their development, albeit one reason for their rapid success, has also brought to light the intrinsic limitations of CNNs, in particular those related to their black-box nature. To this end, we revisit the operation of CNNs from first principles and show that their key component – the convolutional layer – effectively performs matched filtering of its inputs with a set of templates (filters, kernels) of interest. This serves as a vehicle to establish a compact matched-filtering perspective of the whole convolution-activation-pooling chain, which allows for a theoretically well-founded and physically meaningful insight into the overall operation of CNNs. This is shown to help mitigate their interpretability and explainability issues, together with providing intuition for further developments and novel, physically meaningful ways of their initialisation. Such an approach is next extended to Graph CNNs (GCNNs), which benefit from the universal function approximation property of NNs, the pattern matching inherent to CNNs, and the ability of graphs to operate on nonlinear domains. GCNNs are revisited starting from the notion of a system on a graph, which serves to establish a matched-filtering interpretation of the whole convolution-activation-pooling chain within GCNNs, while inheriting the rigour and intuition of signal detection theory. This both sheds new light onto the otherwise black-box approach to GCNNs and provides a well-motivated and physically meaningful interpretation at every step of the operation and adaptation of GCNNs.
It is our hope that the incorporation of domain knowledge, which is central to this approach, will help demystify CNNs and GCNNs, together with establishing a common language between the diverse communities working on Deep Learning and opening novel avenues for their further development.
Speaker(s): Dr. Danilo P. Mandic
Virtual: https://events.vtools.ieee.org/m/423821
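The matched-filtering view described in the abstract can be illustrated with a minimal sketch (not taken from the talk itself): cross-correlating a signal with a template – exactly the operation a convolutional layer performs on its input – produces its largest response at the location where the template occurs, as in classical signal detection. The signal, template, and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "filter" (kernel) of interest: a template we wish to detect.
template = np.array([1.0, 2.0, 3.0, 2.0, 1.0])

# A noisy background signal with the template embedded at index 40.
signal = 0.1 * rng.standard_normal(100)
signal[40:45] += template

# A convolutional layer computes the cross-correlation of its input
# with the kernel; this is the matched-filter operation.
response = np.correlate(signal, template, mode="valid")

# The matched filter peaks where the input best matches the template.
peak = int(np.argmax(response))
print(peak)  # -> 40
```

The peak of the response recovers the template's location, which is the sense in which a convolutional layer "pattern-matches" its input against its learned kernels.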