Mental health and wellbeing are integral to our quality of life. Physical and mental health are intertwined, so we must make mental health a priority. Mental illness is a major global concern. In 2017, mental illnesses were the second most common cause of years lived with disability (YLDs) and the sixth most common cause of Disability Adjusted Life Years (DALYs), posing a serious problem for healthcare systems, especially in developing countries. In India, mental illnesses accounted for 4.7% (3.7%-5.6%) of total DALYs in 2017, up from 2.5% (2.0%-3.1%) in 1990. These figures are alarming, and the necessary steps must be taken now. Left untreated, mental illnesses can be debilitating and disabling. The tragedy is that many people are not open to speaking about mental illness because of negative societal attitudes, fear of labelling, and stigma. Mental Health Awareness Month has been observed each May in the United States since 1949, and the observance has gradually spread to other countries, raising awareness of mental illness through media such as social media, local events, and television. Close to 60 to 70 million people in India suffer from common and severe mental disorders. According to the WHO, the average suicide rate in India is 10.9 per lakh people, and with over 2.6 lakh suicides a year India has been called the world's suicide capital. Yet mental illness remains a stigma in India. The most effective ways to combat stigma and overcome these barriers are to educate people about the warning signs and symptoms of mental illness, talk about mental illness and personal experiences openly, encourage those who are suffering to seek professional help, and practice kindness and compassion. It's time to speak openly about mental illness. Speaker(s): Ms Aparna Jena, Virtual: https://events.vtools.ieee.org/m/422030
Calendar of Events
Over the past two decades, the significance of MR image reconstruction has increased tremendously, enabling reduced scan time, improved image quality, and the extraction of additional information from the measured data. During this period, MRI has witnessed extensive development of advanced computational algorithms for image reconstruction, many of which have been fueled by signal processing advances in several areas, including multi-channel sampling, compressed sensing, dictionary learning, and low-rank and structured low-rank methods. More recently, neural networks have also been employed for image reconstruction, achieving further improvements in scan time and image quality. Most importantly, some of these techniques have found their way into the products of MRI vendors and are having a significant impact on clinical practice. These developments, together with advancements in computational hardware, have opened a new research field that treats MRI reconstruction as a computational imaging problem. In this talk, I will discuss the framework of MRI reconstruction as a computational imaging problem and the advantages it provides in enhancing MR performance, thereby addressing important clinical needs. Speaker(s): Dr. Mariya Doneva, Virtual: https://events.vtools.ieee.org/m/422352
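The computational imaging formulation of MRI reconstruction mentioned in the abstract can be illustrated with a toy compressed-sensing example. The sketch below is not from the talk; the random sampling mask, the regularization weight, and the assumption of image-domain sparsity are all illustrative. It solves the regularized least-squares problem min_x ½‖MFx − y‖² + λ‖x‖₁ (M the k-space sampling mask, F the Fourier transform) with the iterative soft-thresholding algorithm (ISTA):

```python
import numpy as np

def ista_cs_mri(y, mask, lam=0.01, alpha=1.0, n_iter=100):
    """Reconstruct an image from undersampled k-space data y by
    minimizing 0.5*||M F x - y||^2 + lam*||x||_1 via ISTA.
    Sparsity is assumed in the image domain for simplicity; practical
    methods use wavelet or learned sparsifying transforms."""
    x = np.zeros(mask.shape, dtype=complex)
    for _ in range(n_iter):
        # Gradient step on the data-fidelity term: F^H M (M F x - y)
        r = mask * np.fft.fft2(x, norm="ortho") - y
        x = x - alpha * np.fft.ifft2(mask * r, norm="ortho")
        # Proximal step: complex soft-thresholding (keeps phase, shrinks magnitude)
        mag = np.abs(x)
        phase = np.where(mag > 0, x / np.maximum(mag, 1e-12), 0)
        x = phase * np.maximum(mag - alpha * lam, 0)
    return x

# Toy example: a sparse image sampled at roughly 50% of k-space
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[rng.integers(0, 64, 20), rng.integers(0, 64, 20)] = 1.0
mask = rng.random((64, 64)) < 0.5
y = mask * np.fft.fft2(img, norm="ortho")
rec = ista_cs_mri(y, mask, lam=0.02)
```

On this toy problem the iterative reconstruction suppresses the aliasing that a plain zero-filled inverse FFT of the undersampled data would leave behind, which is the core advantage of posing reconstruction as a regularized inverse problem.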
A virtual technical workshop will be organized by the experts of the Empire XPU team, who will give a hands-on demonstration of their cutting-edge EM simulation software and cover important topics in EM engineering. Specifically, the following topics will be covered:
- Advanced EM simulation using a fast FDTD algorithm
- Antenna and radar applications
- Selected Empire features for antenna and radar simulations
- Antenna array modelling and design
- Radar Rx phase difference evaluation
Virtual: https://events.vtools.ieee.org/m/420515
Video coding is a fundamental and ubiquitous technology in modern society. Generations of international video coding standards, such as the widely deployed H.264/AVC and H.265/HEVC and the latest H.266/VVC, provide essential means for enabling video conferencing, video streaming, video sharing, e-commerce, entertainment, and many more video applications. These standards all rely on the fundamental theory of signal processing and information theory to encode generic video efficiently with favorable rate-distortion behavior. In recent years, rapid advancement in deep learning and artificial intelligence has allowed people to manipulate images and videos using deep generative models. Of particular interest to the field of video coding is the application of deep generative models to compressing talking-face video at ultra-low bit rates. By focusing on talking faces, generative models can effectively learn the inherent structure of the composition, movement, and posture of human faces and deliver promising results using very little bandwidth. At ultra-low bit rates, when even the latest video coding standard H.266/VVC is apt to suffer from significant blocking artifacts and blurriness beyond the point of recognition, generative methods can maintain clear facial features and vivid expressions in the reconstructed video. Furthermore, generative face video coding techniques are inherently capable of manipulating the reconstructed face and promise to deliver a more interactive experience. In this talk, we start with a quick overview of traditional and deep learning-based video coding techniques. We then focus on face video coding with generative networks and present two schemes that send different deep information in the bitstream: one sending compact temporal motion features and the other sending 3D facial semantics.
We compare their compression efficiency and visual quality with those of the latest H.266/VVC standard, and showcase the power of deep generative models in preserving vivid facial images with very little bandwidth. We also present visualization results to exhibit the capability of the 3D facial semantics-based scheme in terms of interacting with the reconstructed face video and animating virtual faces. Speaker(s): Dr. Yan Ye, Virtual: https://events.vtools.ieee.org/m/422747
Summer Internship 2024, SMST, IIT Kharagpur, Kharagpur, West Bengal, India, 721302, Virtual: https://events.vtools.ieee.org/m/421935
A brain-computer interface (BCI) is a direct link between the human brain and a computer. It involves registering the activity of the central nervous system and interpreting it to determine the state of the human brain or to decode a certain number of commands. In this talk we will discuss ways to detect and interpret brain activity, and some interesting achievements from various research labs. We will also look at the BCI market and show some of the devices and algorithms designed by Microsoft Research. Speaker(s): Dr. Ivan Tashev, Virtual: https://events.vtools.ieee.org/m/423252
The success of deep learning (DL) and convolutional neural networks (CNNs) has also highlighted that NN-based analysis of signals and images of large sizes poses a considerable challenge, as the number of NN weights increases exponentially with data volume – the so-called Curse of Dimensionality. In addition, the largely ad hoc fashion of their development, albeit one reason for their rapid success, has also brought to light the intrinsic limitations of CNNs, in particular those related to their black-box nature. To this end, we revisit the operation of CNNs from first principles and show that their key component – the convolutional layer – effectively performs matched filtering of its inputs with a set of templates (filters, kernels) of interest. This serves as a vehicle to establish a compact matched-filtering perspective of the whole convolution-activation-pooling chain, which allows for a theoretically well-founded and physically meaningful insight into the overall operation of CNNs. This is shown to help mitigate their interpretability and explainability issues, together with providing intuition for further developments and novel, physically meaningful ways of their initialisation. This approach is next extended to graph CNNs (GCNNs), which benefit from the universal function approximation property of NNs, the pattern matching inherent to CNNs, and the ability of graphs to operate on nonlinear domains. GCNNs are revisited starting from the notion of a system on a graph, which serves to establish a matched-filtering interpretation of the whole convolution-activation-pooling chain within GCNNs, while inheriting the rigour and intuition of signal detection theory. This both sheds new light on the otherwise black-box approach to GCNNs and provides a well-motivated and physically meaningful interpretation at every step of the operation and adaptation of GCNNs.
It is our hope that the incorporation of domain knowledge, which is central to this approach, will help demystify CNNs and GCNNs, together with establishing a common language between the diverse communities working on deep learning and opening novel avenues for their further development. Speaker(s): Dr. Danilo P. Mandic, Virtual: https://events.vtools.ieee.org/m/423821
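The matched-filtering view of the convolutional layer described in this abstract can be demonstrated in a few lines. In this illustrative sketch (the signal, template, and noise level are made up, not taken from the talk), a stride-1 convolutional layer whose kernel is set to a known template produces its largest output exactly where the template occurs in the input, which is precisely the behaviour of a matched filter:

```python
import numpy as np

def conv_layer_1d(x, w):
    """A stride-1, no-padding 1-D convolutional layer as used in CNNs:
    it computes the sliding inner product (cross-correlation) of the
    input with the kernel."""
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

# A template buried in noise; using the template itself as the kernel
# turns the layer into a matched filter for that template.
rng = np.random.default_rng(1)
template = np.array([1.0, -2.0, 3.0, -2.0, 1.0])
x = 0.1 * rng.standard_normal(100)
x[40:45] += template          # plant the template at position 40
out = conv_layer_1d(x, template)
peak = int(np.argmax(out))    # the layer's strongest response
```

The strongest response occurs at the planted location, illustrating the claim that the convolution step performs template matching; activation and pooling then act as a detector on the matched-filter output.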
Peer-Reviewed Research Presentation for Academia-Industry Collaboration and Membership Development. Co-sponsored by: IEEE University Program Grant of IEEE SB IIT Kharagpur, IEEE AP-MTTS SBC IIT Kharagpur, and IEEE WiE AG Kharagpur Section (R10). Bldg: Nalanda Complex, IEEE Kharagpur Section Office, IIT Kharagpur, Kharagpur, West Bengal, India, 721302
With the emergence of advanced machine learning techniques and large-scale datasets, holistic analysis of realistic soundscapes is becoming more and more appealing. For everyday soundscapes this can mean not only recognizing which sounds are present in an acoustic scene, but also where they are located and when they occur. This talk will discuss the task of joint detection and localization of sound events, which addresses the above problem. State-of-the-art methods typically use spectral representations and deep neural networks based on convolutional, recurrent, and attention layers that share many similarities with neighboring fields. However, the task also has several unique challenges that require specific solutions. We will give an overview of the task setup for training machine learning models, acoustic features for representing multichannel signals, topologies of deep neural networks, and loss functions for training systems. Since the performance of these methods depends heavily on the training data used, we will also discuss datasets that can be used for the development of methods, and their preparation. We will also discuss the recent DCASE evaluation campaign tasks that addressed the problem of joint detection and localization of sound events. Speaker(s): Dr. Tuomas Virtanen, Virtual: https://events.vtools.ieee.org/m/424604
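As one concrete example of an acoustic feature for representing multichannel signals, the sketch below implements GCC-PHAT, a classic inter-channel spatial cue used in sound source localization. This is an illustrative sketch, not a method from the talk; the two-channel test signal, its length, and the 5-sample delay are made up:

```python
import numpy as np

def gcc_phat(x1, x2, n_fft=1024):
    """GCC-PHAT: estimate the inter-channel time delay (in samples)
    between two microphone signals. A positive result means x2 lags x1.
    The phase transform whitens the cross-spectrum so the correlation
    peak depends only on the delay, not on the source spectrum."""
    X1 = np.fft.rfft(x1, n_fft)
    X2 = np.fft.rfft(x2, n_fft)
    cross = X2 * np.conj(X1)
    cross /= np.maximum(np.abs(cross), 1e-12)      # PHAT weighting
    cc = np.fft.irfft(cross, n_fft)
    cc = np.concatenate((cc[-n_fft // 2:], cc[:n_fft // 2]))  # centre lag 0
    return int(np.argmax(cc)) - n_fft // 2

# Synthetic two-microphone recording: channel 2 lags by 5 samples
rng = np.random.default_rng(2)
s = rng.standard_normal(512)
x1 = np.zeros(600); x1[:512] = s
x2 = np.zeros(600); x2[5:5 + 512] = s
est = gcc_phat(x1, x2)
```

In localization systems such delay estimates (or, more commonly, the raw inter-channel phase features they are derived from) are fed alongside spectral features into the neural network, which maps them to source directions.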
Radar offers some unique capabilities compared to other sensing phenomenologies. For example, radar can operate at long ranges, during day and night, and in most weather conditions. Synthetic aperture radar (SAR) enables the formation of 2D and 3D images of ground scenes for a wide array of military and commercial applications. In this talk, Dr. Linda Moore will discuss current challenges in SAR signal processing, including the challenge of applying machine/deep learning techniques to SAR automatic target recognition (ATR). Measured and synthetic SAR data have been made publicly available by the U.S. Air Force Research Laboratory and can assist in developing new techniques for today's SAR signal processing challenges. Available datasets will be associated with relevant technical challenges, and examples of related IEEE published work will be highlighted. Speaker(s): Dr. Linda Moore, Virtual: https://events.vtools.ieee.org/m/424936
Prof. Qiaowei Yuan, a distinguished full professor at the Tohoku Institute of Technology in Japan, will deliver a DML offered by IEEE MTT-S on 26th June 2024, organized by IEEE AP-MTTS SBC IIT Kharagpur. Speaker(s): Prof. Qiaowei Yuan, Virtual: https://events.vtools.ieee.org/m/424853
IEEE AP-MTTS SBC IIT Kharagpur is organizing a one-day field trip to SAMEER Kolkata on 29th June. The trip will be made by AC bus/traveller, covering the journey from IIT-KGP to the SAMEER campus in Kolkata and back. Participating members will also be provided with IEEE AP-MTTS SBC IIT-KGP T-shirts. SAMEER Kolkata, the Society for Applied Microwave Electronics Engineering and Research, is a premier institute dedicated to advancing research in the RF-microwave domain. It plays a crucial role in developing innovative technologies and solutions for wireless communication systems, radar, and satellite communication. By fostering collaboration between academia and industry, SAMEER Kolkata contributes significantly to the growth and modernization of RF and microwave engineering in India. Bldg: SAMEER Kolkata Centre, Plot-L2, Block-GP, Sector-V, Salt Lake Electronics Complex, Kolkata, West Bengal, India, 700091