All of us are probably aware of the term 'deep learning'. Deep learning, mainly in the form of deep neural network architectures, provides unprecedented performance in classification and prediction, and the excitement is evident at conferences and workshops. However, it also comes with serious challenges in understanding the material, and eventual frustration. In this seminar, I will discuss several issues in neural networks: why a neural network has a standard architecture; regularization and overfitting; why the convolutional neural network became so famous; techniques that are very difficult to understand (for example, dropout); alternative architectures (deep sparse representation, deep wavelet stacks); and a very simple network with very good performance that almost anybody can implement (the extreme learning machine), together with the frustration that still remains.
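To give a flavour of why the extreme learning machine mentioned above is considered easy to implement, here is a minimal sketch in Python with NumPy. It assumes the standard ELM recipe (a single hidden layer whose input weights are drawn at random and never trained, with only the output weights fitted by least squares); the function names and the toy regression task are illustrative, not taken from the speaker's work.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Fit an extreme learning machine: random fixed hidden layer,
    output weights solved in closed form by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None) # output weights via least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = sin(x) on [0, 2*pi]
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y, n_hidden=50)
y_hat = elm_predict(X, W, b, beta)
print(np.mean((y - y_hat) ** 2))  # training mean squared error
```

The only "training" is one linear least-squares solve, which is why the method is so simple to implement; the trade-offs this simplicity entails are among the topics of the seminar.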
Short Bio: Saikat Chatterjee is an assistant professor and docent in the Dept of Information Science and Engineering, KTH Royal Institute of Technology, Sweden. He received his Ph.D. degree from the Indian Institute of Science, India. He has published more than 100 papers in international journals and conferences, and was a co-author of the paper that won the best student paper award at ICASSP 2010. He is the chair of the EURASIP Special Area Team on Signal and Data Analytics for Machine Learning. His current research interests are signal processing, machine learning, data analytics, and speech and audio processing.