ML Roundtable
Next meeting:
Date: 2017-11-01
Room: ELD 329, 17:00
Topic: The effects of depth on DNNs: gradient propagation, expressiveness, error propagation
Papers discussed (non-exhaustive):
"When neurons fail", NIPS 2017, by EPFLs own El Mhadi El Mhamdi:https://arxiv.org/abs/1706.08884
"The power of depth for deep neural networks",http://proceedings.mlr.press/v49/eldan16.pdf (see also a not-published preprint "On the Expressive Power of Deep Neural Networks" https://arxiv.org/pdf/1606.05336)
"Deep information propagation", https://arxiv.org/pdf/1611.01232 ( ICLR2017)
Highlights and links for previous meetings:
2017-10-18:
- a short layman's presentation on the information bottleneck principle by me, as well as a presentation of the coding challenges and failure tolerance in neuromorphic SNN design, the difficulty of training hardware SNNs, and the energy savings that are achievable (https://arxiv.org/pdf/1603.08270.pdf)
- an insight into the challenges of 3D MRI processing: dealing with 3D features, avoiding overfitting to spurious features like sensor noise, and the question of why adding Gaussian noise to MRI data improved training
- the use of neural networks and hand-crafted features in computer vision
- a taste of the basics of the robustness of neural networks against individual neuron failures by El Mahdi El Mhamdi (https://arxiv.org/pdf/1706.08884.pdf), as well as a sneak peek at his upcoming NIPS 2017 paper on robustness against malicious gradients
- related to that, a short discussion of the applicability of RANSAC (random sample consensus) to the problem of detecting malicious gradients
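To make the RANSAC discussion concrete, here is a minimal sketch of what a RANSAC-style aggregation of worker gradients could look like. This is an illustration of the general idea only, not the method from the discussed paper; the function name, parameter values, and thresholds are all made up for the example.

```python
import numpy as np

def ransac_gradient_mean(grads, sample_size=3, n_iters=50, threshold=1.0, seed=0):
    """RANSAC-style aggregation (illustrative sketch, not the paper's method):
    repeatedly fit a candidate mean from a small random sample of gradients,
    keep the candidate with the largest consensus set, and average only the
    inliers, treating far-away gradients as potentially malicious outliers."""
    rng = np.random.default_rng(seed)
    grads = np.asarray(grads, dtype=float)
    best_inliers = None
    for _ in range(n_iters):
        # tentative "model": the mean of a small random sample of gradients
        idx = rng.choice(len(grads), size=sample_size, replace=False)
        candidate = grads[idx].mean(axis=0)
        # gradients close to the candidate form its consensus (inlier) set
        dists = np.linalg.norm(grads - candidate, axis=1)
        inliers = dists < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final estimate: average over the largest consensus set found
    return grads[best_inliers].mean(axis=0)

# toy example: 9 honest gradients near (1, 1) plus one malicious outlier
honest = np.ones((9, 2)) + 0.01 * np.random.default_rng(1).standard_normal((9, 2))
grads = np.vstack([honest, [[100.0, -100.0]]])
robust = ransac_gradient_mean(grads)  # close to (1, 1); a plain mean is not
```

The appeal raised in the discussion is exactly this: a plain average is dragged arbitrarily far by a single malicious gradient, while a consensus-based estimate ignores it as long as honest workers form the majority.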