Biological Intelligence and Artificial Intelligence

Original article: https://hai.stanford.edu/news/the_intertwined_quest_for_understanding_biological_intelligence_and_creating_artificial_intelligence/

Saved as a blog post because my other note-taking tools were unavailable.

Reading notes:

1: unlock the mystery of the three pounds of matter that sits between our ears.

2: In essence humans, as products of evolution, sometimes yearn to play the role of creator.

3: Mutual collaboration between biological intelligence and artificial intelligence:

The Hopfield network, a model in theoretical neuroscience that provided a unified framework for thinking about distributed, content-addressable memory storage and retrieval, also inspired the Boltzmann machine, which in turn provided a key first step in demonstrating the success of deep neural network models and inspired the idea of distributed satisfaction of many weak constraints as a model of computation in AI.
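As an illustration (not code from the article), the content-addressable behavior described above can be sketched in a few lines of pure Python: a Hebbian outer-product rule stores binary patterns in a weight matrix, and asynchronous threshold updates recover a stored pattern from a corrupted cue.

```python
# Minimal Hopfield network sketch: store +1/-1 patterns via the Hebbian
# outer-product rule, then recover a stored pattern from a corrupted cue.

def train(patterns):
    n = len(patterns[0])
    # w[i][j] accumulates x[i]*x[j] over patterns; diagonal stays zero
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, sweeps=10):
    s = list(state)
    for _ in range(sweeps):
        for i in range(len(s)):  # asynchronous neuron updates
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

pattern = [1, -1, 1, -1, 1, -1]
w = train([pattern])
cue = [1, 1, 1, -1, 1, -1]   # one bit flipped
print(recall(w, cue))        # -> [1, -1, 1, -1, 1, -1]
```

The flipped bit is pulled back to the stored pattern because the stored pattern is an attractor of the update dynamics.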

Critical ingredients underlying deep convolutional networks currently dominating machine vision were directly inspired by the brain. These ingredients include hierarchical visual processing in the ventral stream, suggesting the importance of depth; the discovery of retinotopy as an organizing principle throughout visual cortex, leading to convolution; the discovery of simple and complex cells motivating operations like max pooling; and the discovery of neural normalization within cortex, which motivated various normalization stages in artificial networks.
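These ingredients can be caricatured in a minimal 1-D pure-Python sketch (purely illustrative, not from the article): weight-sharing convolution plays the role of retinotopy, a ReLU stage stands in for simple cells, and max pooling stands in for complex cells.

```python
# 1-D sketch of three brain-inspired CNN ingredients:
# convolution (shared weights), ReLU (simple-cell-like thresholding),
# and max pooling (complex-cell-like position tolerance).

def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def max_pool(xs, size=2):
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

signal = [0.0, 1.0, 2.0, 1.0, 0.0, -1.0, 0.0, 1.0]
edge_detector = [-1.0, 0.0, 1.0]   # responds to rising edges
features = max_pool(relu(conv1d(signal, edge_detector)))
print(features)                    # -> [2.0, 0.0, 2.0]
```

Stacking several such stages, with normalization between them, gives the depth the ventral-stream hierarchy suggests.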

The human attentional system inspired the incorporation of attentional neural networks that can be trained to dynamically attend to or ignore different aspects of their state and inputs when making future computational decisions.
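A minimal dot-product attention sketch (illustrative pure Python, not code from the article): a query scores each input, softmax turns scores into weights, and the output is a weighted sum, so low-scoring inputs are effectively ignored.

```python
import math

# Minimal dot-product attention: the query selects which value vectors
# to attend to via softmax-normalized similarity scores.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]

keys   = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attend([5.0, 0.0], keys, values)  # query matches the first key
print(out)  # heavily weighted toward [10.0, 0.0]
```

Because the weights come out of a differentiable softmax, where to attend can itself be learned by gradient descent.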

4:未来AI的生物学启发

Theoretical studies have shown that such synaptic complexity may indeed be essential to learning and memory. In fact, network models of memory in which synapses have a finite dynamic range require those synapses to be dynamical systems in their own right, with complex temporal filtering properties, in order to achieve reasonable network memory capacities. Moreover, more intelligent synapses have recently been explored in AI as a way to solve the catastrophic forgetting problem, in which a network trained to learn two tasks in sequence can only learn the second task, because learning the second task changes synaptic weights in such a way as to erase knowledge gained from learning the first task.
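A toy single-weight sketch of the problem and of one "intelligent synapse" style remedy (purely illustrative; the quadratic anchor below is in the spirit of elastic weight consolidation, not the article's specific method):

```python
# Catastrophic forgetting in one scalar weight: task 2 alone erases
# task 1's solution, while a quadratic penalty anchoring the weight to
# its post-task-1 value forces a compromise between the two tasks.

def train(w, target, anchor=None, strength=0.0, lr=0.1, steps=200):
    for _ in range(steps):
        grad = 2 * (w - target)                  # loss: (w - target)^2
        if anchor is not None:
            grad += 2 * strength * (w - anchor)  # keep w near old value
        w -= lr * grad
    return w

w1 = train(0.0, target=1.0)            # task 1 pulls w to ~ 1.0
plain = train(w1, target=-1.0)         # task 2 alone: w -> ~ -1.0 (forgets)
anchored = train(w1, target=-1.0, anchor=w1, strength=1.0)
print(round(plain, 2), round(anchored, 2))  # -> -1.0 0.0
```

The anchored run lands midway between the two task optima instead of discarding task 1 entirely; per-weight anchor strengths would let important synapses resist change more than unimportant ones.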

How can forgetting be solved effectively (this is not the same thing as contextual semantic dependencies in NLP)? In other words, is there a single method applicable to many problems of this kind? (That would overturn the "no free lunch" principle in deep learning.)

The goal of transfer learning is to generalize from experience, learning one thing and applying it to others: someone who has learned to ride a bicycle will pick up riding a motorcycle quickly.
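The bicycle-to-motorcycle analogy can be caricatured with a one-parameter sketch (purely illustrative): gradient descent initialized from a weight learned on a related task reaches the new target in fewer steps than starting from scratch.

```python
# Transfer-learning caricature: count gradient-descent steps needed to
# fit a target, starting either from scratch or from a related task's
# learned weight. A warm start converges in fewer steps.

def steps_to_fit(w, target, lr=0.1, tol=0.01):
    steps = 0
    while abs(w - target) > tol:
        w -= lr * 2 * (w - target)   # gradient of (w - target)^2
        steps += 1
    return steps

scratch = steps_to_fit(0.0, target=1.0)    # cold start
transfer = steps_to_fit(0.9, target=1.0)   # warm start from a related task
print(scratch, transfer)  # the warm start needs far fewer steps
```

Real transfer learning reuses whole learned feature hierarchies rather than a single weight, but the economics are the same: prior structure shortens the remaining search.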

Reinforcement learning: self-feedback and self-correction.
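A minimal tabular Q-learning sketch of this feedback-and-correction loop (illustrative; the corridor environment, rewards, and hyperparameters are all made up): the agent's own reward signal repeatedly corrects its value table until the greedy policy walks straight to the goal.

```python
import random

# Tabular Q-learning on a 5-cell corridor with the goal at the right end.
# Each step's reward feeds back into the value table (self-correction).

random.seed(0)
n_states, goal = 5, 4
q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right

for episode in range(500):
    s = 0
    while s != goal:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        if random.random() < 0.2:
            a = random.randrange(2)
        else:
            a = q[s].index(max(q[s]))
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        reward = 1.0 if s_next == goal else 0.0
        # temporal-difference correction toward reward + discounted future value
        q[s][a] += 0.5 * (reward + 0.9 * max(q[s_next]) - q[s][a])
        s = s_next

greedy = [q[s].index(max(q[s])) for s in range(goal)]
print(greedy)  # the learned greedy policy should always move right
```

No supervisor ever labels the correct action; the temporal-difference update is the "self-feedback, self-correction" loop in miniature.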

**************** This part still needs more deliberation and refinement.

5: Taking cues from the brain's system-level modular architecture

we currently lack any engineering design principles that can explain how a complex sensing, communication, control and memory network like the brain can continuously scale in size and complexity over 500 million years while never losing the ability to adaptively function in dynamic environments.

6: Professor Zhou Zhihua likewise pointed out in a talk that all of deep learning's progress and results so far have come from closed, static environments in which the data distribution, sample classes, sample attributes, and evaluation objective are held constant. In other words, today's deep learning boom rests on effective deep models, strong supervision signals, and relatively stable learning environments; the future of deep learning will be full of challenges.

Addendum:

Many thanks to all the pioneering experts for their diligent exploration and research, and many thanks as well to @爱可可-爱生活 on Weibo for sharing. Much appreciated!

Reposted from blog.csdn.net/u013823233/article/details/84942938