Tiven Wang
Wang Tiven July 30, 2018


  • More neurons
  • More complex connection patterns between layers
  • Explosive growth in the computing power available for training
  • Automatic feature extraction



  • Unsupervised Pretrained Networks
  • Convolutional Neural Networks
  • Recurrent Neural Networks
  • Recursive Neural Networks

Reinforcement Learning (RL)

AlphaGo Zero is trained by self-play reinforcement learning. It combines a neural network and Monte Carlo Tree Search in an elegant policy iteration framework to achieve stable learning.

Q-learning is a reinforcement learning technique used in machine learning. The goal of Q-learning is to learn a policy that tells an agent which action to take under which circumstances. It is model-free: it does not require a model of the environment, and it can handle problems with stochastic transitions and rewards without adaptation.
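The update rule behind Q-learning can be sketched on a toy problem. Below is a minimal tabular example on a hypothetical 1-D corridor environment (the environment, states, and hyperparameters are illustrative assumptions, not from any library): the agent learns, via the standard Q-learning update, that moving right leads to the reward.

```python
import random

# Hypothetical toy environment: a 1-D corridor of 5 states.
# The agent starts at state 0; action 0 moves left, action 1 moves right.
# Reaching state 4 yields reward 1 and ends the episode.
N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

random.seed(0)
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

for _ in range(500):  # episodes
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Q-learning update: bootstrap from the best action in the next state
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

greedy_policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(greedy_policy)  # in non-terminal states the learned policy moves right
```

Because the update bootstraps from `max(Q[next_state])` rather than the action actually taken, Q-learning is an off-policy method: it learns the greedy policy even while exploring.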



The ImageNet project is a large visual database designed for use in visual object recognition software research. Over 14 million URLs of images have been hand-annotated by ImageNet to indicate what objects are pictured; in at least one million of the images, bounding boxes are also provided.

Generative Model





Inceptionism: Going Deeper into Neural Networks


Modeling artistic style

Generative Adversarial Networks

  • https://github.com/carpedm20/DCGAN-tensorflow
  • https://www.oreilly.com/ideas/deep-convolutional-generative-adversarial-networks-with-tensorflow
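Before diving into the DCGAN code linked above, the adversarial objective itself can be illustrated numerically. In the sketch below, the discriminator and generator are hypothetical placeholder functions (not trained networks) so that the two loss terms of the GAN minimax game can be computed directly: the discriminator maximizes E[log D(x)] + E[log(1 − D(G(z)))], while the generator minimizes the second term (here in its common −E[log D(G(z))] form).

```python
import math
import random

def discriminator(x, w=1.0, b=0.0):
    # logistic score: "probability that x is real" (placeholder, untrained)
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def generator(z, shift=0.0):
    # placeholder generator: passes noise through unchanged
    return z + shift

random.seed(0)
real = [random.gauss(2.0, 0.5) for _ in range(100)]   # "real" data
noise = [random.gauss(0.0, 1.0) for _ in range(100)]  # latent noise z
fake = [generator(z) for z in noise]

# Discriminator loss: negative log-likelihood of classifying real vs. fake
d_loss = -(sum(math.log(discriminator(x)) for x in real) +
           sum(math.log(1.0 - discriminator(x)) for x in fake)) / len(real)

# Generator loss (non-saturating form): fool the discriminator
g_loss = -sum(math.log(discriminator(x)) for x in fake) / len(fake)

print(d_loss, g_loss)
```

In a real GAN, both functions are neural networks and these two losses are minimized alternately by gradient descent; the linked DCGAN repository does exactly that with convolutional networks in TensorFlow.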

Recurrent Neural Networks

Common Architectural Principles of Deep Networks


  • Parameters
  • Layers
  • Activation functions
  • Loss functions
  • Optimization methods
  • Hyperparameters

Building-block networks of deep networks:

  • RBMs
  • Autoencoders

Deep network architectures:

  • UPNs
  • CNNs
  • Recurrent neural networks
  • Recursive neural networks

Core Components

Activation functions

Commonly used activation functions for hidden layers include:

  • Sigmoid
  • Tanh
  • Hard tanh
  • Rectified linear unit (ReLU) (and its variants)
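The four functions listed above are simple scalar maps; a minimal sketch (hard tanh clips its input to [−1, 1]):

```python
import math

def sigmoid(x):
    # squashes input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # squashes input into (-1, 1)
    return math.tanh(x)

def hard_tanh(x):
    # piecewise-linear approximation of tanh: clip to [-1, 1]
    return max(-1.0, min(1.0, x))

def relu(x):
    # rectified linear unit: zero for negative inputs, identity otherwise
    return max(0.0, x)

for f in (sigmoid, tanh, hard_tanh, relu):
    print(f.__name__, [round(f(x), 3) for x in (-2.0, 0.0, 2.0)])
```

ReLU and its variants dominate in modern deep networks because they are cheap to compute and avoid the vanishing gradients that sigmoid and tanh suffer from in deep stacks.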


Building Blocks


A Restricted Boltzmann Machine (RBM) is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs.
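The "restricted" part means there are no connections within a layer, so each hidden unit is conditionally independent given the visible layer (and vice versa): P(h_j = 1 | v) = sigmoid(c_j + Σ_i v_i W_ij). The sketch below (random, untrained weights chosen purely for illustration) shows one Gibbs sampling step v → h → v′, the core operation of contrastive divergence training.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

n_visible, n_hidden = 4, 3
# Hypothetical untrained parameters: weight matrix and per-unit biases
W = [[random.gauss(0, 0.1) for _ in range(n_hidden)] for _ in range(n_visible)]
b = [0.0] * n_visible  # visible biases
c = [0.0] * n_hidden   # hidden biases

def sample_hidden(v):
    # P(h_j = 1 | v) = sigmoid(c_j + sum_i v_i * W[i][j])
    probs = [sigmoid(c[j] + sum(v[i] * W[i][j] for i in range(n_visible)))
             for j in range(n_hidden)]
    return [1 if random.random() < p else 0 for p in probs]

def sample_visible(h):
    # symmetric conditional for the visible layer
    probs = [sigmoid(b[i] + sum(h[j] * W[i][j] for j in range(n_hidden)))
             for i in range(n_visible)]
    return [1 if random.random() < p else 0 for p in probs]

# One Gibbs step: visible -> hidden -> reconstructed visible
v0 = [1, 0, 1, 0]
h0 = sample_hidden(v0)
v1 = sample_visible(h0)
print(v0, h0, v1)
```

Contrastive divergence then nudges W so that reconstructions like `v1` become statistically closer to the training data `v0`.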

Boltzmann machine


In machine learning, feature learning or representation learning is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task. Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data such as images, video, and sensor readings have resisted attempts to define specific features algorithmically. An alternative is to discover such features or representations by examining the data itself, without relying on explicit hand-crafted algorithms.


