No Training Required: Exploring Random Encoders for Sentence Classification (paper notes)

Original link: https://arxiv.org/abs/1901.10444

Published: ICLR 2019

--------------------------------------------------------------------------------------------------------------------------------------------------------------------

The paper introduces three kinds of random sentence encoders (RANDOM SENTENCE ENCODERS):

  1. Bag of random embedding projections
  2. Random LSTMs
  3. Echo State Networks

Idea: use pretrained word embeddings as input, pass them through a sentence encoder that is never trained (i.e., left at its random initialization), pool the outputs into a fixed-size sentence vector, and train only a logistic regression classifier on top.


BAG OF RANDOM EMBEDDING PROJECTIONS (BOREP)

Randomly initialize a projection matrix $W \in \mathbb{R}^{d \times d_e}$, where $d_e$ is the word-embedding dimension and $d$ is the projection dimension. Each element is initialized uniformly:

$$W_{ij} \sim U\!\left[-\tfrac{1}{\sqrt{d}},\ \tfrac{1}{\sqrt{d}}\right]$$

The sentence representation is then obtained by projecting each word embedding $e_i$ and pooling:

$$h = f_{pool}(W e_1,\ W e_2,\ \ldots,\ W e_n)$$

Here $f_{pool}$ is a pooling function, either max pooling or mean pooling, optionally followed by a nonlinearity such as $\mathrm{ReLU}(h) = \max(0, h)$.
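A minimal numpy sketch of BOREP under the definitions above (the dimensions and function names are illustrative, and the word embeddings are stand-in random vectors rather than real pretrained ones):

```python
import numpy as np

def borep_encode(embeddings, W, pool="mean", relu=True):
    """Project each word embedding with a fixed random W, then pool.

    embeddings: (n_words, d_e) array of (pretrained) word vectors.
    W: (d, d_e) random projection matrix, never trained.
    """
    h = embeddings @ W.T            # (n_words, d) random projections
    if relu:
        h = np.maximum(0.0, h)      # ReLU(h) = max(0, h)
    return h.max(axis=0) if pool == "max" else h.mean(axis=0)

d_e, d = 300, 4096
rng = np.random.default_rng(0)
# Each element of W drawn from U[-1/sqrt(d), 1/sqrt(d)], then frozen.
W = rng.uniform(-1 / np.sqrt(d), 1 / np.sqrt(d), size=(d, d_e))

sentence = rng.standard_normal((7, d_e))   # stand-in for 7 word embeddings
rep = borep_encode(sentence, W)
print(rep.shape)                           # (4096,)
```

Only the classifier that consumes `rep` would be trained; `W` stays fixed.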

 

RANDOM LSTMS

Similarly, all LSTM weight matrices are randomly initialized from $U[-\tfrac{1}{\sqrt{d}}, \tfrac{1}{\sqrt{d}}]$, where $d$ is the LSTM hidden size, and never trained. The sentence representation is obtained by pooling the hidden states of a bidirectional LSTM:

$$h = f_{pool}\big([\overrightarrow{h}_1; \overleftarrow{h}_1],\ \ldots,\ [\overrightarrow{h}_n; \overleftarrow{h}_n]\big)$$
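A sketch of the random-BiLSTM encoder in plain numpy (a hand-rolled LSTM cell for clarity; in practice one would freeze a framework LSTM instead, and all names and sizes here are illustrative):

```python
import numpy as np

def random_lstm_states(E, d, rng):
    """Run an LSTM with fixed random weights over embeddings E (n, d_e).

    All weights are drawn from U[-1/sqrt(d), 1/sqrt(d)] and never trained;
    returns the sequence of hidden states, shape (n, d).
    """
    d_e = E.shape[1]
    u = 1 / np.sqrt(d)
    # Stacked weights for the input, forget, output, and candidate gates.
    Wx = rng.uniform(-u, u, size=(4 * d, d_e))
    Wh = rng.uniform(-u, u, size=(4 * d, d))
    b = rng.uniform(-u, u, size=4 * d)
    sig = lambda z: 1 / (1 + np.exp(-z))
    h, c, out = np.zeros(d), np.zeros(d), []
    for e in E:
        g = Wx @ e + Wh @ h + b
        i, f, o = sig(g[:d]), sig(g[d:2*d]), sig(g[2*d:3*d])
        c = f * c + i * np.tanh(g[3*d:])
        h = o * np.tanh(c)
        out.append(h)
    return np.array(out)

rng = np.random.default_rng(0)
E = rng.standard_normal((9, 300))                  # stand-in word embeddings
fwd = random_lstm_states(E, 512, rng)              # forward direction
bwd = random_lstm_states(E[::-1], 512, rng)[::-1]  # backward direction
rep = np.concatenate([fwd, bwd], axis=1).mean(axis=0)  # mean pooling
print(rep.shape)                                   # (1024,)
```

The forward and backward passes use separate random weights, as in a standard BiLSTM; concatenating their states doubles the representation size.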

ECHO STATE NETWORKS

 

The (leaky-integrator) ESN can be expressed in the following form:

$$\tilde{x}_t = f(W^i e_t + W^h x_{t-1} + b^i)$$
$$x_t = (1 - \alpha)\, x_{t-1} + \alpha\, \tilde{x}_t$$
$$h_t = [e_t; x_t]\, W^o + b^o$$

The reservoir parameters $W^i$, $W^h$, and $b^i$ are random and fixed; only the output parameters $W^o$ and $b^o$ are trained.

Here, too, a bidirectional ESN is used, and the final sentence representation is obtained by pooling the concatenated forward and backward states.
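A sketch of the ESN reservoir update in numpy (the weight scale, leak rate, and spectral-radius rescaling are illustrative choices, not the paper's exact hyperparameters; the trained output projection $W^o$, $b^o$ is omitted):

```python
import numpy as np

def esn_states(E, d, alpha, rng):
    """Echo State Network reservoir states for embeddings E (n, d_e).

    W_i, W_h, b_i are random and fixed; alpha is the leak rate.
    Returns [e_t; x_t] for each step, which h_t would project from.
    """
    d_e = E.shape[1]
    Wi = rng.uniform(-0.1, 0.1, size=(d, d_e))  # input weights (scale illustrative)
    Wh = rng.uniform(-0.1, 0.1, size=(d, d))    # recurrent reservoir weights
    # Rescale Wh so its spectral radius is < 1 (echo state property).
    Wh *= 0.9 / np.abs(np.linalg.eigvals(Wh)).max()
    bi = rng.uniform(-0.1, 0.1, size=d)
    x, states = np.zeros(d), []
    for e in E:
        x_tilde = np.tanh(Wi @ e + Wh @ x + bi)  # candidate state
        x = (1 - alpha) * x + alpha * x_tilde    # leaky integration
        states.append(np.concatenate([e, x]))    # [e_t; x_t]
    return np.array(states)

rng = np.random.default_rng(0)
E = rng.standard_normal((6, 50))        # stand-in embeddings for 6 words
S = esn_states(E, d=200, alpha=0.5, rng=rng)
print(S.shape)                          # (6, 250): d_e + d per step
```

Keeping the recurrent matrix's spectral radius below 1 makes the reservoir dynamics fade rather than explode, which is what lets the random recurrence encode useful context without any training.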

For the experimental results, see the authors' conclusions in the paper: the headline finding is that these untrained random encoders are surprisingly strong baselines, often close to fully trained sentence encoders.

 


Origin www.cnblogs.com/skykill/p/11582750.html