StaticRNN
class paddle.static.nn.StaticRNN(name=None)

Api_attr: Static Graph
StaticRNN class. The StaticRNN can process a batch of sequence data. The first dimension of the input represents sequence length, and the length of each input sequence must be equal. StaticRNN will unfold the sequence into time steps; the user defines how to process each time step inside the with rnn.step() block.

Parameters:
- name (str, optional) – Please refer to Name. Default: None.
Examples:

    import paddle.fluid as fluid
    import paddle.fluid.layers as layers

    vocab_size, hidden_size = 10000, 200
    x = fluid.data(name="x", shape=[None, 1, 1], dtype='int64')
    # create word sequence
    x_emb = layers.embedding(
        input=x,
        size=[vocab_size, hidden_size],
        dtype='float32',
        is_sparse=False)
    # transform batch size to dim 1
    x_emb = layers.transpose(x_emb, perm=[1, 0, 2])

    rnn = fluid.layers.StaticRNN()
    with rnn.step():
        # mark created x_emb as input, each step processes a word
        word = rnn.step_input(x_emb)
        # create prev memory parameter, batch size comes from word
        prev = rnn.memory(shape=[-1, hidden_size], batch_ref=word)
        hidden = fluid.layers.fc(input=[word, prev], size=hidden_size, act='relu')
        # use hidden to update prev
        rnn.update_memory(prev, hidden)
        # mark hidden as output
        rnn.step_output(hidden)
    # get StaticRNN final output
    result = rnn()
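As a usage note not in the original entry, the following is a minimal sketch of how the program built above could be run, assuming the standard fluid static-graph Executor workflow; the batch of word ids is purely illustrative.

    import numpy as np

    # continue from the example above: run the startup program once, then
    # feed a [batch_size, seq_len, 1] int64 tensor of word ids
    exe = fluid.Executor(fluid.CPUPlace())
    exe.run(fluid.default_startup_program())
    word_ids = np.zeros((3, 1, 1), dtype='int64')   # illustrative batch of 3
    out, = exe.run(feed={'x': word_ids}, fetch_list=[result])
    # out is expected to have shape [seq_len, batch, hidden_size] == [1, 3, 200]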
            
step()
Define operators in each step. step is used in a with block, and the operators defined inside the with block will be executed sequence_len times, where sequence_len is the length of the input sequence.
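To make the unfolding concrete, here is a minimal sketch (assuming the same fluid.layers API used in the examples on this page, with hypothetical names and sizes): with a step input whose first dimension is 3, the operators in the with block are run three times.

    import paddle.fluid as fluid

    # hypothetical input: sequence length 3, variable batch size, 16 features
    x = fluid.data(name="seq", shape=[3, None, 16], dtype='float32')
    rnn = fluid.layers.StaticRNN()
    with rnn.step():
        t = rnn.step_input(x)                  # one time step, shape [batch, 16]
        mem = rnn.memory(shape=[-1, 16], batch_ref=t)
        h = fluid.layers.fc(input=[t, mem], size=16, act='tanh')
        rnn.update_memory(mem, h)              # h becomes mem at the next step
        rnn.step_output(h)
    out = rnn()                                # stacked outputs, shape [3, batch, 16]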
memory(init=None, shape=None, batch_ref=None, init_value=0.0, init_batch_dim_idx=0, ref_batch_dim_idx=1)
Create a memory variable for the static RNN. If init is not None, the memory will be initialized by this Variable. If init is None, shape and batch_ref must be set, and this function will create a new variable with that shape and batch_ref to initialize the memory.

Parameters:
- init (Variable, optional) – Tensor used to initialize the memory. If it is not set, shape and batch_ref must be provided. Default: None.
- shape (list|tuple, optional) – When init is None, use this arg to initialize the memory shape. Default: None.
- batch_ref (Variable, optional) – When init is None, the memory's batch size will be set to batch_ref's ref_batch_dim_idx value. Default: None.
- init_value (float, optional) – When init is None, used to initialize the memory's value. Default: 0.0.
- init_batch_dim_idx (int, optional) – The batch_size axis of the init Variable. Default: 0.
- ref_batch_dim_idx (int, optional) – The batch_size axis of the batch_ref Variable. Default: 1.
 
Returns: The memory variable.

Return type: Variable
Examples 1:

    import paddle.fluid as fluid
    import paddle.fluid.layers as layers

    vocab_size, hidden_size = 10000, 200
    x = fluid.data(name="x", shape=[None, 1, 1], dtype='int64')
    # create word sequence
    x_emb = layers.embedding(
        input=x,
        size=[vocab_size, hidden_size],
        dtype='float32',
        is_sparse=False)
    # transform batch size to dim 1
    x_emb = layers.transpose(x_emb, perm=[1, 0, 2])

    rnn = fluid.layers.StaticRNN()
    with rnn.step():
        # mark created x_emb as input, each step processes a word
        word = rnn.step_input(x_emb)
        # create prev memory parameter, batch size comes from word
        prev = rnn.memory(shape=[-1, hidden_size], batch_ref=word)
        hidden = fluid.layers.fc(input=[word, prev], size=hidden_size, act='relu')
        # use hidden to update prev
        rnn.update_memory(prev, hidden)
Examples 2:

    import paddle.fluid as fluid
    import paddle.fluid.layers as layers

    vocab_size, hidden_size = 10000, 200
    x = fluid.data(name="x", shape=[None, 1, 1], dtype='int64')
    # create word sequence
    x_emb = layers.embedding(
        input=x,
        size=[vocab_size, hidden_size],
        dtype='float32',
        is_sparse=False)
    # transform batch size to dim 1
    x_emb = layers.transpose(x_emb, perm=[1, 0, 2])
    boot_memory = fluid.layers.data(name='boot', shape=[hidden_size], dtype='float32', lod_level=1)

    rnn = fluid.layers.StaticRNN()
    with rnn.step():
        # mark created x_emb as input, each step processes a word
        word = rnn.step_input(x_emb)
        # init memory
        prev = rnn.memory(init=boot_memory)
        hidden = fluid.layers.fc(input=[word, prev], size=hidden_size, act='relu')
        # update hidden with prev
        rnn.update_memory(prev, hidden)
 
step_input(x)
Mark a sequence as a StaticRNN input.

Parameters:
- x (Variable) – The input sequence; the shape of x should be [seq_len, …].
Returns: The current time step data in the input sequence.

Return type: Variable
Examples:

    import paddle.fluid as fluid
    import paddle.fluid.layers as layers

    vocab_size, hidden_size = 10000, 200
    x = fluid.data(name="x", shape=[None, 1, 1], dtype='int64')
    # create word sequence
    x_emb = layers.embedding(
        input=x,
        size=[vocab_size, hidden_size],
        dtype='float32',
        is_sparse=False)
    # transform batch size to dim 1
    x_emb = layers.transpose(x_emb, perm=[1, 0, 2])

    rnn = fluid.layers.StaticRNN()
    with rnn.step():
        # mark created x_emb as input, each step processes a word
        word = rnn.step_input(x_emb)
        # create prev memory parameter, batch size comes from word
        prev = rnn.memory(shape=[-1, hidden_size], batch_ref=word)
        hidden = fluid.layers.fc(input=[word, prev], size=hidden_size, act='relu')
        # use hidden to update prev
        rnn.update_memory(prev, hidden)
step_output(o)
Mark a sequence as a StaticRNN output.

Parameters:
- o (Variable) – The output sequence.
Returns: None.
Examples:

    import paddle.fluid as fluid
    import paddle.fluid.layers as layers

    vocab_size, hidden_size = 10000, 200
    x = fluid.data(name="x", shape=[None, 1, 1], dtype='int64')
    # create word sequence
    x_emb = layers.embedding(
        input=x,
        size=[vocab_size, hidden_size],
        dtype='float32',
        is_sparse=False)
    # transform batch size to dim 1
    x_emb = layers.transpose(x_emb, perm=[1, 0, 2])

    rnn = fluid.layers.StaticRNN()
    with rnn.step():
        # mark created x_emb as input, each step processes a word
        word = rnn.step_input(x_emb)
        # create prev memory parameter, batch size comes from word
        prev = rnn.memory(shape=[-1, hidden_size], batch_ref=word)
        hidden = fluid.layers.fc(input=[word, prev], size=hidden_size, act='relu')
        # use hidden to update prev
        rnn.update_memory(prev, hidden)
        rnn.step_output(hidden)
    result = rnn()
output(*outputs)
Mark the StaticRNN output variables.

Parameters:
- outputs – The output Tensors; multiple variables can be marked as outputs.
Returns: None
Examples:

    import paddle.fluid as fluid
    import paddle.fluid.layers as layers

    vocab_size, hidden_size = 10000, 200
    x = fluid.data(name="x", shape=[None, 1, 1], dtype='int64')
    # create word sequence
    x_emb = layers.embedding(
        input=x,
        size=[vocab_size, hidden_size],
        dtype='float32',
        is_sparse=False)
    # transform batch size to dim 1
    x_emb = layers.transpose(x_emb, perm=[1, 0, 2])

    rnn = fluid.layers.StaticRNN()
    with rnn.step():
        # mark created x_emb as input, each step processes a word
        word = rnn.step_input(x_emb)
        # create prev memory parameter, batch size comes from word
        prev = rnn.memory(shape=[-1, hidden_size], batch_ref=word)
        hidden = fluid.layers.fc(input=[word, prev], size=hidden_size, act='relu')
        # use hidden to update prev
        rnn.update_memory(prev, hidden)
        # mark each step's hidden and word as output
        rnn.output(hidden, word)
    result = rnn()
update_memory(mem, var)
Update the memory from mem to var.

Parameters:
- mem (Variable) – The memory variable.
- var (Variable) – The plain variable generated in the RNN block, used to update the memory. var and mem should have the same dims and data type.
 
Returns: None
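The original entry gives no example for update_memory. The following is a minimal sketch in the same fluid.layers style as the examples above, with hypothetical names and sizes, showing that update_memory is called inside the with rnn.step() block after the new value has been computed.

    import paddle.fluid as fluid

    hidden_size = 200
    # hypothetical input: sequence length 5, variable batch, hidden_size features
    x = fluid.data(name="x_seq", shape=[5, None, hidden_size], dtype='float32')
    rnn = fluid.layers.StaticRNN()
    with rnn.step():
        step_in = rnn.step_input(x)
        prev = rnn.memory(shape=[-1, hidden_size], batch_ref=step_in)
        hidden = fluid.layers.fc(input=[step_in, prev], size=hidden_size, act='relu')
        # hidden and prev have the same dims and dtype, as update_memory requires;
        # hidden overwrites the value that prev holds at the next time step
        rnn.update_memory(prev, hidden)
        rnn.step_output(hidden)
    result = rnn()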
 
 
