Many-to-many classification with Keras LSTM
--------------------------------------------------
Hire the world's top talent on demand or become one of them at Toptal: https://topt.al/25cXVn
--------------------------------------------------
Music by Eric Matyas
https://www.soundimage.org
Track title: Industries in Orbit Looping
--
Chapters
00:00 Many-To-Many Classification With Keras LSTM
01:19 Accepted Answer Score 5
02:33 Thank you
--
Full question
https://stackoverflow.com/questions/5445...
--
Content licensed under CC BY-SA
https://meta.stackexchange.com/help/lice...
--
Tags
#python #tensorflow #machinelearning #keras #lstm
#avk47
ACCEPTED ANSWER
Score 5
There can be many approaches to this; I am describing one that can be a good fit for your problem.
If you want to stack two LSTM layers, then return_sequences=True on the first layer lets it pass its full output sequence to the next LSTM layer, as shown in the following example.
from keras.layers import Dense, LSTM
from keras import Input, Model

seq_length = 15
input_dims = 10
output_dims = 8   # number of classes
n_hidden = 10

model1_inputs = Input(shape=(seq_length, input_dims))
net1 = LSTM(n_hidden, return_sequences=True)(model1_inputs)   # emit the full sequence for the next LSTM
net1 = LSTM(n_hidden, return_sequences=False)(net1)           # keep only the last time step
model1_outputs = Dense(output_dims, activation='softmax')(net1)  # softmax for multi-class output

model1 = Model(inputs=model1_inputs, outputs=model1_outputs, name='model1')
model1.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)        (None, 15, 10)            0         
_________________________________________________________________
lstm_1 (LSTM)                (None, 15, 10)            840       
_________________________________________________________________
lstm_2 (LSTM)                (None, 10)                840       
_________________________________________________________________
dense_3 (Dense)              (None, 8)                 88        
_________________________________________________________________
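The answer's code stops at the summary. For completeness, here is a minimal sketch of compiling and fitting the model on random dummy data; the dummy arrays, optimizer, and training settings are illustrative assumptions, not part of the original answer.

import numpy as np

# Compile for multi-class classification (assumed setup, not from the answer).
model1.compile(optimizer='adam',
               loss='categorical_crossentropy',
               metrics=['accuracy'])

# Hypothetical dummy data matching the shapes defined above.
X = np.random.random((32, seq_length, input_dims))              # 32 random sequences
y = np.eye(output_dims)[np.random.randint(0, output_dims, 32)]  # one-hot class labels

model1.fit(X, y, epochs=2, batch_size=8)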
- Another option is to use the complete returned sequence as the features for the next layer: flatten it and feed a simple Dense layer whose input is [batch, seq_len * lstm_output_dims], as sketched below.

Note: These features can be useful for a classification task, but usually we stack LSTM layers and use only the final output, without the complete sequence, as features for the classification layer.
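A minimal sketch of that flatten-based option, assuming the same dimensions as the first example; the model2 name and the Flatten step are illustrative, not taken from the original answer.

from keras.layers import Dense, Flatten, LSTM
from keras import Input, Model

seq_length = 15
input_dims = 10
output_dims = 8   # number of classes
n_hidden = 10

model2_inputs = Input(shape=(seq_length, input_dims))
net2 = LSTM(n_hidden, return_sequences=True)(model2_inputs)  # (batch, seq_len, n_hidden)
net2 = Flatten()(net2)                                       # (batch, seq_len * n_hidden)
model2_outputs = Dense(output_dims, activation='softmax')(net2)

model2 = Model(inputs=model2_inputs, outputs=model2_outputs, name='model2')
model2.summary()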
This answer may be helpful for understanding other LSTM architectures designed for different purposes.