Many-to-many classification with Keras LSTM
Become part of the top 3% of the developers by applying to Toptal https://topt.al/25cXVn
--
Music by Eric Matyas
https://www.soundimage.org
Track title: Drifting Through My Dreams
--
Chapters
00:00 Question
01:31 Accepted answer (Score 5)
03:05 Thank you
--
Full question
https://stackoverflow.com/questions/5445...
Question links:
[This answer]: https://stackoverflow.com/a/43047615/462...
Accepted answer links:
[This]: https://stackoverflow.com/questions/5413...
--
Content licensed under CC BY-SA
https://meta.stackexchange.com/help/lice...
--
Tags
#python #tensorflow #machinelearning #keras #lstm
#avk47
--
ACCEPTED ANSWER
Score 5
There can be many approaches to this; I will describe the one that best fits your problem.
If you want to stack two LSTM layers, set return_sequences=True on the first layer so that the second LSTM layer receives the full output sequence to learn from, as in the following example.
from keras.layers import Dense, LSTM
from keras import Input, Model

seq_length = 15    # timesteps per sample
input_dims = 10    # features per timestep
output_dims = 8    # number of classes
n_hidden = 10      # LSTM units

model1_inputs = Input(shape=(seq_length, input_dims))
# return_sequences=True makes the first LSTM emit its hidden state at
# every timestep, so the second LSTM receives a full input sequence
net1 = LSTM(n_hidden, return_sequences=True)(model1_inputs)
# return_sequences=False keeps only the final hidden state
net1 = LSTM(n_hidden, return_sequences=False)(net1)
# softmax turns the output into a probability over the 8 classes
net1 = Dense(output_dims, activation='softmax')(net1)
model1 = Model(inputs=model1_inputs, outputs=net1, name='model1')

## Inspect the architecture
model1.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 15, 10)            0
_________________________________________________________________
lstm_1 (LSTM)                (None, 15, 10)            840
_________________________________________________________________
lstm_2 (LSTM)                (None, 10)                840
_________________________________________________________________
dense_3 (Dense)              (None, 8)                 88
_________________________________________________________________
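To actually train the model, compile it with a loss that matches integer class labels and call fit. Here is a minimal sketch using randomly generated placeholder data (x_train and y_train are hypothetical names, not from the question):

import numpy as np

# Hypothetical data: 100 sequences, each with one integer class label
x_train = np.random.random((100, seq_length, input_dims))
y_train = np.random.randint(output_dims, size=(100,))

model1.compile(optimizer='adam',
               loss='sparse_categorical_crossentropy',
               metrics=['accuracy'])
model1.fit(x_train, y_train, epochs=5, batch_size=16)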
- Another option is to use the complete returned sequence as the features for the next layer. In that case, flatten it into a simple Dense layer whose input will be [batch, seq_len*lstm_output_dims]; see the sketch after the note below.
Note: these features can be useful for a classification task, but usually we stack LSTM layers and use only the last output (without the complete sequence) as features for the classification layer.
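Here is a minimal sketch of that option, reusing the dimensions from above (model2 is a hypothetical name, and Flatten collapses the (seq_length, n_hidden) sequence into one feature vector):

from keras.layers import Dense, Flatten, LSTM
from keras import Input, Model

model2_inputs = Input(shape=(seq_length, input_dims))
# Keep the full output sequence: shape (batch, seq_length, n_hidden)
net2 = LSTM(n_hidden, return_sequences=True)(model2_inputs)
# Flatten to (batch, seq_length * n_hidden) for the Dense classifier
net2 = Flatten()(net2)
net2 = Dense(output_dims, activation='softmax')(net2)
model2 = Model(inputs=model2_inputs, outputs=net2, name='model2')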
This answer may be helpful for understanding other LSTM architectures built for different purposes.