python 2.7 - Parallel Processing of (Stratified) K-fold Cross Validation of a Keras LSTM model


I know that parallel processing of stratified k-fold cross-validation of a Keras LSTM model should be possible in theory, but I am uncertain how to go about it. I want to use parallel processing to speed up computation time by processing multiple evaluation folds at a time.
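Concretely, the shape of the computation I am after looks like this (a minimal sketch, assuming sklearn's StratifiedKFold for the splits; train_and_eval here is a stand-in for the real per-fold routine):

import numpy as np
from sklearn.model_selection import StratifiedKFold
from pathos.multiprocessing import ProcessingPool

x = np.random.rand(100, 49)                  # dummy data for illustration
y = np.random.randint(0, 2, size=100)

def train_and_eval(fold):
    train_idx, test_idx = fold
    # build, fit, and evaluate a model on this fold; return its score
    return len(test_idx)

folds = list(StratifiedKFold(n_splits=10).split(x, y))
scores = ProcessingPool(nodes=4).map(train_and_eval, folds)  # folds in parallel
print(np.mean(scores))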

I am implementing k-fold cross-validation for a Keras model as seen in this GitHub issue comment, using pathos.multiprocessing.ProcessingPool.map. It works for a simple perceptron, but fails for my LSTM setup with the following error:

ValueError: Tensor("embedding_1/embeddings:0", shape=(3990, 300), dtype=float32_ref) must be from the same graph as Tensor("model_1/embedding_1/Cast:0", shape=(?, 49), dtype=int32).
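For context, I understand this to be TensorFlow 1.x complaining that a tensor created in one tf.Graph is being used inside a different graph. A toy reproduction of the same ValueError, independent of Keras:

import tensorflow as tf

g1 = tf.Graph()
with g1.as_default():
    a = tf.constant(1.0)

g2 = tf.Graph()
with g2.as_default():
    b = a + 1.0  # ValueError: Tensor(...) must be from the same graph as Tensor(...)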

I have tried using keras.backend.set_learning_phase but have had no luck wherever I have put it (setting it to 1 when doing the model fit and to 0 when evaluating in train_and_eval(), and moving the setting of 1 to before create_model() is called). I have also tried running the code under with tf.Graph().as_default():.
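For reference, my current understanding of the usual remedy for this class of error is to build the model entirely inside each worker, under that worker's own graph and session, so that nothing graph-bound crosses the process boundary. A hedged sketch of what train_and_eval() would look like under that constraint (create_model, x, y, and folds are assumptions standing in for the gist's actual names), though I may be misapplying it:

import tensorflow as tf
from keras import backend as K
from pathos.multiprocessing import ProcessingPool

def train_and_eval(fold):
    train_idx, test_idx = fold
    with tf.Graph().as_default():    # a fresh graph for this fold's process
        K.set_session(tf.Session())  # point Keras at this graph's session
        model = create_model()       # assumed factory; must run HERE,
                                     # not in the parent process
        model.fit(x[train_idx], y[train_idx], epochs=5, verbose=0)
        score = model.evaluate(x[test_idx], y[test_idx], verbose=0)
        K.clear_session()
    return score

scores = ProcessingPool(nodes=4).map(train_and_eval, folds)

The key constraint is that no Keras layer or model (including the embedding sub-models) may be created in the parent process and shipped to the workers; each fold has to construct everything inside its own graph.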


The code setup is located at the gist referenced, including the perceptron code. The embedding code is missing (it lives in a different file), so the gist cannot be run as-is.

Snippet of the code creating the LSTM Keras model:

from keras.layers import (Input, LSTM, Dense, Concatenate,
                          TimeDistributed, Flatten)
from keras.models import Model
from keras.optimizers import RMSprop


def lstm_hidden(x,
                units=hidden_nodes,
                time_step=time_step,
                sequences=FLAGS.sequences,
                identifier=""):
    """
    Easy function call for creating multiple LSTMs in a stack or sequence.
    """
    if not sequences:
        for i in range(0, time_step - 1):
            x = LSTM(units,
                     return_sequences=True,
                     stateful=FLAGS.stateful,
                     activation='elu',
                     name="lstm_hidden_" + identifier + "_" + str(i))(x)

        last = LSTM(units,
                    return_sequences=False,
                    stateful=FLAGS.stateful,
                    activation='elu',
                    name="lstm_hidden_" + identifier + "_" + str(time_step))
        x = last(x)

        print("x_lstm_last shape = ", x.get_shape().as_list())
        print("last.output_shape ", last.output_shape)
    else:
        for i in range(time_step):
            x = LSTM(units,
                     return_sequences=True,
                     stateful=FLAGS.stateful,
                     activation='elu',
                     name="lstm_hidden_" + identifier + "_" + str(i))(x)

    print("hidden lstm output = ", x.get_shape().as_list())
    return x


def lstm((input_shape, embed_models_tup)):  # Python 2 tuple-parameter unpacking
    """
    Basic LSTM model that receives 2 sentences, embeds their words,
    and learns the relation between them.
    """
    print("input_shape = ", input_shape, " type = ", type(input_shape))

    input1 = Input(shape=input_shape)
    input2 = Input(shape=input_shape)

    print("input1.shape = ", input1.get_shape().as_list())

    (embed_model1, embed_model2) = embed_models_tup

    emb1 = embed_model1(input1)
    emb2 = embed_model2(input2)

    print("\nemb1 shape = ", emb1.get_shape().as_list(), "\n")
    print("input_shape for lstm_hidden = ", emb1.get_shape().as_list())

    if FLAGS.bidir:
        # bilstm_hidden is defined elsewhere in the gist
        sent_emb1 = bilstm_hidden(emb1, input_shape[-1], input_shape[0],
                                  identifier="1")
        sent_emb2 = bilstm_hidden(emb2, input_shape[-1], input_shape[0],
                                  identifier="2")
    else:
        sent_emb1 = lstm_hidden(emb1, input_shape[-1], input_shape[0],
                                identifier="1")
        sent_emb2 = lstm_hidden(emb2, input_shape[-1], input_shape[0],
                                identifier="2")

    concat = Concatenate()
    combine = concat([sent_emb1, sent_emb2])
    print("concat output shape = ", concat.output_shape)

    if not FLAGS.sequences:
        dense = Dense(input_shape[0],
                      activation='elu',
                      kernel_initializer='he_normal')(combine)
    else:
        # TODO may need to use K.stack() on the 2 sent_embs
        # sent_embs = K.stack([sent_emb1, sent_emb2], axis=-1)
        dense = TimeDistributed(Dense(input_shape[0],
                                      activation='elu',
                                      kernel_initializer='he_normal'),
                                input_shape=input_shape)(combine)
        print("time_distributed dense output shape = ",
              dense.get_shape().as_list())
        dense = Flatten()(dense)

    predictions = Dense(1, activation='linear', name="single_dense")(dense)

    model = Model([input1, input2], predictions)
    opt = RMSprop(lr=FLAGS.learning_rate)
    model.compile(optimizer=opt,  # 'rmsprop'
                  loss='mean_squared_error',
                  metrics=['accuracy',
                           'mean_squared_error',
                           'mean_absolute_error',
                           'mean_absolute_percentage_error'])
    return model
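Since the error names embedding_1/embeddings, my best guess is that embed_model1 and embed_model2 are built once in the parent process and then reused while each fold builds its LSTM in a fresh graph. A hedged fix consistent with that reading would be to pass a factory instead of pre-built embedding models, so the Embedding layers are created inside the worker's graph. make_embed_models is hypothetical; the dimensions are taken from the error message above:

from keras.models import Model
from keras.layers import Input, Embedding

def make_embed_models(vocab_size=3990, embed_dim=300, seq_len=49):
    # hypothetical factory: builds both embedding sub-models in the
    # currently active graph (i.e. the worker's graph)
    inp1 = Input(shape=(seq_len,))
    inp2 = Input(shape=(seq_len,))
    emb = Embedding(vocab_size, embed_dim)  # one shared layer, one graph
    return (Model(inp1, emb(inp1)), Model(inp2, emb(inp2)))

def create_model(input_shape=(49,)):
    # called inside the worker, after "with tf.Graph().as_default():" is active
    return lstm((input_shape, make_embed_models()))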

