tensorflow - generator called at the wrong time (keras)


I use fit_generator() in Keras 2.0.2 with batch size 10 and 320 steps per epoch, because I have 3209 training samples. Before the first epoch begins, the generator is called 11 times, showing:

```
train -- ind: 0 10
...
train -- ind: 100 110
```

Then, after the first batch (1/320), it prints `train -- ind: 110 120`, but I think it should print `train -- ind: 0 10`. Is my implementation of the train_generator() function incorrect? Or why am I having this issue?

Here is the code of the generator:

```python
epoch = 10
x_train_img = img[:train_size]  # shape: (3209, 512, 512)
x_test_img = img[train_size:]   # shape: (357, 512, 512)

def train_generator():
    global x_train_img
    last_ind = 0
    while 1:
        x_train = x_train_img[last_ind:last_ind+batch_size]
        print('train -- ind: ', last_ind, " ", last_ind+batch_size)
        last_ind = last_ind + batch_size
        x_train = x_train.astype('float32') / 255.
        x_train = np.reshape(x_train, (len(x_train), 512, 512, 1))

        yield (x_train, x_train)
        if last_ind >= x_train_img.shape[0]:
            last_ind = 0

def test_generator():
    ...

train_steps = x_train_img.shape[0] // batch_size  # 320
test_steps = x_test_img.shape[0] // batch_size    # 35

autoencoder.fit_generator(train_generator(),
                          steps_per_epoch=train_steps,
                          epochs=epoch,
                          validation_data=test_generator(),
                          validation_steps=test_steps,
                          callbacks=[csv_logger])
```

A (possibly better?) way of writing the generator:

```python
def train_generator():
    global x_train_img
    while 1:
        for i in range(0, x_train_img.shape[0], batch_size):
            x_train = x_train_img[i:i+batch_size]
            print('train -- ind: ', i, " ", i+batch_size)
            x_train = x_train.astype('float32') / 255.
            x_train = np.reshape(x_train, (len(x_train), 512, 512, 1))

            yield (x_train, x_train)
```
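A minimal runnable sketch of this loop-based generator, assuming a tiny stand-in array of 25 samples of 4×4 "images" instead of the real 512×512 data (the names `batch_size` and `x_train_img` mirror the snippet above). Note that with a sample count that is not a multiple of the batch size, the last batch of each pass is shorter, after which the outer `while` loop wraps back to index 0:

```python
import numpy as np

batch_size = 10
# small stand-in for x_train_img: 25 samples of 4x4 "images"
x_train_img = np.random.randint(0, 256, size=(25, 4, 4))

def train_generator():
    while 1:  # loop forever; fit_generator stops pulling after steps_per_epoch
        for i in range(0, x_train_img.shape[0], batch_size):
            x_train = x_train_img[i:i+batch_size]
            x_train = x_train.astype('float32') / 255.
            x_train = np.reshape(x_train, (len(x_train), 4, 4, 1))
            yield (x_train, x_train)

gen = train_generator()
batches = [next(gen) for _ in range(4)]
# one pass yields batches of 10, 10, 5; then the generator wraps around to 10
print([b[0].shape[0] for b in batches])  # → [10, 10, 5, 10]
```

Because of that short final batch, `range(0, 3209, 10)` actually produces 321 batches per pass while `steps_per_epoch` is 320, so the indices drift by one batch each epoch unless the sample count divides evenly.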

By default, fit_generator() uses max_queue_size=10. So what you've observed is:

  1. Before the epoch starts, the generator yields 10 batches to fill the queue. Those are samples 0 through 100.
  2. Then the epoch starts, and 1 batch is popped from the queue for model fitting.
  3. The generator yields a new batch to fill the empty slot in the queue. That's samples 100 through 110.
  4. Then the progress bar is updated, and "1/320" is printed on screen.
  5. Steps 2 and 3 are executed again, so you get `ind: 110 120` printed.

So there's nothing wrong with the model fitting procedure. The first batch generated is indeed the first one used to fit the model. It's just that there's a queue hiding behind it, and the generator gets called several times to fill that queue before the first model update happens.
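The steps above can be sketched as a toy simulation in plain Python (no Keras involved; `batch_indices` and the deque are stand-ins for the real generator and the internal prefetch queue, and the constants are assumptions matching the question):

```python
from collections import deque

max_queue_size = 10   # Keras default for fit_generator
batch_size = 10
n_samples = 3209
steps = 3             # simulate just the first few of the 320 steps

def batch_indices():
    """Stand-in generator: prints and yields the start index of each batch."""
    ind = 0
    while True:
        print('train -- ind:', ind, ind + batch_size)
        yield ind
        ind += batch_size
        if ind >= n_samples:
            ind = 0

gen = batch_indices()
queue = deque()

# Step 1: before the epoch starts, the background worker fills the queue.
# This alone triggers 10 generator calls, printing ind 0..10 up to 90..100.
while len(queue) < max_queue_size:
    queue.append(next(gen))

trained = []
for step in range(1, steps + 1):
    batch = queue.popleft()   # step 2: the model trains on the OLDEST batch
    queue.append(next(gen))   # step 3: the worker refills the slot (one more print)
    trained.append(batch)     # step 4: only now does the progress bar advance
    print('%d/%d trained on ind: %d %d' % (step, steps, batch, batch + batch_size))
```

Running this shows the "off by a queue's worth" output from the question: the refill print (`train -- ind: 100 110`, then `110 120`, ...) always appears just before the progress line for a much older batch.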

