model.fit() explained in detail

fit(x, y, batch_size=32, epochs=10, verbose=1, callbacks=None,
    validation_split=0.0, validation_data=None, shuffle=True,
    class_weight=None, sample_weight=None, initial_epoch=0)
  • x: Input data. If the model has a single input, x is a numpy array; if the
    model has multiple inputs, x should be a list of numpy arrays, one per input.
  • y: Labels, as a numpy array.
  • batch_size: Integer specifying the number of samples in each batch during gradient descent. During training, one gradient-descent step is computed per batch, moving the objective function one step toward its optimum.
  • epochs: Integer, the epoch at which training terminates. When initial_epoch is not set, this is the total number of training epochs; otherwise the total number of epochs trained is epochs - initial_epoch.
  • verbose: Logging mode. 0 writes nothing to the standard output stream, 1 shows a progress bar, and 2 prints one line per epoch.
  • callbacks: A list of keras.callbacks.Callback objects. The callbacks in this list are invoked at the appropriate points during training; see the callbacks documentation for details.
  • validation_split: A float between 0 and 1 specifying the fraction of the training data to set aside as a validation set. The validation set does not participate in training; after each epoch the model is evaluated on it, reporting metrics such as the loss and accuracy. Note that the split is taken before shuffling, so if your data is ordered you need to shuffle it manually before using validation_split; otherwise the validation samples may not be representative.
  • validation_data: A tuple of the form (X, y) specifying an explicit validation set. This parameter overrides validation_split.
  • shuffle: Boolean or string, usually a Boolean, indicating whether to randomly shuffle the order of the input samples during training. The string "batch" is a special option for HDF5 data: it shuffles samples within each batch only.
  • class_weight: Dictionary mapping class indices to weights, used to scale the loss function during training (training only).

  • sample_weight: numpy array of weights used to scale the loss function during training (training only). You can pass a 1-D vector with the same length as the samples to weight each sample individually, or, for time-series data, a 2-D matrix of shape (samples, sequence_length) to assign a different weight to every time step of every sample. In the latter case, make sure to pass sample_weight_mode='temporal' when compiling the model.

  • initial_epoch: Start training from the epoch specified by this parameter; useful when resuming a previous training run.
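To make the parameters above concrete, here is a minimal sketch using tf.keras; the data shapes, layer sizes, and class weights are made up for illustration:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic data: 100 samples, 8 features, binary labels.
x = np.random.rand(100, 8).astype("float32")
y = np.random.randint(0, 2, size=(100,))

model = keras.Sequential([
    keras.Input(shape=(8,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

history = model.fit(
    x, y,
    batch_size=32,
    epochs=3,
    verbose=0,
    validation_split=0.2,           # last 20% of x/y held out (split taken before shuffling)
    class_weight={0: 1.0, 1: 2.0},  # give class 1 twice the loss weight
    shuffle=True,
)
print(sorted(history.history.keys()))
```

Because a validation set is present, history.history contains val_loss alongside loss, with one entry per epoch.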
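The per-time-step form of sample_weight can be sketched as follows (shapes and layer choices are invented for illustration). Note that older Keras versions also require sample_weight_mode='temporal' in compile(), as described above, while recent tf.keras infers the temporal mode from the 2-D weight shape:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# 10 sequences, 5 time steps, 3 features, with a label at every time step.
x = np.random.rand(10, 5, 3).astype("float32")
y = np.random.randint(0, 2, size=(10, 5, 1))

model = keras.Sequential([
    keras.Input(shape=(5, 3)),
    layers.SimpleRNN(8, return_sequences=True),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

w = np.ones((10, 5))  # shape (samples, sequence_length)
w[:, 0] = 0.0         # exclude the first time step of every sequence from the loss
hist = model.fit(x, y, sample_weight=w, epochs=1, verbose=0)
```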

The fit function returns a History object. Its History.history attribute records how the loss function and other metrics change across epochs; if a validation set is used, it also records those metrics on the validation set. The following sketch shows how this is typically run in a Jupyter notebook:

network = Sequential([
    layers.Dense(...)  # layer arguments omitted in the original
])

history = network.fit(x, y)  # x, y: training data and labels
history.history  # print the training record
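A runnable version of the sketch above (the model, data shapes, and epoch count are made up for illustration) shows that history.history is a plain dict mapping each metric name to a list with one value per epoch:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

x = np.random.rand(64, 4).astype("float32")
y = np.random.randint(0, 2, size=(64,))

network = keras.Sequential([
    keras.Input(shape=(4,)),
    layers.Dense(1, activation="sigmoid"),
])
network.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"])

history = network.fit(x, y, epochs=2, validation_split=0.25, verbose=0)

# history.history: metric name -> one value per epoch
for name, values in history.history.items():
    print(name, values)
```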

Origin blog.csdn.net/weixin_40244676/article/details/105091539