Solved: Reducing the Learning Rate of the Adam Optimizer in Keras

Of course, let's get straight into the topic.

Deep learning models have become an essential part of modern technology, and optimization algorithms such as the Adam optimizer play a crucial role in how well they perform. Keras, a powerful and easy-to-use open-source Python library for building and evaluating deep learning models, wraps the efficient numerical computation libraries Theano and TensorFlow. Tuning the learning rate of such optimization algorithms matters, because it directly affects how the model learns. In this article, we walk through how to reduce the learning rate of the Adam optimizer in Keras step by step, and cover the libraries and functions involved along the way.

The Need for Learning Rate Adjustment

The learning rate is a key hyperparameter of optimization algorithms, including the Adam optimizer. It determines the step size taken at each iteration while moving toward a minimum of the loss function. Specifically, a low learning rate requires more training epochs because each weight update is small, whereas a large learning rate can approach the minimum much faster, but risks overshooting it.
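To make the role of the step size concrete, here is a minimal sketch (a plain gradient-descent update on a toy quadratic loss, not part of the original snippets) comparing a small and a large learning rate:

# Toy loss L(w) = w**2 has gradient dL/dw = 2*w
w = 5.0
grad = 2 * w

for lr in (0.01, 0.5):
    new_w = w - lr * grad   # one gradient-descent step
    print(f"lr={lr}: w moves from {w} to {new_w}")

# lr=0.01 takes a tiny, cautious step (5.0 -> 4.9), while lr=0.5 jumps straight
# to the minimum (5.0 -> 0.0); an even larger rate would overshoot it.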

It is therefore common practice to gradually reduce the learning rate over time, a technique usually called learning rate decay. Learning rate decay helps the model settle into the bottom of the loss function, preventing the large steps taken early in training from causing large oscillations later on.
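As a rough illustration of time-based decay (the decay constant 0.01 below is an arbitrary choice for demonstration), the effective learning rate shrinks as the epoch number grows:

initial_lr = 0.1
decay = 0.01   # arbitrary decay constant, for illustration only

for epoch in (0, 10, 50, 100):
    lr = initial_lr / (1 + decay * epoch)   # time-based decay schedule
    print(f"epoch {epoch:3d}: learning rate = {lr:.4f}")

# epoch 0 keeps the full 0.1, while by epoch 100 the rate has halved to 0.05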

Implementing Learning Rate Decay in Keras

In Keras, this adjustment can be achieved with the help of the LearningRateScheduler and ReduceLROnPlateau callback functions.

from keras.callbacks import LearningRateScheduler

# Learning rate schedule: time-based decay
epochs = 100                            # total number of training epochs (assumed value)
initial_learning_rate = 0.1
decay = initial_learning_rate / epochs

def lr_time_based_decay(epoch, lr):
    # Shrink the current learning rate a little further on every epoch
    return lr * 1 / (1 + decay * epoch)

# Fit the model (model, X_train and Y_train are assumed to be defined earlier),
# letting the callback update the learning rate at the start of each epoch
model.fit(X_train, Y_train, epochs=epochs,
          callbacks=[LearningRateScheduler(lr_time_based_decay, verbose=1)])

The LearningRateScheduler can change the learning rate as a function of the epoch. ReduceLROnPlateau, on the other hand, monitors a metric and, if no improvement is seen for a 'patience' number of epochs, reduces the learning rate.
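For completeness, here is a minimal sketch of the ReduceLROnPlateau variant; the monitored metric, factor and patience values below are illustrative choices rather than anything prescribed above:

from keras.callbacks import ReduceLROnPlateau

# Halve the learning rate whenever the validation loss has not improved
# for 5 consecutive epochs, but never drop below 1e-6
reduce_lr = ReduceLROnPlateau(monitor='val_loss',
                              factor=0.5,
                              patience=5,
                              min_lr=1e-6,
                              verbose=1)

# model, X_train, Y_train and epochs are assumed to be defined as in the previous snippet
model.fit(X_train, Y_train,
          validation_split=0.2,
          epochs=epochs,
          callbacks=[reduce_lr])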

Working with the Adam Optimizer

When working with the Adam optimizer, we instantiate it and specify the learning rate. During model compilation, we pass in this optimizer instance.

from keras.optimizers import Adam

# Applying learning rate decay via the optimizer itself
# (`decay` is the time-based decay argument of the legacy Keras 2 Adam optimizer)
adam_opt = Adam(learning_rate=0.001, decay=1e-6)
model.compile(loss='binary_crossentropy', optimizer=adam_opt)

In the code above, we assign to adam_opt an Adam optimizer with a learning rate of 0.001 and a decay rate of 1e-6.
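Note that newer releases of Keras/TensorFlow drop the decay argument in favour of learning rate schedule objects. As a rough sketch, assuming tf.keras is available (the decay_steps and decay_rate values here are illustrative, not from the original post):

import tensorflow as tf

# Multiply the learning rate by 0.96 every 10,000 optimizer steps
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.001,
    decay_steps=10_000,
    decay_rate=0.96)

adam_opt = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
model.compile(loss='binary_crossentropy', optimizer=adam_opt)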

In conclusion, the learning rate controls how quickly we move toward the minimum of the cost function. By carefully tuning this learning rate, we can improve the performance and efficiency of our model. The combination of Keras and Python makes adjusting learning rates a straightforward task, giving us a great deal of control over how we optimize our models.
