Parametric Rectified Linear Units, or PReLU, bring adaptability to Keras convolution layers. Just as fashion adapts to shifting trends, so can your AI model. This feature takes the popular Rectified Linear Unit (ReLU) activation function a step further by allowing the negative slope to be learned from the input data rather than remaining fixed. In practical terms, this means that with PReLU your models can extract and learn from both positive and negative features in the input data, improving their performance and capacity.
The adaptability of PReLU adds depth and new possibilities to the design of Keras convolution layers. The flexibility PReLU provides is like finding a versatile garment that can be mixed and matched across styles and seasons, delivering value well beyond its cost.
Understanding Parametric Rectified Linear Units
Parametric Rectified Linear Units form an important part of the ever-expanding world of deep learning. They are inspired by the standard ReLU, often regarded as the de facto activation function in convolutional neural networks (CNNs). However, unlike the standard ReLU, which sets all negative inputs to zero, PReLU introduces a small, learnable slope for inputs below zero.
from keras.models import Sequential
from keras.layers import Conv2D, PReLU

# Define a CNN with a Parametric ReLU activation
input_shape = (64, 64, 3)  # example input shape: 64x64 RGB images
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(PReLU())
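To make the math behind that layer concrete, here is a minimal plain-NumPy sketch (not part of the Keras snippet above) of the function a PReLU unit applies: the output equals the input for positive values, and a coefficient times the input otherwise. In an actual PReLU layer this coefficient is a trainable parameter; here it is passed in by hand for illustration.

import numpy as np

def prelu(x, a):
    # f(x) = x when x > 0, otherwise a * x
    # In a real PReLU layer, a is learned during training rather than fixed
    return np.where(x > 0, x, a * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(prelu(x, a=0.25))  # [-0.5   -0.125  0.     1.5  ]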
Integrating PReLU into Keras Convolution Layers
Parametric ReLU can be slotted neatly into Keras convolution layers. Within the Keras framework, this activation is easy to call and can be incorporated into your neural network with just a few lines of code. Much like pairing a little black dress with an eccentric accessory, this unconventional piece in a network architecture can give it an edge over more traditional designs. Let's see how this is done step by step.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, PReLU

# Define the model
model = Sequential()

# Add a convolution layer followed by the PReLU activation
model.add(Conv2D(32, (3, 3), input_shape=(64, 64, 3)))
model.add(PReLU())

# Add a max pooling layer
model.add(MaxPooling2D(pool_size=(2, 2)))

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
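One detail worth knowing: by default the Keras PReLU layer learns a separate slope coefficient for every activation element, which can add a noticeable number of parameters after a convolution. The layer accepts a shared_axes argument for sharing the coefficient across chosen axes; a brief sketch, assuming a recent Keras version and the channels-last layout used above:

from keras.layers import PReLU

# Share the learned slope across the height and width axes,
# leaving one trainable coefficient per channel instead of one per activation
channelwise_prelu = PReLU(shared_axes=[1, 2])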
PReLU vs. Other Activation Functions
As in fashion, where the fit of a style varies from person to person, PReLU may not be the best choice for every task. It is well suited to large datasets and complex problems. For smaller networks or simpler tasks, however, ReLU or Leaky ReLU may be sufficient, as sketched below. Choosing an activation function is like choosing the right outfit for an occasion; it all depends on the requirements and constraints of your project.
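For comparison, here is a brief sketch of the same convolution block built with each of the three activations mentioned above. The conv_block helper and the layer sizes are placeholders for illustration, not a recommendation:

from keras.models import Sequential
from keras.layers import Conv2D, ReLU, LeakyReLU, PReLU

def conv_block(activation_layer):
    # Identical convolution block; only the activation layer differs
    model = Sequential()
    model.add(Conv2D(32, (3, 3), input_shape=(64, 64, 3)))
    model.add(activation_layer)
    return model

relu_model = conv_block(ReLU())        # negative inputs set to zero
leaky_model = conv_block(LeakyReLU())  # small fixed negative slope
prelu_model = conv_block(PReLU())      # negative slope learned during training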
This blending of expertise from the worlds of AI and fashion shows how engaging and practical it can be to bring these fields together. Your best creations in Python Keras, combined with your own distinctive style, can make AI development as exciting as preparing for a fashion show. The key is to remember that with adaptability and flexibility come unexpected possibilities and new forms of expression.