
DL - NN/CNN: Progressive optimization of an NN (local dataset of 50,000 training images): six different optimization approaches that raise handwritten-digit recognition accuracy step by step to 99.6%

 處女座的程序猿 2021-09-28



設(shè)計思路

設(shè)計代碼

import network3
from network3 import Network
from network3 import ConvPoolLayer, FullyConnectedLayer, SoftmaxLayer
from network3 import ReLU   # needed by the ReLU-based variants below

# load_data_shared() returns MNIST as Theano shared variables, the format Network.SGD expects
training_data, validation_data, test_data = network3.load_data_shared()
mini_batch_size = 10

# 1) Plain NN: one fully connected sigmoid hidden layer + softmax output; accuracy 97%
net = Network([
        FullyConnectedLayer(n_in=784, n_out=100),
        SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
net.SGD(training_data, 60, mini_batch_size, 0.1, validation_data, test_data)

# 2) CNN: one convolution-pooling layer + sigmoid; accuracy 98.78%
net = Network([
        ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                      filter_shape=(20, 1, 5, 5),
                      poolsize=(2, 2)),
        # 5x5 filters on 28x28 images give 24x24 feature maps; 2x2 pooling halves them to 12x12
        FullyConnectedLayer(n_in=20*12*12, n_out=100),
        SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
net.SGD(training_data, 60, mini_batch_size, 0.1, validation_data, test_data)

# 3) CNN: two convolution-pooling layers + sigmoid; accuracy 99.06%.
# Stacking more layers does not raise accuracy by much and may overfit;
# the right depth has to be found by experiment, more is not always better.
net = Network([
        ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                      filter_shape=(20, 1, 5, 5),
                      poolsize=(2, 2)),
        ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),
                      filter_shape=(40, 20, 5, 5),
                      poolsize=(2, 2)),
        # second stage: 12x12 -> 8x8 after the 5x5 convolution -> 4x4 after 2x2 pooling
        FullyConnectedLayer(n_in=40*4*4, n_out=100),
        SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
net.SGD(training_data, 60, mini_batch_size, 0.1, validation_data, test_data)
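
The n_in values above follow from the convolution and pooling arithmetic: a 5x5 filter applied to an n x n input with no padding yields an (n - 4) x (n - 4) feature map, and 2x2 max-pooling halves each side. A quick illustrative check of those sizes (the helper below is only a sketch, not part of network3):

def feature_map_side(n, filter_side=5, pool_side=2):
    """Side length after one valid 5x5 convolution followed by 2x2 pooling."""
    return (n - filter_side + 1) // pool_side

side1 = feature_map_side(28)      # 12 -> one conv layer gives FullyConnectedLayer n_in = 20*12*12
side2 = feature_map_side(side1)   # 4  -> two conv layers give n_in = 40*4*4
print(side1, side2)               # 12 4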

# 4) CNN: replace sigmoid with Rectified Linear Units, f(z) = max(0, z); accuracy 99.23%
net = Network([
        ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                      filter_shape=(20, 1, 5, 5),
                      poolsize=(2, 2),
                      activation_fn=ReLU),   # use ReLU as the activation function
        ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),
                      filter_shape=(40, 20, 5, 5),
                      poolsize=(2, 2),
                      activation_fn=ReLU),
        FullyConnectedLayer(n_in=40*4*4, n_out=100, activation_fn=ReLU),
        SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
net.SGD(training_data, 60, mini_batch_size, 0.03, validation_data, test_data, lmbda=0.1)
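
For comparison, the two activations being swapped are the sigmoid σ(z) = 1/(1 + e^(-z)) and ReLU f(z) = max(0, z). A minimal NumPy sketch of the two functions (illustrative only, not the Theano versions used inside network3):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

z = np.array([-2.0, 0.0, 3.0])
print(sigmoid(z))   # approx [0.1192 0.5 0.9526]: saturates towards 0 and 1
print(relu(z))      # [0. 0. 3.]: negative inputs are clipped, positive pass through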

# 5) CNN: ReLU + expanded training set of 250,000 images (the original 50,000 x 5,
#    obtained by shifting each image one pixel up, down, left and right); accuracy 99.37%
# First run on the command line: python expand_mnist.py  (writes ../data/mnist_expanded.pkl.gz)
expanded_training_data, _, _ = network3.load_data_shared("../data/mnist_expanded.pkl.gz")
net = Network([
        ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                      filter_shape=(20, 1, 5, 5),
                      poolsize=(2, 2),
                      activation_fn=ReLU),
        ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),
                      filter_shape=(40, 20, 5, 5),
                      poolsize=(2, 2),
                      activation_fn=ReLU),
        FullyConnectedLayer(n_in=40*4*4, n_out=100, activation_fn=ReLU),
        SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
net.SGD(expanded_training_data, 60, mini_batch_size, 0.03, validation_data, test_data, lmbda=0.1)
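
The expansion step is plain data augmentation by translation: every 28x28 training image is copied four times, shifted by one pixel in each direction, turning 50,000 examples into 250,000. A rough sketch of the idea (simplified, not the actual expand_mnist.py implementation):

import numpy as np

def shift_image(img, d_row, d_col):
    """Shift a flattened 28x28 image by (d_row, d_col) pixels, padding with background zeros."""
    shifted = np.roll(img.reshape(28, 28), (d_row, d_col), axis=(0, 1))
    # zero out the wrapped-around row/column so nothing leaks in from the opposite edge
    if d_row == 1:  shifted[0, :] = 0
    if d_row == -1: shifted[-1, :] = 0
    if d_col == 1:  shifted[:, 0] = 0
    if d_col == -1: shifted[:, -1] = 0
    return shifted.reshape(784)

def expand(images, labels):
    """Return the original images plus the four one-pixel shifts of each one (5x the data)."""
    new_images, new_labels = list(images), list(labels)
    for img, label in zip(images, labels):
        for d_row, d_col in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            new_images.append(shift_image(img, d_row, d_col))
            new_labels.append(label)
    return new_images, new_labels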


# 6) CNN: ReLU + the 250,000-image expanded training set + dropout (randomly dropping
#    half of the neurons) applied to the final fully connected layers; accuracy 99.60%
expanded_training_data, _, _ = network3.load_data_shared("../data/mnist_expanded.pkl.gz")
net = Network([
        ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                      filter_shape=(20, 1, 5, 5),
                      poolsize=(2, 2),
                      activation_fn=ReLU),
        ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),
                      filter_shape=(40, 20, 5, 5),
                      poolsize=(2, 2),
                      activation_fn=ReLU),
        FullyConnectedLayer(
            n_in=40*4*4, n_out=1000, activation_fn=ReLU, p_dropout=0.5),
        FullyConnectedLayer(
            n_in=1000, n_out=1000, activation_fn=ReLU, p_dropout=0.5),
        SoftmaxLayer(n_in=1000, n_out=10, p_dropout=0.5)],
        mini_batch_size)
net.SGD(expanded_training_data, 40, mini_batch_size, 0.03, validation_data, test_data)
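
With p_dropout=0.5, each neuron in those layers is zeroed out with probability one half on every training mini-batch, which keeps the large fully connected layers from co-adapting and overfitting; at test time the whole layer is used. A minimal NumPy sketch of training-time dropout masking (inverted-dropout style, for illustration only, not the Theano code inside network3):

import numpy as np

def dropout_forward(activations, p_dropout=0.5, rng=np.random.default_rng(0)):
    """Drop each unit with probability p_dropout and rescale the survivors
    so the expected activation stays the same (inverted dropout)."""
    mask = rng.random(activations.shape) >= p_dropout
    return activations * mask / (1.0 - p_dropout)

layer_output = np.array([0.2, 0.9, 0.4, 0.7])
print(dropout_forward(layer_output))   # roughly half of the entries come out zeroed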
