Introductory Machine Learning (Watermelon Book + Pumpkin Book): A Neural Network Summary (with Python Implementation)

1. Neural Networks

1.1 An Intuitive Understanding

This topic is harder to grasp than the earlier ones, so I will try to use plain language to explain what a neural network is, where it came from, why it has become so popular in recent years, and what the deep learning built on top of it is all about. Let us lift the veil on neural networks step by step.

The neural network model actually originates from the familiar linear model. Yes, you heard that right: the simple $y = kx + b$. Back then it was called the perceptron, and it took its inspiration directly from the structure of the neuron; put plainly, it is a fairly simple linear connection plus an output. The figure below is a schematic of a biological neuron. The dendrites receive impulses arriving from the axons of other neurons and pass them to the cell body; the axon carries the cell body's response outward. A nerve cell can be viewed as a machine with two states: "yes" when activated, "no" when not. Its state depends on how much signal it receives from other nerve cells and on the nature of its synapses (inhibitory or excitatory). When the incoming signal exceeds a certain threshold, the cell body is activated and fires an electrical pulse, which travels along the axon and is passed through synapses to other neurons.
[Figure: schematic of a biological neuron]
If biology is not your strong suit and the description above did not quite land, that is fine. Scientists simplified this structure and, modeling it on the neuron, built the perceptron shown below, the early prototype of the neural network. Its formula is just a combination of linear relations; it is the simplest single-layer neural network, consisting of inputs, weights, and an output. Here $a_1, a_2, a_3$ are the inputs and $w_1, w_2, w_3$ are the weights. After entering the node, the inputs pass through the activation function to produce the output

$$z = \sigma\left(\sum_{i=1}^{n} a_i w_i - \mu\right)$$

where $\mu$ is the threshold, also called the bias. (In plain terms: take each arrow's input value $a$ times the weight on that arrow, add everything up, and subtract the threshold $\mu$.) Notice the $\sigma$ here; it works in tandem with the threshold $\mu$: when the weighted sum exceeds $\mu$, the neuron is activated and $z = 1$; otherwise the neuron is inhibited and $z = 0$.
[Figure: the perceptron model]
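To make the formula concrete, here is a minimal sketch of a single threshold neuron. The names a, w, and mu mirror the symbols above and are purely illustrative:

# A single perceptron-style neuron: fires (returns 1) when the
# weighted sum of its inputs exceeds the threshold mu, else returns 0.
def neuron(a, w, mu):
    s = sum(ai * wi for ai, wi in zip(a, w))  # weighted sum of a_i * w_i
    return 1 if s - mu > 0 else 0             # step activation

print(neuron([1, 0, 1], [0.5, 0.3, 0.4], mu=0.6))  # 0.9 > 0.6, so prints 1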
This is the most primitive neural network, the perceptron, invented by Rosenblatt back in 1957; later neural networks and support vector machines were both built on this foundation. The basic principle of the perceptron is pointwise correction: first draw an arbitrary separating line in the plane and count the misclassified points; then randomly pick one misclassified point and correct for it, i.e. move the line so that this point becomes correctly classified; then randomly pick another misclassified point and correct again. The line keeps shifting until every point is classified correctly, at which point we have our separating line.
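The pointwise-correction procedure can be sketched in a few lines. This is an illustrative version of the classic perceptron update rule (nudge the weights toward each misclassified point), not code from the book:

# Perceptron training by pointwise correction: whenever a point is
# misclassified, shift the separating line toward classifying it correctly.
def train_perceptron(X, y, lr=0.1, epochs=100):
    # X: list of feature vectors, y: labels in {-1, +1}
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            s = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * s <= 0:  # misclassified (or exactly on the line)
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
                b += lr * yi
                errors += 1
        if errors == 0:      # every point classified correctly: stop
            break
    return w, b

# Linearly separable toy data (logical AND, labels in {-1, +1}):
w, b = train_perceptron([[0, 0], [0, 1], [1, 0], [1, 1]], [-1, -1, -1, 1])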
But in 1969, Marvin Minsky raised the XOR problem, which the perceptron cannot solve, and more generally the case of linearly non-separable data, sending the perceptron into a roughly ten-year lull.
In 1974, Werbos first proposed applying the BP algorithm to neural networks, giving the multilayer perceptron (MLP), also called the artificial neural network (ANN): a neural network with a single hidden layer. With the appearance of hidden layers and improvements to the activation function, the multilayer perceptron successfully solved the XOR problem.
Later still, as the theory of automatic chain-rule differentiation matured, Hinton, the father of neural networks, successfully used the BP (Back Propagation) algorithm to train neural networks.
To simplify neural networks and reduce the large number of parameters that must be determined, Yann LeCun proposed the famous Convolutional Neural Network (CNN) in 1995, giving neural networks local connectivity and weight sharing.
Later, in 2006, Hinton and his students published a paper in Science and proposed the deep belief network, formally opening the new era of deep learning. As CNNs, RNNs, and ResNet appeared one after another, deep learning came to be accepted and recognized, becoming one of the hottest fields of the 21st century; it drew worldwide attention in particular when AlphaGo defeated the world Go champion Lee Sedol, and the age of artificial intelligence followed. In 2019, the Turing Award, computing's Nobel Prize, was given to three pioneers of deep learning: Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, marking deep learning's full acceptance by the academic community. As for current research, neural networks still largely remain at the black-box stage: you feed in data and get results back, but the why and the how remain unknown to us. This is where today's controversy around neural networks lies; one hopes future scientists can open this mysterious Pandora's box.
For a relatively simple neural network, let me offer one more informal picture to deepen the intuition. Look at the figure below: from bottom to top we have the input layer (the input data), the hidden layer (which captures the information in the data), and the output layer (the results); the real magic lives in the hidden layer. Informally, think of the network as a company: the output layer at the top is upper management, the hidden layer is the middle managers, and the input layer is the staff. The staff collect data and report it upward. Each manager, owing to different duties and preferences, favors different inputs: say the leftmost hidden unit is most responsive to the first and third staff members and rather insensitive to the second and fourth. When data from the first and third staff members arrives, that manager immediately catches on, extracts the information it contains, and reports it to the executives in the output layer. The first executive, in turn, is most responsive to the first and fourth managers and makes a decision based on what they report; that decision is the output. One caveat: because the network is fully connected, the first manager is actually in contact with all of the staff; it is just that among the four, the connections to the first and third are stronger, and over time those relationships grow stronger and more sensitive still. With that, you have a rough picture of how the fully connected hidden layer gathers information and how the output layer makes decisions from it.

[Figure: a fully connected network with input, hidden, and output layers]

1.2 Theoretical Analysis

1.2.1 Basic Structure of the BP Neural Network

The most basic neural network consists of an input layer, an output layer, and a hidden layer.
[Figure: basic structure of a BP neural network]

1.2.2 The Heart of the BP Neural Network: Backpropagation

The core of backpropagation is chain-rule differentiation from calculus. Backpropagation has two main steps:
(1) Compute the total error:

$$E_{total} = \sum \frac{1}{2}(target - output)^2$$
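As a quick numeric check of the formula; the target and output values here are made-up illustration numbers:

# E_total = sum over all output neurons of (1/2) * (target - output)^2
def total_error(targets, outputs):
    return sum(0.5 * (t - o) ** 2 for t, o in zip(targets, outputs))

print(total_error([0.01, 0.99], [0.75, 0.77]))  # 0.5*0.74^2 + 0.5*0.22^2 ≈ 0.298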
(2) Update the hidden-to-output weights, by taking the partial derivative of the total error with respect to each weight via the chain rule. The computation is illustrated in the figure below: taking $w_5$ as an example, first compute $\frac{\partial E_{total}}{\partial w_5}$, then update $w_5$ by gradient descent (GD).
When the network has many layers, this would require a very large number of derivative values; the error-backpropagation algorithm replaces that complicated derivative computation with a sequence-style recurrence. More precisely, the errors are computed, and the weights updated, layer by layer from back to front.
[Figure: computing ∂E_total/∂w5 by the chain rule]
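For a sigmoid output neuron with the squared-error loss above, the chain rule expands into three factors: $\frac{\partial E}{\partial w_5} = \frac{\partial E}{\partial out_{o1}} \cdot \frac{\partial out_{o1}}{\partial net_{o1}} \cdot \frac{\partial net_{o1}}{\partial w_5} = -(target - out_{o1}) \cdot out_{o1}(1 - out_{o1}) \cdot out_{h1}$. A minimal sketch; the names and sample values mirror the usual worked example and are illustrative:

# Gradient of the error w.r.t. one hidden-to-output weight w5,
# assuming a sigmoid output neuron and squared-error loss.
def pd_error_wrt_w5(target_o1, out_o1, out_h1):
    pd_E_wrt_out = -(target_o1 - out_o1)     # dE / d out_o1
    pd_out_wrt_net = out_o1 * (1 - out_o1)   # sigmoid derivative
    pd_net_wrt_w5 = out_h1                   # d net_o1 / d w5
    return pd_E_wrt_out * pd_out_wrt_net * pd_net_wrt_w5

# Gradient-descent update: w5 <- w5 - lr * dE/dw5
w5, lr = 0.40, 0.5
w5 -= lr * pd_error_wrt_w5(0.01, 0.75, 0.59)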

1.2.3 Main Workflow of the BP Neural Network

Overview of the process
The BP neural network is a nonlinear multilayer feedforward network trained with an added backward pass. The basic idea: after each forward pass, compute the error between the output layer and the target; multiply it by the derivative of the activation function and pass the result back to the hidden layer closest to the output; then compute the error between the current hidden layer and the layer before it, and keep propagating backward until the first hidden layer is reached. After each backward pass, the weight parameters are updated.
Concrete steps (a compact runnable sketch follows the list)
(1) Initialize the parameters.
(2) Construct the loss function (either cross-entropy or squared error).
(3) Forward pass → produces the prediction $\hat{y}$.
(4) Stopping condition: either $|\hat{y} - y|^2 < \epsilon$ or a maximum number of iterations.
(5) Backward pass, driven by gradient descent (GD), to update the weights.
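The five steps map onto a training loop like the one below. This is a minimal NumPy sketch of a 2-2-1 sigmoid network on XOR; the hyperparameters and toy data are illustrative, not from the book:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)   # (1) initialize parameters
W2, b2 = rng.normal(size=(2, 1)), np.zeros(1)
sigmoid = lambda t: 1 / (1 + np.exp(-t))
lr, eps = 0.5, 1e-3
for epoch in range(20000):                      # (4) stop on iteration count...
    h = sigmoid(X @ W1 + b1)                    # (3) forward pass
    y_hat = sigmoid(h @ W2 + b2)
    loss = 0.5 * np.sum((y_hat - y) ** 2)       # (2) squared-error loss
    if loss < eps:                              # (4) ...or when |ŷ - y|² < ε
        break
    d_out = (y_hat - y) * y_hat * (1 - y_hat)   # (5) backprop through output layer
    d_hid = (d_out @ W2.T) * h * (1 - h)        #     then through the hidden layer
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid;  b1 -= lr * d_hid.sum(axis=0)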

2. Code Implementation

Here we set up an example and implement it first with traditional hand-written code.
Suppose you have the following network: the first layer is the input layer with two neurons i1, i2 and a bias term b1; the second layer is the hidden layer with two neurons h1, h2 and a bias term b2; the third layer is the output o1, o2. The wi labels on the edges are the connection weights between layers, and the activation function defaults to the sigmoid. We assign them initial values as shown below.
Goal: given the inputs i1, i2 (0.05 and 0.10), make the outputs as close as possible to the original outputs o1, o2 (0.01 and 0.99).
[Figure: the example network with its initial weights]

# !/usr/bin/env python
# @Time: 2022/3/26 15:54
# @Author: 华阳
# @File: ANN.py
# @Software: PyCharm
# Naming conventions:
# "pd_"  : prefix for partial derivatives
# "d_"   : prefix for (total) derivatives
# "w_ho" : index of a hidden-to-output weight
# "w_ih" : index of an input-to-hidden weight
import math
import random
import matplotlib.pyplot as plt

class NeuralNetwork:
    LEARNING_RATE = 0.5
    def __init__(self, num_inputs, num_hidden, num_outputs, hidden_layer_weights=None, hidden_layer_bias=None,
                 output_layer_weights=None, output_layer_bias=None):
        self.num_inputs = num_inputs
        self.hidden_layer = NeuronLayer(num_hidden, hidden_layer_bias)
        self.output_layer = NeuronLayer(num_outputs, output_layer_bias)
        self.init_weights_from_inputs_to_hidden_layer_neurons(hidden_layer_weights)
        self.init_weights_from_hidden_layer_neurons_to_output_layer_neurons(output_layer_weights)

    def init_weights_from_inputs_to_hidden_layer_neurons(self, hidden_layer_weights):
        weight_num = 0
        for h in range(len(self.hidden_layer.neurons)):
            for i in range(self.num_inputs):
                if not hidden_layer_weights:
                    self.hidden_layer.neurons[h].weights.append(random.random())
                else:
                    self.hidden_layer.neurons[h].weights.append(hidden_layer_weights[weight_num])
                weight_num += 1

    def init_weights_from_hidden_layer_neurons_to_output_layer_neurons(self, output_layer_weights):
        weight_num = 0
        for o in range(len(self.output_layer.neurons)):
            for h in range(len(self.hidden_layer.neurons)):
                if not output_layer_weights:
                    self.output_layer.neurons[o].weights.append(random.random())
                else:
                    self.output_layer.neurons[o].weights.append(output_layer_weights[weight_num])
                weight_num += 1

    def inspect(self):
        print('------')
        print('* Inputs: {}'.format(self.num_inputs))
        print('------')
        print('Hidden Layer')
        self.hidden_layer.inspect()
        print('------')
        print('* Output Layer')
        self.output_layer.inspect()
        print('------')

    def feed_forward(self, inputs):
        hidden_layer_outputs = self.hidden_layer.feed_forward(inputs)
        return self.output_layer.feed_forward(hidden_layer_outputs)

    def train(self, training_inputs, training_outputs):
        self.feed_forward(training_inputs)
        # 1. Deltas for the output neurons: ∂E/∂zⱼ
        pd_errors_wrt_output_neuron_total_net_input = [0] * len(self.output_layer.neurons)
        for o in range(len(self.output_layer.neurons)):
            pd_errors_wrt_output_neuron_total_net_input[o] = \
                self.output_layer.neurons[o].calculate_pd_error_wrt_total_net_input(training_outputs[o])
        # 2. Deltas for the hidden neurons
        pd_errors_wrt_hidden_neuron_total_net_input = [0] * len(self.hidden_layer.neurons)
        for h in range(len(self.hidden_layer.neurons)):
            # dE/dyⱼ = Σ ∂E/∂zₒ * ∂zₒ/∂yⱼ = Σ ∂E/∂zₒ * wₕₒ
            d_error_wrt_hidden_neuron_output = 0
            for o in range(len(self.output_layer.neurons)):
                d_error_wrt_hidden_neuron_output += (pd_errors_wrt_output_neuron_total_net_input[o]
                                                     * self.output_layer.neurons[o].weights[h])
            # ∂E/∂zⱼ = dE/dyⱼ * dyⱼ/dzⱼ
            pd_errors_wrt_hidden_neuron_total_net_input[h] = (d_error_wrt_hidden_neuron_output
                * self.hidden_layer.neurons[h].calculate_pd_total_net_input_wrt_input())
        # 3. Update the output-layer weights
        for o in range(len(self.output_layer.neurons)):
            for w_ho in range(len(self.output_layer.neurons[o].weights)):
                # ∂Eⱼ/∂wᵢⱼ = ∂E/∂zⱼ * ∂zⱼ/∂wᵢⱼ
                pd_error_wrt_weight = pd_errors_wrt_output_neuron_total_net_input[o] * self.output_layer.neurons[
                    o].calculate_pd_total_net_input_wrt_weight(w_ho)
                # Δw = α * ∂Eⱼ/∂wᵢ
                self.output_layer.neurons[o].weights[w_ho] -= self.LEARNING_RATE * pd_error_wrt_weight
        # 4. Update the hidden-layer weights
        for h in range(len(self.hidden_layer.neurons)):
            for w_ih in range(len(self.hidden_layer.neurons[h].weights)):
                # ∂Eⱼ/∂wᵢ = ∂E/∂zⱼ * ∂zⱼ/∂wᵢ
                pd_error_wrt_weight = pd_errors_wrt_hidden_neuron_total_net_input[h] * self.hidden_layer.neurons[
                    h].calculate_pd_total_net_input_wrt_weight(w_ih)
                # Δw = α * ∂Eⱼ/∂wᵢ
                self.hidden_layer.neurons[h].weights[w_ih] -= self.LEARNING_RATE * pd_error_wrt_weight

    def calculate_total_error(self, training_sets):
        total_error = 0
        for t in range(len(training_sets)):
            training_inputs, training_outputs = training_sets[t]
            self.feed_forward(training_inputs)
            for o in range(len(training_outputs)):
                total_error += self.output_layer.neurons[o].calculate_error(training_outputs[o])
        return total_error


class NeuronLayer:
    def __init__(self, num_neurons, bias):
        # Neurons in the same layer share one bias term b
        self.bias = bias if bias else random.random()
        self.neurons = []
        for i in range(num_neurons):
            self.neurons.append(Neuron(self.bias))

    def inspect(self):
        print('Neurons:', len(self.neurons))
        for n in range(len(self.neurons)):
            print(' Neuron', n)
            for w in range(len(self.neurons[n].weights)):
                print(' Weight:', self.neurons[n].weights[w])
            print(' Bias:', self.bias)

    def feed_forward(self, inputs):
        outputs = []
        for neuron in self.neurons:
            outputs.append(neuron.calculate_output(inputs))
        return outputs

    def get_outputs(self):
        outputs = []
        for neuron in self.neurons:
            outputs.append(neuron.output)
        return outputs


class Neuron:
    def __init__(self, bias):
        self.bias = bias
        self.weights = []

    def calculate_output(self, inputs):
        self.inputs = inputs
        self.output = self.squash(self.calculate_total_net_input())
        return self.output

    def calculate_total_net_input(self):
        total = 0
        for i in range(len(self.inputs)):
            total += self.inputs[i] * self.weights[i]
        return total + self.bias

    # Sigmoid activation function
    def squash(self, total_net_input):
        return 1 / (1 + math.exp(-total_net_input))

    def calculate_pd_error_wrt_total_net_input(self, target_output):
        return self.calculate_pd_error_wrt_output(target_output) * self.calculate_pd_total_net_input_wrt_input()

    # Each neuron's error is computed with the squared-error formula
    def calculate_error(self, target_output):
        return 0.5 * (target_output - self.output) ** 2

    def calculate_pd_error_wrt_output(self, target_output):
        return -(target_output - self.output)

    def calculate_pd_total_net_input_wrt_input(self):
        return self.output * (1 - self.output)

    def calculate_pd_total_net_input_wrt_weight(self, index):
        return self.inputs[index]


# The example from the text above:
nn = NeuralNetwork(2, 2, 2, hidden_layer_weights=[0.15, 0.2, 0.25, 0.3], hidden_layer_bias=0.35,
                   output_layer_weights=[0.4, 0.45, 0.5, 0.55], output_layer_bias=0.6)
losses = []
for i in range(1000):
    nn.train([0.05, 0.1], [0.01, 0.09])
    losses.append(round(nn.calculate_total_error([[[0.05, 0.1], [0.01, 0.09]]]), 9))
plt.plot(losses)
plt.xlabel("train epoch")
plt.ylabel("train loss")
plt.show()
nn.inspect()

Output of the code:

------
* Inputs: 2
------
Hidden Layer
Neurons: 2
 Neuron 0
 Weight: 0.2964604103620042
 Weight: 0.49292082072400834
 Bias: 0.35
 Neuron 1
 Weight: 0.39084333156627366
 Weight: 0.5816866631325477
 Bias: 0.35
------
* Output Layer
Neurons: 2
 Neuron 0
 Weight: -3.060957226462873
 Weight: -3.0308626603447846
 Bias: 0.6
 Neuron 1
 Weight: -2.393475400842236
 Weight: -2.3602088337272704
 Bias: 0.6
------

[Figure: training loss curve of the hand-written network]
Seeing the code above, you may be tempted to give up. No need! Writing everything by hand really is too hard, even for people doing AI, which is why the experts packaged the complicated construction and training machinery into frameworks. The most common are PyTorch, PaddlePaddle, Keras, and TensorFlow. Personally I find Keras the friendliest for beginners and a good way to start building networks, while the other three are better suited for deeper work; note that Keras needs another framework, such as TensorFlow, as its backend.
Press Win+R to open the Run dialog, type cmd, and hit Enter to open a command prompt.
Before installing, check whether your machine's graphics card is NVIDIA; CUDA only supports NVIDIA cards, so if yours is AMD, install the CPU build instead. Then run the following commands.

# AMD graphics card (CPU build)
pip install tensorflow-cpu
# NVIDIA graphics card
pip install tensorflow
# With a backend in place, install Keras
pip install keras
# If downloads are slow, use the Tsinghua mirror
pip install tensorflow-cpu -i https://pypi.tuna.tsinghua.edu.cn/simple/
# NVIDIA graphics card
pip install tensorflow -i https://pypi.tuna.tsinghua.edu.cn/simple/
# With a backend in place, install Keras
pip install keras -i https://pypi.tuna.tsinghua.edu.cn/simple/
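To confirm the install worked, a quick check in plain Python, assuming only the packages just installed:

# Sanity check: both imports should succeed and print version strings
import tensorflow as tf
import keras
print(tf.__version__, keras.__version__)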

Building an MLP with Keras for binary classification: Pima Indians diabetes prediction

# 2.1 Import the required modules
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense

# 2.2 Prepare the data: split features/labels and scale features to [0, 1]
df = pd.read_csv('pima_data.csv', header=None)
data = df.values
X = data[:, :-1]
y = data[:, -1]
scaler = MinMaxScaler()
scaler.fit(X)
X = scaler.transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=4)
# 2.3 Build the model: set the neuron counts of the input, hidden, and output layers and their activation functions
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()

# 2.4 Compile the model: choose the loss function, optimizer, and evaluation metric
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# 2.5 Train the model: set the number of epochs, the batch size, and the verbosity
model.fit(X_train, y_train, epochs=100, batch_size=20, verbose=True)
# 2.6 Evaluate the model
score = model.evaluate(X_test, y_test, verbose=False)
print("Accuracy: {:.2f}%".format(score[1]*100))

Output:

Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense (Dense)               (None, 12)                108       
                                                                 
 dense_1 (Dense)             (None, 8)                 104       
                                                                 
 dense_2 (Dense)             (None, 1)                 9         
                                                                 
=================================================================
Total params: 221
Trainable params: 221
Non-trainable params: 0
_________________________________________________________________
Epoch 1/100
27/27 [==============================] - 1s 2ms/step - loss: 0.6956 - accuracy: 0.5102
Epoch 2/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6832 - accuracy: 0.6499
Epoch 3/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6765 - accuracy: 0.6499
Epoch 4/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6709 - accuracy: 0.6480
Epoch 5/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6660 - accuracy: 0.6480
Epoch 6/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6611 - accuracy: 0.6480
Epoch 7/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6569 - accuracy: 0.6480
Epoch 8/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6534 - accuracy: 0.6480
Epoch 9/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6499 - accuracy: 0.6480
Epoch 10/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6468 - accuracy: 0.6480
Epoch 11/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6431 - accuracy: 0.6499
Epoch 12/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6394 - accuracy: 0.6480
Epoch 13/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6361 - accuracy: 0.6555
Epoch 14/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6322 - accuracy: 0.6536
Epoch 15/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6278 - accuracy: 0.6611
Epoch 16/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6220 - accuracy: 0.6574
Epoch 17/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6162 - accuracy: 0.6723
Epoch 18/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6103 - accuracy: 0.6723
Epoch 19/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6063 - accuracy: 0.6760
Epoch 20/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6028 - accuracy: 0.6760
Epoch 21/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5982 - accuracy: 0.6667
Epoch 22/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5939 - accuracy: 0.6741
Epoch 23/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5891 - accuracy: 0.6834
Epoch 24/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5854 - accuracy: 0.6834
Epoch 25/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5814 - accuracy: 0.6872
Epoch 26/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5793 - accuracy: 0.6853
Epoch 27/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5752 - accuracy: 0.6872
Epoch 28/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5716 - accuracy: 0.6927
Epoch 29/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5666 - accuracy: 0.7020
Epoch 30/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5642 - accuracy: 0.6927
Epoch 31/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5610 - accuracy: 0.6927
Epoch 32/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5565 - accuracy: 0.7002
Epoch 33/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5534 - accuracy: 0.7002
Epoch 34/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5510 - accuracy: 0.7132
Epoch 35/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5463 - accuracy: 0.7244
Epoch 36/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5439 - accuracy: 0.7151
Epoch 37/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5412 - accuracy: 0.7095
Epoch 38/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5370 - accuracy: 0.7151
Epoch 39/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5327 - accuracy: 0.7207
Epoch 40/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5297 - accuracy: 0.7356
Epoch 41/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5269 - accuracy: 0.7225
Epoch 42/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5233 - accuracy: 0.7300
Epoch 43/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5193 - accuracy: 0.7374
Epoch 44/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5172 - accuracy: 0.7318
Epoch 45/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5143 - accuracy: 0.7505
Epoch 46/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5087 - accuracy: 0.7523
Epoch 47/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5064 - accuracy: 0.7449
Epoch 48/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5031 - accuracy: 0.7467
Epoch 49/100
27/27 [==============================] - 0s 2ms/step - loss: 0.5013 - accuracy: 0.7561
Epoch 50/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4979 - accuracy: 0.7393
Epoch 51/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4953 - accuracy: 0.7523
Epoch 52/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4918 - accuracy: 0.7542
Epoch 53/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4907 - accuracy: 0.7635
Epoch 54/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4864 - accuracy: 0.7598
Epoch 55/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4850 - accuracy: 0.7542
Epoch 56/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4837 - accuracy: 0.7654
Epoch 57/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4800 - accuracy: 0.7579
Epoch 58/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4779 - accuracy: 0.7598
Epoch 59/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4778 - accuracy: 0.7635
Epoch 60/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4749 - accuracy: 0.7579
Epoch 61/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4734 - accuracy: 0.7691
Epoch 62/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4711 - accuracy: 0.7709
Epoch 63/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4708 - accuracy: 0.7821
Epoch 64/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4694 - accuracy: 0.7691
Epoch 65/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4668 - accuracy: 0.7691
Epoch 66/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4668 - accuracy: 0.7728
Epoch 67/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4676 - accuracy: 0.7691
Epoch 68/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4669 - accuracy: 0.7709
Epoch 69/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4622 - accuracy: 0.7803
Epoch 70/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4619 - accuracy: 0.7765
Epoch 71/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4595 - accuracy: 0.7784
Epoch 72/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4585 - accuracy: 0.7858
Epoch 73/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4585 - accuracy: 0.7803
Epoch 74/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4592 - accuracy: 0.7858
Epoch 75/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4561 - accuracy: 0.7877
Epoch 76/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4575 - accuracy: 0.7877
Epoch 77/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4571 - accuracy: 0.7877
Epoch 78/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4532 - accuracy: 0.7896
Epoch 79/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4583 - accuracy: 0.7840
Epoch 80/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4520 - accuracy: 0.7952
Epoch 81/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4520 - accuracy: 0.7914
Epoch 82/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4514 - accuracy: 0.7877
Epoch 83/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4552 - accuracy: 0.7840
Epoch 84/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4516 - accuracy: 0.7840
Epoch 85/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4559 - accuracy: 0.7858
Epoch 86/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4493 - accuracy: 0.7896
Epoch 87/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4498 - accuracy: 0.7914
Epoch 88/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4501 - accuracy: 0.7952
Epoch 89/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4470 - accuracy: 0.7970
Epoch 90/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4489 - accuracy: 0.7858
Epoch 91/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4470 - accuracy: 0.8007
Epoch 92/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4480 - accuracy: 0.8063
Epoch 93/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4463 - accuracy: 0.8007
Epoch 94/100
27/27 [==============================] - 0s 3ms/step - loss: 0.4460 - accuracy: 0.7914
Epoch 95/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4457 - accuracy: 0.7914
Epoch 96/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4469 - accuracy: 0.7989
Epoch 97/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4451 - accuracy: 0.8007
Epoch 98/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4438 - accuracy: 0.7989
Epoch 99/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4435 - accuracy: 0.8026
Epoch 100/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4434 - accuracy: 0.8026
Accuracy: 75.76%

Process finished with exit code 0
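If you also want a training-loss curve like the one from the hand-written network, Keras's fit() returns a History object whose history dict records the per-epoch loss. A minimal sketch, assuming the model and data defined above:

# fit() returns a History object; history['loss'] holds the per-epoch loss
history = model.fit(X_train, y_train, epochs=100, batch_size=20, verbose=False)
plt.plot(history.history['loss'])
plt.xlabel("train epoch")
plt.ylabel("train loss")
plt.show()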
