[TensorFlow 2.0 Notes 1] Introduction to TensorFlow 2.x and a Detailed Installation Guide (Windows & Linux)!


1. The Official Release of TensorFlow 2.0

1.1. Introduction to TensorFlow 2.0

October 1, 2019 was the 70th anniversary of the founding of the People's Republic of China. Amid the national celebration, Google announced across the Pacific that the open-source machine learning library TensorFlow 2.0 is now available to the general public. TensorFlow 2.0 has three headline traits: simplicity, clarity, and extensibility. It greatly simplifies the API and improves the ability to deploy models with TensorFlow Lite and TensorFlow.js.

TensorFlow 2.0 is already very close to PyTorch. Compare TensorFlow 1.x, PyTorch, and TensorFlow 2.x below!

Should you use TensorFlow 2.0 or PyTorch? Consider the Deep Learning Framework Power Scores 2018 and the citation counts of papers on arXiv.

TensorFlow's ecosystem is shown below:

1.2. Suggestions for Learning TensorFlow 2.0

2. Three Major Advantages of the TensorFlow 2.0 Framework

2.1. GPU Acceleration
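As a minimal sketch of what GPU acceleration looks like in TensorFlow 2.x (the matrix sizes and device string below are arbitrary, chosen for illustration): operations run on a GPU automatically when one is visible, and tf.device can pin them to a specific device explicitly.

```python
import tensorflow as tf

# Check whether a GPU is visible to TensorFlow
print("GPU available:", tf.test.is_gpu_available())

# Pin a large matmul explicitly to the CPU; on a machine with a GPU,
# replacing '/CPU:0' with '/GPU:0' typically makes it far faster.
with tf.device('/CPU:0'):
    a = tf.random.normal([1000, 1000])
    b = tf.random.normal([1000, 1000])
    c = tf.matmul(a, b)

print(c.shape)   # (1000, 1000)
```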

2.2. Automatic Differentiation

By default, GradientTape only tracks tensors of type tf.Variable; anything else must be wrapped (watched) explicitly. Here a, b, c are plain tf.Tensor objects; tf.Variable is a special kind of tf.Tensor, so plain tensors need a simple tape.watch() call. The example below uses TensorFlow's automatic differentiation, where a, b, c participate in the gradient computation, and the code involved in that computation goes inside the tf.GradientTape() context.

import tensorflow as tf

x = tf.constant(1.)
a = tf.constant(2.)
b = tf.constant(3.)
c = tf.constant(4.)

with tf.GradientTape() as tape:
    tape.watch([a, b, c])      # constants are not tracked by default; watch them explicitly
    y = a**2 * x + b * x + c

[dy_da, dy_db, dy_dc] = tape.gradient(y, [a, b, c])
print(dy_da, dy_db, dy_dc)

Output: the first gradient is 4.0 (dy/da = 2ax = 4.0), the second is 1.0 (dy/db = x = 1.0), and the third is also 1.0 (dy/dc = 1.0):

tf.Tensor(4.0, shape=(), dtype=float32) tf.Tensor(1.0, shape=(), dtype=float32) tf.Tensor(1.0, shape=(), dtype=float32)

Differentiating a matrix product with respect to a matrix

import os
import tensorflow as tf

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'   # suppress TensorFlow info/warning logs

x = tf.constant([[1., 3.], [2.5, 3.]])
a = tf.constant([[10., 2.], [2., 3.]])     # the matrix we differentiate with respect to

with tf.GradientTape() as tape:
    tape.watch([a])
    y = a @ x

[dy_da] = tape.gradient(y, [a])
print("Gradient with respect to a:\n", dy_da.numpy())

Output:

ssh://zhangkf@192.168.36.48:22/home/zhangkf/anaconda3/envs/tf2.0/bin/python -u /home/zhangkf/tmp/pycharm_project_258/demo/TF2/daoshu.py
Gradient with respect to a:
 [[4.  5.5]
 [4.  5.5]]

Process finished with exit code 0
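The result can also be checked by hand. When y is not a scalar, tape.gradient differentiates the sum of y's elements, so for y = a @ x the gradient is d(sum(y))/da = ones @ xᵀ. A quick NumPy sanity check (a sketch added for illustration, not part of the original run):

```python
import numpy as np

x = np.array([[1., 3.], [2.5, 3.]])

# tape.gradient of a non-scalar y differentiates sum(y);
# for y = a @ x this gives d(sum(y))/da = ones @ x.T
dy_da = np.ones((2, 2)) @ x.T
print(dy_da)   # [[4.  5.5]
               #  [4.  5.5]]
```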

2.3. Neural Network APIs
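As a minimal sketch of the high-level tf.keras API (the layer sizes here are arbitrary, chosen for illustration): a model is a stack of layers that can be called directly on a batch of inputs.

```python
import tensorflow as tf
from tensorflow.keras import layers

# A tiny illustrative classifier: 4 input features -> 3 classes
model = tf.keras.Sequential([
    layers.Dense(16, activation='relu', input_shape=(4,)),
    layers.Dense(3, activation='softmax'),
])

out = model(tf.zeros([2, 4]))   # forward pass on a dummy batch of 2
print(out.shape)                # (2, 3)
```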

3. Install CUDA + cuDNN First

3.1. Installing CUDA + cuDNN on Windows 10

Alternatively, you can install the graphics driver and CUDA using the simple method provided on the official TensorFlow website.
Link: GPU support

This step applies only when installing the GPU build of TensorFlow, and it is the step where errors are most likely. Download link: CUDA Toolkit 10.0 Archive

Find and download the cuDNN version that matches your CUDA version. Link: cudnn

You can also download it from my Baidu Cloud link (extraction code: fo45). Install CUDA first by double-clicking the .exe file;

Choose Custom installation.

Pay particular attention to two points: 1. your machine's current display driver version; 2. whether Visual Studio is installed. First, deselect the NVIDIA GeForce Experience application; it is not needed here.

2. Deselect the Visual Studio Integration component inside CUDA if your machine does not have Visual Studio installed; if Visual Studio is installed, you can leave it checked.

Also note the columns on the left showing the new driver version versus the currently installed one: the display driver version shipped with the installer must be greater than or equal to your current driver version, or errors may occur. If your current driver already satisfies this, you can uncheck the display driver component and skip installing it.

Configure the PATH environment variable.

The first two entries are added automatically when CUDA is installed.

Installing cuDNN is just an unzip: extract the archive, then add the paths to your configuration.

3.2. Installing CUDA + cuDNN on Linux

See my earlier blog post: Linux notes: installing the NVIDIA driver, CUDA, and cuDNN on Ubuntu 16.04/18.04!

4. Installing Anaconda

4.1. Installing Anaconda on Windows 10

See my earlier post: Installing Anaconda on Windows 10

4.2. Installing Anaconda on Linux

Installing Anaconda on Linux needs little explanation. Download the installer file shown below; the Tsinghua mirror is recommended: Tsinghua University Open Source Software Mirror. Choose the build matching your system.

A single command is all that is needed:

bash Anaconda3-5.3.1-Linux-x86_64.sh
# Keep pressing Enter; at the end, when the VSCode option appears, choose NO, and the installation is complete.
# Verify with the conda command. If typing conda prints nothing at first, close the terminal and reopen it;
# if the help text now appears, the installation succeeded.

5. Installing the Official TensorFlow 2.0 Release

5.1. Configure a Mirror Source First

# Use the Tsinghua mirror
conda config --prepend channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --set show_channel_urls yes
# Or use the USTC mirror
conda config --prepend channels https://mirrors.ustc.edu.cn/anaconda/pkgs/main/
conda config --prepend channels https://mirrors.ustc.edu.cn/anaconda/pkgs/free/
conda config --set show_channel_urls yes

5.2. Installing TensorFlow 2.0

Since TensorFlow 2.0 was still experimental at the time, I recommend installing it in a standalone virtual environment. I personally prefer Anaconda, so I will use it to demonstrate the installation. (Updated 2019-10-01: TensorFlow 2.0 has now been officially released!)

  • 1. First, create a fresh trial environment on top of your Anaconda installation:
conda create -n tf2.0 python=3.7

2. Then activate the newly created environment:

conda activate tf2.0

3. Run the installation command below (the trailing -i flag downloads from the Tsinghua mirror, which is much faster than the default source within China):

  • Note: for the GPU build, you only need to install the cudatoolkit and cudnn packages in addition; be sure to install cudatoolkit version 10.0. If your environment's cudatoolkit is older than 10.0, upgrade it to 10.0 first.
# pip install tensorflow-gpu==2.0.0-alpha0                                      # GPU version, pre-release
conda install cudatoolkit=10.0 cudnn                                            # GPU version (updated 2019-10-01); can also be installed afterwards
pip install tensorflow-gpu==2.0.0 -i https://pypi.tuna.tsinghua.edu.cn/simple   # GPU version (updated 2019-10-01)
# pip install tensorflow==2.0.0-alpha0                                          # CPU version, pre-release
pip install tensorflow==2.0.0 -i https://pypi.tuna.tsinghua.edu.cn/simple       # CPU version (updated 2019-10-01)

Or alternatively:

conda install tensorflow-gpu   # GPU version (updated 2019-10-01)
conda install tensorflow       # CPU version (updated 2019-10-01)

4. Test whether the tensorflow-gpu build installed successfully:

zhangkf@john:~$ conda activate tf2.0
(tf2.0) zhangkf@john-X:~$ pip install ipython                 # for convenient interactive testing
(tf2.0) zhangkf@john-X:~$ ipython
Python 3.7.4 (default, Aug 13 2019, 20:35:49) 
Type 'copyright', 'credits' or 'license' for more information
IPython 7.8.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import tensorflow as tf                                                                                  

In [2]: tf.__version__                                                                                           
Out[2]: '2.0.0'

In [3]: tf.test.is_gpu_available()                                                                               
......(output omitted)
......(output omitted)
Out[3]: True                   # True means the GPU installation succeeded!

5. You can then run python and check TensorFlow's installation path:

tf.__path__

6. TensorFlow 2.0 in Practice: the MNIST Dataset

6.1. TensorFlow 2.0 Code

import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras import models, layers, optimizers

import matplotlib.pyplot as plt

# Load the MNIST dataset
(x_train_all, y_train_all), (x_test, y_test) = keras.datasets.mnist.load_data()
# Simple normalization to [0, 1]
x_train_all, x_test = x_train_all / 255.0, x_test / 255.0

# First 50,000 samples for validation, remaining 10,000 for training
# (this matches the run log in section 6.4; swap the split for the usual 50,000-train setup)
x_valid, x_train = x_train_all[:50000], x_train_all[50000:]
y_valid, y_train = y_train_all[:50000], y_train_all[50000:]

# Display a single image
# def show_single_image(img_arr):
#     plt.imshow(img_arr, cmap='binary')
#     plt.show()
# show_single_image(x_train[2])

# Stack layers to build a tf.keras.Sequential model, then choose an optimizer and loss for training
model = models.Sequential([layers.Flatten(input_shape=(28, 28)),
                           layers.Dense(128, activation='relu'),
                           layers.Dropout(0.5),
                           layers.Dense(10, activation='softmax')
])
# Compile the model
model.compile(optimizer=optimizers.Adam(lr=1e-4), loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# Print the model summary (layer shapes and parameter counts)
model.summary()
print(len(model.layers))
# Train the model
history = model.fit(x_train, y_train, epochs=20, validation_freq=1,
                    validation_data=(x_valid, y_valid))
# Evaluate the model on the test set
model.evaluate(x_test, y_test, verbose=2)


history_dict = history.history         # history.history is a dict of all metrics recorded during training
print(history_dict)

# Plot the loss curves
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epochs = range(1, len(loss_values) + 1)
plt.plot(epochs, loss_values, 'bo', label='Training loss')         # 'bo' = blue dots
plt.plot(epochs, val_loss_values, 'b', label='Validation loss')    # 'b' = solid blue line
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()

# Plot the accuracy curves
acc_values = history_dict['accuracy']
val_acc_values = history_dict['val_accuracy']
plt.plot(epochs, acc_values, 'ro', label='Training acc')           # 'ro' = red dots
plt.plot(epochs, val_acc_values, 'r', label='Validation acc')      # 'r' = solid red line
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Acc')
plt.legend()
plt.show()

Note: the first run needs to download the dataset. If the download feels very slow, see my companion article on slow dataset downloads.

6.2. Displaying a Single Image

# Display a single image
def show_single_image(img_arr):
    plt.imshow(img_arr, cmap='binary')
    plt.show()

show_single_image(x_train[0])
show_single_image(x_train[1])
show_single_image(x_train[2])

6.3. Plotting Loss and Accuracy

For matplotlib color options, see: https://blog.csdn.net/wei18791957243/article/details/83831266

history_dict = history.history         # history.history is a dict of all metrics recorded during training
print(history_dict)

# Plot the loss curves
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epochs = range(1, len(loss_values) + 1)
# plt.plot(epochs, loss_values, 'bo', label='Training loss')         # 'bo' = blue dots
# plt.plot(epochs, val_loss_values, 'b', label='Validation loss')    # 'b' = solid blue line
plt.plot(epochs, loss_values, c='r', linestyle='--', marker='o', label='Training loss')
plt.plot(epochs, val_loss_values, c='b', linestyle=':', marker='*', label='Validation loss')

plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()

# Plot the accuracy curves
acc_values = history_dict['accuracy']
val_acc_values = history_dict['val_accuracy']
# plt.plot(epochs, acc_values, 'ro', label='Training acc')           # 'ro' = red dots
# plt.plot(epochs, val_acc_values, 'r', label='Validation acc')      # 'r' = solid red line
plt.plot(epochs, acc_values, c='r', linestyle='--', marker='o', label='Training acc')
plt.plot(epochs, val_acc_values, c='b', linestyle=':', marker='*', label='Validation acc')

plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Acc')
plt.legend()
plt.show()

Style 1

Style 2

6.4. Execution Results

ssh://zhangkf@192.168.136.64:22/home/zhangkf/anaconda3/envs/tf2/bin/python -u /home/zhangkf/johnCodes/TF1/cbam.py
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            (None, 784)               0         
_________________________________________________________________
dense (Dense)                (None, 128)               100480    
_________________________________________________________________
dropout (Dropout)            (None, 128)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 10)                1290      
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
_________________________________________________________________
4
Train on 10000 samples, validate on 50000 samples
Epoch 1/50
2020-01-03 17:19:25.021758: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
10000/10000 [==============================] - 6s 605us/sample - loss: 1.6369 - accuracy: 0.4987 - val_loss: 1.0236 - val_accuracy: 0.7904
Epoch 2/50
10000/10000 [==============================] - 5s 505us/sample - loss: 0.9063 - accuracy: 0.7508 - val_loss: 0.6750 - val_accuracy: 0.8418
Epoch 3/50
10000/10000 [==============================] - 5s 482us/sample - loss: 0.6746 - accuracy: 0.8143 - val_loss: 0.5474 - val_accuracy: 0.8613
Epoch 4/50
10000/10000 [==============================] - 5s 503us/sample - loss: 0.5763 - accuracy: 0.8417 - val_loss: 0.4789 - val_accuracy: 0.8742
Epoch 5/50
10000/10000 [==============================] - 5s 491us/sample - loss: 0.5045 - accuracy: 0.8605 - val_loss: 0.4359 - val_accuracy: 0.8814
Epoch 6/50
10000/10000 [==============================] - 5s 490us/sample - loss: 0.4609 - accuracy: 0.8679 - val_loss: 0.4064 - val_accuracy: 0.8889
Epoch 7/50
10000/10000 [==============================] - 5s 498us/sample - loss: 0.4226 - accuracy: 0.8795 - val_loss: 0.3858 - val_accuracy: 0.8923
Epoch 8/50
10000/10000 [==============================] - 5s 476us/sample - loss: 0.4016 - accuracy: 0.8880 - val_loss: 0.3667 - val_accuracy: 0.8975
Epoch 9/50
10000/10000 [==============================] - 5s 500us/sample - loss: 0.3790 - accuracy: 0.8888 - val_loss: 0.3533 - val_accuracy: 0.9008
Epoch 10/50
10000/10000 [==============================] - 5s 501us/sample - loss: 0.3589 - accuracy: 0.8952 - val_loss: 0.3400 - val_accuracy: 0.9033
Epoch 11/50
10000/10000 [==============================] - 5s 486us/sample - loss: 0.3473 - accuracy: 0.9030 - val_loss: 0.3289 - val_accuracy: 0.9056
Epoch 12/50
10000/10000 [==============================] - 5s 514us/sample - loss: 0.3289 - accuracy: 0.9047 - val_loss: 0.3197 - val_accuracy: 0.9081
Epoch 13/50
10000/10000 [==============================] - 5s 497us/sample - loss: 0.3223 - accuracy: 0.9096 - val_loss: 0.3114 - val_accuracy: 0.9101
Epoch 14/50
10000/10000 [==============================] - 5s 496us/sample - loss: 0.3132 - accuracy: 0.9087 - val_loss: 0.3031 - val_accuracy: 0.9122
Epoch 15/50
10000/10000 [==============================] - 5s 498us/sample - loss: 0.3015 - accuracy: 0.9144 - val_loss: 0.2972 - val_accuracy: 0.9135
Epoch 16/50
10000/10000 [==============================] - 5s 503us/sample - loss: 0.2877 - accuracy: 0.9184 - val_loss: 0.2902 - val_accuracy: 0.9159
Epoch 17/50
10000/10000 [==============================] - 5s 499us/sample - loss: 0.2770 - accuracy: 0.9224 - val_loss: 0.2844 - val_accuracy: 0.9171
Epoch 18/50
10000/10000 [==============================] - 5s 504us/sample - loss: 0.2728 - accuracy: 0.9222 - val_loss: 0.2785 - val_accuracy: 0.9186
Epoch 19/50
10000/10000 [==============================] - 5s 481us/sample - loss: 0.2667 - accuracy: 0.9210 - val_loss: 0.2731 - val_accuracy: 0.9206
Epoch 20/50
10000/10000 [==============================] - 5s 515us/sample - loss: 0.2618 - accuracy: 0.9282 - val_loss: 0.2689 - val_accuracy: 0.9213
Epoch 21/50
10000/10000 [==============================] - 5s 510us/sample - loss: 0.2555 - accuracy: 0.9274 - val_loss: 0.2656 - val_accuracy: 0.9226
Epoch 22/50
10000/10000 [==============================] - 5s 497us/sample - loss: 0.2452 - accuracy: 0.9295 - val_loss: 0.2613 - val_accuracy: 0.9233
Epoch 23/50
10000/10000 [==============================] - 5s 507us/sample - loss: 0.2393 - accuracy: 0.9303 - val_loss: 0.2568 - val_accuracy: 0.9246
Epoch 24/50
10000/10000 [==============================] - 5s 511us/sample - loss: 0.2287 - accuracy: 0.9349 - val_loss: 0.2534 - val_accuracy: 0.9257
Epoch 25/50
10000/10000 [==============================] - 5s 496us/sample - loss: 0.2257 - accuracy: 0.9370 - val_loss: 0.2493 - val_accuracy: 0.9266
Epoch 26/50
10000/10000 [==============================] - 5s 505us/sample - loss: 0.2191 - accuracy: 0.9375 - val_loss: 0.2467 - val_accuracy: 0.9273
Epoch 27/50
10000/10000 [==============================] - 5s 487us/sample - loss: 0.2179 - accuracy: 0.9370 - val_loss: 0.2429 - val_accuracy: 0.9284
Epoch 28/50
10000/10000 [==============================] - 5s 484us/sample - loss: 0.2054 - accuracy: 0.9420 - val_loss: 0.2405 - val_accuracy: 0.9287
Epoch 29/50
10000/10000 [==============================] - 5s 479us/sample - loss: 0.2134 - accuracy: 0.9380 - val_loss: 0.2379 - val_accuracy: 0.9307
Epoch 30/50
10000/10000 [==============================] - 5s 487us/sample - loss: 0.2033 - accuracy: 0.9432 - val_loss: 0.2348 - val_accuracy: 0.9304
Epoch 31/50
10000/10000 [==============================] - 5s 512us/sample - loss: 0.2021 - accuracy: 0.9423 - val_loss: 0.2321 - val_accuracy: 0.9312
Epoch 32/50
10000/10000 [==============================] - 5s 508us/sample - loss: 0.1967 - accuracy: 0.9436 - val_loss: 0.2306 - val_accuracy: 0.9317
Epoch 33/50
10000/10000 [==============================] - 5s 505us/sample - loss: 0.1888 - accuracy: 0.9444 - val_loss: 0.2279 - val_accuracy: 0.9328
Epoch 34/50
10000/10000 [==============================] - 5s 475us/sample - loss: 0.1866 - accuracy: 0.9494 - val_loss: 0.2253 - val_accuracy: 0.9333
Epoch 35/50
10000/10000 [==============================] - 5s 498us/sample - loss: 0.1812 - accuracy: 0.9494 - val_loss: 0.2234 - val_accuracy: 0.9335
Epoch 36/50
10000/10000 [==============================] - 5s 506us/sample - loss: 0.1750 - accuracy: 0.9521 - val_loss: 0.2204 - val_accuracy: 0.9345
Epoch 37/50
10000/10000 [==============================] - 5s 508us/sample - loss: 0.1761 - accuracy: 0.9498 - val_loss: 0.2193 - val_accuracy: 0.9348
Epoch 38/50
10000/10000 [==============================] - 5s 511us/sample - loss: 0.1722 - accuracy: 0.9500 - val_loss: 0.2173 - val_accuracy: 0.9350
Epoch 39/50
10000/10000 [==============================] - 5s 490us/sample - loss: 0.1688 - accuracy: 0.9538 - val_loss: 0.2154 - val_accuracy: 0.9360
Epoch 40/50
10000/10000 [==============================] - 5s 507us/sample - loss: 0.1638 - accuracy: 0.9541 - val_loss: 0.2142 - val_accuracy: 0.9360
Epoch 41/50
10000/10000 [==============================] - 5s 508us/sample - loss: 0.1593 - accuracy: 0.9546 - val_loss: 0.2128 - val_accuracy: 0.9371
Epoch 42/50
10000/10000 [==============================] - 5s 503us/sample - loss: 0.1611 - accuracy: 0.9532 - val_loss: 0.2106 - val_accuracy: 0.9374
Epoch 43/50
10000/10000 [==============================] - 5s 501us/sample - loss: 0.1600 - accuracy: 0.9533 - val_loss: 0.2093 - val_accuracy: 0.9373
Epoch 44/50
10000/10000 [==============================] - 5s 496us/sample - loss: 0.1559 - accuracy: 0.9564 - val_loss: 0.2082 - val_accuracy: 0.9378
Epoch 45/50
10000/10000 [==============================] - 5s 490us/sample - loss: 0.1518 - accuracy: 0.9571 - val_loss: 0.2068 - val_accuracy: 0.9382
Epoch 46/50
10000/10000 [==============================] - 5s 524us/sample - loss: 0.1481 - accuracy: 0.9586 - val_loss: 0.2045 - val_accuracy: 0.9392
Epoch 47/50
10000/10000 [==============================] - 5s 518us/sample - loss: 0.1448 - accuracy: 0.9607 - val_loss: 0.2042 - val_accuracy: 0.9389
Epoch 48/50
10000/10000 [==============================] - 5s 506us/sample - loss: 0.1464 - accuracy: 0.9577 - val_loss: 0.2026 - val_accuracy: 0.9397
Epoch 49/50
10000/10000 [==============================] - 5s 504us/sample - loss: 0.1400 - accuracy: 0.9609 - val_loss: 0.2020 - val_accuracy: 0.9400
Epoch 50/50
10000/10000 [==============================] - 5s 503us/sample - loss: 0.1388 - accuracy: 0.9606 - val_loss: 0.2004 - val_accuracy: 0.9399
10000/1 - 1s - loss: 0.1110 - accuracy: 0.9448
{'loss': [1.6369247730255128, 0.9062983194351196, 0.6746272377967835, 0.5763215946674347, 0.5045416011333466, 0.4609259304523468, 0.4226319133520126, 0.4016176699399948, 0.3790293601751328, 0.358876828122139, 0.3473489733695984, 0.3289035973072052, 0.32234994769096376, 0.31318984200954436, 0.30152170400619505, 0.28766803696155546, 0.2770005432963371, 0.27278262889385224, 0.266667086315155, 0.2617689757108688, 0.25548550642728807, 0.2452052030444145, 0.2392628249168396, 0.22874195029735564, 0.22568819737434387, 0.2190980857014656, 0.21794635989665986, 0.20540731687545777, 0.21341918300390245, 0.2033143123626709, 0.2020812345445156, 0.19672773296833038, 0.18878504244089125, 0.18663886888623238, 0.18120444531440735, 0.175020119535923, 0.17609654362797736, 0.17221742925047875, 0.1688426162481308, 0.16384108155965804, 0.15933493370413782, 0.1610976384282112, 0.1599728398144245, 0.15591111019849777, 0.15175455218553544, 0.14811449723243714, 0.144813884973526, 0.14636144714057445, 0.13998521909713746, 0.13879059435725213], 'accuracy': [0.4987, 0.7508, 0.8143, 0.8417, 0.8605, 0.8679, 0.8795, 0.888, 0.8888, 0.8952, 0.903, 0.9047, 0.9096, 0.9087, 0.9144, 0.9184, 0.9224, 0.9222, 0.921, 0.9282, 0.9274, 0.9295, 0.9303, 0.9349, 0.937, 0.9375, 0.937, 0.942, 0.938, 0.9432, 0.9423, 0.9436, 0.9444, 0.9494, 0.9494, 0.9521, 0.9498, 0.95, 0.9538, 0.9541, 0.9546, 0.9532, 0.9533, 0.9564, 0.9571, 0.9586, 0.9607, 0.9577, 0.9609, 0.9606], 'val_loss': [1.023638479309082, 0.6750377749633789, 0.5473850820732117, 0.4788583271884918, 0.43591173800468447, 0.4064372475910187, 0.3857950745344162, 0.366710502948761, 0.35327610171794893, 0.33998463090419767, 0.32885828042030335, 0.3197121269106865, 0.311429030418396, 0.30308594008207324, 0.29719826214790346, 0.29020631564855576, 0.2844043013691902, 0.27849485159635545, 0.2730598618674278, 0.26888268801689147, 0.26563949211835863, 0.26128692982316015, 0.25681639394521716, 0.25343574436306954, 0.24933239055037498, 0.24672061641573906, 
0.24288358903169632, 0.24052829293847083, 0.23793197919249534, 0.23484366439580917, 0.23206947706222533, 0.2305875606203079, 0.22792339917302132, 0.2253209100973606, 0.22341893581211567, 0.22044476644456387, 0.219330649163723, 0.21725530656456948, 0.21540969261407852, 0.21419995639324188, 0.21284355126559734, 0.21064064900517462, 0.20933546797215938, 0.20818855203330516, 0.2067910530567169, 0.2045335038548708, 0.20419522406697274, 0.2025718673455715, 0.2020300270807743, 0.20038156677007676], 'val_accuracy': [0.79042, 0.84176, 0.86128, 0.8742, 0.88144, 0.88892, 0.89226, 0.8975, 0.90084, 0.90326, 0.90558, 0.90808, 0.91008, 0.91216, 0.91348, 0.91594, 0.91708, 0.91864, 0.92058, 0.92126, 0.92258, 0.9233, 0.9246, 0.92574, 0.9266, 0.92732, 0.92842, 0.92872, 0.9307, 0.93038, 0.93118, 0.9317, 0.93284, 0.93328, 0.9335, 0.93454, 0.9348, 0.93504, 0.93596, 0.936, 0.93706, 0.93738, 0.9373, 0.93782, 0.93818, 0.9392, 0.93894, 0.93974, 0.93996, 0.93994]}

Process finished with exit code 0

7. Message Me for the Full Set of Course Videos + Slides + Code!

  • Option 1: private-message me on CSDN!
  • Option 2: QQ mail: 1115291605@qq.com, or add me directly on QQ!
©️2020 CSDN