
Image Classification: Cats vs. Dogs

宝哥大数据 · Published 2019-12-20 11:46:46

This article shows how to classify images of cats and dogs, using tf.keras.preprocessing.image.ImageDataGenerator to load the data and a tf.keras.Sequential model for classification. You will gain hands-on experience and develop intuition for the following concepts:

  • Building a data input pipeline with tf.keras.preprocessing.image.ImageDataGenerator, which lets the model process data on disk efficiently.
  • Overfitting – how to identify and prevent it

This tutorial follows a basic machine learning workflow:

  • Examine and understand the data
  • Build an input pipeline
  • Build the model
  • Train the model
  • Test the model
  • Improve the model and repeat the process
1.1. Loading the Data

1.1.1. Training and Validation Set Directories

import os
import tensorflow as tf

def loadDataDir():
    _URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'

    # Download and extract the dataset (cached under ~/.keras by default)
    path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
    print(path_to_zip)
    PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
    print(PATH)

    # Paths of the training and validation sets
    train_dir = os.path.join(PATH, 'train')
    validation_dir = os.path.join(PATH, 'validation')

    train_cats_dir = os.path.join(train_dir, 'cats')  # directory with our training cat pictures
    train_dogs_dir = os.path.join(train_dir, 'dogs')  # directory with our training dog pictures
    validation_cats_dir = os.path.join(validation_dir, 'cats')  # directory with our validation cat pictures
    validation_dogs_dir = os.path.join(validation_dir, 'dogs')  # directory with our validation dog pictures

    # os.listdir lists the files under a path
    num_cats_tr = len(os.listdir(train_cats_dir))
    num_dogs_tr = len(os.listdir(train_dogs_dir))

    num_cats_val = len(os.listdir(validation_cats_dir))
    num_dogs_val = len(os.listdir(validation_dogs_dir))

    total_train = num_cats_tr + num_dogs_tr
    total_val = num_cats_val + num_dogs_val

    print('total training cat images:', num_cats_tr)
    print('total training dog images:', num_dogs_tr)

    print('total validation cat images:', num_cats_val)
    print('total validation dog images:', num_dogs_val)
    print("--")
    print("Total training images:", total_train)
    print("Total validation images:", total_val)
    # Also return train_dir and validation_dir, which the generators below need
    return train_dir, validation_dir, train_cats_dir, train_dogs_dir, validation_cats_dir, validation_dogs_dir
1.1.2. Preprocessing

Format the images into appropriately preprocessed floating-point tensors before feeding them to the network:

  • Read the images from disk.
  • Decode the contents of these images and convert them into the proper grid format according to their RGB content.
  • Convert them into floating-point tensors.
  • Normalize: rescale the tensors from values between 0 and 255 to values between 0 and 1, since neural networks prefer small input values.
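The normalization step in the last bullet is just a per-pixel division by 255; a tiny standalone illustration:

```python
# Rescaling maps 8-bit pixel values in [0, 255] to floats in [0, 1],
# which is exactly what ImageDataGenerator(rescale=1./255) applies per pixel
pixels = [0, 127, 255]
scaled = [p / 255 for p in pixels]
print(scaled[0], scaled[-1])  # 0.0 1.0
```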

Fortunately, all of these tasks can be handled by the ImageDataGenerator class provided by tf.keras. It reads images from disk and preprocesses them into the proper tensors. It also sets up generators that turn the images into batches of tensors, which is convenient for training the network.
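The generator code below relies on a few module-level constants (batch_size, epochs, IMG_HEIGHT, IMG_WIDTH) that are never defined in the post. A minimal setup: the image size matches the model summary shown later (150×150), while the batch size and epoch count are assumed values taken from the official TensorFlow cats-vs-dogs tutorial this walkthrough follows:

```python
# Shared hyperparameters for the snippets below.
# batch_size and epochs are assumptions (official tutorial defaults);
# IMG_HEIGHT/IMG_WIDTH follow the (None, 150, 150, 16) shape in the summary.
batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150
```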

from tensorflow.keras.preprocessing.image import ImageDataGenerator

def createGenerator(train_dir, validation_dir):
    # Create the ImageDataGenerators
    train_image_generator = ImageDataGenerator(rescale=1. / 255)  # Generator for our training data
    validation_image_generator = ImageDataGenerator(rescale=1. / 255)  # Generator for our validation data

    # Use the generators to load images from disk
    train_data_gen = train_image_generator.flow_from_directory(
        batch_size=batch_size,
        directory=train_dir,
        shuffle=True,
        target_size=(IMG_HEIGHT, IMG_WIDTH),
        class_mode='binary'
    )

    val_data_gen = validation_image_generator.flow_from_directory(
        batch_size=batch_size,
        directory=validation_dir,
        target_size=(IMG_HEIGHT, IMG_WIDTH),
        class_mode='binary')

    return train_data_gen, val_data_gen

Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
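With class_mode='binary', flow_from_directory derives the two classes from the subdirectory names, assigning label indices in alphanumeric order (so here cats → 0, dogs → 1). That ordering rule can be illustrated without TensorFlow:

```python
import os
import tempfile

# Keras sorts subdirectory names alphanumerically to assign class indices;
# reproducing that rule on a throwaway directory tree:
with tempfile.TemporaryDirectory() as root:
    for cls in ('dogs', 'cats'):          # created out of order on purpose
        os.makedirs(os.path.join(root, cls))
    classes = sorted(os.listdir(root))    # alphanumeric sort, as Keras does
    class_indices = {name: i for i, name in enumerate(classes)}
    print(class_indices)  # {'cats': 0, 'dogs': 1}
```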
Visualizing the training set

# Pull a few samples from the dataset for visualization.
# next() fetches one batch from the generator and returns (x_train, y_train),
# where x_train holds the images and y_train the labels; the labels are
# discarded here since we only want to display the images.
sample_training_images, _ = next(train_data_gen)
plotImages(sample_training_images)
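plotImages is used here and throughout the augmentation sections but never defined in the post; a minimal matplotlib sketch (an assumed implementation) that lays five images out in a row:

```python
import matplotlib.pyplot as plt

def plotImages(images_arr):
    # One row of five axes; zip() stops at the shorter sequence, so passing
    # a full batch simply displays its first five images
    fig, axes = plt.subplots(1, 5, figsize=(20, 20))
    for img, ax in zip(images_arr, axes):
        ax.imshow(img)
        ax.axis('off')
    plt.tight_layout()
    plt.show()
```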


1.2. Creating the Model
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

def createModel():
    model = Sequential([
        Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
        MaxPooling2D(),
        Conv2D(32, 3, padding='same', activation='relu'),
        MaxPooling2D(),
        Conv2D(64, 3, padding='same', activation='relu'),
        MaxPooling2D(),
        Flatten(),
        Dense(512, activation='relu'),
        Dense(1, activation='sigmoid')
    ])
    return model
1.3. Compiling the Model
# Compile the model
model.compile(
    optimizer='adam',
    loss='binary_crossentropy',
    metrics=['accuracy']
)


model.summary()

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 150, 150, 16)      448       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 75, 75, 16)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 75, 75, 32)        4640      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 37, 37, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 37, 37, 64)        18496     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 18, 18, 64)        0         
_________________________________________________________________
flatten (Flatten)            (None, 20736)             0         
_________________________________________________________________
dense (Dense)                (None, 512)               10617344  
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 513       
=================================================================
Total params: 10,641,441
Trainable params: 10,641,441
Non-trainable params: 0
_________________________________________________________________
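The Param # column of the summary above can be verified by hand: a Conv2D layer has (k·k·c_in + 1)·filters parameters (weights plus one bias per filter), and each 2×2 max-pool halves the spatial size, rounding down:

```python
# Sanity-check the parameter counts in the model summary
def conv_params(k, c_in, filters):
    # k*k*c_in weights per filter, plus one bias per filter
    return (k * k * c_in + 1) * filters

assert conv_params(3, 3, 16) == 448        # conv2d
assert conv_params(3, 16, 32) == 4640      # conv2d_1
assert conv_params(3, 32, 64) == 18496     # conv2d_2

# Spatial size: 150 -> 75 -> 37 -> 18 after three 2x2 max-pools,
# so Flatten outputs 18*18*64 = 20736 features
assert 150 // 2 == 75 and 75 // 2 == 37 and 37 // 2 == 18
assert 18 * 18 * 64 == 20736
assert (20736 + 1) * 512 == 10617344       # dense
assert (512 + 1) * 1 == 513                # dense_1
```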
1.4. Training the Model
# Train the model
total_train_num = len(os.listdir(train_cats_dir)) + len(os.listdir(train_dogs_dir))
total_val_num = len(os.listdir(validation_cats_dir)) + len(os.listdir(validation_dogs_dir))
# Model.fit accepts generators directly in TF 2.x (fit_generator is deprecated);
# steps are counted in batches, so divide the image counts by batch_size
history = model.fit(
    train_data_gen,
    steps_per_epoch=total_train_num // batch_size,
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=total_val_num // batch_size
)
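steps_per_epoch and validation_steps are measured in batches, not images. With the 2000 training / 1000 validation images reported earlier and the assumed batch_size of 128, the correct step counts are:

```python
total_train_num = 2000   # from the "Found 2000 images" output above
total_val_num = 1000     # from the "Found 1000 images" output above
batch_size = 128         # assumed value

# One epoch should walk the dataset roughly once
print(total_train_num // batch_size)  # 15 training batches per epoch
print(total_val_num // batch_size)    # 7 validation batches
```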
1.5. Visualizing the Training Results
import matplotlib.pyplot as plt

def visualizeTrainResults(history):
    acc = history.history['accuracy']
    val_acc = history.history['val_accuracy']

    loss = history.history['loss']
    val_loss = history.history['val_loss']

    epochs_range = range(epochs)

    plt.figure(figsize=(8, 8))
    plt.subplot(1, 2, 1)
    plt.plot(epochs_range, acc, label='Training Accuracy')
    plt.plot(epochs_range, val_acc, label='Validation Accuracy')
    plt.legend(loc='lower right')
    plt.title('Training and Validation Accuracy')

    plt.subplot(1, 2, 2)
    plt.plot(epochs_range, loss, label='Training Loss')
    plt.plot(epochs_range, val_loss, label='Validation Loss')
    plt.legend(loc='upper right')
    plt.title('Training and Validation Loss')
    plt.show()


1.6. Overfitting

In the plots above, training accuracy increases roughly linearly over time, while validation accuracy stalls around 70%. This gap is a sign of overfitting.

1.7. Data Augmentation

Overfitting usually arises when there are too few training examples, and one way to address it is to add more training data. Data augmentation generates additional training data from the existing dataset by applying random transformations to the images.

1.7.1. Horizontal Flip

Enable this augmentation by passing horizontal_flip=True to the ImageDataGenerator.

def da1():
    train_image_generator = ImageDataGenerator(rescale=1. / 255, horizontal_flip=True)  # random horizontal flips

    # Use the generator to load images from disk
    train_data_gen = train_image_generator.flow_from_directory(
        batch_size=batch_size,
        directory=train_dir,
        shuffle=True,
        target_size=(IMG_HEIGHT, IMG_WIDTH),
        class_mode='binary'
    )
    # Fetch the same image five times to get five random augmentations of it
    augmented_images = [train_data_gen[0][0][0] for i in range(5)]
    # Visualize
    plotImages(augmented_images)


1.7.2. Random Rotation
def da2():
    train_image_generator = ImageDataGenerator(rescale=1. / 255, rotation_range=45)  # randomly rotate images by up to 45 degrees

    # Use the generator to load images from disk
    train_data_gen = train_image_generator.flow_from_directory(
        batch_size=batch_size,
        directory=train_dir,
        shuffle=True,
        target_size=(IMG_HEIGHT, IMG_WIDTH),
        class_mode='binary'
    )
    # Fetch the same image five times to get five random augmentations of it
    augmented_images = [train_data_gen[0][0][0] for i in range(5)]
    # Visualize
    plotImages(augmented_images)


1.7.3. Zoom Augmentation
def da3():
    train_image_generator = ImageDataGenerator(rescale=1. / 255, zoom_range=0.5)  # random zoom of up to 50%

    # Use the generator to load images from disk
    train_data_gen = train_image_generator.flow_from_directory(
        batch_size=batch_size,
        directory=train_dir,
        shuffle=True,
        target_size=(IMG_HEIGHT, IMG_WIDTH),
        class_mode='binary'
    )
    # Fetch the same image five times to get five random augmentations of it
    augmented_images = [train_data_gen[0][0][0] for i in range(5)]
    # Visualize
    plotImages(augmented_images)


1.7.4. Combining Augmentations

def da4():
    train_image_generator = ImageDataGenerator(rescale=1. / 255,
                                               rotation_range=45,       # random rotation, up to 45 degrees
                                               width_shift_range=.15,   # random horizontal shift
                                               height_shift_range=.15,  # random vertical shift
                                               horizontal_flip=True,    # random horizontal flip
                                               zoom_range=0.5)          # random zoom

    # Use the generator to load images from disk
    train_data_gen = train_image_generator.flow_from_directory(
        batch_size=batch_size,
        directory=train_dir,
        shuffle=True,
        target_size=(IMG_HEIGHT, IMG_WIDTH),
        class_mode='binary'
    )
    # Fetch the same image five times to get five random augmentations of it
    augmented_images = [train_data_gen[0][0][0] for i in range(5)]
    # Visualize
    plotImages(augmented_images)


1.8. Dropout

1.8.1. Creating a Network with Dropout

Here, dropout is applied after the first and last max-pooling layers. Dropout randomly sets 20% of the units to zero during each training epoch, which helps avoid overfitting on the training dataset.

from tensorflow.keras.layers import Dropout  # in addition to the layers imported earlier

model_new = Sequential([
    Conv2D(16, 3, padding='same', activation='relu',
           input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
    MaxPooling2D(),
    Dropout(0.2),   # randomly drop 20% of the units
    Conv2D(32, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Dropout(0.2),   # Dropout
    Flatten(),
    Dense(512, activation='relu'),
    Dense(1, activation='sigmoid')
])
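What Dropout(0.2) does at training time can be simulated in plain Python: each unit is zeroed independently with probability 0.2, and (in Keras's inverted-dropout scheme) the surviving units are scaled by 1/(1 − 0.2) so the expected activation is unchanged. A rough sketch:

```python
import random

random.seed(42)
rate = 0.2
n = 100_000
activations = [1.0] * n

# Inverted dropout: zero ~20% of units, scale survivors by 1/(1-rate)
dropped = [0.0 if random.random() < rate else a / (1 - rate) for a in activations]

kept_fraction = sum(1 for d in dropped if d != 0.0) / n
mean_activation = sum(dropped) / n
print(kept_fraction)    # ~0.8
print(mean_activation)  # ~1.0, expectation is preserved
```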

New training results (accuracy/loss curves after adding dropout)
