Neural network implementation practice (TF 1.x ver)

2019-01-28


  1. Neural network implementation practice with MNIST data (TF 1.x ver)

Source of figures, practice code, and other learning materials: https://datascienceschool.net

# Neural network implementation with MNIST data

%matplotlib inline
%config InlineBackend.figure_formats = {'png', 'retina'}
import matplotlib as mpl
import matplotlib.pyplot as plt

from keras.datasets import mnist
(X_train0, y_train0), (X_test0, y_test0) = mnist.load_data()

plt.figure(figsize=(10, 3))
for i in range(36):
    plt.subplot(3, 12, i+1)
    plt.imshow(X_train0[i], cmap="gray")
    plt.axis("off")
plt.show()
Using TensorFlow backend.

[figure mnist_nn_1_1: the first 36 MNIST training images]

# Import tensorflow, keras, and other packages

import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)

import pandas as pd
import numpy as np

import keras
keras.__version__
'2.2.4'

# The output below shows that each MNIST image is 28 x 28.

print(X_train0.shape, X_train0.dtype)
print(y_train0.shape, y_train0.dtype)
print(X_test0.shape, X_test0.dtype)
print(y_test0.shape, y_test0.dtype)
(60000, 28, 28) uint8
(60000,) uint8
(10000, 28, 28) uint8
(10000,) uint8

# To feed the data into the neural network, preprocess it as follows.

1) Each sample is currently a (28 x 28) matrix; reshape it into a (1 x 784) vector.

2) Convert the data to 'float32' and then scale it. This is one of the common ways to preprocess image data.

3) The pixel values range from 0 to 255, so divide by 255 to scale them into the range 0 to 1.

# Example for step 3
# The pixel values vary from 0 to 255.
set(X_train0[0][5])
{0, 3, 18, 26, 126, 127, 136, 166, 175, 247, 255}
X_train = X_train0.reshape(60000, 784).astype('float32') / 255.0
X_test = X_test0.reshape(10000, 784).astype('float32') / 255.0
print(X_train.shape, X_train.dtype)
(60000, 784) float32

4) Convert the labels to one-hot encoding using keras.utils.np_utils.to_categorical().

  • Before one-hot encoding
y_train0
array([5, 0, 4, ..., 5, 6, 8], dtype=uint8)
  • After one-hot encoding
from keras.utils import np_utils

Y_train = np_utils.to_categorical(y_train0, 10)
Y_test = np_utils.to_categorical(y_test0, 10)
Y_train[:5]
array([[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
       [1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
       [0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 1.]], dtype=float32)

# Steps for building the neural network

1) Create a Sequential model object.

2) Add layers with the add method. Layers are added in order, starting from the input side. Each layer takes the number of output neurons as its first argument. The first layer must set the input size with the input_dim argument, and the activation argument sets the activation function.

3) Complete the model with the compile method. The loss argument sets the cost function, the optimizer argument sets the optimization algorithm, and the metrics argument sets the performance metrics recorded during training.

4) Train with the fit method. The epochs argument sets the number of epochs and batch_size sets the batch size. verbose controls the messages printed during training; in a Jupyter Notebook, set verbose=2 so that the progress bar is not shown.

# Building the neural network

  • Plan for the MNIST neural network

We will implement an MLP that takes a handwritten-digit image as input and outputs the conditional probabilities of the digits 0 through 9. Since the input image has a 28 x 28 resolution, the input layer has 28 x 28 = 784 neurons. The output layer has 10 neurons, one for the conditional probability of each digit 0 through 9.

The model will consist of a single hidden layer with 15 neurons.


  • Actual MNIST neural network implementation code
from keras.models import Sequential
from keras.layers.core import Dense
from keras.optimizers import SGD

np.random.seed(0)

model = Sequential()
# input layer and hidden layer
model.add(Dense(15, input_dim=784, activation="sigmoid"))
# output layer
model.add(Dense(10, activation="sigmoid"))
model.compile(optimizer=SGD(lr=0.2), loss='mean_squared_error', metrics=["accuracy"])
  • The internal structure of the model can be checked with the model_to_dot command or the summary command, or by examining the model's layers list.
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot

SVG(model_to_dot(model, show_shapes=True).create(prog='dot', format='svg'))

[figure mnist_nn_20_0: model structure diagram from model_to_dot]

model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_1 (Dense)              (None, 15)                11775     
_________________________________________________________________
dense_2 (Dense)              (None, 10)                160       
=================================================================
Total params: 11,935
Trainable params: 11,935
Non-trainable params: 0
_________________________________________________________________
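  • The Param # values can be checked by hand: a Dense layer with n_in inputs and n_out outputs has n_in x n_out weights plus n_out biases. A quick arithmetic check (added here for illustration, not part of the original notebook):

# hand-check of the Param # column
hidden_params = 784 * 15 + 15    # 11775
output_params = 15 * 10 + 10     # 160
print(hidden_params, output_params, hidden_params + output_params)  # 11775 160 11935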
  • The layers attribute can also be used to inspect the properties of each layer.
l1 = model.layers[0]
l2 = model.layers[1]

print(l1.name, type(l1), l1.output_shape, l1.activation.__name__, l1.count_params())
print(l2.name, type(l2), l2.output_shape, l2.activation.__name__, l2.count_params())
dense_1 <class 'keras.layers.core.Dense'> (None, 15) sigmoid 11775
dense_2 <class 'keras.layers.core.Dense'> (None, 10) sigmoid 160
  • Once the model is built, start training with the fit method.
%%time
hist = model.fit(X_train, Y_train,
                 epochs=10, batch_size=100,
                 validation_data=(X_test, Y_test),
                 verbose=2)
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
 - 1s - loss: 0.1019 - acc: 0.2440 - val_loss: 0.0864 - val_acc: 0.3212
Epoch 2/10
 - 1s - loss: 0.0845 - acc: 0.3921 - val_loss: 0.0821 - val_acc: 0.4409
Epoch 3/10
 - 1s - loss: 0.0796 - acc: 0.4997 - val_loss: 0.0765 - val_acc: 0.5340
Epoch 4/10
 - 1s - loss: 0.0740 - acc: 0.5620 - val_loss: 0.0707 - val_acc: 0.5852
Epoch 5/10
 - 1s - loss: 0.0682 - acc: 0.6149 - val_loss: 0.0649 - val_acc: 0.6522
Epoch 6/10
 - 1s - loss: 0.0625 - acc: 0.6759 - val_loss: 0.0594 - val_acc: 0.6998
Epoch 7/10
 - 1s - loss: 0.0576 - acc: 0.7101 - val_loss: 0.0551 - val_acc: 0.7316
Epoch 8/10
 - 1s - loss: 0.0537 - acc: 0.7325 - val_loss: 0.0516 - val_acc: 0.7485
Epoch 9/10
 - 1s - loss: 0.0505 - acc: 0.7474 - val_loss: 0.0486 - val_acc: 0.7639
Epoch 10/10
 - 1s - loss: 0.0478 - acc: 0.7608 - val_loss: 0.0461 - val_acc: 0.7759
Wall time: 7.24 s
  • After training, check the recorded history. The two plots below show the cost function and the performance metric of the model we just trained.
plt.figure(figsize=(8, 4))
plt.subplot(1, 2, 1)
plt.plot(hist.history['loss'])
plt.title("cost function of training")
plt.ylabel("cost function value")
plt.subplot(1, 2, 2)
plt.title("performance of training")
plt.ylabel("perfomance value")
plt.plot(hist.history['acc'], 'b-', label="train perfomance")
plt.plot(hist.history['val_acc'], 'r:', label="test performance")
plt.legend()
plt.tight_layout()
plt.show()

[figure mnist_nn_27_0: training cost function and accuracy plots]

# Checking the weights

The weights of a trained model can be obtained with the get_weights method. This method returns the weight values w and the bias values b used in the network.

# first layer
w1 = l1.get_weights()
w1[0].shape, w1[1].shape
((784, 15), (15,))
# second layer
w2 = l2.get_weights()
w2[0].shape, w2[1].shape
((15, 10), (10,))
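  • As a sanity check (a sketch added here, not from the original material), the extracted weights can be used to reproduce the model's forward pass by hand with numpy, since both layers use the sigmoid activation:

# sketch: manual forward pass with the extracted weights
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

h = sigmoid(X_test[:1] @ w1[0] + w1[1])   # hidden layer, 15 units
y = sigmoid(h @ w2[0] + w2[1])            # output layer, 10 units
print(np.allclose(y, model.predict(X_test[:1]), atol=1e-5))  # expected: True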

# Using the model built above

A trained model can output y values with the predict method, or, treating those outputs as discriminant values for each class, perform classification with the predict_classes method. For example, predicting the second image in the test set gives the following.

model.predict(X_test[1:2, :])
array([[0.14298885, 0.0444363 , 0.3573986 , 0.19647366, 0.03010687,
        0.14621396, 0.31485024, 0.01599035, 0.10410369, 0.00852606]],
      dtype=float32)
model.predict_classes(X_test[1:2, :], verbose=0)
array([2], dtype=int64)
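For a model whose output layer has several units, predict_classes amounts to taking the argmax over the predicted scores. A quick check (assuming standard Keras Sequential behaviour, not shown in the original):

print(np.argmax(model.predict(X_test[1:2, :]), axis=1))  # [2]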

Plotting the second image of the test set confirms that it is indeed a 2.

plt.figure(figsize=(1, 1))
plt.imshow(X_test0[1], cmap=mpl.cm.bone_r)
plt.grid(False)
plt.axis("off")
plt.show()

[figure mnist_nn_35_0: the second test image, a handwritten 2]

# Saving the model

A trained model can be saved together with its weights in "hdf5" format using the save method, and loaded again later with the load_model command.

model.save('MNIST_model.hdf5')
from keras.models import load_model

model2 = load_model('MNIST_model.hdf5')
print(model2.predict_classes(X_test[1:2, :], verbose=0))
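To confirm that the reloaded model behaves like the original, the test-set metrics of both can be compared with evaluate (a small sketch, not in the original notebook):

# compare loss and accuracy of the original and the reloaded model
print(model.evaluate(X_test, Y_test, verbose=0))
print(model2.evaluate(X_test, Y_test, verbose=0))  # should match the line above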
  2. Neural network practice using the iris data

# Step 1) Inspect the data

import pandas as pd
from sklearn.datasets import load_iris
iris = load_iris()
  • 'feature_names': ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
iris.data[:3]
array([[5.1, 3.5, 1.4, 0.2],
       [4.9, 3. , 1.4, 0.2],
       [4.7, 3.2, 1.3, 0.2]])
  • 'target_names': 'setosa' = 0, 'versicolor' = 1, 'virginica' = 2
iris.target
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
       2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
       2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])

# Step 2) Create a pandas DataFrame for easier data handling

df_d = pd.DataFrame(iris.data, columns=["sepal_length", "sepal_width",
                                        "petal_length", "petal_width"])

df_d.tail()
sepal_length sepal_width petal_length petal_width
145 6.7 3.0 5.2 2.3
146 6.3 2.5 5.0 1.9
147 6.5 3.0 5.2 2.0
148 6.2 3.4 5.4 2.3
149 5.9 3.0 5.1 1.8
df_t = pd.DataFrame(iris.target, columns = ["target"])

# Step 3) Merge 'iris.data' and 'iris.target' for the train/test split

df = pd.concat([df_d, df_t], axis=1)
df.tail(5)
sepal_length sepal_width petal_length petal_width target
145 6.7 3.0 5.2 2.3 2
146 6.3 2.5 5.0 1.9 2
147 6.5 3.0 5.2 2.0 2
148 6.2 3.4 5.4 2.3 2
149 5.9 3.0 5.1 1.8 2

# Step 4) Split into train and test data

from sklearn.model_selection import train_test_split

train, test = train_test_split(df, test_size=0.2, random_state=0)
len(train), len(test)
(120, 30)

# Step 5) Separate x data and y data

# train data
x_train = train[["sepal_length","sepal_width","petal_length","petal_width"]]
y_train = train[["target"]]
# test data
x_test = test[["sepal_length","sepal_width","petal_length","petal_width"]]
y_test = test[["target"]]

# Step 6) One-hot encode the target values

from keras.utils import np_utils

y_train = np_utils.to_categorical(y_train, 3)
y_test = np_utils.to_categorical(y_test, 3)
y_train[:5]
array([[0., 0., 1.],
       [0., 1., 0.],
       [1., 0., 0.],
       [0., 0., 1.],
       [0., 0., 1.]], dtype=float32)

# Step 7) Build the model

from keras.models import Sequential
from keras.layers.core import Dense
from keras.optimizers import SGD

np.random.seed(0)

model = Sequential()
model.add(Dense(8, input_dim=4, activation="sigmoid"))
model.add(Dense(3, activation="sigmoid"))
model.compile(optimizer=SGD(lr=0.2), loss='mean_squared_error', metrics=["accuracy"])
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot

SVG(model_to_dot(model, show_shapes=True).create(prog='dot', format='svg'))

[figure iris_nn_21_0: model structure diagram from model_to_dot]

model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_11 (Dense)             (None, 8)                 40        
_________________________________________________________________
dense_12 (Dense)             (None, 3)                 27        
=================================================================
Total params: 67
Trainable params: 67
Non-trainable params: 0
_________________________________________________________________

# Step 8) Train the model and check its performance

%%time
hist = model.fit(x_train, y_train,
                 epochs=200, batch_size=10,
                 validation_data=(x_test, y_test),
                 verbose=2)
Train on 120 samples, validate on 30 samples
Epoch 1/200
 - 0s - loss: 0.2487 - acc: 0.2250 - val_loss: 0.2402 - val_acc: 0.2000
Epoch 2/200
 - 0s - loss: 0.2322 - acc: 0.3500 - val_loss: 0.2337 - val_acc: 0.2000
Epoch 3/200
 - 0s - loss: 0.2250 - acc: 0.3667 - val_loss: 0.2295 - val_acc: 0.2000
Epoch 4/200
 - 0s - loss: 0.2206 - acc: 0.3667 - val_loss: 0.2281 - val_acc: 0.2000
Epoch 5/200
 - 0s - loss: 0.2177 - acc: 0.3667 - val_loss: 0.2264 - val_acc: 0.2000
Epoch 6/200
 - 0s - loss: 0.2146 - acc: 0.3750 - val_loss: 0.2247 - val_acc: 0.2000
Epoch 7/200
 - 0s - loss: 0.2119 - acc: 0.3917 - val_loss: 0.2209 - val_acc: 0.2000
Epoch 8/200
 - 0s - loss: 0.2087 - acc: 0.5083 - val_loss: 0.2180 - val_acc: 0.5000
Epoch 9/200
 - 0s - loss: 0.2056 - acc: 0.6667 - val_loss: 0.2141 - val_acc: 0.5667
Epoch 10/200
 - 0s - loss: 0.2024 - acc: 0.6917 - val_loss: 0.2112 - val_acc: 0.5667
Epoch 11/200
 - 0s - loss: 0.1988 - acc: 0.6917 - val_loss: 0.2073 - val_acc: 0.5667
Epoch 12/200
 - 0s - loss: 0.1957 - acc: 0.6917 - val_loss: 0.2031 - val_acc: 0.5667
Epoch 13/200
 - 0s - loss: 0.1917 - acc: 0.6917 - val_loss: 0.1999 - val_acc: 0.5667
Epoch 14/200
 - 0s - loss: 0.1883 - acc: 0.6917 - val_loss: 0.1965 - val_acc: 0.5667
Epoch 15/200
 - 0s - loss: 0.1846 - acc: 0.6917 - val_loss: 0.1934 - val_acc: 0.5667
Epoch 16/200
 - 0s - loss: 0.1815 - acc: 0.6917 - val_loss: 0.1895 - val_acc: 0.5667
Epoch 17/200
 - 0s - loss: 0.1783 - acc: 0.6917 - val_loss: 0.1867 - val_acc: 0.5667
Epoch 18/200
 - 0s - loss: 0.1745 - acc: 0.6917 - val_loss: 0.1845 - val_acc: 0.5667
Epoch 19/200
 - 0s - loss: 0.1713 - acc: 0.6917 - val_loss: 0.1809 - val_acc: 0.5667
Epoch 20/200
 - 0s - loss: 0.1679 - acc: 0.6917 - val_loss: 0.1774 - val_acc: 0.5667
Epoch 21/200
 - 0s - loss: 0.1650 - acc: 0.6917 - val_loss: 0.1738 - val_acc: 0.5667
Epoch 22/200
 - 0s - loss: 0.1613 - acc: 0.6917 - val_loss: 0.1722 - val_acc: 0.5667
Epoch 23/200
 - 0s - loss: 0.1583 - acc: 0.6917 - val_loss: 0.1686 - val_acc: 0.5667
Epoch 24/200
 - 0s - loss: 0.1556 - acc: 0.6917 - val_loss: 0.1663 - val_acc: 0.5667
Epoch 25/200
 - 0s - loss: 0.1524 - acc: 0.6917 - val_loss: 0.1649 - val_acc: 0.5667
Epoch 26/200
 - 0s - loss: 0.1496 - acc: 0.6917 - val_loss: 0.1615 - val_acc: 0.5667
Epoch 27/200
 - 0s - loss: 0.1470 - acc: 0.6917 - val_loss: 0.1596 - val_acc: 0.5667
Epoch 28/200
 - 0s - loss: 0.1445 - acc: 0.6917 - val_loss: 0.1568 - val_acc: 0.5667
Epoch 29/200
 - 0s - loss: 0.1419 - acc: 0.6917 - val_loss: 0.1528 - val_acc: 0.5667
Epoch 30/200
 - 0s - loss: 0.1398 - acc: 0.6917 - val_loss: 0.1505 - val_acc: 0.5667
Epoch 31/200
 - 0s - loss: 0.1387 - acc: 0.6917 - val_loss: 0.1499 - val_acc: 0.5667
Epoch 32/200
 - 0s - loss: 0.1363 - acc: 0.6917 - val_loss: 0.1470 - val_acc: 0.5667
Epoch 33/200
 - 0s - loss: 0.1340 - acc: 0.6917 - val_loss: 0.1439 - val_acc: 0.5667
Epoch 34/200
 - 0s - loss: 0.1327 - acc: 0.6917 - val_loss: 0.1438 - val_acc: 0.5667
Epoch 35/200
 - 0s - loss: 0.1309 - acc: 0.6917 - val_loss: 0.1433 - val_acc: 0.5667
Epoch 36/200
 - 0s - loss: 0.1291 - acc: 0.6917 - val_loss: 0.1410 - val_acc: 0.5667
Epoch 37/200
 - 0s - loss: 0.1279 - acc: 0.6917 - val_loss: 0.1390 - val_acc: 0.5667
Epoch 38/200
 - 0s - loss: 0.1263 - acc: 0.6917 - val_loss: 0.1362 - val_acc: 0.5667
Epoch 39/200
 - 0s - loss: 0.1252 - acc: 0.6917 - val_loss: 0.1339 - val_acc: 0.5667
Epoch 40/200
 - 0s - loss: 0.1235 - acc: 0.7000 - val_loss: 0.1341 - val_acc: 0.5667
Epoch 41/200
 - 0s - loss: 0.1222 - acc: 0.7000 - val_loss: 0.1324 - val_acc: 0.5667
Epoch 42/200
 - 0s - loss: 0.1210 - acc: 0.7000 - val_loss: 0.1310 - val_acc: 0.5667
Epoch 43/200
 - 0s - loss: 0.1199 - acc: 0.7083 - val_loss: 0.1284 - val_acc: 0.5667
Epoch 44/200
 - 0s - loss: 0.1191 - acc: 0.7000 - val_loss: 0.1273 - val_acc: 0.5667
Epoch 45/200
 - 0s - loss: 0.1184 - acc: 0.7250 - val_loss: 0.1270 - val_acc: 0.5667
Epoch 46/200
 - 0s - loss: 0.1175 - acc: 0.7250 - val_loss: 0.1235 - val_acc: 0.6000
Epoch 47/200
 - 0s - loss: 0.1168 - acc: 0.7917 - val_loss: 0.1264 - val_acc: 0.5667
Epoch 48/200
 - 0s - loss: 0.1150 - acc: 0.7167 - val_loss: 0.1236 - val_acc: 0.5667
Epoch 49/200
 - 0s - loss: 0.1139 - acc: 0.7083 - val_loss: 0.1195 - val_acc: 0.7000
Epoch 50/200
 - 0s - loss: 0.1132 - acc: 0.7833 - val_loss: 0.1192 - val_acc: 0.6333
Epoch 51/200
 - 0s - loss: 0.1123 - acc: 0.7583 - val_loss: 0.1178 - val_acc: 0.7000
Epoch 52/200
 - 0s - loss: 0.1114 - acc: 0.7500 - val_loss: 0.1167 - val_acc: 0.7000
Epoch 53/200
 - 0s - loss: 0.1107 - acc: 0.7917 - val_loss: 0.1171 - val_acc: 0.6333
Epoch 54/200
 - 0s - loss: 0.1099 - acc: 0.7750 - val_loss: 0.1158 - val_acc: 0.7000
Epoch 55/200
 - 0s - loss: 0.1098 - acc: 0.8083 - val_loss: 0.1148 - val_acc: 0.7000
Epoch 56/200
 - 0s - loss: 0.1084 - acc: 0.7917 - val_loss: 0.1130 - val_acc: 0.7333
Epoch 57/200
 - 0s - loss: 0.1076 - acc: 0.7917 - val_loss: 0.1132 - val_acc: 0.7333
Epoch 58/200
 - 0s - loss: 0.1065 - acc: 0.8500 - val_loss: 0.1142 - val_acc: 0.6667
Epoch 59/200
 - 0s - loss: 0.1057 - acc: 0.7750 - val_loss: 0.1081 - val_acc: 0.8333
Epoch 60/200
 - 0s - loss: 0.1051 - acc: 0.8667 - val_loss: 0.1096 - val_acc: 0.7667
Epoch 61/200
 - 0s - loss: 0.1045 - acc: 0.8417 - val_loss: 0.1094 - val_acc: 0.7667
Epoch 62/200
 - 0s - loss: 0.1045 - acc: 0.8417 - val_loss: 0.1066 - val_acc: 0.8000
Epoch 63/200
 - 0s - loss: 0.1026 - acc: 0.8667 - val_loss: 0.1081 - val_acc: 0.7667
Epoch 64/200
 - 0s - loss: 0.1020 - acc: 0.8667 - val_loss: 0.1061 - val_acc: 0.8000
Epoch 65/200
 - 0s - loss: 0.1029 - acc: 0.8833 - val_loss: 0.1092 - val_acc: 0.7333
Epoch 66/200
 - 0s - loss: 0.1010 - acc: 0.8167 - val_loss: 0.1018 - val_acc: 0.8667
Epoch 67/200
 - 0s - loss: 0.1004 - acc: 0.8917 - val_loss: 0.1056 - val_acc: 0.7667
Epoch 68/200
 - 0s - loss: 0.0996 - acc: 0.8833 - val_loss: 0.1068 - val_acc: 0.7333
Epoch 69/200
 - 0s - loss: 0.0986 - acc: 0.8833 - val_loss: 0.1055 - val_acc: 0.7667
Epoch 70/200
 - 0s - loss: 0.0980 - acc: 0.8667 - val_loss: 0.1026 - val_acc: 0.8000
Epoch 71/200
 - 0s - loss: 0.0975 - acc: 0.9083 - val_loss: 0.1029 - val_acc: 0.7667
Epoch 72/200
 - 0s - loss: 0.0964 - acc: 0.9167 - val_loss: 0.1059 - val_acc: 0.7333
Epoch 73/200
 - 0s - loss: 0.0963 - acc: 0.8417 - val_loss: 0.0998 - val_acc: 0.8333
Epoch 74/200
 - 0s - loss: 0.0952 - acc: 0.9167 - val_loss: 0.0991 - val_acc: 0.8333
Epoch 75/200
 - 0s - loss: 0.0940 - acc: 0.9000 - val_loss: 0.0961 - val_acc: 0.8667
Epoch 76/200
 - 0s - loss: 0.0934 - acc: 0.9250 - val_loss: 0.0986 - val_acc: 0.8000
Epoch 77/200
 - 0s - loss: 0.0926 - acc: 0.8917 - val_loss: 0.0959 - val_acc: 0.8667
Epoch 78/200
 - 0s - loss: 0.0927 - acc: 0.8917 - val_loss: 0.0921 - val_acc: 0.9667
Epoch 79/200
 - 0s - loss: 0.0925 - acc: 0.9250 - val_loss: 0.0967 - val_acc: 0.8333
Epoch 80/200
 - 0s - loss: 0.0919 - acc: 0.9083 - val_loss: 0.0964 - val_acc: 0.8000
Epoch 81/200
 - 0s - loss: 0.0899 - acc: 0.9083 - val_loss: 0.0905 - val_acc: 0.9667
Epoch 82/200
 - 0s - loss: 0.0901 - acc: 0.9333 - val_loss: 0.0930 - val_acc: 0.8667
Epoch 83/200
 - 0s - loss: 0.0890 - acc: 0.9583 - val_loss: 0.0976 - val_acc: 0.7667
Epoch 84/200
 - 0s - loss: 0.0880 - acc: 0.8917 - val_loss: 0.0873 - val_acc: 0.9667
Epoch 85/200
 - 0s - loss: 0.0885 - acc: 0.9250 - val_loss: 0.0864 - val_acc: 0.9667
Epoch 86/200
 - 0s - loss: 0.0871 - acc: 0.9417 - val_loss: 0.0898 - val_acc: 0.8667
Epoch 87/200
 - 0s - loss: 0.0859 - acc: 0.9417 - val_loss: 0.0875 - val_acc: 0.9333
Epoch 88/200
 - 0s - loss: 0.0854 - acc: 0.9500 - val_loss: 0.0935 - val_acc: 0.8000
Epoch 89/200
 - 0s - loss: 0.0860 - acc: 0.9167 - val_loss: 0.0859 - val_acc: 0.9333
Epoch 90/200
 - 0s - loss: 0.0842 - acc: 0.9417 - val_loss: 0.0860 - val_acc: 0.9333
Epoch 91/200
 - 0s - loss: 0.0839 - acc: 0.9333 - val_loss: 0.0862 - val_acc: 0.9333
Epoch 92/200
 - 0s - loss: 0.0837 - acc: 0.9417 - val_loss: 0.0916 - val_acc: 0.8000
Epoch 93/200
 - 0s - loss: 0.0824 - acc: 0.9250 - val_loss: 0.0826 - val_acc: 0.9667
Epoch 94/200
 - 0s - loss: 0.0817 - acc: 0.9500 - val_loss: 0.0826 - val_acc: 0.9667
Epoch 95/200
 - 0s - loss: 0.0808 - acc: 0.9333 - val_loss: 0.0858 - val_acc: 0.8667
Epoch 96/200
 - 0s - loss: 0.0805 - acc: 0.9500 - val_loss: 0.0860 - val_acc: 0.8667
Epoch 97/200
 - 0s - loss: 0.0800 - acc: 0.9417 - val_loss: 0.0832 - val_acc: 0.9333
Epoch 98/200
 - 0s - loss: 0.0803 - acc: 0.9417 - val_loss: 0.0875 - val_acc: 0.8667
Epoch 99/200
 - 0s - loss: 0.0782 - acc: 0.9250 - val_loss: 0.0795 - val_acc: 0.9667
Epoch 100/200
 - 0s - loss: 0.0784 - acc: 0.9417 - val_loss: 0.0793 - val_acc: 0.9333
Epoch 101/200
 - 0s - loss: 0.0770 - acc: 0.9583 - val_loss: 0.0825 - val_acc: 0.8667
Epoch 102/200
 - 0s - loss: 0.0772 - acc: 0.9667 - val_loss: 0.0854 - val_acc: 0.8667
Epoch 103/200
 - 0s - loss: 0.0775 - acc: 0.9333 - val_loss: 0.0772 - val_acc: 0.9667
Epoch 104/200
 - 0s - loss: 0.0753 - acc: 0.9417 - val_loss: 0.0786 - val_acc: 0.9333
Epoch 105/200
 - 0s - loss: 0.0750 - acc: 0.9417 - val_loss: 0.0786 - val_acc: 0.9333
Epoch 106/200
 - 0s - loss: 0.0740 - acc: 0.9417 - val_loss: 0.0756 - val_acc: 0.9667
Epoch 107/200
 - 0s - loss: 0.0737 - acc: 0.9500 - val_loss: 0.0736 - val_acc: 0.9667
Epoch 108/200
 - 0s - loss: 0.0728 - acc: 0.9500 - val_loss: 0.0698 - val_acc: 1.0000
Epoch 109/200
 - 0s - loss: 0.0722 - acc: 0.9500 - val_loss: 0.0681 - val_acc: 1.0000
Epoch 110/200
 - 0s - loss: 0.0721 - acc: 0.9583 - val_loss: 0.0667 - val_acc: 1.0000
Epoch 111/200
 - 0s - loss: 0.0723 - acc: 0.9583 - val_loss: 0.0683 - val_acc: 1.0000
Epoch 112/200
 - 0s - loss: 0.0703 - acc: 0.9583 - val_loss: 0.0695 - val_acc: 1.0000
Epoch 113/200
 - 0s - loss: 0.0715 - acc: 0.9583 - val_loss: 0.0769 - val_acc: 0.8667
Epoch 114/200
 - 0s - loss: 0.0717 - acc: 0.9417 - val_loss: 0.0681 - val_acc: 1.0000
Epoch 115/200
 - 0s - loss: 0.0683 - acc: 0.9500 - val_loss: 0.0747 - val_acc: 0.9000
Epoch 116/200
 - 0s - loss: 0.0683 - acc: 0.9417 - val_loss: 0.0649 - val_acc: 1.0000
Epoch 117/200
 - 0s - loss: 0.0685 - acc: 0.9417 - val_loss: 0.0722 - val_acc: 0.9333
Epoch 118/200
 - 0s - loss: 0.0675 - acc: 0.9583 - val_loss: 0.0622 - val_acc: 1.0000
Epoch 119/200
 - 0s - loss: 0.0666 - acc: 0.9667 - val_loss: 0.0721 - val_acc: 0.9333
Epoch 120/200
 - 0s - loss: 0.0664 - acc: 0.9417 - val_loss: 0.0619 - val_acc: 1.0000
Epoch 121/200
 - 0s - loss: 0.0652 - acc: 0.9583 - val_loss: 0.0667 - val_acc: 0.9667
Epoch 122/200
 - 0s - loss: 0.0671 - acc: 0.9500 - val_loss: 0.0631 - val_acc: 1.0000
Epoch 123/200
 - 0s - loss: 0.0645 - acc: 0.9583 - val_loss: 0.0583 - val_acc: 1.0000
Epoch 124/200
 - 0s - loss: 0.0647 - acc: 0.9500 - val_loss: 0.0569 - val_acc: 1.0000
Epoch 125/200
 - 0s - loss: 0.0642 - acc: 0.9583 - val_loss: 0.0592 - val_acc: 1.0000
Epoch 126/200
 - 0s - loss: 0.0622 - acc: 0.9583 - val_loss: 0.0630 - val_acc: 0.9667
Epoch 127/200
 - 0s - loss: 0.0639 - acc: 0.9583 - val_loss: 0.0638 - val_acc: 0.9667
Epoch 128/200
 - 0s - loss: 0.0620 - acc: 0.9583 - val_loss: 0.0646 - val_acc: 0.9667
Epoch 129/200
 - 0s - loss: 0.0617 - acc: 0.9500 - val_loss: 0.0537 - val_acc: 1.0000
Epoch 130/200
 - 0s - loss: 0.0612 - acc: 0.9583 - val_loss: 0.0610 - val_acc: 0.9667
Epoch 131/200
 - 0s - loss: 0.0606 - acc: 0.9583 - val_loss: 0.0652 - val_acc: 0.9333
Epoch 132/200
 - 0s - loss: 0.0602 - acc: 0.9500 - val_loss: 0.0552 - val_acc: 1.0000
Epoch 133/200
 - 0s - loss: 0.0597 - acc: 0.9583 - val_loss: 0.0543 - val_acc: 1.0000
Epoch 134/200
 - 0s - loss: 0.0603 - acc: 0.9750 - val_loss: 0.0601 - val_acc: 0.9667
Epoch 135/200
 - 0s - loss: 0.0582 - acc: 0.9583 - val_loss: 0.0547 - val_acc: 1.0000
Epoch 136/200
 - 0s - loss: 0.0577 - acc: 0.9583 - val_loss: 0.0530 - val_acc: 1.0000
Epoch 137/200
 - 0s - loss: 0.0584 - acc: 0.9500 - val_loss: 0.0555 - val_acc: 1.0000
Epoch 138/200
 - 0s - loss: 0.0582 - acc: 0.9583 - val_loss: 0.0536 - val_acc: 1.0000
Epoch 139/200
 - 0s - loss: 0.0571 - acc: 0.9583 - val_loss: 0.0605 - val_acc: 0.9667
Epoch 140/200
 - 0s - loss: 0.0565 - acc: 0.9667 - val_loss: 0.0557 - val_acc: 1.0000
Epoch 141/200
 - 0s - loss: 0.0561 - acc: 0.9500 - val_loss: 0.0617 - val_acc: 0.9333
Epoch 142/200
 - 0s - loss: 0.0562 - acc: 0.9500 - val_loss: 0.0527 - val_acc: 1.0000
Epoch 143/200
 - 0s - loss: 0.0556 - acc: 0.9583 - val_loss: 0.0516 - val_acc: 1.0000
Epoch 144/200
 - 0s - loss: 0.0561 - acc: 0.9583 - val_loss: 0.0524 - val_acc: 1.0000
Epoch 145/200
 - 0s - loss: 0.0543 - acc: 0.9667 - val_loss: 0.0529 - val_acc: 1.0000
Epoch 146/200
 - 0s - loss: 0.0542 - acc: 0.9583 - val_loss: 0.0490 - val_acc: 1.0000
Epoch 147/200
 - 0s - loss: 0.0532 - acc: 0.9667 - val_loss: 0.0507 - val_acc: 1.0000
Epoch 148/200
 - 0s - loss: 0.0528 - acc: 0.9750 - val_loss: 0.0585 - val_acc: 0.9333
Epoch 149/200
 - 0s - loss: 0.0542 - acc: 0.9417 - val_loss: 0.0511 - val_acc: 1.0000
Epoch 150/200
 - 0s - loss: 0.0523 - acc: 0.9583 - val_loss: 0.0512 - val_acc: 1.0000
Epoch 151/200
 - 0s - loss: 0.0520 - acc: 0.9667 - val_loss: 0.0476 - val_acc: 1.0000
Epoch 152/200
 - 0s - loss: 0.0519 - acc: 0.9750 - val_loss: 0.0534 - val_acc: 0.9667
Epoch 153/200
 - 0s - loss: 0.0514 - acc: 0.9667 - val_loss: 0.0471 - val_acc: 1.0000
Epoch 154/200
 - 0s - loss: 0.0509 - acc: 0.9667 - val_loss: 0.0552 - val_acc: 0.9667
Epoch 155/200
 - 0s - loss: 0.0507 - acc: 0.9583 - val_loss: 0.0527 - val_acc: 0.9667
Epoch 156/200
 - 0s - loss: 0.0501 - acc: 0.9583 - val_loss: 0.0443 - val_acc: 1.0000
Epoch 157/200
 - 0s - loss: 0.0494 - acc: 0.9583 - val_loss: 0.0472 - val_acc: 1.0000
Epoch 158/200
 - 0s - loss: 0.0489 - acc: 0.9583 - val_loss: 0.0444 - val_acc: 1.0000
Epoch 159/200
 - 0s - loss: 0.0493 - acc: 0.9583 - val_loss: 0.0448 - val_acc: 1.0000
Epoch 160/200
 - 0s - loss: 0.0484 - acc: 0.9750 - val_loss: 0.0509 - val_acc: 0.9667
Epoch 161/200
 - 0s - loss: 0.0497 - acc: 0.9500 - val_loss: 0.0442 - val_acc: 1.0000
Epoch 162/200
 - 0s - loss: 0.0479 - acc: 0.9583 - val_loss: 0.0420 - val_acc: 1.0000
Epoch 163/200
 - 0s - loss: 0.0471 - acc: 0.9667 - val_loss: 0.0449 - val_acc: 1.0000
Epoch 164/200
 - 0s - loss: 0.0481 - acc: 0.9750 - val_loss: 0.0508 - val_acc: 0.9667
Epoch 165/200
 - 0s - loss: 0.0481 - acc: 0.9583 - val_loss: 0.0474 - val_acc: 0.9667
Epoch 166/200
 - 0s - loss: 0.0468 - acc: 0.9667 - val_loss: 0.0410 - val_acc: 1.0000
Epoch 167/200
 - 0s - loss: 0.0460 - acc: 0.9583 - val_loss: 0.0389 - val_acc: 1.0000
Epoch 168/200
 - 0s - loss: 0.0463 - acc: 0.9500 - val_loss: 0.0407 - val_acc: 1.0000
Epoch 169/200
 - 0s - loss: 0.0462 - acc: 0.9583 - val_loss: 0.0375 - val_acc: 1.0000
Epoch 170/200
 - 0s - loss: 0.0452 - acc: 0.9583 - val_loss: 0.0465 - val_acc: 0.9667
Epoch 171/200
 - 0s - loss: 0.0451 - acc: 0.9833 - val_loss: 0.0463 - val_acc: 0.9667
Epoch 172/200
 - 0s - loss: 0.0447 - acc: 0.9667 - val_loss: 0.0364 - val_acc: 1.0000
Epoch 173/200
 - 0s - loss: 0.0450 - acc: 0.9667 - val_loss: 0.0381 - val_acc: 1.0000
Epoch 174/200
 - 0s - loss: 0.0448 - acc: 0.9583 - val_loss: 0.0402 - val_acc: 1.0000
Epoch 175/200
 - 0s - loss: 0.0456 - acc: 0.9583 - val_loss: 0.0391 - val_acc: 1.0000
Epoch 176/200
 - 0s - loss: 0.0446 - acc: 0.9667 - val_loss: 0.0369 - val_acc: 1.0000
Epoch 177/200
 - 0s - loss: 0.0434 - acc: 0.9667 - val_loss: 0.0352 - val_acc: 1.0000
Epoch 178/200
 - 0s - loss: 0.0435 - acc: 0.9667 - val_loss: 0.0396 - val_acc: 1.0000
Epoch 179/200
 - 0s - loss: 0.0421 - acc: 0.9667 - val_loss: 0.0335 - val_acc: 1.0000
Epoch 180/200
 - 0s - loss: 0.0427 - acc: 0.9667 - val_loss: 0.0362 - val_acc: 1.0000
Epoch 181/200
 - 0s - loss: 0.0428 - acc: 0.9667 - val_loss: 0.0353 - val_acc: 1.0000
Epoch 182/200
 - 0s - loss: 0.0414 - acc: 0.9667 - val_loss: 0.0380 - val_acc: 1.0000
Epoch 183/200
 - 0s - loss: 0.0422 - acc: 0.9583 - val_loss: 0.0356 - val_acc: 1.0000
Epoch 184/200
 - 0s - loss: 0.0419 - acc: 0.9750 - val_loss: 0.0380 - val_acc: 1.0000
Epoch 185/200
 - 0s - loss: 0.0406 - acc: 0.9583 - val_loss: 0.0355 - val_acc: 1.0000
Epoch 186/200
 - 0s - loss: 0.0406 - acc: 0.9583 - val_loss: 0.0336 - val_acc: 1.0000
Epoch 187/200
 - 0s - loss: 0.0401 - acc: 0.9750 - val_loss: 0.0316 - val_acc: 1.0000
Epoch 188/200
 - 0s - loss: 0.0399 - acc: 0.9500 - val_loss: 0.0344 - val_acc: 1.0000
Epoch 189/200
 - 0s - loss: 0.0400 - acc: 0.9583 - val_loss: 0.0349 - val_acc: 1.0000
Epoch 190/200
 - 0s - loss: 0.0398 - acc: 0.9667 - val_loss: 0.0318 - val_acc: 1.0000
Epoch 191/200
 - 0s - loss: 0.0409 - acc: 0.9667 - val_loss: 0.0396 - val_acc: 1.0000
Epoch 192/200
 - 0s - loss: 0.0396 - acc: 0.9667 - val_loss: 0.0371 - val_acc: 1.0000
Epoch 193/200
 - 0s - loss: 0.0391 - acc: 0.9667 - val_loss: 0.0320 - val_acc: 1.0000
Epoch 194/200
 - 0s - loss: 0.0386 - acc: 0.9667 - val_loss: 0.0387 - val_acc: 1.0000
Epoch 195/200
 - 0s - loss: 0.0395 - acc: 0.9583 - val_loss: 0.0336 - val_acc: 1.0000
Epoch 196/200
 - 0s - loss: 0.0396 - acc: 0.9667 - val_loss: 0.0324 - val_acc: 1.0000
Epoch 197/200
 - 0s - loss: 0.0390 - acc: 0.9500 - val_loss: 0.0355 - val_acc: 1.0000
Epoch 198/200
 - 0s - loss: 0.0383 - acc: 0.9583 - val_loss: 0.0347 - val_acc: 1.0000
Epoch 199/200
 - 0s - loss: 0.0387 - acc: 0.9583 - val_loss: 0.0283 - val_acc: 0.9667
Epoch 200/200
 - 0s - loss: 0.0383 - acc: 0.9667 - val_loss: 0.0284 - val_acc: 1.0000
Wall time: 2.23 s
%matplotlib inline
%config InlineBackend.figure_formats = {'png', 'retina'}
import matplotlib as mpl
import matplotlib.pyplot as plt

plt.figure(figsize=(8, 4))
plt.subplot(1, 2, 1)
plt.plot(hist.history['loss'])
plt.title("cost function of training")
plt.ylabel("cost function value")
plt.subplot(1, 2, 2)
plt.title("performance of training")
plt.ylabel("perfomance value")
plt.plot(hist.history['acc'], 'b-', label="train perfomance")
plt.plot(hist.history['val_acc'], 'r:', label="test performance")
plt.legend()
plt.tight_layout()
plt.show()

[figure iris_nn_25_0: training cost function and accuracy plots]
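As a final check (a sketch, not part of the original notes), the trained iris model can be evaluated on the held-out test set and used to predict classes:

# evaluate the iris model and predict classes for the test data
loss, acc = model.evaluate(x_test.values, y_test, verbose=0)
print(loss, acc)
print(model.predict_classes(x_test.values, verbose=0)[:5])  # predicted classes
print(np.argmax(y_test[:5], axis=1))                        # true classes for comparison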