Variable Distribution Problems - Fixing Skewed Variables
2021-08-08
Data_Preprocessing_TIL(20210808)
[Study Material]
Notes compiled while studying the Fast Campus online course “파이썬을 활용한 데이터 전처리 Level UP 올인원 패키지 Online.”
URL : https://fastcampus.co.kr/data_online_preprocess
[What I Learned]
- What is the skewed-variable problem?
The distribution best suited to modeling is the normal distribution, but in practice many variables are skewed in one direction.
In a skewed variable, the values on the far side of the skew (the tail) can act like outliers, so the skew should be removed.
- How to detect skew: skewness
Skewness is a statistic that measures the asymmetry of a distribution: a positive value means a long right tail, a negative value a long left tail, and a value near zero means roughly symmetric.
As a rule of thumb, a distribution is considered skewed when the absolute skewness is 1.5 or higher.
Skewness can be measured with scipy's scipy.stats module, which provides a wide range of probability and statistics functions, including:
- scipy.stats.mode : returns the mode (the most frequent value)
- scipy.stats.skew : returns the skewness
- scipy.stats.kurtosis : returns the kurtosis
- How to fix skew
The basic idea behind fixing skew is to shrink the differences between values.
Common transforms for this include the log and square-root transforms; this post shifts each variable so its minimum becomes 1 and then applies a base-10 log.
- Practice
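To see how this shrinks the gaps between values, here is a minimal sketch on synthetic data (not from the course) using the same shift-then-log recipe as the practice code below:

```python
import numpy as np
from scipy import stats

# Synthetic heavily right-skewed sample.
rng = np.random.default_rng(42)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

# Shift so the minimum becomes 1 (log10(1) = 0), then log-transform.
x_t = np.log10(x - x.min() + 1)

print(stats.skew(x))    # large positive skew
print(stats.skew(x_t))  # much smaller after the transform
```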
import os
import pandas as pd
os.chdir(r"C:/Users/user/Desktop/aa/5. 머신러닝 모델의 성능 향상을 위한 전처리\데이터")
df = pd.read_csv("Sonar_Mines_Rocks.csv")
df
Band1 | Band2 | Band3 | Band4 | Band5 | Band6 | Band7 | Band8 | Band9 | Band10 | ... | Band52 | Band53 | Band54 | Band55 | Band56 | Band57 | Band58 | Band59 | Band60 | Y | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0.020 | 0.037 | 0.043 | 0.021 | 0.095 | 0.099 | 0.154 | 0.160 | 0.311 | 0.211 | ... | 0.003 | 0.006 | 0.016 | 0.007 | 0.017 | 0.018 | 0.008 | 0.009 | 0.003 | R |
1 | 0.045 | 0.052 | 0.084 | 0.069 | 0.118 | 0.258 | 0.216 | 0.348 | 0.334 | 0.287 | ... | 0.008 | 0.009 | 0.005 | 0.009 | 0.019 | 0.014 | 0.005 | 0.005 | 0.004 | R |
2 | 0.026 | 0.058 | 0.110 | 0.108 | 0.097 | 0.228 | 0.243 | 0.377 | 0.560 | 0.619 | ... | 0.023 | 0.017 | 0.010 | 0.018 | 0.024 | 0.032 | 0.016 | 0.010 | 0.008 | R |
3 | 0.010 | 0.017 | 0.062 | 0.020 | 0.020 | 0.037 | 0.110 | 0.128 | 0.060 | 0.126 | ... | 0.012 | 0.004 | 0.015 | 0.008 | 0.007 | 0.005 | 0.004 | 0.004 | 0.012 | R |
4 | 0.076 | 0.067 | 0.048 | 0.039 | 0.059 | 0.065 | 0.121 | 0.247 | 0.356 | 0.446 | ... | 0.003 | 0.005 | 0.010 | 0.011 | 0.002 | 0.007 | 0.005 | 0.011 | 0.009 | R |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
203 | 0.019 | 0.035 | 0.017 | 0.018 | 0.039 | 0.163 | 0.203 | 0.169 | 0.233 | 0.268 | ... | 0.012 | 0.010 | 0.020 | 0.003 | 0.010 | 0.006 | 0.012 | 0.019 | 0.016 | M |
204 | 0.032 | 0.010 | 0.030 | 0.056 | 0.076 | 0.096 | 0.099 | 0.102 | 0.103 | 0.215 | ... | 0.006 | 0.009 | 0.014 | 0.006 | 0.006 | 0.003 | 0.003 | 0.006 | 0.007 | M |
205 | 0.052 | 0.044 | 0.018 | 0.029 | 0.035 | 0.117 | 0.126 | 0.118 | 0.126 | 0.253 | ... | 0.016 | 0.003 | 0.005 | 0.006 | 0.009 | 0.014 | 0.014 | 0.008 | 0.003 | M |
206 | 0.030 | 0.035 | 0.049 | 0.061 | 0.017 | 0.135 | 0.146 | 0.112 | 0.194 | 0.235 | ... | 0.009 | 0.005 | 0.013 | 0.004 | 0.004 | 0.003 | 0.008 | 0.004 | 0.005 | M |
207 | 0.026 | 0.036 | 0.014 | 0.027 | 0.021 | 0.034 | 0.066 | 0.140 | 0.184 | 0.235 | ... | 0.015 | 0.013 | 0.005 | 0.004 | 0.006 | 0.004 | 0.004 | 0.006 | 0.012 | M |
208 rows × 61 columns
# Separate the features and the label
X = df.drop('Y', axis = 1)
Y = df['Y']
# Split into training and test data
from sklearn.model_selection import train_test_split
Train_X, Test_X, Train_Y, Test_Y = train_test_split(X, Y)
# Check skewness => Band4 has the largest skew => let's see what it looks like
Train_X.skew()
Band1 2.228868
Band2 2.260476
Band3 2.951125
Band4 3.788544
Band5 2.107194
Band6 1.288695
Band7 1.057396
Band8 1.329445
Band9 1.334646
Band10 1.352715
Band11 1.059416
Band12 0.614725
Band13 0.773976
Band14 1.023539
Band15 0.660078
Band16 0.605930
Band17 0.583182
Band18 0.486804
Band19 0.263132
Band20 -0.067000
Band21 -0.225531
Band22 -0.340840
Band23 -0.582071
Band24 -0.667840
Band25 -0.720573
Band26 -0.662067
Band27 -0.642183
Band28 -0.606825
Band29 -0.448073
Band30 -0.048007
Band31 0.279177
Band32 0.329666
Band33 0.526372
Band34 0.645258
Band35 0.609381
Band36 0.664764
Band37 0.744208
Band38 0.967784
Band39 0.912742
Band40 0.849675
Band41 0.737968
Band42 0.756553
Band43 1.069909
Band44 1.387088
Band45 1.460144
Band46 1.791579
Band47 1.893334
Band48 1.241043
Band49 1.165043
Band50 1.442659
Band51 3.042810
Band52 2.242982
Band53 0.821256
Band54 1.016587
Band55 1.437494
Band56 1.054704
Band57 1.460105
Band58 1.930916
Band59 1.668129
Band60 3.241471
dtype: float64
%matplotlib inline
df['Band4'].hist()
(histogram of Band4: the distribution is heavily right-skewed)
Build a model so we can compare performance before and after removing the skew
# Encode the labels as numbers
Train_Y.replace({"M":-1, "R":1}, inplace = True)
Test_Y.replace({"M":-1, "R":1}, inplace = True)
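Calling replace(..., inplace=True) on a Series that came out of train_test_split can itself trigger chained-assignment warnings in pandas. A warning-free alternative (a sketch, not the course's code) is to build a new Series with map, using the same -1/+1 encoding:

```python
import pandas as pd

y = pd.Series(["M", "R", "M", "R"])

# map() returns a brand-new Series instead of mutating a possible view.
y_encoded = y.map({"M": -1, "R": 1})
print(y_encoded.tolist())  # [-1, 1, -1, 1]
```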
# Model on the original data
from sklearn.metrics import f1_score
from sklearn.neural_network import MLPClassifier as MLP
model = MLP(random_state = 153, max_iter = 1000).fit(Train_X, Train_Y)
pred_Y = model.predict(Test_X)
score = f1_score(Test_Y, pred_Y)
print(score)
0.7659574468085106
# Find skewed variables based on their skewness
import numpy as np
# Keep only the columns whose absolute skewness exceeds 1.5
biased_variables = Train_X.columns[Train_X.skew().abs() > 1.5]
biased_variables
Index(['Band1', 'Band2', 'Band3', 'Band4', 'Band5', 'Band8', 'Band9', 'Band46',
'Band47', 'Band50', 'Band51', 'Band55', 'Band56', 'Band57', 'Band58',
'Band59', 'Band60'],
dtype='object')
# Remove the skew: shift each variable so its minimum becomes 1
Train_X[biased_variables] = Train_X[biased_variables] - Train_X[biased_variables].min() + 1
Train_X[biased_variables]
C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\frame.py:3191: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self[k1] = value[k2]
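The SettingWithCopyWarning above appears because the split DataFrames may be views of `df`. One common way to avoid it (a minimal sketch with a hypothetical toy DataFrame, not the course's data) is to take explicit copies right after the split:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({"a": [1.0, 2.0, 3.0, 4.0], "y": [0, 1, 0, 1]})
X, y = df.drop("y", axis=1), df["y"]
Tr_X, Te_X, Tr_y, Te_y = train_test_split(X, y, random_state=0)

# Explicit copies make later column assignments unambiguous,
# so pandas no longer warns about writing to a view.
Tr_X, Te_X = Tr_X.copy(), Te_X.copy()
Tr_X["a"] = Tr_X["a"] - Tr_X["a"].min() + 1
print(Tr_X["a"].min())  # 1.0
```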
Band1 | Band2 | Band3 | Band4 | Band5 | Band8 | Band9 | Band46 | Band47 | Band50 | Band51 | Band55 | Band56 | Band57 | Band58 | Band59 | Band60 | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
115 | 1.039 | 1.043 | 1.043 | 1.078 | 1.035 | 1.142 | 1.074 | 1.037 | 1.080 | 1.006 | 1.022 | 1.004 | 1.016 | 1.014 | 1.008 | 1.025 | 1.019 |
44 | 1.024 | 1.044 | 1.037 | 1.018 | 1.125 | 1.204 | 1.077 | 1.451 | 1.310 | 1.021 | 1.021 | 1.012 | 1.020 | 1.002 | 1.020 | 1.026 | 1.017 |
144 | 1.028 | 1.068 | 1.097 | 1.096 | 1.073 | 1.071 | 1.168 | 1.529 | 1.329 | 1.033 | 1.034 | 1.008 | 1.005 | 1.020 | 1.013 | 1.012 | 1.001 |
99 | 1.018 | 1.041 | 1.053 | 1.072 | 1.055 | 1.261 | 1.113 | 1.175 | 1.084 | 1.015 | 1.008 | 1.028 | 1.009 | 1.024 | 1.022 | 1.019 | 1.009 |
185 | 1.032 | 1.061 | 1.036 | 1.020 | 1.037 | 1.175 | 1.257 | 1.190 | 1.094 | 1.014 | 1.033 | 1.009 | 1.004 | 1.006 | 1.005 | 1.003 | 1.005 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
52 | 1.007 | 1.004 | 1.006 | 1.017 | 1.052 | 1.062 | 1.050 | 1.014 | 1.047 | 1.004 | 1.002 | 1.010 | 1.005 | 1.003 | 1.006 | 1.006 | 1.002 |
83 | 1.020 | 1.033 | 1.037 | 1.018 | 1.101 | 1.091 | 1.107 | 1.170 | 1.183 | 1.022 | 1.010 | 1.007 | 1.009 | 1.003 | 1.003 | 1.001 | 1.006 |
71 | 1.002 | 1.007 | 1.007 | 1.033 | 1.046 | 1.093 | 1.116 | 1.096 | 1.038 | 1.006 | 1.008 | 1.003 | 1.000 | 1.002 | 1.005 | 1.002 | 1.001 |
155 | 1.019 | 1.012 | 1.000 | 1.039 | 1.064 | 1.111 | 1.159 | 1.076 | 1.073 | 1.013 | 1.017 | 1.004 | 1.003 | 1.002 | 1.004 | 1.005 | 1.001 |
63 | 1.005 | 1.009 | 1.000 | 1.000 | 1.013 | 1.085 | 1.076 | 1.069 | 1.060 | 1.015 | 1.003 | 1.003 | 1.002 | 1.003 | 1.003 | 1.005 | 1.002 |
156 rows × 17 columns
Train_X[biased_variables] = np.log10(Train_X[biased_variables])
Train_X[biased_variables]
Band1 | Band2 | Band3 | Band4 | Band5 | Band8 | Band9 | Band46 | Band47 | Band50 | Band51 | Band55 | Band56 | Band57 | Band58 | Band59 | Band60 | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
115 | 0.016616 | 0.018284 | 0.018284 | 0.032619 | 0.014940 | 0.057666 | 0.031004 | 0.015779 | 0.033424 | 0.002598 | 0.009451 | 0.001734 | 0.006894 | 0.006038 | 0.003461 | 0.010724 | 0.008174 |
44 | 0.010300 | 0.018700 | 0.015779 | 0.007748 | 0.051153 | 0.080626 | 0.032216 | 0.161667 | 0.117271 | 0.009026 | 0.009026 | 0.005181 | 0.008600 | 0.000868 | 0.008600 | 0.011147 | 0.007321 |
144 | 0.011993 | 0.028571 | 0.040207 | 0.039811 | 0.030600 | 0.029789 | 0.067443 | 0.184407 | 0.123525 | 0.014100 | 0.014521 | 0.003461 | 0.002166 | 0.008600 | 0.005609 | 0.005181 | 0.000434 |
99 | 0.007748 | 0.017451 | 0.022428 | 0.030195 | 0.023252 | 0.100715 | 0.046495 | 0.070038 | 0.035029 | 0.006466 | 0.003461 | 0.011993 | 0.003891 | 0.010300 | 0.009451 | 0.008174 | 0.003891 |
185 | 0.013680 | 0.025715 | 0.015360 | 0.008600 | 0.015779 | 0.070038 | 0.099335 | 0.075547 | 0.039017 | 0.006038 | 0.014100 | 0.003891 | 0.001734 | 0.002598 | 0.002166 | 0.001301 | 0.002166 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
52 | 0.003029 | 0.001734 | 0.002598 | 0.007321 | 0.022016 | 0.026125 | 0.021189 | 0.006038 | 0.019947 | 0.001734 | 0.000868 | 0.004321 | 0.002166 | 0.001301 | 0.002598 | 0.002598 | 0.000868 |
83 | 0.008600 | 0.014100 | 0.015779 | 0.007748 | 0.041787 | 0.037825 | 0.044148 | 0.068186 | 0.072985 | 0.009451 | 0.004321 | 0.003029 | 0.003891 | 0.001301 | 0.001301 | 0.000434 | 0.002598 |
71 | 0.000868 | 0.003029 | 0.003029 | 0.014100 | 0.019532 | 0.038620 | 0.047664 | 0.039811 | 0.016197 | 0.002598 | 0.003461 | 0.001301 | 0.000000 | 0.000868 | 0.002166 | 0.000868 | 0.000434 |
155 | 0.008174 | 0.005181 | 0.000000 | 0.016616 | 0.026942 | 0.045714 | 0.064083 | 0.031812 | 0.030600 | 0.005609 | 0.007321 | 0.001734 | 0.001301 | 0.000868 | 0.001734 | 0.002166 | 0.000434 |
63 | 0.002166 | 0.003891 | 0.000000 | 0.000000 | 0.005609 | 0.035430 | 0.031812 | 0.028978 | 0.025306 | 0.006466 | 0.001301 | 0.001301 | 0.000868 | 0.001301 | 0.001301 | 0.002166 | 0.000868 |
156 rows × 17 columns
# Evaluate the model after removing the skew
model = MLP(random_state = 153, max_iter = 1000).fit(Train_X, Train_Y)
# Apply the same preprocessing to the test data
Test_X[biased_variables] = Test_X[biased_variables] - Test_X[biased_variables].min() + 1
Test_X[biased_variables] = Test_X[biased_variables].apply(np.log10)  # base-10 log, matching the training transform
pred_Y = model.predict(Test_X)
score = f1_score(Test_Y, pred_Y)
print(score)
0.8837209302325582
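One caveat about the test-set preprocessing above: it shifts by the test set's own minimum, so the test data does not pass through exactly the same function as the training data. A stricter version (a sketch with hypothetical toy data) saves the training minimum and reuses it for the test split:

```python
import numpy as np
import pandas as pd

train = pd.DataFrame({"b": [0.2, 0.5, 4.0, 9.0]})
test = pd.DataFrame({"b": [0.3, 7.0]})

# Fit the shift on the training data only, then apply the identical
# transform to the test data to avoid train/test inconsistency.
train_min = train["b"].min()
train["b"] = np.log10(train["b"] - train_min + 1)
test["b"] = np.log10(test["b"] - train_min + 1)
print(train["b"].min())  # 0.0, since the training minimum maps to log10(1)
```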