# 2. Why Normalize and Denormalize?

## 1. Benefits of Normalization

(1) Different features often have different units and scales; normalization removes the effect of those scale differences so no single feature dominates.

(2) It speeds up training, since gradient-based optimizers converge faster when features are on comparable scales.
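The two points above can be seen in a minimal sketch (the feature values below are illustrative, not from the article): min-max scaling maps each column independently to [0, 1] via x' = (x - min) / (max - min).

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Two features on very different scales, e.g. one in the tens
# and one in the thousands (illustrative values).
X = np.array([[10.0, 1000.0],
              [20.0, 3000.0],
              [40.0, 5000.0]])

scaler = MinMaxScaler(feature_range=(0, 1))
X_norm = scaler.fit_transform(X)

# Each column is scaled independently: x' = (x - min) / (max - min)
print(X_norm)
# [[0.     0.   ]
#  [0.3333 0.5  ]
#  [1.     1.   ]]  (second row shown rounded)
```

After scaling, both columns live in [0, 1], so their magnitudes no longer differ by two orders.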

# 4. Hands-On Code

## 1. Import the Required Libraries

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow import keras
from tensorflow.keras.callbacks import EarlyStopping
```

## 2. Build the Sample Set

```python
# Build the sample set
n = 1000
input_dim = 5
time_steps = 10
data0 = np.random.rand(n, input_dim + 1) * 3

# Create samples with a sliding window of length time_steps
data = []
for i in range(data0.shape[0] - time_steps):
    data.append(data0[i:i + time_steps, :])
data = np.array(data)

# Split into training and test sets
ratio = 0.65
k = int(ratio * data.shape[0])
train = data[:k]
test = data[k:]
```
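To see what the windowing loop produces, here is the same logic on a tiny array (smaller numbers chosen so the shapes are easy to check by hand; not part of the original code):

```python
import numpy as np

# Miniature version of the sliding-window construction above
n, input_dim, time_steps = 8, 2, 3
data0 = np.arange(n * (input_dim + 1)).reshape(n, input_dim + 1)

data = np.array([data0[i:i + time_steps] for i in range(n - time_steps)])

# n - time_steps windows, each with time_steps rows and input_dim+1 columns
print(data.shape)  # (5, 3, 3)
```

With the article's values (n = 1000, time_steps = 10, input_dim = 5), `data` would have shape (990, 10, 6).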

## 3. Normalize the Data

```python
# Normalize the data
scaler = MinMaxScaler(feature_range=(0, 1))
# Building the time windows drops time_steps rows, so add them back
# when selecting the raw rows that correspond to the training set.
scaler.fit(data0[:k + time_steps])  # fit on the training portion only

train_norm = np.zeros(train.shape)
test_norm = np.zeros(test.shape)
for t in range(time_steps):
    train_norm[:, t, :] = scaler.transform(train[:, t, :])
    test_norm[:, t, :] = scaler.transform(test[:, t, :])
```
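The per-time-step loop can also be written without a loop: flatten the 3-D array to 2-D, transform once, and reshape back. A sketch with synthetic data (the shapes are assumptions for illustration) showing the two approaches agree:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(1)
train = rng.random((20, 10, 6)) * 3  # (samples, time_steps, features)

scaler = MinMaxScaler().fit(train.reshape(-1, train.shape[2]))

# Loop version, as in the article
loop_norm = np.zeros_like(train)
for t in range(train.shape[1]):
    loop_norm[:, t, :] = scaler.transform(train[:, t, :])

# Vectorized version: flatten, transform, reshape back
flat_norm = scaler.transform(train.reshape(-1, 6)).reshape(train.shape)

assert np.allclose(loop_norm, flat_norm)
```

Both apply the same per-column min-max mapping; the vectorized form is just one `transform` call.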

## 4. Train the Model and Predict

```python
# Split into X and Y
X_train = train_norm[:, :, :-1]
X_test = test_norm[:, :, :-1]
Y_train = train_norm[:, -1, -1]  # Y does not need the time-step expansion
Y_test = test_norm[:, -1, -1]

# Build the LSTM model
model = keras.Sequential()
model.add(keras.layers.LSTM(units=128, input_shape=(time_steps, input_dim)))
model.add(keras.layers.Dense(1))
model.compile(optimizer='adam', loss='mse')

# Train the model with early stopping on the training loss
monitor = EarlyStopping(monitor='loss', patience=30)
history = model.fit(X_train, Y_train, callbacks=[monitor], epochs=10)

# Predict
predict = model.predict(X_test)
```

## 5. Denormalize the Predictions

```python
# Denormalize: invert the min-max scaling for the target column
max_standard = scaler.data_max_[-1]
min_standard = scaler.data_min_[-1]

real_predict = predict * (max_standard - min_standard) + min_standard
real_y = Y_test * (max_standard - min_standard) + min_standard
```
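The manual formula above is exactly the inverse of the fitted scaler for the last column. A self-contained sketch with its own synthetic data (not the article's variables) confirming it matches `scaler.inverse_transform`:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
raw = rng.random((100, 6)) * 3  # synthetic stand-in for data0

scaler = MinMaxScaler(feature_range=(0, 1)).fit(raw)
norm = scaler.transform(raw)

# Manual denormalization of the last column, as in the article
manual = (norm[:, -1] * (scaler.data_max_[-1] - scaler.data_min_[-1])
          + scaler.data_min_[-1])

# Built-in inverse on the full matrix, then take the last column
full = scaler.inverse_transform(norm)[:, -1]

assert np.allclose(manual, full)
assert np.allclose(manual, raw[:, -1])  # recovers the original values
```

The manual form is convenient here because the model only predicts the last column, while `inverse_transform` expects all columns.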

THE END