Reinforcement Learning (2): Value-Based Learning and DQN
1. Introduction to DQN
Deep Q Network (DQN)
- Goal: maximize the cumulative reward (the return $U$)
- Approximate the optimal action-value function with a neural network:
$$Q(s,a;W) \approx Q^*(s,a)$$
- The Q-function yields the optimal (greedy) action, as sketched after this list:
$$a_t = \arg\max_a Q(s_t, a; W)$$
- The state-transition function $p$ gives the next state:
$$s_{t+1} \sim p(\cdot \mid s_t, a_t)$$
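To make the bullets above concrete, here is a minimal sketch of a Q-network and greedy action selection in TensorFlow. It mirrors the network shape and one-hot action encoding used by the CartPole example in Section 3; `q_network` and `greedy_action` are illustrative names for this sketch, not part of any library.

```python
import numpy as np
import tensorflow as tf

# A small network approximating Q(s, a; W): the input is the state
# concatenated with a one-hot action encoding (4 state dims + 2 actions
# for CartPole), and the output is a single scalar Q-value.
q_network = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="sigmoid"),
    tf.keras.layers.Dense(64, activation="sigmoid"),
    tf.keras.layers.Dense(1),
])
q_network.build(input_shape=[None, 6])

def greedy_action(state, n_actions=2):
    """a_t = argmax_a Q(s_t, a; W): score every action, pick the best."""
    state = list(state)
    # One (state, one-hot action) input row per candidate action.
    candidates = np.array(
        [state + [1 if j == i else 0 for j in range(n_actions)]
         for i in range(n_actions)],
        dtype=np.float32,
    )
    q_values = q_network(candidates)[:, 0]  # shape: (n_actions,)
    return int(tf.argmax(q_values))
```

With an untrained network the argmax is arbitrary; the TD training described in Sections 2 and 3 is what makes it meaningful.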
2. Introduction to the TD Algorithm
Temporal Difference (TD) algorithm
- Observe the state and action at time $t$:
$$S_t = s_t;\quad A_t = a_t$$
- Compute the value of the Q-function at time $t$:
$$q_t = Q(s_t, a_t; W_t)$$
- Compute the gradient at time $t$:
$$d_t = \left.\frac{\partial Q(s_t, a_t; W)}{\partial W}\right|_{W = W_t}$$
- Obtain the reward at time $t$ and the state at time $t+1$:
$$s_{t+1};\quad r_t$$
- Compute the TD target:
$$y_t = r_t + \gamma \max_a Q(s_{t+1}, a; W_t)$$
- Perform the gradient update (sketched after this list):
$$W_{t+1} = W_t - \alpha \cdot (q_t - y_t) \cdot d_t$$
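Plugging in numbers makes the target concrete: with, say, $r_t = 1$, $\gamma = 0.9$, and $\max_a Q(s_{t+1},a;W_t) = 2.5$, the target is $y_t = 1 + 0.9 \times 2.5 = 3.25$. The six steps above then amount to one parameter update per transition. Below is a minimal sketch of that update, reusing the `q_network` from the sketch in Section 1; `td_update` is a hypothetical helper for illustration, and the full working script follows in Section 3.

```python
import tensorflow as tf

def td_update(q_network, optimizer, x_t, y_t):
    """One TD step: W_{t+1} = W_t - alpha * (q_t - y_t) * d_t.

    x_t : array of shape (1, 6), the (state, one-hot action) at time t
    y_t : float, the TD target r_t + gamma * max_a Q(s_{t+1}, a; W_t)
    """
    with tf.GradientTape() as tape:
        q_t = q_network(x_t)[0, 0]  # q_t = Q(s_t, a_t; W_t)
    d_t = tape.gradient(q_t, q_network.trainable_variables)
    # Scale the gradient by the TD error (q_t - y_t); the optimizer then
    # applies the learning-rate step, realizing the update rule above.
    td_error = q_t - y_t
    scaled = [td_error * g for g in d_t]
    optimizer.apply_gradients(zip(scaled, q_network.trainable_variables))
    return float(td_error)
```

Note that the optimizer supplies the learning rate $\alpha$; the example in Section 3 uses Adam, so the raw step is rescaled adaptively rather than being a plain SGD step.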
3. Example
Successfully controlling the pole to keep it balanced (Gym's CartPole-v0).
```python
# -*- coding: utf-8 -*-
# @Time : 2022/3/28 16:39
# @Author : CyrusMay WJ
# @FileName: run.py
# @Software: PyCharm
# @Blog :https://blog.csdn.net/Cyrus_May
import gym
import time
import tensorflow as tf
import numpy as np

env = gym.make("CartPole-v0")
gamma = 0.9
adam = tf.optimizers.Adam()

# The network input is the 4-dim state concatenated with a one-hot action
# ([1,0] = action 0, [0,1] = action 1), so act[-1] equals the action index.
state = env.reset()
act = [1, 0]
x_before = np.array([list(state) + act])

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="sigmoid"),
    tf.keras.layers.Dense(64, activation="sigmoid"),
    tf.keras.layers.Dense(1),
])
model.build(input_shape=[None, 6])

for epoch in range(2000):
    # q_t = Q(s_t, a_t; W_t) and its gradient d_t w.r.t. the weights
    with tf.GradientTape() as tape:
        q = model(x_before)
    dt = tape.gradient(q, model.trainable_variables)
    env.render()
    # Take action a_t, observe r_t and s_{t+1}
    state, reward, done, info = env.step(act[-1])
    state = list(state)
    # Greedy action for s_{t+1}: score both actions, keep the argmax
    flag = int(tf.argmax(model(np.array([state + [1, 0], state + [0, 1]]))[:, 0]))
    act = [0, 0]
    act[flag] = 1
    x_before = np.array([state + act])
    # TD target y_t = r_t + gamma * max_a Q(s_{t+1}, a; W_t)
    # (terminal transitions are not special-cased here)
    y = reward + gamma * model(x_before)
    # Scale d_t by the TD error (q_t - y_t), then let Adam apply the step
    dt = [(q[0][0] - y[0][0]) * dt[i] for i in range(len(dt))]
    adam.apply_gradients([(i, j) for i, j in zip(dt, model.trainable_variables)])
    print(epoch, ":", q[0][0] - y[0][0])
    if done:
        # Episode over: reset the environment and start again
        # time.sleep(1)
        state = env.reset()
        act = [1, 0]
        x_before = np.array([list(state) + act])
        continue
print("end!")

# Evaluate the learned policy: reset once, then act greedily for up to
# 100 steps and report how long the pole stays balanced.
state = env.reset()
act = [1, 0]
for epoch in range(100):
    env.render()
    state, reward, done, info = env.step(act[-1])
    state = list(state)
    flag = int(tf.argmax(model(np.array([state + [1, 0], state + [0, 1]]))[:, 0]))
    act = [0, 0]
    act[flag] = 1
    if done:
        print(epoch)
        break
env.close()
```
Parts of this article are notes based on study videos from Bilibili.
by CyrusMay 2022 03 28
If fate is the wind,
then what is my string?
——————Mayday (Half of Life)——————