Hyperparameter settings are based on: https://github.com/ranjitation/DQN-for-LunarLander/blob/master/dqn_agent.py

My earlier attempt at CartPole, following the Deeplizard tutorial, fell apart, so I switched to another small OpenAI Gym game, LunarLander, and tried implementing DQN from scratch on my own.

The official documentation describes the environment as follows:

Landing pad is always at coordinates (0,0). Coordinates are the first two numbers in state vector. Reward for moving from the top of the screen to landing pad and zero speed is about 100..140 points. If lander moves away from landing pad it loses reward back. Episode finishes if the lander crashes or comes to rest, receiving additional -100 or +100 points. Each leg ground contact is +10. Firing main engine is -0.3 points each frame. Solved is 200 points. Landing outside landing pad is possible. Fuel is infinite, so an agent can learn to fly and then land on its first attempt. Four discrete actions available: do nothing, fire left orientation engine, fire main engine, fire right orientation engine.

The goal is to land the lander smoothly on the landing pad. Each state is an 8-dimensional vector: horizontal coordinate x, vertical coordinate y, horizontal velocity, vertical velocity, angle, angular velocity, and whether each of the two legs is touching the ground. There are 4 discrete actions: do nothing, fire the left engine, fire the right engine, and fire the main engine.
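A quick way to sanity-check those spaces (a minimal sketch, assuming gym with the Box2D extras installed and the same older gym API used in the code below, where env.step returns a 4-tuple; the printed strings differ slightly between gym versions):

import gym

env = gym.make('LunarLander-v2')
print(env.observation_space)  # 8-dimensional Box: x, y, vx, vy, angle, angular velocity, leg1 contact, leg2 contact
print(env.action_space)       # Discrete(4): do nothing, left engine, main engine, right engine

state = env.reset()                      # initial 8-dim state as a numpy array
state, reward, done, info = env.step(0)  # take action 0 ("do nothing") for one frame
env.close()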

The algorithm flow is roughly as follows:

'''
During training, an outer for loop iterates over episodes. At the start of each episode, the environment and initial state are reset, and an inner loop then iterates over the episode's time steps. At each step, an action a is chosen with the ε-greedy policy, the action is taken, and a transition (s, a, r, s') is obtained. The current state is then updated to s' (forgetting this cost me an extra afternoon of debugging...) and the transition is pushed into the replay buffer. Every fixed number of steps, a batch is sampled from the buffer to optimize the current policy_net, after which policy_net's parameters are copied into target_net (I originally copied them into policy_net itself, which cost me another day of bug hunting...); the core update is sketched right after this block. The step's reward is added to the episode's total_reward. (To speed up training, an episode is cut off once the step count exceeds 1000, so the lander cannot hover in the air forever.) Epsilon is decayed once per episode.

At test time, epsilon is simply set to 0 (exploit only, no more exploration), a number of episodes are run, and the average total_reward is computed; 200 or above counts as solved.
'''
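The heart of each optimization step is the one-step TD target built from target_net. Here is a minimal sketch of that update in isolation (function and variable names are placeholders for illustration, not taken from the full code further down):

import torch
import torch.nn.functional as F

def dqn_update(policy_net, target_net, optimizer, states, actions, rewards, next_states, dones, gamma=0.99):
    # Placeholder shapes: states/next_states [B, 8]; actions (long), rewards, dones (0/1 floats) [B]
    q_sa = policy_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)  # Q(s, a) under the trained net
    with torch.no_grad():                                                 # target_net provides a fixed target
        max_next_q = target_net(next_states).max(dim=1)[0]
        target = rewards + gamma * max_next_q * (1 - dones)               # r + γ·max_a' Q_target(s', a'); just r if terminal
    loss = F.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()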

To wrap up, two small open questions remain:

1. It seems that different implementations use different frequencies for training policy_net, updating target_net, and decaying epsilon? (One common point of variation, the target-update style, is sketched after this list.)

2. Random seeds really are black magic...
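On question 1, one concrete axis of variation is how target_net is refreshed: my DQN code below does a periodic hard copy, while some reference implementations instead blend the target toward the policy a little at every learning step. A minimal sketch of both variants, where tau is an assumed illustration value rather than a hyperparameter from my code:

def hard_update(target_net, policy_net):
    # Full copy every UPDATE_PERIOD learning steps (what the DQN code below does)
    target_net.load_state_dict(policy_net.state_dict())

def soft_update(target_net, policy_net, tau=1e-3):
    # Polyak averaging: move target weights a small step toward policy weights at every learning step
    for t_param, p_param in zip(target_net.parameters(), policy_net.parameters()):
        t_param.data.copy_(tau * p_param.data + (1.0 - tau) * t_param.data)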

Code:

import random
import gym
import numpy as np
import matplotlib.pyplot as plt
from itertools import count
import torch
torch.cuda.current_device()
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

BUFFER_SIZE = 100000
BATCH_SIZE = 64
GAMMA = 0.99  # discount factor
LR = 5e-4
UPDATE_PERIOD = 4
EPS_ED = 0.01
EPS_DECAY = 0.99
SLIDE_LEN = 20
MAX_TIME = 1000

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

env = gym.make('LunarLander-v2')
env.seed(0)
random.seed(0)


class Net(nn.Module):
    def __init__(self, h1=128, h2=64):
        super(Net, self).__init__()
        self.seed = torch.manual_seed(0)
        self.fc1 = nn.Linear(8, h1)
        self.fc2 = nn.Linear(h1, h2)
        self.fc3 = nn.Linear(h2, 4)

    def forward(self, t):
        t = F.relu(self.fc1(t))
        t = F.relu(self.fc2(t))
        t = self.fc3(t)
        return t


class Experience:
    def __init__(self, cur_state, action, reward, nxt_state, done):
        self.cur_state = cur_state
        self.action = action
        self.reward = reward
        self.nxt_state = nxt_state
        self.done = done


class Buffer:
    def __init__(self):
        # random.seed(0)
        self.n = BUFFER_SIZE
        self.memory = [None for _ in range(BUFFER_SIZE)]
        self.pt = 0
        self.flag = 0  # to indicate whether the buffer can provide a batch of data

    def push(self, experience):
        self.memory[self.pt] = experience
        self.pt = (self.pt + 1) % self.n
        self.flag = min(self.flag + 1, self.n)

    def sample(self, sample_size):
        return random.sample(self.memory[:self.flag], sample_size)


class Agent:
    def __init__(self):
        # random.seed(0)
        self.eps = 1.0
        self.buff = Buffer()
        self.policy_net = Net()
        self.target_net = Net()
        self.optim = optim.Adam(self.policy_net.parameters(), lr=LR)
        self.update_networks()
        self.total_rewards = []
        self.avg_rewards = []

    def update_networks(self):
        self.target_net.load_state_dict(self.policy_net.state_dict())

    def update_experiences(self, cur_state, action, reward, nxt_state, done):
        experience = Experience(cur_state, action, reward, nxt_state, done)
        self.buff.push(experience)

    def sample_experiences(self):
        samples = self.buff.sample(BATCH_SIZE)
        for _, ele in enumerate(samples):
            if _ == 0:
                cur_states = ele.cur_state.unsqueeze(0)
                actions = ele.action
                rewards = ele.reward
                nxt_states = ele.nxt_state.unsqueeze(0)
                dones = ele.done
            else:
                cur_states = torch.cat((cur_states, ele.cur_state.unsqueeze(0)), dim=0)
                actions = torch.cat((actions, ele.action), dim=0)
                rewards = torch.cat((rewards, ele.reward), dim=0)
                nxt_states = torch.cat((nxt_states, ele.nxt_state.unsqueeze(0)), dim=0)
                dones = torch.cat((dones, ele.done), dim=0)
        return cur_states, actions, rewards, nxt_states, dones

    def get_action(self, state):
        rnd = random.random()
        if rnd > self.eps:  # exploit
            values = self.policy_net(state)
            act = torch.argmax(values, dim=0).item()
        else:
            act = random.randint(0, 3)
        return act

    def optimize_policy(self):
        criterion = nn.MSELoss()
        cur_states, actions, rewards, nxt_states, dones = self.sample_experiences()
        cur_states = cur_states.to(device).float()
        actions = actions.to(device).long()
        rewards = rewards.to(device).float()
        nxt_states = nxt_states.to(device).float()
        dones = dones.to(device)
        self.policy_net = self.policy_net.to(device)
        self.target_net = self.target_net.to(device)
        # for i in range(10):
        policy_values = torch.gather(self.policy_net(cur_states), dim=1, index=actions.unsqueeze(-1))
        with torch.no_grad():
            next_values = torch.max(self.target_net(nxt_states), dim=1)[0]
            target_values = rewards + GAMMA * next_values * (1 - dones)
            target_values = target_values.unsqueeze(1)
        self.optim.zero_grad()
        loss = criterion(policy_values, target_values)
        loss.backward()
        # print("Loss:", loss.item())
        self.optim.step()
        self.policy_net = self.policy_net.cpu()
        self.target_net = self.target_net.cpu()
        return loss.item()

    def train(self, episodes):
        for episode in range(episodes):
            total_reward = 0
            cur_state = env.reset()
            cur_state = torch.from_numpy(cur_state)
            for tim in count():
                action = self.get_action(cur_state)
                # img = env.render(mode='rgb_array')
                nxt_state, reward, done, _ = env.step(action)
                nxt_state = torch.from_numpy(nxt_state)
                action = torch.tensor(action).unsqueeze(-1)
                reward = torch.tensor(reward).unsqueeze(-1)
                done = torch.tensor(1 if done else 0).unsqueeze(-1)
                self.buff.push(Experience(cur_state, action, reward, nxt_state, done))
                cur_state = nxt_state  # !!!
                if self.buff.flag >= BATCH_SIZE and self.buff.pt % UPDATE_PERIOD == 0:
                    self.update_networks()
                    self.optimize_policy()
                total_reward += reward.item()
                if done or tim >= MAX_TIME:
                    self.update_rewards(total_reward)
                    break
            self.plot_rewards()
            if self.eps > EPS_ED:
                self.eps *= EPS_DECAY
        torch.save(self.policy_net.state_dict(), 'policy_net.pkl')

    def update_rewards(self, total_reward):
        self.total_rewards.append(total_reward)
        cur = len(self.total_rewards) - 1
        rewards = 0
        for i in range(cur, max(-1, cur - SLIDE_LEN), -1):
            rewards += self.total_rewards[i]
        avg = rewards / min(SLIDE_LEN, len(self.total_rewards))
        self.avg_rewards.append(avg)

    def plot_rewards(self):
        plt.clf()
        plt.xlabel('Episodes')
        plt.ylabel('Rewards')
        plt.plot(self.total_rewards, color='r', label='Current')
        plt.plot(self.avg_rewards, color='b', label='Average')
        plt.legend()
        plt.pause(0.001)
        print("Episode", len(self.total_rewards))
        print("Current reward", self.total_rewards[-1])
        print("Average reward", self.avg_rewards[-1])
        print("Epsilon", self.eps)
        plt.savefig('Train.jpg')

    def test(self, episodes):
        self.eps = 0
        ret = 0
        for episode in range(episodes):
            total_reward = 0
            cur_state = env.reset()
            cur_state = torch.from_numpy(cur_state)
            for tim in count():
                action = self.get_action(cur_state)
                img = env.render(mode='rgb_array')
                nxt_state, reward, done, _ = env.step(action)
                cur_state = torch.from_numpy(nxt_state)
                total_reward += reward
                if done or tim >= MAX_TIME:
                    break
            print("Episode", episode + 1)
            print("Current reward", total_reward)
            ret += total_reward
        print("Average reward of", episodes, "episodes:", ret / episodes)


agent = Agent()
agent.train(700)
agent.test(100)
env.close()

The training reward curve is shown below:

Test results:

......

Episode 96
Current reward 210.40330984897227
Episode 97
Current reward 269.30063656546673
Episode 98
Current reward 297.40313034589826
Episode 99
Current reward 242.37884580171982
Episode 100
Current reward 235.21898442946033
Average reward of 100 episodes: 245.67580368550185

(Second update, 2021-04-11)

I recently hand-rolled a Policy Gradient implementation. Because my understanding of "sampling according to the policy" was off, I spent several days debugging on and off...
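The misunderstanding, concretely: during training the action must be sampled from the categorical distribution the policy outputs, not taken greedily with argmax. A minimal sketch of the distinction using torch.distributions (my code below achieves the same thing with np.random.choice and NLLLoss):

import torch

def sample_action(policy, state):
    log_probs = policy(state)                                  # log-probabilities over the 4 actions
    dist = torch.distributions.Categorical(logits=log_probs)   # logits accepts unnormalized log-probs
    action = dist.sample()                                     # stochastic: this is "sampling according to the policy"
    # action = log_probs.argmax()                              # greedy choice -- the mistake I originally made
    return action.item(), dist.log_prob(action)                # log pi(a|s), needed for the policy-gradient loss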

# -*- coding: utf-8 -*-
"""LunarLander - PG.ipynbAutomatically generated by Colaboratory.Original file is located athttps://colab.research.google.com/drive/16U2WE7925uWv8FMKwyP_aY6QYsbAxFJ5
"""
import random
import gym
import numpy as np
import matplotlib.pyplot as plt
from itertools import count
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

env = gym.make('LunarLander-v2')
env.seed(0)
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)

MAX_TIME = 1000
LR = 3e-4
EPOCHS = 4000
SLIDE_LEN = 20
NOISE_RATE = 0.1
GAMMA = 0.99

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')


class Net(nn.Module):
    def __init__(self, h1=128, h2=128):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(8, h1)
        self.fc2 = nn.Linear(h1, h2)
        self.fc3 = nn.Linear(h2, 4)

    def forward(self, t):
        t = F.relu(self.fc1(t))
        t = F.relu(self.fc2(t))
        t = F.log_softmax(self.fc3(t), dim=0)
        return t


class Trajectory:
    def __init__(self):
        self.reward = 0
        self.rewards = []

    def __len__(self):
        return len(self.rewards)

    def push(self, cur_reward):
        self.reward += cur_reward
        self.rewards.append(cur_reward)


def get_suffix_sum(a, gamma=GAMMA):
    tmp = a[::-1]
    for i in range(1, len(tmp)):
        tmp[i] += tmp[i - 1] * gamma
    return tmp[::-1]


class Agent:
    def __init__(self):
        self.policy = Net()
        self.losses = []
        self.opt = optim.Adam(self.policy.parameters(), lr=LR)
        self.total_rewards = [0]
        self.avg_rewards = [0]
        self.action_space = [i for i in range(4)]

    def get_action(self, cur_state, mode='train'):
        output = self.policy(cur_state)
        # action = output.argmax()
        probs = torch.exp(output).detach().cpu().numpy()
        action = np.random.choice(self.action_space, p=probs)
        action = torch.tensor(action).long().to(device)
        # sample the action instead of taking the "optimal" so far
        output, action = output.unsqueeze(0), action.unsqueeze(0)
        criterion = nn.NLLLoss()
        loss = criterion(output, action)
        self.losses.append(loss)
        return action.item()

    def train_one_episode(self, device=device):
        self.policy.to(device)
        total_loss = 0
        self.losses.clear()
        cur_trajectory = Trajectory()
        cur_state = env.reset()
        cur_state = torch.from_numpy(cur_state).to(device)
        for tim in count():
            action = self.get_action(cur_state)
            # print(tim, action)
            # img = env.render(mode='rgb_array')
            nxt_state, reward, done, _ = env.step(action)
            nxt_state = torch.from_numpy(nxt_state).to(device)
            action = torch.tensor(action).unsqueeze(-1).to(device)
            reward = torch.tensor(reward).unsqueeze(-1).to(device)
            done = torch.tensor(1 if done else 0).unsqueeze(-1).to(device)
            cur_trajectory.push(reward.item())
            cur_state = nxt_state  # !!!
            if done or tim >= MAX_TIME:
                self.update_rewards(cur_trajectory.reward)
                break
        reward_weight = get_suffix_sum(cur_trajectory.rewards)
        reward_weight = torch.from_numpy(np.array(reward_weight)).to(device)
        # plot_tensor(np.array(cur_trajectory.rewards), 'rewards')
        # plot_tensor(reward_weight.cpu().numpy(), 'discounted_suffix_reward_weight')
        assert len(self.losses) == len(cur_trajectory.rewards)
        mean = reward_weight.mean()
        std = reward_weight.std()
        reward_weight = (reward_weight - mean) / std
        for i in range(len(self.losses)):
            total_loss += self.losses[i] * reward_weight[i]
        self.plot_rewards()
        self.opt.zero_grad()
        total_loss.backward()
        # for name, para in self.policy.named_parameters():
        #     print(name, para.grad.mean())
        self.opt.step()
        self.policy.cpu()
        torch.save(self.policy.state_dict(), 'policy.pkl')

    def update_rewards(self, total_reward):
        self.total_rewards.append(total_reward)
        cur = len(self.total_rewards) - 1
        rewards = 0
        for i in range(cur, max(-1, cur - SLIDE_LEN), -1):
            rewards += self.total_rewards[i]
        avg = rewards / min(SLIDE_LEN, len(self.total_rewards))
        self.avg_rewards.append(avg)

    def plot_rewards(self):
        plt.clf()
        plt.xlabel('Episodes')
        plt.ylabel('Rewards')
        plt.plot(self.total_rewards, color='g', label='Current')
        plt.plot(self.avg_rewards, color='b', label='Average')
        plt.legend()
        plt.pause(0.001)
        print("Episode", len(self.total_rewards))
        print("Current reward", self.total_rewards[-1])
        print("Average reward", self.avg_rewards[-1])
        plt.savefig('Train.jpg')

    def train(self, epochs=EPOCHS, device=device):
        for epoch in range(epochs):
            self.train_one_episode(device)

    def test(self, episodes, device=device):
        self.policy.load_state_dict(torch.load("policy.pkl"))
        self.policy.to(device)
        ret = 0
        for episode in range(episodes):
            total_reward = 0
            cur_state = env.reset()
            cur_state = torch.from_numpy(cur_state).to(device)
            for tim in count():
                action = self.get_action(cur_state)
                img = env.render(mode='rgb_array')
                nxt_state, reward, done, _ = env.step(action)
                cur_state = torch.from_numpy(nxt_state).to(device)
                total_reward += reward
                if done or tim >= MAX_TIME:
                    break
            print("Episode", episode + 1)
            print("Current reward", total_reward)
            ret += total_reward
        print("Average reward of", episodes, "episodes:", ret / episodes)


agent = Agent()
agent.train()
agent.test(20)

Results of the final 20 test episodes:

Episode 1
Current reward 84.33441682399996
Episode 2
Current reward 216.4304736195864
Episode 3
Current reward 120.18777822341227
Episode 4
Current reward 79.38344301251452
Episode 5
Current reward 149.77976608818616
Episode 6
Current reward 139.1168128341676
Episode 7
Current reward 254.51534398848398
Episode 8
Current reward 133.54801428683155
Episode 9
Current reward 186.94804010946848
Episode 10
Current reward 202.23091982552887
Episode 11
Current reward 108.69519273351803
Episode 12
Current reward 234.48558150706708
Episode 13
Current reward 233.7075148866764
Episode 14
Current reward 234.104787749662
Episode 15
Current reward 201.50699844779575
Episode 16
Current reward 235.9508429194292
Episode 17
Current reward 111.90205065489462
Episode 18
Current reward 123.62077742772023
Episode 19
Current reward 228.49487126388732
Episode 20
Current reward 130.76856178782705
Average reward of 20 episodes: 170.4856094095329

Process finished with exit code 0

In the episodes with reward below 200, the lander mostly still touches down smoothly; I just don't know why it never shuts off its engines after landing... it keeps adjusting left and right? If anyone has a solution, I'd appreciate a pointer in the comments~
