These are reinforcement learning notes, based mainly on the following material:

  • Reinforcement Learning: An Introduction
  • All of the code comes from GitHub
  • Exercise solutions are referenced from Github

Contents

  • Blackjack
  • Code
    • Policy initialization
    • Environment
    • Monte Carlo Prediction
    • Monte Carlo ES (Exploring starts)
    • Off-policy Monte Carlo Prediction

Blackjack

The object of the popular casino card game of blackjack is to obtain cards the sum of whose numerical values is as great as possible without exceeding 21. All face cards (J, Q, K) count as 10, and an ace can count either as 1 or as 11.

We consider the version in which each player competes independently against the dealer.

  • The game begins with two cards dealt to both dealer and player. One of the dealer’s cards is face up and the other is face down.
  • If the player has 21 immediately (an ace and a 10-card), it is called a natural. He then wins unless the dealer also has a natural, in which case the game is a draw.
  • If the player does not have a natural, then he can request additional cards, one by one (hits), until he either stops (sticks) or exceeds 21 (goes bust).
  • If he goes bust, he loses; if he sticks, then it becomes the dealer’s turn.
  • The dealer hits or sticks according to a fixed strategy: he sticks on any sum of 17 or greater, and hits otherwise. If the dealer goes bust, then the player wins; otherwise, the outcome—win, lose, or draw—is determined by whose final sum is closer to 21.

Playing blackjack is naturally formulated as an episodic finite MDP.

  • Each game of blackjack is an episode.
  • Rewards of +1, −1, and 0 are given for winning, losing, and drawing, respectively. All rewards within a game are zero, and we do not discount ($\gamma = 1$); therefore these terminal rewards are also the returns.
  • The player’s actions are to hit or to stick. The states depend on the player’s cards and the dealer’s showing card.
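
To make the reward point concrete (a restatement not in the original notes): with $\gamma = 1$ and all rewards before the end of the game equal to zero, the return from any time step $t$ of an episode that terminates at time $T$ is simply the terminal reward,

$$G_t = R_{t+1} + \gamma R_{t+2} + \dots + \gamma^{T-t-1} R_T = R_T \in \{+1, -1, 0\}.$$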

We assume that cards are dealt from an infinite deck (i.e., with replacement) so that there is no advantage to keeping track of the cards already dealt.

  • If the player holds an ace that he could count as 11 without going bust, then the ace is said to be usable. In this case it is always counted as 11, because counting it as 1 would make the sum 11 or less, in which case there is no decision to be made: the player should obviously always hit.
  • Thus, the player makes decisions on the basis of three variables: his current sum (12–21), the dealer's one showing card (ace–10), and whether or not he holds a usable ace. This makes for a total of 200 states, as the quick check below confirms.
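
The 200-state count is easy to verify with a quick enumeration (an illustrative sketch, not part of the referenced GitHub code):

from itertools import product

# player sum 12-21, dealer's showing card ace-10 (encoded as 1-10), usable ace or not
states = list(product(range(12, 22), range(1, 11), (False, True)))
assert len(states) == 200   # 10 * 10 * 2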

Code

  • As the initial policy we use the policy evaluated in the previous blackjack example, that which sticks only on 20 or 21.
  • Figure 5.2 shows the optimal policy for blackjack found by Monte Carlo ES.

#######################################################################
# Copyright (C)                                                       #
# 2016-2018 Shangtong Zhang(zhangshangtong.cpp@gmail.com)             #
# 2016 Kenta Shimada(hyperkentakun@gmail.com)                         #
# 2017 Nicky van Foreest(vanforeest@gmail.com)                        #
# Permission given to modify the code as long as you keep this        #
# declaration at the top                                              #
#######################################################################
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm   # tqdm is a fast, extensible progress bar that adds a progress indicator to long-running Python loops

Policy initialization

# actions: hit or stand
ACTION_HIT = 0
ACTION_STAND = 1  #  "stick" in the book
ACTIONS = [ACTION_HIT, ACTION_STAND]

# policy for player (hit or stand)
POLICY_PLAYER = np.zeros(22, dtype=int)

# initial policy: hit on 12-19, stick on 20 and 21
for i in range(12, 20):
    POLICY_PLAYER[i] = ACTION_HIT
POLICY_PLAYER[20] = ACTION_STAND
POLICY_PLAYER[21] = ACTION_STAND

# function form of target policy of player
def target_policy_player(usable_ace_player, player_sum, dealer_card):
    return POLICY_PLAYER[player_sum]

# function form of behavior policy of player
# a stochastic and more exploratory policy: hit or stand with equal probability
def behavior_policy_player(usable_ace_player, player_sum, dealer_card):
    if np.random.binomial(1, 0.5) == 1:
        # choose a random action with p=.5
        return ACTION_STAND
    return ACTION_HIT

# policy for dealer (dealer follows a fixed strategy: stick on 17 or greater, otherwise hit)
POLICY_DEALER = np.zeros(22)
for i in range(12, 17):
    POLICY_DEALER[i] = ACTION_HIT
for i in range(17, 22):
    POLICY_DEALER[i] = ACTION_STAND

Environment

# get a new card
def get_card():
    card = np.random.randint(1, 14)
    card = min(card, 10)
    return card

# get the value of a card (11 for ace)
def card_value(card_id):
    return 11 if card_id == 1 else card_id
# play a game
# @policy_player: specify policy for player (target policy / behavior policy)
# @initial_state: [whether player has a usable Ace, sum of player's cards, one card of dealer]
#                 None -> generate a random initial state
# @initial_action: the initial action
#                  None -> use policy_player to generate initial action
def play(policy_player, initial_state=None, initial_action=None):
    # player status

    # sum of player
    player_sum = 0

    # trajectory of player (track an entire episode)
    # Since the discount factor is 1 and all rewards within the game are 0, the terminal reward
    # equals the return. So we only need to record the state-action pairs.
    player_trajectory = []

    # whether player uses Ace as 11
    usable_ace_player = False

    # dealer status
    dealer_card1 = 0
    dealer_card2 = 0
    usable_ace_dealer = False

    if initial_state is None:
        # generate a random initial state
        while player_sum < 12:
            # if sum of player is less than 12, always hit
            card = get_card()
            player_sum += card_value(card)  # 11 for ace
            # If the player's sum is larger than 21, he may hold one or two aces.
            if player_sum > 21:
                assert player_sum == 22
                # last card must be ace
                player_sum -= 10
            else:
                usable_ace_player |= (1 == card)

        # initialize cards of dealer, suppose dealer will show the first card he gets
        dealer_card1 = get_card()
        dealer_card2 = get_card()
    else:
        # use specified initial state
        usable_ace_player, player_sum, dealer_card1 = initial_state
        dealer_card2 = get_card()

    # initial state of the game
    state = [usable_ace_player, player_sum, dealer_card1]

    # initialize dealer's sum
    dealer_sum = card_value(dealer_card1) + card_value(dealer_card2)
    usable_ace_dealer = 1 in (dealer_card1, dealer_card2)
    # if the dealer's sum is larger than 21, he must hold two aces.
    if dealer_sum > 21:
        assert dealer_sum == 22
        # use one Ace as 1 rather than 11
        dealer_sum -= 10
    assert dealer_sum <= 21
    assert player_sum <= 21

    # game starts!

    # player's turn
    while True:
        if initial_action is not None:
            action = initial_action
            initial_action = None
        else:
            # get action based on current sum
            action = policy_player(usable_ace_player, player_sum, dealer_card1)

        # track player's trajectory for importance sampling
        player_trajectory.append([(usable_ace_player, player_sum, dealer_card1), action])

        if action == ACTION_STAND:
            break
        # if hit, get new card
        card = get_card()
        # Keep track of the ace count. The usable_ace_player flag alone is insufficient as it cannot
        # distinguish between having one ace or two.
        ace_count = int(usable_ace_player)
        if card == 1:
            ace_count += 1
        player_sum += card_value(card)
        # If the player has a usable ace, use it as 1 to avoid busting and continue.
        while player_sum > 21 and ace_count:
            player_sum -= 10
            ace_count -= 1
        # player busts
        if player_sum > 21:
            return state, -1, player_trajectory
        assert player_sum <= 21
        usable_ace_player = (ace_count == 1)

    # dealer's turn
    while True:
        # get action based on current sum
        action = POLICY_DEALER[dealer_sum]  # fixed strategy
        if action == ACTION_STAND:
            break
        # if hit, get a new card
        new_card = get_card()
        ace_count = int(usable_ace_dealer)
        if new_card == 1:
            ace_count += 1
        dealer_sum += card_value(new_card)
        # If the dealer has a usable ace, use it as 1 to avoid busting and continue.
        while dealer_sum > 21 and ace_count:
            dealer_sum -= 10
            ace_count -= 1
        # dealer busts
        if dealer_sum > 21:
            return state, 1, player_trajectory
        usable_ace_dealer = (ace_count == 1)

    # compare the sum between player and dealer
    assert player_sum <= 21 and dealer_sum <= 21
    if player_sum > dealer_sum:
        return state, 1, player_trajectory
    elif player_sum == dealer_sum:
        return state, 0, player_trajectory
    else:
        return state, -1, player_trajectory
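
With the environment in place, a single episode can be sanity-checked as follows (an illustrative snippet, not part of the referenced repository):

# play one episode with the deterministic target policy and inspect the outcome
state, reward, trajectory = play(target_policy_player)
print('initial state:', state)            # [usable ace, player sum, dealer's showing card]
print('final reward:', reward)            # +1 win, 0 draw, -1 loss
print('state-action pairs:', trajectory)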

Monte Carlo Prediction

Consider the policy that sticks if the player’s sum is 20 or 21, and otherwise hits. To find the state-value function for this policy by a Monte Carlo approach, one simulates many blackjack games using the policy and averages the returns following each state. In this way, we obtained the estimates of the state-value function shown below.

Blackjack never contains duplicate states within an episode (the player's sum strictly increases with every hit), so the first-visit and every-visit methods are essentially the same here.
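
Concretely, the quantity maintained for each state is just the sample mean of the returns observed after visiting it (a standard restatement of Monte Carlo prediction, not taken from the original post):

$$V(s) \approx \frac{1}{|\mathcal{T}(s)|} \sum_{t \in \mathcal{T}(s)} G_t,$$

where $\mathcal{T}(s)$ is the set of time steps at which $s$ was visited and $G_t$ is the return that followed.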

  • The estimates for states with a usable ace are less certain and less regular because these states are less common.
  • In any event, after 500,000 games the value function is very well approximated.

# Monte Carlo Sample with On-Policy
# evaluate the initial policy
def monte_carlo_on_policy(episodes):
    states_usable_ace = np.zeros((10, 10))          # v(s) * N
    # initialize counts to 1 to avoid division by zero
    states_usable_ace_count = np.ones((10, 10))     # N
    states_no_usable_ace = np.zeros((10, 10))
    # initialize counts to 1 to avoid division by zero
    states_no_usable_ace_count = np.ones((10, 10))

    # every-visit MC
    for i in tqdm(range(0, episodes)):
        _, reward, player_trajectory = play(target_policy_player)
        for (usable_ace, player_sum, dealer_card), _ in player_trajectory:
            player_sum -= 12
            dealer_card -= 1
            if usable_ace:
                states_usable_ace_count[player_sum, dealer_card] += 1
                states_usable_ace[player_sum, dealer_card] += reward
            else:
                states_no_usable_ace_count[player_sum, dealer_card] += 1
                states_no_usable_ace[player_sum, dealer_card] += reward

    return states_usable_ace / states_usable_ace_count, states_no_usable_ace / states_no_usable_ace_count

def figure_5_1():
    states_usable_ace_1, states_no_usable_ace_1 = monte_carlo_on_policy(10000)
    states_usable_ace_2, states_no_usable_ace_2 = monte_carlo_on_policy(500000)

    states = [states_usable_ace_1,
              states_usable_ace_2,
              states_no_usable_ace_1,
              states_no_usable_ace_2]

    titles = ['Usable Ace, 10000 Episodes',
              'Usable Ace, 500000 Episodes',
              'No Usable Ace, 10000 Episodes',
              'No Usable Ace, 500000 Episodes']

    _, axes = plt.subplots(2, 2, figsize=(40, 30))
    plt.subplots_adjust(wspace=0.1, hspace=0.2)
    axes = axes.flatten()

    for state, title, axis in zip(states, titles, axes):
        fig = sns.heatmap(np.flipud(state), cmap="YlGnBu", ax=axis, xticklabels=range(1, 11),
                          yticklabels=list(reversed(range(12, 22))))
        fig.set_ylabel('player sum', fontsize=30)
        fig.set_xlabel('dealer showing', fontsize=30)
        fig.set_title(title, fontsize=30)

    plt.savefig('../images/figure_5_1.png')
    plt.close()

Monte Carlo ES (Exploring starts)

# Monte Carlo with Exploring Starts
def monte_carlo_es(episodes):
    # indexed by (playerSum, dealerCard, usableAce, action)
    state_action_values = np.zeros((10, 10, 2, 2))       # Q(s, a) * N
    # initialize counts to 1 to avoid division by 0
    state_action_pair_count = np.ones((10, 10, 2, 2))    # N

    # behavior policy is greedy with respect to the current action-value estimates
    def behavior_policy(usable_ace, player_sum, dealer_card):
        usable_ace = int(usable_ace)
        player_sum -= 12
        dealer_card -= 1
        # get argmax of the average returns(s, a)
        values_ = state_action_values[player_sum, dealer_card, usable_ace, :] / \
                  state_action_pair_count[player_sum, dealer_card, usable_ace, :]
        # choose the action with the max Q(s, a)
        # if multiple actions have the same value -> choose among them at random
        return np.random.choice([action_ for action_, value_ in enumerate(values_) if value_ == np.max(values_)])

    # play for several episodes
    for episode in tqdm(range(episodes)):
        # for each episode, use a randomly initialized state and action (exploring starts)
        initial_state = [bool(np.random.choice([0, 1])),
                         np.random.choice(range(12, 22)),
                         np.random.choice(range(1, 11))]
        initial_action = np.random.choice(ACTIONS)
        # use the initial (target) policy for the first episode
        current_policy = behavior_policy if episode else target_policy_player
        _, reward, trajectory = play(current_policy, initial_state, initial_action)
        first_visit_check = set()
        for (usable_ace, player_sum, dealer_card), action in trajectory:
            usable_ace = int(usable_ace)
            player_sum -= 12
            dealer_card -= 1
            state_action = (usable_ace, player_sum, dealer_card, action)
            # first-visit MC
            if state_action in first_visit_check:
                continue
            first_visit_check.add(state_action)
            # update values of state-action pairs
            state_action_values[player_sum, dealer_card, usable_ace, action] += reward
            state_action_pair_count[player_sum, dealer_card, usable_ace, action] += 1

    return state_action_values / state_action_pair_count    # Q(s, a)

def figure_5_2():
    state_action_values = monte_carlo_es(500000)

    state_value_no_usable_ace = np.max(state_action_values[:, :, 0, :], axis=-1)
    state_value_usable_ace = np.max(state_action_values[:, :, 1, :], axis=-1)

    # get the optimal policy (greedy)
    action_no_usable_ace = np.argmax(state_action_values[:, :, 0, :], axis=-1)
    action_usable_ace = np.argmax(state_action_values[:, :, 1, :], axis=-1)

    images = [action_usable_ace,
              state_value_usable_ace,
              action_no_usable_ace,
              state_value_no_usable_ace]

    titles = ['Optimal policy with usable Ace',
              'Optimal value with usable Ace',
              'Optimal policy without usable Ace',
              'Optimal value without usable Ace']

    _, axes = plt.subplots(2, 2, figsize=(40, 30))
    plt.subplots_adjust(wspace=0.1, hspace=0.2)
    axes = axes.flatten()

    for image, title, axis in zip(images, titles, axes):
        fig = sns.heatmap(np.flipud(image), cmap="YlGnBu", ax=axis, xticklabels=range(1, 11),
                          yticklabels=list(reversed(range(12, 22))))
        fig.set_ylabel('player sum', fontsize=30)
        fig.set_xlabel('dealer showing', fontsize=30)
        fig.set_title(title, fontsize=30)

    plt.savefig('../images/figure_5_2.png')
    plt.close()

Off-policy Monte Carlo Prediction

We applied both ordinary and weighted importance-sampling methods to estimate the value of a single blackjack state from off-policy data.

  • In this example, we evaluated the state in which the dealer is showing a deuce (2), the sum of the player’s cards is 13, and the player has a usable ace (that is, the player holds an ace and a deuce, or equivalently three aces).
  • The data was generated by starting in this state then choosing to hit or stick at random with equal probability (the behavior policy).
  • The target policy was to stick only on a sum of 20 or 21.
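
Since the target policy is deterministic and the behavior policy picks hit or stick with probability 0.5 each, the importance-sampling ratio of an episode with $T$ player actions works out to (a restatement added here, not from the original post)

$$\rho_{0:T-1} = \prod_{t=0}^{T-1} \frac{\pi(A_t \mid S_t)}{b(A_t \mid S_t)} = \begin{cases} (1/0.5)^{T} = 2^{T}, & \text{if every action matches the target policy,} \\ 0, & \text{otherwise,} \end{cases}$$

which is exactly the numerator / denominator computation in the code further below.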

The value of this state under the target policy is approximately −0.27726 (this was determined by separately generating one-hundred million episodes using the target policy and averaging their returns).

Both off-policy methods closely approximated this value after 1000 off-policy episodes using the random policy. To make sure they did this reliably, we performed 100 independent runs, each starting from estimates of zero and learning for 10,000 episodes.

The error approaches zero for both algorithms, but the weighted importance-sampling method has much lower error at the beginning, as is typical in practice.


[Image: ordinary importance sampling]

[Image: weighted importance sampling]
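
For reference, the two estimators being compared are the book's ordinary and weighted importance-sampling estimates (restated here; the formulas are not part of the original post):

$$V(s) \doteq \frac{\sum_{t \in \mathcal{T}(s)} \rho_{t:T(t)-1} G_t}{|\mathcal{T}(s)|} \qquad \text{(ordinary importance sampling)}$$

$$V(s) \doteq \frac{\sum_{t \in \mathcal{T}(s)} \rho_{t:T(t)-1} G_t}{\sum_{t \in \mathcal{T}(s)} \rho_{t:T(t)-1}} \qquad \text{(weighted importance sampling)}$$

where $\mathcal{T}(s)$ is the set of time steps at which $s$ is visited and $\rho_{t:T(t)-1}$ is the importance-sampling ratio of the actions from $t$ to the end of that episode.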

# Monte Carlo Sample with Off-Policy
def monte_carlo_off_policy(episodes):
    initial_state = [True, 13, 2]
    rhos = []
    returns = []

    for i in range(0, episodes):
        # play with behavior policy
        _, reward, player_trajectory = play(behavior_policy_player, initial_state=initial_state)

        # get the importance ratio
        numerator = 1.0
        denominator = 1.0
        for (usable_ace, player_sum, dealer_card), action in player_trajectory:
            if action == target_policy_player(usable_ace, player_sum, dealer_card):
                denominator *= 0.5
            else:
                numerator = 0.0
                break
        rho = numerator / denominator
        rhos.append(rho)
        returns.append(reward)

    rhos = np.asarray(rhos)
    returns = np.asarray(returns)
    weighted_returns = rhos * returns

    weighted_returns = np.add.accumulate(weighted_returns)
    rhos = np.add.accumulate(rhos)

    ordinary_sampling = weighted_returns / np.arange(1, episodes + 1)

    with np.errstate(divide='ignore', invalid='ignore'):
        weighted_sampling = np.where(rhos != 0, weighted_returns / rhos, 0)

    return ordinary_sampling, weighted_sampling

def figure_5_3():
    true_value = -0.27726
    episodes = 10000
    runs = 100
    error_ordinary = np.zeros(episodes)
    error_weighted = np.zeros(episodes)
    for i in tqdm(range(0, runs)):
        ordinary_sampling_, weighted_sampling_ = monte_carlo_off_policy(episodes)
        # get the squared error
        error_ordinary += np.power(ordinary_sampling_ - true_value, 2)
        error_weighted += np.power(weighted_sampling_ - true_value, 2)
    error_ordinary /= runs
    error_weighted /= runs

    plt.plot(error_ordinary, label='Ordinary Importance Sampling')
    plt.plot(error_weighted, label='Weighted Importance Sampling')
    plt.xlabel('Episodes (log scale)')
    plt.ylabel('Mean square error')
    plt.xscale('log')
    plt.legend()

    plt.savefig('../images/figure_5_3.png')
    plt.close()
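
Assuming the code above is collected into a single script and an ../images output directory exists (both assumptions on my part, following the save paths used in the code), the three figures can be reproduced with a standard entry point:

if __name__ == '__main__':
    figure_5_1()
    figure_5_2()
    figure_5_3()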
