For now this article is mainly a set of notes for myself. Over the next while I will gradually record how I learned to use TensorFlow + Keras to train neural networks to play games on their own; if it also happens to help someone else, so much the better. Please be kind. In the previous post we already found the data that needs to be fed into the neural network. In this post we will look at how to write Q-learning, a simple reinforcement learning algorithm, and apply it to the simple game from before.

I found a collection of Morvan's videos on Youku that covers not only Keras but all sorts of machine learning topics; they are easy to follow and very helpful. I also watched a DQN video that gives a short, to-the-point introduction to how to build a neural network that plays games.

Next, let's start writing a program with Q-learning; the pseudocode of the Q-learning algorithm is shown in the figure.

```python
import numpy as np
import pandas as pd
import time

np.random.seed(2)  # reproducible

N_STATES = 6                  # the length of the 1-dimensional world
ACTIONS = ['left', 'right']   # available actions
EPSILON = 0.9                 # greedy policy
ALPHA = 0.1                   # learning rate
GAMMA = 0.9                   # discount factor
MAX_EPISODES = 13             # maximum episodes
FRESH_TIME = 0.3              # refresh time for one move


def build_q_table(n_states, actions):
    table = pd.DataFrame(
        np.zeros((n_states, len(actions))),  # q_table initial values
        columns=actions,                     # actions' names
    )
    # print(table)    # show table
    return table


def choose_action(state, q_table):
    # This is how to choose an action
    state_actions = q_table.iloc[state, :]
    if (np.random.uniform() > EPSILON) or ((state_actions == 0).all()):
        # act non-greedily, or this state's actions have no value yet
        action_name = np.random.choice(ACTIONS)
    else:
        # act greedily; use idxmax instead of argmax, which means something different in newer pandas
        action_name = state_actions.idxmax()
    return action_name


def get_env_feedback(S, A):
    # This is how the agent interacts with the environment
    if A == 'right':              # move right
        if S == N_STATES - 2:     # terminate: the reward is only given on reaching the treasure
            S_ = 'terminal'
            R = 1
        else:
            S_ = S + 1
            R = 0
    else:                         # move left: the reward for moving left is always 0
        R = 0
        if S == 0:
            S_ = S                # hit the wall
        else:
            S_ = S - 1
    return S_, R


def update_env(S, episode, step_counter):
    # This is how the environment gets updated
    env_list = ['-'] * (N_STATES - 1) + ['T']   # '-----T' is our environment
    if S == 'terminal':
        interaction = 'Episode %s: total_steps = %s' % (episode + 1, step_counter)
        print('\r{}'.format(interaction), end='')
        time.sleep(2)
        print('\r                                ', end='')
    else:
        env_list[S] = 'o'
        interaction = ''.join(env_list)
        print('\r{}'.format(interaction), end='')
        time.sleep(FRESH_TIME)


def rl():
    # main part of the RL loop
    q_table = build_q_table(N_STATES, ACTIONS)   # build an all-zero Q-table using pandas
    for episode in range(MAX_EPISODES):          # main loop: how many episodes to train, i.e. how many times the treasure is found
        step_counter = 0
        S = 0
        is_terminated = False
        update_env(S, episode, step_counter)     # refresh the environment display
        while not is_terminated:                 # keep searching until the treasure is found
            A = choose_action(S, q_table)        # pick the next action from the Q-table
            S_, R = get_env_feedback(S, A)       # take that action and get the reward and next state
            q_predict = q_table.loc[S, A]        # the Q-table's current estimate, i.e. the Q value of taking this action in the current state
            if S_ != 'terminal':
                q_target = R + GAMMA * q_table.iloc[S_, :].max()   # GAMMA is the discount factor applied to the best Q value of the next state
            else:
                q_target = R                     # next state is terminal
                is_terminated = True             # terminate this episode
            q_table.loc[S, A] += ALPHA * (q_target - q_predict)    # update: what changes is the Q value of the state-action pair we just took
            S = S_                               # move to the next state

            update_env(S, episode, step_counter + 1)
            step_counter += 1
    return q_table


if __name__ == "__main__":
    q_table = rl()
    print('\r\nQ-table:\n')
    print(q_table)
```

This is only a very simple use of Q-learning; for a detailed walkthrough see Morvan's video. I have added some extra comments to the program to make it easier to follow.
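The heart of the pseudocode is the single update rule that the script above implements. Below is a minimal, self-contained sketch of just that tabular update step; the names here (q_update, the q dictionary) are my own for illustration and are not part of Morvan's code.

```python
# Minimal sketch of the tabular Q-learning update used in the script above (illustrative only).
# q is a plain dict mapping (state, action) pairs to values; alpha and gamma play the
# roles of ALPHA and GAMMA in the script.
def q_update(q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9, terminal=False):
    q_predict = q.get((s, a), 0.0)      # current estimate Q(s, a)
    if terminal:
        q_target = r                    # no future value beyond the terminal state
    else:
        q_target = r + gamma * max(q.get((s_next, a2), 0.0) for a2 in actions)
    q[(s, a)] = q_predict + alpha * (q_target - q_predict)   # move the estimate toward the target
    return q


# One update on the six-state world: taking 'right' from state 4 reaches the treasure.
q = q_update({}, 4, 'right', 1, 'terminal', ['left', 'right'], terminal=True)
print(q)   # {(4, 'right'): 0.1}
```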
Next, adapting the code from Morvan's videos, I tried to use Q-learning to play the CartPole-v0 game from earlier. Because the state this game returns is a continuous variable rather than one of a fixed number of discrete states, the Q-table in the Q-learning algorithm keeps adding new entries and the algorithm never achieves its original purpose, so this approach does not work (a quick check of the Q-table's size, shown at the end of this post, makes the problem obvious). I am still posting the source code, as preparation for later algorithms.

```python
# -*- coding: UTF-8 -*-
from Qlearning import QLearningTable
import gym

if __name__ == '__main__':
    print('Start learning')
    RL = QLearningTable(actions=list(range(2)))   # instance of the Q-learning class; learning rate and other parameters can be adjusted here
    env = gym.make('CartPole-v1')
    # env = gym.make('AirRaid-ram-v0')
    for i_episode in range(2000):
        observation = env.reset()                 # old gym API: reset() returns only the observation
        for t in range(1000):
            env.render()
            # print(observation)
            action = RL.choose_action(str(observation))   # use the Q-table to choose the next action
            # action = env.action_space.sample()
            # print(action)
            observation_, reward, done, info = env.step(action)   # pass the action to the environment and get the real reward and next observation (old gym API: four return values)
            RL.learn(str(observation), action, reward, str(observation_))   # update the Q-table using the real reward and observation together with the current estimate
            observation = observation_            # actually move to the next state
            if done:
                print("Episode finished after {} timesteps".format(t + 1))
                break
```

This is the Q-learning algorithm module:

```python
# -*- coding: UTF-8 -*-
import numpy as np
import pandas as pd


class QLearningTable:
    def __init__(self, actions, learning_rate=0.01, reward_decay=0.9, e_greedy=0.9):
        self.actions = actions       # a list of actions
        self.lr = learning_rate
        self.gamma = reward_decay
        self.epsilon = e_greedy
        self.q_table = pd.DataFrame(columns=self.actions, dtype=np.float64)

    def choose_action(self, observation):
        self.check_state_exist(observation)
        # action selection
        if np.random.uniform() < self.epsilon:
            # choose the best action
            state_action = self.q_table.loc[observation, :]
            # shuffle first so that ties between actions with the same value are broken randomly
            state_action = state_action.reindex(np.random.permutation(state_action.index))
            action = state_action.idxmax()
        else:
            # choose a random action
            action = np.random.choice(self.actions)
        return action

    def learn(self, s, a, r, s_):
        self.check_state_exist(s_)
        q_predict = self.q_table.loc[s, a]
        if s_ != 'terminal':
            q_target = r + self.gamma * self.q_table.loc[s_, :].max()   # next state is not terminal
        else:
            q_target = r                                                # next state is terminal
        self.q_table.loc[s, a] += self.lr * (q_target - q_predict)      # update

    def check_state_exist(self, state):
        if state not in self.q_table.index:
            # append the new state to the Q-table as an all-zero row
            # (note: DataFrame.append was removed in pandas 2.0; newer pandas needs pd.concat here)
            self.q_table = self.q_table.append(
                pd.Series(
                    [0] * len(self.actions),
                    index=self.q_table.columns,
                    name=state,
                )
            )
```

As this experiment confirms, Q-learning is only suitable when there is a limited number of states; it is of little use for continuous values. Once you understand the algorithm this should be easy to see, but I figured an experiment couldn't hurt, so I ran it anyway.

In this post we learned Q-learning and found that it does not apply to situations like our games, where the number of states is enormous. It is somewhat useful for maze solving, but mazes can already be handled by search algorithms such as DFS, so Q-learning feels more like an introductory, heuristic-style algorithm in reinforcement learning without much practical use. Then again, I am only a beginner, so I may well be wrong. Haha. Please be kind about this post in particular. In the next one we will use Sarsa and Sarsa-lambda to play a maze game.
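To make the earlier point about the ever-growing Q-table concrete, here is the quick check mentioned above. It is a hypothetical addition of mine, run in the same session and assuming RL is the QLearningTable instance created by the CartPole script.

```python
# Hypothetical check, not in the original script: count how many distinct stringified
# observations ended up as rows of the Q-table. Because CartPole's continuous observations
# almost never repeat exactly, str(observation) produces a new row on nearly every step,
# so the values learned for one row are essentially never reused.
print('rows in the Q-table:', len(RL.q_table.index))
```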