Python OpenAI Gym Intermediate Tutorial: A Hands-On Reinforcement Learning Project

In this post, we walk through a practical project that shows how to apply a reinforcement learning algorithm in OpenAI Gym. We pick a simple but classic problem: CartPole, where the goal is to keep a pole balanced upright on a moving cart. We will solve it with the Deep Q-Network (DQN) algorithm.

1. Installing Dependencies

First, make sure the required dependencies are installed. CartPole is part of Gym's classic_control suite, so the box2d extra is not needed. Since the code below uses the pre-0.26 Gym API (reset() returning only the observation, step() returning four values), we pin the Gym version:

pip install "gym<0.26" tensorflow
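
As a quick sanity check (a minimal sketch that only confirms the install), create the environment and inspect its spaces:

import gym

# Create the environment and inspect its observation/action spaces
env = gym.make("CartPole-v1")
print(env.observation_space)  # Box(4,): cart position/velocity, pole angle/velocity
print(env.action_space)       # Discrete(2): push the cart left or right

state = env.reset()  # pre-0.26 API: reset() returns just the observation
print(state.shape)   # (4,)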

2. The Reinforcement Learning Project

2.1 Creating the DQN Model

We will use TensorFlow to build a simple deep Q-network: two small fully connected hidden layers that map the 4-dimensional CartPole state to one Q-value per action.

import tensorflow as tf
from tensorflow.keras import layers, models

class DQN(models.Model):
    def __init__(self, num_actions):
        super(DQN, self).__init__()
        # Two hidden layers, plus a linear output layer with one Q-value per action
        self.layer1 = layers.Dense(24, activation='relu')
        self.layer2 = layers.Dense(24, activation='relu')
        self.output_layer = layers.Dense(num_actions, activation='linear')

    def call(self, state):
        x = self.layer1(state)
        x = self.layer2(x)
        return self.output_layer(x)
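
As a quick smoke test (not part of the project code), we can instantiate the model and run a dummy batch through it to confirm it emits one Q-value per action:

# Hypothetical check of the DQN class defined above
num_actions = 2                         # CartPole has two discrete actions
model = DQN(num_actions)

dummy_batch = tf.random.normal((1, 4))  # a batch of one 4-dimensional state
q_values = model(dummy_batch)
print(q_values.shape)                   # (1, 2): one Q-value per action
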
2.2 Creating the Experience Replay Buffer

To train the DQN model, we use an experience replay buffer that stores past transitions. Sampling random minibatches from this buffer decorrelates consecutive experiences and stabilizes training.

import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = deque(maxlen=capacity)

    def add(self, experience):
        self.buffer.append(experience)

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return (
            tf.stack(states, axis=0),                       # (batch, state_dim)
            tf.convert_to_tensor(actions, dtype=tf.int32),  # tf.one_hot needs integer indices
            tf.convert_to_tensor(rewards, dtype=tf.float32),
            tf.stack(next_states, axis=0),
            tf.convert_to_tensor(dones, dtype=tf.float32),  # float so (1 - dones) masks terminals
        )
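
A short, hypothetical usage example (the fake transitions are for illustration only) shows the shapes and dtypes the sampler returns:

# Fill the buffer with dummy transitions and sample a batch
buffer = ReplayBuffer(capacity=100)
for _ in range(40):
    s = tf.random.normal((4,))
    s_next = tf.random.normal((4,))
    buffer.add((s, 0, 1.0, s_next, False))  # (state, action, reward, next_state, done)

states, actions, rewards, next_states, dones = buffer.sample(batch_size=32)
print(states.shape, actions.dtype)  # (32, 4) <dtype: 'int32'>
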
2.3 Training the DQN

Next we define the training function. It implements the standard DQN loop: actions are chosen epsilon-greedily, minibatches from the replay buffer are regressed toward the Bellman target r + γ · max Q_target(s', ·) (zeroed at terminal states), and the target network is synchronized with the online network every target_update_frequency episodes.

import numpy as np
import gym

def train_dqn(env, model, target_model, replay_buffer, num_episodes=1000,
              batch_size=32, gamma=0.99, target_update_frequency=100):
    optimizer = tf.optimizers.Adam(learning_rate=0.001)
    huber_loss = tf.keras.losses.Huber()
    epsilon = 1.0
    epsilon_decay = 0.995
    min_epsilon = 0.01

    for episode in range(1, num_episodes + 1):
        state = env.reset()
        state = tf.convert_to_tensor(state, dtype=tf.float32)
        total_reward = 0

        while True:
            # Select an action with the epsilon-greedy policy
            if np.random.rand() < epsilon:
                action = env.action_space.sample()
            else:
                q_values = model(state[None, :])
                action = int(tf.argmax(q_values[0]).numpy())

            next_state, reward, done, _ = env.step(action)
            next_state = tf.convert_to_tensor(next_state, dtype=tf.float32)
            total_reward += reward

            replay_buffer.add((state, action, reward, next_state, done))
            state = next_state

            # Experience replay: train on a random minibatch of past transitions
            if len(replay_buffer.buffer) >= batch_size:
                states, actions, rewards, next_states, dones = replay_buffer.sample(batch_size)
                with tf.GradientTape() as tape:
                    q_values = model(states)
                    next_q_values = target_model(next_states)
                    # Bellman target: r + gamma * max_a' Q_target(s', a'), zeroed at terminal states
                    target_q_values = rewards + gamma * tf.reduce_max(next_q_values, axis=1) * (1 - dones)
                    # Q-values of the actions actually taken
                    selected_q_values = tf.reduce_sum(q_values * tf.one_hot(actions, env.action_space.n), axis=1)
                    loss = huber_loss(target_q_values, selected_q_values)
                gradients = tape.gradient(loss, model.trainable_variables)
                optimizer.apply_gradients(zip(gradients, model.trainable_variables))

            if done:
                epsilon = max(epsilon * epsilon_decay, min_epsilon)
                print(f"Episode: {episode}, Total Reward: {total_reward}, Epsilon: {epsilon:.3f}")
                break

        # Sync the target network once every target_update_frequency episodes
        if episode % target_update_frequency == 0:
            target_model.set_weights(model.get_weights())
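
After (or during) training it is useful to watch how the greedy policy performs without exploration. The helper below is a minimal sketch; evaluate_policy is our own name, not part of Gym or the code above:

def evaluate_policy(env, model, num_episodes=10):
    """Run the greedy policy (no epsilon) and report the average return."""
    returns = []
    for _ in range(num_episodes):
        state = tf.convert_to_tensor(env.reset(), dtype=tf.float32)
        episode_return = 0.0
        while True:
            # Always pick the action with the highest predicted Q-value
            q_values = model(state[None, :])
            action = int(tf.argmax(q_values[0]).numpy())
            next_state, reward, done, _ = env.step(action)
            episode_return += reward
            state = tf.convert_to_tensor(next_state, dtype=tf.float32)
            if done:
                break
        returns.append(episode_return)
    return sum(returns) / len(returns)
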
2.4 The Main Function

Finally, we define a main function that wires everything together and runs the project. Note that subclassed Keras models must be built (by a forward pass) before their weights can be copied.

if __name__ == "__main__":
    # Create the environment and the online/target networks
    env = gym.make("CartPole-v1")
    model = DQN(env.action_space.n)
    target_model = DQN(env.action_space.n)

    # Build both models with a dummy forward pass so their weights exist,
    # then copy the online weights into the target network
    dummy_state = tf.zeros((1, env.observation_space.shape[0]))
    model(dummy_state)
    target_model(dummy_state)
    target_model.set_weights(model.get_weights())

    # Create the experience replay buffer
    replay_buffer = ReplayBuffer(capacity=10000)

    # Train the DQN model
    train_dqn(env, model, target_model, replay_buffer, num_episodes=500)
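
Once training finishes, you may want to persist the learned weights. A minimal sketch using the standard Keras save/load API (the file name is arbitrary):

# Save the trained online network and reload it into a fresh model
model.save_weights("dqn_cartpole.h5")

new_model = DQN(env.action_space.n)
new_model(tf.zeros((1, env.observation_space.shape[0])))  # build before loading
new_model.load_weights("dqn_cartpole.h5")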

3. Summary

In this project, we showed how to solve the classic CartPole problem in OpenAI Gym with a Deep Q-Network (DQN). We built a simple DQN model, implemented an experience replay buffer, and trained the agent. The project gives beginners a practical starting point and demonstrates the basic steps of using TensorFlow and OpenAI Gym for a reinforcement learning task. I hope this post helps you better understand and apply reinforcement learning algorithms.