Introduction

Policy gradient methods are among the core algorithms of reinforcement learning: they parameterize the policy directly and optimize those parameters to maximize the expected return. Below is a simple example implementing a basic policy gradient algorithm (REINFORCE) in Python with TensorFlow.
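For reference, the estimator that the code below implements is the Monte Carlo (REINFORCE) form of the policy gradient theorem, which in LaTeX notation reads

\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\Big[\sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, G_t\Big]

where G_t is the discounted return collected from step t onward. The training loss in the code is the negative of the bracketed term, estimated from a single episode.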

Code Example

import numpy as np
import tensorflow as tf
from environment import SimpleEnv  # hypothetical environment module

# Define the policy network
class PolicyNetwork(tf.keras.Model):
    def __init__(self):
        super(PolicyNetwork, self).__init__()
        self.dense1 = tf.keras.layers.Dense(128, activation='relu')
        self.dense2 = tf.keras.layers.Dense(64, activation='relu')
        # 'output' is a reserved attribute on tf.keras.Model, so use a different name
        self.action_head = tf.keras.layers.Dense(2, activation='softmax')  # 2 actions
    
    def call(self, state):
        x = self.dense1(state)
        x = self.dense2(x)
        return self.action_head(x)

# Initialize the environment, policy network, and optimizer
env = SimpleEnv()
policy = PolicyNetwork()
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)

# Training loop
for episode in range(1000):
    states, actions, rewards = [], [], []
    state = env.reset()
    while True:
        state_tensor = tf.convert_to_tensor([state], dtype=tf.float32)  # add a batch dimension
        action_probs = policy(state_tensor)
        # tf.random.categorical expects logits, so pass log-probabilities
        action = int(tf.random.categorical(tf.math.log(action_probs), 1)[0, 0])
        next_state, reward, done = env.step(action)
        states.append(state)
        actions.append(action)
        rewards.append(reward)
        if done:
            break
        state = next_state
    
    # Compute the discounted returns
    discounted_rewards = []
    cumulative_reward = 0
    for reward in reversed(rewards):
        cumulative_reward = reward + 0.99 * cumulative_reward
        discounted_rewards.append(cumulative_reward)
    discounted_rewards = np.array(discounted_rewards[::-1], dtype=np.float32)  # restore original time order
    
    # Compute the REINFORCE loss and update the policy
    states_tensor = tf.convert_to_tensor(states, dtype=tf.float32)
    actions_one_hot = tf.one_hot(actions, 2)
    with tf.GradientTape() as tape:
        probs = policy(states_tensor)
        # Log-probability of each action that was actually taken
        log_probs = tf.math.log(tf.reduce_sum(probs * actions_one_hot, axis=1) + 1e-8)
        # Weight the log-probabilities by the discounted returns (REINFORCE objective)
        loss = -tf.reduce_sum(log_probs * discounted_rewards)
    grads = tape.gradient(loss, policy.trainable_variables)
    optimizer.apply_gradients(zip(grads, policy.trainable_variables))
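
The environment module and SimpleEnv class above are hypothetical placeholders, as noted in the import comment. Any environment whose reset() returns an initial state and whose step(action) returns (next_state, reward, done) will work with the training loop; a minimal illustrative sketch of such an interface, with a made-up one-dimensional state and two discrete actions, might look like this:

import numpy as np

# Hypothetical stand-in for the SimpleEnv assumed by the training loop:
# reset() returns an initial state, step(action) returns (next_state, reward, done).
class SimpleEnv:
    def __init__(self, max_steps=200):
        self.max_steps = max_steps
        self.steps = 0
        self.position = 0.0

    def reset(self):
        self.steps = 0
        self.position = 0.0
        return np.array([self.position], dtype=np.float32)

    def step(self, action):
        # Two discrete actions: 0 moves left, 1 moves right (toy dynamics, for illustration only)
        self.position += 0.1 if action == 1 else -0.1
        self.steps += 1
        reward = 1.0 if self.position > 0 else 0.0
        done = self.steps >= self.max_steps or abs(self.position) > 1.0
        return np.array([self.position], dtype=np.float32), reward, done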

Key Steps Explained

  1. Policy network construction
    Fully connected layers map the state to a probability distribution over the actions.

  2. Environment interaction
    SimpleEnv simulates the environment; at each step the state, the chosen action, and the reward are recorded (see the interface sketch right after the code example).

  3. Discounted return computation
    The discounted_rewards weight the log-probabilities of the chosen actions and steer the policy update (a common normalization refinement is sketched after this list).

  4. Gradient optimization
    Backpropagation minimizes the negative return-weighted log-likelihood, which is equivalent to maximizing expected return and improves the policy over episodes.
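
As noted in step 3, a common refinement that is not part of the example above is to normalize the discounted returns before they are used as weights; this reduces the variance of the gradient estimate. A minimal sketch, assuming discounted_rewards is the NumPy array computed in the training loop:

import numpy as np

def normalize_returns(discounted_rewards, eps=1e-8):
    # Center the returns and scale them to unit variance; this acts as a
    # simple baseline and typically stabilizes the policy gradient update.
    returns = np.asarray(discounted_rewards, dtype=np.float32)
    return (returns - returns.mean()) / (returns.std() + eps)

# Usage inside the training loop, just before the loss is computed:
# discounted_rewards = normalize_returns(discounted_rewards)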

Further Reading

For a deeper treatment of policy gradient theory, see the reinforcement learning fundamentals tutorial.