Saving agents, resuming training from where it stopped, and solving the full experience buffer problem

Save Candidate Agents
During training, you can save candidate agents that meet conditions you specify in the SaveAgentCriteria and SaveAgentValue options of your rlTrainingOptions object. For instance, you can save any agent whose episode reward exceeds a certain value, even if the overall condition for terminating training is not yet satisfied. For example, save agents when the episode reward is greater than 100.

opt = rlTrainingOptions('SaveAgentCriteria',"EpisodeReward",'SaveAgentValue',100);
train stores saved agents in a MAT-file in the folder you specify using the SaveAgentDirectory option of rlTrainingOptions. Saved agents can be useful, for instance, to test candidate agents generated during a long-running training process. For details about saving criteria and saving location, see rlTrainingOptions.
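Putting these options together, a minimal sketch of a training call might look like the following. The environment env, the agent agent, the MaxEpisodes value, and the folder name savedAgents are assumptions used for illustration only, not part of the original example.

% Sketch only: env and agent are assumed to have been created earlier.
opt = rlTrainingOptions( ...
    'MaxEpisodes',500, ...                     % placeholder episode budget
    'SaveAgentCriteria',"EpisodeReward", ...   % criterion checked after each episode
    'SaveAgentValue',100, ...                  % save whenever the episode reward exceeds 100
    'SaveAgentDirectory',"savedAgents");       % folder where train stores the saved agents
% train writes each qualifying agent to a MAT-file inside savedAgents.
trainingStats = train(agent,env,opt);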

After training is complete, you can save the final trained agent from the MATLAB® workspace using the save function. For example, save the agent stored in the variable agent to the file finalAgent.mat in the folder specified by the SaveAgentDirectory option.

save(opt.SaveAgentDirectory + "/finalAgent.mat",'agent')
By default, when DDPG and DQN agents are saved, the experience buffer data is not saved. If you plan to further train your saved agent, you can start training with the previous experience buffer as a starting point. In this case, set the SaveExperienceBufferWithAgent option to true. For some agents, such as those with large experience buffers and image-based observations, the memory required for saving the experience buffer is large. In these cases, you must ensure that enough memory is available for the saved agents.
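To continue training from the saved agent later, a minimal sketch (assuming the same environment object env and the training options opt are still available in the workspace) could be:

% Load the agent that was saved above; "agent" is the variable name used in the save call.
data = load(opt.SaveAgentDirectory + "/finalAgent.mat");
agent = data.agent;
% Resume training. If the experience buffer was saved with the agent
% (SaveExperienceBufferWithAgent = true), training builds on the previously
% collected experiences instead of starting from an empty buffer.
trainingStats = train(agent,env,opt);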

 

After training finished, the following error appeared when running a simulation:

Not enough room in the buffer to store the new experiences. Make sure the bufferSize argument is big enough. 

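One likely cause is that the agent's experience buffer is too small for the number of experiences being stored. A possible remedy, sketched below under the assumption of a DDPG agent, is to recreate the agent options with a larger ExperienceBufferLength before constructing the agent; the value 1e6 and the variable names actor and critic are placeholders.

% Hypothetical fix: enlarge the replay buffer so new experiences fit.
agentOpts = rlDDPGAgentOptions( ...
    'ExperienceBufferLength',1e6, ...          % placeholder; size to the expected number of steps
    'SaveExperienceBufferWithAgent',true);     % keep the buffer when saving, as described above
% agent = rlDDPGAgent(actor,critic,agentOpts); % actor and critic from the original setup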

 

Deploying the Function

 
