Impoola: The Power of Average Pooling for Image-based Deep Reinforcement Learning

By Raphael Trumpp, Ansgar Schäfftlein, Mirco Theile, and Marco Caccamo

Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.

Presented at the Reinforcement Learning Conference (RLC), Edmonton, Alberta, Canada, August 5–9, 2025.


Abstract:

As image-based deep reinforcement learning tackles more challenging tasks, increasing model size has become an important factor in improving performance. Recent studies achieved this by focusing on the parameter efficiency of scaled networks, typically using Impala-CNN, a 15-layer ResNet-inspired network, as the image encoder. However, while Impala-CNN evidently outperforms older CNN architectures, potential advancements in network design for deep reinforcement learning-specific image encoders remain largely unexplored. We find that replacing the flattening of output feature maps in Impala-CNN with global average pooling leads to a notable performance improvement. This approach outperforms larger and more complex models in the Procgen Benchmark, particularly in terms of generalization. We call our proposed encoder model Impoola-CNN. A decrease in the network's translation sensitivity may be central to this improvement, as we observe the most significant gains in games without agent-centered observations. Our results demonstrate that network scaling is not just about increasing model size; efficient network design is also an essential factor.
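The core change described in the abstract can be illustrated with a minimal NumPy sketch. The feature-map shape below (32 channels on an 8x8 grid) and the 256-unit latent size are illustrative assumptions, not taken from the paper; the point is only to contrast flattening with global average pooling and the resulting size of the subsequent linear layer.

```python
import numpy as np

# Hypothetical output feature map of the encoder's final block:
# 32 channels on an 8x8 spatial grid (shapes are illustrative).
feature_map = np.random.rand(32, 8, 8)

# Flattening (as in Impala-CNN): keep every spatial position.
flat = feature_map.reshape(-1)          # 32 * 8 * 8 = 2048 values

# Global average pooling (as in Impoola-CNN): average over the
# spatial dimensions, leaving one value per channel. The result
# no longer depends on where in the grid a feature was activated.
pooled = feature_map.mean(axis=(1, 2))  # 32 values

# Weight count of a following linear layer to a 256-unit latent
# (bias omitted): pooling shrinks it by the spatial area (64x here).
print(flat.shape[0] * 256)    # 524288
print(pooled.shape[0] * 256)  # 8192
```

Besides the smaller linear layer, the averaging step discards absolute spatial position, which is consistent with the abstract's observation that the largest gains appear in games without agent-centered observations.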


Citation Information:

Raphael Trumpp, Ansgar Schäfftlein, Mirco Theile, and Marco Caccamo. "Impoola: The Power of Average Pooling for Image-based Deep Reinforcement Learning." Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.

BibTeX:
@article{trumpp2025impoola,
    title={Impoola: {T}he Power of Average Pooling for Image-based Deep Reinforcement Learning},
    author={Trumpp, Raphael and Sch{\"{a}}fftlein, Ansgar and Theile, Mirco and Caccamo, Marco},
    journal={Reinforcement Learning Journal},
    year={2025}
}