
NASimJax speeds RL pentesting but exposes brittle methods

Pentesting
Published: Mon, Mar 23, 2026 • By Natalie Kestrel
NASimJax retools a network attack simulator in JAX, pushing throughput up to 100× and 1.6M steps/s on a single GPU. That speed exposes what scales and what breaks: a two-stage action picker beats masking on larger nets, sparse training improves generalisation, and Prioritised Level Replay can collapse without care.

Reinforcement learning (RL) for penetration testing promises automation, but it has been stuck behind slow simulators. NASimJax removes that brake. It reimplements a network attack simulator in JAX, runs everything on accelerators, and reports up to 100× higher environment throughput, reaching about 1.6 million steps per second on a single GPU. That kind of throughput turns weeks of training into hours and makes large sweeps possible.
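To see why JAX enables this, here is a minimal pure-Python sketch of the functional environment pattern the speedup relies on. In NASimJax the pure step function would be compiled with `jax.jit` and batched across thousands of environments with `jax.vmap`; the state fields and dynamics below are hypothetical toys, not the simulator's.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class EnvState:
    compromised: tuple  # per-host compromise flags (toy state)
    step: int

def env_step(state: EnvState, host: int) -> EnvState:
    # Pure function: new state out, no mutation. Purity is what lets
    # jax.jit compile the step and jax.vmap run thousands in parallel.
    flags = tuple(f or (i == host) for i, f in enumerate(state.compromised))
    return replace(state, compromised=flags, step=state.step + 1)

def batched_step(states, actions):
    # Stand-in for jax.vmap(env_step): apply the pure step across a batch.
    return [env_step(s, a) for s, a in zip(states, actions)]

states = [EnvState((False, False, False), 0) for _ in range(4)]
next_states = batched_step(states, [0, 1, 2, 0])
```

Because the step never mutates anything, the same code runs unchanged whether it drives one episode or a GPU-resident batch of thousands.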

What the research shows

With speed in hand, the authors probe scale and generalisation. Two results matter for practitioners thinking about attacker modelling. First, a two-stage action selection scheme (pick a host, then a per-host action) outperforms the usual flat action masking as networks grow. On 26-host networks it solves 82% of scenarios versus 66% for masking; at 40 hosts, 42% versus 14%. Second, training on sparser topologies creates an implicit curriculum that travels well. Policies trained this way generalise better zero-shot, including to denser networks they did not see during training.
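A minimal sketch of what a two-stage selector looks like, assuming one softmax head per stage; the function names and structure here are illustrative, not the paper's implementation:

```python
import math, random

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    z = sum(e)
    return [v / z for v in e]

def sample(probs, rng):
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

def two_stage_select(host_logits, action_logits, rng):
    # Stage 1: pick a host from |hosts| options.
    host = sample(softmax(host_logits), rng)
    # Stage 2: pick one of the fixed per-host actions for that host.
    action = sample(softmax(action_logits[host]), rng)
    # The joint log-prob factorises, so each head stays small even as
    # the flat action space (hosts x per-host actions) grows with scale.
    logp = (math.log(softmax(host_logits)[host])
            + math.log(softmax(action_logits[host])[action]))
    return host, action, logp
```

The flat-masking alternative would instead score all host-action pairs in one output layer, which is the part that degrades as networks grow.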

They also compare unsupervised environment design methods. Prioritised Level Replay (PLR) handles dense training distributions better than plain Domain Randomisation, keeping performance at larger scales by replaying harder instances. That is the good news. The bad news: PLR’s episode-reset behaviour collides with the two-stage selector’s credit assignment and can cause a near-total collapse at a particular density in 40-host tests. A variant that withholds exploratory gradient updates (PLR⊥) avoids the collapse.
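For readers unfamiliar with PLR, the core idea is to score training levels by a learning-potential proxy and replay high scorers more often. A minimal sketch, with buffer details that are illustrative assumptions rather than the paper's implementation:

```python
import random

class LevelReplayBuffer:
    # Minimal PLR-style buffer: levels scored by a learning-potential
    # proxy (e.g. magnitude of value loss); higher scores replay more.
    def __init__(self, capacity=100, replay_prob=0.5, seed=0):
        self.levels, self.scores = [], []
        self.capacity, self.replay_prob = capacity, replay_prob
        self.rng = random.Random(seed)

    def update(self, level, score):
        if level in self.levels:
            self.scores[self.levels.index(level)] = score
        elif len(self.levels) < self.capacity:
            self.levels.append(level); self.scores.append(score)
        else:
            i = self.scores.index(min(self.scores))
            self.levels[i], self.scores[i] = level, score

    def sample(self, new_level_fn):
        # With prob replay_prob, replay a stored level in proportion to
        # its score (the implicit curriculum); else draw a fresh level.
        if self.levels and self.rng.random() < self.replay_prob:
            return self.rng.choices(self.levels, weights=self.scores, k=1)[0]
        return new_level_fn()
```

The failure mode the paper reports lives around this loop: how episodes are reset when a replayed level is drawn interacts with how the two-stage selector assigns credit, which is exactly the kind of joint you only notice at scale.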

This is not just an engineering footnote. It is a reminder that scale exposes method interactions you will not spot on toy problems. If you are planning to chain curricula, replay, and action decompositions, test the joints, not just the parts.

What they don’t test

NASimJax frames automated pentesting as a Contextual Partially Observable Markov Decision Process (POMDP): each episode is a different network context, generated procedurally to be structurally diverse and, crucially, guaranteed solvable. That guarantee reads like a convenience for learning, but it is also a bias. Real networks are often messy, misconfigured, and sometimes effectively unsolvable under a fixed budget. Reward shaping choices, including scaling by the number of subnets and a sensitive-host bonus, further steer behaviour. Those are sensible for training stability; they also risk teaching policies to chase simulator-specific incentives.
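As an illustration of how such shaping might look in code (the constants and the exact functional form below are assumptions for exposition, not the paper's values):

```python
def shaped_reward(base_reward, num_subnets, sensitive_hosts_compromised,
                  sensitive_bonus=10.0):
    # Hypothetical shaping: normalise by network size so returns stay
    # comparable across procedurally generated contexts, plus a bonus
    # per sensitive host. Constants are illustrative only.
    return (base_reward / max(num_subnets, 1)
            + sensitive_bonus * sensitive_hosts_compromised)
```

Whatever the exact form, the point stands: every shaping term is an incentive the policy can learn to game, and none of them exists on a real network.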

There are other gaps between simulator and street. No active defenders, no detection, no rate limits, no deception. Services do not move, credentials do not churn, and timing does not fail in the way production does. The policies are evaluated in simulation only. There is no test on emulated or real networks, so the sim-to-real question is still wide open.

Scale is another caution. Forty hosts is a useful benchmark, not an enterprise. The two-stage approach clearly helps as the action space grows linearly, but even its 42% solve rate at 40 hosts is not something you would trust for coverage. The flat masking baseline is weak, so the real baseline to beat in practice would be a strong heuristic or scripted playbook. Also, hitting 1 billion training steps is great for research; it will still be out of reach for many internal teams, and the paper notes that JIT compilation overhead bites at small budgets.

Still, the platform itself is valuable. A fast, batched, accelerator-friendly environment with a clear API lowers friction for reproducible work and lets both offence modellers and defenders run bigger experiments. If you run an internal AI red team, this could stress-test segmentation assumptions and lateral-movement paths across varied topologies. Treat any reported generalisation with care, though: the “guaranteed-solvable” generator and reward shaping make transfer look easier than production will.

The open questions are the ones that matter operationally. How do these policies behave when scans are slow and noisy, when controls rate-limit probes, or when logs trigger response? Does the sparse-to-dense curriculum still help under those constraints? Can the two-stage selector scale to hundreds of hosts without new credit-assignment failures? NASimJax gives the field the speed to find out. Now it needs the realism to make the answers stick.

Additional analysis of the original ArXiv paper

📋 Original Paper Title and Abstract

NASimJax: GPU-Accelerated Policy Learning Framework for Penetration Testing

Authors: Raphael Simon, José Carrasquel, Wim Mees, and Pieter Libin
Penetration testing, the practice of simulating cyberattacks to identify vulnerabilities, is a complex sequential decision-making task that is inherently partially observable and features large action spaces. Training reinforcement learning (RL) policies for this domain faces a fundamental bottleneck: existing simulators are too slow to train on realistic network scenarios at scale, resulting in policies that fail to generalize. We present NASimJax, a complete JAX-based reimplementation of the Network Attack Simulator (NASim), achieving up to 100x higher environment throughput than the original simulator. By running the entire training pipeline on hardware accelerators, NASimJax enables experimentation on larger networks under fixed compute budgets that were previously infeasible. We formulate automated penetration testing as a Contextual POMDP and introduce a network generation pipeline that produces structurally diverse and guaranteed-solvable scenarios. Together, these provide a principled basis for studying zero-shot policy generalization. We use the framework to investigate action-space scaling and generalization across networks of up to 40 hosts. We find that Prioritized Level Replay better handles dense training distributions than Domain Randomization, particularly at larger scales, and that training on sparser topologies yields an implicit curriculum that improves out-of-distribution generalization, even on topologies denser than those seen during training. To handle linearly growing action spaces, we propose a two-stage action decomposition (2SAS) that substantially outperforms flat action masking at scale. Finally, we identify a failure mode arising from the interaction between Prioritized Level Replay's episode-reset behaviour and 2SAS's credit assignment structure. NASimJax thus provides a fast, flexible, and realistic platform for advancing RL-based penetration testing.

🔍 ShortSpan Analysis of the Paper

Problem

The paper studies automated penetration testing as a sequential, partially observable decision problem with large, combinatorial action spaces. Existing simulators are CPU-bound and too slow to generate the millions of environment interactions required for modern reinforcement learning (RL), which limits training scale, hyperparameter sweeps and the ability of policies to generalise to unseen networks. Slow environments therefore form a fundamental bottleneck for RL research in offensive security.

Approach

The authors present NASimJax, a complete JAX reimplementation and extension of a prior network attack simulator. NASimJax runs the full train-and-simulate pipeline on accelerators using JIT compilation and vectorised execution, exposes a Contextual POMDP formulation where each episode is a distinct network context, and provides a procedural network generator that yields structurally diverse, realistic and guaranteed-solvable scenarios. The environment uses a compact batched state representation and adheres to the Gymnax API. To address linearly growing action spaces the authors propose a two-stage action selection (2SAS) that first selects a host, then a per-host action. They also evaluate flat action masking as a baseline and compare unsupervised environment design methods: Domain Randomization (DR), Prioritized Level Replay (PLR) and a variant that withholds exploratory gradient updates (PLR⊥). Additional engineering choices include reward scaling by the number of subnets and a sensitive-host bonus to stabilise learning, and a Bayesian hyperparameter search over 250 trials. Experiments study throughput and learning on networks of 16, 26 and 40 hosts with large training budgets (up to 1 billion steps in some settings).

Key Findings

  • Throughput: NASimJax achieves up to 100× higher environment throughput versus the original implementation, reaching about 1.6 million steps per second on a single GPU, enabling much larger training budgets and broader hyperparameter search.
  • Action-space scaling: The two-stage action selection (2SAS) substantially outperforms flat action masking as networks grow. On 26-host networks 2SAS achieved an 82% solve rate versus 66% for masking; on 40-host networks 2SAS achieved 42% versus 14% for masking.
  • Generalisation and training distribution: Training on sparser topology distributions produces an implicit curriculum that improves zero-shot generalisation, often yielding better transfer even to denser evaluation networks than training directly on the target density.
  • UED methods: Prioritized Level Replay better handles dense training distributions than Domain Randomization, maintaining stronger performance at larger scales by exploiting replay to form an implicit curriculum.
  • Failure mode identified: A destructive interaction occurs between PLR’s episode-reset exploration behaviour and 2SAS’s joint credit assignment, producing near-total collapse for PLR-2SAS at a particular density in 40-host experiments. Withholding exploration updates during PLR (PLR⊥) avoids this collapse.

Limitations

Results are limited to simulated networks up to 40 hosts and the specific generative settings used. JIT compilation incurs overhead that affects small-budget runs. The two-stage decomposition introduces credit-assignment challenges under some replay regimes, and the work does not close the sim-to-real gap; policies are evaluated in simulation only. Further work is required to resolve the identified failure mode and to test transfer to real networks or emulated environments.

Why It Matters

NASimJax provides a fast, configurable platform for researching RL-based penetration testing, enabling experiments at scales previously infeasible. It supports principled study of zero-shot generalisation, action-space methods and curriculum effects, which is valuable for both offensive modelling and defensive robustness evaluation. The platform also highlights practical engineering failure modes and thus informs safer evaluation practices; at the same time it carries dual-use considerations because faster training of attack policies can facilitate stronger attacker models if misused.

