Benchmarks · Proof

Handle Big Files and Objects Quickly with Streaming and Caching, While Keeping Everything Secure.

The PumaMesh benchmark shows large files, datasets, model bundles, and mission packages moving quickly across distance while governance stays in the movement path.

The buyer takeaway is simple: speed, protection, policy, and audit can work together instead of becoming separate projects.

140 GB / <60 s · Large payload delivered cross-Pacific
240 GB / <90 s · Software artifacts, datasets, or model bundles
360 GB / <2.5 min · Mission packages and modern artifact sets
25.8 Gbps · Sustained throughput over long distance
220× SCP · Peak throughput advantage over single-stream transfer
230× rsync · Peak throughput advantage on the same path
11.5× S3 · Advantage over multipart upload at 50 GB
1 TB / ~6 min · Large-scale proof across distance
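The headline figures hang together arithmetically: at the 25.8 Gbps sustained rate, each payload size lands inside its quoted time window. A quick sanity check, assuming decimal units (1 GB = 10^9 bytes), which is an assumption on our part since the page does not state its unit convention:

```python
# Sanity-check the benchmark figures: transfer time = bits / (bits per second).
# Assumes decimal units (1 GB = 1e9 bytes) and the quoted 25.8 Gbps rate.
SUSTAINED_GBPS = 25.8

def transfer_seconds(payload_gb: float, gbps: float = SUSTAINED_GBPS) -> float:
    """Seconds to move payload_gb gigabytes at gbps gigabits per second."""
    return payload_gb * 8 / gbps

for payload_gb, quoted in [(140, "<60 s"), (240, "<90 s"),
                           (360, "<2.5 min"), (1000, "~6 min")]:
    print(f"{payload_gb} GB -> {transfer_seconds(payload_gb):.0f} s (quoted {quoted})")
```

Running the numbers gives roughly 43 s, 74 s, 112 s, and 310 s, each within its quoted window, so the per-payload claims are consistent with the sustained-throughput claim.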
Why It Works

Makes Full Use of Your Existing Network for Streaming, Caching, Searching, and Moving Files and Objects.

Legacy transfer tools often leave long-distance links underused. PumaMesh was built so large files, AI payloads, software artifacts, and regulated data can move quickly while policy and evidence stay attached.

Parallel Streams · QUIC

Use all of the link, not just one pipe

QUIC multiplexed streams push data across available long-haul capacity in parallel instead of serializing through a single connection. That is where the 220× SCP advantage comes from.
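The ceiling on a single stream is simple arithmetic: steady-state throughput cannot exceed the in-flight window divided by the round-trip time, and parallel streams multiply the effective window. A minimal sketch, using assumed illustrative values (roughly 150 ms RTT for a cross-Pacific path and a 16 MiB effective window per stream), not figures taken from this benchmark:

```python
# Why one stream starves a long-haul link: steady-state throughput is capped
# at window / RTT. Both figures below are illustrative assumptions, not
# benchmark measurements.
RTT_S = 0.150              # round-trip time in seconds (assumed)
WINDOW_BYTES = 16 * 2**20  # effective per-stream window, 16 MiB (assumed)

def ceiling_gbps(streams: int) -> float:
    """Aggregate throughput ceiling for N parallel streams, in Gbps."""
    return streams * WINDOW_BYTES * 8 / RTT_S / 1e9

print(f"1 stream:   {ceiling_gbps(1):.2f} Gbps")   # well under 1 Gbps
print(f"32 streams: {ceiling_gbps(32):.1f} Gbps")  # parallelism fills the link
```

Under these assumptions a single stream tops out below 1 Gbps on a 150 ms path, while a few dozen parallel streams can saturate a multi-tens-of-gigabit link, which is the mechanism behind the single-stream comparisons above.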

Open Standards · QUIC RFC 9000

Give technical teams something to inspect

QUIC is an IETF open standard (RFC 9000) — not a proprietary acceleration black box. Security and network teams can review exactly what is running on the wire.

Governance In the Path

Speed without weaker controls

Classification, policy, and audit stay in the loop. Faster movement does not mean looser governance.

Infographic comparing TCP single-stream to QUIC parallel streams: one grey pipe versus four teal tubes with 220x faster label in gold
Test Setup

Real distance, real payload pressure.

The benchmark runs on cloud infrastructure across real distance with incompressible data, so the result reflects transfer behavior instead of synthetic compression gain.

Environment

Cross-Pacific cloud path

The path runs from Ohio to Tokyo, giving buyers proof across meaningful distance and latency.

Test Data

Incompressible data

The test data does not reward synthetic compression tricks, so the result reflects movement behavior.

Transport · QUIC

Built for distributed data across real distance

QUIC's congestion control and parallel stream design are built for high-latency long-haul paths — the kind where TCP gives up bandwidth and SCP leaves the link sitting idle.

Measured Result

Large movement completed in minutes, not hours.

On the measured cloud path, PumaMesh sustained throughput that makes large payload movement practical for AI, regulated exchange, software delivery, and mission workflows.

1 TB / ~6 min

PumaMesh crossed the Pacific with 1 TB in roughly six minutes, turning a long-haul transfer into an operationally useful workflow instead of a waiting period.

Comparisons

Why legacy transfer creates business delay.

On the same path and data, legacy transfer patterns leave teams waiting even when the network has capacity available. The tools below represent the most common ways teams move large files today — each is a well-established standard, and each hits a ceiling that QUIC-based transfer does not.

SCP · Secure Copy Protocol

Default transfer leaves capacity unused

SCP is the standard command-line tool for copying files securely between hosts over SSH, used widely in DevOps, government, and research environments. Because it moves data through a single TCP stream, it cannot use more than a small slice of a high-bandwidth link, especially over long distances. On the measured path, SCP used only a fraction of the available link, which is why long-haul movement still blocks teams.

rsync · Remote Sync

Optimization tricks do not fix every payload

rsync is a widely used open-source utility that synchronizes files between machines, often over SSH. It is popular because it only transfers the parts of files that changed, which saves bandwidth on text, code, and compressible data. With incompressible data, such as encrypted archives, model weights, or pre-compressed binaries, that delta optimization disappears, and in this benchmark rsync did not meaningfully improve long-haul transfer time.
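The effect is easy to reproduce: already-random bytes (encrypted archives, model weights, pre-compressed binaries) do not shrink under a general-purpose compressor. The sketch below uses zlib as a stand-in for compression-based transfer savings; it is an illustration of the payload property, not of rsync itself:

```python
import os
import zlib

# Incompressible payloads defeat compression-based transfer savings.
# zlib stands in for any general-purpose compressor here.
payload = os.urandom(1_000_000)          # stand-in for encrypted/model-weight bytes
text = b"the quick brown fox " * 50_000  # compressible text of similar size

random_ratio = len(zlib.compress(payload)) / len(payload)
text_ratio = len(zlib.compress(text)) / len(text)

print(f"random bytes:    {random_ratio:.3f}x of original")  # ~1.0, no savings
print(f"repetitive text: {text_ratio:.3f}x of original")    # far below 1.0
```

Random input comes back at roughly its original size (usually slightly larger, due to framing overhead), so for these payloads a transfer tool's speed depends entirely on how well it drives the link, not on compression or deltas.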

boto3 multipart · AWS S3 SDK

Parallel upload helps, but still leaves room

boto3 is the official AWS SDK for Python, and multipart upload is its mechanism for breaking a large file into chunks and uploading them in parallel to S3. This is the most common way teams move large objects into cloud storage. Multipart upload improves on single-stream tools but still falls short of the measured PumaMesh result — hitting an 11.5× throughput gap at 50 GB.
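For context on what multipart upload actually does, the sketch below plans how a 50 GB object would split into parts. The part size and concurrency mirror the knobs boto3 exposes via `TransferConfig` (`multipart_chunksize`, `max_concurrency`); the specific values are illustrative choices, not the benchmark's actual configuration:

```python
import math

# How a 50 GB object splits under S3 multipart upload. Values are
# illustrative, not the benchmark's configuration.
OBJECT_BYTES = 50 * 10**9
PART_BYTES = 64 * 2**20  # 64 MiB parts (assumed; S3 allows up to 10,000 parts)
CONCURRENCY = 10         # parallel part uploads (assumed)

parts = math.ceil(OBJECT_BYTES / PART_BYTES)
waves = math.ceil(parts / CONCURRENCY)
print(f"{parts} parts, uploaded in ~{waves} waves of {CONCURRENCY}")
```

Part-level parallelism is why multipart beats single-stream tools, but each part still travels over its own HTTP connection with a per-stream throughput ceiling, which leaves room between the multipart result and the measured PumaMesh number.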

Protect · Understand · Move · Accelerate

Why the benchmark matters: security stays in the path.

The benchmark numbers matter because they include the controls buyers still need: protection, policy, audit, and operational visibility.

P

Protection on every path

Movement stays protected across direct and authorized forwarding paths. See the Mesh →

U

Policy before movement

File context helps decide where each payload is allowed to land before bytes leave the source.

M

Fast over distance

25.8 Gbps sustained on a long-distance cloud path.

A

Evidence as byproduct

Transfer evidence is created during the work, not assembled as a separate audit project.

Operational Proof

The proof lives where operators work.

Operators see movement, performance, direction, completion state, topology, and audit context in the product. The proof is operational, not just a lab artifact.

Transit

Per-transfer telemetry in-product

Bandwidth, file size, duration, direction, and completion state show up next to the transfer — no separate dashboard to stitch together.

Fabric

Topology behind every transfer

The Fabric view shows distributed nodes, long-haul paths, and the network context behind each throughput result.

Audit

Every run is evidence

Transfers are logged with operator, policy, and classification context — so performance results double as compliance evidence.

PumaMesh Transit view showing per-transfer bandwidth, file size, duration, and direction