Handle Big Files and Objects Quickly with Streaming and Caching, While Keeping Everything Secure.
The PumaMesh benchmark proves that large files, datasets, model bundles, and mission packages can move quickly across real distance while governance stays in the movement path.
The buyer takeaway is simple: speed, protection, policy, and audit can work together instead of becoming separate projects.
Makes Full Use of Your Existing Network for Streaming, Caching, Searching, and Moving Files and Objects.
Legacy transfer tools often leave long-distance links underused. PumaMesh was built so large files, AI payloads, software artifacts, and regulated data can move quickly while policy and evidence stay attached.
Parallel Streams · QUIC
Use all of the link, not just one pipe
QUIC multiplexed streams push data across available long-haul capacity in parallel instead of serializing through a single connection. That is where the 220× SCP advantage comes from.
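The single-pipe vs. parallel-streams difference can be sketched in a few lines of Python. This is an illustrative model of how multiplexed transfer divides work into independent byte ranges, not PumaMesh code:

```python
def stream_ranges(total_bytes: int, streams: int) -> list[tuple[int, int]]:
    """Split a payload into contiguous [start, end) byte ranges,
    one per parallel stream."""
    base, extra = divmod(total_bytes, streams)
    ranges, start = [], 0
    for i in range(streams):
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

# A 1 GiB payload across 16 streams: each stream moves ~64 MiB on its own,
# so a stall on one stream no longer blocks the other fifteen.
ranges = stream_ranges(1 << 30, 16)
assert len(ranges) == 16
assert ranges[0][0] == 0 and ranges[-1][1] == 1 << 30
```

Because each range travels on its own stream, loss or backpressure on one stream does not serialize the whole transfer the way a single TCP pipe does.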
Open Standards · QUIC RFC 9000
Give technical teams something to inspect
QUIC is an IETF open standard (RFC 9000) — not a proprietary acceleration black box. Security and network teams can review exactly what is running on the wire.
Governance In the Path
Speed without weaker controls
Classification, policy, and audit stay in the loop. Faster movement does not mean looser governance.
Real distance, real payload pressure.
The benchmark runs on cloud infrastructure across real distance with incompressible data, so the result reflects transfer behavior instead of synthetic compression gain.
Environment
Cross-Pacific cloud path
The path spans Ohio to Tokyo, giving buyers proof across meaningful distance and latency.
Test Data
Incompressible data
The test data does not reward synthetic compression tricks, so the result reflects movement behavior.
Transport · QUIC
Built for distributed data across real distance
QUIC's congestion control and parallel stream design target high-latency long-haul paths — the kind where TCP gives up bandwidth and SCP leaves the link sitting idle.
Large movement completed in minutes, not hours.
On the measured cloud path, PumaMesh sustained throughput that makes large payload movement practical for AI, regulated exchange, software delivery, and mission workflows.
PumaMesh crossed the Pacific with 1 TB in roughly six minutes, turning a long-haul transfer into an operationally useful workflow instead of a waiting period.
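The headline figures hold up to quick arithmetic (assuming decimal terabytes):

```python
payload_bits = 1e12 * 8    # 1 TB (decimal) expressed in bits
rate_bps = 25.8e9          # 25.8 Gbps sustained throughput
seconds = payload_bits / rate_bps
minutes = seconds / 60
# ~5.2 minutes of pure transfer time at the sustained rate; "roughly six
# minutes" end to end leaves headroom for setup, policy checks, and teardown.
assert 5.0 < minutes < 5.4
```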
Why legacy transfer creates business delay.
On the same path and data, legacy transfer patterns leave teams waiting even when the network has capacity available. The tools below represent the most common ways teams move large files today — each is a well-established standard, and each hits a ceiling that QUIC-based transfer does not.
SCP · Secure Copy Protocol
Default transfer leaves capacity unused
SCP is the standard command-line tool for copying files securely between hosts over SSH, used widely in DevOps, government, and research environments. Because it moves data through a single TCP stream, it cannot use more than a small slice of a high-bandwidth link, especially over long distances. In the benchmark, SCP used only a fraction of the available link, which is why long-haul movement still blocks teams.
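Why a single TCP stream leaves a long-haul link idle is plain bandwidth-delay arithmetic. The numbers below are illustrative assumptions (a typical cross-Pacific RTT and a modest TCP window), not measurements from this benchmark:

```python
rtt_s = 0.150                # assumed cross-Pacific round-trip time, ~150 ms
window_bytes = 4 * 1024**2   # assumed 4 MiB effective TCP window
# A single stream can keep at most one window in flight per round trip:
max_throughput_bps = window_bytes * 8 / rtt_s
# ≈ 0.22 Gbps — a sliver of a 10+ Gbps link, regardless of link capacity.
assert max_throughput_bps < 0.25e9
```

Parallel streams sidestep this ceiling by keeping many windows in flight at once across the same path.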
rsync · Remote Sync
Optimization tricks do not fix every payload
rsync is a widely used open-source utility that synchronizes files between machines, often over SSH. It is popular because it transfers only the parts of files that changed, which saves bandwidth on text, code, and compressible data. With incompressible data — such as encrypted archives, model weights, or pre-compressed binaries — that delta optimization disappears, and in the benchmark rsync did not meaningfully improve the long-haul transfer.
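rsync's delta algorithm works by matching blocks of the old and new file by checksum; when a payload is re-encrypted or freshly compressed, virtually no blocks match and the sync degenerates into a full copy. A toy sketch of that effect (fixed-size blocks for simplicity, not rsync's actual rolling-checksum code):

```python
import hashlib
import os

def block_hashes(data: bytes, block: int = 4096) -> set[bytes]:
    """Hash fixed-size blocks, standing in for rsync's block matching."""
    return {hashlib.sha256(data[i:i + block]).digest()
            for i in range(0, len(data), block)}

old = os.urandom(1 << 20)  # stand-in for an encrypted 1 MiB archive
new = os.urandom(1 << 20)  # the same file after re-encryption: every byte differs
shared = block_hashes(old) & block_hashes(new)
# No common blocks -> the "delta" transfer sends essentially everything.
assert len(shared) == 0
```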
boto3 multipart · AWS S3 SDK
Parallel upload helps, but still leaves room
boto3 is the official AWS SDK for Python, and multipart upload is its mechanism for breaking a large file into chunks and uploading them in parallel to S3. This is the most common way teams move large objects into cloud storage. Multipart upload improves on single-stream tools but still falls short of the measured PumaMesh result — hitting an 11.5× throughput gap at 50 GB.
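Multipart upload's parallelism comes from splitting the object into parts, which boto3 exposes through `TransferConfig` (its `multipart_chunksize` and `max_concurrency` settings). The part-size and concurrency numbers below are illustrative, not the benchmark's configuration:

```python
import math

size = 50 * 1024**3        # the 50 GB payload tier (GiB assumed here)
part_size = 64 * 1024**2   # 64 MiB parts (S3 allows 5 MiB to 5 GiB per part)
parts = math.ceil(size / part_size)
# 800 parts, drained by a bounded worker pool. With, say, 10 concurrent
# workers only 10 parts are in flight at once — better than one stream,
# but still well short of saturating a long-haul link.
assert parts <= 10000      # S3's hard limit on parts per object
```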
Why the benchmark matters: security stays in the path.
The benchmark numbers matter because they include the controls buyers still need: protection, policy, audit, and operational visibility.
Protection on every path
Movement stays protected across direct and authorized forwarding paths. See the Mesh →
Policy before movement
File context helps decide where each payload is allowed to land before bytes leave the source.
Fast over distance
25.8 Gbps sustained on a long-distance cloud path.
Evidence as byproduct
Transfer evidence is created during the work, not assembled as a separate audit project.
The proof lives where operators work.
Operators see movement, performance, direction, completion state, topology, and audit context in the product. The proof is operational, not just a lab artifact.
Transit
Per-transfer telemetry in-product
Bandwidth, file size, duration, direction, and completion state show up next to the transfer — no separate dashboard to stitch together.
Fabric
Topology behind every transfer
The Fabric view shows distributed nodes, long-haul paths, and the network context behind each throughput result.
Audit
Every run is evidence
Transfers are logged with operator, policy, and classification context — so performance results double as compliance evidence.