
Releases: ipfs/kubo

v0.35.0

21 May 18:00 (commit a78d155)

Note

This release was brought to you by the Shipyard team.

Overview

This release brings significant UX and performance improvements to data onboarding, provisioning, and retrieval systems.

New configuration options let you customize the shape of UnixFS DAGs generated during the data import, control the scope of DAGs announced on the Amino DHT, select which delegated routing endpoints are queried, and choose whether to enable HTTP retrieval alongside Bitswap over Libp2p.

Continue reading for more details.

🗣 Discuss

If you have comments, questions, or feedback on this release, please post here.

If you experienced any bugs with the release, please post an issue.

🔦 Highlights

Opt-in HTTP Retrieval client

This release adds experimental support for retrieving blocks directly over HTTPS (HTTP/2), complementing the existing Bitswap over Libp2p.

The opt-in client enables Kubo to use delegated routing results with /tls/http multiaddrs, connecting to HTTPS servers that support Trustless HTTP Gateway's Block Responses (?format=raw, application/vnd.ipld.raw). Fetching blocks via HTTPS (HTTP/2) simplifies infrastructure and reduces costs for storage providers by leveraging HTTP caching and CDNs.

To enable this feature for testing and feedback, set:

$ ipfs config --json HTTPRetrieval.Enabled true

See HTTPRetrieval for more details.

Dedicated Reprovider.Strategy for MFS

The Mutable File System (MFS) in Kubo is a UnixFS filesystem managed with ipfs files commands. It supports familiar file operations like cp and mv within a folder-tree structure, automatically updating a MerkleDAG and a "root CID" that reflects the current MFS state. Files in MFS are protected from garbage collection, offering a simpler alternative to ipfs pin. This makes it a popular choice for tools like IPFS Desktop and the WebUI.

Previously, the pinned reprovider strategy required manual pin management: each dataset update meant pinning the new version and unpinning the old one. Now, new strategies, mfs and pinned+mfs, let users limit announcements to data explicitly placed in MFS. This simplifies updating datasets and announcing only the latest version to the Amino DHT.

Users relying on the pinned strategy can switch to pinned+mfs and use MFS alone to manage updates and announcements, eliminating the need for manual pinning and unpinning. We hope this makes it easier to publish just the data that matters to you.
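
For example, switching an existing node over is a single configuration change followed by a daemon restart (strategy names as described above):

$ ipfs config Reprovider.Strategy pinned+mfs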

See Reprovider.Strategy for more details.

Experimental support for MFS as a FUSE mount point

The MFS root (filesystem behind the ipfs files API) is now available as a read/write FUSE mount point at Mounts.MFS. This filesystem is mounted in the same way as Mounts.IPFS and Mounts.IPNS when running ipfs mount or ipfs daemon --mount.

Note that the operations supported by the MFS FUSE mountpoint are limited, since MFS doesn't store file attributes.
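
A minimal sketch of trying it out, assuming Mounts.MFS takes a mount path like its Mounts.IPFS and Mounts.IPNS siblings (/mfs here is an illustrative directory that must exist):

$ mkdir -p /mfs
$ ipfs config Mounts.MFS /mfs
$ ipfs daemon --mount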

See Mounts and docs/fuse.md for more details.

Grid view in WebUI

The WebUI, accessible at http://127.0.0.1:5001/webui/, now includes support for the grid view on the Files screen:

[Screenshot: grid view on the Files screen]

Enhanced DAG-Shaping Controls

This release advances CIDv1 support by introducing fine-grained control over UnixFS DAG shaping during data ingestion with the ipfs add command.

Wider DAG trees (more links per node, higher fanout, larger thresholds) are beneficial for large files and directories with many files, reducing tree depth and lookup latency in high-latency networks, but they increase node size, straining memory and CPU on resource-constrained devices. Narrower trees (lower link count, lower fanout, smaller thresholds) are preferable for smaller directories, frequent updates, or low-power clients, minimizing overhead and ensuring compatibility, though they may increase traversal steps for very large datasets.

Kubo now allows users to act on these tradeoffs and customize the width of the DAG created by the ipfs add command.

New DAG-Shaping ipfs add Options

Three new options allow you to override default settings for specific import operations, as shown in the combined example after this list:

  • --max-file-links: Sets the maximum number of child links for a single file chunk.
  • --max-directory-links: Defines the maximum number of child entries in a "basic" (single-chunk) directory.
    • Note: Directories exceeding this limit or the Import.UnixFSHAMTDirectorySizeThreshold are converted to HAMT-based (sharded across multiple blocks) structures.
  • --max-hamt-fanout: Specifies the maximum number of child nodes for HAMT internal structures.
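
For instance, a single import can opt into wider trees across all three dimensions (flags as listed above; the values and path are illustrative):

$ ipfs add -r --max-file-links=1024 --max-directory-links=1024 --max-hamt-fanout=1024 /path/to/data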

Persistent DAG-Shaping Import.* Configuration

You can set default values for these options using the corresponding configuration settings: Import.UnixFSFileMaxLinks, Import.UnixFSDirectoryMaxLinks, Import.UnixFSHAMTDirectoryMaxFanout, and Import.UnixFSHAMTDirectorySizeThreshold.
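
As a sketch, persisting a wider file DAG default for all future ipfs add runs might look like this (the value is illustrative):

$ ipfs config --json Import.UnixFSFileMaxLinks 1024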

Updated DAG-Shaping Import Profiles

This release updates the configuration profiles to incorporate the new Import.* settings:

  • Updated Profile: test-cid-v1 now includes the current defaults as explicit settings: Import.UnixFSFileMaxLinks=174, Import.UnixFSDirectoryMaxLinks=0, Import.UnixFSHAMTDirectoryMaxFanout=256, and Import.UnixFSHAMTDirectorySizeThreshold=256KiB
  • New Profile: test-cid-v1-wide adopts experimental directory DAG-shaping defaults, increasing the maximum file DAG width from 174 to 1024, HAMT fanout from 256 to 1024, and raising the HAMT directory sharding threshold from 256KiB to 1MiB, aligning with 1MiB file chunks.

Tip

Apply one of the CIDv1 test profiles with ipfs config profile apply test-cid-v1 or ipfs config profile apply test-cid-v1-wide.

Datastore Metrics Now Opt-In

To reduce overhead in the default configuration, datastore metrics are no longer enabled by default when initializing a Kubo repository with ipfs init.
Metrics prefixed with <dsname>_datastore (e.g., flatfs_datastore_..., leveldb_datastore_...) are not exposed unless explicitly enabled. For a complete list of affected default metrics, refer to prometheus_metrics_added_by_measure_profile.

Convenience opt-in profiles can be enabled at initi...
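
Assuming the flatfs-measure profile name from the Kubo config docs (check docs/config.md for the full list of measure profiles), opting back in on a fresh repository might look like:

$ ipfs init --profile flatfs-measure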


v0.35.0-rc2 (Pre-release)

15 May 23:25 (commit 623902e)

This RC release was brought to you by the Shipyard team.

Draft release notes: docs/changelogs/v0.35.md
Release status: #10760

v0.35.0-rc1 (Pre-release)

07 May 16:29 (commit 6e89271)

This RC release was brought to you by the Shipyard team.

Draft release notes: docs/changelogs/v0.35.md
Release status: #10760

v0.34.1

25 Mar 18:11 (commit 4649554)

This patch release was brought to you by the Shipyard team.

See the 0.34 Release Notes for the full list of changes since 0.33.x

v0.34.0

20 Mar 21:51 (commit 5cca561)

This release was brought to you by the Shipyard team.

🗣 Discuss

If you have comments, questions, or feedback on this release, please post here.

If you experienced any bugs with the release, please post an issue.

🔦 Highlights

AutoTLS now enabled by default for nodes with 1 hour uptime

Starting now, any publicly dialable Kubo node with a /tcp listener that remains online for at least one hour will receive a TLS certificate through the AutoTLS feature.
This occurs automatically, with no need for manual setup.

To bypass the 1-hour delay and enable AutoTLS immediately, users can explicitly opt-in by running the following commands:

$ ipfs config --json AutoTLS.Enabled true
$ ipfs config --json AutoTLS.RegistrationDelay 0

AutoTLS will remain disabled under the following conditions:

  • The node already has a manually configured /ws (WebSocket) listener
  • A private network is in use with a swarm.key
  • TCP or WebSocket transports are disabled, or there is no /tcp listener

To troubleshoot, use GOLOG_LOG_LEVEL="error,autotls=info".

For more details, check out the AutoTLS configuration documentation or dive deeper with the AutoTLS libp2p blog post.

New WebUI features

The WebUI, accessible at http://127.0.0.1:5001/webui/, now includes support for CAR file import and QR code sharing directly from the Files view. Additionally, the Peers screen has been updated with the latest ipfs-geoip dataset.

RPC and CLI command changes

  • ipfs config now validates JSON fields (#10679).
  • Deprecated the bitswap reprovide command. Make sure to switch to the modern routing reprovide. (#10677)
  • The stats reprovide command now shows additional stats for Routing.AcceleratedDHTClient, indicating the last and next reprovide times. (#10677)
  • ipfs files cp now performs a basic codec check and will error when the source is not valid UnixFS (only dag-pb and raw codecs are allowed in MFS).

Bitswap improvements from Boxo

This release includes performance and reliability improvements and fixes for minor resource leaks. One of the performance changes greatly improves the bitswap client's ability to operate under high load, which could previously result in an out-of-memory condition.

IPNS publishing TTL change

Many complaints about IPNS being slow are tied to the default --ttl in ipfs name publish, which was set to 1 hour. To address this, we've lowered the default IPNS record TTL during publishing to 5 minutes, matching similar TTL defaults in DNS. This update is now part of boxo/ipns (Go, boxo#859) and @helia/ipns (JS, helia#749).

Tip

IPNS TTL recommendations when even faster update propagation is desired:

  • As a Publisher: Lower the --ttl (e.g., ipfs name publish --ttl=1m) to further reduce caching delays. If using DNSLink, ensure the DNS TXT record TTL matches the IPNS record TTL.
  • As a Gateway Operator: Override publisher TTLs for faster updates using configurations like Ipns.MaxCacheTTL in Kubo or RAINBOW_IPNS_MAX_CACHE_TTL in Rainbow.

IPFS_LOG_LEVEL deprecated

The variable has been deprecated. Please use GOLOG_LOG_LEVEL instead for configuring logging levels.
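
For example, to run the daemon with info-level logging for all subsystems:

$ GOLOG_LOG_LEVEL="info" ipfs daemon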

Pebble datastore format update

If the pebble database format is not explicitly set in the config, it is automatically upgraded to the latest format version supported by the release of pebble used by Kubo. This ensures that the database format is sufficiently up to date to be compatible with a major version upgrade of pebble, which is necessary before upgrading to pebble v2.

Badger datastore update

An update was made to the badger v1 datastore that avoids the use of mmap in 32-bit environments, which has been seen to cause issues on some platforms. Please be aware that this could lead to a performance regression for users of badger in a 32-bit environment. Badger users are advised to move to the flatfs or pebble datastore.

Datastore Implementation Updates

The go-ds-xxx datastore implementations have been updated to support the updated go-datastore v0.8.2 query API. This update removes the datastore implementations' dependency on goprocess and updates the query API.

One Multi-error Package

Kubo previously depended on multiple multi-error packages, github.com/hashicorp/go-multierror and go.uber.org/multierr. These have nearly identical functionality so there was no need to use both. Therefore, go.uber.org/multierr was selected as the package to depend on. Any future code needing multi-error functionality should use go.uber.org/multierr to avoid introducing unneeded dependencies.

Fix hanging pinset operations during reprovides

The reprovide process can be quite slow. With default settings, the reprovide process starts by reading the CIDs that belong to the pinset. During this operation, other operations that need pinset access can be starved (see #10596).

We have now switched to buffering the pinset-related CIDs that are going to be reprovided in memory, freeing the pinset mutexes as soon as possible so that pinset writes and subsequent read operations can proceed. The downside is that larger pinsets will need some extra memory, with an estimated ~1GiB of RAM per 20 million items to be reprovided.

Use Reprovider.Strategy to balance announcement prioritization, speed, and memory utilization.
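
For example, a node that only needs its directly pinned roots discoverable can shrink the number of CIDs buffered during reprovides with the roots strategy:

$ ipfs config Reprovider.Strategy roots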

📦️ Important dependency updates

📝 Changelog

Full Changelog
  • github.com/ipfs/kubo:
    • chore: v0.34.0
    • chore: v0.34.0-rc2
    • docs: mention Reprovider.Strategy config
    • docs: ipns ttl change
    • feat: ipfs-webui v4.6 (#10756) (ipfs/kubo#10756)
    • docs(readme): update min. requirements + cleanup (#10750) (ipfs/kubo#10750)
    • Upgrade to Boxo v0.29.1 (#10755) ([#10755](ht...

v0.34.0-rc2 (Pre-release)

14 Mar 20:57

See the draft changelog: docs/changelogs/v0.34.md

And related issue: #10685

This release is brought to you by the Shipyard team.

v0.34.0-rc1 (Pre-release)

06 Mar 00:19 (commit 3a8320d)

See the draft changelog: docs/changelogs/v0.34.md

And related issue: #10685

This release is brought to you by the Shipyard team.

v0.33.2

14 Feb 00:24 (commit ad1868a)

This is a tiny patch release with a single change: a go-libp2p update (v0.38.2 -> v0.38.3, see the changelog below).

See the 0.33 Release Notes for the full list of changes since 0.32.x

🗣 Discuss

If you have comments, questions, or feedback on this release, please post here.
If you experienced any bugs with the release, please post an issue.

📝 Changelog

Full Changelog
  • github.com/ipfs/kubo:
    • chore: v0.33.2
  • github.com/libp2p/go-libp2p (v0.38.2 -> v0.38.3):

👨‍👩‍👧‍👦 Contributors

This release was brought to you by the Shipyard team.

Contributor Commits Lines ± Files Changed
sukun 1 +122/-23 7
Marcin Rataj 1 +1/-1 1

v0.33.1

04 Feb 21:35 (commit 9bfbc4e)

This is a patch release with an important boxo/bitswap fix that we believe should reach you without waiting for 0.34 :)
See 0.33.0 for the full list of changes since 0.32.1.

🔦 Highlights

Bitswap improvements from Boxo

This release includes boxo/bitswap performance and reliability improvements and fixes for minor resource leaks. One of the performance changes greatly improves the bitswap client's ability to operate under high load, which could previously result in an out-of-memory condition.

Improved IPNS interop

Improved compatibility with third-party IPNS publishers by restoring support for compact binary CIDs in the Value field of IPNS records (IPNS Specs). As long as the signature is valid, Kubo will now resolve such records (likely created by non-Kubo nodes) and convert raw CIDs into valid /ipfs/cid content paths.
Note: This only adds support for resolving externally created records; Kubo's IPNS record creation remains unchanged. IPNS records with empty Value fields default to the zero-length /ipfs/bafkqaaa to maintain backward compatibility with code expecting a valid content path.

📦️ Important dependency updates

🗣 Discuss

This release was brought to you by the Shipyard team.

If you have comments, questions, or feedback on this release, please post here.

If you experienced any bugs with the release, please post an issue.

📝 Changelog

Full Changelog v0.33.1

👨‍👩‍👧‍👦 Contributors

Contributor Commits Lines ± Files Changed
Dreamacro 1 +304/-376 119
Andrew Gillis 7 +306/-200 20
Guillaume Michel 5 +122/-98 14
Marcin Rataj 2 +113/-7 4
gammazero 6 +41/-11 6
Sergey Gorbunov 1 +14/-2 2
Daniel Norman 1 +9/-0 1

v0.33.0

29 Jan 22:14 (commit 8b65738)

This release was brought to you by the Shipyard team.

🗣 Discuss

If you have comments, questions, or feedback on this release, please post here.

If you experienced any bugs with the release, please post an issue.

🔦 Highlights

Shared TCP listeners

Kubo now supports sharing the same TCP port (4001 by default) by both raw TCP and WebSockets libp2p transports.

This feature is not yet compatible with Private Networks and can be disabled by setting LIBP2P_TCP_MUX=false if it causes any issues.
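
Disabling it is just an environment variable on the daemon process, using the variable named above:

$ LIBP2P_TCP_MUX=false ipfs daemon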

AutoTLS takes care of Secure WebSockets setup

It is no longer necessary to manually add /tcp/../ws listeners to Addresses.Swarm when AutoTLS.Enabled is set to true. Kubo will detect if a /ws listener is missing and add one on the same port as the pre-existing TCP listener (e.g. /tcp/4001), removing the need for any extra configuration.

Tip

Give it a try:

$ ipfs config --json AutoTLS.Enabled true

And restart the node. If you are behind NAT, make sure your node is publicly dialable (UPnP or port forwarding), and wait a few minutes for all checks to pass and the changes to take effect.

See AutoTLS for more information.

Bitswap improvements from Boxo

This release includes some refactorings and improvements affecting Bitswap which should improve reliability. One of the changes affects block providing. Previously, the bitswap layer itself took care of announcing new blocks (added or received) with the configured provider (i.e. the DHT). This bypassed the "Reprovider", that is, the system that manages precisely "providing" the blocks stored by Kubo. The Reprovider knows how to take advantage of the AcceleratedDHTClient, is able to handle priorities, logs statistics, and is able to resume on daemon reboot where it left off. From now on, Bitswap will not be doing any providing on the side and all announcements are managed by the Reprovider. In some cases, when the reproviding queue is full with other elements, this may cause additional delays, but more likely this will result in improved block-providing behaviour overall.

Using default libp2p_rcmgr metrics

Bespoke rcmgr metrics were removed; Kubo now exposes only the default libp2p_rcmgr metrics from go-libp2p.
This makes it easier to compare Kubo with custom implementations based on go-libp2p.
If you depended on the removed ones, please file an issue to add them to the upstream go-libp2p.
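
To see which resource manager metrics your node exposes now, you can grep Kubo's standard Prometheus endpoint:

$ curl -s http://127.0.0.1:5001/debug/metrics/prometheus | grep libp2p_rcmgr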

Flatfs does not sync on each write

New repositories initialized with flatfs in Datastore.Spec will have sync set to false.

The old default was overly conservative and caused performance issues in big repositories that did a lot of writes. There is usually no need to flush every block write to disk before continuing. Setting this to false is safe, as Kubo will automatically flush writes to disk before and after performing critical operations like pinning. However, we still provide users with the ability to set this to true to be extra safe (at the cost of a slowdown when adding files in bulk).

ipfs add --to-files no longer works with --wrap

Onboarding files and directories with ipfs add --to-files now requires non-empty names. Because of this, the --to-files and --wrap options are now mutually exclusive (#10612).

ipfs --api supports HTTPS RPC endpoints

The CLI and RPC client now support accessing the Kubo RPC over the https:// protocol when a multiaddr ending with /https or /tls/http is passed to ipfs --api:

$ ipfs id --api /dns/kubo-rpc.example.net/tcp/5001/tls/http
# β†’ https://kubo-rpc.example.net:5001

New options for faster writes: WriteThrough, BlockKeyCacheSize, BatchMaxNodes, BatchMaxSize

Now that Kubo supports pebble as an experimental datastore backend, it becomes very useful to expose some additional configuration options for how the blockservice/blockstore/datastore combo behaves.

Usually, LSM-tree-based datastores like Pebble or Badger have very fast write performance (blocks are streamed to disk) while incurring read-amplification penalties (blocks need to be looked up in the index to know where they are on disk), especially noticeable on spinning disks.

Prior to this version, BlockService and Blockstore implementations performed a Has(cid) call for every block that was going to be written, skipping the write altogether if the block was already present in the datastore. The performance impact of this Has() call can vary. The Datastore implementation itself might include block caching and things like bloom filters to speed up lookups and mitigate read penalties. Our Blockstore implementation also supports a bloom filter (controlled by BloomFilterSize and disabled by default) and a two-queue cache for keys and block sizes. If we assume that most of the blocks added to Kubo are new blocks not already present in the datastore, or that the datastore itself includes mechanisms to optimize writes and avoid writing the same data twice, the calls to Has() at both the BlockService and Blockstore layers seem superfluous, to the point that they even harm write performance.

For these reasons, from now on, the default is to use a "write-through" mode for the BlockService and the Blockstore. We have added a new option, Datastore.WriteThrough, which defaults to true. The previous behaviour can be obtained by manually setting it to false.
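
Reverting to the previous behaviour is a single setting, using the option named above:

$ ipfs config --json Datastore.WriteThrough false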

We have also made the size of the two-queue blockstore cache configurable with another option: Datastore.BlockKeyCacheSize, which defaults to 65536 (64KiB). This caching layer can be disabled altogether by setting it to 0. In particular, this option controls the size of a blockstore caching layer that records whether the blockstore has certain blocks and their sizes (but does not cache the contents, so it stays relatively small in general).

Finally, we have added two new options to the Import section to control the maximum size of write batches: BatchMaxNodes and BatchMaxSize. These default to 128 nodes and 20MiB. Increasing them will batch more items together when importing data with ipfs dag import, which can speed things up. It is important to find a balance between available memory (used to hold the batch), disk latencies (when writing the batch), and processing power (when preparing the batch, as nodes are sorted and duplicates removed).
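
A sketch of raising the batch limits before a large ipfs dag import; the keys live under Import, and BatchMaxSize is assumed to be expressed in bytes (values illustrative):

$ ipfs config --json Import.BatchMaxNodes 512
$ ipfs config --json Import.BatchMaxSize 52428800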

As a reminder, details from all the options are explained in the configuration documentation.

We recommend users trying Pebble as a datastore backend to disable both the blockstore bloom filter and key caching layers and to enable write-through, as a way to evaluate the raw performance of the underlying datastore, which includes its own bloom filter and caching layers (the default cache size is 8MiB and can be configured in the options).
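
Put together, that recommendation maps to three of the Datastore options discussed in this section:

$ ipfs config --json Datastore.BloomFilterSize 0
$ ipfs config --json Datastore.BlockKeyCacheSize 0
$ ipfs config --json Datastore.WriteThrough true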

MFS stability with large number of writes

We have fixed a number of issues that were triggered by writing or copying many files onto an MFS folder: increased memory usage first, then CPU, disk usage, and eventually a deadlock on write operations. The details of the fixes can be read at #10630 and #10623. The result is that writing large numbers of files to an MFS folder should now be possible without major issues. It is possible, as before, to speed up the operations using the `ipfs fi...
