<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Message Latency in Waku Relay with Rate Limiting Nullifiers</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Alvaro</forename><surname>Revuelta</surname></persName>
							<email>alrevuelta@status.im</email>
							<affiliation key="aff0">
								<orgName type="department">Vac Research and Development</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Sergei</forename><surname>Tikhomirov</surname></persName>
							<email>sergei@status.im</email>
							<affiliation key="aff0">
								<orgName type="department">Vac Research and Development</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Aaryamann</forename><surname>Challani</surname></persName>
							<email>aaryamann@vac.dev</email>
							<affiliation key="aff0">
								<orgName type="department">Vac Research and Development</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Hanno</forename><surname>Cornelius</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Vac Research and Development</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Simon</forename><forename type="middle">Pierre</forename><surname>Vivier</surname></persName>
							<email>simvivier@status.im</email>
							<affiliation key="aff0">
								<orgName type="department">Vac Research and Development</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Message Latency in Waku Relay with Rate Limiting Nullifiers</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">A0229758C89DFA2F49E0FBC6D97B04C5</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:16+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>GossipSub</term>
					<term>Waku</term>
					<term>zkSNARKS</term>
					<term>RLN</term>
					<term>anonymity</term>
					<term>latency</term>
					<term>rate-limiting</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Waku is a privacy-preserving, generalized, and decentralized messaging protocol suite. Waku uses GossipSub for message routing and Rate Limiting Nullifiers (RLN) for spam protection. GossipSub ensures fast and reliable peer-to-peer message delivery in a permissionless environment, while RLN enforces a common publishing rate limit using zero-knowledge proofs.</p><p>This paper presents a practical evaluation of message propagation latency in Waku. First, we estimate latencies analytically, building a simple mathematical model for latency under varying conditions. Second, we run a large-scale single-host simulation with 1000 nodes. Third, we set up a multi-host Waku deployment using five nodes in different locations across the world. Finally, we compare our analytical estimations to the results of the simulation and the real-world measurement.</p><p>The experimental results are in line with our theoretical model. Under realistic assumptions, medium-sized messages (25 KB) are delivered within 1 second. We conclude that Waku can achieve satisfactory latency for typical use cases, such as decentralized messengers, while providing scalability and anonymity.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Peer-to-peer (P2P) protocols allow for fast and resilient data exchange. Their applications include data dissemination in blockchains and decentralized social networks. The goal of P2P protocol design is to navigate the trade-offs between speed, reliability, efficiency, security, and privacy.</p><p>Flooding and gossip are the two main approaches to message propagation in P2P networks. In flooding, nodes push messages to (a subset of) their neighbors eagerly, which ensures fast and reliable propagation at the cost of redundant bandwidth usage (bandwidth amplification). In gossip, nodes announce messages to their neighbors, and the full message is relayed only if the neighbor expresses interest in it. Gossip is more complex than flooding but more efficient in terms of bandwidth.</p><p>Security and privacy are especially important considerations for P2P protocols. In a permissionless network, an adversary can set up many nodes and launch a coordinated Sybil attack, such as isolating a victim from the rest of the network (eclipse attack), or overwhelming the network with unwanted (spam) messages causing denial of service (DoS). Centralized mitigation methods, e.g. linking users' accounts to personally identifiable information such as phone numbers, are harmful to privacy. Although proof-of-work was originally proposed as a spam countermeasure <ref type="bibr">[1]</ref>, it has since proved impractical due to computational requirements for end users' devices.</p><p>Publish-subscribe (PubSub) is a popular design pattern for P2P networks. Messages in a PubSub network are classified by topic. Nodes only receive messages from topics they are subscribed to.</p><p>Waku is a protocol suite for generalized, permissionless, and privacy-preserving P2P messaging. Waku Relay is the main Waku routing protocol based on GossipSub, a P2P PubSub protocol that combines flooding and gossip. 
Additionally, Waku Relay offers zero-knowledge-based spam protection, peer discovery, and sharding. Waku light protocols (Store, Filter, and Lightpush) allow resource-restricted devices to interact with Waku Relay nodes.</p><p>Rate Limiting Nullifiers (RLN) is a novel approach to spam protection for Waku Relay <ref type="bibr" target="#b1">[2]</ref>. Let us refer to Waku nodes that wish to publish messages as publishers. RLN enforces a common rate limit upon every publisher using smart contracts and zero-knowledge proofs. The publisher registers on-chain, and provides a proof of their valid membership alongside each message they broadcast. Relaying nodes verify the proof before forwarding the message. Publishers that exceed the rate limit get disconnected from the network.</p><p>Low latency is important for user-facing applications, such as messengers. However, RLN proof verification at every node increases the latency in Waku.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Our contributions</head><p>In this work, we aim to quantify the latency of Waku in realistic settings. We estimate the expected latency based on the assumptions regarding network delays, available bandwidth, and benchmarks for cryptographic operations that routing nodes perform locally. To validate our estimations, we run a single-host simulation and a multi-host measurement. In a single-host simulation with 1000 nodes, we obtain the distribution of message latencies. We then compare simulation results to real latencies measured using five Waku nodes deployed in different geographic regions.</p><p>Our results indicate that the overall latency is satisfactory for a user-facing application. Under realistic assumptions about node count and connectivity, messages of 25 KB or less get delivered within 1 second. We conclude that RLN is a practical spam countermeasure for a scalable, permissionless, decentralized messaging network.</p><p>The rest of this paper is structured as follows. In Section 2, we provide the necessary background on P2P networks, GossipSub, Waku, and RLN. In Section 3, we introduce the analytical model for latency in Waku. We describe our experimental methodology in Section 4. We present and discuss our results in Section 5, review related work in Section 6, outline avenues for future work in Section 7, and summarize the key findings in Section 8.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Background</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">GossipSub</head><p>GossipSub <ref type="bibr" target="#b2">[3]</ref> is a P2P protocol that combines PubSub and Gossip. It was initially developed with the blockchain use case in mind (namely, for Ethereum 2.0 and Filecoin). GossipSub nodes may be connected in one of two ways.</p><p>• A gossip connection is only used to announce the messages that a node has recently seen to its neighbor, who may then selectively request full messages (the "lazy pull" approach). • A mesh connection is used to relay full messages (the "eager push" approach). Nodes may also receive and send gossip announcements on mesh connections. To avoid excessive bandwidth consumption, nodes only maintain a handful of mesh connections (typically between 𝐷 𝑙𝑜𝑤 = 4 and 𝐷 ℎ𝑖𝑔ℎ = 12).</p><p>A node can graft a connection, upgrading it from gossip to mesh, or prune a connection, downgrading it from mesh to gossip-only. GossipSub nodes assign dynamic reputation scores to their neighbors, incentivizing good behavior. Multiple parameters are considered in score calculation <ref type="bibr" target="#b3">[4]</ref>, such as the time in the mesh and the number of invalid messages relayed. Connections to low-score peers are eventually dropped, whereas connections to high-score peers are kept.</p><p>GossipSub provides strong delivery guarantees in adversarial environments while maintaining a reasonable amplification factor and latency <ref type="bibr" target="#b2">[3]</ref>.</p></div>
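The graft/prune mechanics described above can be sketched as a toy heartbeat routine. This is a minimal illustration under stated assumptions, not libp2p's actual implementation: the bounds mirror the D_low = 4 and D_high = 12 values mentioned above, and `maintain_mesh`, its arguments, and the target degree of 6 are hypothetical names and choices.

```python
import random

D_LOW, D_HIGH, D_TARGET = 4, 12, 6  # typical GossipSub mesh degree bounds

def maintain_mesh(mesh, gossip_peers, scores):
    """Toy heartbeat: GRAFT gossip peers when the mesh is too small,
    PRUNE the lowest-scoring peers when it is too large."""
    mesh = set(mesh)
    if len(mesh) < D_LOW:
        candidates = [p for p in gossip_peers if p not in mesh]
        random.shuffle(candidates)                 # peer selection is randomized
        for p in candidates[:D_TARGET - len(mesh)]:
            mesh.add(p)                            # graft: gossip -> mesh
    elif len(mesh) > D_HIGH:
        worst = sorted(mesh, key=lambda p: scores.get(p, 0))
        for p in worst[:len(mesh) - D_TARGET]:
            mesh.discard(p)                        # prune: mesh -> gossip-only
    return mesh
```

Real GossipSub scoring feeds many more signals into `scores` (time in mesh, invalid messages relayed), but the degree-bounded graft/prune loop is the core of mesh maintenance.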
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">TCP</head><p>We assume TCP as the underlying transport protocol. TCP flow control may affect the latency of message transmission. The TCP window size is negotiated in the early stages of a TCP transfer and limits the number of bytes a sender can send before waiting for an acknowledgement from the receiver. This implies that the round-trip time (𝑅𝑇 𝑇 ) between the nodes affects the latency. The maximum unscaled TCP window size is 65 KB, although RFC 1323 <ref type="bibr" target="#b5">[5]</ref> introduces window scaling well beyond this limit. We assume that all nodes implement this extension, as is standard in all modern operating systems. Furthermore, data is transferred in individual segments, each with a maximum size derived from the maximum transmission unit (MTU) of the underlying data link layer. MTU size varies but is typically 1.5 KB on Ethernet interfaces. Since each segment adds a small header, which increases the bandwidth overhead, MTU size may also affect latency. However, for small messages this effect is negligible. For large messages, TCP configuration and features, such as MTU size, could become significant. These secondary effects are nontrivial to model and are considered out of scope.</p></div>
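To make the window and MTU effects concrete, the following sketch estimates per-segment header overhead and a naive fixed-window transfer time. The 40-byte header and the one-window-per-RTT model are simplifying assumptions for illustration, not a faithful TCP model (no slow start, retransmissions, or handshake).

```python
MTU = 1500           # bytes, typical for Ethernet interfaces
TCP_IP_HEADERS = 40  # assumed bytes of IPv4 + TCP headers per segment

def segment_overhead(msg_bytes, mtu=MTU, hdr=TCP_IP_HEADERS):
    """Number of segments for a message and the relative header overhead."""
    mss = mtu - hdr                  # payload bytes per segment
    segments = -(-msg_bytes // mss)  # ceiling division
    return segments, segments * hdr / msg_bytes

def transfer_time(msg_bytes, rtt_s, window_bytes):
    """Naive fixed-window model: one full window per RTT, plus half an
    RTT for the final window to reach the receiver."""
    windows = -(-msg_bytes // window_bytes)
    return (windows - 1) * rtt_s + rtt_s / 2
```

For a 25 KB message over a 150 ms RTT link with a 64 KB window, the model gives half an RTT (75 ms), consistent with the small-message assumption made later in Section 3.1.2; a 500 KB message needs multiple windows and hence multiple round trips.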
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Zero-knowledge proofs</head><p>Zero-knowledge proofs (ZKPs) are cryptographic protocols enabling one party (the prover) to convince another (the verifier) of a statement's truth without revealing additional information <ref type="bibr" target="#b6">[6,</ref><ref type="bibr" target="#b7">7]</ref>. zkSNARKs, a subset of ZKPs<ref type="foot" target="#foot_0">1</ref> , are additionally characterized by succinctness (the proofs are small) and non-interactivity (besides the prover sending the proof to the verifier, no communication between the parties is needed). The Ethereum Virtual Machine facilitates zkSNARK verification within smart contract execution <ref type="bibr" target="#b8">[8]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4.">Spam protection in P2P networks</head><p>Without countermeasures, an attacker can abuse the resources of a permissionless P2P network. Early P2P file-sharing networks relied on reputation for rate limiting. Peers would keep score of their neighbors' behavior and allocate their resources (such as bandwidth) accordingly.</p><p>Proof-of-work (PoW), later used in Bitcoin and other blockchains, was initially invented as a spam countermeasure <ref type="bibr">[1]</ref>. A notable advantage of PoW-based solutions, at least in theory, is stronger privacy protection, as peers do not need to be identified. However, PoW as a rate-limiting tool in general-purpose messaging networks has not gained popularity (a notable attempt was Whisper <ref type="bibr" target="#b9">[9]</ref>). The key challenge turned out to be setting the PoW puzzle difficulty high enough to deter attackers, but low enough to keep the network usable on resource-restricted devices.</p><p>P2P protocols that underlie blockchains, such as libp2p in Ethereum, protect against spam with a combination of peer scoring and a degree of monetary protection stemming from the financial nature of the messages being broadcast. In particular, relaying nodes ensure that transactions pay a minimum fee required by the protocol, which burdens a potential spammer.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.5.">Waku</head><p>Waku is a suite of generalized messaging protocols that follows the GossipSub model, offers ZK-based privacy-preserving spam protection, and supports resource-restricted devices via light protocols. Waku is built according to the following design principles:</p><p>• privacy-preserving: avoid linking Waku users' identities to any sensitive information such as on-chain identity or IP address, thus providing transport privacy; • decentralized: remove any central point of failure by using P2P architecture and encouraging a mesh topology; • permissionless: allow anyone to join the network using open-source software, possibly using a resource-restricted device; • generalized: support multiple use cases with unicast or multicast communication patterns.</p><p>Waku is built on top of libp2p [10], a modular networking stack for P2P protocols. In particular, Waku Relay, the backbone P2P protocol in the Waku suite, is built on top of libp2p's GossipSub implementation. The Waku reference implementation, called nwaku <ref type="bibr" target="#b10">[11]</ref>, is written in Nim, a compiled high-level programming language. Other implementations include gowaku <ref type="bibr">[12]</ref> in Go and js-waku [13] in JavaScript. Waku is used as the backend for Status <ref type="bibr" target="#b11">[14]</ref>, a messaging app. The Waku Network (TWN) is a deployment of the Waku suite of protocols that launched in late 2023 <ref type="bibr" target="#b12">[15]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.5.1.">Rate Limiting Nullifiers</head><p>Rate Limiting Nullifier (RLN) is a zero-knowledge gadget used in Waku <ref type="bibr" target="#b1">[2]</ref> for privacy-preserving spam protection. RLN operates as follows:</p><p>Publisher registration The publisher generates the private key 𝑠 𝑘 and derives the corresponding commitment: 𝑐 = 𝐻(𝑠 𝑘 ). 𝐻 is a cryptographic hash function (namely, Poseidon <ref type="bibr" target="#b13">[16]</ref>). The publisher can obtain an RLN membership by registering its commitment 𝑐 with an on-chain smart contract. The registration is permissionless. The contract stores a list of all valid commitments. Holding a valid membership entails knowing the secret key 𝑠 𝑘 such that its commitment 𝑐 is part of the list of valid commitments.</p><p>Each registration incurs an on-chain transaction fee. The registration requirement thus deters attackers from easily creating multiple (Sybil) identities. In the original RLN proposal <ref type="bibr" target="#b1">[2]</ref>, registration also involves putting down a deposit that is slashed in case of the publisher's misbehavior.</p><p>Prerequisites for relaying nodes All Waku nodes are supposed to relay messages. Each node must be in sync with the current list of valid commitments. Synchronization is done through RPC calls to either an Ethereum RPC provider (Infura, Alchemy, etc.) or a node's own Ethereum execution client. Each node locally constructs a Merkle tree 𝑇 of all currently valid commitments. The tree must be updated when necessary according to on-chain events.</p><p>Message propagation An RLN identifier 𝑟𝑙𝑛_𝑖𝑑𝑒𝑛𝑡𝑖𝑓 𝑖𝑒𝑟 uniquely identifies an application and prevents cross-application replay attacks. The 𝑒𝑝𝑜𝑐ℎ is a value derived from or equal to the current UNIX timestamp divided by the length of time over which we want to rate limit. 
We define<ref type="foot" target="#foot_1">2</ref> an external nullifier ∅ as follows:</p><formula xml:id="formula_0">∅ = 𝐻(𝑒𝑝𝑜𝑐ℎ, 𝑟𝑙𝑛_𝑖𝑑𝑒𝑛𝑡𝑖𝑓 𝑖𝑒𝑟)</formula><p>We define an internal nullifier 𝜑 as follows:</p><formula xml:id="formula_1">𝜑 = 𝐻(𝑠 𝑘 , ∅)</formula><p>To publish a message, the publisher generates a proof 𝜋 of holding a valid membership for the current epoch. The zero-knowledge property of zkSNARKs ensures that 𝜋 conceals the publisher's identity beyond confirming their valid membership. The publisher attaches 𝜋, internal nullifier 𝜑, 𝑒𝑝𝑜𝑐ℎ, 𝑟𝑙𝑛_𝑖𝑑𝑒𝑛𝑡𝑖𝑓 𝑖𝑒𝑟, and the Merkle root 𝜏 of tree 𝑇 to the message.</p></div>
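The nullifier derivations above can be sketched as follows. Note the loud caveat: RLN uses the Poseidon hash over field elements, whereas this sketch substitutes SHA-256 purely to make the example runnable, and the application identifier and epoch length are made-up values.

```python
import hashlib

RLN_IDENTIFIER = b"example-app"  # hypothetical application identifier
EPOCH_LENGTH_S = 10              # assumed rate-limit window length, in seconds

def H(*parts: bytes) -> bytes:
    """Stand-in hash. RLN uses Poseidon; SHA-256 is used here only
    to make the sketch self-contained."""
    return hashlib.sha256(b"|".join(parts)).digest()

def nullifiers(sk: bytes, unix_time: int):
    """Derive the external and internal nullifiers for a message."""
    epoch = unix_time // EPOCH_LENGTH_S
    external = H(str(epoch).encode(), RLN_IDENTIFIER)  # H(epoch, rln_identifier)
    internal = H(sk, external)                         # H(sk, external)
    return external, internal
```

Because the internal nullifier 𝜑 is deterministic for a given key and epoch, two messages published under the same membership within one epoch carry the same 𝜑, which is exactly what lets relaying nodes detect rate-limit violations without learning who the publisher is.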
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Enforcing the rate limit</head><p>We define the maximum epoch gap 𝑔 as the maximum allowed gap between the relaying node's internal clock and the epoch for which a proof for an incoming message was generated. Currently, 𝑔 is set to 20 s.</p><p>Each node maintains a set of nullifiers (𝜑) of messages relayed within the last 𝑔 seconds. A message is considered valid if:</p><p>• its proof 𝜋 proves that the publisher holds a valid membership; • no other messages within the last 𝑔 seconds had the same nullifier 𝜑;</p><p>• the gap between the epoch used for proof generation and the node's internal clock does not exceed 𝑔.</p><p>Only valid messages are forwarded. Nodes that relay invalid messages have their GossipSub scores lowered. In particular, unsuccessful validation affects the 𝑃 4 score component ("Invalid Messages for a topic" in GossipSub terms <ref type="bibr" target="#b3">[4]</ref>). Eventually, the violating node is isolated from the network. Note that the violating node is not necessarily the original publisher. This mechanism discourages forwarding invalid messages, as a node that does so will itself be punished.</p><p>In summary, RLN prevents malicious actors from overwhelming the network with messages. At the same time, RLN respects the privacy of publishers, as they only have to prove that they hold a valid membership without specifying any further details.</p><p>Note that the current nwaku implementation does not yet support economic punishment of malicious publishers, which was described in the original RLN paper <ref type="bibr" target="#b1">[2]</ref>.</p></div>
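A relaying node's three validity checks can be sketched as a small validator. This is an illustrative simplification: `verify_proof` is a placeholder for real zkSNARK verification, epochs are approximated by plain timestamps in seconds, and the class name is hypothetical.

```python
MAX_EPOCH_GAP_S = 20  # g: maximum allowed gap between clock and message epoch

class RlnValidator:
    """Toy validator applying the three rules above to each message."""

    def __init__(self, verify_proof):
        self.verify_proof = verify_proof  # stand-in for zkSNARK verification
        self.seen = {}                    # internal nullifier -> arrival time (s)

    def validate(self, proof, nullifier, msg_epoch_s, now_s):
        # forget nullifiers older than g seconds
        self.seen = {n: t for n, t in self.seen.items()
                     if now_s - t <= MAX_EPOCH_GAP_S}
        if abs(now_s - msg_epoch_s) > MAX_EPOCH_GAP_S:
            return False  # epoch too far from the local clock
        if nullifier in self.seen:
            return False  # repeated nullifier within g seconds: rate limit hit
        if not self.verify_proof(proof):
            return False  # invalid membership proof
        self.seen[nullifier] = now_s
        return True
```

A real node would additionally lower the GossipSub score (the 𝑃 4 component) of the peer that forwarded an invalid message; that bookkeeping is omitted here.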
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Analytical model for propagation latency</head><p>We define latency as the time between the publisher's expressed intention <ref type="foot" target="#foot_2">3</ref> to publish a message and the receiver node receiving it.</p><p>For a given message propagation, the total latency is the sum of individual delays along the shortest path from the publisher to the receiver. Therefore, the total path latency depends on individual hop-level delays and on the number of hops. From a network-wide perspective, the distribution of latencies depends on the number of nodes in the network, their degrees, and the network topology.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Latency components on a given path</head><p>Let us denote the number of the nodes in the network as 𝑁 , and the degree of each node as 𝐷 (we assume that the network is a regular graph). The overall latency 𝐿 for a propagation path can be divided into three components: proof generation time at the publishing node, transmission latency, and proof validation at each relaying node (see Figure <ref type="figure" target="#fig_0">1</ref> as an illustration for 𝐷 = 2 and 𝑁 = 8). Let us consider the three latency components in turn.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.1.">Proof generation</head><p>The publisher generates one proof 𝜋 of valid RLN membership per message sent. 𝐿 𝑖 𝑔 denotes the time it takes the publisher 𝑖 to generate 𝜋. Proof generation time depends on the hardware platform (see Table <ref type="table" target="#tab_0">1</ref> for benchmarks).</p><p>After generating 𝜋, the publisher may optionally verify the proof locally before broadcasting the message. This might be useful, for instance, if the publisher outsources proof generation to a third party and wants to independently check that the proof is valid. In our calculations, we do not account for this optional step.</p><p>Table 1: Benchmarks for RLN proof generation and verification times for nwaku <ref type="bibr" target="#b14">[17]</ref> on various platforms (the platform we used in the multi-host measurement described in Section 4.2 is indicated in bold).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.2.">Message transmission</head><p>𝐿 𝑖→𝑗 𝑡 denotes the message transmission time from node 𝑖 to its neighboring node 𝑗. This time depends on the message size, the available bandwidth between the nodes, and the underlying transport protocol (TCP assumed).</p><p>We discard the effect of TCP on latency (see Section 2.2), under the assumption that messages are small, no retransmissions occur, the connection handshake has already been established, and flow control has already adapted between the neighboring nodes. Therefore, once the neighboring nodes have adapted their TCP window size and window scaling, the transmission time 𝐿 𝑖→𝑗 𝑡 for small messages (e.g., under 100 KB) should in theory be close to 𝑅𝑇 𝑇 /2.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.3.">Message validation</head><p>𝐿 𝑖 𝑣 denotes the validation time at a relaying node 𝑃 𝑖 . Remember that every node validates every incoming message before forwarding it. Validation involves multiple steps, such as decoding and RLN proof verification, which is the most resource-consuming step. Messages are cached to avoid duplicate validation. We have performed proof verification benchmarks on different platforms (see Table <ref type="table" target="#tab_0">1</ref> for results and <ref type="bibr" target="#b14">[17]</ref> for methodology).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Analytical formula for latency</head><p>With all the latency contributions from Sections 3.1.1, 3.1.2 and 3.1.3, we can model the total latency with Equation <ref type="formula" target="#formula_2">1</ref>. 𝐿 𝑔 and 𝐿 𝑣 can be taken from Table <ref type="table" target="#tab_0">1</ref> depending on the platform. Note that message decoding and other non-significant time contributions are not considered. 𝑖 denotes the index of a node in the path (𝑖 = 0 is the publisher, 𝑖 = ℎ is the receiver). ℎ can take any value in [ℎ 𝑚𝑖𝑛 , ℎ 𝑚𝑎𝑥 ], where ℎ 𝑚𝑖𝑛 = 1 can be used for the best-case latency for neighboring nodes.</p><formula xml:id="formula_2">𝐿 = 𝐿 0 𝑔 + ℎ ∑︁ 𝑖=1 (𝐿 (𝑖−1)→𝑖 𝑡 + 𝐿 𝑖 𝑣 )<label>(1)</label></formula></div>
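Equation 1 is straightforward to evaluate numerically. The sketch below uses illustrative placeholder numbers (half a second of proof generation, then four hops of roughly 75 ms transmission plus 10 ms validation each), not the paper's benchmark values.

```python
def total_latency(L_g, hops):
    """Equation 1: proof generation time at the publisher, plus
    (transmission + validation) time for each hop on the path.
    `hops` is a list of (L_t, L_v) pairs in seconds."""
    return L_g + sum(L_t + L_v for L_t, L_v in hops)

# Illustrative placeholder numbers, not the paper's benchmarks.
example = total_latency(0.5, [(0.075, 0.010)] * 4)
```

With these placeholder values the four-hop path stays well under one second, which is the regime the paper's measurements later confirm for medium-sized messages.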
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Bandwidth amplification</head><p>The bandwidth amplification factor is the ratio of total bandwidth utilized to relay a message at each node to the size of that message. Since in Waku Relay each message is routed exactly once on each mesh connection, the amplification factor is roughly equal <ref type="foot" target="#foot_3">4</ref> to the mesh degree 𝐷 at each node. Increasing 𝐷 increases bandwidth amplification (independent of 𝑁 ) and decreases latency. The effect of 𝐷 on latency lasts only up to a certain threshold for any given 𝑁 (Figure <ref type="figure" target="#fig_1">2</ref>). We observe that while higher values of 𝐷 allow for faster message dissemination, it only makes sense to increase 𝐷 up to a certain threshold to avoid useless bandwidth amplification. For that reason, in real Waku networks, 𝐷 is limited to 𝐷 ℎ𝑖𝑔ℎ = 12.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Estimating the number of hops</head><p>Waku Relay is based on GossipSub, which strongly enforces same-degree topology. Peer discovery is randomized, and nodes try to maintain a degree of 𝐷 ∈ [𝐷 𝑚𝑖𝑛 , 𝐷 𝑚𝑎𝑥 ]. These protocol properties allow us to make a reasonable assumption that the network topology tends towards a mesh topology rather than hub-and-spoke <ref type="foot" target="#foot_4">5</ref> .</p><p>As a simplified model, consider a network with a constant node degree 𝐷 and 𝑁 nodes in total. Let us denote as 𝑁 ℎ the number of nodes that have received the message after ℎ propagation steps. We can derive ℎ 𝑚𝑎𝑥 , the smallest number of propagation steps sufficient for all nodes to receive a message, for given values of 𝑁 and 𝐷.</p><p>Let us consider two cases: 𝐷 = 2 and 𝐷 &gt; 2. The edge case where 𝐷 = 2 implies a circle topology, where a message is only relayed to two new nodes on each propagation step. In other words, propagation is described by an arithmetic rather than geometric sequence. After ℎ propagation steps, 𝑁 ℎ = 1 + 2ℎ nodes know the message. Therefore, ℎ 𝑚𝑎𝑥 = ⌈(𝑁 − 1)/2⌉.</p><p>Now, let us consider the general case of 𝐷 &gt; 2. Before the first step, only one node (the publisher) knows the message. After the first step, all 𝐷 neighbors of the publisher receive the message. For all subsequent steps, the process can be characterized by a geometric sequence:</p><formula xml:id="formula_3">𝑁 ℎ = { 𝐷 + 1 if ℎ = 1; 𝐷 × (𝐷 − 1)^(ℎ−1) + 1 if ℎ &gt; 1</formula><p>Applying the formula for the partial sum of a geometric sequence, we derive Equation <ref type="formula" target="#formula_4">2</ref>:</p><formula xml:id="formula_4">ℎ 𝑚𝑎𝑥 = ⌈ log((𝑁 − 1)(𝐷 − 2)/𝐷 + 1) / log(𝐷 − 1) ⌉<label>(2)</label></formula><p>Figure 2: The best-topology maximal path length ℎ 𝑚𝑎𝑥 and bandwidth amplification for various values of node degree 𝐷, calculated using Equation <ref type="formula" target="#formula_4">2</ref>.</p><p>The best-case maximum number of hops ℎ 𝑚𝑎𝑥 is inversely related to bandwidth amplification (Figure <ref type="figure" target="#fig_1">2</ref>). We ignore the case of 𝐷 = 2 in Figure <ref type="figure" target="#fig_1">2</ref> because it is unrealistic for Waku in practice. Intuitively, higher node degrees lead to faster message propagation but at the same time increase bandwidth consumption. While this observation holds in practice and in our subsequent experiments, the exact formula is not necessarily applicable to Waku for two reasons. First, Equation <ref type="formula" target="#formula_4">2</ref> concerns the theoretically optimal topology in terms of propagation efficiency. Second, nodes in Waku have a variable degree, which may lead to different connectivity properties compared to the simplified model of Equation <ref type="formula" target="#formula_4">2</ref>. Keeping these limitations in mind, we can plug 𝑁 = 1000 and 𝐷 = 6 into Equation <ref type="formula" target="#formula_4">2</ref> and derive ℎ 𝑚𝑎𝑥 = 5.</p></div>
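Equation 2, together with the D = 2 edge case, can be checked numerically. The sketch below reproduces the value h_max = 5 for N = 1000 and D = 6 derived in the text; the function name is our own.

```python
import math

def h_max(N, D):
    """Best-case number of propagation steps for all N nodes to receive
    a message at constant node degree D (Equation 2)."""
    if D == 2:
        # circle topology: two new nodes per step (arithmetic growth)
        return math.ceil((N - 1) / 2)
    # geometric growth: D neighbors, then D-1 new neighbors per relay
    return math.ceil(math.log((N - 1) * (D - 2) / D + 1) / math.log(D - 1))
```

Since each message is routed once per mesh connection, the bandwidth amplification at each node is roughly D itself, so sweeping D in this function makes the trade-off of Figure 2 explicit: larger D shortens the worst-case path but costs proportionally more bandwidth.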
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Methodology</head><p>We estimate latency in Waku using a single-host simulation (Section 4.1) and a multi-host node deployment in different geographic locations (Section 4.2). The two approaches complement each other. The single-host simulation allows us to model a large network, while nodes in different locations allow us to measure the impact of real network conditions. The multi-host measurements are intended to assess the performance of Waku as a message traverses an individual path of geographically distributed nodes.</p><p>We use the nwaku implementation in all experiments and consider message sizes of 2, 25, 100, and 500 KB. The depth of the Merkle tree 𝑇 is 20 in all experiments, as per the protocol specification <ref type="bibr" target="#b15">[18]</ref>. Deeper Merkle trees support more users and provide stronger anonymity, but require more computational resources for proof generation and verification. The tree depth of 20 was chosen as a reasonable trade-off.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Single-host simulation</head><p>We simulate a network of 1000 nodes using Shadow <ref type="bibr" target="#b16">[19]</ref>, a simulation framework. Each node tries to maintain 𝐷 = 6 (mesh) connections and is additionally connected to 25 gossip peers. 𝐷 = 6 is the desired node degree in a typical libp2p network <ref type="foot" target="#foot_5">6</ref> .</p><p>The upstream and downstream bandwidth is set to 100 Mbps. The latency is set to 150 ms. See <ref type="bibr" target="#b17">[21]</ref> for the full configuration file.</p><p>Ten nodes are designated as publishers. Each of these publishes one message, which is propagated on all mesh connections. This leads to a total of nearly 10000 message received events (9990, accounting for the fact that the publishers do not need to receive the messages they send). Effects from the first two messages are excluded from our measurements to prevent latency bias due to TCP window size negotiation.</p><p>The Shadow simulation framework does not model CPU time. To account for processing delays, we manually introduce delays for RLN proof generation and verification, according to our benchmarks (see Table <ref type="table" target="#tab_1">2</ref>).</p><p>In all simulations, four hops were sufficient to deliver a message to all nodes. This result is lower than what Equation 2 suggests (namely, five hops for 𝑁 = 1000 and 𝐷 = 6). Due to variable node degrees in our simulations, Equation 2 is not directly applicable, although we can use it as a sanity check. Simulation results inform our choice of a four-hop path in the subsequent multi-host measurements (Section 4.2).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Multi-host measurements</head><p>To estimate the effects of real-world communication delays, we deploy 𝑁 = 5 nodes in different geographic locations. We use Digital Ocean <ref type="bibr" target="#b18">[22]</ref> machines with 8 GB RAM and 4 vCPU (virtual CPUs). The five nodes are statically connected (without peer discovery) in a linear fashion, in the following order by location: Singapore, Bangalore, San Francisco, New York, and Frankfurt. Hence, node degrees are 𝐷 = 1 for the sender (Singapore) and receiver (Frankfurt) nodes, and 𝐷 = 2 for the intermediary nodes. Each node can only receive messages on a mesh connection from the previous node on the path, and relay the message on a mesh connection to the following node. Messages are therefore forced to travel along this route, eagerly pushed along the path (no messages are pulled using gossip dissemination). The goal of this experiment is to measure the latency of a message as it travels through a path of geographically distributed nodes. The setup is chosen to simulate a path that is sufficiently long to reach most nodes. We choose the path length of ℎ = 4 for the multi-host measurement. Based on the single-host simulation results (Section 4.1), we observe that the majority of messages take no more than four hops to propagate in a mesh network with 𝑁 = 1000 and 𝐷 = 6 (see interpretation of Figure <ref type="figure" target="#fig_2">3</ref>). The exact distribution of path lengths depends not only on the network size and the node degree, but also on topology. We argue that measuring a path of length four is justified to validate the single-host simulation results in realistic network conditions. 
The measurement results can also be extrapolated to longer paths using the analytical latency formula (Equation <ref type="formula" target="#formula_2">1</ref>).</p><p>We measure latency as the difference between the arrival times of a message according to the local clocks of the relevant nodes. We use NTP <ref type="bibr" target="#b19">[23]</ref> to synchronize the clocks on the nodes, reducing possible errors to a few milliseconds.</p><p>We use ping times between the relevant city pairs as the baseline for message latency (see Table <ref type="table" target="#tab_1">2</ref>). We use the ping protocol implemented in libp2p [10] and the wakucanary tool <ref type="bibr" target="#b20">[24]</ref>. Ping is measured as the average round-trip time (RTT) of five requests performed within a 15-minute time span.</p></div>
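The extrapolation can be illustrated with a short sketch. It assumes Equation 1 has the additive form implied by the notation of Section 3 (one-off proof generation plus per-hop transmission and validation); the numbers are the 2 KB row of Table 3.

```python
# Sketch of the additive latency model, assuming Equation 1 sums the
# per-hop transmission (L_t) and validation (L_v) times on top of the
# one-off proof generation time (L_g). Values in ms, 2 KB row of Table 3.
l_g = 236                   # proof generation at the publisher (Singapore)
l_t = [38, 105, 29, 42]     # per-hop transmission times along the path
l_v = [6, 7, 5, 6]          # per-hop validation times

total = l_g + sum(t + v for t, v in zip(l_t, l_v))
print(total)  # 474; Table 3 reports an average total of 477, since the
              # average of per-run sums differs from the sum of averages

# Extrapolation to a longer path reuses the average per-hop cost:
per_hop = sum(t + v for t, v in zip(l_t, l_v)) / len(l_t)
estimate_6_hops = l_g + 6 * per_hop
```

The per-hop costs here reflect one specific route, so the extrapolated estimate is indicative rather than exact.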
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Results and Discussion</head><p>Table <ref type="table" target="#tab_2">3</ref> shows the multi-host measurements of the proof generation time 𝐿 𝑔 , transmission time 𝐿 𝑖→𝑗 𝑡 , and validation time 𝐿 𝑣 , as discussed in Section 3. As expected, larger messages cause higher latency, and latencies between geographically closer nodes are smaller. The transmission time 𝐿 𝑡 of small messages is close to half the RTT as measured in Table <ref type="table" target="#tab_1">2</ref>. For example, the RTT between Bangalore and San Francisco is 220 ms, and 𝐿 𝐵𝑎→𝑆𝐹 𝑡 = 105 ms. Proof generation time is much larger than verification time (Table <ref type="table" target="#tab_0">1</ref>). Proof generation does not depend on the message size: proofs are generated on message hashes, and we do not count hashing as part of proof generation. According to our benchmarks (Table <ref type="table">4</ref>), the hashing time is under 1.2 ms in the worst case, which is insignificant compared to the overall latency.</p><p>Validation time 𝐿 𝑣 , on the other hand, generally increases with message size. This is because validation includes not only proof verification but also decoding the message, and the decoding time is linear in the message size.</p></div>
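The half-RTT observation can be checked hop by hop. This sketch compares the measured 2 KB transmission times of Table 3 against half the corresponding RTTs of Table 2 (taking the RTT in the direction of travel).

```python
# Half-RTT baseline (Table 2) vs. measured per-hop transmission times
# for the 2 KB message (Table 3). All values in ms.
rtt = {"Si->Ba": 60, "Ba->SF": 220, "SF->NY": 68, "NY->Fr": 90}
measured = {"Si->Ba": 38, "Ba->SF": 105, "SF->NY": 29, "NY->Fr": 42}

for hop, r in rtt.items():
    print(f"{hop}: baseline {r / 2:.0f} ms, measured {measured[hop]} ms")
# e.g. Ba->SF: baseline 110 ms vs. measured 105 ms
```

The agreement is closest on the longest link (Bangalore to San Francisco), where propagation delay dominates any per-hop processing.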
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 4</head><p>Benchmarks for Keccak256 hashing and conversion to a BN254 field element using the go-zerokit-rln library <ref type="bibr" target="#b21">[25]</ref> (goos: darwin, goarch: arm64).</p><p>Now consider the distribution of latencies obtained in the single-host simulation compared to the latencies measured in the multi-host deployment (Figure <ref type="figure" target="#fig_2">3</ref>). Simulated latency distributions for small messages tend to be discrete, reflecting separate hops. As the message size increases, the latency distribution starts resembling a normal distribution. We explain this as follows: for larger messages, a larger share of the total latency is spent on message transmission, whereas for small messages, transmission is nearly instant compared to the validation time at each node, which explains the peaks in the upper charts.</p><p>Simulated latencies largely fall between the minimum and maximum latencies measured in the multi-host deployment (dashed lines in Figure <ref type="figure" target="#fig_2">3</ref>). Simulations seem to underestimate the latency for 100 KB and 500 KB messages, most likely because our simulation framework does not accurately model CPU time (we do account for proof verification time as benchmarked in Table <ref type="table" target="#tab_0">1</ref>, but not for other related tasks, such as message decoding). In a similar vein, simulations overestimate latency for small messages, which is especially visible in the plot for 2 KB messages. Messages of 25 KB or smaller are always delivered in under 1 s, both in simulations and in the multi-host measurements. 95% of large messages (500 KB) are delivered in under 1.7 s according to the simulation results (Table <ref type="table">5</ref>).</p><p>For a four-hop multi-host path, proof generation accounts for roughly 10% to 50% of the total latency (for 500 KB and 2 KB messages, respectively). 
We consider this tolerable, as the proof is generated once per message. Proof verification (which is only a part of message validation) does not contribute significantly to the total latency.  </p></div>
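The 10% to 50% range can be reproduced directly from the 𝐿 𝑔 and average-total columns of Table 3:

```python
# Share of total latency spent on RLN proof generation, computed from
# Table 3: {message size (KB): (L_g, average total latency)}, values in ms.
rows = {2: (236, 477), 25: (258, 945), 100: (223, 1689), 500: (247, 2468)}

for size_kb, (l_g, total) in rows.items():
    print(f"{size_kb} KB: {100 * l_g / total:.0f}% of total latency")
# 2 KB -> ~49%, 500 KB -> ~10%
```

Because 𝐿 𝑔 is roughly constant across message sizes while the total grows, the proof-generation share shrinks monotonically as messages get larger.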
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 5</head><p>The minimum, average, 95th-percentile, and maximum total latencies (ms) in the single-host distributions (marked "s-h") and multi-host measurements (marked "m-h") for different message sizes (KB). We do not provide the averages and the 95th-percentile data for the multi-host measurements due to the small number of data points.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Related Work</head><p>P2P protocols for message dissemination in blockchains have been studied extensively, including works on latency measurements in Bitcoin <ref type="bibr" target="#b22">[26,</ref><ref type="bibr" target="#b23">27]</ref> and Ethereum <ref type="bibr" target="#b24">[28]</ref>. GossipSub <ref type="bibr" target="#b2">[3]</ref> has been designed for information propagation in Ethereum and Filecoin. Waku <ref type="bibr" target="#b25">[29]</ref>, based on GossipSub, uses RLN <ref type="bibr" target="#b1">[2]</ref>, a ZKP-based protocol, for rate limiting. Decentralized messaging is a related but separate line of work with a long history <ref type="bibr" target="#b26">[30]</ref>. Decentralized messaging protocols normally define the rules of communication between a client and a server, and between two servers. Clients connect to the servers of their choosing, while servers forward users' messages to one another, forming a federation. Notable examples include ActivityPub <ref type="bibr" target="#b27">[31]</ref> (on which, e.g., Mastodon is based <ref type="bibr" target="#b28">[32]</ref>), Matrix <ref type="bibr" target="#b29">[33]</ref>, Nostr <ref type="bibr" target="#b30">[34]</ref>, XMPP [35], Diaspora <ref type="bibr" target="#b31">[36]</ref>, AT Protocol <ref type="bibr" target="#b32">[37]</ref>, and Farcaster <ref type="bibr" target="#b33">[38]</ref>.</p><p>Waku distinguishes itself from federated protocols: it aims to be general-purpose (not limited to chat-like applications) and provides transport privacy. Waku also embeds scalability and security mechanisms into its transport and routing layer. Specifically, it leverages RLN to prevent abuse of the open infrastructure and provides various scalability avenues, including sharding and light protocols (see Section 1).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Future Work</head><p>Dynamic topologies. Our analysis assumes a stable full mesh topology (see Section 3). However, real-world networks experience significant churn: nodes join and leave dynamically. Churn may lead to temporary neighborhoods of denser or sparser connectivity. One future avenue of research is investigating the impact of node churn and different topologies on propagation latency.</p><p>Higher message rates. We have analyzed network performance at low message rates. Increasing message rates leads to more control messages, higher processing overhead, and other transient effects impacting latency. Future work should explore these effects systematically.</p><p>Testing under more complex scenarios. Future research might evaluate the performance of Waku with RLN under more challenging conditions, which may involve node faults, network failures, clock de-synchronization, and other adversarial conditions.</p><p>On-chain RLN membership tree. We currently require all RLN publishers and verifiers to construct and maintain a copy of the membership Merkle tree 𝑇 locally (see Section 2.5.1), which may be impractical for resource-restricted nodes. We are investigating models where the entire tree 𝑇 resides on-chain, allowing for delegated proof generation and simplified verification.</p><p>Security and privacy analysis. Further research should focus on identifying vulnerabilities and attack vectors. Enhancing security mechanisms to prevent spam, Sybil attacks, and adversarial behavior will contribute to the robustness of the network.</p><p>Comparison with traditional P2P protocols without RLN. Another research direction may consider comparable P2P protocols with other rate-limiting methods (for instance, reputation-based) and compare their properties with those of Waku.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8.">Conclusion</head><p>In this work, we studied message propagation latency in Waku, a GossipSub-based P2P messaging protocol that uses RLN <ref type="bibr" target="#b1">[2]</ref> for privacy-preserving rate limiting. We simulated a network of 1000 nodes under realistic assumptions (Section 4.1) and used a geographically distributed cloud deployment (Section 4.2). The results show that the delays that RLN imposes are not overwhelming compared to the overall message latency. Proof generation, which the publisher performs only once per message, requires under 300 ms per proof and does not depend on message size, assuming that the message has been hashed. Proof verification, performed by every routing node, is roughly an order of magnitude faster than proof generation. Overall, RLN-related tasks account for roughly 10% to 50% of the total latency, depending on message size. This percentage is higher for small messages, as proof generation is independent of message size, while transmission time is lower for small messages. In absolute numbers, all messages of 25 KB and smaller are delivered in under 1 second. We conclude that RLN can be a practical rate-limiting tool in real-world protocols underpinning user-facing applications.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: An example of message propagation in a network with 𝑁 = 8 and 𝐷 = 2, with latency components indicated. The message is relayed from 𝑃 1 to 𝑃 8 via four hops.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: The best-topology maximal path length ℎ 𝑚𝑎𝑥 and bandwidth amplification for various values of node degree 𝐷, calculated using Equation 2.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Single-host simulation results for 1000 nwaku <ref type="bibr" target="#b10">[11]</ref> nodes with 𝐷 = 6 in the Shadow <ref type="bibr" target="#b16">[19]</ref> simulator. The message propagation latency distribution is shown for different message sizes, with the average, 95th-percentile, minimum, and maximum values indicated (in ms). The worst-case and best-case propagation times from the multi-host measurements (Table 3) are shown in red.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>RLN proof generation and verification times (ms) on various platforms.</figDesc><table><row><cell>Platform</cell><cell cols="2">Generation (ms) Verification (ms)</cell></row><row><cell>MacBook M1 Pro 16GB</cell><cell>85.7</cell><cell>2.7</cell></row><row><cell>Bare Metal 256GB AMD EPYC 7502P 32-Core</cell><cell>171.6</cell><cell>7.9</cell></row><row><cell>Digital Ocean 8GB/4CPU</cell><cell>276.3</cell><cell>4.5</cell></row><row><cell>Digital Ocean 2GB/2CPU</cell><cell>329.5</cell><cell>4.4</cell></row><row><cell>Raspberry Pi 4B 4GB</cell><cell>766.8</cell><cell>18.7</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Round-trip time (RTT) ping latency (ms) between deployed nodes in various locations.</figDesc><table><row><cell>From</cell><cell cols="6">To Bangalore Frankfurt Singapore San Francisco New York</cell></row><row><cell>Bangalore</cell><cell></cell><cell>N/A</cell><cell>136</cell><cell>60</cell><cell>220</cell><cell>213</cell></row><row><cell>Frankfurt</cell><cell></cell><cell>132</cell><cell>N/A</cell><cell>159</cell><cell>150</cell><cell>87</cell></row><row><cell>Singapore</cell><cell></cell><cell>59</cell><cell>160</cell><cell>N/A</cell><cell>176</cell><cell>236</cell></row><row><cell cols="2">San Francisco</cell><cell>218</cell><cell>146</cell><cell>176</cell><cell>N/A</cell><cell>68</cell></row><row><cell>New York</cell><cell></cell><cell>214</cell><cell>90</cell><cell>235</cell><cell>68</cell><cell>N/A</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3</head><label>3</label><figDesc>Multi-host network measurements of five nwaku<ref type="bibr" target="#b10">[11]</ref> nodes deployed at different locations for various message sizes. The message travels from Singapore to Frankfurt via the other nodes in the order listed. Latencies (one-way, or 𝑅𝑇𝑇/2) are in ms. 𝐿 𝑔 , 𝐿 𝑡 , and 𝐿 𝑣 are averages across five runs. 𝐿 𝑎𝑣𝑔 𝑡𝑜𝑡𝑎𝑙 is the average of the total latencies across five runs. Note: 𝐿 𝑎𝑣𝑔 𝑡𝑜𝑡𝑎𝑙 (the average of the sums of sub-latencies for each run) is not equal to the sum of averages (i.e., the value in the last column is not necessarily equal to the sum of the other values in that row).</figDesc><table><row><cell>Msg size (KB)</cell><cell>𝐿 𝑔</cell><cell>𝐿 𝑆𝑖→𝐵𝑎 𝑡</cell><cell>𝐿 𝐵𝑎 𝑣</cell><cell>𝐿 𝐵𝑎→𝑆𝐹 𝑡</cell><cell>𝐿 𝑆𝐹 𝑣</cell><cell>𝐿 𝑆𝐹→𝑁𝑌 𝑡</cell><cell>𝐿 𝑁𝑌 𝑣</cell><cell>𝐿 𝑁𝑌→𝐹𝑟 𝑡</cell><cell>𝐿 𝐹𝑟 𝑣</cell><cell>𝐿 𝑎𝑣𝑔 𝑡𝑜𝑡𝑎𝑙</cell></row><row><cell>2</cell><cell>236</cell><cell>38</cell><cell>6</cell><cell>105</cell><cell>7</cell><cell>29</cell><cell>5</cell><cell>42</cell><cell>6</cell><cell>477</cell></row><row><cell>25</cell><cell>258</cell><cell>81</cell><cell>10</cell><cell>344</cell><cell>11</cell><cell>100</cell><cell>6</cell><cell>127</cell><cell>7</cell><cell>945</cell></row><row><cell>100</cell><cell>223</cell><cell>119</cell><cell>8</cell><cell>755</cell><cell>12</cell><cell>261</cell><cell>9</cell><cell>289</cell><cell>11</cell><cell>1689</cell></row><row><cell>500</cell><cell>247</cell><cell>275</cell><cell>16</cell><cell>1017</cell><cell>19</cell><cell>391</cell><cell>21</cell><cell>462</cell><cell>17</cell><cell>2468</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 4</head><label>4</label><figDesc>Keccak256 hashing time (ms) for different message sizes.</figDesc><table><row><cell>Message size (KB)</cell><cell>Hashing time (ms)</cell></row><row><cell>2</cell><cell>0.005516</cell></row><row><cell>25</cell><cell>0.061144</cell></row><row><cell>100</cell><cell>0.243716</cell></row><row><cell>500</cell><cell>1.222206</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 5</head><label>5</label><figDesc></figDesc><table><row><cell cols="7">Msg size Min (m-h) Min (s-h) Avg (s-h) 95-perc. (s-h) Max (s-h) Max (m-h)</cell></row><row><cell>2</cell><cell>280</cell><cell>356</cell><cell>497</cell><cell>597</cell><cell>604</cell><cell>498</cell></row><row><cell>25</cell><cell>349</cell><cell>432</cell><cell>756</cell><cell>930</cell><cell>993</cell><cell>931</cell></row><row><cell>100</cell><cell>350</cell><cell>439</cell><cell>900</cell><cell>1076</cell><cell>1221</cell><cell>1781</cell></row><row><cell>500</cell><cell>539</cell><cell>471</cell><cell>1358</cell><cell>1665</cell><cell>2468</cell><cell>3141</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Strictly speaking, zkSNARKs function as arguments rather than proofs of knowledge<ref type="bibr" target="#b6">[6]</ref>.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">Our definition of ∅ differs from the original definition<ref type="bibr" target="#b1">[2]</ref> in that we add the RLN identifier to support multiple applications running on the same Waku network deployment.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">In other words, latency includes preparatory steps before the message is relayed, such as generating the RLN proof.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">Not exactly equal due to additional increase in control message frequency.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">In hub-and-spoke, most nodes are only connected to a handful of well-connected hubs, which is undesirable for decentralization and censorship resistance.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_5">See libp2p documentation [20]: "In libp2p's default implementation the ideal network peering degree is 6 with anywhere from 4-12 being acceptable."</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>We would like to acknowledge Mikel's contributions to benchmarking Waku and RLN on Raspberry Pi.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Back</surname></persName>
		</author>
		<title level="m">Hashcash-a denial of service counter-measure</title>
				<imprint>
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">WAKU-RLN-RELAY: Privacy-preserving peer-to-peer economic spam protection</title>
		<author>
			<persName><forename type="first">S</forename><surname>Taheri-Boshrooyeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Thorén</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Whitehat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">J</forename><surname>Koh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Kilic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Gurkan</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICDCSW56584.2022.00022</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 42nd International Conference on Distributed Computing Systems Workshops (ICDCSW)</title>
				<imprint>
			<date type="published" when="2022">2022. 2022</date>
			<biblScope unit="page" from="73" to="78" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Gossipsub: Attack-resilient message propagation in the filecoin and ETH2.0 networks</title>
		<author>
			<persName><forename type="first">D</forename><surname>Vyzovitis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Napora</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Mccormick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Dias</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Psaras</surname></persName>
		</author>
		<idno>CoRR abs/2007.02754</idno>
		<ptr target="https://arxiv.org/abs/2007.02754" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<orgName>Protocol Labs</orgName>
		</author>
		<ptr target="https://github.com/libp2p/specs/blob/b9efe152c29f93f7a87931c14d78ae11e7924d5a/pubsub/gossipsub/gossipsub-v1.1.md#the-score-function" />
		<title level="m">Gossipsub v1.1: score function formal specification</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">TCP Extensions for High Performance</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Borman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">T</forename><surname>Braden</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Jacobson</surname></persName>
		</author>
		<idno type="DOI">10.17487/RFC1323</idno>
		<ptr target="https://www.rfc-editor.org/info/rfc1323" />
	</analytic>
	<monogr>
		<title level="m">RFC 1323</title>
				<imprint>
			<date type="published" when="1992">1992</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">zk-SNARKs: A Gentle Introduction</title>
		<author>
			<persName><forename type="first">A</forename><surname>Nitulescu</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">Ballesteros</forename><surname>Rodríguez</surname></persName>
		</author>
		<title level="m">zk-SNARKs analysis and implementation on Ethereum</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<ptr target="https://ethereum.org/en/zero-knowledge-proofs/" />
		<title level="m">What are zero-knowledge proofs</title>
				<imprint>
			<date type="published" when="2024-03-11">2024. 2024-03-11</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">V</forename><surname>Gluhovsky</surname></persName>
		</author>
		<ptr target="https://eips.ethereum.org/EIPS/eip-627" />
		<title level="m">Whisper specification</title>
				<imprint>
			<date type="published" when="2017">2017. 2024-04-23</date>
		</imprint>
	</monogr>
	<note>Eip-627</note>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<ptr target="https://github.com/waku-org/nwaku/" />
		<title level="m">nwaku repo</title>
				<imprint>
			<date type="published" when="2024-02-13">2024. 2024-02-13</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<ptr target="https://status.app/" />
		<title level="m">Status app official website</title>
				<imprint>
			<date type="published" when="2024-03-05">2024. 2024-03-05</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<ptr target="https://cointelegraph.com/press-releases/waku-launches-first-decentralised-privacy-preserving-dos-protections-for-p2p-messaging" />
		<title level="m">Waku Network launch press-release</title>
				<imprint>
			<date type="published" when="2023">2023. 2024-02-29</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Poseidon: A new hash function for zero-knowledge proof systems</title>
		<author>
			<persName><forename type="first">L</forename><surname>Grassi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Khovratovich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Rechberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Roy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Schofnegger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">USENIX Security Symposium, USENIX Association</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="519" to="535" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Revuelta</surname></persName>
		</author>
		<ptr target="https://github.com/waku-org/nwaku/blob/097cb36279897801a5963343c7298220b6290228/apps/benchmarks/benchmarks.nim" />
		<title level="m">nwaku benchmark</title>
				<imprint>
			<date type="published" when="2024-02-13">2024. 2024-02-13</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><forename type="first">B</forename><surname>Whitehat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Taheri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Thorén</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Kilic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Dimovski</surname></persName>
		</author>
		<ptr target="https://github.com/vacp2p/rfc-index/blob/main/vac/32/rln-v1.md" />
		<title level="m">Rate Limit Nullifier</title>
				<imprint>
			<date type="published" when="2024-04-23">2024. 2024-04-23</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<ptr target="https://shadow.github.io/" />
		<title level="m">The Shadow simulator. a discrete-event network simulator that directly executes real application code</title>
				<imprint>
			<date type="published" when="2024-02-13">2024. 2024-02-13</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<ptr target="https://github.com/waku-org/research/blob/3a2da04e548b034dfaf63aada54d63197aa8ef8e/rln-delay-simulations/shadow.yaml" />
		<title level="m">nwaku shadow simulation config</title>
				<imprint>
			<date type="published" when="2024-02-13">2024. 2024-02-13</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<ptr target="https://www.digitalocean.com/" />
		<title level="m">Digital ocean</title>
				<imprint>
			<date type="published" when="2024-02-13">2024. 2024-02-13</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<ptr target="https://datatracker.ietf.org/doc/html/rfc5905" />
		<title level="m">Network Time Protocol version 4: Protocol and algorithms specification</title>
				<imprint>
			<date type="published" when="2024-03-04">2024. 2024-03-04</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<ptr target="https://github.com/waku-org/nwaku/tree/097cb36279897801a5963343c7298220b6290228/apps/wakucanary" />
		<title level="m">waku canary tool</title>
				<imprint>
			<date type="published" when="2024-02-13">2024. 2024-02-13</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m">Hashing benchmark code</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Information propagation in the bitcoin network</title>
		<author>
			<persName><forename type="first">C</forename><surname>Decker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Wattenhofer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">P2P, IEEE</title>
				<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="1" to="10" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Timing analysis for inferring the topology of the bitcoin peer-to-peer network</title>
		<author>
			<persName><forename type="first">T</forename><surname>Neudecker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Andelfinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hartenstein</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 13th IEEE International Conference on Advanced and Trusted Computing (ATC)</title>
				<meeting>the 13th IEEE International Conference on Advanced and Trusted Computing (ATC)</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Evaluation of ethereum end-to-end transaction latency</title>
		<author>
			<persName><forename type="first">L</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Ye</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Qiao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">NTMS, IEEE</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="1" to="5" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<ptr target="https://waku.org/" />
		<title level="m">Waku official website</title>
				<imprint>
			<date type="published" when="2024-02-27">2024. 2024-02-27</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Sok: Secure messaging</title>
		<author>
			<persName><forename type="first">N</forename><surname>Unger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Dechand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Bonneau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Fahl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Perl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Goldberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Smith</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Symposium on Security and Privacy</title>
				<imprint>
			<publisher>IEEE Computer Society</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="232" to="249" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<monogr>
		<ptr target="https://www.w3.org/TR/activitypub/" />
		<title level="m">Activitypub official website</title>
				<imprint>
			<date type="published" when="2024-02-27">2024. 2024-02-27</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Challenges in the decentralised web: The mastodon case</title>
		<author>
			<persName><forename type="first">A</forename><surname>Raman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Joglekar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">D</forename><surname>Cristofaro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Sastry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Tyson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Internet Measurement Conference</title>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="217" to="229" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<monogr>
		<ptr target="https://spec.matrix.org/latest/" />
		<title level="m">Matrix official website</title>
		<imprint>
			<date type="published" when="2024-02-27">2024-02-27</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<monogr>
		<ptr target="https://nostr.how/en/what-is-nostr" />
		<title level="m">Nostr official website</title>
		<imprint>
			<date type="published" when="2024-02-27">2024-02-27</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<monogr>
		<ptr target="https://diasporafoundation.org/" />
		<title level="m">Diaspora official website</title>
		<imprint>
			<date type="published" when="2024-02-27">2024-02-27</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Kleppmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Frazee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Graber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Holmgren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ivy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Johnson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Newbold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Volpert</surname></persName>
		</author>
		<idno>CoRR abs/2402.03239</idno>
		<title level="m">Bluesky and the AT Protocol: Usable decentralized social media</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<monogr>
		<ptr target="https://docs.farcaster.xyz" />
		<title level="m">Farcaster official website</title>
		<imprint>
			<date type="published" when="2024-02-27">2024-02-27</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
