<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Uncategorized &#8211; CFLabs</title>
	<atom:link href="https://cflabs.ai/category/uncategorized/feed/" rel="self" type="application/rss+xml" />
	<link>https://cflabs.ai</link>
	<description></description>
	<lastBuildDate>Tue, 16 Sep 2025 08:08:12 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.2</generator>

<image>
	<url>https://cflabs.ai/wp-content/uploads/2025/09/favicon-32x32-1.png</url>
	<title>Uncategorized &#8211; CFLabs</title>
	<link>https://cflabs.ai</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>The Prover Network Landscape: A Comparative Analysis of Succinct and Boundless Networks</title>
		<link>https://medium.com/@CFrontier_Labs/the-prover-network-landscape-a-comparative-analysis-of-succint-and-boundless-networks-b1c362b4e67d</link>
					<comments>https://medium.com/@CFrontier_Labs/the-prover-network-landscape-a-comparative-analysis-of-succint-and-boundless-networks-b1c362b4e67d#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Mon, 08 Sep 2025 08:21:34 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Homepage]]></category>
		<guid isPermaLink="false">http://192.168.110.200:8777/?p=405</guid>

					<description><![CDATA[Abstract 1. Network Snapshot: Scale, Trends, and Centralization At a glance,&#160;Boundless&#160;shows&#160;broader participation and openness, with many small and mid-sized nodes [&#8230;]]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading" id="2883">Abstract</h2>



<ul class="wp-block-list">
<li><strong>Network scale.</strong> Both networks have reached a genuinely <strong>scaled</strong> phase for proofs and requests. <strong>Boundless</strong> has created a high-frequency open marketplace that absorbs substantial compute; <strong>Succinct</strong> spans many major protocols with steadily rising proof demand. Net-net, supply and demand are expanding in tandem, with activity and throughput trending up.</li>



<li><strong>Economic models.</strong> <strong>Boundless</strong> uses a <strong>reverse Dutch auction</strong>: the price starts at a minimum (often <strong>0 ETH</strong>), then rises linearly to a cap and stays there until the lock deadline. On beta mainnet today, provers frequently bid <strong>0</strong> on the prove fee and instead compete to <strong>lock</strong> orders — often using <strong>MEV-style</strong> tactics to win first inclusion.<br><strong>Succinct</strong> runs a real-time auction <strong>denominated in $PROVE</strong>: provers bid a per-unit compute price; the lowest bid wins (ties are randomly selected), and each job includes a fixed <strong>base fee</strong>. Provers must <strong>stake $PROVE</strong> to participate; failure to deliver on time is penalized (slashing).</li>



<li><strong>Miner appeal (current earnings).</strong> On <strong>Boundless</strong>, jobs typically pay <strong>near zero</strong> in direct fees; rewards primarily come from leaderboard points redeemable for <strong>$ZKC</strong>. <strong>Succinct</strong> pays out <strong>$PROVE</strong> immediately (base fee + metered compute), which is tradable — so near-term economics are clearer.</li>



<li><strong>Who’s a better fit right now?</strong> If you have <strong>GPU</strong> but limited staking capital, <strong>Boundless</strong> offers low entry barriers and an open network to bank future token rewards. If you can purchase and stake <strong>$PROVE</strong>, <strong>Succinct</strong> offers more <strong>immediate</strong> per-job rewards in $PROVE and can be more lucrative near-term.</li>
</ul>



<h2 class="wp-block-heading" id="b984">1. Network Snapshot: Scale, Trends, and Centralization</h2>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:980/1*zNKUVGFBGX3TlXBdgpWBCA.png" alt=""/><figcaption class="wp-element-caption"><strong>Table 1</strong>&nbsp;summarizes key scale and participation metrics for&nbsp;<strong>Boundless</strong>&nbsp;and&nbsp;<strong>Succinct</strong>.</figcaption></figure>



<p id="d1c3">At a glance,&nbsp;<strong>Boundless</strong>&nbsp;shows&nbsp;<strong>broader participation and openness</strong>, with many small and mid-sized nodes contributing — implying&nbsp;<strong>greater decentralization</strong>.&nbsp;<strong>Succinct</strong>&nbsp;currently has fewer provers; early compute may concentrate among a smaller set of high-reputation, higher-bid nodes.</p>



<p id="cc89">From a&nbsp;<strong>growth-trend</strong>&nbsp;perspective, as more blockchains and applications integrate with these networks, proof demand should continue to&nbsp;<strong>grow rapidly</strong>, and both&nbsp;<strong>participation</strong>&nbsp;and&nbsp;<strong>throughput</strong>&nbsp;are likely to rise accordingly.</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*-QZ9D7c9BqgsyvSn0T8OTA.png" alt=""/><figcaption class="wp-element-caption">Boundless daily orders count trend</figcaption></figure>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*EbfR48qkm-zR1Uvxge7O2Q.png" alt=""/><figcaption class="wp-element-caption">Succinct daily orders count trend</figcaption></figure>



<p id="099d">Boundless’s compute contribution appears&nbsp;<strong>long-tailed</strong>&nbsp;— work is spread across a wider set of miners, limiting any single node’s share. Succinct’s&nbsp;<strong>higher entry bar</strong>&nbsp;makes the network&nbsp;<strong>more concentrated</strong>&nbsp;in the early stage: only nodes with sufficient stake and performance consistently win jobs, favoring “the strong get stronger.” Over time, as more participants acquire and stake&nbsp;<strong>$PROVE</strong>, the number of active provers may rise.</p>



<h2 class="wp-block-heading" id="05a3">2. Economic Models: Auction &amp; Incentive Design</h2>



<p id="e7fe">The two networks differ fundamentally in&nbsp;<strong>pricing</strong>&nbsp;and&nbsp;<strong>incentives</strong>.</p>



<h2 class="wp-block-heading" id="0f1f">Boundless: Reverse Dutch Auction</h2>



<p id="6c86">When a user submits a ZK proof request, Boundless prices the job via a&nbsp;<strong>reverse Dutch auction</strong>:</p>



<ul class="wp-block-list">
<li>The requester specifies a <strong>min/max price</strong> and timing parameters.</li>



<li>The price <strong>starts very low</strong> (often <strong>0</strong>) → <strong>rises linearly</strong> to an upper bound → if no one locks the job by the <strong>lock</strong> deadline, it expires.</li>
</ul>
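<p>A minimal sketch of this price curve, with hypothetical parameters (the function name, units, and timings below are our assumptions for illustration, not Boundless contract code):</p>

```python
def auction_price(t, start_price, max_price, ramp_start, ramp_end):
    """Toy reverse Dutch auction curve: the price holds at start_price,
    rises linearly to max_price between ramp_start and ramp_end, and
    then stays at the cap until the lock deadline."""
    if t <= ramp_start:
        return start_price
    if t >= ramp_end:
        return max_price
    frac = (t - ramp_start) / (ramp_end - ramp_start)
    return start_price + frac * (max_price - start_price)

# hypothetical job: price starts at 0 ETH, caps at 0.01 ETH between t=10 and t=60
print(auction_price(5, 0.0, 0.01, 10, 60))   # 0.0
print(auction_price(35, 0.0, 0.01, 10, 60))  # 0.005
print(auction_price(90, 0.0, 0.01, 10, 60))  # 0.01
```

<p>With supply abundant, a prover willing to lock at t=0 clears the job at 0 ETH, which matches the near-zero clearing behavior observed on beta mainnet.</p>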



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*XnaEqTjuro_EFvdy293-sw.png" alt=""/><figcaption class="wp-element-caption"><em>Reverse Dutch auction price curve</em></figcaption></figure>



<p id="59c9">Early on, with token incentives and abundant supply, competition pushed most jobs to&nbsp;<strong>clear at 0 fee</strong>&nbsp;— provers did unpaid work to farm token rewards. Boundless has earmarked&nbsp;<strong>5,000,000 $ZKC (0.5% of supply)</strong>&nbsp;for test incentives; in Season 1 and Season 2, weekly pools are&nbsp;<strong>0.1%</strong>. Each prover earns&nbsp;<strong>points</strong>&nbsp;based on total compute cycles, success rate, and speed, then shares the pool&nbsp;<strong>pro-rata</strong>&nbsp;after the event. This&nbsp;<strong>“work now, tokens later”</strong>&nbsp;model encouraged provers to aggressively take jobs and undercut prices, producing a near-free proof market.</p>
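<p>The pro-rata payout can be sketched as follows (a toy model: the weekly pool size and the collapsing of cycles, success rate, and speed into a single points number are our assumptions):</p>

```python
def zkc_payout(points, weekly_pool):
    """Toy pro-rata split of a weekly $ZKC pool by leaderboard points.
    Each prover's share is proportional to their points."""
    total = sum(points.values())
    return {who: weekly_pool * p / total for who, p in points.items()}

# hypothetical 1,000,000 $ZKC weekly pool (0.1% of the ~1B total supply
# implied by 5M = 0.5%), split among three provers
shares = zkc_payout({"alice": 700, "bob": 200, "carol": 100}, 1_000_000)
print(shares["alice"])  # 700000.0
```

<p>Because payout depends only on relative points, taking more jobs at zero fee still increases a prover's share of the pool.</p>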



<h2 class="wp-block-heading" id="ba01">Succinct: Staked Real-Time Auction</h2>



<p id="b326">Succinct splits job cost into&nbsp;<strong>two parts</strong>:</p>



<ul class="wp-block-list">
<li>A fixed <strong>base fee</strong>; and</li>



<li>A <strong>per-PGU</strong> (proof gas unit) <strong>bid</strong> set by real-time auction. The network assigns the job to the <strong>lowest</strong> qualified bidder (ties randomly broken).</li>
</ul>



<p id="e164"><strong>Total fee = Base Fee + (Bid Price per PGU × PGUs consumed)</strong></p>
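<p>As a quick illustration, the fee formula in code (the numbers are hypothetical, not real network prices):</p>

```python
def succinct_job_fee(base_fee, bid_per_pgu, pgus_consumed):
    """Total $PROVE fee for one proof job: the fixed base fee plus the
    auction-set per-PGU price times the PGUs actually consumed."""
    return base_fee + bid_per_pgu * pgus_consumed

# hypothetical job: 1 $PROVE base fee, 0.25 $PROVE per PGU, 8 PGUs consumed
print(succinct_job_fee(1.0, 0.25, 8))  # 3.0
```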






<p id="64f1">Provers must&nbsp;<strong>stake $PROVE</strong>&nbsp;to bid. Staking raises the bar and ensures&nbsp;<strong>skin in the game</strong>: if a winner misses the deadline, part of their stake is&nbsp;<strong>slashed</strong>. This “pay-for-work” design creates a closed-loop token economy:&nbsp;<strong>requesters</strong>&nbsp;buy compute in&nbsp;<strong>$PROVE</strong>,&nbsp;<strong>provers</strong>&nbsp;earn&nbsp;<strong>$PROVE</strong>&nbsp;by delivering, and&nbsp;<strong>stakers</strong>&nbsp;support security and share rewards.</p>



<p id="af08"><strong>Bottom line:</strong>&nbsp;<strong>Boundless</strong>&nbsp;maximizes&nbsp;<strong>openness and competitive pressure</strong>&nbsp;— in today’s oversupplied phase, fees compress toward zero and token incentives backfill.&nbsp;<strong>Succinct</strong>&nbsp;balances&nbsp;<strong>incentives and reliability</strong>&nbsp;via staking and immediate payment. Boundless is&nbsp;<strong>deferred-incentive</strong>; Succinct is&nbsp;<strong>immediate-incentive</strong>.</p>



<h2 class="wp-block-heading" id="e4d7">3. Revenue &amp; Cost Analysis</h2>



<h2 class="wp-block-heading" id="2432">3.1 Revenue</h2>



<p id="2729"><strong>Boundless.</strong>&nbsp;Today, mining on Boundless yields&nbsp;<strong>little to no direct ETH</strong>&nbsp;revenue. Intense competition plus extra incentives push most auctions to&nbsp;<strong>0 fee</strong>&nbsp;— i.e., miners earn&nbsp;<strong>almost nothing</strong>&nbsp;per job.<br>The main upside is&nbsp;<strong>leaderboard points</strong>&nbsp;redeemable for&nbsp;<strong>$ZKC</strong>&nbsp;later. A total of&nbsp;<strong>5 million $ZKC (0.5%)</strong>&nbsp;is allocated for test incentives. Your share depends on your&nbsp;<strong>relative</strong>&nbsp;points — computed from&nbsp;<strong>cycles completed</strong>,&nbsp;<strong>success rate</strong>, and&nbsp;<strong>peak throughput</strong>. Strategy-wise, miners should&nbsp;<strong>take more jobs, do more work</strong>, and accumulate cycles — even if a given job pays zero — to maximize their fraction of the final pool. Note the&nbsp;<strong>uncertainty</strong>: $ZKC’s future value is unknown and distribution comes&nbsp;<strong>later</strong>, so current work is a&nbsp;<strong>strategic investment</strong>.</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*1SBuzETwPCbFvnmji8BDbw.png" alt=""/><figcaption class="wp-element-caption"><em>Top miners’ per-job fee revenue ≈ 0</em></figcaption></figure>



<p id="f88c"><strong>Succinct.</strong>&nbsp;Each completed proof&nbsp;<strong>immediately</strong>&nbsp;pays&nbsp;<strong>$PROVE</strong>, so there’s&nbsp;<strong>direct cash flow</strong>.</p>



<p id="1e23">Revenue per job = Base Fee + (Bid Price per PGU × PGUs consumed)</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*VV1WGefNYQzpVn_Oqz3-iw.png" alt=""/></figure>



<p id="23c1">Two distinct bidding strategies tend to emerge:</p>



<ol class="wp-block-list">
<li><strong>High-volume, low-margin.</strong> Bid the per-cycle price <strong>very low</strong> (near zero) to increase win rate and effectively <strong>farm base fees</strong> across many <strong>small</strong> jobs.</li>



<li><strong>Fewer, high-margin “big jobs.”</strong> Quote a <strong>higher</strong> unit price for <strong>large</strong> jobs; leverage strong hardware to earn sizable variable fees.</li>
</ol>
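<p>A back-of-the-envelope comparison of the two strategies (all win rates, job counts, and job sizes are illustrative guesses, not measured data):</p>

```python
def expected_revenue(win_rate, jobs_bid, base_fee, bid_per_pgu, avg_pgus):
    """Expected $PROVE per period for a bidding strategy: expected wins
    times the per-job fee (base fee plus variable compute fee)."""
    return win_rate * jobs_bid * (base_fee + bid_per_pgu * avg_pgus)

# Strategy 1: near-zero bid on many small jobs ("base-fee farming")
farming = expected_revenue(0.8, 1000, 1.0, 0.0001, 1000)
# Strategy 2: higher bid on fewer, large jobs (variable fees dominate)
big_jobs = expected_revenue(0.1, 100, 1.0, 0.001, 500_000)
```

<p>Which strategy wins depends entirely on the assumed win rates and job mix; the structural point is that base fees reward volume while variable fees reward hardware capacity.</p>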



<p id="9dd3">In practice, many provers currently&nbsp;<strong>undercut</strong>&nbsp;to secure jobs and at least capture the&nbsp;<strong>base fee</strong>&nbsp;plus small variable fees — behaving like “base-fee farmers.” For&nbsp;<strong>urgent</strong>&nbsp;or&nbsp;<strong>heavy</strong>&nbsp;proofs, requesters often pay&nbsp;<strong>higher</strong>&nbsp;rates to reduce latency, creating upside for high-performance miners. Overall, Succinct miner earnings hinge on&nbsp;<strong>$PROVE’s market price</strong>&nbsp;and&nbsp;<strong>bidding strategy</strong>; because $PROVE is&nbsp;<strong>liquid</strong>, Succinct offers&nbsp;<strong>clearer, near-term</strong>&nbsp;economics.</p>



<h2 class="wp-block-heading" id="81fe">3.2 Participation Costs</h2>



<p id="982c"><strong>Boundless.</strong>&nbsp;Entry is&nbsp;<strong>nearly permissionless</strong>: anyone with suitable hardware (primarily&nbsp;<strong>GPU</strong>) can join — no network-specific token staking required, and thus no stake lockup risk. However, because&nbsp;<strong>auction prices are near zero</strong>, miners must manage&nbsp;<strong>electricity and hardware costs</strong>&nbsp;carefully: at&nbsp;<strong>0 fee</strong>, you’re effectively&nbsp;<strong>mining at a loss</strong>&nbsp;unless expected&nbsp;<strong>$ZKC</strong>&nbsp;value compensates later. The good news: with&nbsp;<strong>no eligibility barriers</strong>, even modest rigs can win small jobs (win rate depends on bidding and competition). In short, Boundless’s main barrier is&nbsp;<strong>hardware capex/opex</strong>;&nbsp;<strong>economic</strong>&nbsp;and&nbsp;<strong>policy</strong>&nbsp;barriers are low.&nbsp;<em>(Note: things may change at mainnet.)</em></p>



<p id="ed7a"><strong>Succinct.</strong>&nbsp;The bar is&nbsp;<strong>higher</strong>. You must&nbsp;<strong>stake $PROVE</strong>&nbsp;to register as an active prover — tying up capital and bearing&nbsp;<strong>token price risk</strong>. If slashed (e.g., late or failed delivery), you take an immediate loss. Competition is&nbsp;<strong>intense</strong>: performant nodes often bid very low, so weaker hardware or high latency may lead to&nbsp;<strong>few wins</strong>. The network is&nbsp;<strong>more concentrated</strong>&nbsp;early; established operators dominate. Succinct suits&nbsp;<strong>professional</strong>&nbsp;providers with strong engineering and capital, rather than casual miners.</p>



<h2 class="wp-block-heading" id="20d6">4. Conclusion</h2>



<p id="18d8"><strong>Shared direction.</strong>&nbsp;Both Boundless and Succinct aim to turn&nbsp;<strong>“producing and verifying ZK proofs”</strong>&nbsp;from a heavy, in-house operation into&nbsp;<strong>on-demand compute</strong>&nbsp;— like power or bandwidth — matched by market mechanisms. Boundless maximizes&nbsp;<strong>open access</strong>; Succinct embeds&nbsp;<strong>staking + price discovery</strong>&nbsp;for reliable delivery. Different paths, same goal: let applications&nbsp;<strong>pull proof capacity on demand</strong>&nbsp;— as easily as calling an API.</p>



<p id="29ff"><strong>Reliability vs. decentralization.</strong>&nbsp;Openness invites&nbsp;<strong>variance</strong>; constraints raise&nbsp;<strong>entry costs</strong>. Boundless lowers barriers to gain&nbsp;<strong>scale and long-tail participation</strong>, using market penalties/redistribution to curb bad behavior. Succinct front-loads&nbsp;<strong>professionalism and reliability</strong>&nbsp;via staking and slashing —&nbsp;<strong>more concentrated</strong>&nbsp;but&nbsp;<strong>controllable</strong>&nbsp;in the near term. These designs are&nbsp;<strong>not</strong>&nbsp;mutually exclusive: as demand grows and tooling matures, open networks will add&nbsp;<strong>quality signals and reputation</strong>, while staked networks can&nbsp;<strong>decentralize</strong>&nbsp;via pooling and delegation.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://medium.com/@CFrontier_Labs/the-prover-network-landscape-a-comparative-analysis-of-succint-and-boundless-networks-b1c362b4e67d/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Labrador: A New Era of Post-Quantum ZK Proofs</title>
		<link>https://medium.com/@CFrontier_Labs/labrador-a-new-era-of-post-quantum-zk-proofs-78dc8f31f243</link>
					<comments>https://medium.com/@CFrontier_Labs/labrador-a-new-era-of-post-quantum-zk-proofs-78dc8f31f243#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Tue, 29 Jul 2025 08:50:10 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://192.168.110.200:8777/?p=410</guid>

					<description><![CDATA[TLDR : Introduction With the rapid advancement of zero-knowledge (ZK) technology, a new project called&#160;Labrador&#160;has emerged, promising to revolutionize how [&#8230;]]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading" id="5325">TLDR :</h2>



<ol class="wp-block-list">
<li><strong>Post-Quantum Security</strong>: Labrador is a lattice-based zkSNARK system designed to be secure against future quantum computers, offering a robust solution for long-term privacy and scalability in blockchain systems.</li>



<li><strong>Efficient Proofs</strong>: It produces succinct proofs (~50 KB) for large computations using a recursive compression technique, maintaining efficiency while being quantum-resistant.</li>



<li><strong>Modular and Transparent</strong>: Unlike traditional SNARKs, Labrador doesn’t require a trusted setup, providing transparency and scalability for future-proof blockchain applications.</li>



<li><strong>Future Potential</strong>: Labrador strikes a balance between security and efficiency, making it ideal for use cases that prioritize post-quantum safety, with further optimizations expected in upcoming versions.</li>
</ol>



<h2 class="wp-block-heading" id="b2d8">Introduction</h2>



<p id="636b">With the rapid advancement of zero-knowledge (ZK) technology, a new project called&nbsp;<strong>Labrador</strong>&nbsp;has emerged, promising to revolutionize how we think about secure proofs in the post-quantum era. Labrador is more than just a cute name: it is the&nbsp;<strong>first practical lattice-based zkSNARK</strong>&nbsp;(Zero-Knowledge Succinct Non-interactive Argument of Knowledge).</p>



<p id="be0a">In simpler terms, it’s a system for proving things without revealing secrets, built on advanced math that even future quantum computers can’t easily break. ZK enthusiasts have been excited about Labrador because it offers&nbsp;<strong>compact proof sizes (around 50 KB)</strong>&nbsp;while relying solely on&nbsp;<strong>post-quantum secure assumptions</strong>&nbsp;(specifically, lattice cryptography). This project was introduced by cryptographers in 2023 and has quickly become an important tool in the ZK space, already finding use in areas like post-quantum signature aggregation.</p>



<p id="68a5">In this article, we will explore Labrador’s design and approach in depth — from its proving system and novel cryptographic techniques to how it might fit into modular blockchain architectures and data availability layers. We’ll also compare Labrador with other ZK ecosystems to understand its unique place in the landscape. Our goal is to explain these complex concepts in plain language, using analogies and clear examples, so that even readers without a formal math or physics background can grasp the big ideas behind Labrador.</p>



<h2 class="wp-block-heading" id="a9a0">Why Post-Quantum ZK Matters</h2>



<p id="8e56">Before diving into Labrador itself, it’s important to understand the&nbsp;<strong>context and motivation</strong>&nbsp;behind it. Today’s popular ZK proof systems — such as many SNARKs used in blockchain projects — often rely on classical cryptographic assumptions like the hardness of elliptic curve discrete log problems. These assumptions work well against current computers, but a sufficiently powerful&nbsp;<strong>quantum computer</strong>&nbsp;in the future could break them using algorithms like Shor’s algorithm (which can solve discrete log and factor large numbers efficiently).</p>



<p id="12ac">In other words, many of our favorite SNARKs that use elliptic curves or similar techniques would be vulnerable in a post-quantum world. On the other hand,&nbsp;<strong>STARKs</strong>&nbsp;and some other proof systems avoid elliptic curves by using only hash functions (treated as “random oracles”), which are believed to resist quantum attacks if their parameters (e.g. hash sizes) are large enough.</p>



<p id="1411">However, those “hash-based” proofs tend to have much larger proof sizes and slower performance — a trade-off for being quantum-safe. This is where&nbsp;<strong>lattice-based cryptography</strong>&nbsp;comes into play. Lattice problems are a class of math problems believed to be hard even for quantum computers, and they’ve become the foundation of many post-quantum cryptographic schemes (you might have heard of post-quantum encryption and signature algorithms standardized by NIST, many of which are lattice-based).</p>



<p id="c8da">In the zero-knowledge realm, researchers have been working on lattice-based proof systems to combine&nbsp;<strong>quantum resistance with efficiency</strong>. The&nbsp;<strong>Labrador project is a culmination of these efforts</strong>, offering a proof system that is not only&nbsp;<strong>post-quantum secure</strong>&nbsp;but also&nbsp;<strong>succinct and efficient</strong>. By using lattices, Labrador aims to give us the best of both worlds: the peace of mind that our proofs will hold up against quantum adversaries, and the practicality of small proofs and reasonable verification times. For ZK enthusiasts, this means future-proofing privacy and scalability solutions on blockchains and beyond.</p>



<h2 class="wp-block-heading" id="3ebb">Labrador’s Design at a Glance</h2>



<p id="ab46">At its core,&nbsp;<strong>Labrador is a zkSNARK</strong>&nbsp;— meaning it produces succinct proofs that a statement is true without revealing&nbsp;<em>why</em>&nbsp;it’s true. What sets Labrador apart is&nbsp;<em>how</em>&nbsp;it produces those proofs. Traditional SNARKs like Groth16 or PLONK rely on elliptic curve pairings and often require a trusted setup ceremony. STARKs, on the other hand, use only hashes and no trusted setup, but their proofs are larger (hundreds of kilobytes or more) due to Merkle tree opening proofs and FRI.</p>



<p id="aac7">Labrador introduces a&nbsp;<strong>new proving system based on lattice cryptography</strong>, specifically the hardness of the&nbsp;<strong>Module-SIS</strong>&nbsp;problem (we’ll explain this shortly). This allows Labrador to be&nbsp;<strong>transparent (no trusted setup)</strong>&nbsp;and&nbsp;<strong>post-quantum secure</strong>, similar to STARKs in those respects, but with proof sizes that are significantly smaller than earlier post-quantum systems. In fact, for a large arithmetic circuit, a Labrador proof is only on the order of&nbsp;<strong>50–60 KB</strong>&nbsp;at 128-bit security. That’s roughly an order of magnitude smaller than comparable quantum-safe proofs from previous techniques (like hash-based Aurora or Ligero proofs).</p>



<p id="5448">How does Labrador achieve this? The clue lies in its name:&nbsp;<strong>LaBRADOR stands for “Lattice-Based Recursively Amortized Demonstration of R1CS.”</strong>&nbsp;The protocol is built in two parts — a&nbsp;<strong>base proving protocol</strong>&nbsp;and a&nbsp;<strong>recursive compression step</strong>&nbsp;— that together yield a tiny proof regardless of the original computation size. The base protocol generates an initial proof that is somewhat large, but then Labrador repeatedly applies a recursion to this proof, each time&nbsp;<strong>“folding” or compressing</strong>&nbsp;the proof further. After several recursive steps (which in practical terms might be ~7 rounds for typical cases), the proof size becomes essentially&nbsp;<strong>constant</strong>.</p>



<p id="b8fd">This is a huge deal — it means even very complex computations can have a proof that is small enough to post on a blockchain. However, nothing comes entirely for free: one trade-off is that Labrador’s verifier (the algorithm that checks the proof) has to do&nbsp;<strong>linear work</strong>&nbsp;in the size of the computation. In other words, verifying a Labrador proof still takes time proportional to the original circuit size, which is slower than the near-instant verification of traditional SNARKs.</p>



<p id="2d1c">Despite this drawback, Labrador’s design is a milestone because it shows a viable path to&nbsp;<strong>practical quantum-resistant ZK proofs with succinct sizes</strong>. Its design is modular, meaning the core proof construction (the recursive amortization of R1CS constraints) can potentially integrate with other components or improvements (and indeed it inspired follow-up work like Greyhound, which refines the polynomial commitment aspect).</p>



<p id="42a3">At a glance, here are&nbsp;<strong>Labrador’s key features</strong>:</p>



<ul class="wp-block-list">
<li><strong>Post-Quantum Security:</strong> Built on lattice assumptions (Module-SIS) believed to be secure against quantum attacks.</li>



<li><strong>Transparency:</strong> No trusted setup required; anyone can verify proofs with publicly known parameters (only random lattices needed).</li>



<li><strong>Succinct Proofs:</strong> Near-constant proof size (~50 KB) even for large computations, thanks to recursive proof compression.</li>



<li><strong>R1CS Compatibility:</strong> Can prove arbitrary computations expressed as Rank-1 Constraint Systems (like many SNARKs do), making it broadly applicable.</li>



<li><strong>Modular Design:</strong> Splits into a base protocol and recursive steps, which opens the door to swapping in improvements (e.g., better polynomial commitments) without redesigning from scratch.</li>
</ul>



<p id="5fce">This high-level view shows why Labrador is exciting — it promises the level of scalability and efficiency we expect from SNARKs, while also being ready for a post-quantum world and avoiding the pitfalls of trusted setups.</p>



<h2 class="wp-block-heading" id="d290">How Does Labrador’s Proving System Work?</h2>



<p id="e2f5">Let’s unpack, in plain terms,&nbsp;<strong>how Labrador generates a proof</strong>. We’ll avoid formal math, instead using an analogy of “proving a large puzzle is solved by breaking it into smaller puzzles.” Imagine you have a massive jigsaw puzzle (this represents the original computation or circuit you want to prove). Proving you solved it directly would be hard to verify due to its sheer size. Instead, Labrador’s approach is: solve the puzzle, then compress the evidence of that solution step by step until it’s tiny. Here’s a step-by-step rundown of Labrador’s proving process:</p>






<p id="ccb5"><strong>Commit to the Witness (Ajtai Commitment):</strong>&nbsp;First, the prover hides the solution to the puzzle (the secret witness for the computation) in a kind of&nbsp;<strong>cryptographic “safe”</strong>&nbsp;known as an&nbsp;<em>Ajtai commitment</em>. This is like mixing your puzzle’s solution pieces with some random noise so that you send only a locked box of mixed pieces to the verifier. The verifier can’t see your actual pieces (so zero-knowledge is preserved), but the commitment “locks in” your solution — you can’t change it later without breaking the box, which is assumed to be computationally infeasible because of the lattice (Module-SIS) hardness. In essence, an Ajtai commitment uses a large random matrix (publicly known) and multiplies it by your secret vector (the witness) to produce a commitment. This commitment has a homomorphic property (meaning operations on the commitment correspond to operations on the secret vector), which will be very useful later.</p>
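<p>A toy integer version of such a commitment and its homomorphic property (real Ajtai commitments work over module lattices with polynomial rings; the plain mod-q arithmetic and the parameters here are simplifications of ours):</p>

```python
import random

q = 3329  # hypothetical small modulus; real deployments use different parameters

def ajtai_commit(A, s):
    """Commit to a short vector s as t = A*s mod q. Binding rests on the
    hardness of finding short collisions for A (the SIS problem)."""
    return [sum(a_ij * s_j for a_ij, s_j in zip(row, s)) % q for row in A]

n, m = 4, 8  # commitment length, witness length
rng = random.Random(0)
A = [[rng.randrange(q) for _ in range(m)] for _ in range(n)]  # public random matrix
s1 = [rng.randrange(-2, 3) for _ in range(m)]  # "short" secret witnesses
s2 = [rng.randrange(-2, 3) for _ in range(m)]

# Homomorphic property: commit(s1) + commit(s2) == commit(s1 + s2) (mod q)
lhs = [(a + b) % q for a, b in zip(ajtai_commit(A, s1), ajtai_commit(A, s2))]
rhs = ajtai_commit(A, [a + b for a, b in zip(s1, s2)])
assert lhs == rhs
```

<p>The final assertion is exactly the linearity that later protocol steps exploit: the verifier can combine commitments and know the combination commits to the corresponding combination of witnesses.</p>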



<p id="ed8a"><strong>Prove Constraint Satisfaction (Base Protocol):</strong>&nbsp;Next, the prover needs to convince the verifier that the secret witness vector actually satisfies all the required constraints of the puzzle&nbsp;<em>without revealing the secret vector</em>. Labrador’s base protocol does this through an&nbsp;<strong>interactive proof</strong>&nbsp;(in the actual implementation, it is transformed into a non-interactive form through the Fiat-Shamir transformation). In simple terms, the prover and verifier engage in a series of steps: the verifier tosses some random challenges and the prover responds with information that should only make sense if all the constraints hold. One clever trick here is the prover&nbsp;<strong>aggregates many constraints into one</strong>&nbsp;using the random challenges.</p>



<p id="f3ea">By the end of the base protocol, the prover effectively furnishes a proof (consisting of some committed values and responses) that all constraints are satisfied and the witness has a small “norm” (meaning the solution numbers aren’t huge, which is important for security).</p>



<p id="d90d"><strong>Recursive Proof Compression:</strong>&nbsp;Here comes the magic — Labrador uses&nbsp;<strong>recursion to shrink the proof further</strong>. The insight is that the proof generated by the base protocol has its own internal structure and constraints (the things the verifier would check can themselves be viewed as a kind of smaller puzzle). Labrador literally takes the proof from step 2 and treats&nbsp;<em>that proof’s data as a new “witness” to prove</em>!. In other words, we get a second proof that certifies “the first proof was correctly formed and valid.”</p>
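<p>The shrinking behavior can be illustrated with a toy recursion (the square-root reduction and the constants are our simplification, not Labrador’s actual parameters):</p>

```python
import math

def folded_size(n, overhead=64):
    """Toy model of one recursion round: re-proving the previous proof's
    checks shrinks a size-n statement to roughly sqrt(n), plus a fixed
    per-round overhead. The constants are illustrative only."""
    return math.isqrt(n) + overhead

size, rounds = 10_000_000, 0
while folded_size(size) < size:  # fold until the size stops shrinking
    size = folded_size(size)
    rounds += 1
print(rounds, size)  # a handful of rounds, near-constant final size
```

<p>After a few rounds the fixed overhead dominates and the size stops shrinking, which is why the final proof is essentially constant-size regardless of the original computation.</p>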



<p id="7669"><strong>Verification:</strong>&nbsp;The verifier checks the final proof by performing a series of lattice-based computations defined by the protocol. Without diving into math, the verifier basically ensures that all the commitments and responses align correctly — akin to checking the solution of that last tiny puzzle which, by construction, guarantees the big puzzle was solved. The verifier’s work, however, grows linearly with the original circuit size. This is a known limitation: unlike classic SNARKs that might verify in constant time, here we trade some verification speed for the benefit of post-quantum security and no trusted setup. However, linear time for a verifier (which is typically a smart contract or a blockchain node in a ZK-rollup scenario) might be acceptable for moderate circuit sizes, and research is ongoing to improve this (for instance, newer schemes like Greyhound target sublinear verification).</p>



<p id="1814">Throughout this process,&nbsp;<strong>zero-knowledge</strong>&nbsp;is maintained — the verifier learns nothing about the actual witness (solution) besides the fact that it satisfies the constraints. All the clever random challenges, commitments, and algebra ensure that any&nbsp;<em>deviations</em>&nbsp;from a correct witness would lead to a detectable inconsistency with high probability, but a correct witness produces a perfectly valid proof that the verifier accepts. It’s like compressing a file repeatedly; each time you compress, the file gets smaller, and after enough iterations, you have a tiny archive that still, when opened layer by layer, contains the full original data.</p>



<h2 class="wp-block-heading" id="49e5">Novel Cryptographic Techniques Behind Labrador</h2>



<p id="f64d">Labrador’s power comes from some ingenious cryptographic building blocks, primarily drawn from lattice-based cryptography. Let’s demystify a few of the&nbsp;<strong>novel techniques</strong>&nbsp;it uses (without going too deep into math):</p>



<ul class="wp-block-list">
<li><strong>Module-SIS Problem (Lattice Assumption):</strong> The security of Labrador is founded on the hardness of the <em>Module Short Integer Solution (Module-SIS)</em> problem, a lattice problem. In simpler terms, SIS asks: given a bunch of big numbers (or vectors) that are essentially random, can you find a combination of them with <em>small</em> coefficients that adds up exactly to zero? It’s like having a set of very large puzzle pieces and asking if you can very delicately balance some of them (using small weights) so they cancel out. This is believed to be extremely hard, even for quantum computers. Module-SIS is a fancier version where those numbers are not plain integers but polynomials over a prime field (imagine each “number” is actually a polynomial, which adds a structured twist to the problem). Because no efficient algorithm is known to solve Module-SIS (unless we crack fundamental lattice problems like SVP, which is considered unlikely), it provides the unbreakable “lock” for Labrador’s commitments and proofs. In contrast to the assumptions used in traditional SNARKs (like elliptic curve discrete log), Module-SIS is <strong>post-quantum safe</strong> and well-studied in the academic literature.</li>



<li><strong>Norm Bounds and “Short” Vectors:</strong> A recurring concept in Labrador’s protocol is that the witness vectors must be “short” (have small norm). Intuitively, this means the secret numbers aren’t astronomically large — they’re within a manageable range. Enforcing a norm bound is important because the SIS problem’s hardness relies on finding <em>short</em> solutions. If arbitrarily large coefficients were allowed, trivial solutions exist (just take huge coefficients to cancel things out). In the proof, the prover demonstrates that their witness respects these bounds (often by clever constraint design and random challenges that would blow up if the norm was too high). For a non-math analogy: think of the witness as a path through a dense forest. The norm bound ensures the path doesn’t wander too far off — it stays within a “short” distance. The verifier can be confident that the prover didn’t take some wild detour (which might represent an invalid solution) because that would violate the known bound.</li>



<li><strong>Quadratic Equations (Rank-1 Constraints):</strong> Labrador’s constraint system uses quadratic equations — essentially inner products and constant terms. This is actually similar in spirit to how R1CS works in typical SNARKs (an R1CS constraint is a quadratic equation equating a product of linear combinations to another linear combination). By working with quadratic forms in the lattice setting, Labrador can encode arbitrary arithmetic circuits. The reason quadratic (degree-2) constraints are used is that they are expressive enough to capture complex logic (you can multiply variables) but structured enough to be proven succinctly. The fact that the commitments are linear and the constraints are quadratic is key — it allows recursion because the verification conditions (which involve quadratic checks of committed values) can themselves be turned into similar quadratic constraints at the next layer.</li>
</ul>
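<p>To make the SIS intuition concrete, here is a toy sketch in Python (illustrative parameters only, not Labrador’s actual module-lattice scheme): the commitment is a random linear map applied to a short witness, and its linearity is exactly what lets a verifier check random combinations of committed values.</p>

```python
# Toy SIS-style linear commitment (NOT Labrador's actual scheme or parameters;
# real schemes use module lattices over polynomial rings, with much larger sizes).
import random

q = 7681          # small toy modulus
n, m = 4, 16      # commitment is n numbers; witness is m "short" numbers
random.seed(1)

# Public random matrix A: breaking binding would mean solving SIS for A.
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]

def commit(s):
    """Commit to a short vector s: com = A * s mod q."""
    assert all(abs(x) <= 3 for x in s), "witness must be short (small norm)"
    return [sum(a * x for a, x in zip(row, s)) % q for row in A]

# Linearity: com(s1) + com(s2) == com(s1 + s2) mod q, which is what lets the
# verifier fold many committed values into one random linear combination.
s1 = [random.randrange(-1, 2) for _ in range(m)]
s2 = [random.randrange(-1, 2) for _ in range(m)]
lhs = [(a + b) % q for a, b in zip(commit(s1), commit(s2))]
rhs = commit([a + b for a, b in zip(s1, s2)])
print(lhs == rhs)  # True
```

<p>Note how the “short” requirement appears as an explicit norm check: without it, trivial large-coefficient openings would break binding.</p>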



<p id="1939">These cryptographic techniques are quite advanced under the hood, but the takeaway is that Labrador blends them to create a synergy:&nbsp;<strong>lattice commitments</strong>&nbsp;give strong security and linearity,&nbsp;<strong>recursive amortization</strong>&nbsp;gives succinctness, and conventional techniques like Fiat-Shamir ensure practicality (non-interactivity). The design of Labrador is a testament to how modern ZK research is combining ideas from different domains — here we see number theory, linear algebra (matrices and vectors), and theoretical computer science (interactive proofs) all working together. And notably, everything in Labrador rests on assumptions that are believed to be safe in the quantum age, which is a major selling point as we move towards the future.</p>



<h2 class="wp-block-heading" id="1bfe">Comparing Labrador with Other ZK Ecosystems</h2>



<p id="853a">The ZK landscape is vibrant, with various proof systems and projects each having their own strengths and trade-offs. Let’s&nbsp;<strong>compare Labrador</strong>&nbsp;with some of the notable ones to get a sense of where it stands:</p>



<ul class="wp-block-list">
<li><strong>Labrador vs. Classical SNARKs (Groth16/PLONK/etc.):</strong> Traditional SNARKs like Groth16 have <strong>tiny proofs (around 1–2 KB)</strong> and extremely fast verification (a couple of pairing checks). However, they require a <strong>trusted setup</strong> (which can be complex and carries a security risk if the setup is corrupted) and rely on cryptographic assumptions (elliptic curve pairings) that are <strong>not post-quantum secure</strong>. Labrador, in contrast, has larger proofs (~50KB) and slower verification (linear time), but <strong>no trusted setup</strong> and <strong>quantum resilience</strong>. In a current setting (without quantum computers yet), Groth16 or PLONK might be more practical for on-chain verification due to their speed and size. But Labrador offers a future-proof alternative — you’d trade some performance and convenience now to avoid needing a complete overhaul of your system later when quantum computers arrive. Also, Labrador’s transparency simplifies deployment (no need for multi-party ceremonies to generate keys). A project like Zcash, for instance, famously used Groth16 with a toxic waste ceremony; in a hypothetical Zcash-like application in the future, Labrador could eliminate that ceremony entirely.</li>



<li><strong>Labrador vs. Other Emerging Post-Quantum ZK Systems:</strong> Labrador is not alone — the post-quantum ZK space is hot. For instance, <strong>Greyhound</strong> (2025) is a newer protocol that actually builds on Labrador’s approach but focuses on the polynomial commitment part of the proof. Greyhound achieved a scheme with <em>sublinear verification</em> and similar proof sizes. It’s not a full proof system by itself (more a component), but it indicates the direction: improving on Labrador’s weaknesses (namely verification cost). Dan Boneh even remarked that these developments mark the first time a post-quantum SNARK might outperform a pre-quantum one in some aspects. This is important context for Labrador: it kick-started a “lattice SNARK race,” and within a couple of years, we already see rapid improvements. So, Labrador today should be viewed as the foundation — a proof of concept that practical lattice SNARKs are possible — and not the end of the story. We can expect future versions to be even more efficient.</li>



<li><strong>Use Case Perspective:</strong> Different ZK ecosystems target different use cases. For example, <strong>ZK-rollups on Ethereum (like zkSync, Scroll)</strong> aim for immediate practical scalability and thus use proven SNARK tech (PLONKish schemes) for speed on existing hardware. <strong>Privacy-focused chains (like Aztec or Penumbra)</strong> also use existing SNARKs for efficiency and UX. Labrador might not replace those in the short term, because it’s more forward-looking. However, for any application that values long-term security and can tolerate a bit more overhead, Labrador is compelling. Imagine a government or institution wanting a ZK system that will remain secure for decades — a post-quantum proof system is highly attractive for that scenario.</li>
</ul>



<p id="2f52">In summary, Labrador distinguishes itself by its&nbsp;<strong>post-quantum pedigree and succinct proof size</strong>, while it inherits some downsides like heavier verification from its lattice-based nature. Its competitors either sacrifice post-quantum security for speed (classical SNARKs, Halo) or sacrifice proof size for post-quantum security (STARKs). Labrador strikes a middle ground: quantum-safe and relatively efficient in proof size, at the cost of verifier workload. As the technology matures, it’s possible this middle ground will only improve, closing the gap with or even surpassing the older methods. For ZK enthusiasts, it’s an exciting development — it means the ecosystem is preparing for a future where both security and performance can be achieved without compromise.</p>



<h2 class="wp-block-heading" id="8d44">Conclusion</h2>



<p id="a8e3">The advent of Labrador signals a new chapter in the zero-knowledge story — one where&nbsp;<strong>post-quantum security and practicality intersect</strong>. For years, ZK enthusiasts have faced a trade-off: we had extremely efficient SNARKs that might one day be broken by quantum computers, and we had quantum-proof systems that were too impractical for real use. Labrador’s lattice-based approach shows that this gap can be bridged. It introduces a proving system that, through creative recursive design and lattice cryptography, achieves compact proofs without relying on conjectures that quantum machines could shatter. Explaining such a technical subject poses the challenge of preserving depth while avoiding formalism; we have navigated concepts like Module-SIS and recursive proof amortization through analogies and plain language, describing commitments as locked boxes and proofs as puzzles. Hopefully, this exploration has made the Labrador project more accessible to a broader audience.</p>



<p id="2820">For ZK enthusiasts, understanding Labrador is more than just learning about one project — it’s about glimpsing the future of the entire ecosystem. Innovations like Labrador will benefit everyone, across borders, as we collectively strive for a world of greater privacy, scalability, and security. And you don’t need a PhD in math or physics to be part of this journey — as we’ve seen, the core ideas can be appreciated with a bit of imagination and open-mindedness.</p>



<p id="0894">In closing, the Labrador project stands as a testament to human ingenuity in cryptography: by going back to fundamental hard problems (lattices) and pushing creative techniques (recursive proofs), researchers have unlocked new possibilities. We can look forward to a&nbsp;<strong>modular, post-quantum ZK infrastructure</strong>&nbsp;where tools like Labrador ensure that our decentralized applications remain trustworthy, even in the face of tomorrow’s technological challenges. The Labrador might be a friendly breed of dog, but in the ZK world, it’s also the name of a guardian ensuring our proofs stay strong and our secrets stay safe.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://medium.com/@CFrontier_Labs/labrador-a-new-era-of-post-quantum-zk-proofs-78dc8f31f243/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Boundless The Signal: An Architecture and Paradigm Innovation for Zero-Knowledge Cross-Chain Ultimacy Protocols</title>
		<link>https://medium.com/@CFrontier_Labs/boundless-the-signal-an-architecture-and-paradigm-innovation-for-zero-knowledge-cross-chain-2758a95da695</link>
					<comments>https://medium.com/@CFrontier_Labs/boundless-the-signal-an-architecture-and-paradigm-innovation-for-zero-knowledge-cross-chain-2758a95da695#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 25 Jul 2025 08:50:49 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://192.168.110.200:8777/?p=412</guid>

					<description><![CDATA[“Beyond traditional applications like ZK Rollups, emerging use cases in the ZK space now include cross‑chain operations such as The [&#8230;]]]></description>
										<content:encoded><![CDATA[
<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p id="641a"><strong>“Beyond traditional applications like ZK Rollups, emerging use cases in the ZK space now include cross‑chain operations such as The Signal.”</strong></p>
</blockquote>



<p id="204a">As blockchain researchers, we are witnessing a huge need for cross-chain interoperability, yet “cross-chain finality” remains a missing link in the current infrastructure. Although bridging assets and data between chains is increasingly common, traditional multi-sig bridges and relayer schemes still rely on human trust, and security incidents are frequent: from 2020 to 2022, cumulative losses from hacks of cross-chain bridges exceeded $2.5 billion. The consensus mechanism of a blockchain guarantees the finality of states within a single chain, but no such trustless finality-proof mechanism exists between chains. In other words, different blockchains cannot directly recognize each other’s final state, which is a long-standing trust gap in the multi-chain ecosystem.</p>



<p id="336a">In recent years, people have adopted multi-signature oracles or light-client relays to transfer state between chains, but these schemes either sacrifice decentralization or are complex and inefficient. Is there a way for one chain to verify the final state of another chain in a purely mathematical way? The Signal protocol from the Boundless team provides an answer: it compresses the finality of a source chain (currently Ethereum) into a tiny validity proof that can be quickly verified on any target chain, via an open-source zero-knowledge (ZK) consensus client. In other words, The Signal leverages zero-knowledge proof techniques to build a “finality claim” that any chain can verify trustlessly. This marks a paradigm shift in cross-chain interaction: from reliance on multi-signature trust to trust minimization based on mathematical proof.</p>



<h2 class="wp-block-heading" id="46fa">Cross-chain Finality: The Missing Link in Blockchain Consensus</h2>



<p id="fcdc">The consensus mechanism of a blockchain ensures that all nodes on the same chain agree on state changes and provides finality after a certain time, meaning that a block reaches an irreversible, unchangeable state. However, there is no natural consensus connection between different chains: one chain cannot know when a block on another chain has been finalized. At present, cross-chain transfers usually rely on bridge contracts or third-party services to “prove” events (such as a token lock) on the source chain. These proofs are often signed by multi-signature validators or provided by relay nodes, which introduces new trust assumptions and attack surfaces: if the validators collude or their private keys are stolen, the cross-chain bridge can fail. For example, multi-signature bridges such as Wormhole, Ronin, and Nomad have been hacked one after another, resulting in huge asset losses.</p>



<p id="66ca">The root cause is the lack of a native mutual-trust mechanism between chains. Ideally, each chain would like to read the state and finality of another chain directly, as reliably as reading its local state. In the past, technical limitations forced us to use stopgap solutions — trusted intermediaries, expensive light-node verification, or optimistic waiting — to transfer state across chains. A cross-chain finality proof solves this problem: if we had a small cryptographic proof stating directly that “block Y of source chain X is finalized and its state Merkle root is Z”, any contract or user on another chain could verify it without trusting any other entity, and the security and efficiency of cross-chain interaction would be greatly improved.</p>






<p id="3a15">Nowadays, the development of zero-knowledge proof technology has brought a breakthrough opportunity. ZK proofs have evolved from verifying only simple computations to proving entire chains of state and consensus processes. Boundless leverages this fact to transform the finality of the Ethereum beacon chain into portable cryptographic proofs, creating a trusted cornerstone for cross-chain interoperability. The Signal generates a corresponding ZK proof every time finality occurs on the Ethereum beacon chain and broadcasts it to other chains, so that any chain can verify the finally confirmed state of Ethereum without trusting anyone. This means that cross-chain bridging no longer needs to rely on multi-signatures or messages provided by a centralized oracle — decentralized applications can build their business logic directly on this mathematically guaranteed cross-chain finality. As Boundless put it, the ecosystem would thus gain a shared “finality signal,” where liquidity and logic could flow freely between chains like Internet APIs, while still maintaining blockchain-level security.</p>



<h2 class="wp-block-heading" id="c684">The principle and architecture of The Signal protocol</h2>



<p id="0f58">Beyond traditional applications like ZK Rollups, emerging use cases in the ZK space now include cross‑chain operations such as The Signal. The Signal is a cross-chain client running on a zero-knowledge proof system. Its core idea is to let the target chain verify the consensus result of the source chain without trust. Its architecture includes offline proof generation, decentralized bidding incentives, proof aggregation, and on-chain verification and publication modules.</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*qkY0xAEGY9wkJZGpnRMXwA.png" alt=""/></figure>



<h2 class="wp-block-heading" id="fa71">Workflow: Generation and verification of zero-knowledge finality proofs</h2>



<ol class="wp-block-list">
<li>Request initiation and prover: When an application or user needs to prove that a state or event on the source chain (e.g., Ethereum) has been finalized, a Proof Request is submitted to the Boundless proof market contract. The request contains the specific statement to be proved (e.g., “the root of Ethereum state at block height”), the corresponding prover ID, and the fee it is willing to pay. This procedure runs each time a new beacon-chain block is finalized, forming a new proof request. As a result, The Signal broadcasts Ethereum consensus proof requirements to the market in the form of standardized tasks.</li>



<li>Auction bidding and Prover locking: Once a request is posted, numerous Prover nodes on the Boundless network begin to compete to undertake the task. (The economic aspects and design will be discussed later in the text.)</li>



<li>Offline computation and ZK proof generation: The winning Prover node then runs the specified zero-knowledge prover program locally. Boundless is based on RISC Zero’s zkVM technology, which enables a Prover to run arbitrarily complex programs in an isolated, verifiable environment while generating a complete proof. For The Signal, this step is equivalent to running an Ethereum light client: the Prover fetches the necessary data from the Ethereum network, and then executes the validation algorithm for the beacon-chain finality rule in the zkVM. The resulting proof is small and unforgeable, attesting that “the above Ethereum consensus verifier executed correctly on the given input and output a final state value of X”.</li>



<li>Aggregate proof and commit: To optimize on-chain verification cost, Prover can package and aggregate the results of multiple proof requests before on-chain submission.</li>



<li>On-chain verification and result publication: When the aggregated proof is submitted, the Boundless contract executes the verification logic immediately: using the pre-deployed zero-knowledge verification contract, the Groth16 proof is mathematically verified. If the verification passes, the contract confirms that all requests in this batch have been successfully completed. The contract then extracts the output of each task from the Merkle inclusion proof for that request, marks the request as fulfilled, and forwards the pre-locked payment to the Prover (the stake is also unlocked and returned).</li>
</ol>
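<p>The request/bid portion of the lifecycle above can be sketched as follows. All names (ProofRequest, select_winner, and so on) are illustrative, not Boundless’s actual API; the point is simply that a reverse auction matches each request to the cheapest willing prover.</p>

```python
# Minimal sketch of the request -> bid -> selection step of the proof market.
# All names and fields are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class ProofRequest:
    statement: str      # e.g. "Ethereum state root at finalized slot N"
    program_id: str     # identifies the prover program (the ZK consensus client)
    max_fee: int        # the most the requester is willing to pay

@dataclass
class Bid:
    prover: str
    fee: int            # offered price; lowest eligible bid wins

def select_winner(request: ProofRequest, bids: list[Bid]) -> Bid:
    """Reverse auction: the cheapest bid within the requester's budget wins."""
    eligible = [b for b in bids if b.fee <= request.max_fee]
    if not eligible:
        raise ValueError("no prover willing to work at this price")
    return min(eligible, key=lambda b: b.fee)

req = ProofRequest("beacon finality at slot 123", "eth-consensus-client", max_fee=100)
bids = [Bid("prover-a", 90), Bid("prover-b", 60), Bid("prover-c", 120)]
print(select_winner(req, bids).prover)  # prover-b
```

<p>In the real protocol the winning prover also locks a stake, which is slashed if it fails to deliver the proof in time; that escrow step is omitted here.</p>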



<h2 class="wp-block-heading" id="5a21">Typical Use case: New possibilities for cross-chain verification</h2>



<p id="5bf0">The universal “ultimacy Signal” provided by The Signal brings trust minimization solutions for many scenarios in multi-chain ecosystems. Here are some typical applications:</p>



<ul class="wp-block-list">
<li>Cross-chain reserve audits on DeFi platforms: Decentralized financial applications are often deployed across chains. For example, a stablecoin may circulate on multiple chains while its reserve assets are mainly stored on the Ethereum mainnet. Through The Signal, the stablecoin protocol can periodically request a proof of the reserve balance on Ethereum and send the result to issuance contracts on other chains. As a result, users on each chain can verify the stablecoin’s asset backing in real time with minimal trust, without relying on a central audit report. Similarly, a cross-chain decentralized exchange (DEX) can prove whether it has sufficient liquidity on different chains.</li>



<li>Trust-minimized bridges and asset interoperability: The ultimate vision of The Signal is to replace existing multi-signature and relay bridges and become a secure infrastructure for cross-chain message and asset transmission. For example, if a user wants to use an asset from chain B on chain A, he can lock the asset on chain B and generate, through Boundless, a proof that the locking event has been finalized. After the proof is verified on chain A, the corresponding derivative token can be minted. The whole process does not require trusting a third-party signature: the multi-signers are replaced by mathematical proof, and the security of the bridge depends directly on the security of the underlying chains themselves. Once the major blockchains start broadcasting their finality signals, cross-chain interoperation will converge to a single cryptographic “wavelength” — that is, each chain communicates under the same trust paradigm, and users no longer need to worry about additional trust gaps in the bridging process.</li>
</ul>
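<p>A minimal mock of the lock-and-mint flow described above, with <code>verify_finality_proof</code> standing in for real on-chain ZK verification (all names and structures here are hypothetical):</p>

```python
# Toy lock-and-mint flow gated on a finality proof. Illustrative only:
# in the real system the proof check is a succinct cryptographic verification
# performed by an on-chain verifier contract, not a boolean flag.
from dataclasses import dataclass

@dataclass
class FinalityProof:
    event: str        # e.g. "locked 10 TOK at address X on chain B"
    valid: bool       # stand-in for the outcome of real ZK verification

def verify_finality_proof(proof: FinalityProof) -> bool:
    # Mocked: a real implementation checks the proof against the source
    # chain's finalized state root.
    return proof.valid

minted: dict[str, int] = {}  # derivative token balances on chain A

def mint_on_chain_a(user: str, amount: int, proof: FinalityProof) -> None:
    """Mint derivative tokens only if the lock event is proven final on chain B."""
    if not verify_finality_proof(proof):
        raise ValueError("lock event not proven final on chain B")
    minted[user] = minted.get(user, 0) + amount

mint_on_chain_a("alice", 10, FinalityProof("locked 10 TOK", valid=True))
print(minted)  # {'alice': 10}
```

<p>The design point is that the mint path has no trusted signer: the only gate is proof verification, so compromising the bridge requires breaking the proof system or the source chain itself.</p>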



<h2 class="wp-block-heading" id="0939">The Advantage of Mathematical Trust: The Signal vs. Multi-signature Bridges</h2>



<p id="935c">Compared with traditional cross-chain bridges that rely on multi-signature validators or relay nodes, zero-knowledge proof bridges such as The Signal show significant advantages in trust, security, and fairness:</p>



<ul class="wp-block-list">
<li>Trust model and security: Multi-signature bridges usually rely on a few validators to sign and confirm. Once more than half of the nodes are compromised or collude, the bridge’s security is gone. The Signal is based on the underlying chain’s consensus and cryptographic guarantees: as long as the source chain itself is reliable (e.g., Ethereum finality is guaranteed by thousands of validators), the ZK proofs it generates can be trusted.</li>



<li>Openness, fairness and decentralization: Multi-signature or consortium Bridges are often operated by specific institutions or nodes, which has potential centralization and monopoly risks. However, Boundless’s proof network is open to any participant with hardware, and free competitive pricing is achieved through on-chain auctions. This means that no single entity can control the validation of cross-chain transactions, all provers compete under the same rule, and the one with the lowest offer (the one that is most beneficial to the user) wins.</li>



<li>Transparency and auditability: Under zero-knowledge cross-chain schemes, each cross-chain verification is accompanied by publicly verifiable cryptographic evidence, leaving a record on the chain. Anyone can check the proofs and related transactions after the fact and audit the entire process to see if it worked as agreed.</li>



<li>Efficiency &amp; Performance: Zero-knowledge proof bridges do not trade security for performance. On the contrary, because they require neither long waiting periods nor interaction among large validator sets, they can speed up cross-chain confirmation while still guaranteeing security. As mentioned earlier, with ZK validity proofs, rollups went from a week-long final cross-chain confirmation to hours or less. This efficiency has a huge impact on user experience and capital utilization.</li>
</ul>



<h2 class="wp-block-heading" id="934a">Token Incentives and the Prover Ecosystem</h2>



<p id="cdac">In Boundless design, tokenomics also plays an important role in driving network growth. Boundless’s native token, $ZKC, serves as both an incentive and a governance tool to promote a virtuous cycle of the Prover computing ecosystem.</p>



<p id="d43f">First, Boundless introduces the innovative concept of “Proof of Verifiable Work” (PoVW), which converts the effort a Prover spends on zero-knowledge computation into token rewards. Specifically, whenever a Prover successfully completes a proof task and submits a valid result, it not only receives the fee paid by the requestor, but also receives an amount of ZKC token rewards corresponding to the compute cycles the task actually consumed. This reward mechanism achieves accurate measurement through the “calculation cycle label” mentioned above: the reward is directly linked to the useful computation actually completed, which eliminates speculative behavior such as simply racing to grab orders or padding fake orders. This is similar to classical PoW mining in that work is tied to reward, but PoVW goes a step further and rewards only computation that is useful to the network (i.e., verification demand), rather than wasting compute power on meaningless hash collisions. Through this model, the Boundless team plans to realize true ZK mining in the near future, where global GPU computing power provides proof services for every chain while earning blockchain-native rewards, motivating even more computing power to participate.</p>
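<p>The pro-rata idea behind PoVW can be sketched as follows. This is a simplified model with made-up numbers, not Boundless’s actual reward formula: an epoch’s reward pool is split among provers in proportion to their verified compute cycles.</p>

```python
# Sketch of the PoVW intuition: epoch rewards split pro-rata by verified
# compute cycles, so payout tracks useful work rather than order count.
# Names and numbers are illustrative, not the protocol's actual parameters.
def povw_rewards(epoch_pool: float, cycles_by_prover: dict[str, int]) -> dict[str, float]:
    """Split epoch_pool among provers in proportion to cycles proven."""
    total = sum(cycles_by_prover.values())
    if total == 0:
        return {p: 0.0 for p in cycles_by_prover}
    return {p: epoch_pool * c / total for p, c in cycles_by_prover.items()}

# Three provers; reward follows cycles, not the number of jobs grabbed.
rewards = povw_rewards(1000.0, {"a": 5_000_000, "b": 3_000_000, "c": 2_000_000})
print(rewards)  # {'a': 500.0, 'b': 300.0, 'c': 200.0}
```

<p>Because payout is proportional to metered cycles, submitting many tiny jobs earns no more than one large job of the same total work, which is the anti-gaming property the text describes.</p>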



<p id="0b2a">Secondly, the ZKC token has governance and staking functions to maintain the long-term fairness of the market. Community governance can adjust the token issuance schedule and fee parameters to keep the incentive mechanism dynamically balanced with the network’s scale. Provers and users can also stake ZKC to participate in market activities and voting decisions, giving them a stronger sense of participation in, and constraint on, the network. This design makes Boundless “owned by its users,” in the spirit of decentralized infrastructure. It is worth noting that Boundless allocated 5 million ZKC (about 0.5% of the total supply) as an incentive pool during the mainnet Beta phase, distributed in proportion to the effective work completed by Provers during the first 5 weeks. This led to an early influx of compute power: within a week, miners around the world had brought GPUs online to compete, and both demand for and supply of compute power increased, creating a virtuous cycle. According to statistics, more than 30 projects plan to integrate Boundless as a proof backend at mainnet Beta launch, which also means Provers will enjoy diverse workloads and revenue sources.</p>



<p id="ba6b">Finally, the token model reinforces the fairness and sustainability of the network. Since rewards come directly from real computing work, compute providers receive fair compensation, avoiding revenue manipulation by centralized mining pools. With more participants, competition will drive proof prices closer to actual costs, improving overall efficiency. The ZKC incentive, in turn, ensures that provers have a reason to remain online even when demand is insufficient in the network’s early stage, getting through the cold-start phase and gradually attracting more applications that bring demand. Boundless officially concludes that such a model “directly connects rewards to efficient work,” supporting a fair, decentralized marketplace for computing. Driven by the twin engines of economic incentives and technical mechanisms, the cross-chain ZK infrastructure represented by The Signal can continue to grow and move towards a safer and richer multi-chain future.</p>



<h2 class="wp-block-heading" id="3a6b">Conclusion</h2>



<p id="67c4">The emergence of The Signal protocol has brought an unprecedented paradigm shift for blockchain cross-chain interoperability. With zero-knowledge proofs, an inherently reliable tool, we were finally able to allow different blockchains to share consensus results without additional trusted endorsements — like establishing a common “signaling language” for the blockchain world. Starting with Ethereum, more and more chains will join this wavelength, leading to a vision where each chain broadcasts its own proof of finality and all chains become one under mathematical trust.</p>



<p id="8ec8">For developers, this means no more painful trade-offs between security and efficiency when building cross-chain applications; for users, cross-chain asset circulation and information synchronization will be as smooth and safe as single-chain operations. Boundless The Signal has demonstrated the feasibility and value of this model in the mainnet test. As the ecosystem grows and more chains are added, zero-knowledge cross-chain finality is likely to become the norm in the next generation of blockchain infrastructure. It fills in the trust link that has long been missing in the multi-chain world, and lets blockchains truly move towards scalability and composability without sacrificing trust. In this process, the technological and economic paradigm innovation initiated by Boundless undoubtedly deserves our continued attention and in-depth research. The cross-chain future is already taking shape, and its boundaries will be broadened by mathematics and innovation. As the name suggests, the way forward is Boundless.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://medium.com/@CFrontier_Labs/boundless-the-signal-an-architecture-and-paradigm-innovation-for-zero-knowledge-cross-chain-2758a95da695/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Binius Proof System, PCS</title>
		<link>https://medium.com/@CFrontier_Labs/binius-proof-system-pcs-f04bf3f22e18</link>
					<comments>https://medium.com/@CFrontier_Labs/binius-proof-system-pcs-f04bf3f22e18#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Thu, 03 Jul 2025 09:00:11 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://192.168.110.200:8777/?p=414</guid>

					<description><![CDATA[TL;DR Binius Proof System is ZK proof system based on Binary Tower, which offers small memory footprint and high efficient [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p id="620f"><strong>TL;DR</strong></p>



<p id="ba16">The Binius proof system is a ZK proof system based on binary towers, which offers a small memory footprint and highly efficient field computation. We analyze the PCS in Binius — an essential component of any ZK proof system — in an intuitive but rigorous manner. We introduce the motivation for the PCS used in Binius, then analyze the commitment, as well as ring switching, the multivariate sumcheck protocol, and FRI, which are used for the opening proof of the commitment.</p>



<h2 class="wp-block-heading" id="9d92"><strong>1. Definition and Motivation</strong></h2>



<p id="b184"><strong>PCS Commit</strong></p>



<p id="d373">The polynomial t(X) corresponding to the trace or witness contains sensitive information and is relatively large (due to its high degree). To achieve a zkSNARK, we cannot directly include the polynomial (e.g., its coefficients) in the proof. Instead, the polynomial’s coefficients are compressed to produce a smaller value (e.g., a 256-bit hash).</p>



<p id="c52d">This value is called the commitment value, and the process is known as the commit computation.</p>



<p id="5441">For example, in RISC0, for a given polynomial t, an NTT is performed over the commitment domain to obtain the leaves of a Merkle Tree; the tree is then constructed, with its root serving as the commitment value.</p>
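<p>As a minimal sketch of this evaluate-then-commit pattern (using SHA-256, naive evaluation, and a tiny prime field purely for illustration; RISC0 and Binius use NTTs and different hashes over much larger domains):</p>

```python
import hashlib

P = 97  # tiny illustrative prime field; real systems use far larger fields

def evaluate(coeffs, xs):
    # naive evaluation standing in for the NTT over the commitment domain
    return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P for x in xs]

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # build a Merkle tree bottom-up; requires a power-of-two leaf count
    layer = [sha(str(v).encode()) for v in leaves]
    while len(layer) > 1:
        layer = [sha(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

t = [3, 1, 4, 1]                           # coefficients of t
domain = [pow(5, k, P) for k in range(8)]  # 8-point commitment domain (blowup 2)
commitment = merkle_root(evaluate(t, domain))  # a single 32-byte commitment value
```

The commitment is constant-size regardless of the degree of t, which is the point of the commit step.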



<p id="6dd2"><strong>PCS Open Proof</strong></p>



<p id="d547">During the proof process, it is necessary to evaluate the polynomial t at a specific challenge value r. The prover needs to compute the function value β = t(r) and prove that this computation is honest (i.e., that no forged polynomial was used).</p>



<p id="a587">This process is called the PCS open proof. The commit and open proof are the two most critical steps in PCS. Generally, the commit step is straightforward, while the open proof is more complex.</p>



<p id="2512">In RISC0 V2, the PCS is based on Merkle Tree (hash) + FRI (RS code); in Varuna, the PCS is based on KZG10 (MSM); and in Binius, the PCS is based on Merkle Tree (hash) + Sumcheck + FRI (RS code).</p>



<p id="4c56">Notably, the use of FRI differs significantly between the two. In RISC0, FRI is used for low-degree testing (combined with quotient polynomials) to perform the open proof, whereas in Binius, FRI is used for the open proof of large-domain polynomials (large and small domain polynomials will be discussed later). Additionally, the construction of RS codes is entirely different.</p>



<p id="8df3">This is Binius’s primary contribution. Its PCS is based on Merkle Tree (Hash: Groestl256), Multi-Variate Sumcheck, and FRI (RS code). Merkle Tree and Hash are relatively straightforward, while the other concepts are more complex, as discussed below.</p>



<p id="bd8f"><strong>RS Code</strong></p>



<p id="3275">RS code refers to a set of codewords, where a “codeword” is defined as the set of all function values of a polynomial t evaluated on a set S. Computing the codeword of a polynomial t is called encoding, with the polynomial t referred to as the message. We assume t is a polynomial over a finite field F.</p>



<p id="80f3">For example, in RISC0, the codeword of polynomial t consists of all its function values on the commitment domain (that is, the set S). Computing these values using multiplicative FFT requires that the commitment domain form a cyclic multiplicative group whose order is a power of 2 (in the complex field, FFT is feasible because cyclic multiplicative groups of any order exist on the unit circle in the complex plane). Since all computations occur in a finite field F, F must contain such a multiplicative subgroup.</p>



<p id="d46f">Additionally, for RS codes, the codeword length, i.e., the size of the commitment domain H, must satisfy |H| &gt; deg⁡(t)+1 = trace length, since RS codes are error-correcting codes (errors in the information are easily detected, corresponding to soundness), which necessitates introducing redundancy.</p>
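<p>A toy illustration of RS encoding and its redundancy (over a small prime field rather than the binary fields Binius actually uses): two distinct low-degree messages must disagree on most of the domain, which is what makes corruption detectable.</p>

```python
P = 257  # illustrative prime field; Binius instead works over binary fields

def rs_encode(message, domain):
    """Codeword = all evaluations of the message polynomial on the domain."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(message)) % P for x in domain]

domain = list(range(1, 9))    # |S| = 8 > deg(t) + 1 = 4, i.e. blowup factor 2
m1 = [1, 2, 3, 4]
m2 = [1, 2, 3, 5]             # message differing in a single coefficient
c1, c2 = rs_encode(m1, domain), rs_encode(m2, domain)
# Two distinct polynomials of degree < 4 agree on at most 3 points,
# so the two codewords must differ in at least 5 of the 8 positions.
distance = sum(a != b for a, b in zip(c1, c2))
print(distance)  # here the difference polynomial is -x^3, nonzero on all of S: 8
```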



<p id="19e0"><strong>Additive NTT</strong></p>



<p id="a5fe">Defined as follows: given the coefficients t_{0}, …, t_{2^l−1} of a polynomial t, compute the function values of another univariate polynomial P over the domain B_{l+R}.</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*Iu-Pqh8HPusyQA5lob0LXQ.png" alt=""/></figure>



<p id="1b1e">In which,</p>



<ul class="wp-block-list">
<li>B_l is the additive subgroup (corresponding to the trace domain in RISC0),</li>



<li>B_{l+R} is the additive subgroup containing B_l, corresponding to the commitment domain in RISC0, though in practice it is a coset,</li>



<li>R is the (log) blowup factor, which is 2 in RISC0 and 1 in Binius,</li>



<li>X_j(X) is the basis function of the polynomial vector space over F, completely different from the standard basis 1, X, X², …; for its strict definition, refer to <a href="https://eprint.iacr.org/2024/504" target="_blank" rel="noreferrer noopener">this paper</a>.</li>
</ul>



<p id="cebf"><strong>MLE</strong><br>Multi-Linear Extension: For a given function t:B_l → F, its multi-linear extension is a polynomial function tilde_t(X_0,…,X_{l−1}) that satisfies:</p>



<ul class="wp-block-list">
<li>tilde_t is linear in each variable X_j, i.e., the degree of tilde_t with respect to any variable X_j is at most 1, making tilde_t multi-linear;</li>



<li>The function values of tilde_t on the hypercube B_l equal those of t; i.e., tilde_t is an extension of t.</li>
</ul>
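<p>A small sketch of the MLE definition above, evaluating tilde_t as the eq-weighted sum over the hypercube (a prime field is used for illustration):</p>

```python
from itertools import product

P = 101  # illustrative prime field

def eq(v, r):
    """Multilinear Lagrange basis: equals 1 at X = v on the hypercube and 0 at
    every other hypercube vertex."""
    out = 1
    for vj, rj in zip(v, r):
        out = out * ((vj * rj + (1 - vj) * (1 - rj)) % P) % P
    return out

def mle_eval(values, r):
    """values[i] = t(v) for v the bits of i; returns tilde_t(r)."""
    l = len(r)
    total = 0
    for v, tv in zip(product([0, 1], repeat=l), values):
        total = (total + tv * eq(v, r)) % P
    return total

t_vals = [7, 3, 0, 5]                  # t on B_2, ordered (0,0),(0,1),(1,0),(1,1)
assert mle_eval(t_vals, (1, 0)) == 0   # on the hypercube, tilde_t agrees with t
print(mle_eval(t_vals, (2, 3)))        # off-cube evaluation: 35
```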



<p id="3eca">As shown in (Eq. 2) for tilde_t, the polynomial tilde_eq is precisely the Lagrange basis for multilinear polynomials (where the set of all l-variable MLEs is viewed as a 2^l-dimensional vector space).</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*dT32SLVF1dQX_sO4QqJ4tw.png" alt=""/></figure>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*vag5gveF9UiIM-GOn6eGJQ.png" alt=""/></figure>



<p id="5980">Why MLE?<br>In RISC0, the algebraic proof relies on the divisibility between the constraint polynomial and the vanishing polynomial on the evaluation domain. However, in a binary field, there is no multiplicative subgroup, and the vanishing polynomial lacks a simple or efficient computation method.</p>



<p id="7acd">To address this, Binius adopts a Zero Check based on multivariate polynomials for the proof. The Zero Check is proven using the multivariate polynomial Sumcheck protocol (which we will introduce later). The trace is treated as the simplest case of a multivariate polynomial, i.e., a multilinear polynomial, obtained through MLE.<br>(Why not use the univariate polynomial Sumcheck from Varuna for the proof? For the same reason: the absence of a multiplicative subgroup.)</p>



<h2 class="wp-block-heading" id="ae7d"><strong>2. Binius PCS Commitment</strong></h2>



<p id="9b3e">All computations in Binius are performed in binary tower fields, ranging from F₂ up to F_{2¹²⁸}, which offers the advantages of efficient computation and reduced memory usage.</p>
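<p>The efficiency claim is easy to see in code: in any binary field, addition is a bitwise XOR and multiplication is carry-less shift-and-reduce. The sketch below uses GF(2⁸) with the AES reduction polynomial purely for illustration; Binius's tower fields F₂ ⊂ F_{2²} ⊂ … ⊂ F_{2¹²⁸} are constructed differently, but share these properties.</p>

```python
AES_POLY = 0x11B  # x^8 + x^4 + x^3 + x + 1, irreducible over F_2 (the AES field)

def gf_add(a, b):
    # In any binary field, addition is bitwise XOR: no carries, very cheap.
    return a ^ b

def gf_mul(a, b, poly=AES_POLY, bits=8):
    # Carry-less "Russian peasant" multiplication with modular reduction.
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a >> bits:       # degree reached `bits`: reduce by the field polynomial
            a ^= poly
    return result

assert gf_add(0x57, 0x57) == 0     # every element is its own additive inverse
assert gf_mul(0x57, 0x83) == 0xC1  # the worked example from FIPS-197
```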



<p id="f1b1">However, achieving these benefits while adhering to the constraints of Reed-Solomon (RS) codes creates conflicts. The design of the Polynomial Commitment Scheme (PCS) in Binius is primarily aimed at resolving these conflicts.</p>



<ul class="wp-block-list">
<li>Conflict 1: In a binary field tower, there is no multiplicative subgroup whose order is a power of 2 (by Lagrange&#8217;s theorem, since the multiplicative group has odd order 2^k − 1), so multiplicative FFT cannot be used. However, additive subgroups exist in the binary field, enabling the use of additive NTT.</li>



<li>Conflict 2: In the coefficient field (small field) of the polynomial t, there is no sufficiently large subgroup to serve as the commitment domain for RS codes. RS codes require the commitment domain size |H| &gt; deg⁡(t)+1 = trace length, but we want the trace length to be sufficiently large (e.g., the segment size), while keeping the memory usage of t minimal, i.e., preferring a small coefficient field F. Additionally, H must be a subset of the finite field F (since all encoding computations occur in F). For example, if the coefficient field is F_{2¹⁶} and the segment size |H| is 2¹⁹, no subset of F_{2¹⁶} can serve as H, since |F_{2¹⁶}| = 2¹⁶ &lt; 2¹⁹.</li>
</ul>



<p id="3f9d">For Conflict 2, a naive approach would be to directly embed all coefficients of t into a larger field (e.g., F_{2¹²⁸}), but this increases memory usage and computational overhead (i.e., embedding overhead).</p>



<p id="d5a4">Binius addresses this with a packing technique, where multiple adjacent coefficients of t are packed into a single element of the larger field. For example, if the polynomial t over F_{2¹⁶} has coefficients [t0, t1, …], where each element has a bit width of 16 bits, Binius packs eight adjacent elements into a single element of F_{2¹²⁸}, resulting in a polynomial t′ over F_{2¹²⁸}, as illustrated below:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*ig6dAIOznnEZRb1HPQeMmw.png" alt=""/></figure>



<p id="994f">Subsequently, t′, a polynomial over the larger field, is RS-encoded, resolving Conflict 2. Then, the Merkle Tree root of t′ is computed as the commitment value. The PCS in Binius is therefore also referred to as the small-field PCS.</p>
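<p>A sketch of the packing step under a simplifying assumption: here eight 16-bit coefficients are packed into one 128-bit element by plain bit concatenation, whereas Binius identifies the large field with K^{2^κ} via the tower basis. The shape of the transformation is the same.</p>

```python
def pack(coeffs, width=16, count=8):
    """Pack `count` adjacent small-field coefficients (each < 2**width) into
    one large-field element, here by simple bit concatenation."""
    assert len(coeffs) == count and all(0 <= c < 1 << width for c in coeffs)
    out = 0
    for i, c in enumerate(coeffs):
        out |= c << (i * width)
    return out

def unpack(packed, width=16, count=8):
    mask = (1 << width) - 1
    return [(packed >> (i * width)) & mask for i in range(count)]

t = [100, 200, 300, 400, 500, 600, 700, 800]  # eight small-field coefficients of t
t_packed = pack(t)                 # a single 128-bit coefficient of t'
assert t_packed.bit_length() <= 128
assert unpack(t_packed) == t       # packing is lossless
```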



<h2 class="wp-block-heading" id="24cd"><strong>3. Ring-Switch</strong></h2>



<p id="a962">As previously mentioned, to address the issue of the coefficient field of polynomial t lacking a sufficiently large commitment domain when using RS codes, Binius introduces the packing technique. This involves packing adjacent coefficients of t into a polynomial t′ over a larger field, followed by computing the Merkle Tree root node of t′ as the commitment value.</p>



<p id="6ab4">However, this introduces another challenge: while the Merkle Tree commitment is made using t′, the opening proof requires computing the function values of the small-field polynomial t (failing to address this could lead to security issues).</p>



<p id="b0b9">To resolve this, Binius employs&nbsp;<strong>Ring-Switch and Sumcheck to reduce the problem of opening the commitment of t to opening the commitment of t</strong>′, where the commitment opening proof for t′ is achieved through FRI folding and Merkle Tree queries.</p>



<p id="e18e">The purpose of Ring-Switch is to reduce the opening proof of the small-field PCS to that of the large-field PCS, which is somewhat complex. We explain it as follows:</p>



<p id="6052">Suppose the small-field MLE is t(X_0,…,X_{l−1}), and after packing, the resulting polynomial is t′(X_0,…,X_{l−κ−1}). We need to prove s = t(r_0,…,r_{l−1}), where:</p>



<ul class="wp-block-list">
<li>r_j are random challenge values from the large field (for security reasons, random challenges are typically drawn from the large field, e.g., F_{2¹²⁸});</li>



<li>κ is the packing or extension coefficient. For example, if each coefficient of t is an element of F_{2¹⁶} (bit width 16) and each coefficient of t′ is an element of F_{2¹²⁸} (bit width 128), then κ = log⁡₂(128/16) = 3.</li>
</ul>



<p id="518c">For convenience, we denote the coefficient field of t as K and the coefficient field of t′ as L, where L is an extension of K of degree 2^κ. Additionally, L is a 2^κ-dimensional vector space over K, with basis β_0,…,β_{2^κ−1}. We also denote l′ = l − κ.</p>



<p id="09b1">Note that&nbsp;<strong>our goal is to prove (Statement 1)</strong>:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*DMnVGP_2THFf-0YyNeLK4A.png" alt=""/></figure>



<p id="c987"><strong>3.1 How to Prove (Statement 1)?</strong><br>Note that by treating r_0,…,r_{κ−1} as variables and based on the definition of MLE, we can deduce:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*gn8Xe35hYoWD-CxN1YYQXA.png" alt=""/></figure>



<p id="04c5">Thus, we conclude that (Statement 1) holds if and only if (Statement 2) holds:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*Vcgd5e_38ZF28FbGZ1JsNA.png" alt=""/></figure>



<p id="8f35">This is because, in the right-hand side of (Statement 2),</p>






<p id="56a1">v_0,…,v_{κ−1} traverse the hypercube B_κ. After receiving hat_s_v, the verifier can easily compute t(r_0,…,r_{l−1}) according to (Eq.1.1) and then determine whether s equals t(r_0,…,r_{l−1}). Thus,&nbsp;<strong>the problem reduces to proving (Statement 2)</strong>.</p>



<p id="7f6b"><strong>3.2 How to Prove (Statement 2)?</strong><br>We expand all hat_s_v∈L in terms of the basis β_0,…,β_{2^κ−1}, obtaining the following expression:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*UrPo2SRV9a8vFEwth0KZBQ.png" alt=""/></figure>



<p id="cbce">Additionally, note that by expanding the polynomial t(v_0,…,v_{κ−1}, X_κ,…,X_{l−1}) in terms of the Lagrange basis and evaluating it at r_κ,…,r_{l−1}, we obtain:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*ZNmXylIYkDEPRq-ksrwTtA.png" alt=""/></figure>



<p id="2ef0">We conclude that (Statement 2) holds if and only if (Statement 3) holds:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*FIYoSfXG4FKo07xRsrKGeg.png" alt=""/></figure>



<p id="1fd5">In fact, we have used two methods to compute hat_s_v:</p>



<ul class="wp-block-list">
<li>On the left-hand side of (Statement 2), hat_s_v is expanded in terms of the basis of L as a vector space over K, according to (Eq. 2.1).</li>



<li>On the right-hand side of (Statement 2), the polynomial is expanded in terms of the Lagrange basis and then evaluated, according to (Eq. 2.2)</li>
</ul>



<p id="089b">Thus,&nbsp;<strong>the problem reduces to proving (Statement 3)</strong>.</p>



<p id="00b3"><strong>3.3 How to Prove (Statement 3)?</strong><br>To connect with t′, as we aim to reduce the small-field opening proof to the large-field opening proof, we can attempt to express β_u in terms of β_v. Noting that v appears in t in (Statement 3), we expand the function values of the hat_eq polynomial:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*jM479NzTPG4t_lc4nGOBIQ.png" alt=""/></figure>



<p id="d6f7">By substituting (Eq. 3.1) into (Statement 3) and simplifying, and leveraging the linear independence of the basis in the vector space, we conclude that (Statement 3) holds if and only if (Statement 4) holds:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*ozk7fEGdtgG7EGbrAdNSYg.png" alt=""/></figure>



<p id="3095">Thus,&nbsp;<strong>the problem reduces to proving (Statement 4)</strong>.</p>



<p id="1982"><strong>3.4 How to Prove (Statement 4)?</strong><br><br>By multiplying both sides by β_v and simplifying, we conclude that (Statement 4) holds if and only if (Statement 5) holds:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*Cs6daZmJsTDPRc-P05nrkQ.png" alt=""/></figure>



<p id="ca29">Where hat_s_u is defined as (Eq. 4.1):</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*0R-CEox0b4vqSWYUnXyoLw.png" alt=""/></figure>



<p id="a97f">Note that u in (Eq. 4.1) comes from the right-hand side of (Statement 4), which originates from (Eq. 3.1), and u in hat_s_u must match u in A_{w,u}.<br>Thus,&nbsp;<strong>the problem reduces to proving (Statement 5)</strong>.</p>



<p id="8e65"><strong>3.5 How to Prove (Statement 5)?</strong><br>Since (Statement 5) is a standard Multi Sumcheck equation, it can be proven using the Multi Sumcheck Protocol. However, we aim to prove (Statement 5) holds for all u∈B_κ, batching them together to reduce the proof size. To this end, we have that (Statement 5) is almost equivalent to (Statement 6):</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*JY6fk1v9yzyF8GxQgooBAA.png" alt=""/></figure>



<p id="4875">(Statement 6) can be proven using Multi Sumcheck. (Why? After replacing r′′ with κ variables on both sides of (Statement 6), both sides are MLEs. If (Statement 5) holds, (Statement 6) must hold; conversely, if two MLEs are equal at a randomly chosen point, then (Statement 5) holds with overwhelming probability.)</p>



<p id="b81a"><strong>3.6 Summary:</strong><br>To prove (Statement 1):</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*DMnVGP_2THFf-0YyNeLK4A.png" alt=""/></figure>



<p id="1622">Using fewer challenge values, (Statement 1) is equivalent to (Statement 2):</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*Vcgd5e_38ZF28FbGZ1JsNA.png" alt=""/></figure>



<p id="569e">By computing hat_s_v using two different basis expansion methods, (Statement 2) is equivalent to (Statement 3):</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*FIYoSfXG4FKo07xRsrKGeg.png" alt=""/></figure>



<p id="b9d7">To connect with t′, eliminate β_u, (Statement 3) is equivalent to (Statement 4):</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*ozk7fEGdtgG7EGbrAdNSYg.png" alt=""/></figure>



<p id="c8b7">Multiplying by β_v and simplifying, (Statement 4) is equivalent to (Statement 5):</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*Cs6daZmJsTDPRc-P05nrkQ.png" alt=""/></figure>



<p id="6f90">Batching the Sumcheck proof, (Statement 5) is almost equivalent to (Statement 6):</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*JY6fk1v9yzyF8GxQgooBAA.png" alt=""/></figure>



<p id="cec7">Finally, we define the polynomial h as in (Eq. 6.1):</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*0bCnksPBNkUXxv3tLbLXSA.png" alt=""/></figure>



<p id="b74b">where A is given by (Eq. 6.2):</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*5FUjU1kIcGgrvBN0n-UrfA.png" alt=""/></figure>



<p id="3614">The purpose of Ring-Switch is to compute the polynomial h and the left-hand side of (Statement 6), s_0, for the subsequent Sumcheck proof and FRI. The specific algorithm flow is shown in the figure below:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*Qic5WO2WGFUDSmGco8X8mw.png" alt=""/></figure>



<h2 class="wp-block-heading" id="3383"><strong>4. Sumcheck FRI</strong></h2>



<p id="6803"><strong>Sumcheck Protocol</strong><br>The purpose of the Sumcheck protocol is to prove the following equation holds:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*VKtxJ4rNVctOHuYgoOq3pw.png" alt=""/></figure>



<p id="5a4d">where h is defined as in (Eq. 6.1). The prover needs to demonstrate that this equation holds without revealing the coefficients of h. From the equation above, it can be seen that the right-hand side is effectively a sum of the function values of the polynomial over all vertices of the “hypercube,” i.e., Sumcheck.</p>



<p id="4943"><strong>Prover</strong>:</p>



<p id="c10a">There are a total of l′ rounds, and the proof proceeds as follows:<br>For j = round_1, …, round_l′:<br><strong>A.&nbsp;</strong>The prover computes the coefficients of the univariate polynomial:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*bwt8WoaLG7kNQWBr3u0KZg.png" alt=""/></figure>



<p id="b1f2"><strong>B.</strong>&nbsp;The prover appends the coefficients of H_j to the current transcript.<br><strong>C.&nbsp;</strong>The prover derives a random challenge value r′_j from the generated transcript (the Fiat-Shamir transform).<br><strong>D.</strong>&nbsp;The prover computes the coefficients of the multivariate polynomial h(r′_1,…,r′_j, X_{j+1},…,X_{l′}).</p>



<p id="bb61"><strong>Verifier</strong>:</p>



<p id="d2ce">Upon receiving the transcript, the verifier locates the section corresponding to the Sumcheck and extracts each univariate polynomial H_j.</p>



<p id="dda5">For j = round_1, …, round_l′:<br>The verifier checks that H_j(0)+H_j(1) = H_{j−1}(r′_{j−1}) (for j = 1, the right-hand side is the claimed sum s_0).<br>— If this holds for all j, the verifier accepts the Sumcheck as correct.<br>— Otherwise, the verifier rejects.<br>Additionally, the verifier checks that H_l′(r′_l′) == h(r′_1,…,r′_l′).</p>
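<p>The prover/verifier loop above can be sketched end-to-end for the multilinear case (degree-1 round polynomials, a small prime field, and fixed stand-in challenges instead of a real Fiat-Shamir transcript):</p>

```python
P = 10007  # illustrative prime field

def fix_first(values, r):
    """Fix the first variable of a multilinear table to r, halving the table."""
    half = len(values) // 2
    return [(values[i] + r * (values[half + i] - values[i])) % P for i in range(half)]

def sumcheck_prove(values, challenges):
    """Returns the claimed sum, the round polynomials (H_j(0), H_j(1)), and h(r')."""
    claimed = sum(values) % P
    rounds = []
    for r in challenges:
        half = len(values) // 2
        rounds.append((sum(values[:half]) % P, sum(values[half:]) % P))  # H_j(0), H_j(1)
        values = fix_first(values, r)
    return claimed, rounds, values[0]           # values[0] = h(r'_1, ..., r'_l)

def sumcheck_verify(claimed, rounds, challenges, final_eval):
    expected = claimed                          # the claimed sum s_0
    for (h0, h1), r in zip(rounds, challenges):
        if (h0 + h1) % P != expected:           # round consistency check
            return False
        expected = (h0 + r * (h1 - h0)) % P     # H_j(r'_j) for a degree-1 H_j
    return expected == final_eval               # final oracle check on h

h_vals = [3, 1, 4, 1, 5, 9, 2, 6]   # h on the hypercube B_3
rs = [7, 11, 13]                    # stand-ins for Fiat-Shamir challenges
claimed, rounds, final_eval = sumcheck_prove(h_vals, rs)
assert claimed == 31
assert sumcheck_verify(claimed, rounds, rs, final_eval)
assert not sumcheck_verify((claimed + 1) % P, rounds, rs, final_eval)
```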



<p id="fbde">Note that in the final step, the verifier needs to check</p>



<p id="9d3e">H_l′(r′_l′) == h(r′_1,…,r′_l′) = A(r′_1, …, r′_l′) · t′(r′_1, …, r′_l′). Here, the verifier can compute H_l′(r′_l′) and A(r′_1, …, r′_l′) independently (why? refer to the definition in (Eq. 6.2), which involves the hat_eq polynomial). However, t′(r′_1, …, r′_l′) cannot be computed by the verifier (to preserve zero-knowledge, the verifier does not have access to it).</p>



<p id="66e4">The prover uses FRI to compute and prove t′(r′_1, …, r′_l′). How is this proven? After FRI folding, the RS codeword of t′ becomes a constant function, and this constant is exactly t′(r′_1,…,r′_l′) (why? refer to&nbsp;<a href="https://eprint.iacr.org/2024/504" rel="noreferrer noopener" target="_blank">this paper</a>). Additionally, a Merkle Tree path proof is required to ensure the folding process is correct.</p>
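<p>One multiplicative-FRI folding step can be sketched as follows (over a prime field for illustration; Binius uses a binary-field variant of FRI, but the halving structure is the same). Folding with challenge β sends f(x) = f_e(x²) + x·f_o(x²) to g(y) = f_e(y) + β·f_o(y), halving the degree each round until only a constant remains.</p>

```python
P = 97
inv2 = pow(2, P - 2, P)

def poly_eval(coeffs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def fri_fold(f, beta):
    """One folding step on coefficients: keep the even coefficients plus beta
    times the odd ones, halving the degree."""
    return [(f[2 * i] + beta * f[2 * i + 1]) % P for i in range(len(f) // 2)]

f = [3, 1, 4, 1, 5, 9, 2, 6]   # a degree-7 polynomial
beta = 5
g = fri_fold(f, beta)          # a degree-3 polynomial
# Codeword view of the same step: g(x^2) = (f(x)+f(-x))/2 + beta*(f(x)-f(-x))/(2x),
# so the verifier can check folds from queried codeword values alone.
x = 10
fx, fmx = poly_eval(f, x), poly_eval(f, (-x) % P)
lhs = poly_eval(g, pow(x, 2, P))
rhs = ((fx + fmx) * inv2 + beta * (fx - fmx) * inv2 * pow(x, P - 2, P)) % P
assert lhs == rhs
```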



<p id="9ba5">The specific algorithm flow is shown in the figure below:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*GgBnCEax0GRnoGDRCjYSCw.png" alt=""/></figure>



<h2 class="wp-block-heading" id="7c35">5. Conclusion</h2>



<p id="8409">The Binius Proof System&#8217;s PCS is a sophisticated and innovative scheme optimized for binary fields, offering a balance of efficiency, security, and practicality. By addressing the limitations of binary fields through techniques like additive NTT, MLE, and Ring-Switch, Binius provides a robust framework for zero-knowledge proofs, particularly where computational resources are limited. Its design not only enhances performance but also sets a new standard for ZK proof systems in constrained environments. In particular, it is friendly to hardware such as FPGAs and ASICs, so it can be further accelerated.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://medium.com/@CFrontier_Labs/binius-proof-system-pcs-f04bf3f22e18/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Aleo Roadmap 2025 and In-Depth Exploration of Varuna</title>
		<link>https://medium.com/@CFrontier_Labs/aleo-roadmap-2025-and-in-depth-exploration-of-varuna-540ea05a4e8d</link>
					<comments>https://medium.com/@CFrontier_Labs/aleo-roadmap-2025-and-in-depth-exploration-of-varuna-540ea05a4e8d#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Thu, 12 Jun 2025 02:02:41 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://192.168.110.200:8777/?p=451</guid>

					<description><![CDATA[TL;DR Recently, Aleo released their&#160;roadmap for 2025.&#160;We analyze the possible algorithms Varuna will use to achieve this goal. Varuna is [&#8230;]]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading" id="cd58"><strong>TL;DR</strong></h2>



<p id="e299">Recently, Aleo released their&nbsp;<a href="https://aleo.org/roadmap/" rel="noreferrer noopener" target="_blank">roadmap for 2025.</a>&nbsp;We analyze the possible algorithms Varuna will use to achieve this goal.</p>



<p id="4e90">Varuna is the current proof system for snarkVM in Aleo, used to prove the correctness of program execution (written in the Leo language). It reduces the problem of proving the correctness of one or more program executions to proving an R1CS problem, with the underlying proof leveraging the sum-check protocol for univariate polynomials.</p>



<h2 class="wp-block-heading" id="91ea">1. Roadmap 2025</h2>



<p id="3fc2">Aleo officially released the 2025 roadmap, as shown in the figure below. The goal for 2025 is to support circuits of size 2²², with plans to support circuits of size 2²⁸ in the future, which poses significant challenges for memory and computational power.<br>To achieve this goal, we expect that the Aleo VM may evolve in the following directions:<br><strong>A</strong>. Support for recursion, i.e., splitting the program or circuit to be proven into multiple parts, generating proofs for each part individually, and then aggregating these proofs into a final proof.<br><strong>B</strong>. Development of a more efficient polynomial commitment scheme.<br><strong>C</strong>. Currently, Varuna uses the base field of BN254, where each element occupies significant memory and arithmetic operations require substantial computation. Switching to a smaller field could significantly reduce memory usage and computational requirements.</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*bLUwF9EKFiKX_JBhT-vgAg.png" alt=""/><figcaption class="wp-element-caption">Source:&nbsp;<a href="https://aleo.org/roadmap/" rel="noreferrer noopener" target="_blank">https://aleo.org/roadmap/</a></figcaption></figure>



<p id="c86e">See also our previous articles,&nbsp;<a href="https://medium.com/@CFrontier_Labs/the-journey-toward-aleos-universal-zkvm-1df9114fbd86">The Journey Toward Aleo’s Universal ZKVM</a>&nbsp;and&nbsp;<a href="https://medium.com/@CFrontier_Labs/advanced-zk-hardware-acceleration-the-technology-behind-aleo-asic-miner-0fbffc42c4f6">Advanced ZK Hardware Acceleration: The Technology Behind Aleo ASIC Miner</a>.</p>



<h2 class="wp-block-heading" id="3a70"><strong>2. Introduction</strong></h2>



<p id="0ad8">The Varuna algorithm primarily consists of three components: circuit synthesis, algebraic holographic proof, and polynomial commitment scheme. We will provide a detailed introduction and analysis of the algebraic holographic proof part of the Varuna ZKP algorithm (i.e., how constraints are proven through polynomial representation and computation), which is the most challenging part of Varuna.<br>This article will analyze and introduce the algorithm’s principles from a theoretical perspective, provide a simple mathematical reasoning process, and discuss the future evolution of Varuna in the context of the Aleo 2025 Roadmap.</p>



<p id="cfbf">As shown in the figure below, this article will focus on the left part of the diagram, namely the algebraic holographic proof (where “holographic” refers to the small number of queries required in the proof process). The circuit synthesis and polynomial commitment scheme, while logically somewhat complex, are similar to those in other proof systems.</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*BWAetlN1qNVLll_Tc8UAWQ.png" alt=""/></figure>



<p id="e2f6">The following text explains the problem that Varuna proves, provides rigorous definitions of the terms used, and then analyzes in detail the five rounds introduced to address this problem (Rounds 3 and 4 are more complex, while the other rounds are simpler). Due to space constraints, for the mathematical foundations involved, we only briefly explain the concepts and theorems, without detailed descriptions or proofs, and encourage readers to consult an abstract algebra textbook.</p>



<h2 class="wp-block-heading" id="e2d5"><strong>3. R1CS Problem and Definitions</strong></h2>



<p id="9a85">The purpose of the Varuna algorithm is to prove that the execution of a program (written in the Leo language, assuming a single program for now; the process for proving multiple programs will be discussed later) is correct. To achieve this, Varuna first executes the program and, based on the types of computations and intermediate results during execution, constructs three matrices A, B, C, and a vector z. This reduces the problem of proving the correctness of program execution to proving that these three matrices and the vector z satisfy certain constraints. We provide the following definition:</p>



<p id="6fec"><strong>Definition 3.1: R1CS Problem</strong><br>For given matrices A, B, C, and vector z, the R1CS (Rank-1 Constraint System) refers to proving that the following equation holds:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*sgLVVI_azf2vP3-MOWLItw.png" alt=""/></figure>



<ul class="wp-block-list">
<li>matrix A: An m × n matrix over a finite field F (i.e., the elements of the matrix belong to F, not the real numbers). A finite field is a set in which the four basic arithmetic operations (addition, subtraction, multiplication, and division) can be performed and whose number of elements is finite (for a rigorous definition, refer to textbooks).</li>



<li>matrix B, C: Defined similarly to A, but as distinct matrices, with all three matrices having the same dimensions.</li>



<li>z: An n-dimensional column vector in the finite field, where z includes the public inputs of the program and the witness (i.e., public inputs and intermediate results of program execution), such as z = (x, w).</li>



<li>Az: Denotes standard matrix-vector multiplication, resulting in a column vector a_z. Similarly, b_z and c_z are defined.</li>



<li>The product of Az and Bz: Represents the Hadamard product between column vectors, i.e., element-wise multiplication of corresponding elements in the two vectors.</li>
</ul>
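<p>As a minimal sketch, checking an R1CS instance is just matrix-vector products plus a Hadamard product (a small prime field stands in for the field actually used by Varuna):</p>

```python
P = 97  # illustrative prime field

def r1cs_satisfied(A, B, C, z):
    """Check (Az) ∘ (Bz) == Cz over F_P, where ∘ is the Hadamard product."""
    def matvec(M, v):
        return [sum(m * x for m, x in zip(row, v)) % P for row in M]
    az, bz, cz = matvec(A, z), matvec(B, z), matvec(C, z)
    return all((a * b) % P == c for a, b, c in zip(az, bz, cz))

# A single constraint encoding w = x1 * x2, with z = (1, x1, x2, w):
A = [[0, 1, 0, 0]]   # selects x1
B = [[0, 0, 1, 0]]   # selects x2
C = [[0, 0, 0, 1]]   # selects w
assert r1cs_satisfied(A, B, C, [1, 3, 5, 15])      # honest witness: 3 * 5 == 15
assert not r1cs_satisfied(A, B, C, [1, 3, 5, 16])  # forged witness fails
```

Each additional row of A, B, C adds one constraint, matching the row-per-constraint correspondence described above.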



<p id="082e">As shown in the simple example in the figure below, this illustrates how arithmetic operations of instructions are converted into an R1CS problem. Note that each row of the matrix corresponds to a constraint, and the constraint holds if and only if the corresponding instruction’s computation is correct.</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*o_FX-eqCyr4OFreUN94zLg.png" alt=""/><figcaption class="wp-element-caption">Source: Proofs, Arguments and Zero Knowledge by Justin Thaler</figcaption></figure>



<p id="1edc"><strong>Lemma 3.2: Sum-check</strong></p>



<p id="ece2">For a given polynomial f(Y) over a finite field F (i.e., the coefficients of f belong to F), the sum of f over a cyclic group C (a multiplicative subgroup of the non-zero elements of F; we slightly abuse notation, since C also denotes a matrix in R1CS, but the meaning is clear from context) equals σ, if and only if there exist polynomials h(Y) and g(Y) over F such that the following equation holds:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*wVe9Qgg84zR3qGCqOK3NcA.png" alt=""/></figure>



<ul class="wp-block-list">
<li>σ: An element in F.</li>



<li>C: A subgroup of the multiplicative group formed by all non-zero elements of F under multiplication. A group is a set with a defined “multiplication” operation. Notably, any multiplicative subgroup of a finite field is cyclic, meaning every element of the group is an integer power of a single generator. This is crucial because many algorithms in zero-knowledge proofs (especially Varuna) rely on the simple structure of cyclic groups.</li>



<li>|C|: The number of elements in the cyclic group C. Since F is finite, C is necessarily finite.</li>



<li>vC(Y): The vanishing polynomial over the cyclic group C, i.e., a polynomial that evaluates to zero at every element of C. Other vanishing polynomials over cyclic groups can be defined similarly.</li>
</ul>
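<p>The lemma rests on a basic fact about cyclic groups: the power sums Σ_{y∈C} y^k vanish unless |C| divides k, so only the constant term of f (in the appropriate basis) survives summation, contributing σ/|C| per element. A quick numeric check in F_97, with an order-8 subgroup found by search:</p>

```python
P = 97  # F_97: the multiplicative group has order 96, so an order-8 subgroup exists

# find an element of order exactly 8, then enumerate the subgroup it generates
g = next(x for x in range(2, P) if pow(x, 8, P) == 1 and pow(x, 4, P) != 1)
C = [pow(g, j, P) for j in range(8)]
assert len(set(C)) == 8            # C really is a cyclic subgroup of order 8

def power_sum(k):
    return sum(pow(y, k, P) for y in C) % P

assert all(power_sum(k) == 0 for k in range(1, 8))  # non-multiples of |C| vanish
assert power_sum(0) == 8 and power_sum(8) == 8      # multiples of |C| give |C|
```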



<p id="5417">For ease of discussion in the following sections, we assume that the row indices of the three matrices in R1CS, such as those of A, are identified with a cyclic subgroup R of F. We call this cyclic subgroup R the&nbsp;<strong>constraint domain</strong>&nbsp;(i.e., constraint_domain in the code). The cyclic subgroup C corresponding to the column indices is called the&nbsp;<strong>variable domain</strong>&nbsp;(i.e., variable_domain in the code). Based on the context, it is easy to distinguish whether C refers to the matrix in R1CS or the variable domain, avoiding symbol confusion.<br>Accordingly, z is treated as a function from the variable domain C to F, and the public inputs of the program to be proven (i.e., public_input in the code) are treated as a function on a subgroup H0 of C, which we call the&nbsp;<strong>input domain</strong>&nbsp;(i.e., input_domain in the code).<br>Additionally, as seen in the R1CS example above, many elements in the three matrices are zero. To save memory and computation, all non-zero elements in each matrix are sorted in the order of their appearance, and the subgroup K corresponding to the indices of this sorting is called the&nbsp;<strong>non-zero parameter domain</strong>&nbsp;(i.e., nonzero_domain in the code).</p>



<h2 class="wp-block-heading" id="01e5"><strong>4. Round 1: Generating the Witness Shift Polynomial and Random Polynomial</strong></h2>



<p id="a83b"><strong>4.1. Computing the Witness Shift Polynomial</strong><br>The following polynomial is computed:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*Q95Pse-qWqTrK4BtQXbmPA.png" alt=""/></figure>



<ul class="wp-block-list">
<li>x: The public input of the program to be proven.</li>



<li>H0: The input domain, with its number of elements being the smallest power of 2 greater than or equal to the number of elements in the public input.</li>



<li>vH0(X): The vanishing polynomial over H0.</li>



<li>hat_x(X): The low-degree extension polynomial of the public input x, obtained using the function values of x on H0 via IFFT.</li>



<li>z’(X): The low-degree extension polynomial of the function z’, where z’ represents the public input and witness.</li>
</ul>



<p id="3922"><strong>4.2. Generating the Random Mask Polynomial</strong><br>The generation process is relatively simple, as shown in the following equation:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*yW6krYU_U82Au426tI8ZxQ.png" alt=""/></figure>



<ul class="wp-block-list">
<li>R3(X), R4(X): Polynomials with coefficients that are random elements in F, with R3(X) of degree 3 and R4(X) of degree 4.</li>



<li>vC(X): The vanishing polynomial over C (in the code, this corresponds to the maximum variable domain).</li>
</ul>



<p id="5ad9"><strong>Why is the mask polynomial needed?</strong><br>It is used in Round 3 to achieve zero-knowledge properties.</p>



<p id="5a46"><strong>4.3. Proof process</strong></p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*awu7BfC9xdhpGLyoTiWouQ.png" alt=""/></figure>



<p id="2e1f">The figure above illustrates the computation process for Round 1. Note that the figure involves iterating over circuits and multiple instances of each circuit. This is because Varuna supports what is known as batch proving, where multiple circuits and their respective multiple executions are proven together, generating only a single proof.</p>



<h2 class="wp-block-heading" id="daad"><strong>5. Round 2, row check</strong></h2>



<p id="5ac7"><strong>5.1 Proof principle</strong><br>The goal is to prove the following:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*419P6Q9c5mBsqajLfscjCg.png" alt=""/></figure>



<ul class="wp-block-list">
<li>a_z = Az, the product of matrix A and column vector z = (x, w), with similar definitions for b_z and c_z</li>



<li>The multiplication on the left-hand side is element-wise multiplication, not a vector inner product.</li>



<li>The equation being proven consists of m equations, corresponding to m constraints. If the program is executed correctly, these m equations hold.</li>
</ul>
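<p>Before any polynomial machinery, the element-wise (Hadamard) relation above can be checked directly on vectors. A minimal sketch with a hypothetical two-constraint R1CS over a toy field (the matrices, field size, and assignment z are made up purely for illustration):</p>

```python
p = 17

def mat_vec(M, z):
    """Matrix-vector product M·z over F_p."""
    return [sum(m * x for m, x in zip(row, z)) % p for row in M]

# Hypothetical R1CS encoding the constraints z1*z2 = z3 and z3*z1 = z4,
# with z = (1, z1, z2, z3, z4) and one matrix row per constraint.
A  = [[0, 1, 0, 0, 0], [0, 0, 0, 1, 0]]
B  = [[0, 0, 1, 0, 0], [0, 1, 0, 0, 0]]
Cm = [[0, 0, 0, 1, 0], [0, 0, 0, 0, 1]]
z  = [1, 3, 5, 15, 45 % p]    # 3*5 = 15, 15*3 = 45 ≡ 11 (mod 17)

a_z, b_z, c_z = mat_vec(A, z), mat_vec(B, z), mat_vec(Cm, z)
# Row check: (Az) ∘ (Bz) = Cz element-wise, one equation per constraint.
assert all((a * b - c) % p == 0 for a, b, c in zip(a_z, b_z, c_z))
```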



<p id="018f"><strong>How to prove it?</strong>&nbsp;It’s straightforward.</p>



<p id="6452"><strong>1.&nbsp;</strong>Use IFFT to compute the low-degree extension polynomials of a_z, b_z and c_z, obtaining hat_az(X), hat_bz(X) and hat_cz(X).</p>



<p id="43c3"><strong>2.</strong>&nbsp;(Eq. 5) holds, meaning every constraint in the R1CS is satisfied, if and only if every element x in R is a root of the polynomial hat_az(X)·hat_bz(X) − hat_cz(X).</p>



<p id="2846"><strong>3.&nbsp;</strong>The statement in step 2 holds if and only if the vanishing polynomial v_R(X) over R divides hat_az(X)·hat_bz(X) − hat_cz(X). That is, there exists a polynomial h0(X) over F such that:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*9tZLx2nRwnEfryddiTfCSQ.png" alt=""/></figure>



<p id="f9c7"><strong>4.</strong>&nbsp;(Eq. 6) holding is almost equivalent to the following equation holding, where α ∈ F\R is a random challenge value:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*HApTbBWvfJ3WzkCnHgdvxA.png" alt=""/></figure>



<p id="5e21"><strong>5.</strong>&nbsp;The purpose of Round 2 is to compute h0(X), calculate its commitment value, and submit it.</p>



<p id="16b6"><strong>6.</strong>&nbsp;When the verifier receives the proof for verification, they need to check that (Eq. 7) holds.</p>



<p id="435e"><strong>5.2 Proof process</strong></p>



<p id="bffc">The specific proof process for Round 2 is illustrated in the figure below:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*1PxcMitPVNFFD8ko2aUiww.png" alt=""/></figure>



<p id="1a87">Note that in Round 2, the process iterates over multiple circuits and multiple instances of each circuit, computing the row_check_witness.</p>



<p id="911b">The most important feature of Varuna is its support for&nbsp;<strong>batch proving</strong>, which involves proving multiple circuits and their multiple instances together, generating a single proof.</p>



<p id="b73b">Since there are multiple circuits, each potentially with multiple instances, a naive implementation would require computing the h0(X) polynomial for each instance of each circuit, calculating its commitment value, and submitting it. However, this approach would result in many commitment values being submitted, leading to a large proof size.</p>



<p id="a890">Instead, we aim to combine all quotient polynomials from multiple circuits and instances into a single polynomial and submit its commitment value. This is achieved by computing the following expression, derived through easy mathematical reasoning.</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*RKGT5lQD-K5liB1d2Fr69w.png" alt=""/></figure>



<ul class="wp-block-list">
<li>R: The largest constraint domain.</li>



<li>Ri: The constraint domain of the i-th circuit.</li>



<li>vRi(X): The vanishing polynomial over Ri.</li>
</ul>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*nMS8w34kVQMXzQaMsSC_uA.png" alt=""/></figure>



<h2 class="wp-block-heading" id="98a9"><strong>6. Round 3, linear check</strong></h2>



<p id="8978"><strong>6.1 Proof principle</strong><br>The linear check computes and proves that the matrix-vector multiplication is correct, i.e., that the following equation holds:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*sQEY2zhnmFilbbxB2-yG0A.png" alt=""/></figure>



<ul class="wp-block-list">
<li>M ∈ {A, B, C}, with a_z denoting the product of matrix A and vector z = (x, w), and similarly for b_z and c_z.</li>
</ul>



<p id="229e"><strong>Why is this necessary?</strong></p>






<p id="a66b"><strong>1</strong>. The R1CS itself requires proving this equation.</p>



<p id="5ef2"><strong>2.&nbsp;</strong>During verification in Round 2, the verifier needs to compute:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*Y-m2mJlkdw6C7lXPAlDz9w.png" alt=""/></figure>



<p id="1143">However, the verifier cannot directly compute this because they do not have M (which may be very large) or z. Thus, they rely on the prover.<br>Note that (Eq. 11) holding is almost equivalent to (Eq. 10) holding, so we only need to compute and prove (Eq. 11).</p>



<p id="53a9"><strong>How to prove it?</strong><br>For ease of explanation, we first assume there is only one circuit. The intuitive understanding of the proof process is as follows:</p>



<p id="0ec8"><strong>1.&nbsp;</strong>Leverage the sum-check protocol for a univariate polynomial over the cyclic group C.&nbsp;<strong>Sum-check Lemma</strong>: As stated in Definition 3.2, for a polynomial f(Y), the sum over the cyclic group C equals σ if and only if there exist polynomials h(Y) and g(Y) such that:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*FZ84DbeBZ7Eetd-nj_xZwg.png" alt=""/></figure>



<p id="7223"><strong>2.&nbsp;</strong>Note that the matrix-vector multiplication is essentially a summation.</p>



<p id="02dd"><strong>3.</strong>&nbsp;The key to the proof is computing f(Y) = hatM(α, Y) · z(Y), where hatM(α, Y) is the low-degree extension of M as a bivariate polynomial.</p>



<p id="fc04"><strong>4.&nbsp;</strong>To compute f(Y), consider the Lagrange basis expansion of hatM(X, Y):</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*1xcgrrifxMUQcMCwNgxW8g.png" alt=""/></figure>



<ul class="wp-block-list">
<li>R: The constraint domain, i.e., the domain of the row indices of M.</li>



<li>C: The variable domain, i.e., the domain of the column indices of M.</li>



<li>K: The cyclic subgroup corresponding to the indices obtained by sorting all non-zero elements of matrix M in their order of appearance, i.e., the non-zero domain in the code.</li>



<li>valM(κ): The non-zero elements of M viewed as a function over K, with hat_valM(Z) as its low-degree extension polynomial.</li>



<li>rowM(κ): The row indices of the non-zero elements of M viewed as a function over K, with hat_rowM(Z) as its low-degree extension polynomial.</li>



<li>colM(κ): Defined similarly to rowM(κ).</li>



<li>L_rowM(κ)_R(X): The Lagrange basis function corresponding to the rowM(κ)-th element over R, and similarly for L_colM(κ)_C(Y)</li>
</ul>



<p id="f1b0"><strong>5.&nbsp;</strong>Using the low-degree extension of M (Eq. 13), we can easily construct f(Y).</p>



<p id="6276"><strong>6.&nbsp;</strong>Then, applying the sum-check lemma, (Eq. 11) holding is almost equivalent to the following equation holding, where β is a random challenge value in F\C:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*9KDUi_TMn5JAQAbHFO2cKg.png" alt=""/></figure>



<p id="5cec"><strong>7.&nbsp;</strong>The purpose of Round 3 is to compute h(Y) and g(Y), submit them, and ensure zero-knowledge properties.</p>



<p id="c251"><strong>8.&nbsp;</strong>The verifier, during verification, needs to check that (Eq. 14) holds.</p>



<p id="e9e4"><strong>6.2 Proof process</strong></p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*Rn6WWxt0rguXKMPdnHTj1g.png" alt=""/></figure>



<p id="b635">Additionally, note that in Round 3, the process iterates over multiple circuits and multiple instances of each circuit. Similar to the prove process in Round 2, for multiple circuits and instances, we aim to combine all quotient polynomials h(Y) and remainder polynomials r(Y) into a single polynomial, i.e., compute h1(Y) and r1(Y) in the following equations and submit their commitment values:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*XZrbgcnrdGjw6QS750xsjw.png" alt=""/></figure>



<ul class="wp-block-list">
<li>C : The largest variable domain.</li>



<li>Ci: The variable domain of the i-th circuit.</li>



<li>vC(Y): The vanishing polynomial over C.</li>



<li>h’_ijk(Y) and r’_ijk(Y) must satisfy the following equation:</li>
</ul>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*UFwSlkNs9IH_6Lxg4fTQ9w.png" alt=""/></figure>



<h2 class="wp-block-heading" id="b622"><strong>7. Round 4, rational check</strong></h2>



<p id="15fe"><strong>7.1 Proof principle</strong></p>



<p id="9426">Compute and prove the evaluation of the bivariate matrix polynomial:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*LXceIYMdrRM3YSFpqz8ssQ.png" alt=""/></figure>



<ul class="wp-block-list">
<li>M ∈ {A, B, C}.</li>



<li>hatM(X, Y): The low-degree extension polynomial of M, as noted in (Eq. 13).</li>



<li>α: A random challenge value from F\R.</li>



<li>β: A random challenge value from F\C.</li>
</ul>



<p id="51ef"><strong>Why is this necessary?</strong></p>



<p id="7180"><strong>1.</strong>&nbsp;In Round 3, the verifier needs to compute hatM(α, β) for the final check, but cannot do so directly (due to high computational cost). Thus, the prover must compute and prove it.</p>



<p id="4e65"><strong>2.</strong>&nbsp;Directly using the bivariate polynomial obtained from the Lagrange expansion of the matrix for the proof would be computationally expensive. Therefore, a derivative polynomial basis is introduced. Below are the Lagrange basis function expansion and the derivative polynomial basis function expansion:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*KIaOl2kmQQ-zRpNs-FTAKw.png" alt=""/></figure>



<ul class="wp-block-list">
<li>dR(X, Y) = [vR(X) − vR(Y)] / (X − Y).</li>



<li>R: The constraint domain.</li>



<li>C: The variable domain.</li>



<li>K: The non-zero parameter domain.</li>
</ul>



<p id="5f5f"><strong>How to prove (Eq. 17)?</strong><br>For ease of explanation, we first assume there is only one circuit.</p>



<p id="1e18"><strong>1.</strong>&nbsp;Using the definition of the quotient polynomial, simplify (Eq. 18) to obtain the following expression:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*XLU_8ywozoHiwi2vccS8Mw.png" alt=""/></figure>



<p id="4d68"><strong>2.&nbsp;</strong>Define the following polynomial r(Z):</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*VhoHLHUl_wgIB7Fo2ap7yQ.png" alt=""/></figure>



<p id="1f55">Then, (Eq. 17) holds if and only if the sum of the rational polynomial r(Z) over K equals ω.</p>



<p id="6a5c"><strong>3.</strong>&nbsp;The statement in step 2 holds if and only if there exist polynomials h4(Z) and gM(Z) such that the following equation holds:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*IH7lAPCcrD5KcGWbs4vvIA.png" alt=""/></figure>



<p id="93b8"><strong>4.&nbsp;</strong>The purpose of Round 4 is to compute ω, hM(Z), gM(Z), a(Z), b(Z) from the above equation, and submit them (for polynomials, compute their commitments).</p>



<p id="dee8"><strong>7.2 Proof process</strong></p>



<p id="ba61">Based on the above description, we can obtain the proof process for Round 4 as shown in the figure below:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*CF8aJqTQ26VJfXQQe6CBWg.png" alt=""/></figure>



<p id="292b">Similar to the approaches in Round 2 and Round 3, all hM_i(Z) polynomials are combined via a random linear combination (the random combiner is obtained later, and the accumulation is completed in Round 5).</p>



<h2 class="wp-block-heading" id="2bf5"><strong>8. Round 5</strong></h2>



<p id="2d39">The process for Round 5 is relatively simple, only requiring the completion of the following computations:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*Ba8h9r11n9FL8GBQ4xZZIg.png" alt=""/></figure>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*kj1K0bd6OaRYCeIuLNiJjA.png" alt=""/></figure>



<h2 class="wp-block-heading" id="33a9"><strong>9. Summary</strong></h2>



<p id="38bf">In summary, the proof process for the algebraic holographic part of Varuna primarily consists of the following five rounds:</p>



<p id="f1c0"><strong>Round 1</strong>: Preparation for the proof, generating the witness_shift polynomial and the random polynomial mask(X), used to achieve zero knowledge (ZK).</p>



<p id="fb43"><strong>Round 2</strong>: Row check, proving that each row constraint in the R1CS holds.</p>



<p id="c106"><strong>Round 3</strong>: Linear check, computing and proving that the matrix-vector multiplication is correct, while also supporting Round 2.</p>



<p id="6175"><strong>Round 4</strong>: Rational check, supporting Round 3, computing and proving the evaluation of the low-degree extension polynomial of matrix M at (α, β): ω = hatM(α, β).</p>



<p id="cc7f"><strong>Round 5</strong>: Randomly combining the quotient polynomials from Round 4 and computing their commitment values, supporting Round 4.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://medium.com/@CFrontier_Labs/aleo-roadmap-2025-and-in-depth-exploration-of-varuna-540ea05a4e8d/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Finite Field Arithmetic Optimization</title>
		<link>https://medium.com/@CFrontier_Labs/finite-field-arithmetic-optimization-ab449202c776</link>
					<comments>https://medium.com/@CFrontier_Labs/finite-field-arithmetic-optimization-ab449202c776#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Mon, 19 May 2025 02:35:39 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://192.168.110.200:8777/?p=453</guid>

					<description><![CDATA[TLDR The finite field arithmetic operations are essential in ZKP proof systems, we introduce some simple novel field computation algorithms, [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><strong>TLDR</strong></p>



<p id="8720">Finite field arithmetic operations are essential in ZKP proof systems. We introduce some simple, novel field computation algorithms that yield at least a 20% performance improvement for multiplication in the extension field K, and a 30% improvement for the Montgomery transformation from an integer to its Montgomery representation in the base field F.</p>



<h2 class="wp-block-heading" id="d212"><strong>1. Goal</strong></h2>



<p id="6563">In many zero-knowledge proof systems (such as SP1 and RISC0), the majority of computations occur within finite fields, including finite field addition, multiplication, and division. Therefore, efficient finite field arithmetic operations are crucial for the performance of zero-knowledge proof systems. We will introduce some novel finite field computation algorithms and provide performance comparison data.</p>



<h2 class="wp-block-heading" id="4a5c"><strong>2. Finite Fields​​</strong></h2>



<p id="d295">A ​​field​​ is a set&nbsp;<em>F</em>&nbsp;equipped with two operations, referred to as “addition” and “multiplication”, denoted as + and ⋅ respectively, that satisfy the following conditions:</p>



<p id="1b8e">​​A.​​&nbsp;<em>F</em>&nbsp;forms an ​​Abelian group​​ under addition:</p>



<ul class="wp-block-list">
<li>There exists an additive identity element 0∈<em>F</em>.</li>



<li>Every element in <em>F</em> has an additive inverse.</li>



<li>Addition is associative and commutative.</li>
</ul>



<p id="e0da">​​B.​​&nbsp;<em>F</em>∖{0} forms an ​​Abelian group​​ under multiplication:</p>



<ul class="wp-block-list">
<li>There exists a multiplicative identity element 1∈<em>F</em>∖{0}.</li>



<li>Every non-zero element in <em>F</em> has a multiplicative inverse.</li>



<li>Multiplication is associative and commutative.</li>
</ul>



<p id="6387">For example, the set of real numbers R, under standard addition and multiplication, forms the real number field R. A field&nbsp;<em>F</em>&nbsp;is called a ​​finite field​​ if it contains a finite number of elements.</p>



<p id="1ca8">A ​​prime field​​ F<em>p</em>​ is defined as the set F<em>p</em>​={0,1,2,…,<em>p</em>−1}, where&nbsp;<em>p</em>&nbsp;is a prime number. Its operations are:</p>



<ul class="wp-block-list">
<li>​​Addition​​: <em>a</em>+<em>b</em>=(<em>a</em>+<em>b</em>) mod <em>p</em> (where the second “+” denotes integer addition, and mod is the modulo operation).</li>



<li>​​Multiplication​​: <em>a</em>⋅<em>b</em>=(<em>a</em>×<em>b</em>) mod <em>p</em> (where × denotes integer multiplication).</li>
</ul>



<p id="0fec">A ​​field extension​​&nbsp;<em>K</em>&nbsp;of a field&nbsp;<em>F</em>&nbsp;is a field such that&nbsp;<em>F</em>&nbsp;is a subfield of&nbsp;<em>K</em>&nbsp;(i.e.,&nbsp;<em>F</em>⊆<em>K</em>, and the operations of&nbsp;<em>K</em>, when restricted to&nbsp;<em>F</em>, satisfy the field axioms). For example, the complex numbers C form an extension field of the real numbers R.</p>



<p id="db00">Since prime fields F<em>p</em>​ are widely used in zero-knowledge proof systems, we focus on computations in F<em>p</em>​ and their extension field&nbsp;<em>K</em>. Without loss of generality, we assume that:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*G5TLt20AI804aVziQ7Swdg.png" alt=""/></figure>



<p id="da4b">It is easy to verify that the polynomial&nbsp;<em>X</em>^4+11 is irreducible in F[X], so the quotient ring F[<em>X</em>]/(<em>X</em>^4+11) forms a field and is an extension of F.</p>



<h2 class="wp-block-heading" id="206c"><strong>3. Operations in the Field F and Its Extension Field K</strong></h2>



<p id="6458">Operations in the field F typically include addition, subtraction, multiplication, and finding inverses, where addition and multiplication are the basic operations defined in the field F, while subtraction and inversion can be constructed from these basic operations. Similarly, operations in the extension field K can be derived from those in F, so we first focus on addition and multiplication in F.</p>



<p id="3a08">Based on the definition of the prime field, addition in F is defined as a+b = (a+b) mod p. Note that 0≤a≤p−1 and 0≤b≤p−1, so when a+b is computed as integer addition, the result c = a+b satisfies 0≤c≤2p−2. Thus c mod p can be computed as: c if c&lt;p, otherwise c−p. This means that addition in F can be calculated using one or two integer addition/subtraction operations and one comparison.</p>
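<p>The branch-based addition just described can be sketched directly (the 31-bit prime p = 2^31 − 2^27 + 1, the BabyBear prime, is used only as an example modulus):</p>

```python
P = (1 << 31) - (1 << 27) + 1   # 2013265921, a 31-bit prime (example modulus)

def add_mod(a, b):
    """Field addition: one integer add, one compare, at most one subtract."""
    c = a + b                   # 0 <= c <= 2P - 2, so one conditional subtract suffices
    return c - P if c >= P else c

assert add_mod(P - 1, 1) == 0
assert add_mod(P - 1, P - 1) == P - 2
assert add_mod(5, 7) == 12
```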



<p id="5209">Multiplication in F is defined as a⋅b=(a⋅b) mod p. Note that 0≤a⋅b≤(p−1)^2. When a⋅b is large, numerous subtraction operations are required to reduce a⋅b to an element in the field F. Alternatively, one division can be used, such as q = ⌊(a⋅b)/p⌋, and (a⋅b) mod p = (a⋅b)−(p⋅q). However, division operations require a significant number of cycles in hardware or GPU implementations. To address this issue, we introduce Montgomery reduction.</p>



<h2 class="wp-block-heading" id="430c"><strong>4. Montgomery Reduction</strong></h2>



<p id="cf41">To reduce a given large integer to the range [0, p) without using division, Montgomery reduction is based on the following equation:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*aiBZg5Vkui5SqJ83JjsGpQ.png" alt=""/></figure>



<p id="376c">This equation clearly holds because, by considering the operations in the expression z + ((z⋅p′ mod R)⋅p) as multiplication and addition in the ring Z/RZ, there exists an integer k such that the following relationship is satisfied:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*wVr7HVN6eIWUoJ_Obc8zcA.png" alt=""/></figure>



<p id="cb08">That is, the numerator in Equation (1) is a multiple of R.</p>






<p id="6c45">Additionally, note that Rc = R(z⋅R_inv mod p). By treating the multiplication as an operation in the field F, we have:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*OPHdZSYJdevxErega3rZIg.png" alt=""/></figure>



<p id="0cf9">For x∈[0,p), suppose tilde_x = xR mod p, with y and tilde_y defined similarly. According to the definition of the field F, the following equation holds:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*g2dhwb4PLYo3tVEy602hGQ.png" alt=""/></figure>



<p id="aefc">From Equation (2), it can be seen that we can compute the product of tilde_x and tilde_y using Equation (1), i.e., Montgomery reduction and this equation. However, this requires first transforming x to tilde_x = xR mod p, which is computationally expensive. This transformation can be achieved, for example, via Mont(x,R2), where R2=(R⋅R) mod p, and R2 can be precomputed. In many proof systems, such a transformation typically needs to be performed only once, followed by numerous multiplication or division operations. Thus, Montgomery multiplication is an efficient method for multiplying elements.</p>



<p id="39d9">Hereafter, we refer to this transformation as the Montgomery transformation, and tilde_x as its Montgomery representation.</p>
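<p>The reduction itself (often written REDC) can be sketched as follows. This follows the standard textbook algorithm rather than any particular library; R = 2^32 and the 31-bit example prime are illustrative choices:</p>

```python
P = 2013265921            # example 31-bit prime
R = 1 << 32               # Montgomery radix; gcd(R, P) = 1 since P is odd
R2 = (R * R) % P          # precomputed R^2 mod P
P_INV = pow(-P, -1, R)    # p' with p * p' ≡ -1 (mod R)

def redc(z):
    """Montgomery reduction: returns z * R^{-1} mod P, for 0 <= z < R*P."""
    m = (z * P_INV) % R          # low 32 bits only
    t = (z + m * P) >> 32        # numerator is a multiple of R: exact shift
    return t - P if t >= P else t

def to_mont(x):
    return redc(x * R2)          # tilde_x = x * R mod P, via Mont(x, R^2)

def mont_mul(xt, yt):
    return redc(xt * yt)         # product stays in Montgomery form

x, y = 123456789, 987654321
xt, yt = to_mont(x), to_mont(y)
assert redc(mont_mul(xt, yt)) == (x * y) % P
```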



<h2 class="wp-block-heading" id="2397"><strong>5. Optimized Barrett Reduction</strong></h2>



<p id="2802">In many proof systems (such as SP1 and RISC0), when generating a trace, it is necessary to transform numerous integers in the range [0, 0xffffffff) into their Montgomery representation using the Montgomery transformation. However, the computational cost of the Montgomery transformation is relatively high. To address this, Barrett reduction can be introduced, as shown in the following equation:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*0Acbe8gDBx7pZpCb1unofA.png" alt=""/></figure>



<p id="8030">The correctness of Equation (5), i.e., the proof of correctness of Barrett reduction, can be found in Note 2.15 of Reference [1]. It suffices to note that this proof shows: 0 ≤ (z − hat_q⋅p) &lt; 3p.</p>
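<p>Plain Barrett reduction (before the optimization below) can be sketched bit-wise with b = 2; the precomputed μ and the bound z − hat_q⋅p &lt; 3p follow the textbook algorithm, and the 31-bit example prime is illustrative:</p>

```python
import random

P = 2013265921              # example 31-bit prime
K = P.bit_length()          # k = 31
MU = (1 << (2 * K)) // P    # mu = floor(b^{2k} / p), precomputed, b = 2

def barrett_reduce(z):
    """Compute z mod P for 0 <= z < 2^(2k) using shifts and one multiply by MU."""
    q_hat = ((z >> (K - 1)) * MU) >> (K + 1)   # underestimate of floor(z / P)
    r = z - q_hat * P                           # 0 <= r < 3P
    while r >= P:                               # at most two subtractions
        r -= P
    return r

for _ in range(1000):
    z = random.randrange(P * P)                 # P*P < 2^62, within the input bound
    assert barrett_reduce(z) == z % P
```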



<p id="5afa">However, since μ&gt;(1≪32), a u64 is required to represent μ. Additionally, since 0≤⌊z / b^(k−1)⌋&lt; (1≪34), a u64 is required there as well. This results in one u64×u64 multiplication and one u64×u32 multiplication, which is computationally expensive. To address this, we can leverage the properties of μ and p to simplify the computation.</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*38ZL199y_aYylRCSgTu-WQ.png" alt=""/></figure>



<p id="5b7c">Additionally, since the Montgomery transformation is tilde_x = xR mod p, let</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*Yiox-s9OJPK724816LeluA.png" alt=""/></figure>



<p id="6b70">Using Equations (6)–(8), we can rewrite Equations (3) and (4) in the following form:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*7LBk79Ov3B1wER31XDgYxg.png" alt=""/></figure>



<p id="3d2d">The optimized Barrett reduction then needs only one u32 multiplication, plus shifts, additions, and subtractions, which are far cheaper than multiplications.</p>



<h2 class="wp-block-heading" id="c627"><strong>6. Multiplication in the Extension Field K</strong></h2>



<p id="c6b0">We know that the elements in the extension field K are essentially a set of polynomials { r(X)+q(X)p(X) | q(X)∈F[X] }, where p(X) is an irreducible polynomial, such as p(X)=X^4 + 11 as defined above, with r(X)∈F[X] and its degree less than 4.</p>



<p id="3dee">Thus, elements in K can be represented by their corresponding polynomial r(X), which can be expressed using 4 elements from the field F as the coefficients of the polynomial.</p>



<p id="a88b">The multiplication operation in the extension field K consists of polynomial multiplication followed by a polynomial reduction with respect to p(X):</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*kzOJfXlM84N4WOf_BftZXA.png" alt=""/></figure>



<p id="9cae">Then h(X) = (f(X) ⋅ g(X)) mod p(X), with coefficients:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*15bDmjnWpzBOYM5YJWGQxA.png" alt=""/></figure>



<p id="4ba2">From Equation (10), it can be seen that multiplication in the extension field K is implemented through multiplication and addition of elements in the field F. Notably, this primarily involves vector inner products, such as computing inn3 = a0⋅b2 + a1⋅b1 + a2⋅b0. If computed naively using the multiplication and addition operations of the field F, multiple Montgomery reductions would be required (e.g., inn3 requires 3 reductions), with each reduction involving a significant computational cost:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*_JMJCTDM3qB8kxe3qWW-8g.png" alt=""/></figure>



<p id="eaf1">Suppose we have already performed the Montgomery transformation on a0 using Optimized Barrett reduction, i.e., tilde_a0 = a0⋅R mod p, and similarly for tilde_a1, tilde_a2, tilde_b0, tilde_b1, tilde_b2. Then tilde_inn3 is:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*ELgiXzt1olK2DI4q6f3WoQ.png" alt=""/></figure>



<p id="fec2">From Equation (12), it can be seen that we only need to perform a single Montgomery reduction, which significantly improves the computational performance of the inner product. Consequently, the multiplication operation in the extension field K can be optimized.</p>
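<p>This deferred-reduction trick accumulates the raw products of Montgomery-form operands and applies a single reduction to the sum. The sketch below uses R = 2^32 with a deliberately small example prime so that the accumulated sum stays below R·p, the precondition for a single REDC; all parameter choices are illustrative:</p>

```python
P = 65521                 # small example prime, so 3*P^2 < R*P holds
R = 1 << 32               # Montgomery radix
R2 = (R * R) % P
P_INV = pow(-P, -1, R)    # p' with p * p' ≡ -1 (mod R)

def redc(z):
    """Montgomery reduction: z * R^{-1} mod P, for 0 <= z < R*P."""
    m = (z * P_INV) % R
    t = (z + m * P) >> 32
    return t - P if t >= P else t

a = [3, 1000, 54321]
b = [7, 2000, 12345]
naive = sum(ai * bi for ai, bi in zip(a, b)) % P

# Montgomery forms, then ONE reduction for the whole inner product.
at = [redc(x * R2) for x in a]             # tilde_a_i = a_i * R mod P
bt = [redc(x * R2) for x in b]
acc = sum(x * y for x, y in zip(at, bt))   # < 3*P^2 < R*P, so one REDC is valid
inner_mont = redc(acc)                     # Montgomery form of the inner product
assert redc(inner_mont) == naive           # convert back and compare
```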



<h2 class="wp-block-heading" id="5a49"><strong>7. Performance Comparison</strong></h2>



<p id="f0a5">We implemented the following three simple algorithms in Python and collected performance data:</p>



<ol class="wp-block-list">
<li>Optimized Barrett reduction for Montgomery transformation;</li>



<li>Merged Montgomery reduction in the computation of the inner product of two vectors;</li>



<li>Merged Montgomery reduction for multiplication in the extension field K.</li>
</ol>



<p id="d72b">Test Platform: Mac M3, 16GB RAM, Python implementation.</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1500/1*dAt0XiNV1ogLkQlCB0qODw.png" alt=""/></figure>



<h2 class="wp-block-heading" id="7338"><strong>8. Reference</strong></h2>



<p>1. Menezes, A. J., Vanstone, S. A., &amp; Oorschot, P. C. V. (2004). Guide to elliptic curve cryptography. Springer.</p>



]]></content:encoded>
					
					<wfw:commentRss>https://medium.com/@CFrontier_Labs/finite-field-arithmetic-optimization-ab449202c776/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Introduction to SP1 zkVM</title>
		<link>https://medium.com/@CFrontier_Labs/introduction-to-sp1-zkvm-92555f50f38e</link>
					<comments>https://medium.com/@CFrontier_Labs/introduction-to-sp1-zkvm-92555f50f38e#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 07 Mar 2025 02:39:27 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://192.168.110.200:8777/?p=455</guid>

					<description><![CDATA[TL;DR 1. Purpose SP1 is a zkVM (zero-knowledge Virtual Machine) proof system based on the RISC-V instruction set and STARK [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><strong>TL;DR</strong></p>



<ol class="wp-block-list">
<li>SP1 is a zkVM proof system based on the RISC-V instruction set and STARK. It supports recursion proofs, enabling the generation of proofs of the same size for the execution of any program.</li>



<li>2. SP1 can convert the STARK proof obtained through Recursion into a SNARK proof using Groth16 or Plonk.</li>



<li>We make a detail comparison with RISC0 and evaluate the performance of SP1 zkVM on the generation proof for a Taiko block.</li>
</ol>



<h3 class="wp-block-heading" id="46f1">1. Purpose</h3>



<p id="ccc4">SP1 is a zkVM (zero-knowledge Virtual Machine) proof system based on the RISC-V instruction set and STARK (Scalable Transparent Argument of Knowledge). It supports recursion proofs, enabling the generation of proofs of the same size for the execution of any program. Additionally, SP1 can convert the STARK proof obtained through recursion into a SNARK proof using Groth16 or Plonk, further compressing the proof size.</p>



<p id="da04">Since SP1’s proof process is similar to that of RISC0, and we have previously analyzed RISC0’s proof process (<a href="https://medium.com/@CFrontier_Labs/risc0-algorithm-analysis-segment-proof-part1-0666216b654b">https://medium.com/@CFrontier_Labs/risc0-algorithm-analysis-segment-proof-part1-0666216b654b</a>), we will briefly introduce SP1’s proof process and key steps. Following this, we will compare it with RISC0 and finally discuss the performance, CPU, and memory usage when proving a Taiko block.</p>



<h3 class="wp-block-heading" id="eb46"><strong>2. SP1’s Proof Process</strong></h3>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1500/1*S6BjAO5qQ0iWF6eBHcEOnA.png" alt=""/><figcaption class="wp-element-caption">Figure 1. The SP1 zkVM prove process</figcaption></figure>



<p id="75e6">As shown in the figure above, for a program to be proven (composed of numerous RISC-V instructions, compiled from a high-level language such as Rust) and its given input, the SP1 zkVM executes the program and generates a proof file to verify that the program was executed correctly. This proof file must not contain any sensitive information, such as data generated during execution or private inputs.</p>



<p id="9056">To achieve this, the SP1 zkVM performs the following steps:</p>



<p id="a87f">A. Execute the program, split it into multiple Shards/Checkpoints, and record all memory data required for executing each Shard.</p>



<p id="2e06">B. Generate proofs for each Shard individually using the STARK protocol.</p>



<p id="9ab3">C. Merge the proofs from the previous step into a single proof via Recursion Proving, ensuring that programs of varying sizes result in proof files of identical length.</p>



<p id="28c0">D. Convert the Recursion Proving-generated proof into a SNARK proof using Groth16 or Plonk, further reducing the proof size. Since generating Shard Proofs consumes the most memory and computational resources, the following section will briefly explain the Shard Proof generation process.</p>



<h3 class="wp-block-heading" id="2c7b"><strong>3. Generating Shard Proof</strong></h3>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1500/1*yX7wVOfABAAlOca8cJWVog.png" alt=""/><figcaption class="wp-element-caption">Figure 2: Shard Proof generation process</figcaption></figure>



<p id="2863">As shown in the figure above, when generating a Shard Proof for a program, SP1 employs three independent threads: Checkpoint Generation, Trace Generation, and Shard Proof Generation. This enables parallel processing of multiple tasks, significantly improving performance.</p>



<p id="7a74">The details are as follows:</p>



<p id="5a66">A. Checkpoint Generation Thread. This thread executes the program to be proven, splits it into multiple Shards based on a predefined Shard size, and records the instructions within each Shard along with the memory data required to execute them.</p>



<p id="6937">B. Trace Generation Thread. For each Shard generated by the Checkpoint thread, this thread sequentially executes the instructions in the Shard, logs the register states and memory access patterns, and ultimately produces the execution trace. Notably, all Shards are processed independently, allowing hardware acceleration (e.g., GPU or multi-CPU parallelism) to achieve high performance.</p>



<p id="f39d">C. Shard Proof Generation Thread. Using the STARK protocol, this thread generates commitments and opening proofs for the traces obtained from the Trace thread, resulting in the final Shard Proof.</p>
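<p>As a rough illustration, the three-stage pipeline can be sketched with Python threads and queues. This is a toy model, not SP1's actual Rust implementation: the shard, trace, and proof objects are stand-ins.</p>

```python
import queue
import threading

shards_q = queue.Queue()
traces_q = queue.Queue()
proofs = []

def checkpoint_thread(program, shard_size):
    # Stage A: split the instruction stream into shards (checkpoints).
    for i in range(0, len(program), shard_size):
        shards_q.put(program[i:i + shard_size])
    shards_q.put(None)  # sentinel: no more shards

def trace_thread():
    # Stage B: re-execute each shard independently and emit its trace.
    while (shard := shards_q.get()) is not None:
        trace = [(pc, ins) for pc, ins in enumerate(shard)]  # stand-in trace rows
        traces_q.put(trace)
    traces_q.put(None)

def prove_thread():
    # Stage C: commit to each trace and produce a stand-in "shard proof".
    while (trace := traces_q.get()) is not None:
        proofs.append(f"shard_proof({len(trace)} rows)")

program = [f"insn_{i}" for i in range(10)]
threads = [
    threading.Thread(target=checkpoint_thread, args=(program, 4)),
    threading.Thread(target=trace_thread),
    threading.Thread(target=prove_thread),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(proofs)  # one proof per shard
```

<p>Because the stages communicate only through FIFO queues, checkpointing, tracing, and proving of different shards overlap in time, which is the source of the parallel speedup described above.</p>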



<h3 class="wp-block-heading" id="f505"><strong>4. Generating Recursion Proof</strong></h3>



<p id="0bcd">The size of the final proof is roughly proportional to the length of the execution trace of the program being proven. In other words, longer programs result in more Shards (and corresponding Shard proofs), and proof size is a critical metric for proof systems (since proofs must be transmitted from the prover to the verifier).</p>






<p id="226f">To minimize the final proof size, SP1 supports compressing pairs of adjacent Shard Proofs into Reduce Proofs using the STARK protocol. These Reduce Proofs are then recursively compressed further until a single root proof is generated. This ensures that, regardless of the program size, the final proof size will be fixed-length.</p>



<p id="375e">Finally, the root proof at the base of the recursion tree is wrapped into a Groth16-compatible proof for efficient on-chain verification. Figure 3 illustrates this process.</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*b50aaqM87vy130vnlCVGAw.png" alt=""/><figcaption class="wp-element-caption">Figure 3: SP1 Recursion Proof (Source: SP1 Technical White paper — Succinct)</figcaption></figure>
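<p>The pairwise compression described above can be sketched as a binary tree reduction. This is a toy model: real recursion steps are themselves STARK proofs that verify their two children, here replaced by hashes purely to show the tree shape and the fixed-size result.</p>

```python
import hashlib

def merge(left: bytes, right: bytes) -> bytes:
    # Stand-in for one recursion step; in SP1 this is a proof that
    # verifies the two child proofs, not a hash.
    return hashlib.sha256(left + right).digest()

def compress(shard_proofs):
    level = list(shard_proofs)
    while len(level) > 1:
        if len(level) % 2:            # odd count: carry the last proof up
            level.append(level[-1])
        level = [merge(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]                   # fixed-size root regardless of input count

small = compress([bytes([i]) * 32 for i in range(4)])
large = compress([bytes([i]) * 32 for i in range(64)])
print(len(small), len(large))  # both 32 bytes
```

<p>A 4-shard program and a 64-shard program yield roots of identical size, mirroring the fixed-length final proof.</p>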



<h3 class="wp-block-heading" id="c0a4"><strong>5. SP1 vs. RISC0: Similarities and Differences</strong></h3>



<p id="d4ef"><strong>5.1. Similarities</strong></p>



<p id="a57d"><strong>Fundamental principles.</strong> There is no fundamental difference in their underlying principles. As shown in Figures 1 and 4, both proof processes follow these steps:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1500/1*vVfxuKcJf9Y_OdRQLIpa6Q.png" alt=""/><figcaption class="wp-element-caption">Figure 4: RISC0 proof process (Source:&nbsp;<a href="https://dev.risczero.com/proof-system/" rel="noreferrer noopener" target="_blank">https://dev.risczero.com/proof-system/</a>)</figcaption></figure>



<p id="96c3"><strong>A</strong>. Split the program into multiple parts (Segments in RISC0, Shards in SP1).</p>



<p id="1892"><strong>B</strong>. Generate proofs for each part using STARK.</p>



<p id="25a7"><strong>C</strong>. Merge the proofs recursively via STARK-based recursion proving.</p>



<p id="7b1b"><strong>D</strong>. Convert the final recursive proof into a Groth16-compatible proof for verification.</p>



<p id="648f">Both proof systems operate over the same finite field structure: the BabyBear base field with a degree-4 extension field.</p>



<p id="75df"><strong>5.2. Differences</strong></p>



<p id="e9b2"><strong>Implementation transparency.</strong> SP1: fully open-source CPU implementation (the GPU implementation is closed-source); the code is modular and easier to audit. RISC0: critical components (e.g., trace generation, polynomial checks) are auto-generated via Zirgen, making the code harder to understand or modify.</p>



<p id="b2c2"><strong>Hardware acceleration.</strong> SP1: optimized for AVX256/512 and CUDA (closed-source GPU code). RISC0: no SIMD support, but supports Metal (Apple GPUs) and CUDA.</p>



<p id="05cb"><strong>Precompile support.</strong> SP1: allows custom precompiles (e.g., elliptic curve operations) by manually designing trace tables and constraints, reducing trace length and improving performance. RISC0: precompiles are auto-generated via Zirgen, limiting user flexibility to add custom circuits.</p>



<p id="8ca4"><strong>Proof generation dependencies.</strong> SP1: relies on the open-source Plonky3 library for commitments, FRI, and other proof components. RISC0: implements all components in-house.</p>



<h3 class="wp-block-heading" id="ba71"><strong>6. Proving Taiko Block Performance</strong></h3>



<p id="801b">To evaluate SP1’s performance in proof generation, we tested a real-world use case: generating a proof for a Taiko block using SP1 and verifying its correctness locally.</p>



<p id="a39f">We measured the runtime, CPU usage, and memory consumption. The results are as follows.</p>



<p id="95f4"><strong>Test Configuration</strong>: Hardware: AMD Ryzen 9 9950X 16-Core Processor, 96 GB RAM. Block Details: Gas limit of 5,944,801 (5.9 million), total cycles counted during proof generation: 488,808,529 (488 million).</p>



<p id="2761"><strong>Results</strong>: Total Proof Generation Time: ~1.5 hours. Shard Proofs Phase: ~1 hour, with memory peaking at 50 GB during the final Shard Proof generation. Recursion Proof Phase: ~0.5 hours, primarily spent merging proofs. CPU and Memory Usage: The figure below shows the CPU and memory utilization during the proof generation process.</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*sZeIHnCPzNQfug9iYpwIbw.png" alt=""/><figcaption class="wp-element-caption">Figure 5: CPU/Memory usage during SP1 proof generation for a Taiko block</figcaption></figure>



]]></content:encoded>
					
					<wfw:commentRss>https://medium.com/@CFrontier_Labs/introduction-to-sp1-zkvm-92555f50f38e/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Advanced ZK Hardware Acceleration: The Technology Behind Aleo ASIC Miner</title>
		<link>https://medium.com/@CFrontier_Labs/advanced-zk-hardware-acceleration-the-technology-behind-aleo-asic-miner-0fbffc42c4f6</link>
					<comments>https://medium.com/@CFrontier_Labs/advanced-zk-hardware-acceleration-the-technology-behind-aleo-asic-miner-0fbffc42c4f6#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Tue, 28 Jan 2025 02:40:19 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://192.168.110.200:8777/?p=457</guid>

					<description><![CDATA[1. Introduction A high-performance ASIC mining machine for the Aleo project has recently entered the market — the GoldShell AE [&#8230;]]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">1. Introduction</h2>



<p id="5a95">A high-performance ASIC mining machine for the Aleo project has recently entered the market — the GoldShell AE BOX small mining machine. GoldShell is a brand from Intchains Group. As a technology partner of Intchains, we have witnessed their long-standing support for technological innovation in privacy computing and AI. Zero-knowledge Proof (ZKP) technology has attracted widespread attention, requiring ongoing breakthroughs in both hardware and software — especially in performance acceleration.</p>



<p id="b103">This article explores the technology behind this ZK ASIC miner and the technology roadmap for future ZKVM + ASIC solutions.</p>



<h2 class="wp-block-heading" id="5d5f">2. Technical Innovation in Aleo Mainnet</h2>



<p id="001e">As we introduced in this article:&nbsp;<a href="https://medium.com/@CFrontier_Labs/the-journey-toward-aleos-universal-zkvm-1df9114fbd86">https://medium.com/@CFrontier_Labs/the-journey-toward-aleos-universal-zkvm-1df9114fbd86</a>, the prover of Aleo mainnet is a kind of POW prover, which is to compute the puzzle of circuit Synthesis.</p>



<p id="728c">Synthesis is a crucial part of ZKP proving. Hardware acceleration of computations like MSM was already achieved in Aleo’s previous testnets. The Aleo team designed the mainnet this way specifically to encourage the community to accelerate the synthesis component.</p>



<p id="8b61">According to the design, the prover needs to complete many rounds of R1CS circuit synthesis for a group of instruction sets of Aleo’s Varuna proving system. The witness is used to construct a Merkle tree, whose root is converted to a proof target and compared with the difficulty target. The faster the circuit synthesis, the higher the probability that the prover earns the block reward.</p>
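<p>A toy sketch of this prover loop, under loudly stated assumptions: the leaf derivation, the 8-leaf tree, and <code>TARGET</code> are illustrative stand-ins, not Aleo's actual puzzle or difficulty encoding.</p>

```python
import hashlib

TARGET = 1 << 252  # hypothetical difficulty target for illustration

def synthesize(nonce: int):
    # Stand-in for a round of R1CS circuit synthesis: in Aleo the witness
    # comes from synthesizing the puzzle circuit; here we derive 8 leaves.
    return [hashlib.sha256(f"{nonce}:{i}".encode()).digest() for i in range(8)]

def merkle_root(leaves):
    # Fold the witness leaves into a Merkle root (leaf count is a power of 2).
    level = leaves
    while len(level) > 1:
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

def meets_target(nonce: int) -> bool:
    # Interpret the root as the proof target and compare with the difficulty.
    proof_target = int.from_bytes(merkle_root(synthesize(nonce)), "big")
    return proof_target < TARGET

print(any(meets_target(n) for n in range(10_000)))
```

<p>The faster <code>synthesize</code> runs, the more nonces a prover can try per second, which is exactly the computation the ASIC accelerates.</p>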



<h2 class="wp-block-heading" id="cfb2">3. Advanced Performance and ASIC Architecture of AE Box</h2>



<p id="c712">Based on the AE BOX’s public specifications, here is its performance compared to the RTX 4090.</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*IivFNI7q691_ia8ODQbaHg.png" alt=""/><figcaption class="wp-element-caption">Performance comparison between AE BOX and RTX4090</figcaption></figure>



<p id="33ce">The ASIC miner demonstrates 20x higher performance than a single RTX 4090, and more significantly, achieves 26x better energy efficiency compared to the RTX 4090 GPU.</p>






<p id="d7eb">The AE Box ASIC achieves its exceptional performance and energy efficiency through a hardware architecture comprising these key modules:</p>



<ol class="wp-block-list">
<li>CHU: Curve hash unit</li>



<li>Decoder: Instruction decoder for processing Aleo operations</li>



<li>EXU: The instruction execution unit</li>



<li>Post processor: To generate the Merkle tree and get the proof target</li>
</ol>



<p id="d197">The ASIC implementation provides substantial improvements over general-purpose computing solutions:</p>



<ul class="wp-block-list">
<li>Significantly reduced power consumption per proof generation</li>



<li>Higher throughput for puzzle-proof generation</li>



<li>Specialized architecture optimized for Aleo’s puzzle computation</li>
</ul>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*voUIcmVoTfaoxZErTbqm7A.png" alt=""/><figcaption class="wp-element-caption">The architecture of AE BOX miner</figcaption></figure>



<h2 class="wp-block-heading" id="e21a">4. Technical Evolution and Value Proposition</h2>



<p id="31be">The development of ASIC technology for ZKP marks a crucial step in the blockchain industry. For Aleo specifically, hardware acceleration through ASICs delivers several key benefits:</p>



<ul class="wp-block-list">
<li>Enhanced network security through increased proving capacity</li>



<li>Improved energy efficiency compared to general-purpose hardware like GPUs</li>



<li>Wider community recognition and decentralization as more individual provers join the ecosystem, thanks to the compact system design</li>
</ul>



<p id="cd6b">Notably, according to Aleo’s proposal <a href="https://github.com/ProvableHQ/ARCs/discussions/77" rel="noreferrer noopener" target="_blank">ARC-0043: Extending the Puzzle to a Full SNARK</a>, the prover will eventually support full SNARK proof generation. Furthermore, the prover will also be able to generate Aleo’s client-side proofs via the <a href="https://medium.com/@CFrontier_Labs/the-journey-toward-aleos-universal-zkvm-1df9114fbd86">AVM prover</a>.</p>



<p id="0e3f">The existing ASIC miner’s technology will serve as a solid foundation for Aleo’s future evolution. Additionally, through continuous iteration of Aleo’s ZK algorithms, ASIC companies like Intchains have accumulated valuable intellectual property (IP) in:</p>



<ul class="wp-block-list">
<li>MSM (Multi-Scalar Multiplication) operations</li>



<li>NTT (Number Theoretic Transform) computations</li>



<li>Various Elliptic curve-based hash operators</li>



<li>……</li>
</ul>



<p id="9ecd">Looking ahead, ASIC miner vendors are expected to continue supporting Aleo’s infrastructure through technological upgrades.</p>



<p id="d75a">Most significantly, the adoption of ASIC technology in the Aleo project will benefit the entire ZK industry, including projects like Ethereum beam chain that require real-time ZK proving.</p>



<h2 class="wp-block-heading" id="329c">5. Future Vision</h2>



<p>It’s encouraging to see continuous progress in ZKP acceleration, particularly the ASIC adoption in the Aleo project. CF Labs is proud to have built a partnership with Intchains on ZK ASIC design. CF Labs stands at the forefront of hardware acceleration research in ZKP technology. We look forward to advancing the adoption of ZKVM + ASIC solutions to benefit the entire industry.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://medium.com/@CFrontier_Labs/advanced-zk-hardware-acceleration-the-technology-behind-aleo-asic-miner-0fbffc42c4f6/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>RISC0 algorithm analysis, segment proof — Part1</title>
		<link>https://medium.com/@CFrontier_Labs/risc0-algorithm-analysis-segment-proof-part1-0666216b654b</link>
					<comments>https://medium.com/@CFrontier_Labs/risc0-algorithm-analysis-segment-proof-part1-0666216b654b#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 17 Jan 2025 02:43:23 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://192.168.110.200:8777/?p=459</guid>

					<description><![CDATA[TL;DR RISC0 is a zkVM (zero knowledge Virtual Machine) based on the RISCV instruction set and STARK (Scalable Transparency Argument [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><strong>TL;DR</strong></p>



<p id="6c00">RISC0 is a zkVM (zero-knowledge Virtual Machine) based on the RISC-V instruction set and STARK (Scalable Transparent Argument of Knowledge). The RISC0 proof system consists of segment proving, recursion, and STARK-to-SNARK conversion. Since segment proving occupies the largest amount of memory and computation, we focus on analyzing the segment proving process, including RAP, DEEP-ALI, and FRI.</p>



<p id="c615"><strong>1. Purpose</strong><br>RISC0 is a zkVM (zero-knowledge Virtual Machine) based on the RISC-V instruction set and STARK (Scalable Transparent Argument of Knowledge). We will introduce the overall proof process of RISC0 and focus on analyzing segment proving, including its principle and the detailed proof process of a segment. Finally, we will compare the similarities, differences, and advantages of RISC0 and other zkVMs.</p>



<p id="7e6d"><strong>2. RISC0 proof process</strong></p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1500/1*89NwVd96snqXBRz_tquUdg.png" alt="figure 1 RISC0 proof process, from https://dev.risczero.com/proof-system/"/><figcaption class="wp-element-caption">Figure1 RISC0 proof process, from&nbsp;<a href="https://dev.risczero.com/proof-system/" rel="noreferrer noopener" target="_blank">https://dev.risczero.com/proof-system/</a></figcaption></figure>



<p id="5acf">As shown in the figure above, for a program to be proved (composed of numerous RISC-V instructions, compiled from a high-level language such as Rust) and its given input, the RISC0 zkVM will execute the program and generate a proof file to prove that the program was executed correctly. At the same time, the proof file cannot contain any sensitive information, such as data generated during the execution process or private inputs. To this end, the RISC0 zkVM will:<br><strong>A.</strong>&nbsp;Execute the program, divide it into multiple segments, and record the input and output of each segment;<br><strong>B.&nbsp;</strong>Prove each segment and obtain the Receipt of each segment. This step uses the STARK/FRI protocol;<br><strong>C.</strong>&nbsp;Merge the Receipts obtained in the above steps into one Receipt using recursive proving, so that proof files of the same length are generated when proving programs of different sizes;<br><strong>D.</strong>&nbsp;Convert the Receipt obtained by recursive proving into a SNARK proof through Groth16 to further reduce the proof size.<br>Segment proving occupies the largest amount of memory and computation. The following sections analyze the segment proof process in detail.</p>



<p id="6681"><strong>3. Algebraic structure</strong><br>Segment proving is a proof system based on the STARK protocol. To facilitate the description and understanding of segment proving, we introduce the algebraic structure required for the proof process, as shown in the figure below.</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:672/1*ttnto1lSXi31EM_Ry7Q_wA.png" alt=""/><figcaption class="wp-element-caption">Figure2 Algebraic structure in segment prove</figcaption></figure>



<p id="db7f">Among them:<br>F is the finite prime field Fp, that is, F = Fp, p = 2³¹ − 2¹⁷ + 1; Fp is the BabyBear prime field.<br>K is a finite extension field of F, K = F[X]/(X⁴ + 11), that is, the quotient ring of the polynomial ring F[X] by the ideal (X⁴ + 11). Since X⁴ + 11 is irreducible over F, the extension degree of K over F is 4, and one element of K is represented by 4 elements of F.<br>Why do we need the extension field K? To ensure security, the sample space from which challenge values are drawn must be large enough. F has only about 2³¹ elements, while K has about 2¹²⁴.</p>
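<p>Arithmetic in K can be sketched directly from these definitions. This is a minimal Python model following the article's modulus X⁴ + 11 and the BabyBear prime, not RISC0's optimized implementation.</p>

```python
P = 2**31 - 2**17 + 1  # BabyBear prime p, as defined above

def kmul(a, b):
    # Multiply two elements of K = F[X]/(X^4 + 11), each represented as
    # 4 coefficients (a0 + a1*X + a2*X^2 + a3*X^3) over Fp.
    c = [0] * 7
    for i in range(4):
        for j in range(4):
            c[i + j] = (c[i + j] + a[i] * b[j]) % P
    # Reduce modulo X^4 + 11, i.e. substitute X^4 = -11.
    for k in range(6, 3, -1):
        c[k - 4] = (c[k - 4] - 11 * c[k]) % P
    return tuple(c[:4])

one = (1, 0, 0, 0)
x = (0, 1, 0, 0)
x3 = (0, 0, 0, 1)
print(kmul(x, x3))  # X * X^3 = X^4 = -11 in K
```

<p>Note how a single K-multiplication costs 16 Fp-multiplications plus the reduction, which is why one element of K is said to "require 4 elements of F to represent".</p>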



<p id="df11">D0 is a cyclic subgroup of the multiplicative group F\{0} with |D0| elements, e.g., 2²⁰.<br>H is a subgroup of D0 with |H| elements, e.g., 2¹⁸. Why do we need the cyclic subgroup H? Because interpolating and evaluating a univariate high-degree polynomial f(x) uses NTT/iNTT for efficiency, and NTT/iNTT requires x to range over a cyclic subgroup.<br>H is called the trace domain and is central in STARK, because most polynomial computations and representations are performed on H.</p>



<p id="805e">wD0 is a coset of D0, where w is an element of F not in D0, e.g., wD0 = {3*d | d ∈ D0}; it is called the commitment domain or evaluation domain. Why is the coset wD0 needed? w is introduced to achieve zero knowledge. At the same time, polynomial evaluation on a coset of a cyclic group only requires a simple transformation of the polynomial coefficients, so NTT/iNTT can still be used.<br>When performing segment proofs, polynomials are typically computed or represented on the trace domain H, and polynomial commitments are made on the commitment domain wD0.</p>



<p id="1982"><strong>4. Segment prove</strong><br>The purpose of segment proving is to prove that the execution of a sequential instruction stream is correct. This problem is converted into proving that the intermediate execution data (the trace) satisfies a system of polynomial constraint equations, which is in turn converted into proving that the degree of a certain polynomial is relatively low (the low-degree test problem). Finally, the FRI protocol is used to prove that the degree of that polynomial is low. Segment proving consists of the set-up phase, RAP, DEEP-ALI, and FRI, as shown in the following figure:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:678/1*Va0RqIyYfw_17Wal7q2kUg.png" alt=""/><figcaption class="wp-element-caption">Figure3 segment prove</figcaption></figure>



<p id="b0da">The&nbsp;<strong>set-up phase</strong>&nbsp;configures some proof parameters, such as the number of queries in the query phase of the FRI (Fast Reed-Solomon Interactive oracle proof of proximity) protocol, the maximum length of the trace, etc. At the same time, it will also generate the constraint polynomial between the traces (given in the form of source code, this part of the source code generates the function value of the constraint polynomial on the trace). It is worth noting that for a proven program, the set-up phase only needs to be executed once, and this process will not be discussed later.</p>



<p id="9385"><strong>RAP (Randomized Algebraic intermediate representation with Preprocessing)</strong>&nbsp;will execute the segment and record the intermediate data during the execution process (such as register values, memory access, etc.), with the purpose of calculating the trace polys, including ctrl &amp; data &amp; accumulate polys and commitments, which will be described in detail later.</p>



<p id="bd5b"><strong>DEEP-ALI (Domain Extending for Eliminating Pretenders — Algebraic Linking IOP)</strong>&nbsp;will calculate the check polynomial and fri polynomial. Check polynomial refers to the result obtained by substituting trace polynomials into the corresponding constraint polynomials and dividing them by the vanish polynomial. This is a key step in STARK, because proving that the execution process of a program is correct is equivalent to proving that the check polynomial is a polynomial and the degree of the polynomial is relatively low. In order to ensure that the calculation process of the check poly is correct, deep polynomials need to be calculated, and finally all deep polynomials are merged into one fri polynomial.</p>



<p id="5d31"><strong>FRI (Fast Reed-Solomon Interactive oracle proof of proximity)</strong>&nbsp;aims to prove that the degree of the fri polynomial is less than or equal to the trace domain size. Below, we will introduce its process in detail.</p>



<p id="815e"><strong>5. RAP</strong><br>The purpose of RAP is to calculate the control and data polynomial groups, which represent the values of the data registers and control registers during segment execution. In addition, to prove the correctness of memory access behavior and perform range checks, a new set of polynomials must be generated: the accumulate polynomial group, used for the permutation argument and the lookup argument.<br>In addition, a Merkle tree is used to compute the polynomial commitment values of the control, data, and accumulate polynomial groups respectively.</p>






<p id="4584">As shown in Figure 4, RAP consists of the following steps:<br><strong>5.1</strong>&nbsp;preflight/execute: sequentially execute the instructions in the segment and record the register address and value accessed by each instruction as raw_trace; note that the data type is uint32.<br><strong>5.2</strong>&nbsp;generate_witness: convert the data in raw_trace into elements of F, calculate the values of all registers in each cycle according to raw_trace, and thus obtain a 2D matrix. Each row represents all register values in a given cycle, and each column represents the value of one control or data register across all cycles. Each column is regarded as the value table of a control or data polynomial on the trace domain H, so the number of cycles must be less than or equal to |H|, the number of elements in H. If the number of cycles is less than |H|, the missing rows of the matrix are filled with 0.</p>
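<p>The matrix construction in step 5.2 can be sketched as follows. Toy sizes: <code>H_SIZE</code> and <code>NUM_REGS</code> are illustrative assumptions, and the real implementation handles far more columns.</p>

```python
H_SIZE = 8      # |H|, the trace domain size (toy value; 2^18 in the text)
NUM_REGS = 4    # number of control/data columns (toy value)

def generate_witness(raw_trace):
    # raw_trace: one dict per cycle, mapping register index -> uint32 value.
    # Rows: one per cycle; columns: one register's value across all cycles.
    rows = [[cycle.get(r, 0) for r in range(NUM_REGS)] for cycle in raw_trace]
    # Zero-pad the missing cycles so every column has exactly |H| entries.
    rows += [[0] * NUM_REGS for _ in range(H_SIZE - len(rows))]
    # Each column is the value table of one trace polynomial on H.
    return [list(col) for col in zip(*rows)]

cols = generate_witness([{0: 7, 1: 1}, {0: 8}, {0: 9, 3: 2}])
print(len(cols), len(cols[0]))  # NUM_REGS columns, each of length |H|
```
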



<p id="fe9f"><strong>5.3</strong>&nbsp;For the control and data polynomial groups (each polynomial represented by its values on H), calculate the commitment value of each polynomial:<br><strong>5.3.1</strong>&nbsp;iNTT computes the coefficients of each polynomial; note that each polynomial has |H| coefficients;<br><strong>5.3.2</strong>&nbsp;Multiply the i-th coefficient of each polynomial by 3^i; e.g., if the coefficients of the n-th polynomial are cn_0, …, cn_i, …, then set cn_i = cn_i*3^i;<br><strong>5.3.3</strong>&nbsp;NTT computes the values of each polynomial on the commitment domain 3*D0;<br><strong>5.3.4&nbsp;</strong>Build a Merkle tree over the values of the control and data polynomial groups on the commitment domain.<br>We thus obtain 2 Merkle trees, and the root node of each Merkle tree is the commitment value.</p>
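<p>Steps 5.3.2–5.3.4 can be sketched as a minimal Python model. SHA-256 stands in for the actual hash, the coset-shift identity is checked by direct evaluation instead of an NTT, and the value count is assumed to be a power of two.</p>

```python
import hashlib

P = 2**31 - 2**17 + 1  # BabyBear prime
W = 3                  # coset shift w, as in the commitment domain 3*D0

def shift_to_coset(coeffs):
    # Step 5.3.2: scale coefficient i by w^i. Evaluating the shifted
    # polynomial at d then equals evaluating the original at w*d, so a
    # plain NTT over D0 yields the values on the coset 3*D0.
    return [c * pow(W, i, P) % P for i, c in enumerate(coeffs)]

def evaluate(coeffs, x):
    acc = 0
    for c in reversed(coeffs):  # Horner's rule
        acc = (acc * x + c) % P
    return acc

def merkle_root(values):
    # Step 5.3.4: commit to a value table with a Merkle tree.
    level = [hashlib.sha256(v.to_bytes(8, "big")).digest() for v in values]
    while len(level) > 1:
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

f = [5, 1, 2, 3]  # toy polynomial 5 + X + 2X^2 + 3X^3
d = 1234567
print(evaluate(shift_to_coset(f), d) == evaluate(f, W * d % P))  # True
```

<p>The identity holds because Σ cᵢ·3ⁱ·dⁱ = Σ cᵢ·(3d)ⁱ, which is exactly why the coset shift reduces to a per-coefficient scaling.</p>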



<p id="47e6"><strong>5.4</strong>&nbsp;For the control and data polynomials (expressed by their values on H), to ensure the legality of memory accesses and perform range checks, the polynomials required by the permutation and lookup constraints, namely the accumulation polynomials, must be introduced. To this end:<br><strong>5.4.1</strong>&nbsp;Generate a random challenge value in the extension field K for each permutation and lookup constraint, based on the commitment values of the existing control and data polynomial groups;<br><strong>5.4.2</strong>&nbsp;Calculate the polynomial corresponding to each permutation and lookup constraint (expressed by values on H); these are collectively called the accumulation polynomial group;<br><strong>5.4.3</strong>&nbsp;Similar to 5.3, calculate the commitment value of the accumulation polynomial group.</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1500/1*i8RUPIrcSH-9vw8NnMyg-Q.png" alt=""/></figure>



<p id="9bf8"><strong>6. DEEP-ALI</strong><br>For convenience, we call all polynomials in the control, data, and accumulation polynomial groups trace polynomials, and the values of all trace polynomials on the trace domain H the trace.<br>Our goal is to prove the correctness of the execution of the instructions in the segment. This problem reduces to a set of constraints (expressed as a system of multivariate polynomial equations) that must hold between certain elements of the trace, which holds if and only if the vanish polynomial of the trace domain H divides each constraint polynomial. DEEP-ALI takes the quotient of the constraint polynomial by the vanish polynomial to obtain the check polynomial and calculates its commitment value. In addition, to ensure that the check polynomial is computed correctly, the DEEP polynomials are calculated, and finally all DEEP polynomials are combined into the FRI polynomial.</p>



<p id="a928">The definition of the check polynomial is as follows:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*BVsnbfOgevyZK8e1UWDXxQ.png" alt=""/><figcaption class="wp-element-caption">check poly/validity poly definition</figcaption></figure>



<p id="0550">Among them:<br>α_constraints is a random challenge value in the extension field K;<br>Ci is a constraint multivariate polynomial;<br>P0(X), … are trace polynomials;<br>Z(X) is the vanish polynomial of the trace domain, Z(X) = X^|H| − 1, where |H| is the number of elements in the trace domain H.<br>Note that f_validity is called check_poly in the RISC0 code, so in this article we call it the check polynomial.</p>
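<p>The divisibility at the heart of this definition can be demonstrated on a toy example: over a small prime field (17 here, not BabyBear) with a trace domain of 4th roots of unity, a boolean constraint C(X) = P(X)² − P(X) that holds on H is exactly divisible by the vanish polynomial Z(X) = X^|H| − 1, so the quotient (the check polynomial) has zero remainder.</p>

```python
Q = 17              # toy prime; 4 divides Q - 1, so 4th roots of unity exist
H = [1, 4, 16, 13]  # powers of g = 4: the trace domain of size 4

def inv(a):
    return pow(a, Q - 2, Q)

def polymul(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % Q
    return c

def interpolate(xs, ys):
    # Lagrange interpolation over F_Q; coefficients low-to-high.
    n = len(xs)
    coeffs = [0] * n
    for i in range(n):
        num, den = [1], 1
        for j in range(n):
            if j == i:
                continue
            num = polymul(num, [(-xs[j]) % Q, 1])
            den = den * (xs[i] - xs[j]) % Q
        s = ys[i] * inv(den) % Q
        for k, ck in enumerate(num):
            coeffs[k] = (coeffs[k] + s * ck) % Q
    return coeffs

def polydivmod(a, b):
    # Long division of a by b over F_Q: returns (quotient, remainder).
    a = a[:]
    q = [0] * (len(a) - len(b) + 1)
    lead_inv = inv(b[-1])
    for i in range(len(q) - 1, -1, -1):
        q[i] = a[i + len(b) - 1] * lead_inv % Q
        for j, bj in enumerate(b):
            a[i + j] = (a[i + j] - q[i] * bj) % Q
    return q, a[:len(b) - 1]

trace = [1, 0, 1, 1]                 # boolean trace values on H
p = interpolate(H, trace)            # trace polynomial P(X)
c = polymul(p, p)                    # constraint C(X) = P(X)^2 - P(X)
for k, pk in enumerate(p):
    c[k] = (c[k] - pk) % Q
z = [Q - 1, 0, 0, 0, 1]              # vanish polynomial Z(X) = X^4 - 1
check, rem = polydivmod(c, z)
print(rem)                           # all zeros: Z(X) divides C(X) exactly
```

<p>If any trace value were not boolean, C would not vanish on all of H and the remainder would be nonzero, which is what a dishonest prover cannot hide.</p>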



<p id="8ed9">As shown in Figure 5, DEEP-ALI consists of the following steps:<br><strong>6.1</strong>&nbsp;eval_check: calculate the values of the check polynomial on the evaluation domain wD0. Because the degree of the check polynomial is up to 4*|H|, its values must be computed on the evaluation domain wD0 instead of the trace domain H;<br><strong>6.2</strong>&nbsp;batch iNTT obtains the coefficients of the check polynomial; because the check polynomial is regarded as a polynomial of degree 4*|H| over K, it can be regarded as 16 polynomials of degree |H| over F, giving a check polynomial group consisting of 16 polynomials;<br><strong>6.3</strong>&nbsp;Calculate the values of each polynomial in the check polynomial group on the commitment domain, and use these values to construct a Merkle tree to obtain the commitment value of the polynomial group;<br><strong>6.4</strong>&nbsp;Calculate the DEEP polynomials, defined as:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:990/1*7kgmy7-L1fUqDEabeNmIFw.png" alt=""/><figcaption class="wp-element-caption">deep poly definition</figcaption></figure>



<p id="b5a6">The purpose of the DEEP polynomials is to ensure that the check polynomials above were computed honestly.<br><strong>We accept that the check polynomials were computed honestly if, for a random value z drawn from the extension field K, both sides of the eval_check equation are equal (with some soundness error);<br>equivalently, the check polynomials were computed honestly if and only if the rational function is in fact a polynomial function</strong>.<br>Here Pi is a trace polynomial or check polynomial;<br>__Pi is the polynomial obtained by interpolating the values of Pi at the points needed to evaluate the constraint multivariate polynomials, such as xi = ωz, where ω is the generator of the trace domain H.</p>
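<p>The "rational function is a polynomial" criterion can be illustrated with a toy DEEP quotient: dividing P(X) - v by (X - z) leaves a zero remainder exactly when the claimed value v equals P(z). The field and polynomial below are illustrative, not RISC0's.</p>

```python
P_MOD = 97  # toy prime standing in for the extension field (illustrative)

def eval_poly(coeffs, x):
    """Horner evaluation, coefficients low-to-high, mod P_MOD."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P_MOD
    return acc

def deep_quotient(coeffs, z, claimed):
    """Synthetic division of P(X) - claimed by (X - z).
    The remainder is zero iff claimed == P(z), i.e. iff the
    quotient is a polynomial rather than a rational function."""
    shifted = coeffs[:]
    shifted[0] = (shifted[0] - claimed) % P_MOD
    q, r = [], 0
    for c in reversed(shifted):
        q.append(r)
        r = (r * z + c) % P_MOD
    return list(reversed(q[1:])), r

poly = [5, 0, 3, 1]            # P(X) = X^3 + 3X^2 + 5 (toy polynomial)
z = 11                         # random out-of-domain point
_, rem_honest = deep_quotient(poly, z, eval_poly(poly, z))
_, rem_cheat = deep_quotient(poly, z, (eval_poly(poly, z) + 1) % P_MOD)
assert rem_honest == 0 and rem_cheat != 0
```

A dishonest prover who commits to a wrong value at z is thus forced into a non-polynomial quotient, which the subsequent low-degree test will catch.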



<p id="a8eb"><strong>6.5</strong>&nbsp;Combine all DEEP polynomials to obtain the FRI polynomial.</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1500/1*bWOrWtrXYw_cjVlS8Ni-mg.png" alt=""/></figure>



<p id="9c89"><strong>7. FRI</strong><br>The purpose of FRI is to prove that the degree of the FRI polynomial is at most the size of the trace domain H. If deg(fri(X)) ≤ |H|, then the check polynomial is a polynomial of degree ≤ 4*|H| and was computed honestly. It follows that the elements of the trace satisfy the system of constraint multivariate polynomial equations, and hence the instructions in the segment were executed honestly.</p>



<p id="1246">FRI includes a commit phase and a query phase. As shown in Figure 6, it consists of the following steps:<br><strong>7.1</strong>&nbsp;Commit phase: the degree of the FRI polynomial is reduced by a factor of 16 per round down to 256, so there are log(16, deg(fri)) rounds, and the polynomial produced in each round is committed. Each round performs the following computations:<br><strong>7.1.1</strong>&nbsp;The FRI polynomial is a polynomial of degree |H| over the extension field K, so it can be regarded as four polynomials of degree |H| over F. The values of these four polynomials on the commitment domain are computed through batched NTT;<br><strong>7.1.2</strong>&nbsp;Using the values of the FRI polynomial on the commitment domain, a Merkle tree is built to obtain the commitment;<br><strong>7.1.3</strong>&nbsp;Split the FRI polynomial into 16 sub-polynomials, then apply a random linear combination to obtain the FRI polynomial of the next round.</p>
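<p>The folding in step 7.1.3 can be sketched in the coefficient view: split f so that f(x) = Σ_j x^j · f_j(x^16), then combine the 16 sub-polynomials with powers of a random challenge. The field, challenge, and polynomial below are illustrative only.</p>

```python
P_MOD = 97  # toy prime field (illustrative)
FOLD = 16   # RISC0 folds by a factor of 16 per round

def fri_fold(coeffs, beta):
    """One FRI commit-phase round (coefficient view): split f into 16
    sub-polynomials f_j with f(x) = sum_j x^j * f_j(x^16), then take
    the random linear combination sum_j beta^j * f_j for the next round."""
    subs = [coeffs[j::FOLD] for j in range(FOLD)]  # stride-16 split
    width = max(len(s) for s in subs)
    folded = [0] * width
    for j, sub in enumerate(subs):
        w = pow(beta, j, P_MOD)
        for i, c in enumerate(sub):
            folded[i] = (folded[i] + w * c) % P_MOD
    return folded

f = list(range(1, 257))          # toy degree-255 polynomial
g = fri_fold(f, beta=5)
assert len(g) == 16              # degree drops by the folding factor
assert len(fri_fold(g, 7)) == 1  # one more round reaches a constant
```

Each round therefore cuts the degree by 16, which is where the log(16, deg(fri)) round count comes from.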



<p id="af74"><strong>7.2</strong>&nbsp;Query phase: for all committed polynomials, including the control polynomials, data polynomials, accumulation polynomials, check polynomials, and the log(16, deg(fri)) rounds of FRI polynomials, produce opening proofs.<br><strong>7.2.1</strong>&nbsp;Select a random challenge value g0 in the commitment domain;<br><strong>7.2.2</strong>&nbsp;Construct the elements of the evaluation set, i.e. g0, g0^16, …, g0^(16*round_n);<br><strong>7.2.3</strong>&nbsp;Produce opening proofs for the control, data, and check polynomials at g0, and for the FRI polynomial of round i at g0^(16*round_i).</p>
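<p>The opening proofs above are standard Merkle openings against the commitments built in steps 6.3 and 7.1.2. A minimal sketch (SHA-256 over toy evaluation values; the hash and leaf layout are assumptions for the example, not RISC0's actual choices):</p>

```python
import hashlib

def h(*parts):
    m = hashlib.sha256()
    for p in parts:
        m.update(p)
    return m.digest()

def commit(leaves):
    """Build a Merkle tree (power-of-two leaf count); returns all levels."""
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def open_proof(levels, idx):
    """Authentication path (sibling hashes) for the leaf at idx."""
    path = []
    for level in levels[:-1]:
        path.append(level[idx ^ 1])
        idx //= 2
    return path

def verify(root, leaf, idx, path):
    node = h(leaf)
    for sib in path:
        node = h(node, sib) if idx % 2 == 0 else h(sib, node)
        idx //= 2
    return node == root

evals = [str(v).encode() for v in [3, 1, 4, 1, 5, 9, 2, 6]]  # toy evaluations
tree = commit(evals)
root = tree[-1][0]
proof = open_proof(tree, 5)
assert verify(root, evals[5], 5, proof)       # honest opening passes
assert not verify(root, b"99", 5, proof)      # forged value fails
```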



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1500/1*2QS7l3aQZW3P6iqCSncEHQ.png" alt=""/><figcaption class="wp-element-caption">Figure 6 FRI</figcaption></figure>



<p id="0d89"><strong>8. Comparison with other zkVMs</strong></p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1500/1*K1BPvvH9_JLBBhHIBXMxWw.png" alt=""/></figure>



]]></content:encoded>
					
					<wfw:commentRss>https://medium.com/@CFrontier_Labs/risc0-algorithm-analysis-segment-proof-part1-0666216b654b/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The Journey Toward Aleo’s Universal ZKVM</title>
		<link>https://medium.com/@CFrontier_Labs/the-journey-toward-aleos-universal-zkvm-1df9114fbd86</link>
					<comments>https://medium.com/@CFrontier_Labs/the-journey-toward-aleos-universal-zkvm-1df9114fbd86#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Tue, 05 Nov 2024 02:45:24 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://192.168.110.200:8777/?p=461</guid>

					<description><![CDATA[0. Background Aleo is a Layer-1 blockchain focused on enabling privacy-preserving, scalable decentralized applications using zero-knowledge proofs (ZKP). Since 2019, [&#8230;]]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">0. Background</h2>



<p id="2f49">Aleo is a Layer-1 blockchain focused on enabling privacy-preserving, scalable decentralized applications using zero-knowledge proofs (ZKP). Since 2019, Aleo has been building a customized ZKVM solution with a unique programming language, Leo, powered by the Aleo Virtual Machine (AVM). Its consensus leverages Proof-of-Work to promote decentralization. In its early testnet, Aleo combined POW with zero-knowledge proofs to construct a puzzle for miners. In its mainnet, it encourages acceleration of trace generation for Leo instructions.</p>



<p id="a314">Recently, Aleo published a proposal titled&nbsp;<a href="https://github.com/AleoNet/ARCs/discussions/77" rel="noreferrer noopener" target="_blank"><strong>ARC-0043: Extending the Puzzle to a Full SNARK</strong></a>. This proposal aims to have miners generate useful SNARK proofs while reducing the burden on validators to verify the puzzle.</p>



<p id="0b12">Let’s dive into more detail.</p>



<h2 class="wp-block-heading" id="ca66">1. Architecture of Aleo</h2>



<p id="6b87">As shown in the picture below, there are two types of provers in the solution:</p>



<ol class="wp-block-list">
<li>AVM prover. It uses the Varuna proving system, based on AHP for R1CS and a univariate sumcheck protocol (the “Third Party Prover” part on the left side of the picture).</li>



<li>POW prover. It uses a POW mechanism to incentivize provers to compute a puzzle (the “Prover” part on the right side of the picture).</li>
</ol>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*7TLRwj8mMwo-NJGCkzTbEg.png" alt=""/><figcaption class="wp-element-caption">Figure 1. Aleo core architecture, from link:&nbsp;<a href="https://developer.aleo.org/concepts/network/overview/core_architecture/" rel="noreferrer noopener" target="_blank">https://developer.aleo.org/concepts/network/overview/core_architecture/</a></figcaption></figure>



<p id="6e78"><strong>These two provers are currently distinct.</strong></p>



<ul class="wp-block-list">
<li>The AVM prover is used on the user side to generate a zkSNARK proof, ensuring user and application privacy.</li>



<li>The POW prover in the existing mainnet stage doesn’t generate a useful SNARK or zkSNARK proof. Instead, it performs computations involving circuit generation for a set of instructions and some Merkle tree hash computations. As a form of POW, randomness is included to have miners demonstrate proof of work.</li>
</ul>



<h2 class="wp-block-heading" id="123d">2. History of Aleo’s POW prover</h2>



<p id="1013">Here’s a picture illustrating how Aleo has incentivized provers throughout its history.</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*B9tU9Fjn93H3WchmtUK0kQ.png" alt=""/><figcaption class="wp-element-caption">Figure 2. Aleo incentive program, from link:&nbsp;<a href="https://aleo.org/post/community-road-to-mainnet/" rel="noreferrer noopener" target="_blank">https://aleo.org/post/community-road-to-mainnet/</a></figcaption></figure>



<p id="46cb"><strong>At each stage, Aleo has been constructing various algorithmic challenges for POW provers to accelerate, including KZG (Testnet 3) and synthesis (Current Mainnet) operations. Aleo also hosts multiple rounds of ZPrize to reward the community for accelerating ZK proving.</strong></p>



<p id="563e">As described in the&nbsp;<a href="https://github.com/AleoNet/ARCs/discussions/77" rel="noreferrer noopener" target="_blank">ARC-0043 proposal</a>, it’s time for POW provers to generate a full SNARK.</p>



<h2 class="wp-block-heading" id="c75a">3. The journey toward universal ZKVM</h2>



<p id="7378">In the&nbsp;<a href="https://github.com/AleoNet/ARCs/discussions/77" rel="noreferrer noopener" target="_blank">ARC-0043 proposal</a>, Aleo plans to use POW provers to generate SNARK proofs, which are then verified by validators. This approach could eliminate the performance bottleneck in validator verification, as SNARK verification is considerably faster.</p>



<p id="de90">If this becomes reality, the community’s POW provers will have the capacity to generate useful SNARK proofs. Note that these don’t necessarily have to be zkSNARK proofs, but SNARK proofs. This development could make&nbsp;<strong>Aleo the first project to leverage both major uses of zero-knowledge proofs: privacy preservation (via AVM prover) and computation scaling (via POW prover).</strong></p>



<p id="0775">What’s next? Universal AVM prover.</p>



<p id="7fdc">As mentioned by Aleo in a&nbsp;<a href="https://aleo.org/post/prover-incentives-2-retrospective/" rel="noreferrer noopener" target="_blank">previous article</a>, “Aleo’s innovative approach allows users to outsource proof generation to third-party proving services equipped with advanced computational resources.” It would be advantageous if POW prover machines could run the AVM prover, helping users generate ZK proofs or outsource the proving work.&nbsp;<strong>This means the AVM prover and the POW prover can be unified at the machine level, creating a universal AVM prover. This development would provide Aleo’s application side or user side with low-cost, high-performance ZKP proving, further advancing the Aleo project and realizing the vision to “build cryptographically secure dApps at scale”.</strong></p>






<p id="e731">Further potential? Universal ZKVM prover.</p>



<p id="4f86">As Aleo potentially gains a dominant position with thousands of miners generating AVM ZK proofs at lightning-fast speeds, could it evolve to support various ZKVM proof generations? <strong>This evolution could lead to a universal ZKVM prover — not just for Aleo itself, but for other ZKP projects as well. Such a prover could encompass both privacy preservation and computation scaling use cases.</strong></p>



<p id="2df2">It will be challenging for Aleo to upgrade its algorithms to support different instruction sets and backends. Additionally, continuation technology and proof recursion technology may need to be added. However, if successful, Aleo will create immense value for the mass adoption of zero-knowledge proof technology.</p>



<p id="e9ff">First, by supporting various frontends — not just Leo — the developers’ user experience will improve significantly.</p>



<p id="6f85">Second, a universal ZKVM prover can benefit the entire ZKP community by providing common ZKP computational power for all ZKP projects.</p>



<p id="aa3b">Moreover, developing an ASIC ZKP machine is costly. A universal ZKVM prover can save substantial development expenses.</p>



<p id="82ab">At the hardware level, we can consider designing a system compatible with various ZKVM protocols. Alternatively, the hardware could support or be upgraded with minimal changes to accommodate new ZKVMs. It’s worth noting that not only Aleo but other ZKVM projects also have the potential to build such universal platforms.</p>



<p id="77b0">Regardless, the first step will be to realize&nbsp;<a href="https://github.com/AleoNet/ARCs/discussions/77" rel="noreferrer noopener" target="_blank">ARC-0043</a>. Now, let’s delve into more details about AVM.</p>



<h2 class="wp-block-heading" id="6418">4. Introduction to Aleo’s Varuna-based AVM</h2>



<p id="37ff">The zkSNARK proof system in Aleo is Varuna, primarily based on AHP (Algebraic Holographic Proofs) and PCS (Polynomial Commitment Scheme), as shown in the following diagram:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*3NrjlmtbgX6XD7aMYl05ig.png" alt=""/><figcaption class="wp-element-caption">Figure 3. Varuna architecture.</figcaption></figure>



<p id="f18f">The time-consuming parts are mainly NTT, MSM, and Synthesis, which have already achieved good hardware acceleration (such as ASIC, GPU, etc.). The purpose of Aleo&nbsp;<a href="https://github.com/AleoNet/ARCs/discussions/77" rel="noreferrer noopener" target="_blank">ARC-0043</a>&nbsp;is to transform the Puzzle in the diagram into a SNARK system, significantly reducing the validation time for validators and thereby increasing block generation speed.</p>
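<p>To make the NTT kernel concrete, here is a textbook radix-2 NTT over the toy field GF(17); Aleo’s actual fields, batch sizes, and hardware implementations are of course far larger, so treat this purely as an illustration of the operation being accelerated.</p>

```python
P_MOD = 17  # toy NTT-friendly prime: 17 has 8th roots of unity

def ntt(a, root):
    """Recursive radix-2 number-theoretic transform over GF(P_MOD):
    evaluates the polynomial a (low-to-high coeffs) at powers of root."""
    n = len(a)
    if n == 1:
        return a[:]
    # even/odd halves use root^2, a primitive (n/2)-th root of unity
    even = ntt(a[0::2], root * root % P_MOD)
    odd = ntt(a[1::2], root * root % P_MOD)
    out = [0] * n
    w = 1
    for i in range(n // 2):
        t = w * odd[i] % P_MOD
        out[i] = (even[i] + t) % P_MOD
        out[i + n // 2] = (even[i] - t) % P_MOD
        w = w * root % P_MOD
    return out

# 2 is a primitive 8th root of unity mod 17 (2^8 = 256 ≡ 1 mod 17)
coeffs = [3, 1, 4, 1, 5, 9, 2, 6]
evals = ntt(coeffs, 2)
# the NTT output matches direct evaluation at root^i
assert evals == [sum(c * pow(2, i * k, 17) for k, c in enumerate(coeffs)) % 17
                 for i in range(8)]
```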



<p id="8bcf"><strong>To turn the Puzzle into a SNARK, it’s estimated that the POW Prover would need to perform both synthesis operations and SNARK proof generation.</strong>&nbsp;Therefore, the Puzzle prover would need to:</p>



<ol class="wp-block-list">
<li>Support more Aleo instruction types;</li>



<li>Implement AHP and PCS to generate SNARK proofs;</li>



<li>Implement modules such as NTT and MSM, simplified from algorithms in the AVM prover.</li>
</ol>
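<p>As a rough illustration of the MSM module in point 3, the sketch below implements Pippenger-style windowed bucketing. To stay self-contained it uses the additive group of integers mod Q in place of elliptic-curve points (so a scalar multiple is just s·p mod Q); a real prover would substitute curve point addition into the same window/bucket structure. The window size and inputs are arbitrary.</p>

```python
# Pippenger-style multi-scalar multiplication over a toy additive group.
Q = 2_147_483_647
WINDOW = 4          # bits per window (illustrative choice)

def msm(scalars, points, bits=16):
    """Compute sum(s_i * p_i) mod Q for scalars of at most `bits` bits."""
    mask = (1 << WINDOW) - 1
    total = 0
    for shift in range(bits - WINDOW, -1, -WINDOW):
        for _ in range(WINDOW):               # one "doubling" per bit
            total = total * 2 % Q
        buckets = [0] * (1 << WINDOW)
        for s, p in zip(scalars, points):
            idx = (s >> shift) & mask         # this window's digit of s
            buckets[idx] = (buckets[idx] + p) % Q
        # running-sum trick: sum_j j * buckets[j] using additions only
        running = acc = 0
        for b in reversed(buckets[1:]):
            running = (running + b) % Q
            acc = (acc + running) % Q
        total = (total + acc) % Q
    return total

scalars = [31337, 271, 65535, 4242]
points = [123456, 789, 31415926, 2718281]
assert msm(scalars, points) == sum(s * p for s, p in zip(scalars, points)) % Q
```

The bucket/running-sum structure is what makes MSM amenable to the GPU and ASIC acceleration discussed above: each window is a large batch of independent group additions.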



<h2 class="wp-block-heading" id="4909">5. Comparisons of ZKVMs</h2>



<p id="6b62">Aleo’s AVM was introduced several years ago and has now reached mainnet. Since then, other new ZKVMs have emerged. Let’s compare Varuna with some other ZKVMs:</p>



<figure class="wp-block-image"><img decoding="async" src="https://miro.medium.com/v2/resize:fit:1050/1*rXTQrIWaoxuVQBm5kxmOFQ.jpeg" alt=""/><figcaption class="wp-element-caption">Table 1. Comparison of some ZKVMs.</figcaption></figure>



<p id="b650"><strong>Through this comparison, we can see that Aleo’s Varuna has reached production and mainnet, though it doesn’t use a more general ISA and hasn’t yet supported recursion technology.</strong></p>



<p id="41e1">It’s possible that Aleo will continue to upgrade to further promote the mass adoption of zero-knowledge proof technology.</p>



<h2 class="wp-block-heading" id="fd3c">6. Summary</h2>



<p>We’ve analyzed the history of Aleo provers and its latest&nbsp;<a href="https://github.com/AleoNet/ARCs/discussions/77" rel="noreferrer noopener" target="_blank">ARC-0043</a>, and compared Aleo’s Varuna-based AVM with other ZKVMs. We observe that Aleo is on track to become a platform with abundant, high-performance, and decentralized ZKP provers.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://medium.com/@CFrontier_Labs/the-journey-toward-aleos-universal-zkvm-1df9114fbd86/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
