Since July 2024, Veridise has closely collaborated with RISC Zero, a company developing the RISC Zero zkVM — a zero-knowledge virtual machine that enables secure and private computation using the RISC-V architecture.
While we’ve completed multiple security audits for RISC Zero, our collaboration has gone well beyond audits alone.
In this blog post, we highlight how we’ve partnered to provide formal guarantees for their zkVM using Picus, Veridise’s automated ZK verification tool. We explain how this integration enables continuous, automated verification as their circuits evolve. Finally, we detail three critical vulnerabilities we identified and mitigated in the process.
Achieving mathematically provable ZK security
Our collaboration with RISC Zero was aimed at achieving a higher standard of ZK security than traditional audits.
We’ve been working with RISC Zero to help them become the first RISC-V zkVM with formal guarantees against one of the most common and critical classes of ZK vulnerabilities: underconstrained bugs.
Using our automated ZK verification tool, Picus, we’ve formally verified determinism across key components of their zkVM. This process specifically targets underconstrained circuits — responsible for nearly 97% of known ZK circuit vulnerabilities according to recent research.
From one-time audits to continuous verification
Traditional manual security audits provide valuable insights at a fixed point in time — but they often become outdated as codebases evolve. This can leave critical components exposed to emerging vulnerabilities introduced after the audit is complete.
By combining formal methods with automated tooling, we enable continuous verification of RISC Zero’s zkVM circuits. Instead of a one-time review, the integration with Picus evolves alongside their codebase — automatically detecting critical flaws as changes are introduced in their circuits. This ensures that the security guarantees remain intact throughout the development lifecycle.
With this approach, RISC Zero doesn’t have to choose between rapid iteration and high assurance. They get both: mathematically provable security, maintained continuously, without slowing down development.
Collaborating with RISC Zero
The project kicked off with a lengthy in-person meeting in Seattle. These initial discussions were productive in many ways; for example, they sped up the process of translating RISC Zero’s ZK circuit language into a form that could be used with Picus, Veridise’s ZK verification tool.
After kickoff, we held regular meetings with RISC Zero, typically meeting with their CEO, CTO, and security expert once a week. The RISC Zero team promptly followed up on any outstanding questions after the meetings. We also communicated frequently over a shared Slack channel.
RISC Zero’s modular design simplified circuit verification
RISC Zero’s ZK circuits were easy to work with and verify for correctness, thanks to their Zirgen DSL, which made both reading and extracting the logic simple — perfect for our ZK vulnerability detection tool, Picus.
Even more helpful was RISC Zero’s modular design philosophy. Rather than cramming all logic into one massive circuit, they had developed things in manageable pieces that we could analyze individually.
What we’ve verified so far
So far, our collaboration with RISC Zero has successfully confirmed determinism for their Keccak accelerator circuit, as well as a significant portion of the new RISC-V circuit for the upcoming R0VM 2.0 release. We’re now working together toward a full formal proof of the entire RISC-V circuit.
As RISC Zero writes in their announcement:
“With R0VM 2.0, ZK security isn’t just stronger, it’s provable.”
RISC Zero’s perspective: read their announcement for more details
To learn more about our collaboration with RISC Zero, read Jacob Weightman’s blog post “RISC Zero’s Path to The First Formally Verified RISC-V zkVM”.
Jacob describes how formal methods and automated tooling (Picus) from Veridise have enabled provable, continuous security guarantees for their core zkVM components. He also walks you through why underconstrained bugs are the most critical to prioritize, how Picus works in a nutshell, and provides a detailed description of the proof for the Keccak accelerator circuit.
Three vulnerabilities Veridise fixed in the RISC Zero zkVM
In the second half of this post, we’ll detail three critical vulnerabilities Veridise identified and helped mitigate during our security assessments.
Our security assessment covered various components of RISC Zero’s zkVM, and it was conducted over 96 person-weeks, with six security analysts reviewing the project over a 16-week period.
1) Component ExpandU32 is underconstrained (5.1.1)
The first vulnerability is an underconstrained-circuit vulnerability in the ExpandU32 component.
The component ExpandU32, shown below, takes as input a value x of type ValU32, along with a boolean variable signed indicating whether x is a signed value, and returns four bytes b0, b1, b2, and b3, where bi represents the i-th byte of x. A ValU32 is a tuple of Felts (low, high), where low is intended to capture the lower 16 bits and high the upper 16 bits.

The basic idea is that the value x is intended to represent a u32 value. However, since everything operates within the BabyBear field, a u32 cannot be directly represented as a single field element. To address this, the developers split the u32 value into two field elements: low and high. The low field element captures the lower 16 bits of the u32, while the high field element captures the upper 16 bits.
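To make the representation concrete, here is a minimal Rust sketch of the split; the names ValU32, split, low, and high mirror the description above but are our own illustration, not RISC Zero’s actual types.

```rust
// Minimal sketch of the ValU32 idea (illustrative only, not RISC Zero's code):
// a u32 is carried as a pair of 16-bit halves, since a full u32 does not
// always fit below the BabyBear prime.
const BABYBEAR_P: u64 = 2_013_265_921; // 2^31 - 2^27 + 1, smaller than u32::MAX

struct ValU32 {
    low: u32,  // intended to hold bits 0..16
    high: u32, // intended to hold bits 16..32
}

fn split(x: u32) -> ValU32 {
    ValU32 { low: x & 0xffff, high: x >> 16 }
}

fn main() {
    let x = 0xdead_beef_u32;
    let v = split(x);
    assert_eq!((v.low, v.high), (0xbeef, 0xdead));
    // Each half fits comfortably in the field ...
    assert!((v.low as u64) < BABYBEAR_P && (v.high as u64) < BABYBEAR_P);
    // ... while x itself exceeds the prime and would wrap as a single element.
    assert!((x as u64) > BABYBEAR_P);
    // Recombining with ordinary integer arithmetic recovers x exactly.
    assert_eq!((v.high << 16) | v.low, x);
}
```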
To further break this down, the u32 value represented by x is split into four distinct byte values: b0, b1, b2, and b3. The logic behind this process is somewhat complex and is explained in more detail in the audit report. Briefly, b0 through b2 are initialized using the NondetU8Reg mechanism, which ensures that these values are bytes.
However, there was an oversight in the original implementation: the same constraint was not applied to b3. This omission allows an attacker to exploit the representation, as b3 can be set to a value outside the byte range. Consequently, the u32 value can be represented in multiple unintended ways, enabling critical vulnerabilities.
The issue lies in how b3 is derived from b3Top7times2. The b3Top7times2 field is intended to hold a u8 value, and it is constrained to be a byte. However, when setting the b3 value, the circuit performs a division by 2. In a standard computer program, dividing by 2 would perform integer division; for instance, dividing 255 by 2 would yield 127. In this context, however, division by 2 is not the same: it multiplies by the inverse of 2 within the field. If b3Top7times2 is an odd number, multiplying it by the inverse of 2 results in a very large field element. Consequently, the b3 byte no longer adheres to the constraints of being a valid byte, as it exceeds the allowable range.
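To see the pitfall in isolation, here is a small Rust sketch of that arithmetic over the BabyBear prime (our own illustration, not the zkVM’s field implementation):

```rust
// "Dividing by 2" in the field means multiplying by the inverse of 2 mod p,
// which only matches integer division when the operand is even.
const P: u64 = 2_013_265_921; // BabyBear prime, 2^31 - 2^27 + 1
const INV2: u64 = (P + 1) / 2; // multiplicative inverse of 2 mod p

fn field_div2(x: u64) -> u64 {
    (x % P) * INV2 % P
}

fn main() {
    // Even values behave like integer division ...
    assert_eq!(field_div2(254), 127);
    // ... but an odd byte such as 255 maps to (255 + p) / 2 = 1_006_633_088,
    // a huge field element far outside the byte range 0..=255.
    assert_eq!(field_div2(255), (255 + P) / 2);
    println!("255 / 2 in the field = {}", field_div2(255));
}
```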
In the audit report, we analyze the vulnerability in more detail and provide an example of a malicious prover taking advantage of it.
💡 The fix: The fix for this issue was to initialize b3 as a u8 register, just like the other bytes (b0, b1, b2). This ensures that b3 is properly constrained to a valid byte range. Once this adjustment is made, the problem is resolved.
2) DoDiv is underconstrained (5.1.2)
The next vulnerability is in a module called DoDiv (Do Division), which is the main circuit in the RISC Zero zkVM responsible for performing all division operations.
The purpose of the circuit is to take a numerator and a denominator as inputs and produce the corresponding quotient and remainder.
The challenge lies in directly implementing a division algorithm within the circuit, which is quite complex. To simplify this, the RISC Zero team employs a clever trick: given a numerator and denominator, they guess a quotient and remainder, then assert the relationship:
Numerator = (Quotient × Denominator) + Remainder

The bug in their implementation arose because they were missing key constraints on the quotient and remainder. Specifically, the attacker could manipulate the remainder and quotient to any value they wanted.
For the approach to work correctly, it is necessary to assert that the remainder is always less than the denominator and that the quotient is valid within the context of the division operation.
Additionally, there are complications related to the sign of the quotient, depending on whether the division is signed or unsigned. We skip the signed/unsigned complication for simplicity in this blog summary.
Essentially, the original code failed to enforce the critical constraint that the remainder must always be less than the denominator, which left the circuit under-constrained.
Example:
Here is some additional content and background to help clarify the vulnerability:
Typically, there are algorithms implemented to calculate the quotient and remainder directly. However, these algorithms are computationally expensive to replicate within a circuit. To address this, the following standard approach is used: instead of calculating the values explicitly, the circuit “guesses” the quotient and remainder, then adds a constraint to validate them:
Numerator = (Quotient × Denominator) + Remainder
However, if this is the only constraint added, problems arise. For example, consider dividing 6 by 2: The expected result is a quotient of 3 and a remainder of 0.
But without additional constraints, other solutions are possible. For instance:
- Quotient = 0, Remainder = 6 (satisfies the equation).
- Quotient = 1, Remainder = 4 (also satisfies the equation).
These solutions are incorrect, but the circuit cannot distinguish them because it lacks additional rules. To ensure the correct and unique result, you must add constraints:
- The remainder must be less than the denominator.
- The remainder must also be non-negative.
In this circuit, the only assertion made is: Numerator = (Quotient × Denominator) + Remainder.
However, the code failed to include the crucial constraint that the remainder must be less than the denominator. Without this, the circuit is underconstrained, allowing incorrect results to satisfy the equation and leading to a critical-severity vulnerability.
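The following Rust sketch (a hypothetical model, not the actual DoDiv circuit) makes the gap concrete: the bare equation admits several witnesses, and only the extra range check pins down the honest one.

```rust
// Underconstrained check: the equation alone.
fn equation_holds(numerator: u64, denominator: u64, quotient: u64, remainder: u64) -> bool {
    numerator == quotient * denominator + remainder
}

// Properly constrained check (unsigned case): equation plus remainder < denominator.
fn fully_constrained(numerator: u64, denominator: u64, quotient: u64, remainder: u64) -> bool {
    equation_holds(numerator, denominator, quotient, remainder) && remainder < denominator
}

fn main() {
    // Dividing 6 by 2: all of these (quotient, remainder) pairs satisfy the bare equation.
    for (q, r) in [(3, 0), (1, 4), (0, 6)] {
        assert!(equation_holds(6, 2, q, r));
    }
    // Only the honest witness survives the remainder < denominator constraint.
    assert!(fully_constrained(6, 2, 3, 0));
    assert!(!fully_constrained(6, 2, 1, 4));
    assert!(!fully_constrained(6, 2, 0, 6));
}
```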
💡 The fix: The vulnerability can be fixed by adding a constraint asserting that remainder < denominator.
3) Decoder is underconstrained (5.1.4)
The core issue with this vulnerability lies in how a RISC-V instruction, represented as a u32 value, is processed. When the processor handles an encoded RISC-V instruction, it must decode it to determine the operation and the various parameters required for execution.
This circuit is responsible for decoding the instruction and extracting all the relevant parameters.
The parameters with underscores represent the decoded components of the instruction.

The constraints on Lines 11 and 20 are designed to ensure that these parameters are decoded correctly. Each decoded parameter corresponds to specific bits in the instruction, and the constraints verify that the values were derived from the appropriate bits. It’s a clever approach to asserting correct decoding.
However, the decoding process was implemented incorrectly, leaving the circuit underconstrained. The problem stems from the declaration of the _rd_0 field on the first line.
The _rd_0 field is declared as a register that accepts values from 0 to 3, but it should only allow the values 0 and 1. This overly permissive range lets _rd_0 take on invalid values, which introduces inconsistencies.
As a result, an attacker can manipulate the decoding process, since a single encoded instruction now has multiple valid interpretations, which is a critical issue. This means the attacker could generate two proofs of execution for the same ELF binary, exploiting the vulnerability.
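As a simplified, hypothetical illustration (not the actual Zirgen decoder logic), the snippet below shows why widening a one-bit witness to a two-bit range destroys uniqueness: the same decoded value suddenly has two valid explanations.

```rust
// A field that should be a single bit participates in a reconstruction like
// value = bit0 + 2 * bit1. If bit0 may also be 2 or 3, uniqueness is lost.
fn reconstruct(bit0: u64, bit1: u64) -> u64 {
    bit0 + 2 * bit1
}

fn main() {
    // With bit0 constrained to {0, 1}, the value 2 has exactly one decomposition.
    assert_eq!(reconstruct(0, 1), 2);
    // With bit0 allowed to range over {0, 1, 2, 3} (a two-bit "twit"), a malicious
    // prover can supply a second, bogus decomposition of the very same value.
    assert_eq!(reconstruct(2, 0), 2);
}
```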
💡 The fix: We recommended changing _rd_0 from a NondetTwitReg to a NondetBitReg. After that change, we ran Picus on the fixed circuit, and it proved the circuit to be deterministic.
Adapting Picus for zkVM memory modeling
While auditing zkVMs is largely similar to auditing other zero-knowledge applications, we had to make a few adjustments.
zkVMs must utilize program memory in a way that many other circuits — especially traditional R1CS-style circuits — do not. Since Picus does not model memory directly, we needed to develop a strategy for encoding memory reads and writes within a zkVM circuit in a way that Picus could reason about.
We decided to model memory reads as inputs in Picus, and memory writes as outputs. This approach, combined with the fact that the initial memory state is provided as part of the proof input, gives us an inductive argument for why the encoding is sound.
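Conceptually (this is our own sketch, not Picus input syntax or RISC Zero’s code), each zkVM cycle is treated as a small circuit whose inputs include the values read from memory and whose outputs include the values written back:

```rust
// Hypothetical shape of one zkVM step as Picus sees it: if the outputs
// (memory writes and next state) are uniquely determined by the inputs
// (memory reads and current state), and the initial memory is fixed by the
// proof input, then induction over cycles gives determinism of the full trace.
struct CpuState {
    pc: u32,
    regs: [u32; 32],
}

/// Inputs:  current CPU state + values read from memory this cycle.
/// Outputs: next CPU state + (address, value) pairs written this cycle.
fn step(state: CpuState, _mem_reads: &[u32]) -> (CpuState, Vec<(u32, u32)>) {
    // The real constraint logic is elided; only the interface matters here.
    let next = CpuState { pc: state.pc.wrapping_add(4), regs: state.regs };
    let mem_writes: Vec<(u32, u32)> = Vec::new();
    (next, mem_writes)
}

fn main() {
    let s0 = CpuState { pc: 0x1000, regs: [0; 32] };
    let (s1, writes) = step(s0, &[]);
    assert_eq!(s1.pc, 0x1004);
    assert!(writes.is_empty());
}
```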
Final remarks: raising the bar for ZK security
Our work with RISC Zero represents a model for how deep collaboration, formal methods, and automated tooling can come together to raise the bar for security in zero-knowledge systems.
By combining traditional auditing with formal guarantees and continuous verification through Picus, we’ve helped RISC Zero build a zkVM that doesn’t just claim to be secure — it can prove it. This level of assurance is increasingly vital as ZK infrastructure becomes foundational to the next generation of decentralized applications.
Thanks to our smooth collaboration with RISC Zero, we were also able to make improvements to Picus itself: several performance issues were resolved, making the tool more scalable. As a result, we are even better equipped to handle complex ZK audits in the future.
We’re proud to support RISC Zero in setting a new industry benchmark for zkVM security. As their products evolve, we look forward to continuing our collaboration — and to helping more teams across the ZK ecosystem apply the same high standards in ZK security.
Full audit report
Download the full RISC Zero zkVM audit report here: https://github.com/risc0/rz-security/blob/main/audits/zkVM/veridise_zkVM_20250224.pdf
Editor: Mikko Ikola, VP of Marketing