From 6a70be0580b8bfc0c1e128d920ad189583fa60a3 Mon Sep 17 00:00:00 2001 From: Elias Rad <146735585+nnsW3@users.noreply.github.com> Date: Mon, 11 Dec 2023 13:11:28 +0200 Subject: [PATCH] Fix typos (#1052) * fix typos README.md * fix minor typos Bytecode_Circuit.md * fix typos EVM_Circuit.md * fix typos Keccak_Circuit.md * fix typos MPT_Circuit.md * fix typos Modexp_Circuit.md * fix typos Public_Input_Circuit.md * fix typos Sig_Circuit.md * fix typos State_Circuit.md * fix typos Tx_Circuit.md --- aggregator/README.md | 12 ++++++------ docs/Bytecode_Circuit.md | 2 +- docs/EVM_Circuit.md | 16 ++++++++-------- docs/Keccak_Circuit.md | 34 +++++++++++++++++----------------- docs/MPT_Circuit.md | 30 +++++++++++++++--------------- docs/Modexp_Circuit.md | 14 +++++++------- docs/Public_Input_Circuit.md | 6 +++--- docs/Sig_Circuit.md | 4 ++-- docs/State_Circuit.md | 18 +++++++++--------- docs/Tx_Circuit.md | 30 +++++++++++++++--------------- 10 files changed, 83 insertions(+), 83 deletions(-) diff --git a/aggregator/README.md b/aggregator/README.md index b06c636323..4c9a49483e 100644 --- a/aggregator/README.md +++ b/aggregator/README.md @@ -133,19 +133,19 @@ This is done via comparing the `data_rlc` of `chunk_{i-1}` and ` chunk_{i}`. Our keccak table uses $2^{19}$ rows. Each keccak round takes `300` rows. When the number of rounds is less than $2^{19}/300$, the cell manager will fill in the rest of the rows with dummy hashes. -The only hash that uses dynamic number of rounds is the last hash. +The only hash that uses a dynamic number of rounds is the last hash. Suppose we target `MAX_AGG_SNARK = 10`. Then, the last hash function will take no more than `32 * 10 /136 = 3` rounds. We also know in the circuit whether a chunk is empty or not. This is given by a flag `is_padding`. For the input of the final data hash -- we extract `32 * MAX_AGG_SNARK` number of cells (__static__ here) from the last hash. We then compute the RLC of those `32 * MAX_AGG_SNARK` when the corresponding `is_padding` is not set. We constrain this RLC matches the `data_rlc` from the keccak table. +- we extract `32 * MAX_AGG_SNARK` cells (__static__ here) from the last hash. We then compute the RLC of those `32 * MAX_AGG_SNARK` cells whose corresponding `is_padding` is not set. We constrain that this RLC matches the `data_rlc` from the keccak table. For the output of the final data hash -- we extract all three hash digest cells from last 3 rounds. We then constraint that the actual data hash matches one of the three hash digest cells with proper flags defined as follows. - - if the num_of_valid_snarks <= 4, which only needs 1 keccak-f round. Therefore the batch's data hash (input, len, data_rlc, output_rlc) are in the first 300 keccak rows; - - else if the num_of_valid_snarks <= 8, which needs 2 keccak-f rounds. Therefore the batch's data hash (input, len, data_rlc, output_rlc) are in the 2nd 300 keccak rows; - - else the num_of_valid_snarks <= 12, which needs 3 keccak-f rounds. Therefore the batch's data hash (input, len, data_rlc, output_rlc) are in the 3rd 300 keccak rows; +- we extract all three hash digest cells from the last 3 rounds. We then constrain that the actual data hash matches one of the three hash digest cells, with proper flags defined as follows. - if the num_of_valid_snarks <= 4, which only needs 1 keccak-f round. Therefore the batch's data hash (input, len, data_rlc, output_rlc) is in the first 300 keccak rows; - else if the num_of_valid_snarks <= 8, which needs 2 keccak-f rounds. 
Therefore the batch's data hash (input, len, data_rlc, output_rlc) is in the 2nd 300 keccak rows; + - else the num_of_valid_snarks <= 12, which needs 3 keccak-f rounds. Therefore the batch's data hash (input, len, data_rlc, output_rlc) is in the 3rd 300 keccak rows; |#valid snarks | offset of data hash | flags| |---| ---| ---| diff --git a/docs/Bytecode_Circuit.md b/docs/Bytecode_Circuit.md index 666140a9ce..9d3e1a7f37 100644 --- a/docs/Bytecode_Circuit.md +++ b/docs/Bytecode_Circuit.md @@ -29,7 +29,7 @@ The bytecode circuit aims at constraining the correctness of the above bytecode ## Architecture and Design -Raw `bytes` with the Keccak codehash of it `code_hash=keccak(&bytes[...])` are feed into `unroll_with_codehash` that runs over all the bytes and unroll them into `UnrolledBytecode{bytes, rows}`, which fits the bytecode table layout. This means each row contains `(code_hash, tag, index, is_code, value)`. (The only difference is that here `code_hash` is not RLCed yet.) Notice that only `PUSHx` will be followed by non-instruction bytes, so `is_code = push_rindex==0`, i.e. `unroll_with_codehash` computes `push_rindex` that decays from push size to zero and when it is zero it means the byte is an instruction code. With `UnrolledBytecode`, we contrain its rows via `BytecodeCircuit`, so the architecture looks like +Raw `bytes`, together with their Keccak codehash `code_hash=keccak(&bytes[...])`, are fed into `unroll_with_codehash`, which runs over all the bytes and unrolls them into `UnrolledBytecode{bytes, rows}`, which fits the bytecode table layout. This means each row contains `(code_hash, tag, index, is_code, value)`. (The only difference is that here `code_hash` is not RLCed yet.) Notice that only `PUSHx` will be followed by non-instruction bytes, so `is_code = push_rindex==0`, i.e. `unroll_with_codehash` computes `push_rindex`, which decays from the push size to zero, and when it is zero the byte is an instruction code. With `UnrolledBytecode`, we constrain its rows via `BytecodeCircuit`, so the architecture looks like ```mermaid stateDiagram diff --git a/docs/EVM_Circuit.md b/docs/EVM_Circuit.md index ecf32d1179..6f1e6b4a05 100644 --- a/docs/EVM_Circuit.md +++ b/docs/EVM_Circuit.md @@ -117,7 +117,7 @@ The backend proof system is based on Halo2, so all constraints are implemented a #### CellTypes -`Challenge` API is a feature of Halo2 that can produce SNARK's random challenges based on different phases of the proof in the transcript. In alignment with the `Challenge` API, EVM Circuit classifies its cells based on different phases of the proof. This results in `CellType`, which currently contains `StoragePhase1`, `StoragePhase2`, `StoragePermutation` and `Lookup` for different types of lookup tables, etc. . A given column in EVM Circuit will consist of cells with the same `CellType`. Columns under a given `CellType` will be horizontally layouted in a consecutive way to occupy a region. Cells that are of the same `CellType` will be subject to the same random challenge produced by `Challenge` API when Random Linear Combination (RLC) is applied to them. +The `Challenge` API is a feature of Halo2 that can produce the SNARK's random challenges based on different phases of the proof in the transcript. In alignment with the `Challenge` API, EVM Circuit classifies its cells based on different phases of the proof. This results in `CellType`, which currently contains `StoragePhase1`, `StoragePhase2`, `StoragePermutation` and `Lookup` for different types of lookup tables, etc. 
A given column in EVM Circuit will consist of cells with the same `CellType`. Columns under a given `CellType` will be laid out horizontally and consecutively to occupy a region. Cells that are of the same `CellType` will be subject to the same random challenge produced by the `Challenge` API when a Random Linear Combination (RLC) is applied to them. #### Behavior of CellManager within columns of the same CellType @@ -153,12 +153,12 @@ Then we enable an advice column: A few extra columns are enabled for various purposes, including: - `constants`, a fixed column, holds constant values used for copy constraints -- `num_rows_until_next_step`, an advice column, count the number of rows until next step +- `num_rows_until_next_step`, an advice column, counts the number of rows until the next step - `num_rows_inv`, an advice column, for a constraint that enforces `q_step:= num_rows_until_next_step == 0` -In alignment with the `Challenge` API of Halo2, we classify all above columns as in phase 1 columns using Halo2's phase convention. +In alignment with the `Challenge` API of Halo2, we classify all the above columns as phase 1 columns using Halo2's phase convention. -After that, a rectangular region of advice columns is arranged for the execution step specific constraints that corresponds to an `ExecutionGadget`. The width of this region is set to be a constant `STEP_WIDTH`. The step height is not fixed, so it is dynamic. This part is handled by `CellManager`. +After that, a rectangular region of advice columns is arranged for the execution-step-specific constraints that correspond to an `ExecutionGadget`. The width of this region is set to be a constant `STEP_WIDTH`. The step height is not fixed, so it is dynamic. This part is handled by `CellManager`. For each step, `CellManager` allocates an area of advice columns with a given height and the number of advice columns as its width. The allocation process is in alignment with the `Challenge` API. So within the area allocated to advice columns, `CellManager` marks columns in consecutive regions from left to right as @@ -245,7 +245,7 @@ For the EVM Circuit, there are internal and external lookup tables. We summarize #### Lookup into the table -We use `Lookup::input_exprs` method to extract input expressions from an EVM execution trace that we want to do lookup, and we use `LookupTable::table_exprs` method to extract table expressions from a recorded lookup table. The lookup argument aims at proving the former expression lies in the latter ones. This is done by first RLC (Random Linear Combine) both sides of expressions using the same random challenge and then lookup the RLC expression. Here each column of a lookup table corresponds to one lookup expression, and EVM Step's lookup input expressions must correspond to the table expressions. See the following illustration: +We use the `Lookup::input_exprs` method to extract the input expressions that we want to look up from an EVM execution trace, and we use the `LookupTable::table_exprs` method to extract table expressions from a recorded lookup table. The lookup argument aims to prove that the former expressions lie in the latter ones. This is done by first taking the RLC (Random Linear Combination) of both sides of the expressions using the same random challenge and then looking up the RLC expression. Here each column of a lookup table corresponds to one lookup expression, and EVM Step's lookup input expressions must correspond to the table expressions. 
See the following illustration: ![evm-circuit-lookup](https://hackmd.io/_uploads/rynmiTWF2.png) @@ -255,7 +255,7 @@ At the execution level, the input expression RLCs enter EVM Circuit's lookup cel At each step, `CellManager` allocates a region with columns consisting of `CellType::Lookup(table)` that are for various types of lookups. Cells in these columns are filled with the RLC result of input expressions induced by the execution trace. This is done by the `add_lookup` method in `ConstraintBuilder`. -The number of columns allocated for one type of lookup (corresponding lookup will be into a particular lookup table) follows the configuration setting in `LOOKUP_CONFIG`. These numbers are tunable in order to make the circuit more "compact", i.e., with less unused cells, so that performance can be improved. +The number of columns allocated for one type of lookup (the corresponding lookup will be into a particular lookup table) follows the configuration setting in `LOOKUP_CONFIG`. These numbers are tunable in order to make the circuit more "compact", i.e., with fewer unused cells, so that performance can be improved. #### Challenge used for EVM Circuit's lookup cells @@ -269,7 +269,7 @@ Due to page limit we only discuss some examples here. #### `SameContextGadget` -This gadget is applied to every execution step in order to check the correct transition from current state to next state. The constraints include +This gadget is applied to every execution step in order to check the correct transition from the current state to the next state. The constraints include - lookup to Bytecode Table and Fixed Table (with `tag=FixedTableTag::ResponsibleOpcode`) for the correctness of the current opcode; - check remaining gas is sufficient; - check correct step state transition, such as change of `rw_counter`, `program_counter`, `stack_pointer` etc. @@ -354,7 +354,7 @@ Note that the data to be copied to memory that exceeds return data size ($||\mu_ For the RETURNDATACOPY opcode, EVM Circuit does the following types of constraint checks together with witness assignments: -- Constraints for stack pop of `dest_offset`, `data_offset` and `size`. This means they are assigned from the step's rw data (via bus mapping) and then they are checked by doing RwTable lookups with `tag=RwTableTag::Stack`, as well as verifying correct stack pointer update; +- Constraints for stack pop of `dest_offset`, `data_offset` and `size`. This means they are assigned from the step's rw data (via bus mapping) and then they are checked by doing RwTable lookups with `tag=RwTableTag::Stack`, as well as verifying the correct stack pointer update; - Constraints for call context related data including `last_callee_id`, `return_data_offset` (offset for $\mu_{o}$) and `return_data_size` ($\|\mu_o\|$). These are assigned and then checked by RwTable lookups with `tag=RwTableTag::CallContext`. Here return data means the output data from the last call, from which we fetch data that is to be copied into memory; - Constraints for ensuring no out-of-bound errors happen with `return_data_size`; - Constraints for memory related behavior. 
This includes memory address and expansion via `dest_offset` and `size`, as well as gas cost for memory expansion; diff --git a/docs/Keccak_Circuit.md b/docs/Keccak_Circuit.md index 07d23cd203..8c3df2df50 100644 --- a/docs/Keccak_Circuit.md +++ b/docs/Keccak_Circuit.md @@ -136,7 +136,7 @@ The final Keccak hash makes use of the sponge function, with arbitrary bit-lengt #### Padding `NUM_WORDS_TO_ABSORB` = 17, `NUM_BYTES_PER_WORD` = 8, then `RATE` = 17 $\times$ 8 = 136. Since `NUM_BITS_PER_BYTE` = 8 , then `RATE_IN_BITS` = 17 $\times$ 8 $\times$ 8 = 1088. -Given an input with length counted in bytes, compute `padding_total=RATE-input.len()%RATE`, this gives total number of padding bytes. Note that if input length in bytes is a multiple of `RATE`, the number of padding bytes is `RATE` but not 0. This means we always do padding. +Given an input with length counted in bytes, compute `padding_total=RATE-input.len()%RATE`; this gives the total number of padding bytes. Note that if the input length in bytes is a multiple of `RATE`, the number of padding bytes is `RATE`, not 0. This means we always do padding. If the above `padding_total` is 1, then we pad a single byte $\verb"0x81"$; else we pad a list of bytes $\verb"0x01", \verb"0x00", ..., \verb"0x00", \verb"0x80"$ with length in bytes equal to `padding_total`. @@ -147,7 +147,7 @@ After padding, the input is extended to be with length in bytes a multiple of `R #### Absorb Following the result of padding represented in bits, in each chunk further divide this chunk into 17 sections with length 64 for each section. At the first chunk, initialize from all $A[i][j]=0$, then we perform the following absorb operation: for each of the first 17 words $A[i][j], i+5j<17$, we let $\verb"idx"=i+5j$ and we update -$$A[i][j] \leftarrow A[i][j]\oplus \verb"chunk[idx * 64..(idx + 1) * 64]"$$ to obtain output $A[i][j]$. We perform this absorb operation before the 24 rounds of Keccak-f permutation. Then the 24-rounds of Keccak-f permutation is followed to operate on all $A[i][j]$'s where $0\leq i, j\leq 4$. After that we move to next chunk and repeat this process starting from the output $A[i][j]$ from last iteration (absorb + permutation). Such iteration is continued until the final chunk (which may contain padding data). +$$A[i][j] \leftarrow A[i][j]\oplus \verb"chunk[idx * 64..(idx + 1) * 64]"$$ to obtain output $A[i][j]$. We perform this absorb operation before the 24 rounds of Keccak-f permutation. Then the 24 rounds of Keccak-f permutation follow, operating on all $A[i][j]$'s where $0\leq i, j\leq 4$. After that we move to the next chunk and repeat this process starting from the output $A[i][j]$ from the last iteration (absorb + permutation). Such iteration is continued until the final chunk (which may contain padding data). #### Squeeze @@ -182,14 +182,14 @@ which is the output of Keccak hash's RLC that we use in our zkevm-circuits' Kecc The whole of zkevm-circuits at the level of super circuit uses the following structure of the Keccak table which contains the following 4 advice columns - `is_final`: when true, it means that this row is the final part of a variable-length input extended with padding, so all Keccak rounds including absorb and permutation and squeeze are done, and the row's corresponding output is the hash result; -`input_rlc`: each row takes one byte of the input, and add it to the previous row's `input_rlc` multiplied by the challenge for Keccak input. 
At the row for the last byte of the input (without padding, so `is_final` may not be true), this row is the `input_rlc` for the whole input. After that row, if there are padding rows until `is_final` becomes true, this `input_rlc` keeps the same; +- `input_rlc`: each row takes one byte of the input, and adds it to the previous row's `input_rlc` multiplied by the challenge for Keccak input. At the row for the last byte of the input (without padding, so `is_final` may not be true), this row is the `input_rlc` for the whole input. After that row, if there are padding rows until `is_final` becomes true, this `input_rlc` stays the same; - `input_len`: similarly to `input_rlc`, it adds up $8$ bits for each row until the last byte (without padding), which gives the actual input length in bytes; -`output_rlc`: it is zero for all rows except the row corresponding to the last byte of input (with padding, so that `is_final` is true), where result is the `output_rlc` as we defined in the previous section. +- `output_rlc`: it is zero for all rows except the row corresponding to the last byte of input (with padding, so that `is_final` is true), where the result is the `output_rlc` as we defined in the previous section. ## Arithmetic for Packed Implementation -The multi-packed implementation for the Keccak Circuit uses special arithmetic called (Keccak) sparse-word-representation, this splits a 64-bit word into parts. +The multi-packed implementation for the Keccak Circuit uses special arithmetic called (Keccak) sparse-word-representation, which splits a 64-bit word into parts. Each bit in the sparse-word-representation occupies `BIT_COUNT` (=3) binary bits, so each bit position is equivalent to base `BIT_SIZE` (=8) in the sparse-word-representation. @@ -207,7 +207,7 @@ These operations have to go back-and-forth between input (to Keccak internal) an It must be noticed that the packing related functions/modules `pack`, `unpack`, `into_bits`, `to_bytes` etc. used inside Keccak Circuit are all in terms of little-endian order (in bits). -The underlying reason why we use these packing and unpacking operations is because of the simple fact that for two bit variables $x, y$ we have $x\oplus y=x+y\mod 2$. This means we can use simple addition then parity to build the Keccak Circuit arithmetic that constraints XOR operation. The pack-implementation helps to avoid carry to higher digits in the addition. The `BIT_SIZE` is chosen to be 8 because in the $\theta$-step we need 5 XOR operations so with the least number of bits >5 we must take 8 = $2^3$. +The underlying reason why we use these packing and unpacking operations is the simple fact that for two bit variables $x, y$ we have $x\oplus y=x+y\mod 2$. This means we can use simple addition followed by a parity check to build the Keccak Circuit arithmetic that constrains the XOR operation. The pack-implementation helps to avoid carrying to higher digits in the addition. `BIT_SIZE` is chosen to be 8 because the $\theta$-step needs 5 XOR operations, so the base must exceed 5, and the least such power of two is 8 = $2^3$. ### `decode` @@ -221,16 +221,16 @@ This is a util struict that returns a description of how a 64-bit word will be s - `uniform` is true, then `target_sizes` consists of dividing 64 bit positions into even `part_size` plus a possible remainder; - `uniform` is false, then `target_sizes` consists of dividing both the `rot` (rotation) and (64-`rot`)-bit words into even `part_size` but named `part_a` and `part_b` plus a possible remainder for each. 
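To make the pack/add/parity arithmetic above concrete, here is a minimal self-contained Rust sketch (illustrative only, not the zkevm-circuits implementation; `pack` and `normalize` here are simplified stand-ins for the circuit's pack/normalize lookup tables, shown on 4-bit words so everything fits in a `u64`):

```rust
const BIT_COUNT: u32 = 3; // binary bits per Keccak bit
const BIT_SIZE: u64 = 8;  // base of one packed digit, 2^BIT_COUNT

/// Pack the low `n` bits of `word` into base-8 digits (little-endian in bits).
fn pack(word: u64, n: u32) -> u64 {
    (0..n).fold(0, |acc, i| acc | (((word >> i) & 1) << (i * BIT_COUNT)))
}

/// Recover the bits by taking the parity of each base-8 digit ("normalize").
fn normalize(packed: u64, n: u32) -> u64 {
    (0..n).fold(0, |acc, i| {
        acc | ((((packed >> (i * BIT_COUNT)) & (BIT_SIZE - 1)) & 1) << i)
    })
}

fn main() {
    // XOR of up to 5 operands becomes addition of packed words followed by
    // a parity check, since each base-8 digit sums to at most 5 (no carry).
    let (a, b, c) = (0b1011u64, 0b0110u64, 0b0101u64);
    let sum = pack(a, 4) + pack(b, 4) + pack(c, 4);
    assert_eq!(normalize(sum, 4), a ^ b ^ c);
    println!("xor via pack+parity: {:04b}", normalize(sum, 4));
}
```

This mirrors why `normalize_3`/`normalize_4` style tables suffice in the circuit: the sum of a few 0/1 bits never overflows a base-8 digit, so parity recovers the XOR exactly.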
-The parts splitted from the word is then determined bit-by-bit from `target_size`, so that each part will consist of a collection of bit positions that it represents, starting from 0-th bit position. The way bit positions are determined ensures that if +The parts split from the word are then determined bit-by-bit from `target_size`, so that each part will consist of a collection of bit positions that it represents, starting from the 0-th bit position. The way bit positions are determined ensures that if - `uniform` is true, then the remaining `rot`-sized bits [63-`rot`+1,...,63] are divided by `part_size` plus a remainder, and the first 64-`rot` bits are determined by a section compensating the previous remainder, plus a division by `part_size`, plus the remainder from the `target_size` division; -- `uniform` is false, then the remaining `rot`-sized bits [63-`rot`+1,...,63] are divided by `part_size` plus remainder, and first 64-`rot` bits determined by `part_size` plus a remainder. +- `uniform` is false, then the remaining `rot`-sized bits [63-`rot`+1,...,63] are divided by `part_size` plus a remainder, and the first 64-`rot` bits are determined by `part_size` plus a remainder. The way we do the above split when `uniform` is true enables an optimization shown below, where after rotation the front and tail remainders combined together become an interval of length `part_size`: ![split_normalize_true](https://hackmd.io/_uploads/S1V-8qoKn.png) -Such a property is useful in `split_uniform`, because in that case it can use this specific partition to split an input word to parts and then rotation transform it to an input that will have the same partition layout as the output after rotation operation in Keccak permutations. This enables same input-output shape so that we can do lookup uniformly on input-output pairs without re-partition (`uniform_lookup = true`), i.e, splitting up the words into parts, then recombining and then splitting up again. This saves a significant amount of columns. +Such a property is useful in `split_uniform`, because in that case it can use this specific partition to split an input word into parts and then rotation-transform it to an input that will have the same partition layout as the output after the rotation operation in Keccak permutations. This enables the same input-output shape so that we can do lookup uniformly on input-output pairs without re-partition (`uniform_lookup = true`), i.e., splitting up the words into parts, then recombining and then splitting up again. This saves a significant number of columns. ### `split` @@ -263,7 +263,7 @@ It bitwise transforms `input` expressed in `parts` to `output_parts` that are in ### Keccak specific CellManager -Circuit uses Keccak specific `CellManager`, which layouts advice columns in `SecondPhase`. They form a rectangular region of `Cell`'s. +The circuit uses a Keccak-specific `CellManager`, which lays out advice columns in `SecondPhase`. They form a rectangular region of `Cell`'s. `CellManager.rows` records the current length of used cells at each row. New cells are queried and inserted at the row with the shortest length of used cells. The `start_region()` method pads all rows to the same length according to the longest row, making sure all rows start at the same column. This is used when a new region is started to fill in witnesses. @@ -294,7 +294,7 @@ The first round of rows must be initialized with dummy values, in particular, se In the current set-up, the number of rows that each round will occupy is `DEFAULT_KECCAK_ROWS = 12`. 
This means `CellManager` at each region aligns cells vertically to reach a height of 12 before it starts to align cells at the next consecutive column. -The number of columns consumed by each region (with different color in the figure) shown in the figure is just schematic. In reality this is determined by the size of data to be processed and the `part_size`, so that `CellManager` will insert each part into a cell and (vertically) align them. +The number of columns consumed by each region (with different colors in the figure) is just schematic. In reality this is determined by the size of the data to be processed and the `part_size`, so that `CellManager` will insert each part into a cell and (vertically) align them. ### Spread input, absorb data and squeeze data along with the rounds of Keccak permutation in the circuit layout @@ -315,11 +315,11 @@ parity normalize table with each bit in range $[0,6)$; - `chi_base_table`: [TableColumn; 2], for $\chi$-step lookup, $[0,1,1,0,0]$; - `pack_table`: [TableColumn; 2], indicates the pack relation, i.e. 0..256 encoded as Keccak-sparse-word representation; -Since these tables can be used in multiple bits where each bit will do lookup according to the table, e.g. with number of bits equals `part_size`, Keccak Circuit needs to determine the optimal `part_size` for each table. This is done in the function `get_num_bits_per_lookup_impl`, which determines the maximum number of bits by the condition $$\verb"range"^{\verb"num_bits"+1}+\verb"num_unusable_rows"<=2^{\verb"KECCAK_DEGREE"}$$ Currently `KECCAK_DEGREE`=19. +Since these tables can be used on multiple bits, where each bit will do a lookup according to the table, e.g. with the number of bits equal to `part_size`, Keccak Circuit needs to determine the optimal `part_size` for each table. This is done in the function `get_num_bits_per_lookup_impl`, which determines the maximum number of bits by the condition $$\verb"range"^{\verb"num_bits"+1}+\verb"num_unusable_rows"<=2^{\verb"KECCAK_DEGREE"}$$ Currently `KECCAK_DEGREE`=19. ### Number of KeccakRows needed for one hash -In some other circuits such as the [aggregation and compression circuits](https://github.com/scroll-tech/zkevm-circuits/tree/develop/aggregator), there is a need to exactly compute the number of rows consumed by one hash. This can be done by first compute the number of bytes in the input plus padding, and then divide into chunks of size `RATE`=136 bytes. The number of Keccak rows that each chunk will consume is equal to +In some other circuits such as the [aggregation and compression circuits](https://github.com/scroll-tech/zkevm-circuits/tree/develop/aggregator), there is a need to exactly compute the number of rows consumed by one hash. This can be done by first computing the number of bytes in the input plus padding, and then dividing it into chunks of size `RATE`=136 bytes. The number of Keccak rows that each chunk will consume is equal to $$\verb"DEFAULT_KECCAK_ROWS" * (1+\verb"NUM_ROUNDS")$$ and with the current setting `DEFAULT_KECCAK_ROWS=12`, `NUM_ROUNDS=24` this number will be 300. The number of chunks is determined by the input length plus padding and is equal to $$\verb"input.len()"/\verb"RATE" + 1$$ @@ -364,7 +364,7 @@ There are 3 consecutive regions for `rho_pi_chi_cells`: #### Constraints -Lookup in `normalize_4` table already gives a constraint. This is for the $os[i][j]$ step left in the $\theta$-step. 
Further, since `split_uniform` will combine small parts that are seperated by $0$, lookup to `normalize_4` of these small parts in a uniform way. +Lookup in the `normalize_4` table already gives a constraint. This is for the $os[i][j]$ step left in the $\theta$-step. Further, since `split_uniform` will combine small parts that are separated by $0$, we look up these small parts in `normalize_4` in a uniform way. #### Rationale @@ -412,7 +412,7 @@ Do, for round $k$, the computation $s[0][0]+\verb"round_cst"[k] \ (RC[k])$ and l ### Absorb #### Constraints -Verify the absorb data in parallel with each round of permutation, so in the first $17$ rounds each round absorb one word $A[i][j]$ indexed by $i+5j<17$, using $\verb"result"=\verb"state"+\verb"data"$ and then normalize to $\{0,1\}$ via `normalize_3` table. +Verify the absorb data in parallel with each round of permutation, so in the first $17$ rounds each round absorbs one word $A[i][j]$ indexed by $i+5j<17$, using $\verb"result"=\verb"state"+\verb"data"$ and then normalizing to $\{0,1\}$ via the `normalize_3` table. #### Rationale - Soundness: Clear from the fact that $a\oplus b$ has the same parity as $a+b$ (i.e. normalize to $\{0,1\}$). @@ -423,7 +423,7 @@ Verify the absorb data in parallel with each round of permutation, so in the fir #### Constraints When the previous hash is done: (1) Verify that the hash RLC result from `squeeze_from_prev` is the same as `hash_rlc`. -(2) Verify that `squeeze_from_prev` are indeed filled in with the same values as state words $A[0][0], A[1][0], A[2][0], A[3][0]$ to squeeze. +(2) Verify that `squeeze_from_prev` is indeed filled in with the same values as the state words $A[0][0], A[1][0], A[2][0], A[3][0]$ to squeeze. The rationale of the above constraints is pretty straightforward @@ -439,7 +439,7 @@ Use `length` to record the actual length of input bytes, and pad until total len (4) a padding input byte starts with $1$ and is then followed by $0$ if this is not the last padding input byte; (5) for the last padding input byte, the input byte is $128$ (if not first padding, equals $\overline{00000001}$ in standard bits, little-endian form) or $129$ (if it is also the first padding, equals $\overline{10000001}$ in standard bits; this means the only padding is $\overline{10000001}$). -The Rationale of the above constraints are also straightforward following the definition of padding. Only one thing that needs to pay attention: Keccack internal uses byte representation together with Keccack-sparse-word-representation. +The rationale of the above constraints is also straightforward following the definition of padding. Only one thing needs attention: Keccak internally uses the byte representation together with the Keccak-sparse-word-representation. ### data_rlc diff --git a/docs/MPT_Circuit.md b/docs/MPT_Circuit.md index 405b94cdf3..049f83be29 100644 --- a/docs/MPT_Circuit.md +++ b/docs/MPT_Circuit.md @@ -22,7 +22,7 @@ In zkevm-circuits, we use [MPT table](https://github.com/scroll-tech/zkevm-circu |-|-|-|-|-|-|-|-| |||actually the RLC of `storage_key` in little-endian bytes|[MPTProofType](https://github.com/scroll-tech/zkevm-circuits/blob/700e93c898775a19c22f9abd560ebb945082c854/zkevm-circuits/src/table.rs#L645)||||| -In the above, each row of the MPT table reflects an update to the MPT ([MPTUpdate](https://github.com/scroll-tech/zkevm-circuits/blob/700e93c898775a19c22f9abd560ebb945082c854/zkevm-circuits/src/witness/mpt.rs#L15)). The columns `address` and `storage_key` indicate the location where changes in account or storage happen. 
The change in value for this update are recorded in `old_value` and `new_value`, and the corresponding change of the trie root are recorded in `old_root` and `new_root`. +In the above, each row of the MPT table reflects an update to the MPT ([MPTUpdate](https://github.com/scroll-tech/zkevm-circuits/blob/700e93c898775a19c22f9abd560ebb945082c854/zkevm-circuits/src/witness/mpt.rs#L15)). The columns `address` and `storage_key` indicate the location where changes in account or storage happen. The change in value for this update is recorded in `old_value` and `new_value`, and the corresponding change of the trie root is recorded in `old_root` and `new_root`. For each row's corresponding MPT update, we will prove its correctness in the MPT circuit that we describe below. To facilitate this, we record the MPTProofType in the above table, which corresponds to the following proof types: @@ -39,7 +39,7 @@ For each row's corresponding MPT update, we will prove its correctness in the MP ## The purpose of MPT circuit and encoding of MPT updates via `SMTTrace` -MPT circuit aims at proving the correctedness of the MPT table described above. This means its constraint system enforces a unique change of MPT for each of the MPT's updates as recorded in the MPT table. In particular, when the MPT is changed due to account or storage updates, MPT circuit must prove this update leads to the correct root change. +The MPT circuit aims at proving the correctness of the MPT table described above. This means its constraint system enforces a unique change of the MPT for each of the MPT's updates as recorded in the MPT table. In particular, when the MPT is changed due to account or storage updates, the MPT circuit must prove this update leads to the correct root change. `SMTTrace` (SMT = Sparse Merkle Trie) is the witness data structure constructed inside the [zkTrie] of the zkevm-circuits to [encode](https://github.com/scroll-tech/zkevm-circuits/blob/c656b971d894102c41c30000a40e07f8e0520627/zkevm-circuits/src/witness/mpt.rs#L77) one update step of the MPT. `SMTTrace` carries the old and new paths on the trie before and after the change to either account or storage, with both paths running from root to leaf. Each path is encoded as an `SMTPath`, which consists of the root hash, the leafnode and all intermediate nodes along that path from root to leaf. Each node's value hash and its sibling's value hash are recorded. `SMTTrace` also records the address and storage key at which location the change is made. Furthermore, it carries information about the account or storage updates, such as change of nonce, balance, code hash, storage value hash, etc. @@ -170,13 +170,13 @@ The [`MPTProofType`](https://github.com/scroll-tech/mpt-circuit/blob/9d129125bd7 Each circuit row represents the mpt update (from old to new) of a path from root to leaf, at a certain depth (depth of root is 0). So each row represents only one node (a virtual node in case it does not exist). The row offset order `previous --> current --> next` indicates an increase of depth towards the leafnode. -The configuration changes to the MPT is characterized by the following two columns: +The configuration changes to the MPT are characterized by the following two columns: (1) `PathType`: This characterizes topological structural change. It includes `PathType::Start`, `PathType::Common`, `PathType::ExtensionOld` and `PathType::ExtensionNew`; (2) `SegmentType`: This characterizes changes to the data fields (which form the trie's leafnode hash). 
It includes `SegmentType::Start`, `SegmentType::AccountTrie`, `SegmentType::AccountLeaf0`-`SegmentType::AccountLeaf4`, `SegmentType::StorageTrie`, `SegmentType::StorageLeaf0`-`SegmentType::StorageLeaf1`. -In both of the above two column witnesses, `Start` is used as boundary marker between updates. This means each circuit row with a `Start` indicates a new MPT update. +In both of the above two column witnesses, `Start` is used as a boundary marker between updates. This means each circuit row with a `Start` indicates a new MPT update. ### Topological structure changes to the trie, their corresponding mpt operations and account/storage types @@ -188,7 +188,7 @@ Operations to the trie can be classified as ![PathType_Common_modify](https://hackmd.io/_uploads/B19kctid2.png) -(2) insert to/delete from append: an account/storage slot is inserted to the MPT as append, i.e. it forms a new leafnode that previously does not exist even as empty nodes (insert to append), or it is deleted from an appended leaf, i.e., after deletion there will be no node (including empty node) left (delete from append). The account/storage under this operation is named type 1, i.e., type 1 account/storage are empty. Where they could be in the MPT, there is instead a leaf node that maps to another non-empty account/storage. Figure below (for insert case, and delete case just swap old and new): +(2) insert to/delete from append: an account/storage slot is inserted to the MPT as append, i.e. it forms a new leafnode that previously does not exist even as an empty node (insert to append), or it is deleted from an appended leaf, i.e., after deletion there will be no node (including empty node) left (delete from append). The account/storage under this operation is named type 1, i.e., type 1 account/storage is empty. Where it could be in the MPT, there is instead a leaf node that maps to another non-empty account/storage. Figure below (for the insert case; for the delete case just swap old and new): ![PathType_ExtensionNew_append](https://hackmd.io/_uploads/S1_M9Ksun.png) Notice that the zkTrie adds only one bit of common prefix at each level of its depth. It also applies the optimization that replaces subtrees consisting of exactly one leaf with a single leaf node to reduce the tree height. This means that when we insert to append (or delete from append), it may happen that an extension happens with some intermediate nodes that provide key prefix bits, either for the old path or for the new path. It is constrained that all the new siblings added are empty nodes. @@ -199,19 +199,19 @@ Notice that the zkTrie adds only one bit of common prefix at each level of its d #### PathType::Common -`PathType::Common` refers to the sitation that the old and new path share the same topological configuration. +`PathType::Common` refers to the situation that the old and new paths share the same topological configuration. This can correspond to the topological configuration change on the whole path in the modify operation, or the common path (not the extended one) of the insert to/delete from append, or the insert to/delete from fill operations. #### PathType::ExtensionNew -`PathType::ExtensionNew` refers to the sitation that the new path extends the old path in its toplogical configuration. +`PathType::ExtensionNew` refers to the situation that the new path extends the old path in its topological configuration. This can correspond to the extended part of the path in the insert to append and insert to fill operations. 
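As a mental model for these two witness columns, the following small Rust sketch (hypothetical types and helper, not the actual mpt-circuit definitions) shows how a row's `PathType` could be classified from how deep the old and new paths reach:

```rust
// Hypothetical illustration only; the real PathType/SegmentType witness
// columns are defined in scroll-tech/mpt-circuit.
#[allow(dead_code)]
#[derive(Clone, Copy, Debug, PartialEq)]
enum PathType {
    Start,        // boundary marker between two MPT updates
    Common,       // old and new paths share this node
    ExtensionOld, // only the old path continues here (delete cases)
    ExtensionNew, // only the new path continues here (insert cases)
}

/// Classify one row at `depth`, given how deep the old and new paths go.
fn path_type_at(depth: usize, old_depth: usize, new_depth: usize) -> PathType {
    match (depth <= old_depth, depth <= new_depth) {
        (true, true) => PathType::Common,
        (true, false) => PathType::ExtensionOld,
        (false, true) => PathType::ExtensionNew,
        (false, false) => unreachable!("row beyond both paths"),
    }
}

fn main() {
    // An "insert to append" that extends a shared path of depth 2 to a new
    // leaf at depth 4: rows 0..=2 are Common, rows 3..=4 are ExtensionNew.
    for depth in 0..=4 {
        println!("depth {depth}: {:?}", path_type_at(depth, 2, 4));
    }
}
```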
#### PathType::ExtensionOld -`PathType::ExtensionOld` refers to the sitation that the old path extends the new path in its toplogical configuration. +`PathType::ExtensionOld` refers to the situation that the old path extends the new path in its topological configuration. This can correspond to the extended part of the path in the delete from append and delete from fill operations. @@ -227,11 +227,11 @@ These cases of non-existence proofs are related to non-existence before writing There are 2 cases to constrain based on the path directed by the provided non-existing key (coming from the hash of the account address): -- Type 1 non-existence proof (insert to append/delete from append): the path ended at a leaf node. Illustration figure shown below: +- Type 1 non-existence proof (insert to append/delete from append): the path ends at a leaf node. An illustration is shown below: ![AccountNotExist_Leaf](https://i.imgur.com/SyExuBC.png) In this case, due to our construction of the old and new paths of `SMTTrace`, the old path (when inserting)/new path (when deleting) must be directed to this leaf node. The prefix key provided by the old/new path must end at a bit position before the last bit of the leaf key that is to be proved non-existent. So we constrain that the non-existing account/storage has a key that is not equal to the key at this leaf node. Circuit columns `other_key`, `other_key_hash`, `other_leafnode_hash` and an IsEqualGadget `key_equals_other_key` are used to provide witnesses for these constraints and to enforce them. -- Type 2 non-existence proof (insert to fill/delete from fill): the path ended at an empty node. Illustration figure shown below: +- Type 2 non-existence proof (insert to fill/delete from fill): the path ends at an empty node. An illustration is shown below: ![AccounNotExist_Empty](https://i.imgur.com/FLxg11Q.png) In this case, due to our construction of the old and new paths of `SMTTrace`, the old path (when inserting)/new path (when deleting) must be directed to this empty node. So we constrain the emptiness of these nodes. The circuit provides two IsZeroGadgets `old_hash_is_zero` and `new_hash_is_zero` to constrain this case. @@ -264,9 +264,9 @@ For storage, the formula for `valueHash` is The above expansions are beyond what the original trie defined in [zkTrie spec] can have, where the expansion of the trie will only be up to leafnodes that are classified into `AccountTrie` or `StorageTrie`, i.e. trie-type segments. The leaf-type segments (non-trie segments) are witnesses generated inside the circuit (parsed from `SMTTrace`), and the trie-like structure shown above is just imaginary/virtual and only lives in the circuit. So such common constraints as key is 0 and depth is 0 for non-trie segments must be enforced in the circuit constraint system. -The depth of such expansion at witness generation depends on MPTProofType, which points to a specific data field at which level the expansion needs done. +The depth of such expansion at witness generation depends on MPTProofType, which points to a specific data field at which level the expansion needs to be done. -The `PathType` used inside the circuit is also corresponding to this expanded trie including leaf-type segments (non-trie segments). For example, in the case of insert to fill operation, `PathType` changes from `Common` to `ExtensionNew` at the empty node that is filled by a new leaf, and the path continues to the specific account/storage field. So constraints for `PathType::ExtensionNew(Old)` must discuss separately for trie and non-trie segment cases. +The `PathType` used inside the circuit also corresponds to this expanded trie including leaf-type segments (non-trie segments). For example, in the case of the insert to fill operation, `PathType` changes from `Common` to `ExtensionNew` at the empty node that is filled by a new leaf, and the path continues to the specific account/storage field. 
So constraints for `PathType::ExtensionNew(Old)` must discuss separately for trie and non-trie segment cases. +The `PathType` used inside the circuit is also corresponding to this expanded trie including leaf-type segments (non-trie segments). For example, in the case of insert to fill operation, `PathType` changes from `Common` to `ExtensionNew` at the empty node that is filled by a new leaf, and the path continues to the specific account/storage field. So constraints for `PathType::ExtensionNew(Old)` must be discussed separately for trie and non-trie segment cases. ## Constraints @@ -300,7 +300,7 @@ on rows with `PathType::Common`: ### PathType::ExtensionNew on rows with `PathType::ExtensionNew`: - `old_value==0` -- old path has `old_hash` keep unchanged on the extended path +- old path has `old_hash` kept unchanged on the extended path - new path has `new_hash=poisedon_hash(leftchild, rightchild)` - on trie segments (`is_trie==true`) - In case next`SegmentType` is still trie type (`is_final_trie_segment==false` ) @@ -316,7 +316,7 @@ on rows with `PathType::ExtensionNew`: - for type 1 non-existence (type 1 account), in case `key!=other_key`, constrain that `old_hash_prev==poisedon(other_key_hash, other_leaf_data_hash)` ### PathType::ExtensionOld -The constraints for this case is just symmetric with respect to `PathType::ExtensionNew` by swapping old and new. +The constraints for this case are just symmetric with respect to `PathType::ExtensionNew` by swapping old and new. ### MPTProofType::NonceChanged @@ -462,4 +462,4 @@ On rows with `MPTProofType::StorageDoesNotExist`: - `direction==0` - `PathType==Common`, because storage read must be on an existing account - correct behavior of `key_hi` (16 bytes), `key_lo` (16 bytes) and `key==poisedon(key_hi, key_lo)` - - correct RLC behavior of `storage_key_rlc` as RLC of `RLC(key_hi)` and `RLC(key_lo)` \ No newline at end of file + - correct RLC behavior of `storage_key_rlc` as RLC of `RLC(key_hi)` and `RLC(key_lo)` diff --git a/docs/Modexp_Circuit.md b/docs/Modexp_Circuit.md index 86e3c05415..5f675adf24 100644 --- a/docs/Modexp_Circuit.md +++ b/docs/Modexp_Circuit.md @@ -55,7 +55,7 @@ In addition, $x_3$ stands for $x_3=x\mod r$. Let x, y be in U256 and p be the prime used in Modexp, d

account constraints, which are established in the `ConstraintBuilder`. @@ -255,4 +255,4 @@ We discuss account constraints, which are established in the `ConstraintB - lookup of `(address, storage_key, mpt_proof_type, state_root, state_root_prev, value, initial_value)` into `mpt_update_table`'s `(address, storage_key, new_root, old_root, new_value, old_value)` - constraints of `value_prev` - - `value_prev` must be the same as `value` in the previous row unless `is_non_exist` \ No newline at end of file + - `value_prev` must be the same as `value` in the previous row unless `is_non_exist` diff --git a/docs/Tx_Circuit.md b/docs/Tx_Circuit.md index dd8ca9f9dd..b4fc09e9f5 100644 --- a/docs/Tx_Circuit.md +++ b/docs/Tx_Circuit.md @@ -35,18 +35,18 @@ According to this document of [ETHTransactions], a submitted (legacy, [EIP2718] - `v`, `r`, `s`: signature of the tx sender, `v` (U64) is the recovery id, `(r,s)` (U256, U256) is the ECDSA signature computed from [EllipticCurveDigitalSignatureAlgorithm] with the specified message and key; - `hash`: U256, Keccak-256 hash of the signed tx's data = `[Nonce, Gas, GasPrice, To, Value, CallData, v, r, s]`. -A transaction can be contract creation or message call. For contract creation, `to=0`. +A transaction can be a contract creation or a message call. For contract creation, `to=0`. ### Types of Transactions -Besides legacy transation, there are also other types of transactions such as [EIP155], [EIP2930] ([EIP2718] type `0x01`) and [EIP1559] ([EIP2718] type `0x02`). They differ by their details in tx information and their signing process. +Besides the legacy transaction, there are also other types of transactions such as [EIP155], [EIP2930] ([EIP2718] type `0x01`) and [EIP1559] ([EIP2718] type `0x02`). They differ in the details of their tx information and their signing process. For example, in the above legacy transaction the ECDSA signature `(r,s)` is obtained by signing with message `RLP([nonce, gasprice, startgas, to, value, data])` and the private key of the EOA account that initiates this tx. In this case `v` is 27 or 28; instead, in EIP155 the sign message becomes `RLP([nonce, gasprice, startgas, to, value, data, chainid, 0, 0])` and `v` is `ChainID*2+35` or `ChainID*2+36`. In the tx circuit, we currently support the following 3 types of txs: - `PreEip155`: legacy tx - `Eip155`: as above EIP-155 -- `L1Msg`: L1 message tx. This is not a tx type in the original Etherem protocol, but a tx type in which bridge contract users initiate deposit tx or invoke L2 contract txs. We follow [EIP2718] and use `0x7e` to stand for this tx type. +- `L1Msg`: L1 message tx. This is not a tx type in the original Ethereum protocol, but a tx type in which bridge contract users initiate deposit txs or invoke L2 contract txs. We follow [EIP2718] and use `0x7e` to stand for this tx type. ### Transactions data provided to the Tx Circuit @@ -62,7 +62,7 @@ Besides the above transaction data, some metadata are also available for the tx ### Use of Keccak256 hash and the tx signature process -In the Tx Circuit, there are two pieces of tx related data that will be applied the Keccak256 hash. They both correspond to some tx_tag fields ([`TxFieldTag`]): +In the Tx Circuit, there are two pieces of tx-related data to which the Keccak256 hash will be applied. They both correspond to some tx_tag fields ([`TxFieldTag`]): - (`TxHashLength`, `TxHashRLC`, `TxHash`): signed tx's data = `[Nonce, Gas, GasPrice, To, Value, CallData, v, r, s]`. 
`TxHashLength` stands for the length in bytes of `RLP([signed tx's data])`; `TxHashRLC` stands for the RLC in bytes of the signed tx's data and `TxHash=Keccak256(RLP(signed tx's data))`. `TxHash` becomes the `hash` data field in tx data; - (`TxSignLength`, `TxSignRLC`, `TxSignHash`): tx's data that needs to be signed = `[Nonce, Gas, GasPrice, To, Value, ChainID, CallData]`. `TxSignLength` stands for the length in bytes of `RLP(tx's data that needs to be signed)`; `TxSignRLC` stands for the RLC in bytes of `RLP(tx's data that needs to be signed)` and `TxSignHash=Keccak256(RLP(tx's data that needs to be signed))`. `TxSignHash` is the same as the `msg_hash` data field that will be used to obtain the tx sender's signature `(v,r,s)`. @@ -72,7 +72,7 @@ According to the [EllipticCurveDigitalSignatureAlgorithm] (ECDSA), the signature `(r,s)=ecdsa(msg_hash, public_key)` -The recovery id `v` is then computed from the parity of the `y` component of the EC point corresponding to `x` compnent being `r`. The `public_key` can be recovered from `(v,r,s)` and `msg_hash` using `ecrecover`, which further determines the caller address as `caller_address=keccak(public_key)[-20:]` (because this is the way account address is created, which is derived from its public key). Notice that only EOA address can initiate a tx and contract address cannot, because contract address is not calculated from public key but from nonce and EOA address. Also notice that EOA account's private key are elliptic-curve pre-images of public key and are thus hidden, while the public key can be calculated from the private key. +The recovery id `v` is then computed from the parity of the `y` component of the EC point whose `x` component is `r`. The `public_key` can be recovered from `(v,r,s)` and `msg_hash` using `ecrecover`, which further determines the caller address as `caller_address=keccak(public_key)[-20:]` (because this is the way an account address is created: it is derived from the public key). Notice that only an EOA address can initiate a tx and a contract address cannot, because a contract address is not calculated from a public key but from a nonce and an EOA address. Also notice that an EOA account's private key is the elliptic-curve pre-image of its public key and is thus hidden, while the public key can be calculated from the private key. In the Tx Circuit, validation of the correct signature is made through lookup to the sig table; validation of correct RLP is made through lookup to the RLP table; and validation of the correct hash is made through lookup to the Keccak table. @@ -107,16 +107,16 @@ Some boolean columns are used to reduce the degree of constraint system, such as - `is_l1_msg`: Advice Column. If `tx_type` is `L1Msg`; - `is_chain_id`: Advice Column. If tx_tag is `ChainID`; - `LookupCondition` - - `TxCallData`: condition that enables lookup to tx table for CallDataLength and CallDataGasCost. This is enabled when tx_tag is `CallDataLength` and the latter has non-zero value; - - `L1MsgHash`: condition that enables lookup to RLP table for message hashing of `tx_type = L1Msg`. This is enabled when `is_l1_msg==1` and tx_tag chosen among `Nonce, Gas, CalleeAddress, Value, CallDataRLC, CallerAddress, TxHashLength, TxHashRLC`; + - `TxCallData`: the condition that enables lookup to tx table for CallDataLength and CallDataGasCost. This is enabled when tx_tag is `CallDataLength` and the latter has non-zero value; + - `L1MsgHash`: the condition that enables lookup to RLP table for message hashing of `tx_type = L1Msg`. 
This is enabled when `is_l1_msg==1` and tx_tag is chosen among `Nonce, Gas, CalleeAddress, Value, CallDataRLC, CallerAddress, TxHashLength, TxHashRLC`; - `RlpSignTag`: the condition that enables lookup to RLP table for signing in case `tx_type != L1Msg`; - - `RlpHashTag`: condition that enables lookup to RLP table for message hashing of `tx_type != L1Msg`. This is enabled when `is_l1_msg==0` and tx_tag chosen among `Nonce, GasPrice, Gas, CalleeAddress, Value, CallDataRLC, TxDataGasCost, SigV, SigR, SigS, TxHashLength, TxHashRLC`; + - `RlpHashTag`: the condition that enables lookup to RLP table for message hashing of `tx_type != L1Msg`. This is enabled when `is_l1_msg==0` and tx_tag is chosen among `Nonce, GasPrice, Gas, CalleeAddress, Value, CallDataRLC, TxDataGasCost, SigV, SigR, SigS, TxHashLength, TxHashRLC`; - `Keccak`: the condition that enables lookup to Keccak table. This is enabled when tx_tag is `TxSignLength` (and `is_l1_msg==0`) or `TxHashLength`; - `is_final`: Advice Column. The final part of assigning tx data is the CallData part; - `call_data_gas_cost_acc`: Advice Column. Accumulated CallDataGasCost, starting to accumulate from the first CallDataByte. Otherwise None; - `is_padding_tx`: Advice Column. If this row is for a padding tx; - `cum_num_txs`: Advice Column. The cumulative number of txs; -- `sv_address`: Advice Column. The sign-verified caller address recovered using `ecrecover`. +- `sv_address`: Advice Column. The sign-verified caller address, recovered using `ecrecover`. ### Sub-Configurations @@ -162,22 +162,22 @@ Assign an empty row first, then followed by each tx's data fields except calldat - constraints for columns that are boolean indicators: - `is_call_data`, `is_caller_address`, `is_chain_id`, `is_tag_block_num`, `is_l1_msg` are boolean; - - Conditions for each type of `LookupCondition::xx` imposed correctly in accordance with the way they are assigned. + - Conditions for each type of `LookupCondition::xx` are imposed correctly in accordance with the way they are assigned. - lookup to tx_table for CallData (LookupCondition::TxCallData) - triggered when tag is CallDataLength and CallDataLength is not 0; - Lookup tx table to validate call_data_gas_cost_acc; - Lookup tx table to validate that the last call data byte in tx has index = call_data_length-1 when the call data length is larger than 0. -- lookup to RLP table for tx_type when the latter is `L1Msg`, to check that the corrsponding hash format in RLP table is `L1MsgHash`. +- lookup to RLP table for tx_type when the latter is `L1Msg`, to check that the corresponding hash format in RLP table is `L1MsgHash`. - lookup to RLP table for signing (LookupCondition::RlpSignTag) - triggered when tx_tag is one of the following: Nonce, GasPrice, Gas, CalleeAddress, Value, CallDataRLC, TxSignLength, TxSignRLC; - Lookup RLP table to validate the value corresponding to any of these tags against the corresponding sign format `TxSignPreEip155` (for `tx_type==PreEip155`) or `TxSignEip155` (for `tx_type==Eip155`). 
-- lookup to RLP table for hasing (LookupCondition::RlpHashTag and LookupCondition::L1MsgHash) - - LookupCondition::RlpHashTag triggerd when tx_tag is one of the following: Nonce, GasPrice, Gas, CalleeAddress, Value, CallDataRLC, TxDataGasCost, SigV, SigR, SigS, TxHashLength, TxHashRLC; - - LookupCondition::L1MsgHash triggerd when tx_tag is one of the following: Nonce, Gas, CalleeAddress, Value, CallDataRLC, CallerAddress, TxHashLength, TxHashRLC; +- lookup to RLP table for hashing (LookupCondition::RlpHashTag and LookupCondition::L1MsgHash) - - LookupCondition::RlpHashTag triggered when tx_tag is one of the following: Nonce, GasPrice, Gas, CalleeAddress, Value, CallDataRLC, TxDataGasCost, SigV, SigR, SigS, TxHashLength, TxHashRLC; - - LookupCondition::L1MsgHash triggered when tx_tag is one of the following: Nonce, Gas, CalleeAddress, Value, CallDataRLC, CallerAddress, TxHashLength, TxHashRLC; - Lookup RLP table to validate the value corresponding to any of these tags against the corresponding hash format `L1MsgHash` (for `tx_type==L1Msg`) or `TxHashPreEip155` (for `tx_type==PreEip155`) or `TxHashEip155` (for `tx_type==Eip155`). - lookup to sig table for signature information `(msg_hash_rlc, v, r, s, sv_address)` @@ -219,4 +219,4 @@ Assign an empty row first, then followed by each tx's data fields except calldat - if `tx_type!=L1Msg`, then CallerAddress is the same as `sv_address` - constrain `v` based on `tx_type`: for `tx_type==Eip155`, `v=ChainID*2+35 or 36`, for `tx_type==PreEip155`, `v=27 or 28`, for `tx_type==L1Msg`, `v==0` - \ No newline at end of file +
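As a supplement to the constraint that CallerAddress equals `sv_address`, here is a small self-contained Rust sketch (an illustration under stated assumptions, not circuit code) of the derivation `caller_address=keccak(public_key)[-20:]` quoted earlier. It assumes the `tiny-keccak` crate (version 2 with the `keccak` feature) and a 64-byte uncompressed secp256k1 public key without the `0x04` prefix:

```rust
use tiny_keccak::{Hasher, Keccak};

/// Derive an EOA address from a 64-byte uncompressed public key (x || y),
/// i.e. caller_address = keccak(public_key)[-20:].
fn caller_address(public_key: &[u8; 64]) -> [u8; 20] {
    let mut hasher = Keccak::v256();
    let mut digest = [0u8; 32];
    hasher.update(public_key);
    hasher.finalize(&mut digest);
    let mut address = [0u8; 20];
    address.copy_from_slice(&digest[12..]); // keep the last 20 bytes
    address
}

fn main() {
    // Placeholder key bytes for illustration; a real key comes from ecrecover.
    let pk = [0x11u8; 64];
    let addr = caller_address(&pk);
    print!("0x");
    for b in addr {
        print!("{b:02x}");
    }
    println!();
}
```

In the circuit itself this relation is not recomputed but enforced through the lookups described above: the sig table supplies `sv_address`, and hashing is validated against the Keccak table.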