style!: issue15 - cleaner polynomial interface #21
Conversation
17050bf to dd40e9f
Separate Polynomial into PolynomialEvalForm and PolynomialCoefForm to make the library type-safe when converting between them. E.g. fft can only go from coeff -> eval, and ifft can only go from eval -> coeff.
c7c7ae0 to 5d3ec00
Rebased over #18
Made the internal functions take a Polynomial trait (with len() and elements() methods) so they are polymorphic over both forms. The public functions are now duplicated, one per form (foo_coeff_form, foo_eval_form).
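A minimal sketch of that shape, with illustrative names (the trait, struct, and function names here are placeholders, not the crate's actual API):

```rust
use ark_bn254::Fr;

/// Hypothetical internal trait that both polynomial forms implement.
trait Polynomial {
    fn len(&self) -> usize;
    fn elements(&self) -> &[Fr];
}

struct PolynomialCoefForm { coeffs: Vec<Fr> }
struct PolynomialEvalForm { evals: Vec<Fr> }

impl Polynomial for PolynomialCoefForm {
    fn len(&self) -> usize { self.coeffs.len() }
    fn elements(&self) -> &[Fr] { &self.coeffs }
}

impl Polynomial for PolynomialEvalForm {
    fn len(&self) -> usize { self.evals.len() }
    fn elements(&self) -> &[Fr] { &self.evals }
}

// Internal helper: written once, polymorphic over either form.
fn commit_internal<P: Polynomial>(poly: &P) -> usize {
    // A real implementation would combine poly.elements() with SRS points;
    // this sketch just returns the length.
    debug_assert_eq!(poly.elements().len(), poly.len());
    poly.len()
}

// Public API: one thin wrapper per form, as described above.
fn commit_coeff_form(poly: &PolynomialCoefForm) -> usize { commit_internal(poly) }
fn commit_eval_form(poly: &PolynomialEvalForm) -> usize { commit_internal(poly) }

fn main() {
    let poly = PolynomialCoefForm { coeffs: vec![Fr::from(1u64), Fr::from(2u64)] };
    assert_eq!(commit_coeff_form(&poly), 2);
}
```

The duplication stays confined to the thin public wrappers, while the real logic lives once in the generic internal function.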
Follows the renaming done in a previous commit; forgot this fn.
This fixes all the tests and benchmarks that were failing. The previous code also had the implicit assumption that it only worked with polys in eval form, so I just made that explicit. We can add a function to commit from coeff form (by taking the ifft, for example) later if needed, but opted to keep the API simple for now.
/// same length.
///
/// TODO: We should get rid of this: polynomial should not know about the
/// blob. This len is equivalent to the coeffs len before it gets
We can remove this safety from the end2end approach, because the blob header contains the rollup data length.
Padding to the next power of 2 is always needed if we want to compute a KZG proof for it, even in coefficient form: it is needed for deriving the root of unity.
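For illustration, here is how the power-of-two requirement shows up when deriving a root of unity with arkworks (using arkworks directly here is an assumption for the sketch; it is not necessarily how this crate derives it internally):

```rust
use ark_bn254::Fr;
use ark_ff::{FftField, Field, One};

fn main() {
    let num_elements = 3usize;
    // Roots of unity of order m exist in bn254's scalar field only when m is
    // a power of two (up to the field's two-adicity), hence the padding.
    let padded_len = num_elements.next_power_of_two(); // 4
    let omega = Fr::get_root_of_unity(padded_len as u64)
        .expect("a power-of-two order has a root of unity in Fr");
    // omega really is a padded_len-th root of unity.
    assert_eq!(omega.pow([padded_len as u64]), Fr::one());
}
```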
Not sure what you mean by safety. I think this is just a convenience function that @anupsv added to make it easy to get back the blob from the polynomial. Unclear where/how this is used though. We can remove it in some other PR.
We have the mappings
rollup data <-> blob
blob -> polynomial
Because the blob header contains the length of the rollup data, I don't think we need this length field.
As long as the KZG commits to both the blob header and codec(rollup_data), everything is safe.
@anupsv what do you think?
Yes, primarily a convenience method, as this value is needed to calculate the roots during batch verification. As long as there is a way to retrieve that information, i.e. convert polynomial -> blob and then get this length, then no problem. This field would just have avoided some repetitive work.
@bxue-l2 you're missing a word there (just calculate the root?)
@anupsv why do we need to be able to revert polynomial -> blob? Shouldn't we always have the blob around wherever we convert to poly (which is typically just a quick conversion to be able to commit/generate a proof, afaiu)?
An alternative proposal is to not pad the polynomial in the constructor, but only do it when needed (e.g. when computing a commitment or proof). The only downside is if we need to do this a few times (which is probably always?).
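A rough sketch of that alternative, with hypothetical names, where the constructor stores the evaluations exactly as given and padding happens only at the call sites that need it:

```rust
use ark_bn254::Fr;
use ark_ff::Zero;

/// Hypothetical eval-form polynomial (illustrative, not the crate's API):
/// the constructor does no padding.
struct PolynomialEvalForm {
    evals: Vec<Fr>,
}

impl PolynomialEvalForm {
    fn new(evals: Vec<Fr>) -> Self {
        Self { evals }
    }

    /// Padding is deferred to the call sites that actually need a
    /// power-of-two length (e.g. right before committing or proving).
    fn padded_evals(&self) -> Vec<Fr> {
        let mut padded = self.evals.clone();
        padded.resize(self.evals.len().next_power_of_two(), Fr::zero());
        padded
    }
}

fn main() {
    let poly = PolynomialEvalForm::new(vec![Fr::from(1u64), Fr::from(2u64), Fr::from(3u64)]);
    assert_eq!(poly.evals.len(), 3);          // unpadded in storage
    assert_eq!(poly.padded_evals().len(), 4); // padded only on demand
}
```

The clone-and-resize on each call is exactly the repeated work mentioned above; it could be cached if it turns out to be needed every time.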
@bxue-l2 yep we can, but it might mean adding a new interface used by just one function. Just trying to keep the interfaces uniform. Or we can move everything to this interface.
@samlaf So in the deeper functions, per the spec, only the polynomial is passed in. And since we won't have the data field in the poly, we'd need to get it from the blob, hence the reversion thought. The reason is that the poly will have the power-of-2-padded Fr's, and if that padded length is passed in, the calculations will shift to the next power of 2, which will be incorrect.
Because the beginning of the blob contains the data, you can always figure out what the original data is, and get the codec blob length, by applying the codec function. It is a bit of a detour, but you can do it:
polynomial -> rollup-data -> codec blob
@bxue-l2 right, that's also possible. But it seems like a lot of work just to get the length. If that's fine with everyone then we can work on that.
We can open an issue on this problem; I don't think this field is removed in this PR anyway.
Having to look into the blob kind of breaks the library abstraction, which I don't like.
looks good to me, want others to take a pass
src/polynomial.rs
Outdated
let mut padded_evals = evals;
// TODO: does it make sense to pad evaluations with zeros? This changes the
// polynomial...
padded_evals.resize(next_power_of_two, Fr::zero());
If I'm understanding this right, when we call to_eval_form(), the Fr field size is changed to the next power of 2 and then padded with the "0" field element, right?
Thinking it through, if someone calls to_eval or to_coeff, this might happen?
Not the Fr field size, but the size of the polynomial (number of evaluation points) is extended to the next power of 2 usings 0s yes. See the comment here
So if I have [1, 2, 3] in coeff form, it gets padded to [1, 2, 3, 0], and when to_eval() is called, does it do [1, 2, 3, 0, 0, 0, 0, 0]?
Correct, because we use https://doc.rust-lang.org/std/primitive.usize.html#method.next_power_of_two:
"Returns the smallest power of two greater than or equal to self."
Added test: e61ddcf
I see. I'm thinking we shouldn't pad extra 0's when calling to_eval() or to_coeff(), right? The fft or ifft will return the same number of elements, and we should be using only those after conversion?
Or is the understanding that, since they're 0's, the commitment will remain the same and the caller will only parse the number of bytes specified by the header byte (which contains the length, attached by the proxy)?
Yes, that is what we are doing in hokulea: we get the entire blob and just read the data based on the header, which encodes how much user data there is.
https://github.com/Layr-Labs/hokulea/blob/master/crates/eigenda/src/eigenda_data.rs#L41
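For illustration only (this is not the linked hokulea code, and the real codec layout is defined in the file above), the pattern is: decode the blob, read the length from the header, and slice off only that many bytes of user data:

```rust
/// Hypothetical decode step: assume a 4-byte big-endian length prefix,
/// followed by the user data and then zero padding.
fn read_user_data(decoded_blob: &[u8]) -> Option<&[u8]> {
    let len_bytes: [u8; 4] = decoded_blob.get(..4)?.try_into().ok()?;
    let data_len = u32::from_be_bytes(len_bytes) as usize;
    decoded_blob.get(4..4 + data_len)
}

fn main() {
    // 3 bytes of user data, followed by zero padding.
    let mut blob = vec![0u8, 0, 0, 3, 0xaa, 0xbb, 0xcc];
    blob.resize(16, 0);
    let expected: &[u8] = &[0xaa, 0xbb, 0xcc];
    assert_eq!(read_user_data(&blob), Some(expected));
}
```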
Ah I see. Then let's just add a comment in the code to capture this behaviour and be done?
> I see. I'm thinking we shouldn't pad extra 0's when calling to_eval() or to_coeff(), right? The fft or ifft will return the same number of elements, and we should be using only those after conversion?

It won't pad extra 0s, because the evals/coeffs are already a power of 2, so that won't change.

> Or is the understanding that, since they're 0's, the commitment will remain the same and the caller will only parse the number of bytes specified by the header byte (which contains the length, attached by the proxy)?

Not sure I understand what you mean here.
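A quick standalone check of that behaviour, using plain integers in place of field elements:

```rust
fn main() {
    // next_power_of_two only rounds up when the length is not already a power of two:
    assert_eq!(3usize.next_power_of_two(), 4);
    assert_eq!(4usize.next_power_of_two(), 4);

    // So [1, 2, 3] is padded once to length 4; padding a length-4 vector
    // again is a no-op, it does not grow to 8.
    let mut coeffs = vec![1u64, 2, 3];
    coeffs.resize(coeffs.len().next_power_of_two(), 0);
    assert_eq!(coeffs, vec![1, 2, 3, 0]);
    coeffs.resize(coeffs.len().next_power_of_two(), 0);
    assert_eq!(coeffs, vec![1, 2, 3, 0]);
}
```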
Eval is the most important one, so it should be prominent and easier to find/look at.
lgtm
README.md
Outdated
From the `Blob`, a polynomial can be obtained via calling the `to_polynomial()` function. This converts the Blob to Field elements, then calculates the next power of 2 from this length of field elements and appends `zero` value elements for the remaining length.
The [Blob](./src/blob.rs) and [Polynomial](./src/polynomial.rs) structs are mostly [Plain Old Data](https://en.wikipedia.org/wiki/Passive_data_structure) with constructor and few helper methods. The interesting stuff happens in the [KZG](./src/kzg.rs) struct, which has methods for committing to a blob, polynomial in coeff or eval form, and generating and verifying proofs.
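As a rough illustration of the conversion that paragraph describes (the `Blob` stand-in type and the 31-byte chunking below are assumptions for the sketch, not the crate's actual types or encoding):

```rust
use ark_bn254::Fr;
use ark_ff::{PrimeField, Zero};

/// Stand-in for a blob of raw bytes (illustrative only).
struct Blob(Vec<u8>);

impl Blob {
    /// Mirrors the README description: bytes -> field elements, then pad the
    /// vector to the next power of two with zero elements.
    fn to_polynomial(&self) -> Vec<Fr> {
        let mut elements: Vec<Fr> = self
            .0
            .chunks(31) // 31 bytes fit safely below the bn254 field modulus
            .map(Fr::from_le_bytes_mod_order)
            .collect();
        elements.resize(elements.len().next_power_of_two(), Fr::zero());
        elements
    }
}

fn main() {
    let blob = Blob(vec![7u8; 100]); // 100 bytes -> 4 chunks -> length already 4
    assert_eq!(blob.to_polynomial().len(), 4);
}
```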
Make sure to note that this blob is different from the blob definition in the eigenda_client. I would get super confused by it.
https://github.com/Layr-Labs/eigenda/blob/master/api/clients/eigenda_client.go#L167
I think this is too deep in the weeds. Nobody will know about this (for now). Let's just fix that name and not mention it here. Let's try to keep the docs here self-contained.
Just say not to confuse it with the others. Then once the interface is updated, we can update here?
But we will just forget to update here and then confuse people even more. Let's not confuse them with problems they don't have haha.
Ok sorry guys, I'm being fookin stupid and keep pushing random doc crap to a PR that was already approved. Promise this was my last one. I decided to move all the new docs to actual rust docs in lib.rs so that they'll show up on https://docs.rs/rust-kzg-bn254/latest/rust_kzg_bn254/. See 194b32b. Let's merge this, then publish a new crate version to update the docs, and then we can make more small improvements in future PRs. Promise not to touch this PR anymore other than for addressing comments!
Fixes #15
Fixes #19
Built on top of #18, so targeting that branch to make the PR only contain the code related to this issue.
Separate Polynomial into PolynomialEvalForm and PolynomialCoefForm to make the library type-safe when converting between them. E.g. fft can only go from coeff -> eval, and ifft can only go from eval -> coeff.
This is just a proposal. There are still issues to address, as raised in #19 and #20; we'll meet on Monday to discuss these. We might also want to change the srs code to use the same type-safe technique (separate the srs into monomial and lagrange form, such that fft/ifft only works on the appropriate one).
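A rough illustration of the type-safety idea (names and signatures here are illustrative, not the final API): each form only exposes the conversion that is valid for it, so applying fft/ifft in the wrong direction fails to compile instead of silently producing a wrong polynomial.

```rust
use ark_bn254::Fr;

pub struct PolynomialCoefForm {
    pub coeffs: Vec<Fr>,
}

pub struct PolynomialEvalForm {
    pub evals: Vec<Fr>,
}

impl PolynomialCoefForm {
    /// Coefficient -> evaluation form. (FFT elided; only the types matter here.)
    pub fn to_eval_form(&self) -> PolynomialEvalForm {
        PolynomialEvalForm { evals: self.coeffs.clone() /* fft(...) in reality */ }
    }
}

impl PolynomialEvalForm {
    /// Evaluation -> coefficient form. (IFFT elided.)
    pub fn to_coef_form(&self) -> PolynomialCoefForm {
        PolynomialCoefForm { coeffs: self.evals.clone() /* ifft(...) in reality */ }
    }
}

// There is deliberately no to_eval_form on PolynomialEvalForm and no
// to_coef_form on PolynomialCoefForm, so the wrong direction is a
// compile-time error.
fn main() {
    let coeffs = PolynomialCoefForm { coeffs: vec![Fr::from(1u64)] };
    let evals = coeffs.to_eval_form();
    let _back = evals.to_coef_form();
}
```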
Haven't fixed the tests/benchmarks yet, since I'm waiting for review on the style and on our Monday discussion. @anupsv can we change the settings of this repo to run CI on all branches? Guessing it's set to only run on PRs that target master right now?