More efficient matrix multiplication #12

Open
1 of 3 tasks
jdwhite48 opened this issue Jan 31, 2022 · 1 comment
Assignees: jdwhite48
Labels: help wanted (Extra attention is needed), optimization (This code could be faster)

Comments

jdwhite48 commented Jan 31, 2022

Prerequisite: Issue #10 task 3 (so the code doesn't become a disaster to manage)

Currently, all matrix multiplications use a slight variant of the naïve O(n*m) algorithm. However, the matrices used by Groth-Sahai are necessarily dense and unstructured, since nearly every entry includes randomized elements as terms, which may make optimizations more difficult.

  • Research matrix multiplication algorithms further to determine whether more efficient ones exist for extremely dense matrices.
  • If a practically more efficient matrix multiplication exists (e.g. FFT-based?), implement it and benchmark whether it is actually faster.
  • If not, review the existing code for further optimizations, e.g. loop unrolling.
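For reference, the naïve multiply in question looks roughly like the following. This is a generic sketch over plain `Vec`s with any `Copy` scalar, not the crate's actual implementation:

```rust
use std::ops::{Add, Mul};

// Naive triple-loop dense multiply: C = A * B, where A is m x k and B is k x n.
// `zero` is passed in explicitly so we don't need a Zero trait bound.
fn mat_mul<T>(a: &[Vec<T>], b: &[Vec<T>], zero: T) -> Vec<Vec<T>>
where
    T: Copy + Add<Output = T> + Mul<Output = T>,
{
    let (m, k, n) = (a.len(), b.len(), b[0].len());
    assert!(a.iter().all(|row| row.len() == k), "dimension mismatch");
    let mut c = vec![vec![zero; n]; m];
    for i in 0..m {
        for j in 0..n {
            for l in 0..k {
                // Each output entry is an inner product of a row of A and a column of B.
                c[i][j] = c[i][j] + a[i][l] * b[l][j];
            }
        }
    }
    c
}
```

For dense inputs, reordering the loops (i, l, j) so the innermost accesses of B are sequential is often the first cheap win before reaching for a fancier algorithm.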
@jdwhite48 added the "help wanted" and "optimization" labels Jan 31, 2022
@jdwhite48 self-assigned this Nov 8, 2022
jdwhite48 commented Nov 9, 2022

I could not find an existing, well-supported Rust (or Rust-interoperable) linear algebra library that supports the group action GS needs for both proving and verifying (i.e., scalar multiplication of a group element by an element of a distinct scalar field) via custom matrix operations. However, I have discovered that the nalgebra library provides data structures and algorithms that are efficient enough and compatible with Arkworks:

  1. Compared to other crates, it's a relatively well-used linear algebra library among Rust developers.
  2. When I initially explored the space, I only found libraries focused on primitive numeric types like i32, u32, and f32; on looking deeper into nalgebra, however, I was pleasantly surprised to find that it supports matrix operations over any custom field, including Arkworks' Fr and Fqk. It only requires that Add, AddAssign, Mul, and MulAssign are all defined and closed over the same Sized element type at compile time.
  • Ensure that nalgebra works with the required Arkworks fields
  • Ensure that these operations' traits are supported by the custom bilinear groups I implemented for GS
  • Propagate the notation changes to the rest of the code so that it looks less like a mess and reads more like math than a jumble of functions.
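The trait requirement in (2) can be demonstrated with a toy scalar type. `Fp7` below (arithmetic mod 7) is a hypothetical stand-in for an Arkworks field, not real crate code; the point is only the trait surface generic matrix code needs:

```rust
use std::ops::{Add, AddAssign, Mul, MulAssign};

// Toy prime field standing in for Arkworks' Fr/Fqk. It is Sized, Copy,
// and closed under the four required operations.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Fp7(u64);

impl Add for Fp7 { type Output = Fp7; fn add(self, o: Fp7) -> Fp7 { Fp7((self.0 + o.0) % 7) } }
impl AddAssign for Fp7 { fn add_assign(&mut self, o: Fp7) { *self = *self + o; } }
impl Mul for Fp7 { type Output = Fp7; fn mul(self, o: Fp7) -> Fp7 { Fp7((self.0 * o.0) % 7) } }
impl MulAssign for Fp7 { fn mul_assign(&mut self, o: Fp7) { *self = *self * o; } }

// A generic inner product with nalgebra-style bounds: any scalar meeting
// them plugs in, whether it's f32 or a pairing-friendly field element.
fn dot<T>(xs: &[T], ys: &[T], zero: T) -> T
where
    T: Copy + Add<Output = T> + AddAssign + Mul<Output = T> + MulAssign,
{
    let mut acc = zero;
    for (&x, &y) in xs.iter().zip(ys) {
        acc += x * y;
    }
    acc
}
```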
  3. nalgebra provides static (and sane...) typing for vectors and matrices:
    a. This saves me the headache and boilerplate of defining bespoke data encodings for my matrices, as I was doing before. Hopefully it will also make data formats a bit more interoperable across libraries.
    b. For large and sparse matrices of field elements (i.e., the statements' pairing exponents Γ \in Fr^{m x n} for all but the most complex statements), nalgebra::sparse::CsMatrix will give a slight efficiency boost to prover and verifier computation and to storage bandwidth. I suspect that field arithmetic will very rarely be the bottleneck, however.
  • Ensure that the statement Γ can be stored as a CsMatrix and write benchmarks to see if it helps with efficiency
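As a rough illustration of why compressed sparse storage helps for a mostly-zero Γ, here is a toy compressed-row layout with i64 standing in for Fr. This is the general idea behind nalgebra's sparse types, not their actual API:

```rust
// Toy CSR (compressed sparse row) matrix: only nonzero entries are stored,
// so a mostly-zero m x n statement matrix costs O(nnz) space, not O(m*n).
struct CsrMatrix {
    nrows: usize,
    ncols: usize,
    row_ptr: Vec<usize>, // row i's entries live at indices row_ptr[i]..row_ptr[i+1]
    col_idx: Vec<usize>, // column index of each stored entry
    vals: Vec<i64>,      // stand-in for Fr elements
}

impl CsrMatrix {
    // Sparse matrix-vector product: work is proportional to the number of
    // nonzeros rather than to nrows * ncols.
    fn mul_vec(&self, x: &[i64]) -> Vec<i64> {
        assert_eq!(x.len(), self.ncols);
        let mut y = vec![0i64; self.nrows];
        for i in 0..self.nrows {
            for k in self.row_ptr[i]..self.row_ptr[i + 1] {
                y[i] += self.vals[k] * x[self.col_idx[k]];
            }
        }
        y
    }
}
```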
  4. While nalgebra does NOT support scalar multiplication of G_1, G_2 elements by scalar field elements in Fr (so I must still rely on a custom scalar matrix multiplication algorithm for that), in the context of GS these matrices will almost exclusively be:
    a. dense (by inclusion of the random matrices R \in Fr^{m x 2} and S \in Fr^{n x 2}),
    b. and small (the group matrices' and computed scalar matrices' row and/or column spaces will always be 1- or 2-dimensional, eventually reducing down to a (2 x 2) matrix in G_T for the verifier and a select few (2 x 1) G_1, G_2 elements for the proof).
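The unsupported mixed product (a matrix over the scalar field acting on a vector of group elements, entry-wise via the group action) can be sketched with a toy additive group in place of G_1. Everything here is illustrative, not the crate's code:

```rust
// Toy additive group standing in for G_1: elements are integers mod 101,
// the group operation is addition, and the Fr-action is repeated addition.
#[derive(Clone, Copy, PartialEq, Debug)]
struct G(u64);

fn group_add(a: G, b: G) -> G { G((a.0 + b.0) % 101) }
fn scalar_mul(s: u64, g: G) -> G { G((s % 101) * g.0 % 101) } // s . g

// A is m x n over the scalar field; g is a length-n vector of group elements.
// Each A[i][j] * g[j] is a group scalar multiplication, and the row sum uses
// the group operation -- the mixed signature nalgebra can't express.
fn scalar_mat_mul(a: &[Vec<u64>], g: &[G]) -> Vec<G> {
    a.iter()
        .map(|row| {
            row.iter()
                .zip(g)
                .fold(G(0), |acc, (&s, &p)| group_add(acc, scalar_mul(s, p)))
        })
        .collect()
}
```

Since these matrices stay small and dense (per (a) and (b) above), the naive loop here is unlikely to matter for performance.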

TL;DR: this resolves my worry that the naive O(n*m) algorithm I had already implemented would become an unnecessary bottleneck as the number of variables grows. The change really only affects field operations, but those are exactly the ones where better-than-naive would matter, and the nalgebra devs are assuredly better at efficient linear algebra implementations than I am.

  • The remaining naive matrix multiplication algorithm should now only apply to small, dense matrices (see (4)). Ensure that the complexity optimizations I made to beat naive multiplication on large matrices didn't accidentally make it slower for small ones. Keep it simple :)
