Update README.md
claudns authored Jan 4, 2025
1 parent 8fe8b2c commit 28d9446
Showing 1 changed file with 1 addition and 1 deletion.
README.md: 2 changes (1 addition & 1 deletion)
@@ -102,7 +102,7 @@ Our solution is **Equivariant Encryption (EE)**. The term **equivariance** signi

Rather than relying exclusively on arithmetic operations compatible with HE, EE integrates specialized transformations designed around the internal properties of neural networks. We exploit the known architecture, layer composition, and input-output mappings of the model to construct a system in which each step of inference operates correctly on encrypted inputs. This approach avoids expensive retraining on encrypted datasets. Instead, by following a set of mathematical guidelines, we can generate a new variant of the model that works with our encryption schema in a matter of seconds.
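
As a rough sketch of what generating such a variant can look like, the toy example below rewrites the weights of a single linear + ReLU layer so that it operates directly on inputs scrambled by a secret permutation, with no retraining. Everything in it is an assumption made for illustration: the permutation stands in for the real encryption, and names such as `make_encrypted_layer`, `perm_in`, and `perm_out` are hypothetical rather than part of the EE implementation.

```python
# Toy illustration only: a secret permutation used as a stand-in for EE.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def make_encrypted_layer(W, b, perm_in, perm_out):
    """Rewrite one linear layer so it consumes permuted (encrypted) inputs
    and emits permuted (encrypted) outputs, by index shuffling alone."""
    W_enc = W[perm_out][:, perm_in]   # W_enc[i, j] = W[perm_out[i], perm_in[j]]
    b_enc = b[perm_out]
    return W_enc, b_enc

rng = np.random.default_rng(0)
d_in, d_out = 8, 6
W, b = rng.normal(size=(d_out, d_in)), rng.normal(size=d_out)
perm_in, perm_out = rng.permutation(d_in), rng.permutation(d_out)

x = rng.normal(size=d_in)          # plaintext activation
c = x[perm_in]                     # "encrypted" (permuted) activation
W_enc, b_enc = make_encrypted_layer(W, b, perm_in, perm_out)

y_plain = relu(W @ x + b)          # ordinary forward pass
y_enc = relu(W_enc @ c + b_enc)    # forward pass in the encrypted domain
assert np.allclose(y_enc, y_plain[perm_out])  # same output, still permuted
```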

-Formally, given some plaintext $p_i$, and some ciphertext $c_i$, with $p_i$ = decrypt($c_i$), our EE framework ensures that decrypt(nonlinear($c_1,c_2$)) = nonlinear($p_1,p_2$), where "nonlinear" represents a specific set of **non-linear neural functions**. Currently, our framework directly supports the following set of activation and processing functions: ReLU, GeLU, SiLU, RMS Normalization, and Layer Normalization.
+Formally, given some plaintext $p$, and some ciphertext $c$, with $p$ = decrypt($c$), our EE framework ensures that decrypt(nonlinear($c$)) = nonlinear($p$), where "nonlinear" represents a specific set of **non-linear neural functions**. Note that our framework is general, and $p$ can represent any appropriate format of plaintext data: scalar, vector, or tensor. Currently, our framework directly supports the following set of activation and processing functions: ReLU, GeLU, SiLU, RMS Normalization, and Layer Normalization.
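
The identity above can be sanity-checked numerically. The snippet below does so using the same toy permutation stand-in as before (again an assumption for illustration, not the actual EE cipher) for two of the listed functions, ReLU and RMS Normalization: element-wise activations commute with a permutation, and RMS statistics are permutation-invariant, so decrypting the result computed on ciphertext matches the result computed on plaintext.

```python
# Numerical check of decrypt(nonlinear(c)) == nonlinear(p) under a toy
# permutation "cipher" standing in for EE; all names here are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 16
perm = rng.permutation(n)
inv_perm = np.argsort(perm)

encrypt = lambda p: p[perm]        # toy stand-in for the EE encryption
decrypt = lambda c: c[inv_perm]    # toy stand-in for the EE decryption

def relu(x):
    return np.maximum(x, 0.0)

def rms_norm(x, eps=1e-6):
    return x / np.sqrt(np.mean(x ** 2) + eps)

p = rng.normal(size=n)             # plaintext (here a vector; could be a tensor)
c = encrypt(p)                     # ciphertext

for nonlinear in (relu, rms_norm):
    assert np.allclose(decrypt(nonlinear(c)), nonlinear(p))
```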

Crucially, the complexity of inference under EE does not surpass that of the unencrypted version. Each forward pass through the network involves approximately the same computational cost. Thus, **inference latency remains unchanged**, a significant advantage compared to conventional HE-based techniques.
