
Commit: Update README.md
Aqasch authored Jul 5, 2024
1 parent d5c021e commit a2bf612
Showing 1 changed file with 3 additions and 3 deletions.


# To study the interpretability of KANs
As motivation for future work on the interpretability of KANs, we illustrate the trained KAN used to construct the Bell state:
![The learned network](pics/the_network_after_training_bell_state.png)
where we use the `[84,2,12]` configuration. The `Tensor encoded quantum circuit as input to KAN` contains 84 entries because the quantum circuit is encoded into a $D\times(N\times(N+5))$-dimensional tensor, where $D=6$ is the maximum depth; for the two-qubit Bell state, $N=2$, giving $6\times(2\times 7)=84$. For more details, please check our [paper](https://scirate.com/arxiv/2406.17630). We can see that not all neurons actively contribute to the choice of action, which is defined as `Quantum gates as output of KAN`.
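
As a concrete check of these dimensions, here is a minimal sketch (not the repository's training code) that reproduces the 84-entry input size and builds the `[84,2,12]` network, assuming the `pykan` package:

```python
# Minimal sketch, assuming the `pykan` package (pip install pykan).
from kan import KAN

N = 2                          # qubits in the Bell-state task
D = 6                          # maximum circuit depth
input_dim = D * (N * (N + 5))  # 6 * (2 * 7) = 84 tensor entries
assert input_dim == 84

# width = [input, hidden, output]; the 12 outputs are the gate actions.
model = KAN(width=[input_dim, 2, 12])
```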


Because of the large size of the KAN in the previous picture, the activation functions between the input and the output layers are not visible. Hence, in the following illustration we explicitly show the trend of the `activation function` of the trained KAN:
![The learned network](pics/2q_activation_function.png)
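
For reference, `pykan` can render this kind of figure directly; a minimal sketch follows (the exact calls assume the `pykan` plotting API, and the training step is elided):

```python
# Minimal sketch, assuming the `pykan` package: after a forward pass the
# model caches per-edge activations, which `plot()` then draws.
import torch
from kan import KAN

model = KAN(width=[84, 2, 12])
# ... train the model on the RL task (elided) ...

x = torch.rand(100, 84)   # dummy batch of tensor-encoded circuits
model(x)                  # forward pass caches the activations
model.plot()              # draw the network with its activation functions
```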
