Commit

Update README.md
LucasBoTang authored Sep 4, 2024
1 parent 269bf68 commit a12e5d0
Showing 1 changed file with 2 additions and 2 deletions.

README.md (2 additions, 2 deletions)
@@ -46,10 +46,10 @@ log_weights, log_loss = gradNorm(net=mtlnet, layer=net.fc4, alpha=0.12, dataload
Consider $T$ regression tasks trained using standard squared loss onto the functions:

$$
-f_i (\mathbf{x}) = \sigma_i \tanh \left( ( \mathbf{B} + \mathbf{\epsilon}_i ) \mathbf{x} \right)
+f_i (\mathbf{x}) = \sigma_i \tanh \left( ( \mathbf{B} + \epsilon_i ) \mathbf{x} \right)
$$

-Inputs are dimension 250 and outputs dimension 100, while $\mathbf{B}$ and $\mathbf{\epsilon}_i$ are constant matrices with their elements generated IID from $N(0; 10)$ and $N(0; 3.5)$, respectively. Each task, therefore, shares information in B but also contains task-specific information $\mathbf{\epsilon}_i$. The $\sigma_i$ sets the scales of the outputs.
+Inputs are dimension 250 and outputs dimension 100, while $\mathbf{B}$ and $\epsilon_i$ are constant matrices with their elements generated IID from $N(0; 10)$ and $N(0; 3.5)$, respectively. Each task, therefore, shares information in B but also contains task-specific information $\epsilon_i$. The $\sigma_i$ sets the scales of the outputs.

```python
from data import toyDataset
```
