DOCS shift to rst (openvinotoolkit#17375)
sgolebiewski-intel authored May 5, 2023
1 parent 175e169 · commit 963f30a
Showing 13 changed files with 1,370 additions and 1,183 deletions.
51 changes: 29 additions & 22 deletions docs/ops/activation/Mish_4.md
@@ -1,5 +1,7 @@
# Mish {#openvino_docs_ops_activation_Mish_4}

@sphinxdirective

**Versioned name**: *Mish-4*

**Category**: *Activation function*
@@ -8,43 +10,48 @@

**Detailed description**

*Mish* is a self regularized non-monotonic neural activation function proposed in this [article](https://arxiv.org/abs/1908.08681v2).
*Mish* is a self regularized non-monotonic neural activation function proposed in this `article <https://arxiv.org/abs/1908.08681v2>`__.

*Mish* performs element-wise activation function on a given input tensor, based on the following mathematical formula:

\f[
Mish(x) = x\cdot\tanh\big(SoftPlus(x)\big) = x\cdot\tanh\big(\ln(1+e^{x})\big)
\f]
.. math::

Mish(x) = x\cdot\tanh\big(SoftPlus(x)\big) = x\cdot\tanh\big(\ln(1+e^{x})\big)


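
The SoftPlus-based formula above translates directly into a few lines of NumPy. The sketch below is illustrative only (it is not part of the OpenVINO IR or of this commit) and assumes NumPy is installed; a production kernel would use a numerically stable SoftPlus for large inputs.

```python
import numpy as np

def mish(x: np.ndarray) -> np.ndarray:
    """Element-wise Mish: x * tanh(SoftPlus(x)) = x * tanh(ln(1 + e^x))."""
    # np.log1p(np.exp(x)) is SoftPlus; it can overflow in float32 for large x,
    # which is acceptable for a reference sketch but not for a real kernel.
    return x * np.tanh(np.log1p(np.exp(x)))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0], dtype=np.float32)
print(mish(x))  # output keeps the input's shape and floating-point type, as the spec requires
```
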
**Attributes**: *Mish* operation has no attributes.

**Inputs**:

* **1**: A tensor of type *T* and arbitrary shape. **Required.**
* **1**: A tensor of type *T* and arbitrary shape. **Required.**

**Outputs**:

* **1**: The result of element-wise *Mish* function applied to the input tensor. A tensor of type *T* and the same shape as input tensor.
* **1**: The result of element-wise *Mish* function applied to the input tensor. A tensor of type *T* and the same shape as input tensor.

**Types**

* *T*: arbitrary supported floating-point type.

**Example**

```xml
<layer ... type="Mish">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="3">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp

<layer ... type="Mish">
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="3">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>


@endsphinxdirective

115 changes: 63 additions & 52 deletions docs/ops/arithmetic/Maximum_1.md
@@ -1,28 +1,33 @@
# Maximum {#openvino_docs_ops_arithmetic_Maximum_1}

@sphinxdirective

**Versioned name**: *Maximum-1*

**Category**: *Arithmetic binary*

**Short description**: *Maximum* performs element-wise maximum operation with two given tensors applying broadcasting rule specified in the *auto_broadcast* attribute.

**Detailed description**
As a first step input tensors *a* and *b* are broadcasted if their shapes differ. Broadcasting is performed according to `auto_broadcast` attribute specification. As a second step *Maximum* operation is computed element-wise on the input tensors *a* and *b* according to the formula below:
As a first step input tensors *a* and *b* are broadcasted if their shapes differ. Broadcasting is performed according to ``auto_broadcast`` attribute specification. As a second step *Maximum* operation is computed element-wise on the input tensors *a* and *b* according to the formula below:

After broadcasting *Maximum* does the following with the input tensors *a* and *b*:

\f[
o_{i} = max(a_{i},\ b_{i})
\f]
.. math::

o_{i} = max(a_{i},\ b_{i})


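
When ``auto_broadcast`` is ``numpy``, the two-step behaviour described above (broadcast, then element-wise maximum) matches NumPy's own semantics, so a minimal reference sketch, assuming NumPy, is simply:

```python
import numpy as np

a = np.array([[1.0, 5.0],
              [3.0, 2.0]], dtype=np.float32)
b = np.array([4.0, 1.0], dtype=np.float32)   # shape (2,) broadcasts against (2, 2)

# Step 1: shapes (2, 2) and (2,) are broadcast to the common shape (2, 2).
# Step 2: o_i = max(a_i, b_i) is applied element-wise.
o = np.maximum(a, b)
print(o)  # [[4. 5.]
          #  [4. 2.]]
```
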
**Attributes**:

* *auto_broadcast*

* **Description**: specifies rules used for auto-broadcasting of input tensors.
* **Range of values**:

* *none* - no auto-broadcasting is allowed, all input shapes must match
* *numpy* - numpy broadcasting rules, description is available in [Broadcast Rules For Elementwise Operations](../broadcast_rules.md)
* *numpy* - numpy broadcasting rules, description is available in :doc:`Broadcast Rules For Elementwise Operations <openvino_docs_ops_broadcast_rules>`

* **Type**: string
* **Default value**: "numpy"
* **Required**: *no*
@@ -44,52 +49,58 @@ o_{i} = max(a_{i},\ b_{i})

*Example 1 - no broadcasting*

```xml
<layer ... type="Maximum">
<data auto_broadcast="none"/>
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="2">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp

<layer ... type="Maximum">
<data auto_broadcast="none"/>
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="2">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>


*Example 2: numpy broadcasting*
```xml
<layer ... type="Maximum">
<data auto_broadcast="numpy"/>
<input>
<port id="0">
<dim>8</dim>
<dim>1</dim>
<dim>6</dim>
<dim>1</dim>
</port>
<port id="1">
<dim>7</dim>
<dim>1</dim>
<dim>5</dim>
</port>
</input>
<output>
<port id="2">
<dim>8</dim>
<dim>7</dim>
<dim>6</dim>
<dim>5</dim>
</port>
</output>
</layer>
```

.. code-block:: cpp

<layer ... type="Maximum">
<data auto_broadcast="numpy"/>
<input>
<port id="0">
<dim>8</dim>
<dim>1</dim>
<dim>6</dim>
<dim>1</dim>
</port>
<port id="1">
<dim>7</dim>
<dim>1</dim>
<dim>5</dim>
</port>
</input>
<output>
<port id="2">
<dim>8</dim>
<dim>7</dim>
<dim>6</dim>
<dim>5</dim>
</port>
</output>
</layer>

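As a cross-check of Example 2, the same shapes can be broadcast in NumPy (whose rules the ``numpy`` mode follows); the result shape matches the output port in the IR snippet above. This sketch is illustrative only and assumes NumPy 1.20+ for ``np.broadcast_shapes``:

```python
import numpy as np

# Shapes taken from Example 2 above.
print(np.broadcast_shapes((8, 1, 6, 1), (7, 1, 5)))  # (8, 7, 6, 5)

a = np.zeros((8, 1, 6, 1), dtype=np.float32)
b = np.zeros((7, 1, 5), dtype=np.float32)
print(np.maximum(a, b).shape)                         # (8, 7, 6, 5)
```
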

@endsphinxdirective

115 changes: 63 additions & 52 deletions docs/ops/arithmetic/Minimum_1.md
@@ -1,26 +1,31 @@
# Minimum {#openvino_docs_ops_arithmetic_Minimum_1}

@sphinxdirective

**Versioned name**: *Minimum-1*

**Category**: *Arithmetic binary*

**Short description**: *Minimum* performs element-wise minimum operation with two given tensors applying broadcasting rule specified in the *auto_broadcast* attribute.

**Detailed description**
As a first step input tensors *a* and *b* are broadcasted if their shapes differ. Broadcasting is performed according to `auto_broadcast` attribute specification. As a second step *Minimum* operation is computed element-wise on the input tensors *a* and *b* according to the formula below:
As a first step input tensors *a* and *b* are broadcasted if their shapes differ. Broadcasting is performed according to ``auto_broadcast`` attribute specification. As a second step *Minimum* operation is computed element-wise on the input tensors *a* and *b* according to the formula below:

.. math::

o_{i} = min(a_{i},\ b_{i})

\f[
o_{i} = min(a_{i},\ b_{i})
\f]

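*Minimum* mirrors *Maximum* except for the reduction applied. The sketch below (illustrative only, NumPy assumed) shows the ``none`` broadcast mode used in Example 1 further down: both inputs must already have identical shapes and the operation is purely element-wise.

```python
import numpy as np

a = np.random.rand(256, 56).astype(np.float32)
b = np.random.rand(256, 56).astype(np.float32)

# auto_broadcast="none": no broadcasting is performed, so the shapes must match exactly.
assert a.shape == b.shape
o = np.minimum(a, b)   # o_i = min(a_i, b_i)
print(o.shape)         # (256, 56), the same shape as both inputs
```
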
**Attributes**:

* *auto_broadcast*

* **Description**: specifies rules used for auto-broadcasting of input tensors.
* **Range of values**:

* *none* - no auto-broadcasting is allowed, all input shapes must match
* *numpy* - numpy broadcasting rules, description is available in [Broadcast Rules For Elementwise Operations](../broadcast_rules.md)
* *numpy* - numpy broadcasting rules, description is available in :doc:`Broadcast Rules For Elementwise Operations <openvino_docs_ops_broadcast_rules>`

* **Type**: string
* **Default value**: "numpy"
* **Required**: *no*
@@ -42,52 +47,58 @@ o_{i} = min(a_{i},\ b_{i})

*Example 1 - no broadcasting*

```xml
<layer ... type="Minimum">
<data auto_broadcast="none"/>
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="2">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
```
.. code-block:: cpp

<layer ... type="Minimum">
<data auto_broadcast="none"/>
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="2">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>


*Example 2: numpy broadcasting*
```xml
<layer ... type="Minimum">
<data auto_broadcast="numpy"/>
<input>
<port id="0">
<dim>8</dim>
<dim>1</dim>
<dim>6</dim>
<dim>1</dim>
</port>
<port id="1">
<dim>7</dim>
<dim>1</dim>
<dim>5</dim>
</port>
</input>
<output>
<port id="2">
<dim>8</dim>
<dim>7</dim>
<dim>6</dim>
<dim>5</dim>
</port>
</output>
</layer>
```

.. code-block:: cpp

<layer ... type="Minimum">
<data auto_broadcast="numpy"/>
<input>
<port id="0">
<dim>8</dim>
<dim>1</dim>
<dim>6</dim>
<dim>1</dim>
</port>
<port id="1">
<dim>7</dim>
<dim>1</dim>
<dim>5</dim>
</port>
</input>
<output>
<port id="2">
<dim>8</dim>
<dim>7</dim>
<dim>6</dim>
<dim>5</dim>
</port>
</output>
</layer>


@endsphinxdirective

