text (string, lengths 17 to 362k) | id (string, lengths 13 to 115) | metadata (dict) | __index_level_0__ (int64, 0 to 75)
---|---|---|---|
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="PySciProjectComponent">
<option name="PY_SCI_VIEW_SUGGESTED" value="true" />
</component>
</project>
| ivy/.idea/other.xml/0 | {
"file_path": "ivy/.idea/other.xml",
"repo_id": "ivy",
"token_count": 71
} | 0 |
{
"compiler": [
"cp38-cp38-manylinux_2_17_x86_64",
"cp38-cp38-win_amd64",
"cp39-cp39-manylinux_2_17_x86_64",
"cp39-cp39-win_amd64",
"cp310-cp310-manylinux_2_17_x86_64",
"cp310-cp310-win_amd64",
"cp310-cp310-macosx_12_0_arm64",
"cp311-cp311-manylinux_2_17_x86_64",
"cp311-cp311-win_amd64",
"cp311-cp311-macosx_12_0_arm64"
]
}
| ivy/available_configs.json/0 | {
"file_path": "ivy/available_configs.json",
"repo_id": "ivy",
"token_count": 266
} | 1 |
{
"jax": ["dm-haiku", "flax"],
"numpy": ["numpy"],
"mxnet": ["mxnet"],
"torch": ["torchvision", "torch-scatter"],
"tensorflow": ["tensorflow-probability"]
}
| ivy/docker/requirement_mappings_gpu.json/0 | {
"file_path": "ivy/docker/requirement_mappings_gpu.json",
"repo_id": "ivy",
"token_count": 74
} | 2 |
Open Tasks
==========
.. _`repo`: https://github.com/unifyai/ivy
.. _`discord`: https://discord.gg/sXyFF8tDtm
.. _`open tasks thread`: https://discord.com/channels/799879767196958751/1189903501011202128
.. _`issue description`: https://github.com/unifyai/ivy/issues/1526
.. _`reference API`: https://numpy.org/doc/stable/reference/routines.linalg.html
.. _`imports`: https://github.com/unifyai/ivy/blob/38dbb607334cb32eb513630c4496ad0024f80e1c/ivy/functional/frontends/numpy/__init__.py#L27
.. _`Deep Dive`: ../deep_dive.rst
Here, we explain all tasks which are currently open for contributions from the community!
This section of the docs will be updated frequently, whereby new tasks will be added and completed tasks will be removed.
The tasks outlined here are generally broad high-level tasks, each of which is made up of many individual sub-tasks, distributed across task-specific `ToDo List Issues <https://github.com/unifyai/ivy/issues?q=is%3Aopen+is%3Aissue+label%3AToDo>`_.
Please read about :ref:`overview/contributing/the_basics:ToDo List Issues` in detail before continuing.
All tasks should be selected and allocated as described in the ToDo List Issues section.
We make no mention of task selection and allocation in the explanations below, which instead focus on the steps to complete only once a sub-task has been allocated to you.
The tasks currently open are:
#. Fixing Failing Tests
#. Function Formatting
#. Frontend APIs
#. Ivy Experimental API
We try to explain these tasks as clearly as possible, but in cases where things are not clear, then please feel free to reach out on `discord`_ in the `open tasks thread`_!
Please always use the latest commit on GitHub when working on any of these tasks, **DO NOT** develop your code using the latest PyPI release of :code:`ivy`.
Fixing Failing Tests
--------------------
We've identified a range of functions and tests that are currently failing.
The root of these issues is not always straightforward.
In some instances, the problem may lie within the function implementations, while in others, it could be the way the tests were added previously.
Certain failing tests are more urgent to fix than others, as mentioned in the sub-section below.
We encourage contributions from the community to help tackle these challenges.
How to Contribute
~~~~~~~~~~~~~~~~~
**Identifying Issues**
To get started, visit our issues page: `Failing Tests <https://github.com/unifyai/ivy/issues?q=is%3Aissue+is%3Aopen+label%3A%22Failing+Test%22+label%3A%22ToDo%22>`_.
Here, you will find a list of open issues labeled as "Failing Test" and "ToDo".
These issues are categorised under various frameworks supported by our repository.
We encourage you to select a framework you're comfortable with or interested in contributing to.
**Selecting a Test**
Within each framework, tests are classified as either "Priority Open" or "Other Open."
While we prioritize fixing the "Priority Open" tests, contributions towards any test, including those labeled "Other Open," are highly valuable.
Each test issue is linked directly to the specific failing workflow.
This linkage provides you with immediate access to the details of what exactly is failing and the context around it.
**Making Your Contribution**
After selecting a test to work on, please make the necessary changes and create a PR referring to `the basics <the_basics.rst>`_.
Ensure that your solution addresses the issue effectively and doesn't introduce new errors.
Once you're confident in your fix, submit a pull request to the main repository.
Our team will review your contribution, provide feedback if necessary, and then merge your changes once we're good to go.
**Video**
.. raw:: html
<iframe width="420" height="315" allow="fullscreen;"
src="https://www.youtube.com/embed/EmN2T_O_Ggw" class="video" allowfullscreen="true">
</iframe>
Frontend APIs
-------------
For this task, the goal will be to implement functions for each of the frontend functional APIs (see `Ivy as a Transpiler <../design/ivy_as_a_transpiler.rst>`_), with frontend APIs implemented for: :code:`JAX`, :code:`NumPy`, :code:`TensorFlow`, :code:`PyTorch`, :code:`Paddle`, :code:`Scipy`, :code:`MXNet` and :code:`MindSpore`.
Currently, we have many ToDo list issues `open <https://github.com/unifyai/ivy/issues?q=is%3Aopen+is%3Aissue+label%3AToDo+label%3A%22JAX+Frontend%22%2C%22TensorFlow+Frontend%22%2C%22PyTorch+Frontend%22%2C%22NumPy+Frontend%22+-label%3A%22Test+Sweep%22>`_ for this task.
The general workflow for this task is:
#. Find the correct location for the function by following the :ref:`overview/contributing/open_tasks:Where to place a frontend function` subsection below
#. Implement the function by following the `Ivy Frontends <../deep_dive/ivy_frontends.rst>`_ guide
#. Write tests for your function by following the `Ivy Frontend Tests <../deep_dive/ivy_frontends_tests.rst>`_ guide
#. Verify that the tests for your function are passing
If you feel as though there is an ivy function :code:`ivy.<func_name>` clearly missing, which would make your frontend function much simpler to implement, then you should first do the following:
#. Create a new issue with the title :code:`ivy.<func_name>`
#. Add the labels :code:`Suggestion`, :code:`Experimental`, :code:`Ivy API` and :code:`Next Release` to it
#. Then simply leave this issue open.
At some point, a member of our team will assess whether it should be added, and if so, they will add it to another appropriate ToDo list issue (see the open task below).
You do not need to wait for this in order to proceed.
After this, you then have two options for how to proceed:
#. Try to implement the function as a composition of currently present ivy functions, as explained in the :ref:`overview/deep_dive/ivy_frontends:Short Frontend Implementations` sub-section of the `Ivy Frontends <../deep_dive/ivy_frontends.rst>`_ guide, and add the :code:`#ToDo` comment in the implementation as explained.
Once the PR is merged, your sub-task issue will then be closed as normal.
#. Alternatively, if you do not want to try and implement the frontend function compositionally, or if this is not feasible, then you can simply choose another frontend function to work on.
You could also choose to work on another open task entirely at this point if you wanted to.
For example, you might decide to wait for a member of our team to review your suggested addition :code:`ivy.<func_name>`, and potentially add this to an Ivy Experimental ToDo list issue (see the open task below).
In either case, you should add the label "Pending other Issue" to the frontend sub-task issue, and leave it open.
This issue will then still show up as open in the original frontend ToDo list, helpfully preventing others from working on this problematic frontend function, which depends on the unimplemented :code:`ivy.<func_name>`.
Finally, you should add a comment to the issue with the contents: :code:`pending <issue_link>`, which links to the :code:`ivy.<func_name>` issue, making the "Pending other Issue" label more informative.
There are a few other points to take note of when working on your chosen frontend function:
#. You should only implement **one** frontend function.
#. The frontend function is framework-specific, thus it should be implemented in its respective frontend framework only.
#. Each frontend function should be tested on all backends to ensure that conversions are working correctly.
#. Type hints, docstrings, and examples are not required for frontend functions.
#. Some frontend functions shown in the ToDo list issues are aliases of other functions.
If you detect that this is the case, then you should add all aliases in your PR, with a single implementation and then simple bindings to this implementation, such as :code:`<alias_name> = <function_name>`.
If you notice that an alias function has already been implemented and pushed, then you can simply add this one-liner binding and get this very simple PR merged.
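As a rough sketch of what this can look like in practice, using :code:`numpy.divide` and its NumPy alias :code:`numpy.true_divide` purely as an illustration (the exact decorators required are described in the `Ivy Frontends <../deep_dive/ivy_frontends.rst>`_ guide):

.. code-block:: python

    @to_ivy_arrays_and_back
    def divide(x1, x2, /, *, out=None):
        return ivy.divide(x1, x2, out=out)

    # the alias is just a one-liner binding to the implementation
    true_divide = divide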
In the case where your chosen function exists in all frameworks by default, but is not implemented in Ivy's functional API, please convert your existing GitHub issue to request that the function be added to Ivy.
Meanwhile, you can select another frontend function to work on from the ToDo list!
If you're stuck on a function that requires complex compositions, you're allowed to reselect a function too!
Where to place a frontend function
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The placement of new frontend functions for a given backend should follow the categorisation of the backend API as faithfully as possible.
In each `issue description`_, there will be a link to the relevant `reference API`_.
Check where the function you're working on is located, e.g. :code:`numpy.inner` falls under :code:`numpy.linalg`.
Then, in the Ivy source code, check :code:`ivy/functional/frontends/[backend]` for pre-existing files which best match the function's category in the backend reference API.
Taking :code:`numpy.inner` as an example, we can see that there are a few :code:`ivy/functional/frontends/numpy` sub-directories to choose from:
.. code-block:: bash
:emphasize-lines: 4
creation_routines
fft
indexing_routines
linalg
logic
ma
manipulation_routines
mathematical_functions
matrix
ndarray
random
sorting_searching_counting
statistics
ufunc
There is a :code:`linalg` sub-directory, so we choose this.
Then we need to choose from the files in this hierarchy:
.. code-block:: bash
:emphasize-lines: 3
__init__.py
decompositions.py
matrix_and_vector_products.py
matrix_eigenvalues.py
norms_and_other_numbers.py
solving_equations_and_inverting_matrices.py
This may require a bit of reasoning.
:code:`inner` calculates the inner product of two arrays, so :code:`matrix_and_vector_products.py` seems like the most appropriate option.
It is important to note that some functions require the :code:`np.linalg.[func]` namespace, as can be gleaned from the numpy `reference API`_.
These functions are listed out under the :code:`functional/frontends/numpy/__init__.py` `imports`_.
There are some functions which have not been implemented yet, and are therefore commented out.
Once you have finished the implementation of one of these functions, uncomment it from the list.
The location of :code:`test_numpy_inner` should mirror the location of its corresponding function, this time in :code:`ivy_tests/test_ivy/test_frontends/[backend]`.
If you're unsure about where to put the function you're working on, explore the content of these files to see if you can find a similar function.
In :code:`matrix_and_vector_products.py`, we can see other functions such as :code:`outer` that are similar to :code:`inner`.
This is confirmation that we've found the correct place!
If many of the files are empty and you're unsure where to place your function, feel free to ask the member of the Ivy team reviewing your PR.
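Putting the example together, a minimal frontend implementation of :code:`numpy.inner` inside :code:`matrix_and_vector_products.py` might look like the following sketch; treat it as illustrative, and check the `Ivy Frontends <../deep_dive/ivy_frontends.rst>`_ guide for the exact decorators and conventions required:

.. code-block:: python

    # ivy/functional/frontends/numpy/linalg/matrix_and_vector_products.py
    import ivy
    from ivy.functional.frontends.numpy.func_wrapper import to_ivy_arrays_and_back

    @to_ivy_arrays_and_back
    def inner(a, b, /):
        return ivy.inner(a, b)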
Frontend checklist
~~~~~~~~~~~~~~~~~~
After creating a frontend-related Pull Request on GitHub, you will notice that a checklist is automatically added. This checklist describes the main points that need to be taken into consideration when adding a new frontend function. Please do not worry if you don't understand everything in that checklist! It's mainly there for the reviewer to make sure everything has been done correctly.
However, you can still use the checklist as a reference in cases where you do understand the content, if you find it helpful in your development efforts. In that case, feel free to update any "not completed" (marked with ❌) items of the list to "stuck" (🆘) and/or "ready for review" (✅) status. Your reviewer will make sure to guide you as needed 🙂.
**Notes**:
1. More details on how to update the checklist items can be found in the :ref:`overview/contributing/open_tasks:Formatting checklist` part of our docs.
2. Do not edit the checklist text, only the emoji symbols.
3. Please refrain from using the checkboxes next to checklist items.
Function Formatting
-------------------
Currently, we have many ToDo list issues `open <https://github.com/unifyai/ivy/issues?q=is%3Aopen+is%3Aissue+label%3A%22Function+Reformatting%22+label%3AToDo>`_ for a general function formatting task, which is explained below.
Each function in each submodule should be updated to follow the implementation instructions given in the `Deep Dive`_ section.
The updates should be applied for the:
#. ivy API
#. all backend APIs
#. array instance methods
#. container instance methods
#. array operators
#. array reverse operators
#. container operators
#. container reverse operators
The `Deep Dive`_ is an **essential** resource for learning how each of these functions/methods should be implemented.
Before starting any contribution task, you should go through the `Deep Dive`_, and familiarize yourself with the content.
At the time of writing, many of the functions are not implemented as they should be.
You will need to make changes to the current implementations, but you do not need to address *all* sections of the `Deep Dive`_ in detail.
Specifically, you **do not** need to address the following:
#. Implement the hypothesis testing for the function
#. Get the tests passing for your function, if they are failing before you start
However, everything else covered in the `Deep Dive`_ must be addressed.
Some common important tasks are:
#. Remove all :code:`lambda` and direct bindings for the backend functions (in :code:`ivy.functional.backends`), with each function instead defined using :code:`def` (see the sketch after this list).
#. Implement the following if they don't exist but should do: :class:`ivy.Array` instance method, :class:`ivy.Container` instance method, :class:`ivy.Array` special method, :class:`ivy.Array` reverse special method, :class:`ivy.Container` special method, :class:`ivy.Container` reverse special method.
#. Make sure that the aforementioned methods are added into the correct category-specific parent class, such as :class:`ivy.ArrayWithElementwise`, :class:`ivy.ContainerWithManipulation` etc.
#. Correct all of the `Function Arguments <../deep_dive/function_arguments.rst>`_ and the type hints for every function **and** its *relevant methods*, including those you did not implement yourself.
#. Add the correct `Docstrings <../deep_dive/docstrings.rst>`_ to every function **and** its *relevant methods*, including those you did not implement yourself.
#. Add thorough `Docstring Examples <../deep_dive/docstring_examples.rst>`_ for every function **and** its *relevant methods* and ensure they pass the docstring tests.
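For instance, the first point in the list above might involve a change along the following lines (the function shown is purely illustrative):

.. code-block:: python

    # Before: a direct binding in, e.g., ivy/functional/backends/torch/elementwise.py
    # tan = torch.tan

    # After: an explicit definition with a full signature
    from typing import Optional
    import torch

    def tan(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
        return torch.tan(x, out=out)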
Formatting checklist
~~~~~~~~~~~~~~~~~~~~
After creating your Pull Request on GitHub, you should then produce the checklist for the formatting task as follows:
1. Add a comment with the following format: :code:`add_reformatting_checklist_<category_name>` on your PR, where *<category_name>* is the name of the category that the function belongs to.
An example of this is shown below.
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/open_tasks/checklist_generator.png?raw=true
:width: 420
Using this formatting will then trigger our github automation bots to update your comment with the proper markdown text for the checklist.
These updates might take a few moments to take effect, so please be patient 🙂.
2. After adding the checklist to your PR, you should then modify this checklist with the status of each item according to the symbols (emojis) within the LEGEND section.
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/open_tasks/checklist_legend.png?raw=true
:width: 420
3. When all check items are marked as (✅, ⏩, or 🆘), you should request a review for your PR and we will start checking your implementation and marking the items as complete using the checkboxes next to them.
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/open_tasks/checklist_checked.png?raw=true
:width: 420
4. In case you are stuck or need help with one of the checklist items, please add the 🆘 symbol next to the item on the checklist, and proceed to add a comment elaborating on your point of struggle with this item.
The PR assignee will then see this comment and address your issues.
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/open_tasks/checklist_SOS.png?raw=true
:width: 420
**Notes**:
1. It is important that the PR author is the one to add the checklist generating comment in order to ensure they will have access to edit and update it later.
2. The checklist items' statuses should be manually updated by the PR author.
It does not automatically run any tests to update them!
3. Do not edit the checklist text, only the emoji symbols. 🙂
4. Please refrain from using the checkboxes next to checklist items.
Ivy Experimental API
--------------------
The goal of this task is to add functions to the existing Ivy API which would help with the implementation for many of the functions in the frontend.
Your task is to implement these functions in Ivy, along with their implementation in the respective backends, which are :code:`Jax`, :code:`PyTorch`, :code:`TensorFlow`, :code:`NumPy` and :code:`Paddle`.
You must also implement tests for these functions.
There is only one central ToDo list `issue <https://github.com/unifyai/ivy/issues/3856>`_ for this task.
A general workflow for these tasks would be:
#. Analyze the function type; we have a very detailed section for it in the deep dive's `Function Types Guide <../deep_dive/function_types.rst>`_
#. Every function will have a different file structure according to the function type, refer to :ref:`overview/contributing/open_tasks:Where to place a backend function` subsection below.
#. Implement the container instance method in :mod:`ivy/container/experimental/[relevant_submodule].py` and the array instance method
in :mod:`ivy/array/experimental/[relevant_submodule].py`
#. Write tests for the function using the `Ivy Tests <../deep_dive/ivy_tests.rst>`_ guide, and make sure they are passing.
A few points to keep in mind while doing this:
#. Make sure all the positional arguments are positional-only and optional arguments are keyword-only.
#. In case some tests require function-specific parameters, you can create composite hypothesis strategies using the :code:`draw` function from the hypothesis library, as sketched below.
If you're stuck on a function which requires complex compositions, feel free to reselect a function.
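As a rough illustration of such a composite strategy (the strategy name and bounds here are hypothetical, not taken from Ivy's test suite):

.. code-block:: python

    import hypothesis.strategies as st

    @st.composite
    def _size_and_axis(draw):
        # draw a size first, then an axis that is valid for that size
        size = draw(st.integers(min_value=1, max_value=5))
        axis = draw(st.integers(min_value=0, max_value=size - 1))
        return size, axis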
Extending the Ivy API
~~~~~~~~~~~~~~~~~~~~~~~
We primarily invite contributors to work on the tasks listed as :ref:`overview/contributing/open_tasks:Open Tasks`, as these are on our current roadmap. As a result of this, we prompt everyone interested in contributing to our Experimental API to do so under the :ref:`Ivy Experimental API Open Task <overview/contributing/open_tasks:Ivy Experimental API>`.
However, if you would like to extend Ivy's functionality with a new function, you are invited to open an issue using the *Missing Function Suggestion* template as described in :ref:`overview/contributing/open_tasks:Creating an Issue on Ivy's GitHub using a Template`.
In this template form, you'll be asked to fill in the reason you think we should implement the suggested function, as well as the links to any native implementations of the suggested function.
We will review your issue as soon as possible and let you know if it's been accepted or not. In case we deem that the suggested function fits our roadmap, we will add it as a subtask to the :ref:`Ivy Experimental API Open Task <overview/contributing/open_tasks:Ivy Experimental API>`.
Where to place a backend function
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The backend function should be placed in the proper location, following the structure outlined below.
There are multiple types of backend functions, as discussed above; we will go through three of those which you will encounter while adding a backend function to our Functional API:
**Primary Functions**
Implement the function in :mod:`ivy/functional/ivy/experimental/[relevant_submodule].py`, simply deferring to its backend-specific implementation (where :code:`ivy.current_backend(x).function_name()` is called); refer to the :ref:`Ivy API Guide <overview/deep_dive/navigating_the_code:Ivy API>` to get a clearer picture of how this must be done. Then, implement the function in each of the backend files :mod:`ivy/functional/backends/backend_name/experimental/[relevant_submodule].py`; you can refer to the :ref:`Backend API Guide <overview/deep_dive/navigating_the_code:Backend API>` for this.
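A stripped-down sketch of this pattern, using :code:`logaddexp2` purely as an illustration and omitting the usual decorators and type hints:

.. code-block:: python

    # ivy/functional/ivy/experimental/elementwise.py (sketch)
    def logaddexp2(x1, x2, /, *, out=None):
        return ivy.current_backend(x1, x2).logaddexp2(x1, x2, out=out)

    # ivy/functional/backends/torch/experimental/elementwise.py (sketch)
    import torch

    def logaddexp2(x1, x2, /, *, out=None):
        return torch.logaddexp2(x1, x2, out=out)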
**Compositional Functions**
Implement the function in :mod:`ivy/functional/ivy/experimental/[relevant_submodule].py`. We will not use the primary function approach in this case; the implementation will be a composition of functions from Ivy's functional API. You can refer to :ref:`overview/deep_dive/function_types:Compositional Functions` for a better understanding of this.
You don't need to add any implementation in any other file in this case.
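A compositional sketch, using :code:`fmax` purely as an illustration (the real implementation may differ); note that no backend files need to change:

.. code-block:: python

    # ivy/functional/ivy/experimental/elementwise.py (sketch)
    def fmax(x1, x2, /, *, out=None):
        # NaNs are ignored where the other operand is not NaN
        x1 = ivy.where(ivy.isnan(x1), x2, x1)
        x2 = ivy.where(ivy.isnan(x2), x1, x2)
        return ivy.maximum(x1, x2, out=out)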
**Mixed Functions**
Sometimes, a function may only be provided by some of the supported backends. In this case, we have to take a mixed approach; you can think of this as a mix of both a primary and a compositional function. For this, you have to implement the function in :mod:`ivy/functional/ivy/experimental/[relevant_submodule].py`, where the implementation will be a composition of functions from Ivy's functional API. After you are done with this, you then have to implement the function in each of the backend files
:mod:`ivy/functional/backends/backend_name/experimental/[relevant_submodule].py`.
**Other Function Types**
:ref:`overview/deep_dive/function_types:Standalone Functions`, :ref:`overview/deep_dive/function_types:Nestable Functions` and
:ref:`overview/deep_dive/function_types:Convenience Functions` are the ones which you will rarely come across
while implementing a function from the ToDo List but they are an essential part of the Ivy API.
Creating an Issue on Ivy's GitHub using a Template
----------------------------------------------------
#. Go to the `GitHub Ivy <https://github.com/unifyai/ivy>`_ page, select the Issues tab, and click on the green button :code:`New issue` at the centre-right of the screen.
#. You will see 5 options. Each option has a predetermined form. To start filling in the form, click on the green button at the right which says :code:`Get started`. The options are explained as follows:
* Bug Report:
In case you find a bug in our API, you have to provide details in the form and the issue will be assigned to one of our team members to look into.
* Feature request:
If you want to suggest an idea for our project, our team is always open to suggestions.
* Missing Function Suggestion:
In case you find a function that the other frameworks have but that is missing in our API, or some functionality that the other frameworks support is missing from ours (superset behavior).
* Sub-Task:
Reserve a sub-task from a ToDo list issue.
* Questions:
If you want to interact with the Ivy community to ask for any type of help, discuss ideas, and more!
#. To submit your issue, you will have to complete the requirements in the form and click on the green button :code:`Submit new issue` at the bottom-right of the screen.
**Round Up**
This should have hopefully given you a good understanding of the basics for contributing.
If you have any questions, please feel free to reach out on `discord`_ in the `open tasks thread`_!
| ivy/docs/overview/contributing/open_tasks.rst/0 | {
"file_path": "ivy/docs/overview/contributing/open_tasks.rst",
"repo_id": "ivy",
"token_count": 6374
} | 3 |
Fix Failing Tests
==============================
.. _`repo`: https://github.com/unifyai/ivy
.. _`issues`: https://github.com/unifyai/ivy/issues?q=is%3Aopen+is%3Aissue+label%3A%22Failing+Test%22
.. _`issue`: https://github.com/unifyai/ivy/issues/25849
.. _`discord`: https://discord.gg/sXyFF8tDtm
.. _`docker thread`: https://discord.com/channels/799879767196958751/1186629067966009424
.. _`miniconda`: https://docs.conda.io/en/latest/miniconda.html
.. _`venv`: https://docs.python.org/3/library/venv.html
.. _`ivy/scripts/shell`: https://github.com/unifyai/ivy/tree/f71a414417646e1dfecb5de27fb555f80333932c/scripts/shell
.. _`platform compatibility tags`: https://packaging.python.org/en/latest/specifications/platform-compatibility-tags/
.. _`logging level`: https://docs.python.org/3/library/logging.html#logging.Logger.setLevel
.. _`pycharm thread`: https://discord.com/channels/799879767196958751/1186628916522262629
.. _`pre-commit thread`: https://discord.com/channels/799879767196958751/1186629635694399539
.. _`pip packages thread`: https://discord.com/channels/799879767196958751/1186629837515935765
.. _`ivy tests thread`: https://discord.com/channels/799879767196958751/1189907526226034698
.. _`ivy frontend tests thread`: https://discord.com/channels/799879767196958751/1190246804940402738
We're really happy you'd like to learn how to contribute towards Ivy 🙂
This page explains the main steps to get started with fixing failing tests!
Prerequisites
**************************
Before you start with this, you should have:
#. `Git <https://git-scm.com/book/en/v2/Getting-Started-Installing-Git>`_
#. `Visual Studio Code <https://code.visualstudio.com/>`_
#. `Docker Desktop <https://www.docker.com/products/docker-desktop>`_
Setting Up
***********
**Forking and cloning the repo**
#. `Fork Ivy Repo <https://github.com/unifyai/ivy/fork>`_
#. `Clone <https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository>`_ the fork with its submodules locally or on Codespaces
.. dropdown:: If you are new to Git:
Depending on your preferred mode of cloning, any of the below should work:
.. code-block:: bash
git clone --recurse-submodules [email protected]:YOUR_USERNAME/ivy.git
.. code-block:: bash
git clone --recurse-submodules https://github.com/YOUR_USERNAME/ivy.git
.. code-block:: bash
gh repo clone YOUR_USERNAME/ivy your_folder -- --recurse-submodules
Then enter your cloned ivy folder, for example :code:`cd ~/ivy`, and add the original Ivy repository as upstream, to easily sync with the latest changes.
.. code-block:: bash
git remote add upstream https://github.com/unifyai/ivy.git
.. dropdown:: **Windows, docker and VsCode**
#. Open Docker Desktop, and make sure it's running in the background while following the process below.
#. Open Ivy repo folder with Visual Studio Code, and follow the next steps:
a. A window will pop up at the bottom right asking to install the "Dev Containers" extension; install it.
In case the window doesn't pop up, search for the "Dev Containers" extension in Visual Studio Code and install it from there.
b. Install the "Docker" extension for Visual Studio Code; you'll easily find it by searching "docker" in the extensions tab.
c. Once done, restart Visual Studio Code. At the bottom left corner there will be an icon that looks like "><".
d. Clicking on it will open a bar at the top which gives you the option "Open Folder in Container..."; click on that.
e. Run tests with the command :code:`pytest test_file_path::test_fn_name`. You are now inside the container, and you can locally run the tests that you've modified.
.. warning::
Opening the container may take a long time, as the Docker image is very large (5+ GB).
How to run tests
****************
To find tests which are currently failing, open the `issues`_ on our GitHub.
You can see that :code:`test_jax_transpose` is failing in this `issue`_; this function is in the JAX frontend's manipulation submodule.
To run a test locally, you need to run the following command:
:code:`pytest test_file_path::test_fn_name`
In the case of :code:`test_jax_transpose`, the command will be
.. code-block:: bash
pytest ivy_tests/test_ivy/test_frontends/test_jax/test_numpy/test_manipulations.py::test_jax_transpose
You will need to read through the errors in the terminal and use the common errors in the list at the end of this page to solve the test.
.. dropdown:: **Setting Up Testing for VS Code**
The steps to set up testing on VS Code are as follows:
1. In the left toolbar menu, click on the flask icon, select "Configure Python Tests", and select PyTest as the test framework.
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/setting_up/vs_code_testing_setup/vs_testing_01.png?raw=true
:width: 420
2. Select ivy_tests as the root directory for testing.
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/setting_up/vs_code_testing_setup/vs_testing_02.png?raw=true
:width: 420
3. Configure the _array_module.py file in the array_api_tests to be set to one of the supported frameworks.
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/setting_up/vs_code_testing_setup/vs_testing_03.png?raw=true
:width: 420
4. Following all of this, you should refresh the test suite and you should now be able to run tests right from VS Code!
5. To simply run the tests using the play button in the toolbar, you will need to add the .vscode folder to your workspace. Then add the ``settings.json`` file containing the following:
.. code-block:: json
{
"python.testing.pytestArgs": [
"./ivy_tests/test_ivy/",
"./ivy_tests/array_api_testing/test_array_api/",
"--continue-on-collection-errors",
],
"python.testing.unittestEnabled": false,
"python.testing.pytestEnabled": true,
"python.testing.autoTestDiscoverOnSaveEnabled": true,
}
Common Errors
*************
This section aims to assist you in navigating through some common errors you might encounter while working with Ivy's Functional API. We'll go through :code:`test_jax_transpose` and then some common errors which you might encounter while working as a contributor or a developer.
#. Starting off with :code:`test_jax_transpose`, the test fails because the target backend (paddle, in the falsifying example below) raises an exception while computing the result, so its output cannot match the ground truth.
.. code-block:: python
E ivy.utils.exceptions.IvyBackendException: paddle: to_numpy: paddle: default_device: paddle: dev: (PreconditionNotMet) Tensor not initialized yet when DenseTensor::place() is called.
E [Hint: holder_ should not be null.] (at /paddle/paddle/phi/core/dense_tensor_impl.cc:61)
E
E Falsifying example: test_jax_transpose(
E on_device='cpu',
E frontend='jax',
E backend_fw='paddle',
E array_and_axes=(array([], shape=(1, 0), dtype=complex64),
E ['complex64'],
E None),
E test_flags=FrontendFunctionTestFlags(
E num_positional_args=0,
E with_out=False,
E inplace=False,
E as_variable=[False],
E native_arrays=[False],
E test_trace=False,
E generate_frontend_arrays=False,
E transpile=False,
E precision_mode=True,
E ),
E fn_tree='ivy.functional.frontends.jax.numpy.transpose',
E )
E
E You can reproduce this example by temporarily adding @reproduce_failure('6.87.3', b'AAEGBAEGAQAAAAAAAAAAAAAB') as a decorator on your test case
**Solution:**
As the test is failing for the paddle backend, which produces a different result than the ground truth, it is most likely a bug in :code:`permute_dims` in the paddle backend, which is being used in this frontend function.
Now let's explore some other common errors you might face.
#. This is the case where we pass in a dtype to `torch` which is not actually supported by torch's native framework itself.
.. code-block:: python
E RuntimeError: "logaddexp2_cpu" not implemented for 'Half'
E Falsifying example: test_logaddexp2(
E backend_fw='torch',
E on_device='cpu',
E dtype_and_x=(['float16', 'float16'],
E [array([-1.], dtype=float16), array([-1.], dtype=float16)]),
E test_flags=FunctionTestFlags(
E ground_truth_backend='tensorflow',
E num_positional_args=2,
E with_out=False,
E instance_method=False,
E test_gradients=False,
E test_trace=None,
E as_variable=[False],
E native_arrays=[False],
E container=[False],
E ),
E fn_name='logaddexp2',
E )
E
E You can reproduce this example by temporarily adding @reproduce_failure('6.82.4', b'AXicY2BkAAMoBaaR2WAAAACVAAY=') as a decorator on your test case
**Solution:**
As we are explicitly passing in a `dtype` which is not supported by the torch framework itself, the torch backend fails here; a possible fix is adding the dtype to the unsupported dtypes decorator, which would look something like this:
.. code-block:: python
@with_unsupported_dtypes({"2.0.1 and below": ("float16",)}, backend_version)
and place it above the function definition.
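In context, the decorated backend function might then look something like this sketch (the function body shown is illustrative):

.. code-block:: python

    @with_unsupported_dtypes({"2.0.1 and below": ("float16",)}, backend_version)
    def logaddexp2(x1, x2, /, *, out=None):
        return torch.logaddexp2(x1, x2, out=out)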
#. This is the case where the value from the ground-truth backend (tensorflow) does not match the value from the backend (jax) we are testing in this case.
.. code-block:: python
E AssertionError: the results from backend jax and ground truth framework tensorflow do not match
E 0.25830078125!=0.258544921875
E
E
E Falsifying example: test_acosh(
E backend_fw='jax',
E on_device='cpu',
E dtype_and_x=(['float16'], [array(4., dtype=float16)]),
E test_flags=FunctionTestFlags(
E ground_truth_backend='tensorflow',
E num_positional_args=1,
E with_out=False,
E instance_method=False,
E test_gradients=True,
E test_trace=None,
E as_variable=[False],
E native_arrays=[False],
E container=[False],
E ),
E fn_name='acosh',
E )
E
E You can reproduce this example by temporarily adding @reproduce_failure('6.82.4', b'AXicY2BAABYQwQgiAABDAAY=') as a decorator on your test case
**Solution:**
As both results are pretty close to each other in this case, adding an `rtol = 1e-2` and `atol = 1e-2` would fix the failing tests here, as shown below.
.. code-block:: python
@handle_test(
fn_tree="functional.ivy.acosh",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_value=1,
large_abs_safety_factor=4,
small_abs_safety_factor=4,
),
)
def test_acosh(*, dtype_and_x, test_flags, backend_fw, fn_name, on_device):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
rtol_=1e-2,
atol_=1e-2,
x=x[0],
)
#. This is a similar assertion to the one stated in point 3, but with torch and the ground-truth tensorflow not matching. Here the matrices are quite different, so there should be an issue in the backends rather than a numerical instability.
.. code-block:: python
E AssertionError: the results from backend torch and ground truth framework tensorflow do not match
E [[1.41421356 1.41421356 1.41421356]
E [1.41421356 1.41421356 1.41421356]
E [1.41421356 inf 1.41421356]]!=[[1.41421356e+000 1.41421356e+000 1.41421356e+000]
E [1.41421356e+000 1.41421356e+000 1.41421356e+000]
E [1.41421356e+000 1.34078079e+154 1.41421356e+000]]
E
E
E Falsifying example: test_abs(
E backend_fw='torch',
E on_device='cpu',
E dtype_and_x=(['complex128'],
E [array([[-1.-1.00000000e+000j, -1.-1.00000000e+000j, -1.-1.00000000e+000j],
E [-1.-1.00000000e+000j, -1.-1.00000000e+000j, -1.-1.00000000e+000j],
E [-1.-1.00000000e+000j, -1.-1.34078079e+154j, -1.-1.00000000e+000j]])]),
E fn_name='abs',
E test_flags=FunctionTestFlags(
E ground_truth_backend='tensorflow',
E num_positional_args=1,
E with_out=False,
E instance_method=False,
E test_gradients=False,
E test_trace=None,
E as_variable=[False],
E native_arrays=[False],
E container=[False],
E ),
E )
E
E You can reproduce this example by temporarily adding @reproduce_failure('6.82.4', b'AXicY2ZkYAIiBiBgZIAAxqHEXsAAB7jUQAAAMtEAzQ==') as a decorator on your test case
**Solution:**
If this is passing for all other backends and just failing for torch, and the result matrices are also different (which indicates that this is not a numerical instability), then the issue lies with the torch backend. The best approach in this case is to inspect the torch backend implementation of the function; there should be an issue there, and you will have to correct it.
Where to ask for Help
*********************
The best place to ask for help is our `discord`_ server, in the relevant channels. For instance, let's say you're facing an issue with the :code:`test_jax_transpose` function; in this case you should post your query in the `ivy frontend tests thread`_.
| ivy/docs/overview/deep_dive/fix_failing_tests.rst/0 | {
"file_path": "ivy/docs/overview/deep_dive/fix_failing_tests.rst",
"repo_id": "ivy",
"token_count": 6437
} | 4 |
Ivy as a Framework
==================
On the `Building Blocks <building_blocks.rst>`_ page, we explored the role of the Backend functional APIs, the Ivy functional API, the Backend handler, and the Tracer.
These are parts labeled as (a) in the image below.
On the `Ivy as a Transpiler <ivy_as_a_transpiler.rst>`_ page, we explained the role of the backend-specific frontends in Ivy, and how these enable automatic code conversions between different ML frameworks.
This part is labeled as (b) in the image below.
So far, by considering parts (a) and (b), we have mainly treated Ivy as a fully functional framework with code conversion abilities.
Ivy builds on these primitives to create a fully-fledged ML framework with stateful classes, optimizers, and convenience tools to get ML experiments running in very few lines of code.
Specifically, here we consider the :class:`ivy.Container` class, the :class:`ivy.Array` class and the stateful API.
These parts are labeled as (c) in the image below.
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/design/submodule_dependency_graph.png?raw=true
:align: center
:width: 100%
You may choose from the following upcoming discussions or click next.
| (a) `Ivy Container <ivy_as_a_framework/ivy_container.rst>`_
| Hierarchical container solving almost everything behind the scenes in Ivy
|
| (b) `Ivy Stateful API <ivy_as_a_framework/ivy_stateful_api.rst>`_
| Trainable Layers, Modules, Optimizers, and more built on the functional API and the Ivy Container
|
| (c) `Ivy Array <ivy_as_a_framework/ivy_array.rst>`_
| Bringing methods as array attributes to Ivy, cleaning up and simplifying code
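As a tiny taste of the stateful API mentioned above, a trainable module might be sketched as follows (the network shape and names here are illustrative, not from the Ivy docs):

.. code-block:: python

    import ivy

    class Regressor(ivy.Module):
        # sub-modules are created before calling super().__init__()
        def __init__(self, input_dim, output_dim):
            self._linear = ivy.Linear(input_dim, output_dim)
            super().__init__()

        def _forward(self, x):
            return self._linear(x)

    ivy.set_backend("torch")
    model = Regressor(input_dim=3, output_dim=1)
    y = model(ivy.random_uniform(shape=(8, 3)))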
.. toctree::
:hidden:
:maxdepth: -1
:caption: Ivy as a Framework
ivy_as_a_framework/ivy_container.rst
ivy_as_a_framework/ivy_stateful_api.rst
ivy_as_a_framework/ivy_array.rst
**Round Up**
Hopefully, this has given you a good idea of how Ivy can be used as a fully-fledged ML framework.
Please reach out on `discord <https://discord.gg/sXyFF8tDtm>`_ if you have any questions!
| ivy/docs/overview/design/ivy_as_a_framework.rst/0 | {
"file_path": "ivy/docs/overview/design/ivy_as_a_framework.rst",
"repo_id": "ivy",
"token_count": 646
} | 5 |
``ivy.unify()``
===============
..
    ⚠️ **Warning**: The tracer and the transpiler are not publicly available yet, so certain parts of this doc won't work as expected as of now!
Ivy's Unify function is an alias for ``ivy.transpile(..., to="ivy", ...)``. You can learn
more about the transpiler on the `transpile() <transpile.rst>`_ page.
Unify API
---------
.. py:function:: ivy.unify(*objs, source=None, args=None, kwargs=None, **transpile_kwargs)
Transpiles an object into Ivy code. It's an alias to
``ivy.transpile(..., to="ivy", ...)``
:param objs: Native callable(s) to transpile.
:type objs: ``Callable``
:param source: The framework that ``obj`` is from. This must be provided unless ``obj`` is a framework-specific module.
:type source: ``Optional[str]``
:param args: If specified, arguments that will be used to unify eagerly.
:type args: ``Optional[Tuple]``
:param kwargs: If specified, keyword arguments that will be used to unify eagerly.
:type kwargs: ``Optional[dict]``
:param transpile_kwargs: Arbitrary keyword arguments that will be passed to ``ivy.transpile``.
:rtype: ``Union[Graph, LazyGraph, ModuleType, ivy.Module]``
:return: A transpiled ``Graph`` or a non-initialized ``LazyGraph``. If the object is a native trainable module, the corresponding module in the target framework will be returned. If the object is a ``ModuleType``, the function will return a copy of the module with every method lazily transpiled.
Usage
-----
As we mentioned, ``ivy.unify()`` is an alias for ``ivy.transpile(..., to="ivy", ...)``.
So you can use it in the same way as ``ivy.transpile()``. In this case, instead of
getting a graph composed of functions from the functional API of the target framework,
the function will return a graph fully composed of ivy functions, allowing you to run
the graph in any framework directly.
.. code-block:: python
import ivy
import jax
ivy.set_backend("jax")
def test_fn(x):
return jax.numpy.sum(x)
x1 = ivy.array([1., 2.])
# transpiled_func and unified_func will have the same result
transpiled_func = ivy.transpile(test_fn, to="ivy", args=(x1,))
unified_func = ivy.unify(test_fn, args=(x1,))
Sharp bits
----------
``ivy.unify()`` has the same sharp bits as ``ivy.transpile()``. You can learn more about
them in the :ref:`overview/one_liners/transpile:Sharp bits` section of the transpiler.
Examples
--------
Below, we will define a function in torch and try to call it with different native
arguments.
Here we will define the torch function and unify it:
.. code-block:: python
import ivy
import torch
def normalize(x):
mean = torch.mean(x)
std = torch.std(x)
return torch.div(torch.sub(x, mean), std)
normalize = ivy.unify(normalize, source="torch")
Now we can call the function with different ivy backends:
.. code-block:: python
import numpy as np
import jax.numpy as jnp
import tensorflow as tf
# create random numpy arrays for testing
x = np.random.uniform(size=10).astype(np.float32)
ivy.set_backend("numpy")
print(normalize(x))
# jax
x_ = jnp.array(x)
ivy.set_backend("jax")
print(normalize(x_))
# tensorflow
x_ = tf.constant(x)
ivy.set_backend("tensorflow")
print(normalize(x_))
# torch
x_ = torch.tensor(x)
ivy.set_backend("torch")
print(normalize(x_))
| ivy/docs/overview/one_liners/unify.rst/0 | {
"file_path": "ivy/docs/overview/one_liners/unify.rst",
"repo_id": "ivy",
"token_count": 1136
} | 6 |
# This shell script is required by the doc-builder. Moving it might break
# the doc-building pipeline
pip install -e .
pip install -r requirements/requirements.txt
if [[ $(arch) == 'arm64' ]]; then
brew install pandoc
pip install -r requirements/optional_apple_silicon_1.txt
pip install -r requirements/optional_apple_silicon_2.txt
else
sudo apt-get update
sudo apt-get install pandoc -y
pip install -r requirements/optional.txt
fi
| ivy/install_dependencies.sh/0 | {
"file_path": "ivy/install_dependencies.sh",
"repo_id": "ivy",
"token_count": 150
} | 7 |
# global
import abc
from typing import Optional, Union, Literal
# local
import ivy
class _ArrayWithActivationsExperimental(abc.ABC):
def logit(
self,
/,
*,
eps: Optional[float] = None,
complex_mode: Literal["split", "magnitude", "jax"] = "jax",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.logit. This method simply
wraps the function, and so the docstring for ivy.logit also applies to
this method with minimal changes.
Parameters
----------
self
Input array.
eps
When eps is None the function outputs NaN where x < 0 or x > 1,
and inf or -inf where x = 1 or x = 0, respectively.
Otherwise if eps is defined, x is clamped to [eps, 1 - eps]
complex_mode
optional specifier for how to handle complex data types. See
``ivy.func_wrapper.handle_complex_input`` for more detail.
out
Optional output array.
Returns
-------
ret
Array containing elementwise logits of x.
Examples
--------
>>> x = ivy.array([1, 0, 0.9])
>>> z = x.logit()
>>> print(z)
ivy.array([ inf, -inf, 2.19722438])
>>> x = ivy.array([1, 2, -0.9])
>>> z = x.logit(eps=0.2)
>>> print(z)
ivy.array([ 1.38629448, 1.38629448, -1.38629436])
"""
return ivy.logit(self, eps=eps, complex_mode=complex_mode, out=out)
def thresholded_relu(
self: ivy.Array,
/,
*,
threshold: Union[int, float] = 0,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.thresholded_relu. This
method simply wraps the function, and so the docstring for
ivy.thresholded_relu also applies to this method with minimal changes.
Parameters
----------
self
input array.
threshold
threshold value above which the activation is linear. Default: ``0``.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
an array with the relu activation function applied element-wise
with custom threshold.
Examples
--------
>>> x = ivy.array([-1., .2, 1.])
>>> y = x.thresholded_relu(threshold=0.5)
>>> print(y)
ivy.array([0., 0., 1.])
"""
return ivy.thresholded_relu(self._data, threshold=threshold, out=out)
def prelu(
self,
slope: Union[float, ivy.NativeArray, ivy.Array],
/,
*,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Prelu takes input data (Array) and slope array as input,
and produces one output data (array) where the function
f(x) = slope * x for x < 0, f(x) = x for x >= 0., is applied
to the data array elementwise. This operator supports unidirectional
broadcasting (array slope should be unidirectional broadcastable to
input tensor X);
Parameters
----------
self
input array.
slope
Slope Array. The shape of slope can be smaller than first input X;
if so, its shape must be unidirectional broadcastable to X.
out
Optional output array.
Returns
-------
ret
input array with prelu applied elementwise.
"""
return ivy.prelu(self._data, slope, out=out)
def relu6(
self,
/,
*,
complex_mode: Literal["split", "magnitude", "jax"] = "jax",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Apply the rectified linear unit 6 function element-wise.
Parameters
----------
self
input array
complex_mode
optional specifier for how to handle complex data types. See
``ivy.func_wrapper.handle_complex_input`` for more detail.
out
optional output array, for writing the result to.
It must have a shape that the inputs broadcast to.
Returns
-------
ret
an array containing the rectified linear unit 6 activation
of each element in input.
Examples
--------
With :class:`ivy.Array` input:
>>> x = ivy.array([-1., 0., 1., 2., 3., 4., 5., 6., 7.])
>>> y = x.relu6()
>>> print(y)
ivy.array([0., 0., 1., 2., 3., 4., 5., 6., 6.])
>>> x = ivy.array([-1., 0., 1., 2., 3., 4., 5., 6., 7.])
>>> y = ivy.zeros(9)
>>> x.relu6(out = y)
>>> print(y)
ivy.array([0., 0., 1., 2., 3., 4., 5., 6., 6.])
"""
return ivy.relu6(self._data, complex_mode=complex_mode, out=out)
def logsigmoid(
self: ivy.Array,
complex_mode: Literal["split", "magnitude", "jax"] = "jax",
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.logsigmoid. This method
simply wraps the function, and so the docstring for ivy.logsigmoid also
applies to this method with minimal changes.
Parameters
----------
self
Input array.
complex_mode
optional specifier for how to handle complex data types. See
``ivy.func_wrapper.handle_complex_input`` for more detail.
Returns
-------
Array with same shape as input with Log-sigmoid applied to every element.
Examples
--------
>>> x = ivy.array([-1., 2., 4., -10.])
>>> z = x.logsigmoid()
>>> print(z)
ivy.array([ -1.31326175, -0.126928 , -0.01814993, -10.00004578])
>>> x = ivy.array([-2.5, 1., 0, 4.5])
>>> z = x.logsigmoid()
>>> print(z)
ivy.array([-2.57888985, -0.31326169, -0.69314718, -0.01104775])
"""
return ivy.logsigmoid(self._data, complex_mode=complex_mode)
def selu(self, /, *, out: Optional[ivy.Array] = None) -> ivy.Array:
"""Apply the scaled exponential linear unit function element-wise.
Parameters
----------
self
input array
out
optional output array, for writing the result to.
It must have a shape that the inputs broadcast to.
Returns
-------
ret
an array containing the scaled exponential linear unit activation
of each element in input.
Examples
--------
With :class:`ivy.Array` input:
>>> x = ivy.array([-1., 0., 1., 2., 3., 4., 5., 6., 7.])
>>> y = x.selu()
>>> print(y)
ivy.array([-1.11133075, 0., 1.05070102, 2.10140204, 3.15210295,
4.20280409, 5.25350523, 6.30420589, 7.35490704])
>>> x = ivy.array([-1., 0., 1., 2., 3., 4., 5., 6., 7.])
>>> y = ivy.zeros(9)
>>> x.selu(out = y)
>>> print(y)
ivy.array([-1.11133075, 0., 1.05070102, 2.10140204, 3.15210295,
4.20280409, 5.25350523, 6.30420589, 7.35490704])
"""
return ivy.selu(self._data, out=out)
def silu(self: ivy.Array, /, *, out: Optional[ivy.Array] = None) -> ivy.Array:
"""ivy.Array instance method variant of ivy.silu. This method simply
wraps the function, and so the docstring for ivy.silu also applies to
this method with minimal changes.
Parameters
----------
self
input array.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Examples
--------
>>> x = ivy.array([-1., 0., 1.])
>>> y = x.silu()
>>> print(y)
ivy.array([-0.26894143, 0. , 0.73105854])
"""
return ivy.silu(self._data, out=out)
def elu(
self,
/,
*,
alpha: float = 1.0,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Ivy.Array instance method variant of ivy.elu. This method simply
wraps the function, and so the docstring for ivy.elu also applies to
this method with minimal.
Parameters
----------
self
input array.
alpha
scaler for controlling the slope of the function for x <= 0 Default: 1.0
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
an array with the elu activation function applied element-wise.
Examples
--------
>>> x = ivy.array([0.39, -0.85])
>>> y = x.elu()
>>> print(y)
ivy.array([ 0.39, -0.57])
"""
return ivy.elu(self._data, alpha=alpha, out=out)
def hardtanh(
self: ivy.Array,
/,
*,
max_val: float = 1,
min_val: float = -1,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.hardtanh. This method
simply wraps the function, and so the docstring for ivy.hardtanh also
applies to this method with minimal changes.
Parameters
----------
self
input array.
min_val
minimum value of the linear region range. Default: -1.
max_val
maximum value of the linear region range. Default: 1.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
an array with the hardtanh activation function applied element-wise
with custom linear region range.
Examples
--------
>>> x = ivy.array([-1., .2, 1.])
>>> y = x.hardtanh()
>>> print(y)
ivy.array([-1. , 0.2, 1. ])
"""
return ivy.hardtanh(self._data, min_val=min_val, max_val=max_val, out=out)
def tanhshrink(self: ivy.Array, /, *, out: Optional[ivy.Array] = None) -> ivy.Array:
"""ivy.Array instance method variant of ivy.tanhshrink. This method
simply wraps the function, and so the docstring for ivy.tanhshrink also
applies to this method with minimal changes.
Parameters
----------
self
input array.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Examples
--------
>>> x = ivy.array([-1., 0., 1.])
>>> y = x.tanhshrink()
>>> print(y)
ivy.array([-0.23840582, 0. , 0.23840582])
"""
return ivy.tanhshrink(self._data, out=out)
def threshold(
self: ivy.Array,
/,
*,
threshold: Union[int, float],
value: Union[int, float],
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.threshold. This method
simply wraps the function, and so the docstring for ivy.threshold also
applies to this method with minimal changes.
Parameters
----------
self
input array.
threshold
threshold value for thresholding operation.
value
value to replace with if thresholding condition is not met.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
an array with the thresholding function applied element-wise.
Examples
--------
>>> x = ivy.array([-1., 0., 1.])
>>> y = x.threshold(threshold=0.5, value=0.0)
>>> print(y)
ivy.array([0., 0., 1.])
"""
return ivy.threshold(self._data, threshold=threshold, value=value, out=out)
def softshrink(
self: ivy.Array,
/,
*,
lambd: float = 0.5,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.softshrink. This method
simply wraps the function, and so the docstring for ivy.softshrink also
applies to this method with minimal changes.
Parameters
----------
self
input array.
lambd
the value of the lower bound of the linear region range.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
an array with the softshrink activation function applied element-wise.
Examples
--------
>>> x = ivy.array([-1., 0., 1.])
>>> y = x.softshrink()
>>> print(y)
ivy.array([-0.5, 0. , 0.5])
>>> x = ivy.array([-1., 0., 1.])
>>> y = x.softshrink(lambd=1.0)
>>> print(y)
ivy.array([0., 0., 0.])
"""
return ivy.softshrink(self._data, lambd=lambd, out=out)
def celu(
self: ivy.Array,
/,
*,
alpha: float = 1.0,
complex_mode: Literal["split", "magnitude", "jax"] = "jax",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.celu. This method simply
wraps the function, and so the docstring for ivy.celu also applies to
this method with minimal changes.
Parameters
----------
self
input array.
alpha
the alpha (negative slope) value for CELU formulation.
complex_mode
optional specifier for how to handle complex data types. See
``ivy.func_wrapper.handle_complex_input`` for more detail.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
an array with the celu activation function applied element-wise.
Examples
--------
>>> x = ivy.array([0.39, -0.85])
>>> y = x.celu()
>>> print(y)
ivy.array([ 0.39, -0.57])
"""
return ivy.celu(self._data, alpha=alpha, complex_mode=complex_mode, out=out)
def scaled_tanh(
self: ivy.Array,
/,
*,
alpha: float = 1.7159,
beta: float = 0.67,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.scaled_tanh. This method
simply wraps the function, and so the docstring for ivy.scaled_tanh
also applies to this method with minimal changes.
Parameters
----------
self
input array.
alpha
The scaling parameter for the output.
Determines the amplitude of the tanh function.
Default: 1.7159
beta
The scaling parameter for the input.
Determines the slope of the tanh function.
Default: 0.67
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
an array after applying the scaled_tanh activation.
Examples
--------
>>> x = ivy.array([-3., 2., 3.])
>>> x.scaled_tanh()
ivy.array([-1.65537548, 1.49570239, 1.65537548])
>>> x = ivy.array([2., 2., 2.])
>>> x.scaled_tanh(alpha=9, beta=0.1)
ivy.array([1.77637792, 1.77637792, 1.77637792])
>>> x = ivy.array([2., 2., 2.])
>>> x.scaled_tanh(alpha=0.1, beta=9)
ivy.array([0.1, 0.1, 0.1])
"""
return ivy.scaled_tanh(self._data, alpha=alpha, beta=beta, out=out)
def hardshrink(
self: ivy.Array,
/,
*,
lambd: float = 0.5,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.hardshrink. This method
simply wraps the function, and so the docstring for ivy.hardshrink also
applies to this method with minimal changes.
Parameters
----------
self
input array.
lambd
the lambd value for the Hardshrink formulation
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
an array with the hardshrink activation function applied element-wise.
Examples
--------
>>> x = ivy.array([-1., 0., 1.])
>>> y = x.hardshrink()
>>> print(y)
ivy.array([-1., 0., 1.])
>>> x = ivy.array([-1., 0., 1.])
>>> y = x.hardshrink(lambd=1.0)
>>> print(y)
ivy.array([0., 0., 0.])
"""
return ivy.hardshrink(self._data, lambd=lambd, out=out)
def hardsilu(self, out: Optional[ivy.Array] = None) -> ivy.Array:
"""ivy.Array instance method which acts as a wrapper for ivy.hardsilu.
Parameters
----------
self
input array
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
        ret
            an array containing the output of the hardsilu/hardswish function
            applied to each element in ``self``.
Examples
--------
>>> x = ivy.array([1., 2., 3.])
>>> y = x.hardsilu()
>>> print(y)
ivy.array([0.66666667, 1.66666667, 3.])
"""
return ivy.hardsilu(self._data, out=out)
| ivy/ivy/data_classes/array/experimental/activations.py/0 | {
"file_path": "ivy/ivy/data_classes/array/experimental/activations.py",
"repo_id": "ivy",
"token_count": 8622
} | 8 |
# global
import abc
class _ArrayWithSetExperimental(abc.ABC):
pass
| ivy/ivy/data_classes/array/experimental/set.py/0 | {
"file_path": "ivy/ivy/data_classes/array/experimental/set.py",
"repo_id": "ivy",
"token_count": 26
} | 9 |
# global
from typing import Optional, Union, Sequence
import abc
# local
import ivy
# ToDo: implement all methods here as public instance methods
class _ArrayWithStatistical(abc.ABC):
def min(
self: ivy.Array,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
initial: Optional[Union[int, float, complex]] = None,
where: Optional[ivy.Array] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Calculate the minimum value of the input array ``x``.
Parameters
----------
self
Input array. Should have a real-valued data type.
axis
axis or axes along which minimum values must be computed.
By default, the minimum value must be computed over the
            entire array. If a tuple of integers, minimum values must be
computed over multiple axes. Default: ``None``.
keepdims
optional boolean, if ``True``, the reduced axes (dimensions)
must be included in the result as singleton dimensions, and,
accordingly, the result must be compatible with the input
array (see :ref:`broadcasting`). Otherwise, if ``False``, the
reduced axes (dimensions) must not be included in the
result. Default: ``False``.
initial
The maximum value of an output element.
Must be present to allow computation on empty slice.
where
Elements to compare for minimum
out
optional output array, for writing the result to.
Returns
-------
ret
if the minimum value was computed over the entire array, a
zero-dimensional array containing the minimum value; otherwise,
a non-zero-dimensional array containing the minimum values.
The returned array must have the same data type
as ``x``.
Examples
--------
With :code:`ivy.Array` input:
>>> x = ivy.array([3., 4., 5.])
>>> y = x.min()
>>> print(y)
ivy.array(3.)
>>> x = ivy.array([[-1, 0, 1], [2, 3, 4]])
>>> y = x.min(axis=1)
>>> print(y)
ivy.array([-1, 2])
>>> x = ivy.array([0.1, 1.1, 2.1])
>>> y = ivy.array(0.)
>>> x.min(out=y)
>>> print(y)
ivy.array(0.1)
"""
return ivy.min(
self._data,
axis=axis,
keepdims=keepdims,
initial=initial,
where=where,
out=out,
)
def max(
self: ivy.Array,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.max. This method simply
wraps the function, and so the docstring for ivy.max also applies to
this method with minimal changes.
Parameters
----------
        self
            input array. Should have a numeric data type.
axis
axis or axes along which maximum values must be computed.
By default, the maximum value must be computed over the
entire array. If a tuple of integers, maximum values must
be computed over multiple axes. Default: ``None``.
keepdims
if ``True``, the reduced axes (dimensions) must be included
in the result as singleton dimensions, and, accordingly, the
result must be compatible with the input array
(see :ref:`broadcasting`). Otherwise, if ``False``, the reduced axes
(dimensions) must not be included in the result. Default: ``False``.
out
optional output array, for writing the result to.
Returns
-------
ret
if the maximum value was computed over the entire array,
a zero-dimensional array containing the maximum value;
otherwise, a non-zero-dimensional array
containing the maximum values. The returned array must
have the same data type
as ``x``.
Examples
--------
With :class:`ivy.Array` input:
>>> x = ivy.array([1, 2, 3])
>>> z = x.max()
>>> print(z)
ivy.array(3)
>>> x = ivy.array([0, 1, 2])
>>> z = ivy.array(0)
>>> y = x.max(out=z)
>>> print(z)
ivy.array(2)
>>> x = ivy.array([[0, 1, 2], [4, 6, 10]])
>>> y = x.max(axis=0, keepdims=True)
>>> print(y)
ivy.array([[4, 6, 10]])
"""
return ivy.max(self._data, axis=axis, keepdims=keepdims, out=out)
def mean(
self: ivy.Array,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.mean. This method simply
wraps the function, and so the docstring for ivy.mean also applies to
this method with minimal changes.
**Special Cases**
Let ``N`` equal the number of elements over which to compute the
arithmetic mean.
- If ``N`` is ``0``, the arithmetic mean is ``NaN``.
- If ``x_i`` is ``NaN``, the arithmetic mean is ``NaN`` (i.e., ``NaN``
values propagate).
Parameters
----------
self
input array. Should have a floating-point data type.
axis
axis or axes along which arithmetic means must be computed. By default,
the mean must be computed over the entire array. If a Sequence of
integers, arithmetic means must be computed over multiple axes.
Default: ``None``.
keepdims
bool, if ``True``, the reduced axes (dimensions) must be included in the
result as singleton dimensions, and, accordingly, the result must be
compatible with the input array (see :ref:`broadcasting`). Otherwise,
if ``False``, the reduced axes (dimensions) must not be included in
the result. Default: ``False``.
out
optional output array, for writing the result to.
Returns
-------
ret
array, if the arithmetic mean was computed over the entire array, a
zero-dimensional array containing the arithmetic mean; otherwise, a
non-zero-dimensional array containing the arithmetic means.
The returned array must have the same data type as ``x``.
Examples
--------
With :class:`ivy.Array` input:
>>> x = ivy.array([3., 4., 5.])
>>> y = x.mean()
>>> print(y)
ivy.array(4.)
>>> x = ivy.array([-1., 0., 1.])
>>> y = ivy.mean(x)
>>> print(y)
ivy.array(0.)
>>> x = ivy.array([0.1, 1.1, 2.1])
>>> y = ivy.array(0.)
>>> x.mean(out=y)
>>> print(y)
ivy.array(1.1)
>>> x = ivy.array([1., 2., 3., 0., -1.])
>>> y = ivy.array(0.)
>>> ivy.mean(x, out=y)
>>> print(y)
ivy.array(1.)
>>> x = ivy.array([[-0.5, 1., 2.], [0.0, 1.1, 2.2]])
>>> y = ivy.zeros((1, 3))
>>> x.mean(axis=0, keepdims=True, out=y)
>>> print(y)
ivy.array([[-0.25 , 1.04999995, 2.0999999 ]])
>>> x = ivy.array([[0., 1., 2.], [3., 4., 5.]])
>>> y = ivy.array([0., 0.])
>>> ivy.mean(x, axis=1, out=y)
>>> print(y)
ivy.array([1., 4.])
"""
return ivy.mean(self._data, axis=axis, keepdims=keepdims, out=out)
def var(
self: ivy.Array,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
correction: Union[int, float] = 0.0,
keepdims: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.var. This method simply
wraps the function, and so the docstring for ivy.var also applies to
this method with minimal changes.
**Special Cases**
        Let ``N`` equal the number of elements over which to compute the variance.
        - If ``N - correction`` is less than or equal to ``0``, the variance is
          ``NaN``.
        - If ``x_i`` is ``NaN``, the variance is ``NaN`` (i.e., ``NaN`` values
          propagate).
Parameters
----------
self
input array. Should have a floating-point data type.
axis
axis or axes along which variances must be computed. By default, the
variance must be computed over the entire array. If a tuple of integers,
variances must be computed over multiple axes. Default: ``None``.
correction
degrees of freedom adjustment. Setting this parameter to a value other
than 0 has the effect of adjusting the divisor during the calculation
of the variance according to N-c where N corresponds to the total
number of elements over which the variance is computed and c corresponds
to the provided degrees of freedom adjustment. When computing the variance
of a population, setting this parameter to 0 is the standard choice
(i.e., the provided array contains data constituting an entire population).
When computing the unbiased sample variance, setting this parameter to 1
is the standard choice (i.e., the provided array contains data sampled
from a larger population; this is commonly referred to as Bessel's
correction). Default: ``0``.
keepdims
if True, the reduced axes (dimensions) must be included in the result as
singleton dimensions, and, accordingly, the result must be compatible
with the input array (see Broadcasting). Otherwise, if False, the
reduced axes (dimensions) must not be included in the result.
Default: ``False``.
out
optional output array, for writing the result to.
Returns
-------
ret
if the variance was computed over the entire array, a zero-dimensional array
containing the variance; otherwise, a non-zero-dimensional array containing
the variances. The returned array must have the same data type as x.
Examples
--------
>>> x = ivy.array([[0.0, 1.0, 2.0],
... [3.0, 4.0, 5.0],
... [6.0, 7.0, 8.0]])
>>> y = x.var()
>>> print(y)
ivy.array(6.6666665)
>>> x = ivy.array([[0.0, 1.0, 2.0],
... [3.0, 4.0, 5.0],
... [6.0, 7.0, .08]])
>>> y = x.var(axis=0)
>>> print(y)
ivy.array([6., 6., 4.1])
>>> x = ivy.array([[0.0, 1.0, 2.0],
... [3.0, 4.0, 5.0],
... [6.0, 7.0, .08]])
>>> y = ivy.array([0., 0., 0.])
>>> x.var(axis=1, out=y)
>>> print(y)
ivy.array([0.667, 0.667, 9.33 ])
"""
return ivy.var(
self._data, axis=axis, correction=correction, keepdims=keepdims, out=out
)
def prod(
self: ivy.Array,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype]] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.array instance method variant of ivy.prod. This method simply
wraps the function, and so the docstring for ivy.prod also applies to
this method with minimal changes.
Parameters
----------
self
            input array. Should have a numeric data type.
axis
axis or axes along which products must be computed. By default,
the product must be computed over the entire array. If a
tuple of integers, products must be computed over multiple
axes. Default: ``None``.
keepdims
bool, if True, the reduced axes (dimensions) must be
included in the result as singleton dimensions, and,
accordingly, the result must be compatible with the
input array (see Broadcasting). Otherwise, if False,
the reduced axes (dimensions) must not be included in
the result. Default: ``False``.
dtype
data type of the returned array.
out
optional output array, for writing the result to.
Returns
-------
ret
            array, if the product was computed over the entire array,
a zero-dimensional array containing the product;
otherwise, a non-zero-dimensional array containing the products.
The returned array must have the same data type as ``self``.
Examples
--------
        With :class:`ivy.Array` input:
>>> x = ivy.array([1, 2, 3])
>>> z = x.prod()
>>> print(z)
ivy.array(6)
>>> x = ivy.array([1, 0, 3])
>>> z = x.prod()
>>> print(z)
ivy.array(0)
>>> x = ivy.array([[3., 4., 5.]])
>>> y = x.prod(axis=1)
>>> print(y)
ivy.array([60.])
>>> x = ivy.array([2., 1.])
>>> y = ivy.array(0.)
>>> x.prod(out=y)
>>> print(y)
ivy.array(2.)
>>> x = ivy.array([[-1., -2.], [3., 3.]])
>>> y = x.prod(axis=1)
>>> print(y)
ivy.array([2., 9.])
"""
return ivy.prod(self._data, axis=axis, keepdims=keepdims, dtype=dtype, out=out)
def sum(
self: ivy.Array,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype]] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
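        """ivy.Array instance method variant of ivy.sum. This method simply
        wraps the function, and so the docstring for ivy.sum also applies to
        this method with minimal changes.
        """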
return ivy.sum(self, axis=axis, dtype=dtype, keepdims=keepdims, out=out)
def std(
self: ivy.Array,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
correction: Union[int, float] = 0.0,
keepdims: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.array instance method variant of ivy.std. This method simply
wraps the function, and so the docstring for ivy.std also applies to
this method with minimal changes.
Parameters
----------
self
input array.
axis
            axis or axes along which standard deviations must be computed.
            By default, the standard deviation must be computed over the
            entire array. If a tuple of integers, standard deviations must
            be computed over multiple axes. Default: ``None``.
correction
degrees of freedom adjustment. Setting this parameter to a
value other than ``0`` has the effect of adjusting the
divisor during the calculation of the standard deviation
according to ``N-c`` where ``N`` corresponds to the total
number of elements over which the standard deviation is
computed and ``c`` corresponds to the provided degrees of
freedom adjustment. When computing the standard deviation
of a population, setting this parameter to ``0`` is the
standard choice (i.e., the provided array contains data
constituting an entire population). When computing
the corrected sample standard deviation, setting this
parameter to ``1`` is the standard choice (i.e., the
provided array contains data sampled from a larger
population; this is commonly referred to as Bessel's
correction). Default: ``0``.
keepdims
bool, if True, the reduced axes (dimensions) must be
included in the result as singleton dimensions, and,
accordingly, the result must be compatible with the
input array (see Broadcasting). Otherwise, if False,
the reduced axes (dimensions) must not be included in
the result. Default: ``False``.
out
optional output array, for writing the result to.
Returns
-------
ret
            array, if the standard deviation was computed over the entire
            array, a zero-dimensional array containing the standard deviation;
            otherwise, a non-zero-dimensional array containing the standard
            deviations.
The returned array must have the same data type as ``self``.
Examples
--------
        With :class:`ivy.Array` input:
>>> x = ivy.array([-1., 0., 1.])
>>> y = x.std()
>>> print(y)
ivy.array(0.81649661)
>>> x = ivy.array([-1., 0., 1.])
>>> z = x.std(correction=1)
>>> print(z)
ivy.array(1.)
>>> x = ivy.array([[0., 4.]])
>>> y = x.std(keepdims=True)
>>> print(y)
ivy.array([[2.]])
>>> x = ivy.array([2., 1.])
>>> y = ivy.array(0.)
>>> x.std(out=y)
>>> print(y)
ivy.array(0.5)
>>> x = ivy.array([[-1., -2.], [3., 3.]])
>>> y = x.std(axis=1)
>>> print(y)
ivy.array([0.5, 0. ])
"""
return ivy.std(
self, axis=axis, correction=correction, keepdims=keepdims, out=out
)
# Extra #
# ----- #
def cumsum(
self: ivy.Array,
axis: int = 0,
exclusive: bool = False,
reverse: bool = False,
*,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype]] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.cumsum. This method simply
wraps the function, and so the docstring for ivy.cumsum also applies to
this method with minimal changes.
Parameters
----------
self
Input array to apply cumsum.
axis
Axis along which the cumulative sum is computed. Default is ``0``.
exclusive
Whether to perform cumsum exclusively. Default is ``False``.
reverse
Whether to perform the cumsum from last to first element in the selected
axis. Default is ``False`` (from first to last element)
dtype
Data type of the returned array. Default is ``None``.
out
Optional array container. Default is ``None``.
Returns
-------
ret
Array which holds the result of applying cumsum at each
original array elements along the specified axis.
Examples
--------
>>> x = ivy.array([1, 2, 3, 4, 5])
>>> y = x.cumsum()
>>> print(y)
ivy.array([ 1, 3, 6, 10, 15])
>>> x = ivy.array([2, 6, 4, 10])
>>> y = x.cumsum(axis=0, exclusive=False, reverse=True, dtype='float64')
>>> print(y)
ivy.array([22., 20., 14., 10.])
>>> x = ivy.array([[2, 3], [4, 6], [8, 12]])
>>> y = ivy.zeros((3, 2))
>>> x.cumsum(axis=1, exclusive=True, reverse=False, out=y)
>>> print(y)
ivy.array([[0, 2],
[0, 4],
[0, 8]])
>>> x = ivy.array([[1, 5, 2],
... [4, 3, 0],
... [4, 8, 2]])
>>> y = x.cumsum(axis=1, exclusive=True, reverse=True)
>>> print(y)
ivy.array([[ 7, 2, 0],
[ 3, 0, 0],
[10, 2, 0]])
>>> x = ivy.array([[1, 5, 10], [4, 8, 10], [2, 3, 5]])
>>> x.cumsum(axis=0, out=x)
>>> print(x)
ivy.array([[ 1, 5, 10],
[ 5, 13, 20],
[ 7, 16, 25]])
"""
return ivy.cumsum(self._data, axis, exclusive, reverse, dtype=dtype, out=out)
def cumprod(
self: ivy.Array,
/,
*,
axis: int = 0,
exclusive: bool = False,
reverse: bool = False,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype]] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.cumprod. This method simply
wraps the function, and so the docstring for ivy.cumprod also applies
to this method with minimal changes.
Parameters
----------
self
input array
axis
int, axis along which to take the cumulative product. Default is ``0``.
exclusive
optional bool, whether to exclude the first value of the input array.
Default is ``False``.
reverse
Whether to perform the cumprod from last to first element in the selected
axis. Default is ``False`` (from first to last element)
dtype
data type of the returned array. If None, if the default data type
            corresponding to the data type "kind" (integer or floating-point) of x
has a smaller range of values than the data type of x (e.g., x has data
type int64 and the default data type is int32, or x has data type uint64
and the default data type is int64), the returned array must have the
same data type as x. if x has a floating-point data type, the returned array
must have the default floating-point data type. if x has a signed integer
data type (e.g., int16), the returned array must have the default integer
data type. if x has an unsigned integer data type (e.g., uint16), the
returned array must have an unsigned integer data type having the same
number of bits as the default integer data type (e.g., if the default
integer data type is int32, the returned array must have a uint32 data
type). If the data type (either specified or resolved) differs from the
data type of x, the input array should be cast to the specified data type
before computing the product. Default: ``None``.
out
optional output array, for writing the result to.
Returns
-------
ret
Input array with cumulatively multiplied elements along the specified axis.
Examples
--------
>>> x = ivy.array([1, 2, 3, 4, 5])
>>> y = x.cumprod()
>>> print(y)
ivy.array([1, 2, 6, 24, 120])
>>> x = ivy.array([[2, 3], [5, 7], [11, 13]])
>>> y = ivy.zeros((3, 2), dtype="int32")
>>> x.cumprod(axis=1, exclusive=True, out=y)
>>> print(y)
        ivy.array([[1, 2],
                   [1, 5],
                   [1, 11]])
"""
return ivy.cumprod(
self._data,
axis=axis,
exclusive=exclusive,
reverse=reverse,
dtype=dtype,
out=out,
)
def einsum(
self: ivy.Array,
equation: str,
*operands: Union[ivy.Array, ivy.NativeArray],
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.einsum. This method simply
wraps the function, and so the docstring for ivy.einsum also applies to
this method with minimal changes.
Parameters
----------
equation
A str describing the contraction, in the same format as numpy.einsum.
operands
seq of arrays, the inputs to contract (each one an ivy.Array), whose shapes
should be consistent with equation.
out
optional output array, for writing the result to.
Returns
-------
ret
The array with sums computed.
Examples
--------
>>> x = ivy.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
>>> y = x.einsum('ii')
>>> print(y)
ivy.array(12)
>>> x = ivy.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
>>> z = x.einsum('ij -> j')
>>> print(z)
ivy.array([ 9, 12, 15])
>>> A = ivy.array([0, 1, 2])
>>> B = ivy.array([[ 0, 1, 2, 3],
... [ 4, 5, 6, 7],
... [ 8, 9, 10, 11]])
>>> C = A.einsum('i,ij->i', B)
>>> print(C)
ivy.array([ 0, 22, 76])
>>> A = ivy.array([[1, 1, 1],
... [2, 2, 2],
... [5, 5, 5]])
>>> B = ivy.array([[0, 1, 0],
... [1, 1, 0],
... [1, 1, 1]])
>>> C = A.einsum('ij,jk->ik', B)
>>> print(C)
ivy.array([[ 2, 3, 1],
[ 4, 6, 2],
[10, 15, 5]])
>>> A = ivy.arange(10)
>>> B = A.einsum('i->')
>>> print(B)
ivy.array(45)
>>> A = ivy.arange(10)
>>> B = ivy.arange(5, 15)
>>> C = A.einsum('i,i->i', B)
>>> print(C)
ivy.array([ 0, 6, 14, 24, 36, 50, 66, 84, 104, 126])
>>> A = ivy.arange(10)
>>> B = ivy.arange(5, 15)
>>> C = A.einsum('i,i->', B) # or just use 'i,i'
>>> print(C)
ivy.array(510)
"""
return ivy.einsum(equation, *(self._data,) + operands, out=out)
| ivy/ivy/data_classes/array/statistical.py/0 | {
"file_path": "ivy/ivy/data_classes/array/statistical.py",
"repo_id": "ivy",
"token_count": 11934
} | 10 |
from ivy.data_classes.container.base import ContainerBase
class _ContainerWithData_typeExperimental(ContainerBase):
pass
| ivy/ivy/data_classes/container/experimental/data_type.py/0 | {
"file_path": "ivy/ivy/data_classes/container/experimental/data_type.py",
"repo_id": "ivy",
"token_count": 35
} | 11 |
# global
from typing import Optional, Union, Dict, List
# local
import ivy
from ivy.data_classes.container.base import ContainerBase
class _ContainerWithUtilityExperimental(ContainerBase):
@staticmethod
def static_optional_get_element(
x: Optional[Union[ivy.Array, ivy.Container]] = None,
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str]]] = None,
to_apply: bool = True,
prune_unapplied: bool = False,
map_sequences: bool = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.optional_get_element.
This method simply wraps the function, and so the docstring for
ivy.optional_get_element also applies to this method with minimal
changes.
Parameters
----------
x
container with array inputs.
key_chains
The keychains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to.
Returns
-------
ret
            Container with the values of the optional elements at its leaves.
"""
return ContainerBase.cont_multi_map_in_function(
"optional_get_element",
x,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def optional_get_element(
self: ivy.Container,
/,
*,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.optional_get_element.
This method simply wraps the function, and so the docstring for
ivy.optional_get_element also applies to this method with minimal
changes.
Parameters
----------
self
Input container
out
Optional output container, for writing the result to.
Returns
-------
ret
Output container.
"""
return self.static_optional_get_element(self, out=out)
| ivy/ivy/data_classes/container/experimental/utility.py/0 | {
"file_path": "ivy/ivy/data_classes/container/experimental/utility.py",
"repo_id": "ivy",
"token_count": 1135
} | 12 |
from .tucker_tensor import TuckerTensor
from .cp_tensor import CPTensor
from .tr_tensor import TRTensor
from .parafac2_tensor import Parafac2Tensor
from .tt_tensor import TTTensor
| ivy/ivy/data_classes/factorized_tensor/__init__.py/0 | {
"file_path": "ivy/ivy/data_classes/factorized_tensor/__init__.py",
"repo_id": "ivy",
"token_count": 62
} | 13 |
use pyo3::prelude::*;
use pyo3::exceptions::PyOSError;
use std::str::Utf8Error;
/// Main library error type.
#[derive(thiserror::Error, Debug)]
pub enum Error {
/// Incorrect number of elements.
#[error("wrong element count {element_count} for dims {dims:?}")]
WrongElementCount { dims: Vec<usize>, element_count: usize },
/// Error from the xla C++ library.
#[error("xla error {msg}\n{backtrace}")]
XlaError { msg: String, backtrace: String },
#[error("unexpected element type {0}")]
UnexpectedElementType(i32),
#[error("unexpected number of dimensions, expected: {expected}, got: {got} ({dims:?})")]
UnexpectedNumberOfDims { expected: usize, got: usize, dims: Vec<i64> },
#[error("not an element type, got: {got:?}")]
NotAnElementType { got: crate::PrimitiveType },
#[error("not an array, expected: {expected:?}, got: {got:?}")]
NotAnArray { expected: Option<usize>, got: crate::Shape },
#[error("cannot handle unsupported shapes {shape:?}")]
UnsupportedShape { shape: crate::Shape },
#[error("unexpected number of tuple elements, expected: {expected}, got: {got}")]
UnexpectedNumberOfElemsInTuple { expected: usize, got: usize },
#[error("element type mismatch, on-device: {on_device:?}, on-host: {on_host:?}")]
ElementTypeMismatch { on_device: crate::ElementType, on_host: crate::ElementType },
#[error("unsupported element type for {op}: {ty:?}")]
UnsupportedElementType { ty: crate::PrimitiveType, op: &'static str },
#[error(
"target buffer is too large, offset {offset}, shape {shape:?}, buffer_len: {buffer_len}"
)]
TargetBufferIsTooLarge { offset: usize, shape: crate::ArrayShape, buffer_len: usize },
#[error("binary buffer is too large, element count {element_count}, buffer_len: {buffer_len}")]
BinaryBufferIsTooLarge { element_count: usize, buffer_len: usize },
#[error("empty literal")]
EmptyLiteral,
#[error("index out of bounds {index}, rank {rank}")]
IndexOutOfBounds { index: i64, rank: usize },
#[error("npy/npz error {0}")]
Npy(String),
/// I/O error.
#[error(transparent)]
Io(#[from] std::io::Error),
/// Zip file format error.
#[error(transparent)]
Zip(#[from] zip::result::ZipError),
/// Integer parse error.
#[error(transparent)]
ParseInt(#[from] std::num::ParseIntError),
#[error("cannot create literal with shape {ty:?} {dims:?} from bytes data with len {data_len_in_bytes}")]
CannotCreateLiteralWithData {
data_len_in_bytes: usize,
ty: crate::PrimitiveType,
dims: Vec<usize>,
},
#[error("invalid dimensions in matmul, lhs: {lhs_dims:?}, rhs: {rhs_dims:?}, {msg}")]
MatMulIncorrectDims { lhs_dims: Vec<i64>, rhs_dims: Vec<i64>, msg: &'static str },
#[error("Invalid UTF-8 data: {0}")]
Utf8Error(#[from] Utf8Error),
}
impl From<Error> for PyErr {
fn from(err: Error) -> PyErr {
PyOSError::new_err(err.to_string())
}
}
pub type Result<T> = std::result::Result<T, Error>;
| ivy/ivy/engines/XLA/rust_api/src/error.rs/0 | {
"file_path": "ivy/ivy/engines/XLA/rust_api/src/error.rs",
"repo_id": "ivy",
"token_count": 1196
} | 14 |
from .ivy import experimental
from .ivy.experimental import *
from . import ivy
from .ivy import *
| ivy/ivy/functional/__init__.py/0 | {
"file_path": "ivy/ivy/functional/__init__.py",
"repo_id": "ivy",
"token_count": 30
} | 15 |
"""Collection of Jax network layers, wrapped to fit Ivy syntax and
signature."""
# global
import jax.lax as jlax
import jax.numpy as jnp
# local
import ivy
from ivy.functional.backends.jax import JaxArray
from typing import Union, Tuple, Optional, Sequence
from ivy.functional.ivy.layers import (
_handle_padding,
_deconv_length,
_get_x_data_format,
)
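# Computes the (before, after) padding for a single spatial dimension of a
# transposed convolution, given the kernel size ``k``, stride ``s``, padding
# mode, kernel dilation, and ``diff``, an adjustment toward a user-requested
# output length.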
def _transpose_padding_helper(k, s, padding, dilation, diff=0):
k = (k - 1) * dilation + 1
if padding == "SAME":
pad_len = k + s - 2
pad_len -= diff
if s > k - 1:
pad_a = k - 1
else:
pad_a = int(jnp.ceil(pad_len / 2))
else:
pad_len = k + s - 2 + max(k - s, 0)
pad_a = k - 1
pad_b = pad_len - pad_a
return pad_a, pad_b
def _get_tranpose_padding(
x_shape, filter_shape, strides, padding, dims, dilations, output_shape
):
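    # Builds the per-dimension (lo, hi) padding pairs that make
    # jlax.conv_transpose produce ``output_shape`` (or the default
    # deconvolution length when ``output_shape`` is None).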
new_shape = [
_deconv_length(x_shape[i], strides[i], filter_shape[i], padding, dilations[i])
for i in range(dims)
]
if output_shape is None:
output_shape = [x_shape[0], *new_shape, filter_shape[-1]]
elif len(output_shape) == dims:
output_shape = [x_shape[0]] + list(output_shape) + [filter_shape[-1]]
shape_diff = [-(output_shape[1 + i] - new_shape[i]) for i in range(dims)]
pad_list = [
_transpose_padding_helper(
filter_shape[i], strides[i], padding, dilations[i], shape_diff[i]
)
for i in range(dims)
]
return pad_list
def _get_new_padding_before_conv(
x,
filters,
strides,
padding,
dims,
data_format,
filter_format,
dilations,
x_dilations,
):
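    # When the input itself is dilated (any x_dilation > 1), string paddings
    # such as "SAME" must be resolved against the dilated input and kernel
    # sizes, so the explicit per-dimension (before, after) padding is
    # computed here.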
if len(x_dilations) != x_dilations.count(1):
new_pad = [0] * dims
x_shape = (
list(x.shape[1 : dims + 1])
            if data_format in ("NWC", "NHWC", "NDHWC")
else list(x.shape[2:])
)
x_shape = [
x_shape[i] + (x_shape[i] - 1) * (x_dilations[i] - 1) for i in range(dims)
]
f_shape = (
list(filters.shape[:dims])
if filter_format == "channel_last"
else list(filters.shape[2:])
)
f_shape = [
f_shape[i] + (f_shape[i] - 1) * (dilations[i] - 1) for i in range(dims)
]
if isinstance(padding, str):
for i in range(dims):
new_pad[i] = _handle_padding(
x_shape[i], strides[i], f_shape[i], padding
)
padding = [
(new_pad[i] // 2, new_pad[i] - new_pad[i] // 2) for i in range(dims)
]
return padding
return padding
def conv1d(
x: JaxArray,
filters: JaxArray,
strides: Union[int, Tuple[int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
data_format: str = "NWC",
filter_format: str = "channel_last",
x_dilations: Union[int, Tuple[int]] = 1,
dilations: Union[int, Tuple[int]] = 1,
bias: Optional[JaxArray] = None,
out: Optional[JaxArray] = None,
) -> JaxArray:
data_format = "channel_last" if data_format == "NWC" else "channel_first"
return conv_general_dilated(
x,
filters,
strides,
padding,
dims=1,
data_format=data_format,
filter_format=filter_format,
x_dilations=x_dilations,
dilations=dilations,
bias=bias,
)
def conv1d_transpose(
x: JaxArray,
filters: JaxArray,
strides: Union[int, Tuple[int]],
padding: str,
/,
*,
output_shape: Optional[Union[ivy.NativeShape, Sequence[int]]] = None,
filter_format: str = "channel_last",
data_format: str = "NWC",
dilations: Union[int, Tuple[int]] = 1,
bias: Optional[JaxArray] = None,
out: Optional[JaxArray] = None,
) -> JaxArray:
strides = (strides,) if isinstance(strides, int) else strides
dilations = (dilations,) if isinstance(dilations, int) else dilations
if data_format == "NWC":
x_shape = list(x.shape[1:2])
else:
x_shape = list(x.shape[2:])
if filter_format == "channel_first":
filters = jnp.transpose(filters, (2, 1, 0))
padding = _get_tranpose_padding(
x_shape, filters.shape, strides, padding, 1, dilations, output_shape
)
res = jlax.conv_transpose(
x,
filters,
strides,
padding,
dilations,
(data_format, "WIO", data_format),
True,
)
if bias is not None:
if data_format == "NWC":
return jnp.add(res, bias)
return jnp.add(res, bias[(None,) + (...,) + (None,) * 1])
return res
def conv2d(
x: JaxArray,
filters: JaxArray,
strides: Union[int, Tuple[int, int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
data_format: str = "NHWC",
filter_format: str = "channel_last",
x_dilations: Union[int, Tuple[int, int]] = 1,
dilations: Union[int, Tuple[int, int]] = 1,
bias: Optional[JaxArray] = None,
out: Optional[JaxArray] = None,
) -> JaxArray:
data_format = "channel_last" if data_format == "NHWC" else "channel_first"
return conv_general_dilated(
x,
filters,
strides,
padding,
dims=2,
data_format=data_format,
filter_format=filter_format,
x_dilations=x_dilations,
dilations=dilations,
bias=bias,
)
def conv2d_transpose(
x: JaxArray,
filters: JaxArray,
strides: Union[int, Tuple[int, int]],
padding: str,
/,
*,
output_shape: Optional[Union[ivy.NativeShape, Sequence[int]]] = None,
filter_format: str = "channel_last",
data_format: str = "NHWC",
dilations: Union[int, Tuple[int, int]] = 1,
bias: Optional[JaxArray] = None,
out: Optional[JaxArray] = None,
) -> JaxArray:
strides = [strides] * 2 if isinstance(strides, int) else strides
dilations = [dilations] * 2 if isinstance(dilations, int) else dilations
if data_format == "NHWC":
x_shape = list(x.shape[1:3])
else:
x_shape = list(x.shape[2:])
if filter_format == "channel_first":
filters = jnp.transpose(filters, (2, 3, 1, 0))
padding = _get_tranpose_padding(
x_shape, filters.shape, strides, padding, 2, dilations, output_shape
)
res = jlax.conv_transpose(
x,
filters,
strides,
padding,
dilations,
(data_format, "HWIO", data_format),
True,
)
if bias is not None:
if data_format == "NHWC":
return jnp.add(res, bias)
return jnp.add(res, bias[(None,) + (...,) + (None,) * 2])
return res
def depthwise_conv2d(
x: JaxArray,
filters: JaxArray,
strides: Union[int, Tuple[int, int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
data_format: str = "NHWC",
dilations: Union[int, Tuple[int, int]] = 1,
out: Optional[JaxArray] = None,
) -> JaxArray:
strides = [strides] * 2 if isinstance(strides, int) else strides
strides = [strides[1], strides[2]] if len(strides) == 4 else strides
dilations = [dilations] * 2 if isinstance(dilations, int) else dilations
if isinstance(padding, int):
padding = [(padding, padding)] * 2
filters = jnp.squeeze(filters, 3) if filters.ndim == 4 else filters
cn = filters.shape[-1]
filters = jnp.expand_dims(filters, -2)
return jlax.conv_general_dilated(
x,
filters,
strides,
padding,
None,
dilations,
(data_format, "HWIO", data_format),
feature_group_count=cn,
)
def conv3d(
x: JaxArray,
filters: JaxArray,
strides: Union[int, Tuple[int, int, int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
data_format: str = "NDHWC",
filter_format: str = "channel_last",
x_dilations: Union[int, Tuple[int, int, int]] = 1,
dilations: Union[int, Tuple[int, int, int]] = 1,
bias: Optional[JaxArray] = None,
out: Optional[JaxArray] = None,
) -> JaxArray:
data_format = "channel_last" if data_format == "NDHWC" else "channel_first"
return conv_general_dilated(
x,
filters,
strides,
padding,
dims=3,
data_format=data_format,
filter_format=filter_format,
x_dilations=x_dilations,
dilations=dilations,
bias=bias,
)
def conv3d_transpose(
x: JaxArray,
filters: JaxArray,
strides: Union[int, Tuple[int, int, int]],
padding: str,
/,
*,
output_shape: Optional[Union[ivy.NativeShape, Sequence[int]]] = None,
dilations: Union[int, Tuple[int, int, int]] = 1,
filter_format: str = "channel_last",
data_format: str = "NDHWC",
bias: Optional[JaxArray] = None,
out: Optional[JaxArray] = None,
) -> JaxArray:
strides = [strides] * 3 if isinstance(strides, int) else strides
dilations = [dilations] * 3 if isinstance(dilations, int) else dilations
if filter_format == "channel_first":
filters = jnp.transpose(filters, (2, 3, 4, 1, 0))
if data_format == "NDHWC":
x_shape = list(x.shape[1:4])
else:
x_shape = list(x.shape[2:])
padding = _get_tranpose_padding(
x_shape, filters.shape, strides, padding, 3, dilations, output_shape
)
res = jlax.conv_transpose(
x,
filters,
strides,
padding,
dilations,
(data_format, "DHWIO", data_format),
True,
)
if bias is not None:
if data_format == "NDHWC":
return jnp.add(res, bias)
return jnp.add(res, bias[(None,) + (...,) + (None,) * 3])
return res
def _get_filter_dataformat(dims: int = 2, filter_format: str = "channel_last"):
    first = filter_format == "channel_first"
if dims == 1:
return "OIW" if first else "WIO"
    elif dims == 2:
return "OIHW" if first else "HWIO"
elif dims == 3:
return "OIDHW" if first else "DHWIO"
def conv_general_dilated(
x: JaxArray,
filters: JaxArray,
strides: Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
dims: int = 2,
data_format: str = "channel_last",
filter_format: str = "channel_last",
feature_group_count: int = 1,
x_dilations: Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int]] = 1,
dilations: Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int]] = 1,
bias: Optional[JaxArray] = None,
out: Optional[JaxArray] = None,
):
strides = [strides] * dims if isinstance(strides, int) else strides
dilations = [dilations] * dims if isinstance(dilations, int) else dilations
x_dilations = [x_dilations] * dims if isinstance(x_dilations, int) else x_dilations
if isinstance(padding, int):
padding = [(padding, padding)] * dims
filter_df = _get_filter_dataformat(dims, filter_format)
if len(x_dilations) != x_dilations.count(1):
new_pad = [0] * dims
x_shape = (
list(x.shape[1 : dims + 1])
if data_format == "channel_last"
else list(x.shape[2:])
)
x_shape = [
x_shape[i] + (x_shape[i] - 1) * (x_dilations[i] - 1) for i in range(dims)
]
f_shape = (
list(filters.shape[:dims])
if filter_format == "channel_last"
else list(filters.shape[2:])
)
f_shape = [
f_shape[i] + (f_shape[i] - 1) * (dilations[i] - 1) for i in range(dims)
]
if isinstance(padding, str):
for i in range(dims):
new_pad[i] = _handle_padding(
x_shape[i], strides[i], f_shape[i], padding
)
padding = [
(new_pad[i] // 2, new_pad[i] - new_pad[i] // 2) for i in range(dims)
]
df = _get_x_data_format(dims, data_format)
promoted_type = jnp.promote_types(x.dtype, filters.dtype)
x = x.astype(promoted_type)
filters = filters.astype(promoted_type)
res = jlax.conv_general_dilated(
x,
filters,
strides,
padding,
x_dilations,
dilations,
(df, filter_df, df),
feature_group_count,
)
if bias is not None:
if data_format == "channel_last":
return jnp.add(res, bias)
return jnp.add(res, bias[(None,) + (...,) + (None,) * dims])
return res
def conv_general_transpose(
x: JaxArray,
filters: JaxArray,
strides: Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int]],
padding: str,
/,
*,
dims: int = 2,
output_shape: Optional[Union[ivy.NativeShape, Sequence[int]]] = None,
filter_format: str = "channel_last",
data_format: str = "channel_last",
dilations: Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int]] = 1,
feature_group_count: int = 1,
bias: Optional[JaxArray] = None,
out: Optional[JaxArray] = None,
):
strides = [strides] * dims if isinstance(strides, int) else strides
dilations = [dilations] * dims if isinstance(dilations, int) else dilations
if filter_format == "channel_first":
filters = jnp.transpose(filters, (*range(2, dims + 2), 1, 0))
df = _get_x_data_format(dims, "channel_last")
filter_df = _get_filter_dataformat(dims)
if data_format == "channel_first":
x = jnp.transpose(x, (0, *range(2, dims + 2), 1))
padding = _get_tranpose_padding(
x.shape[1:], filters.shape, strides, padding, dims, dilations, output_shape
)
res = jnp.concatenate(
[
jlax.conv_transpose(
x[..., j : j + filters.shape[-1] // feature_group_count],
filters[..., j : j + filters.shape[-1] // feature_group_count],
strides,
padding,
dilations,
(df, filter_df, df),
True,
)
for j in range(
0, filters.shape[-1], filters.shape[-1] // feature_group_count
)
],
axis=-1,
)
res = jnp.add(res, bias) if bias is not None else res
if data_format == "channel_first":
return jnp.transpose(res, (0, dims + 1, *range(1, dims + 1)))
return res
def nms(
boxes,
scores=None,
iou_threshold=0.5,
max_output_size=None,
score_threshold=float("-inf"),
):
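    # Greedy non-maximum suppression: optionally drop boxes below
    # ``score_threshold``, order the remainder by score, then repeatedly keep
    # the top box and suppress the rest whose IoU with it exceeds
    # ``iou_threshold``. Working arrays are padded to a power-of-two length
    # so every loop iteration operates on fixed-size operands.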
change_id = False
if score_threshold != float("-inf") and scores is not None:
keep_idx = scores > score_threshold
boxes = boxes[keep_idx]
scores = scores[keep_idx]
change_id = True
nonzero = jnp.nonzero(keep_idx)[0].flatten()
if scores is None:
scores = jnp.ones((boxes.shape[0],), dtype=boxes.dtype)
if len(boxes) < 2:
if len(boxes) == 1:
ret = jnp.array([0], dtype=ivy.int64)
else:
ret = jnp.array([], dtype=ivy.int64)
else:
areas = jnp.prod(boxes[:, 2:4] - boxes[:, :2], axis=1)
        order = jnp.argsort(-1 * scores)  # process higher-scoring boxes first
boxes = boxes[order]
areas = areas[order]
size = order.size
pad_width = 1 if size == 0 else 2 ** (size - 1).bit_length()
order = jnp.pad(order, [0, pad_width - size], constant_values=pad_width)
boxes = jnp.pad(boxes, [[0, pad_width - size], [0, 0]])
areas = jnp.pad(areas, [0, pad_width - size])
keep = jnp.zeros((size,), dtype=jnp.int64)
keep_idx = 0
while jnp.unique(order).size > 1:
max_iou_idx = order[0]
keep = keep.at[keep_idx].set(max_iou_idx)
keep_idx += 1
boxes1 = jnp.maximum(boxes[0, :2], boxes[1:, :2])
boxes2 = jnp.minimum(boxes[0, 2:4], boxes[1:, 2:4])
boxes_intersection = jnp.maximum(0.0, boxes2 - boxes1)
intersection = jnp.prod(
jnp.where(boxes_intersection != 0, boxes_intersection, 1), axis=1
)
iou = intersection / (areas[0] + areas[1:] - intersection)
condition = jnp.pad(iou <= iou_threshold, [1, 0], constant_values=False)
order = jnp.where(condition, order, pad_width)
boxes = jnp.where(jnp.expand_dims(condition, axis=1), boxes, 0)
areas = jnp.where(condition, areas, 0)
first = jnp.argwhere(order < pad_width, size=pad_width)[0][0]
forward = jnp.array([0, first])
order = order.at[forward].set(order[forward[::-1]])
boxes = boxes.at[forward].set(boxes[forward[::-1]])
areas = areas.at[forward].set(areas[forward[::-1]])
ret = jnp.array(keep[:keep_idx], dtype=jnp.int64)
if len(ret) > 1 and scores is not None:
ret = sorted(
ret.flatten().tolist(), reverse=True, key=lambda x: (scores[x], -x)
)
ret = jnp.array(ret, dtype=jnp.int64).flatten()
if change_id and len(ret) > 0:
ret = jnp.array(nonzero[ret], dtype=jnp.int64).flatten()
return ret.flatten()[:max_output_size]
| ivy/ivy/functional/backends/jax/layers.py/0 | {
"file_path": "ivy/ivy/functional/backends/jax/layers.py",
"repo_id": "ivy",
"token_count": 8254
} | 16 |
"""MXNet device functions.
Collection of MXNet general functions, wrapped to fit Ivy syntax and
signature.
"""
import mxnet as mx
from typing import Union, Optional
import ivy
from ivy.functional.ivy.device import Profiler as BaseProfiler
from ivy.utils.exceptions import IvyNotImplementedException
def dev(
x: Union[(None, mx.ndarray.NDArray)], /, *, as_native: bool = False
) -> Union[(ivy.Device, str)]:
if as_native:
return x.context
return as_ivy_dev(x.context)
def to_device(
x: Union[(None, mx.ndarray.NDArray)],
device: str,
/,
*,
stream: Optional[int] = None,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return x.as_in_context(as_native_dev(device))
def as_ivy_dev(device):
if isinstance(device, str):
return ivy.Device(device)
if device is None:
return None
# if mx device context is passed
p, dev_id = (device.device_type, device.device_id)
if p == "cpu":
return ivy.Device(p)
return ivy.Device(p + ":" + str(dev_id))
def as_native_dev(device: str, /):
if isinstance(device, mx.Context):
return device
    if device is None:
        return mx.Context("cpu", 0)
    if "cpu" in device:
        mx_dev = "cpu"
    elif "gpu" in device:
        mx_dev = "gpu"
    else:
        raise ValueError(f"dev input {device} not supported.")
if device.find(":") != -1:
mx_dev_id = int(device[device.find(":") + 1 :])
else:
mx_dev_id = 0
return mx.Context(mx_dev, mx_dev_id)
def clear_cached_mem_on_dev(device: str, /):
raise IvyNotImplementedException()
def num_gpus() -> int:
return mx.context.num_gpus()
def gpu_is_available() -> bool:
    return mx.context.num_gpus() > 0
def tpu_is_available() -> bool:
return False
class Profiler(BaseProfiler):
def __init__(self, save_dir: str):
raise IvyNotImplementedException()
def start(self):
raise IvyNotImplementedException()
def stop(self):
raise IvyNotImplementedException()
def __enter__(self):
raise IvyNotImplementedException()
def __exit__(self, exc_type, exc_val, exc_tb):
raise IvyNotImplementedException()
| ivy/ivy/functional/backends/mxnet/device.py/0 | {
"file_path": "ivy/ivy/functional/backends/mxnet/device.py",
"repo_id": "ivy",
"token_count": 935
} | 17 |
# global
import logging
from typing import Callable
# local
def bind_custom_gradient_function(func, custom_grad_fn):
logging.warning(
"NumPy does not support autograd, 'bind_custom_gradient_function' "
"has no effect on the array, as gradients are not supported in the first place."
)
return func
def vjp(func: Callable, *primals):
logging.warning(
"NumPy does not support autograd, 'vjp' returns None in place of `vjpfun`."
)
return func(*primals), None
def jvp(func: Callable, primals, tangents):
logging.warning(
"NumPy does not support autograd, "
"'jvp' returns None in place of `tangents_out`."
)
return func(*primals), None
| ivy/ivy/functional/backends/numpy/experimental/gradients.py/0 | {
"file_path": "ivy/ivy/functional/backends/numpy/experimental/gradients.py",
"repo_id": "ivy",
"token_count": 266
} | 18 |
import functools
from typing import Callable
import numpy as np
def _scalar_output_to_0d_array(function: Callable) -> Callable:
"""Convert scalar outputs to 0d arrays.
Sometimes NumPy functions return scalars e.g. `np.add` does when the
inputs are both 0 dimensional.
We use this wrapper to handle such cases, and convert scalar outputs
to 0d arrays, since the array API standard dictates outputs must be
arrays.
"""
@functools.wraps(function)
def new_function(*args, **kwargs):
ret = function(*args, **kwargs)
return np.asarray(ret) if np.isscalar(ret) else ret
return new_function
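# Minimal usage sketch (illustrative only, not part of the module): wrapping a
# function that may return a scalar so callers always receive an ndarray.
#
#     @_scalar_output_to_0d_array
#     def _add(a, b):
#         return np.add(a, b)
#
#     _add(1.0, 2.0)  # -> array(3.), a 0-d ndarray rather than a Python scalar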
| ivy/ivy/functional/backends/numpy/helpers.py/0 | {
"file_path": "ivy/ivy/functional/backends/numpy/helpers.py",
"repo_id": "ivy",
"token_count": 220
} | 19 |
# global
from typing import Optional, Union, Sequence, List
import paddle
import ivy.functional.backends.paddle as paddle_backend
import numpy as np
import ivy
from ivy.func_wrapper import with_unsupported_dtypes
from ivy.functional.ivy.data_type import _handle_nestable_dtype_info
from . import backend_version
ivy_dtype_dict = {
paddle.int8: "int8",
paddle.int16: "int16",
paddle.int32: "int32",
paddle.int64: "int64",
paddle.uint8: "uint8",
paddle.bfloat16: "bfloat16",
paddle.float16: "float16",
paddle.float32: "float32",
paddle.float64: "float64",
paddle.complex64: "complex64",
paddle.complex128: "complex128",
paddle.bool: "bool",
}
native_dtype_dict = {
"int8": paddle.int8,
"int16": paddle.int16,
"int32": paddle.int32,
"int64": paddle.int64,
"uint8": paddle.uint8,
"bfloat16": paddle.bfloat16,
"float16": paddle.float16,
"float32": paddle.float32,
"float64": paddle.float64,
"complex64": paddle.complex64,
"complex128": paddle.complex128,
"bool": paddle.bool,
}
class Finfo:
def __init__(self, paddle_finfo: np.finfo):
self._paddle_finfo = paddle_finfo
def __repr__(self):
return repr(self._paddle_finfo)
@property
def bits(self):
return self._paddle_finfo.bits
@property
def eps(self):
return float(self._paddle_finfo.eps)
@property
def max(self):
return float(self._paddle_finfo.max)
@property
def min(self):
return float(self._paddle_finfo.min)
@property
def smallest_normal(self):
return float(self._paddle_finfo.tiny)
class Iinfo:
def __init__(self, paddle_iinfo: np.iinfo):
self._paddle_iinfo = paddle_iinfo
def __repr__(self):
return repr(self._paddle_iinfo)
@property
def bits(self):
return self._paddle_iinfo.bits
@property
def max(self):
return self._paddle_iinfo.max
@property
def min(self):
return self._paddle_iinfo.min
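# numpy provides no ``finfo`` for bfloat16, so its characteristics are
# hardcoded in the class below.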
class Bfloat16Finfo:
def __init__(self):
self.resolution = 0.01
self.bits = 16
self.eps = 0.0078125
self.max = 3.38953e38
self.min = -3.38953e38
self.tiny = 1.17549e-38
def __repr__(self):
return (
f"finfo(resolution={self.resolution}, min={self.min}, max={self.max},"
" dtype=bfloat16)"
)
# Array API Standard #
# -------------------#
def astype(
x: paddle.Tensor,
dtype: paddle.dtype,
/,
*,
copy: bool = True,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
dtype = ivy.as_native_dtype(dtype)
if x.dtype == dtype:
return x.clone() if copy else x
return x.clone().cast(dtype) if copy else x.cast(dtype)
def broadcast_arrays(*arrays: paddle.Tensor) -> List[paddle.Tensor]:
if len(arrays) > 1:
desired_shape = paddle_backend.broadcast_shapes(
arrays[0].shape, arrays[1].shape
)
if len(arrays) > 2:
for i in range(2, len(arrays)):
desired_shape = paddle_backend.broadcast_shapes(
desired_shape, arrays[i].shape
)
else:
return [arrays[0]]
result = []
for tensor in arrays:
result.append(paddle_backend.broadcast_to(tensor, desired_shape))
return result
@with_unsupported_dtypes(
{
"2.6.0 and below": (
"uint8",
"int8",
"int16",
"float16",
"bfloat16",
)
},
backend_version,
)
def broadcast_to(
x: paddle.Tensor,
/,
shape: Union[ivy.NativeShape, Sequence[int]],
*,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
ivy.utils.assertions.check_shapes_broadcastable(x.shape, shape)
# paddle doesn't accept 0 in shape and uses -1 instead
shape = [-1 if dim == 0 else dim for dim in shape]
if x.ndim == 0:
if len(shape) == 0:
return x
else:
x = paddle_backend.expand_dims(x, axis=0)
if x.ndim > len(shape):
x = x.reshape([-1])
if x.dtype in [paddle.complex64, paddle.complex128]:
x_real = paddle.broadcast_to(x.real(), shape)
x_imag = paddle.broadcast_to(x.imag(), shape)
return paddle.complex(x_real, x_imag)
else:
return paddle.broadcast_to(x, shape)
@_handle_nestable_dtype_info
def finfo(type: Union[paddle.dtype, str, paddle.Tensor], /) -> Finfo:
if isinstance(type, paddle.Tensor):
type = str(type.dtype)[7:]
elif isinstance(type, paddle.dtype):
type = str(type)[7:]
if ivy.as_native_dtype(type) == paddle.bfloat16:
return Finfo(Bfloat16Finfo())
return Finfo(np.finfo(type))
@_handle_nestable_dtype_info
def iinfo(type: Union[paddle.dtype, str, paddle.Tensor], /) -> Iinfo:
if isinstance(type, paddle.Tensor):
type = str(type.dtype)[7:]
elif isinstance(type, paddle.dtype):
type = str(type)[7:]
return Iinfo(np.iinfo(type))
def result_type(*arrays_and_dtypes: Union[paddle.Tensor, paddle.dtype]) -> ivy.Dtype:
return ivy.promote_types_of_inputs(*arrays_and_dtypes)[0].dtype
# Extra #
# ------#
def as_ivy_dtype(dtype_in: Union[paddle.dtype, str, bool, int, float], /) -> ivy.Dtype:
if dtype_in is int:
return ivy.default_int_dtype()
if dtype_in is float:
return ivy.default_float_dtype()
if dtype_in is complex:
return ivy.default_complex_dtype()
if dtype_in is bool:
return ivy.Dtype("bool")
if isinstance(dtype_in, str):
if dtype_in in native_dtype_dict:
return ivy.Dtype(dtype_in)
else:
raise ivy.utils.exceptions.IvyException(
"Cannot convert to ivy dtype."
f" {dtype_in} is not supported by Paddle backend."
)
return ivy.Dtype(ivy_dtype_dict[dtype_in])
def as_native_dtype(
dtype_in: Union[paddle.dtype, str, bool, int, float],
) -> paddle.dtype:
if dtype_in is int:
return ivy.default_int_dtype(as_native=True)
if dtype_in is float:
return ivy.default_float_dtype(as_native=True)
if dtype_in is complex:
return ivy.default_complex_dtype(as_native=True)
if dtype_in is bool:
return paddle.bool
if not isinstance(dtype_in, str):
return dtype_in
if dtype_in in native_dtype_dict:
return native_dtype_dict[ivy.Dtype(dtype_in)]
else:
raise ivy.utils.exceptions.IvyException(
f"Cannot convert to Paddle dtype. {dtype_in} is not supported by Paddle."
)
def dtype(x: paddle.Tensor, *, as_native: bool = False) -> ivy.Dtype:
if as_native:
return ivy.to_native(x).dtype
return as_ivy_dtype(x.dtype)
def dtype_bits(dtype_in: Union[paddle.dtype, str], /) -> int:
dtype_str = as_ivy_dtype(dtype_in)
if "bool" in dtype_str:
return 1
return int(
dtype_str.replace("paddle.", "")
.replace("uint", "")
.replace("int", "")
.replace("bfloat", "")
.replace("float", "")
.replace("complex", "")
)
def is_native_dtype(dtype_in: Union[paddle.dtype, str], /) -> bool:
if not ivy.is_hashable_dtype(dtype_in):
return False
return dtype_in in ivy_dtype_dict
| ivy/ivy/functional/backends/paddle/data_type.py/0 | {
"file_path": "ivy/ivy/functional/backends/paddle/data_type.py",
"repo_id": "ivy",
"token_count": 3405
} | 20 |
from collections import namedtuple
from typing import (
Iterable,
Optional,
Union,
Sequence,
Tuple,
NamedTuple,
List,
Any,
Literal,
Callable,
)
from numbers import Number
from .. import backend_version
from ivy.func_wrapper import (
with_supported_device_and_dtypes,
with_unsupported_device_and_dtypes,
with_supported_dtypes,
with_unsupported_dtypes,
handle_out_argument,
)
import paddle
import ivy
import ivy.functional.backends.paddle as paddle_backend
from ivy.functional.ivy.experimental.manipulation import (
_check_paddle_pad,
_to_paddle_padding,
)
# Code from cephes for i0
_i0A = [
-4.41534164647933937950e-18,
3.33079451882223809783e-17,
-2.43127984654795469359e-16,
1.71539128555513303061e-15,
-1.16853328779934516808e-14,
7.67618549860493561688e-14,
-4.85644678311192946090e-13,
2.95505266312963983461e-12,
-1.72682629144155570723e-11,
9.67580903537323691224e-11,
-5.18979560163526290666e-10,
2.65982372468238665035e-9,
-1.30002500998624804212e-8,
6.04699502254191894932e-8,
-2.67079385394061173391e-7,
1.11738753912010371815e-6,
-4.41673835845875056359e-6,
1.64484480707288970893e-5,
-5.75419501008210370398e-5,
1.88502885095841655729e-4,
-5.76375574538582365885e-4,
1.63947561694133579842e-3,
-4.32430999505057594430e-3,
1.05464603945949983183e-2,
-2.37374148058994688156e-2,
4.93052842396707084878e-2,
-9.49010970480476444210e-2,
1.71620901522208775349e-1,
-3.04682672343198398683e-1,
6.76795274409476084995e-1,
]
_i0B = [
-7.23318048787475395456e-18,
-4.83050448594418207126e-18,
4.46562142029675999901e-17,
3.46122286769746109310e-17,
-2.82762398051658348494e-16,
-3.42548561967721913462e-16,
1.77256013305652638360e-15,
3.81168066935262242075e-15,
-9.55484669882830764870e-15,
-4.15056934728722208663e-14,
1.54008621752140982691e-14,
3.85277838274214270114e-13,
7.18012445138366623367e-13,
-1.79417853150680611778e-12,
-1.32158118404477131188e-11,
-3.14991652796324136454e-11,
1.18891471078464383424e-11,
4.94060238822496958910e-10,
3.39623202570838634515e-9,
2.26666899049817806459e-8,
2.04891858946906374183e-7,
2.89137052083475648297e-6,
6.88975834691682398426e-5,
3.36911647825569408990e-3,
8.04490411014108831608e-1,
]
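# _i0A and _i0B are Chebyshev series coefficients from the Cephes library for
# the modified Bessel function I0, covering the intervals [0, 8] and (8, inf)
# respectively; they are evaluated with the Clenshaw recurrence in the
# ``_chbevl`` helper inside ``i0`` below.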
@with_unsupported_dtypes(
{
"2.6.0 and below": (
"int16",
"int8",
"uint8",
"bfloat16",
)
},
backend_version,
)
def moveaxis(
a: paddle.Tensor,
source: Union[int, Sequence[int]],
destination: Union[int, Sequence[int]],
/,
*,
copy: Optional[bool] = None,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
if isinstance(source, tuple):
source = list(source)
if isinstance(destination, tuple):
        destination = list(destination)
return paddle.moveaxis(a, source, destination)
@with_supported_dtypes(
{"2.6.0 and below": ("float16", "float32", "float64", "int32", "int64")},
backend_version,
)
def pad(
input: paddle.Tensor,
pad_width: Union[Iterable[Tuple[int]], int],
/,
*,
mode: Union[
Literal[
"constant",
"dilated",
"edge",
"linear_ramp",
"maximum",
"mean",
"median",
"minimum",
"reflect",
"symmetric",
"wrap",
"empty",
],
Callable,
] = "constant",
stat_length: Union[Iterable[Tuple[int]], int] = 1,
constant_values: Union[Iterable[Tuple[Number]], Number] = 0,
end_values: Union[Iterable[Tuple[Number]], Number] = 0,
reflect_type: Literal["even", "odd"] = "even",
**kwargs: Optional[Any],
) -> paddle.Tensor:
constant_values = (
float(constant_values)
if not isinstance(constant_values, float)
else constant_values
)
pad_width = _to_paddle_padding(pad_width, input.ndim)
mode = "replicate" if mode == "edge" else "circular" if mode == "wrap" else mode
data_format = "NCL" if input.ndim == 1 else "NCHW" if input.ndim == 2 else "NCDHW"
return (
paddle.nn.functional.pad(
input.unsqueeze(0).unsqueeze(0),
pad_width,
mode=mode,
value=constant_values,
data_format=data_format,
)
.squeeze(0)
.squeeze(0)
)
pad.partial_mixed_handler = (
lambda *args, mode="constant", constant_values=0, reflect_type="even", **kwargs: (
len(args[0].shape) <= 3
and (
_check_paddle_pad(
mode, reflect_type, args[1], args[0].shape, constant_values, 3
)
)
)
)
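# The handler above routes to paddle's native ``pad`` only for tensors of
# rank <= 3 whose mode and arguments paddle supports; otherwise ivy falls
# back to its generic implementation.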
@with_unsupported_device_and_dtypes(
{
"2.6.0 and below": {
"cpu": (
"int8",
"int16",
"uint8",
"float16",
"complex64",
"complex128",
"bool",
)
}
},
backend_version,
)
def heaviside(
x1: paddle.Tensor,
x2: paddle.Tensor,
/,
*,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
return paddle.heaviside(x1, x2)
@with_unsupported_dtypes(
{"2.6.0 and below": ("bfloat16", "float16", "int16", "int8", "uint8")},
backend_version,
)
def flipud(
m: paddle.Tensor,
/,
*,
copy: Optional[bool] = None,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
return paddle.flip(m, axis=0)
def vstack(
arrays: Sequence[paddle.Tensor],
/,
*,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
with ivy.ArrayMode(False):
arrays = [ivy.reshape(x, shape=(1, -1)) if x.ndim < 2 else x for x in arrays]
return ivy.concat(arrays, axis=0)
@with_unsupported_device_and_dtypes(
{"2.6.0 and below": {"cpu": ("int16", "bfloat16")}},
backend_version,
)
def hstack(
arrays: Sequence[paddle.Tensor],
/,
*,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
with ivy.ArrayMode(False):
if arrays[0].ndim >= 2:
return ivy.concat(arrays, axis=1)
else:
return ivy.concat(arrays, axis=0)
@with_unsupported_dtypes(
{"2.6.0 and below": ("bfloat16", "float16", "int16", "int8", "uint8")},
backend_version,
)
def rot90(
m: paddle.Tensor,
/,
*,
copy: Optional[bool] = None,
k: Optional[int] = 1,
axes: Optional[Tuple[int, int]] = (0, 1),
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
return paddle.rot90(m, k=k, axes=axes)
@with_unsupported_device_and_dtypes(
{"2.6.0 and below": {"cpu": ("complex64", "complex128")}},
backend_version,
)
def top_k(
x: paddle.Tensor,
k: int,
/,
*,
axis: int = -1,
largest: Optional[bool] = True,
sorted: bool = True,
out: Optional[Tuple[paddle.Tensor, paddle.Tensor]] = None,
) -> Tuple[paddle.Tensor, paddle.Tensor]:
k = min(k, x.shape[axis])
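    # take the first k positions of the (descending, when ``largest``) argsort
    # along ``axis``; when ``sorted`` is False the selected indices are put
    # back in positional order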
topk_res = NamedTuple(
"top_k", [("values", paddle.Tensor), ("indices", paddle.Tensor)]
)
with ivy.ArrayMode(False):
indices = ivy.argsort(x, axis=axis, descending=largest)
indices = paddle.index_select(indices, paddle.arange(end=k), axis)
if not sorted:
indices = paddle.sort(indices, axis=axis)
val = ivy.take_along_axis(x, indices, axis)
return topk_res(val, indices)
@with_unsupported_dtypes(
{"2.6.0 and below": ("bfloat16", "float16", "int16", "int8", "uint8")},
backend_version,
)
def fliplr(
m: paddle.Tensor,
/,
*,
copy: Optional[bool] = None,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
return paddle.flip(m, axis=1)
@with_unsupported_dtypes(
{"2.6.0 and below": ("bfloat16", "float16")},
backend_version,
)
def i0(
x: paddle.Tensor,
/,
*,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
def _i0_1(x):
return paddle_backend.multiply(
paddle_backend.exp(x),
_chbevl(paddle_backend.subtract(paddle_backend.divide(x, 2.0), 2.0), _i0A),
)
def _i0_2(x):
return paddle_backend.divide(
paddle_backend.multiply(
paddle_backend.exp(x),
_chbevl(
paddle_backend.subtract(paddle_backend.divide(32.0, x), 2.0), _i0B
),
),
paddle_backend.sqrt(x),
)
def _chbevl(x, vals):
b0 = vals[0]
b1 = 0.0
for i in range(1, len(vals)):
b2 = b1
b1 = b0
b0 = paddle_backend.add(
paddle_backend.subtract(paddle_backend.multiply(x, b1), b2), vals[i]
)
return paddle_backend.multiply(0.5, paddle_backend.subtract(b0, b2))
x = paddle_backend.abs(x)
return paddle_backend.where(paddle_backend.less_equal(x, 8.0), _i0_1(x), _i0_2(x))
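# The two branches mirror the classic Cephes-style evaluation of the modified
# Bessel function I0: for |x| <= 8 it is exp(x) * chbevl(x/2 - 2, _i0A), and
# otherwise exp(x) * chbevl(32/x - 2, _i0B) / sqrt(x). Illustrative check
# (hypothetical, not part of the original module):
#   i0(paddle.to_tensor([0.0]))  # -> [1.0] (approximately), since I0(0) = 1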
def flatten(
x: paddle.Tensor,
/,
*,
copy: Optional[bool] = None,
start_dim: Optional[int] = 0,
end_dim: Optional[int] = -1,
order: Optional[str] = "C",
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
ivy.utils.assertions.check_elem_in_list(order, ["C", "F"])
if x.ndim == 0:
return x.reshape((-1,))
def _flatten(x, start_dim, end_dim):
if x.dtype in [
paddle.float16,
paddle.complex64,
paddle.complex128,
paddle.bool,
]:
if paddle.is_complex(x):
return paddle.complex(
paddle.flatten(x.real(), start_axis=start_dim, stop_axis=end_dim),
paddle.flatten(x.imag(), start_axis=start_dim, stop_axis=end_dim),
)
return paddle.flatten(
x.cast("float32"), start_axis=start_dim, stop_axis=end_dim
).cast(x.dtype)
return paddle.flatten(x, start_axis=start_dim, stop_axis=end_dim)
if order == "F":
with ivy.ArrayMode(False):
x = ivy.permute_dims(x, list(reversed(range(x.ndim))))
ret = _flatten(x, start_dim, end_dim)
return ivy.permute_dims(ret, list(reversed(range(ret.ndim))))
return _flatten(x, start_dim, end_dim)
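# Illustrative usage sketch (hypothetical values, not part of the original
# module): order="F" flattens in column-major order by transposing first.
#   x = paddle.to_tensor([[1, 2], [3, 4]])
#   flatten(x)              # -> [1, 2, 3, 4]
#   flatten(x, order="F")   # -> [1, 3, 2, 4]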
def vsplit(
ary: paddle.Tensor,
indices_or_sections: Union[int, Sequence[int], paddle.Tensor],
/,
*,
copy: Optional[bool] = None,
) -> List[paddle.Tensor]:
if ary.ndim < 2:
raise ivy.exceptions.IvyError(
"vsplit only works on arrays of 2 or more dimensions"
)
return ivy.split(ary, copy=copy, num_or_size_splits=indices_or_sections, axis=0)
def dsplit(
ary: paddle.Tensor,
indices_or_sections: Union[int, Sequence[int], paddle.Tensor],
/,
*,
copy: Optional[bool] = None,
) -> List[paddle.Tensor]:
if ary.ndim < 3:
raise ivy.exceptions.IvyError(
"dsplit only works on arrays of 3 or more dimensions"
)
return ivy.split(ary, num_or_size_splits=indices_or_sections, axis=2)
def atleast_1d(
*arys: paddle.Tensor, copy: Optional[bool] = None
) -> List[paddle.Tensor]:
res = []
for ary in arys:
ary = ivy.array(ary, copy=copy).data
if ary.ndim < 1:
with ivy.ArrayMode(False):
res.append(ivy.expand_dims(ary, axis=0))
else:
res.append(ary)
if len(res) == 1:
return res[0]
return res
def dstack(
arrays: Sequence[paddle.Tensor],
/,
*,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
with ivy.ArrayMode(False):
arrays = ivy.atleast_2d(*arrays)
if not isinstance(arrays, list):
arrays = [arrays]
if arrays[0].ndim < 3:
return ivy.stack(arrays, axis=-1)
else:
return ivy.concat(arrays, axis=2)
def atleast_2d(
*arys: paddle.Tensor, copy: Optional[bool] = None
) -> List[paddle.Tensor]:
res = []
for ary in arys:
ary = ivy.array(ary, copy=copy).data
if ary.ndim < 2:
with ivy.ArrayMode(False):
res.append(ivy.expand_dims(ary, axis=list(range(2 - ary.ndim))))
else:
res.append(ary)
if len(res) == 1:
return res[0]
return res
@with_unsupported_device_and_dtypes(
{"2.6.0 and below": {"cpu": ("float16",)}},
backend_version,
)
def atleast_3d(
*arys: Union[paddle.Tensor, bool, Number], copy: Optional[bool] = None
) -> List[paddle.Tensor]:
res = []
for ary in arys:
ary = ivy.array(ary, copy=copy).data
if ary.ndim == 0:
result = ary.reshape((1, 1, 1))
elif ary.ndim == 1:
result = ary[None, :, None]
elif ary.ndim == 2:
result = ary[:, :, None]
else:
result = ary
res.append(result)
if len(res) == 1:
return res[0]
else:
return res
@with_unsupported_dtypes(
{"2.6.0 and below": ("bfloat16", "bool", "float16", "int16", "int8", "uint8")},
backend_version,
)
def take_along_axis(
arr: paddle.Tensor,
indices: paddle.Tensor,
axis: int,
/,
*,
mode: str = "fill",
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
if arr.ndim != indices.ndim:
raise ivy.utils.exceptions.IvyException(
"arr and indices must have the same number of dimensions;"
+ f" got {arr.ndim} vs {indices.ndim}"
)
indices = indices.cast("int64")
if mode not in ["clip", "fill", "drop"]:
raise ValueError(
f"Invalid mode '{mode}'. Valid modes are 'clip', 'fill', 'drop'."
)
arr_shape = arr.shape
if axis < 0:
axis += arr.ndim
if mode == "clip":
max_index = arr.shape[axis] - 1
with ivy.ArrayMode(False):
indices = ivy.clip(indices, 0, max_index)
elif mode in ("fill", "drop"):
if "float" in str(arr.dtype) or "complex" in str(arr.dtype):
fill_value = float("nan")
elif "uint" in str(arr.dtype):
fill_value = paddle.iinfo(arr.dtype).max
elif "int" in str(arr.dtype):
fill_value = -paddle.iinfo(arr.dtype).max - 1
else:
raise TypeError(
f"Invalid dtype '{arr.dtype}'. Valid dtypes are 'float', 'complex',"
" 'uint', 'int'."
)
with ivy.ArrayMode(False):
indices = ivy.where(
(indices < 0) | (indices >= arr.shape[axis]), -1, indices
)
arr_shape = list(arr_shape)
arr_shape[axis] = 1
fill_arr = ivy.full(arr_shape, fill_value, dtype=arr.dtype)
arr = ivy.concat([arr, fill_arr], axis=axis)
indices = ivy.where(indices < 0, arr.shape[axis] + indices, indices)
if paddle.is_complex(arr):
return paddle.complex(
paddle.take_along_axis(arr.real(), indices, axis),
paddle.take_along_axis(arr.imag(), indices, axis),
)
return paddle.take_along_axis(arr, indices, axis)
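# Illustrative usage sketch (hypothetical values, not part of the original
# module): with mode="fill", out-of-range indices are redirected to a
# concatenated fill slice (NaN for floating-point inputs).
#   arr = paddle.to_tensor([[10.0, 20.0], [30.0, 40.0]])
#   idx = paddle.to_tensor([[0, 5], [1, 0]])
#   take_along_axis(arr, idx, 1)  # -> [[10.0, nan], [40.0, 30.0]]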
def hsplit(
ary: paddle.Tensor,
indices_or_sections: Union[int, Tuple[int, ...]],
/,
*,
copy: Optional[bool] = None,
) -> List[paddle.Tensor]:
if ary.ndim == 1:
return ivy.split(ary, num_or_size_splits=indices_or_sections, axis=0)
return ivy.split(ary, num_or_size_splits=indices_or_sections, axis=1)
def broadcast_shapes(*shapes: Union[List[int], List[Tuple]]) -> Tuple[int]:
def _broadcast_shape(s1, s2):
len_1 = len(s1)
len_2 = len(s2)
if len_1 == 0:
return () if len_2 == 0 else s2
elif len_1 != 0 and len_2 == 0:
return s1
else:
return paddle.broadcast_shape(s1, s2)
if len(shapes) == 0:
raise ValueError("shapes=[] must be non-empty")
elif len(shapes) == 1:
return shapes[0]
result = _broadcast_shape(shapes[0], shapes[1])
for i in range(2, len(shapes)):
result = _broadcast_shape(result, shapes[i])
# paddle outputs -1 if the output dimension is 0
result = [0 if dim == -1 else dim for dim in result]
return tuple(result)
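# Illustrative usage sketch (hypothetical values, not part of the original
# module):
#   broadcast_shapes((2, 1, 3), (1, 4, 3))  # -> (2, 4, 3)
#   broadcast_shapes((5,), ())              # -> (5,)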
def expand(
x: paddle.Tensor,
shape: Union[List[int], List[Tuple]],
/,
*,
copy: Optional[bool] = None,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
return paddle_backend.broadcast_to(x, shape)
def concat_from_sequence(
input_sequence: Union[Tuple[paddle.Tensor], List[paddle.Tensor]],
/,
*,
new_axis: int = 0,
axis: int = 0,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
with ivy.ArrayMode(False):
if new_axis == 0:
return ivy.concat(input_sequence, axis=axis)
elif new_axis == 1:
return ivy.stack(input_sequence, axis=axis)
@with_unsupported_device_and_dtypes(
{"2.6.0 and below": {"cpu": ("int8", "int16", "uint8")}}, backend_version
)
def unique_consecutive(
x: paddle.Tensor,
/,
*,
axis: Optional[int] = None,
) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor]:
Results = namedtuple(
"Results",
["output", "inverse_indices", "counts"],
)
x_shape = None
if axis is None:
x_shape = x.shape
x = x.flatten()
axis = -1
if axis < 0:
axis += x.ndim
split_indices = paddle.flatten(
paddle.where(
ivy.current_backend().any(
paddle.abs(paddle.diff(x, axis=axis)) > 1e-50,
axis=tuple(i for i in paddle.arange(x.ndim) if i != axis),
)
)[0]
+ 1,
)
if len(split_indices) > 0:
split_sizes = (
[split_indices[0]]
+ [
split_indices[i] - split_indices[i - 1]
for i in range(1, len(split_indices))
]
+ [x.shape[axis] - split_indices[-1]]
)
sub_arrays = paddle.split(
x,
split_sizes,
axis=axis,
)
else:
sub_arrays = [x]
output = paddle.concat(
[
ivy.current_backend().unique_all(sub_array, axis=axis)[0]
for sub_array in sub_arrays
],
axis=axis,
)
counts = paddle.to_tensor([sub_array.shape[axis] for sub_array in sub_arrays])
inverse_indices = paddle.repeat_interleave(paddle.arange(len(counts)), counts)
if x_shape:
inverse_indices = paddle.reshape(inverse_indices, x_shape)
return Results(
output.astype(x.dtype),
inverse_indices,
counts,
)
@with_unsupported_device_and_dtypes(
{"2.6.0 and below": {"cpu": ("int8", "int16", "uint8", "float16")}},
backend_version,
)
def fill_diagonal(
a: paddle.Tensor,
v: Union[int, float],
/,
*,
wrap: bool = False,
) -> paddle.Tensor:
shape = a.shape
max_end = paddle.prod(paddle.to_tensor(shape))
end = max_end
if len(shape) == 2:
step = shape[1] + 1
if not wrap:
end = shape[1] * shape[1]
else:
step = 1 + (paddle.cumprod(paddle.to_tensor(shape[:-1]), dim=0)).sum()
end = max_end if end > max_end else end
a = paddle.reshape(a, (-1,))
w = paddle.zeros(a.shape, dtype=bool)
ins = paddle.arange(0, max_end)
steps = paddle.arange(0, end, step)
for i in steps:
i = ins == i
w = paddle.logical_or(w, i)
v = paddle.to_tensor(v, dtype=a.dtype)
a = paddle.where(w, v, a)
a = paddle.reshape(a, shape)
return a
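# Illustrative usage sketch (hypothetical values, not part of the original
# module): for a 2-D input without wrapping, every (shape[1] + 1)-th element
# of the flattened array lies on the main diagonal.
#   a = paddle.zeros([3, 3])
#   fill_diagonal(a, 5.0)
#   # -> [[5., 0., 0.],
#   #     [0., 5., 0.],
#   #     [0., 0., 5.]]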
def _take_with_axis(
x: paddle.Tensor, indices: paddle.Tensor, /, *, axis: int, mode: str
) -> paddle.Tensor:
    # performs no bounds checks; the default behaviour matches 'raise' on CPU,
    # so additional checking by the caller is recommended
    if not ivy.exists(axis):
        x = x.flatten()
        x_shape = paddle.prod(paddle.to_tensor(x.shape))
        axis = 0
    else:
        x_shape = x.shape[axis]
# wrap
if mode == "wrap":
indices = ((indices % x_shape) + x_shape) % x_shape
# clip
else:
indices = paddle.clip(indices, 0, x_shape - 1)
rank = len(x.shape)
axis = ((axis % rank) + rank) % rank
slicer = ([slice(None)] * axis) + [indices.tolist()]
ret = ivy.array(x)[tuple(slicer)]
if len(indices.shape) == 0 and ret.shape == [1]:
ret = ret[0]
return ret
@with_supported_device_and_dtypes(
{
"2.6.0 and below": {
"cpu": ("int64", "float64", "int32", "uint8", "float32", "bool")
}
},
backend_version,
)
def take(
x: Union[int, List, paddle.Tensor],
indices: Union[int, List, paddle.Tensor],
/,
*,
axis: Optional[int] = None,
mode: str = "clip",
fill_value: Optional[Number] = None,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
if mode not in ["raise", "wrap", "clip", "fill"]:
raise ValueError("mode must be one of 'clip', 'raise', 'wrap', or 'fill'")
if not isinstance(x, paddle.Tensor):
x = paddle.to_tensor(x)
if len(x.shape) == 0:
x = paddle.to_tensor([x])
if not isinstance(indices, paddle.Tensor):
indices = paddle.to_tensor(indices)
if paddle.is_floating_point(indices):
indices = indices.astype(paddle.int64)
# raise
if mode == "raise":
mode = "clip"
if ivy.exists(axis):
try:
x_shape = x.shape[axis]
except Exception as e:
rank = len(x.shape)
raise IndexError(
"(OutOfRange) Attr(axis) is out of range, "
"It's expected to be in range of "
f"[-{rank}, {rank-1}]. But received Attr(axis) = {axis}."
"[Hint: Expected axis < input_dim.size() && axis >= "
"(0 - input_dim.size()) == true, "
"but received axis < input_dim.size() && axis >= "
"(0 - input_dim.size()):0 != true:1.]"
) from e
else:
x_shape = paddle.prod(paddle.to_tensor(x.shape))
bound_check = (indices < -x_shape) | (indices >= x_shape)
if paddle.any(bound_check):
if len(indices.shape) != 0:
indices = indices[bound_check].flatten()[0]
raise ValueError(
"(InvalidArgument) Variable value (indices) of OP(take) "
f"expected >= -{x_shape} and < {x_shape}, but got {indices}. "
"Please check input value. "
"[Hint: Expected index_data[i] < input_dim[axis], "
f"but received index_data[i]:{indices} >= input_dim[axis]:2.]"
)
# clip, wrap
if mode != "fill":
ret = _take_with_axis(x, indices, axis=axis, mode=mode)
if ivy.exists(out):
ivy.inplace_update(out, ret)
return ret
# fill
x_dtype = x.dtype
if fill_value is None:
# set according to jax behaviour
# https://tinyurl.com/66jn68uj
if paddle.is_floating_point(x) or paddle.is_complex(x):
# NaN for inexact types
fill_value = float("NaN")
else:
if x_dtype == paddle.bool:
# True for booleans
fill_value = True
elif str(x_dtype).split(".")[-1].startswith("u"):
# the largest positive value for unsigned types
fill_value = paddle.iinfo(x_dtype).max
else:
# the largest negative value for signed types
fill_value = paddle.iinfo(x_dtype).min
fill_value = paddle.to_tensor(fill_value, dtype=x_dtype)
x_shape = x.shape
ret = _take_with_axis(x, indices, axis=axis, mode="wrap")
if len(ret.shape) == 0:
# if scalar (paddle scalar), scalar fill (replace)
if paddle.any(indices != 0):
ret = fill_value
else:
if ivy.exists(axis):
rank = len(x.shape)
axis = ((axis % rank) + rank) % rank
x_shape = x_shape[axis]
else:
axis = 0
x_shape = paddle.prod(x_shape)
bound_check = paddle.to_tensor((indices < -x_shape) | (indices >= x_shape))
if paddle.any(bound_check):
if axis > 0:
bound_check = paddle.broadcast_to(
bound_check, (*x.shape[:axis], *bound_check.shape)
)
ret[bound_check] = fill_value
if ivy.exists(out):
ivy.inplace_update(out, ret)
return ret
def trim_zeros(a: paddle.Tensor, /, *, trim: Optional[str] = "bf") -> paddle.Tensor:
first = 0
trim = trim.upper()
if "F" in trim:
for i in a:
if i != 0.0:
break
else:
first = first + 1
last = len(a)
if "B" in trim:
for i in a[::-1]:
if i != 0.0:
break
else:
last = last - 1
return a[first:last]
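# Illustrative usage sketch (hypothetical values, not part of the original
# module): "f" trims leading zeros, "b" trims trailing zeros, "bf" trims both.
#   a = paddle.to_tensor([0, 0, 1, 2, 0, 3, 0])
#   trim_zeros(a)             # -> [1, 2, 0, 3]
#   trim_zeros(a, trim="b")   # -> [0, 0, 1, 2, 0, 3]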
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int64")}, backend_version
)
def put_along_axis(
arr: paddle.Tensor,
indices: paddle.Tensor,
values: Union[int, paddle.Tensor],
axis: int,
/,
*,
mode: Literal["sum", "min", "max", "mul", "replace"] = "replace",
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
mode_mappings = {
"sum": "add",
"mul": "mul",
"replace": "assign",
}
mode = mode_mappings.get(mode, mode)
ret = paddle.put_along_axis(arr, indices, values, axis, reduce=mode)
return ivy.inplace_update(out, ret) if ivy.exists(out) else ret
put_along_axis.partial_mixed_handler = (
    lambda *args, mode="replace", **kwargs: mode in ["replace", "sum", "mul"]
)
@with_supported_dtypes(
{
"2.6.0 and below": (
"int32",
"int64",
"float64",
"complex128",
"float32",
"complex64",
"bool",
)
},
backend_version,
)
@handle_out_argument
def unflatten(
x: paddle.Tensor,
/,
    shape: Optional[Tuple[int]] = None,
dim: int = 0,
*,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
res = paddle.unflatten(x, dim, shape)
return res
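# Illustrative usage sketch (hypothetical values, not part of the original
# module): unflatten expands one axis into the given shape.
#   x = paddle.arange(6, dtype="float32")
#   unflatten(x, shape=(2, 3), dim=0).shape  # -> [2, 3]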
| ivy/ivy/functional/backends/paddle/experimental/manipulation.py/0 | {
"file_path": "ivy/ivy/functional/backends/paddle/experimental/manipulation.py",
"repo_id": "ivy",
"token_count": 13378
} | 21 |
from numbers import Number
from typing import Optional, Tuple, Union
import paddle
import ivy.functional.backends.paddle as paddle_backend
import ivy
from ivy.func_wrapper import (
with_supported_dtypes,
with_unsupported_dtypes,
)
from . import backend_version
from .elementwise import _elementwise_helper
# Array API Standard #
# ------------------ #
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int16", "int32", "int64", "uint8")},
backend_version,
)
def argmax(
x: paddle.Tensor,
/,
*,
axis: Optional[int] = None,
keepdims: bool = False,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype]] = None,
select_last_index: bool = False,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
dtype = dtype if dtype is not None else paddle.int64
if select_last_index:
x = paddle_backend.flip(x, axis=axis)
ret = paddle.argmax(x, axis=axis, keepdim=keepdims)
if axis is not None:
ret = paddle.to_tensor(x.shape[axis] - ret - 1)
else:
ret = paddle.to_tensor(x.size - ret - 1)
else:
ret = paddle.argmax(x, axis=axis, keepdim=keepdims)
if keepdims and axis is None:
ret = ret.reshape([1] * x.ndim)
if not keepdims and (x.ndim == 1 or axis is None):
ret = paddle_backend.squeeze(ret, axis=-1)
return ret.astype(dtype)
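# Illustrative usage sketch (hypothetical values, not part of the original
# module): select_last_index flips the input, takes the argmax, then maps the
# index back, returning the last occurrence of the maximum.
#   x = paddle.to_tensor([1, 3, 3, 2])
#   argmax(x)                          # -> 1 (first occurrence)
#   argmax(x, select_last_index=True)  # -> 2 (last occurrence)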
@with_unsupported_dtypes(
{"2.6.0 and below": ("bfloat16", "bool", "complex", "float16", "int8")},
backend_version,
)
def argmin(
x: paddle.Tensor,
/,
*,
axis: Optional[int] = None,
keepdims: bool = False,
dtype: Optional[paddle.dtype] = None,
select_last_index: bool = False,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
dtype = dtype if dtype is not None else paddle.int64
if select_last_index:
x = paddle_backend.flip(x, axis=axis)
ret = paddle.argmin(x, axis=axis, keepdim=keepdims)
if axis is not None:
ret = paddle.to_tensor(x.shape[axis] - ret - 1)
else:
ret = paddle.to_tensor(x.size - ret - 1)
else:
ret = paddle.argmin(x, axis=axis, keepdim=keepdims)
if keepdims and axis is None:
ret = ret.reshape([1] * x.ndim)
if not keepdims and (x.ndim == 1 or axis is None):
ret = paddle_backend.squeeze(ret, axis=-1)
return ret.astype(dtype)
@with_unsupported_dtypes(
{"2.6.0 and below": ("float16", "int8", "uint8")}, backend_version
)
def nonzero(
x: paddle.Tensor,
/,
*,
as_tuple: bool = True,
size: Optional[int] = None,
fill_value: Number = 0,
) -> Union[paddle.Tensor, Tuple[paddle.Tensor]]:
if paddle.is_complex(x):
real_idx = paddle.nonzero(x.real())
imag_idx = paddle.nonzero(x.imag())
idx = paddle.concat([real_idx, imag_idx], axis=0)
res = paddle.unique(idx, axis=0)
else:
res = paddle.nonzero(x)
res = res.T
if size is not None:
if isinstance(fill_value, float):
res = res.cast(paddle.float64)
diff = size - res[0].shape[0]
if diff > 0:
res = paddle.nn.functional.pad(
res.unsqueeze(0),
[0, diff],
mode="constant",
value=fill_value,
data_format="NCL",
).squeeze(0)
elif diff < 0:
res = res[:, :size]
if as_tuple:
return tuple(res)
return res.T
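# Illustrative usage sketch (hypothetical values, not part of the original
# module): size pads (or truncates) the result to a fixed length.
#   x = paddle.to_tensor([0, 1, 0, 2])
#   nonzero(x, as_tuple=False)         # -> [[1], [3]]
#   nonzero(x, size=3, fill_value=-1)  # -> ([1, 3, -1],)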
def where(
condition: paddle.Tensor,
x1: Union[float, int, paddle.Tensor],
x2: Union[float, int, paddle.Tensor],
/,
*,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
x1, x2, ret_dtype = _elementwise_helper(x1, x2)
arrays = [condition, x1, x2]
scalar_out = all(map(lambda x: x.ndim == 0, arrays))
for i, array in enumerate(arrays):
if array.ndim == 0:
arrays[i] = paddle_backend.expand_dims(array, axis=0)
condition, x1, x2 = arrays
condition = condition.cast("bool") if condition.dtype != paddle.bool else condition
if ret_dtype in [
paddle.int8,
paddle.int16,
paddle.uint8,
paddle.float16,
paddle.bool,
]:
x1 = x1.cast("float32")
x2 = x2.cast("float32")
result = paddle.where(condition, x1, x2)
elif ret_dtype in [paddle.complex64, paddle.complex128]:
result_real = paddle.where(condition, paddle.real(x1), paddle.real(x2))
result_imag = paddle.where(condition, paddle.imag(x1), paddle.imag(x2))
result = paddle.complex(result_real, result_imag)
else:
result = paddle.where(condition, x1, x2)
return result.squeeze().cast(ret_dtype) if scalar_out else result.cast(ret_dtype)
# Extra #
# ----- #
@with_unsupported_dtypes(
{"2.6.0 and below": ("float16", "int8", "uint8")}, backend_version
)
def argwhere(
x: paddle.Tensor, /, *, out: Optional[paddle.Tensor] = None
) -> paddle.Tensor:
if x.ndim == 0:
return paddle.zeros(shape=[int(bool(x.item())), 0], dtype="int64")
if paddle.is_complex(x):
real_idx = paddle.nonzero(x.real())
imag_idx = paddle.nonzero(x.imag())
idx = paddle.concat([real_idx, imag_idx], axis=0)
return paddle.unique(idx, axis=0)
return paddle.nonzero(x)
| ivy/ivy/functional/backends/paddle/searching.py/0 | {
"file_path": "ivy/ivy/functional/backends/paddle/searching.py",
"repo_id": "ivy",
"token_count": 2439
} | 22 |
from typing import Optional, Sequence, Tuple, Union
import ivy
from ivy.func_wrapper import with_supported_dtypes
from ivy.functional.backends.numpy.experimental.statistical import (
_handle_axis,
_quantile,
_validate_quantile,
)
import tensorflow_probability as tfp
import tensorflow as tf
from .... import backend_version
def histogram(
a: tf.Tensor,
/,
*,
bins: Optional[Union[int, tf.Tensor]] = None,
axis: Optional[int] = None,
extend_lower_interval: Optional[bool] = False,
extend_upper_interval: Optional[bool] = False,
dtype: Optional[tf.DType] = None,
range: Optional[Tuple[float]] = None,
weights: Optional[tf.Tensor] = None,
density: Optional[bool] = False,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Tuple[tf.Tensor]:
min_a = tf.reduce_min(a)
max_a = tf.reduce_max(a)
if isinstance(bins, tf.Tensor) and range:
raise ivy.exceptions.IvyException(
"Must choose between specifying bins and range or bin edges directly"
)
if range:
if isinstance(bins, int):
bins = tf.cast(
tf.linspace(start=range[0], stop=range[1], num=bins + 1), dtype=a.dtype
)
elif isinstance(bins, int):
range = (min_a, max_a)
bins = tf.cast(
tf.linspace(start=range[0], stop=range[1], num=bins + 1), dtype=a.dtype
)
if tf.shape(bins)[0] < 2:
raise ivy.exceptions.IvyException("bins must have at least 1 bin (size > 1)")
if min_a < bins[0] and not extend_lower_interval:
raise ivy.exceptions.IvyException(
"Values of x outside of the intervals cause errors in tensorflow backend. "
"Consider using extend_lower_interval to deal with this."
)
if max_a > bins[-1] and not extend_upper_interval:
raise ivy.exceptions.IvyException(
"Values of x outside of the intervals cause errors in tensorflow backend. "
"Consider using extend_upper_interval to deal with this."
)
ret = tfp.stats.histogram(
x=a,
edges=bins,
axis=axis,
weights=weights,
extend_lower_interval=extend_lower_interval,
extend_upper_interval=extend_upper_interval,
dtype=dtype,
name="histogram",
)
    if density:
        # TODO: density normalization (dividing counts by bin widths and the
        # total count) is not implemented yet, so the flag is currently a no-op
        pass
# TODO: Tensorflow native dtype argument is not working
if dtype:
ret = tf.cast(ret, dtype)
bins = tf.cast(bins, dtype)
# TODO: weird error when returning bins: return ret, bins
return ret
@with_supported_dtypes(
{
"2.15.0 and below": (
"float",
"complex",
)
},
backend_version,
)
def median(
input: Union[tf.Tensor, tf.Variable],
/,
*,
axis: Optional[Union[Tuple[int], int]] = None,
keepdims: bool = False,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
return tfp.stats.percentile(
input,
50.0,
axis=axis,
interpolation="midpoint",
keepdims=keepdims,
)
def nanmedian(
input: Union[tf.Tensor, tf.Variable],
/,
*,
axis: Optional[Union[Tuple[int], int]] = None,
keepdims: bool = False,
overwrite_input: bool = False,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
if overwrite_input:
copied_input = tf.identity(input)
return _nanmedian_helper(copied_input, axis, keepdims)
else:
result = _nanmedian_helper(input, axis, keepdims)
return result
def _nanmedian_helper(input, axis=None, keepdims=False):
"""The approach to Handle Nans in single dimensional plus multi-dimensional
inputs are composed on two-parts.
PART 1: In this part, you have axis=None, it means we have to work on
flattened data, we don't need to work on different axis.there are two cases here
Case 1: which is if our input data does contain all the Nans or not,
if our input have just Nans (means no numbers) then we'll not use
temp[~tf.math.is_nan(temp)] function with our input because it will remove all Nans
and we get empty tensor and this raise an error when it sent to percentile function,
in this case we need to keep this input but just we flatten the input and percentile
function returns nan if it find nan in median and here all the input is nan then we
get our result.
Case 2: if we have a number (0.4, 0.3, 0. ,1., 2., .....) with nans then we use this
function temp[~tf.math.is_nan(temp)], it will return a tensor by extracting the nans
and just keeping the values, but remember the returned tensor will be flattened and
axis=None work on flattene inputs, so in this case we are also on same page :)
for example: [[12.0 ,4.0 ,ivy.nan], [ivy.nan, ivy.nan,2.2]] => returned:
[12.0 ,4.0, 2.2] now this will be our new input in percentile function.
PART 2: In this case you have to do more work because we now don't allow to work
directly on flattened data, Here are two cases also.
CASE 1: we need to consider axis parameter here, but percentile axis does work
differently and we don't have median function in tensorflow yet, so we need to make
our input data compatible to the axis, then we compute nanmedian along that specific
axis. we transpose the input data according to our axis, axis can be (0,), (1,),
(0,1), (0,1,2) and input can be multi-dimensional, so we need to take care of edge
cases before making it compatible.
CASE 2: Here the main Nan handling part comes, you can only use 1D inputs here so we
have to flatten the input then we have jump parameter which is use to say how many
iterations we want to make because we have to calculate the row-wise median along
axis=None now, so we slice out some data from the flattened input and then we use
that 1D Input to remove the nans and use it in our percentile.
For example: input = [[ivy.nan, 3, ivy.nan, 7],[4, ivy.nan,6, 9]], axis=1
flatten data -> [[nan 3. nan 7. 4. nan 6. 9.]]
num_jumps -> 2 because we have to slice out this in (1, 4) and (1,4),
then it works same as PART 1 CASE 1 AND CASE 2.
now for first slice we get -> 5.0 and for second we get -> 6.0, these calculated
along axis=1 now we append the data into result, so to make the shape of result
compatible with the numpy output, we reshaped it.
the result which we get from our _nanmedian_helper = [5., 6.]
"""
dtype = input.dtype
temp = tf.cast(input, tf.float64)
num_dim = tf.rank(temp)
keepdim_shape = tf.shape(temp)
q = 50.0
# PART 1
if axis is None:
# PART 1 CASE 1
if tf.reduce_all(tf.math.is_nan(temp)):
temp = tf.reshape(temp, shape=(1, -1))
else:
# PART 1 CASE 2
temp = temp[~tf.math.is_nan(temp)]
ret = tfp.stats.percentile(
temp,
q,
axis=axis,
interpolation="midpoint",
keepdims=keepdims,
)
if dtype in [tf.int32, tf.int64, tf.float64]:
ret = tf.cast(ret, dtype=tf.float64)
elif dtype in [tf.float16, tf.bfloat16]:
ret = tf.cast(ret, dtype=tf.float16)
else:
ret = tf.cast(ret, dtype=tf.float32)
return ret
axis = [axis] if isinstance(axis, int) else list(axis)
# PART 2 CASE 1
for i in axis:
keepdim_shape = tf.tensor_scatter_nd_update(keepdim_shape, [[i]], [1])
axis = [num_dim + x if x < 0 else x for x in axis]
axis.sort()
dimension = tf.size(temp.shape)
while tf.size(axis) > 0:
axis1 = axis[0]
for axis2 in range(axis1 + 1, dimension):
temp = tf.transpose(
temp,
perm=tf.tensor_scatter_nd_update(
tf.range(tf.rank(temp)), [[axis1], [axis2]], [axis2, axis1]
),
)
axis1 = axis2
axis = [x - 1 for x in axis]
axis.pop(0)
dimension = dimension - 1
temp = tf.reshape(
temp, shape=tf.concat([tf.shape(temp)[: (dimension - len(axis))], [-1]], axis=0)
)
tensor = tf.reshape(temp, shape=(1, -1))
shape = temp.shape
dim = temp.ndim
slice_size = shape[len(shape) - 1]
num_jumps = 1
result = []
if slice_size == 1:
if dim == 2 and input.shape[0] == 1:
return tensor
if dim > 2 and input.shape[0] == 1:
return tf.reshape(tensor, shape=input.shape)
tensor = tf.reshape(tensor, shape=shape[:-1])
return tensor
# PART 2 CASE 2
i = dim
while i > 1:
num_jumps *= shape[len(shape) - i]
i -= 1
for i in range(num_jumps):
start = i * slice_size
end = (i + 1) * slice_size
arr = tensor[:, start:end]
if tf.reduce_all(tf.math.is_nan(arr)):
arr = tf.reshape(arr, shape=(1, -1))
else:
arr = arr[~tf.math.is_nan(arr)]
ret = tfp.stats.percentile(
arr, q, axis=None, interpolation="midpoint", keepdims=keepdims
)
if keepdims:
ret = tf.squeeze(ret)
result.append(ret)
result = tf.reshape(result, shape=shape[:-1])
if keepdims:
keepdim_shape = tuple(keepdim_shape)
result = tf.reshape(result, shape=keepdim_shape)
if dtype in [tf.int32, tf.int64, tf.float64]:
result = tf.cast(result, dtype=tf.float64)
elif dtype in [tf.float16, tf.bfloat16]:
result = tf.cast(result, dtype=tf.float16)
else:
result = tf.cast(result, dtype=tf.float32)
return result
def _compute_quantile_wrapper(
x,
q,
axis=None,
keepdims=False,
interpolation="linear",
):
if not _validate_quantile(q):
raise ValueError("Quantiles must be in the range [0, 1]")
if interpolation in [
"linear",
"lower",
"higher",
"midpoint",
"nearest",
"nearest_jax",
]:
if interpolation == "nearest_jax":
return _handle_axis(x, q, _quantile, keepdims=keepdims, axis=axis)
else:
axis = tuple(axis) if isinstance(axis, list) else axis
return tfp.stats.percentile(
x,
tf.math.multiply(q, 100),
axis=axis,
interpolation=interpolation,
keepdims=keepdims,
)
else:
raise ValueError(
"Interpolation must be 'linear', 'lower', 'higher', 'midpoint' or 'nearest'"
)
def quantile(
a: Union[tf.Tensor, tf.Variable],
q: Union[tf.Tensor, float],
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
interpolation: str = "linear",
keepdims: bool = False,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
# added the nearest_jax mode to enable jax-like calculations for method="nearest"
return _compute_quantile_wrapper(
a,
q,
axis=axis,
keepdims=keepdims,
interpolation=interpolation,
)
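# Illustrative usage sketch (hypothetical values, not part of the original
# module): q is rescaled to a percentile and handed to tfp.
#   quantile(tf.constant([0.0, 1.0, 2.0, 3.0]), 0.5)  # -> 1.5 (linear)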
| ivy/ivy/functional/backends/tensorflow/sub_backends/tf_probability/experimental/statistical.py/0 | {
"file_path": "ivy/ivy/functional/backends/tensorflow/sub_backends/tf_probability/experimental/statistical.py",
"repo_id": "ivy",
"token_count": 4874
} | 23 |
"""Collection of PyTorch network layers, wrapped to fit Ivy syntax and
signature."""
from typing import Optional, Tuple, Union, Sequence
# global
import torch
# local
import ivy
from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
from . import backend_version
from ivy.functional.ivy.layers import _get_embed_dim, _handle_padding, _deconv_length
@with_supported_dtypes(
{"2.2 and below": ("float32", "float64", "complex")},
backend_version,
)
def multi_head_attention(
query: torch.Tensor,
/,
*,
key: torch.Tensor = None,
value: torch.Tensor = None,
batch_first: bool = True,
num_heads: Optional[int] = 8,
scale: Optional[float] = None,
attention_mask: torch.Tensor = None,
in_proj_weights: torch.Tensor = None,
q_proj_weights: torch.Tensor = None,
k_proj_weights: torch.Tensor = None,
v_proj_weights: torch.Tensor = None,
out_proj_weights: torch.Tensor = None,
in_proj_bias: torch.Tensor = None,
out_proj_bias: torch.Tensor = None,
is_causal: Optional[bool] = False,
key_padding_mask: Optional[torch.Tensor] = None,
bias_k: Optional[torch.Tensor] = None,
bias_v: Optional[torch.Tensor] = None,
static_k: Optional[torch.Tensor] = None,
static_v: Optional[torch.Tensor] = None,
add_zero_attn: bool = False,
return_attention_weights: Optional[bool] = False,
average_attention_weights: Optional[bool] = True,
dropout: Optional[float] = 0.0,
training: Optional[bool] = False,
out: torch.Tensor = None,
) -> torch.Tensor:
if key is None and value is None:
key = value = query
emb_dim = _get_embed_dim(
in_proj_weights,
q_proj_weights,
k_proj_weights,
v_proj_weights,
query,
)[1]
num_dims = query.ndim
if num_dims == 3 and batch_first:
query, key, value = (torch.swapaxes(x, 0, 1) for x in [query, key, value])
ret = torch.nn.functional.multi_head_attention_forward(
query,
key,
value,
emb_dim,
num_heads,
in_proj_weights,
in_proj_bias,
bias_k,
bias_v,
add_zero_attn,
dropout,
out_proj_weights,
out_proj_bias,
training=training,
key_padding_mask=key_padding_mask,
need_weights=return_attention_weights,
attn_mask=attention_mask,
use_separate_proj_weight=not ivy.exists(in_proj_weights),
q_proj_weight=q_proj_weights,
k_proj_weight=k_proj_weights,
v_proj_weight=v_proj_weights,
static_k=static_k,
static_v=static_v,
average_attn_weights=average_attention_weights,
is_causal=is_causal,
)
ret = list(ret) if isinstance(ret, tuple) else [ret]
if num_dims == 3 and batch_first:
ret[0] = ret[0].swapaxes(0, 1)
if return_attention_weights:
return tuple(ret)
return ret[0]
multi_head_attention.partial_mixed_handler = (
lambda *args, scale=None, out_proj_weights=None, is_causal=False, attention_mask=None, return_attention_weights=False, in_proj_weights=None, q_proj_weights=None, k_proj_weights=None, v_proj_weights=None, **kwargs: not ivy.exists( # noqa: E501
scale
)
and ivy.exists(out_proj_weights)
and (not is_causal or ivy.exists(attention_mask))
and (not is_causal or not return_attention_weights)
and (
ivy.exists(in_proj_weights)
or all(ivy.exists(x) for x in [q_proj_weights, k_proj_weights, v_proj_weights])
)
and len(
set(
_get_embed_dim(
in_proj_weights, q_proj_weights, k_proj_weights, v_proj_weights, args[0]
)
)
)
== 1
)
@with_unsupported_dtypes(
{"2.2 and below": ("float16", "bfloat16", "complex")},
backend_version,
)
def linear(
x: torch.Tensor,
weight: torch.Tensor,
/,
*,
bias: Optional[torch.Tensor] = None,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
return torch.nn.functional.linear(x, weight, bias)
linear.partial_mixed_handler = lambda x, weight, **kwargs: weight.ndim == 2
def _x_dil_before_conv(x, dims, x_dilations):
# adding dilation to input
x_dilations = [x_dilations] * dims if isinstance(x_dilations, int) else x_dilations
x_dilations_idxs = [i for i, x_dil in enumerate(x_dilations) if x_dil > 1]
if x_dilations_idxs:
for i in x_dilations_idxs:
h = x.shape[2 + i]
new_height = h + (h - 1) * (x_dilations[i] - 1)
h = torch.eye(
new_height,
dtype=x.dtype,
device=ivy.as_native_dev(ivy.default_device()),
)[:: x_dilations[i]]
x = torch.swapaxes(x, 2 + i, -1)
x = torch.matmul(x, h)
x = torch.swapaxes(x, -1, 2 + i)
return x
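# Input dilation is realised as a matmul with a strided identity matrix: for a
# length-3 axis and x_dilation=2, torch.eye(5)[::2] has shape (3, 5), so
# [a, b, c] @ eye -> [a, 0, b, 0, c]. Illustrative sketch (hypothetical
# shapes, not part of the original module):
#   x = torch.arange(3.0).reshape(1, 1, 3)  # NCW
#   _x_dil_before_conv(x, 1, 2).shape       # -> torch.Size([1, 1, 5])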
def _pad_before_conv(
x, filters, strides, padding, dims, dilations, filter_format="channel_last"
):
dilations = [dilations] * dims if isinstance(dilations, int) else dilations
strides = [strides] * dims if isinstance(strides, int) else strides
filter_shape = (
filters.shape[2:] if filter_format == "channel_first" else filters.shape[:dims]
)
if isinstance(padding, str):
# use torch's padding in conv if strides are all 1
if len(strides) == strides.count(1):
return x, padding.lower()
filter_shape = [
filter_shape[i] + (filter_shape[i] - 1) * (dilations[i] - 1)
for i in range(dims)
]
pad_specific = [
_handle_padding(x.shape[2 + i], strides[i], filter_shape[i], padding)
for i in range(dims - 1, -1, -1)
]
pad_list_top = [pad_specific[i] // 2 for i in range(dims)]
pad_list_bot = [pad_specific[i] - pad_specific[i] // 2 for i in range(dims)]
pad_list = [None] * len(pad_list_top) * 2
pad_list[::2] = pad_list_top
pad_list[1::2] = pad_list_bot
else:
if isinstance(padding, int):
return x, padding
# if symmetric padding is used, use torch's padding in conv function
if all(pad[0] == pad[1] for pad in padding):
return x, [pad[0] for pad in padding]
pad_list = [item for sublist in padding for item in sublist[::-1]][::-1]
return torch.nn.functional.pad(x, pad_list), 0
def _new_pad_before_conv(x, padding):
if isinstance(padding, str):
return x, padding.lower()
elif isinstance(padding, int):
return x, padding
else:
# if symmetric padding is used, use torch's padding in conv function
if all(pad[0] == pad[1] for pad in padding):
return x, [pad[0] for pad in padding]
pad_list = [item for sublist in padding for item in sublist[::-1]][::-1]
return torch.nn.functional.pad(x, pad_list), "valid"
def _transpose_padding(
x_shape, filter_shape, strides, padding, dims, dilations, output_shape, data_format
):
if output_shape is not None and len(output_shape) > dims:
if data_format[-1] == "C" or data_format == "channel_last":
output_shape = output_shape[1:-1]
elif data_format[1] == "C" or data_format == "channel_first":
output_shape = output_shape[2:]
strides = [strides] * dims if isinstance(strides, int) else strides
dilations = [dilations] * dims if isinstance(dilations, int) else dilations
not_valid_pad = [False] * dims
if isinstance(padding, str):
if output_shape is None:
output_shape = [
_deconv_length(
x_shape[i], strides[i], filter_shape[i], padding, dilations[i]
)
for i in range(dims)
]
if padding == "VALID":
symmetric_padding = [0] * dims
else:
pad_specific = [
_handle_padding(
output_shape[i],
strides[i],
filter_shape[i] + (filter_shape[i] - 1) * (dilations[i] - 1),
padding,
)
for i in range(dims)
]
for i in range(dims):
if pad_specific[i] % 2 != 0:
pad_specific[i] -= 1
not_valid_pad[i] = True
symmetric_padding = [pad_specific[i] // 2 for i in range(dims)]
out_shape = [
(x_shape[i] - 1) * strides[i]
- 2 * symmetric_padding[i]
+ dilations[i] * (filter_shape[i] - 1)
+ 1
for i in range(dims)
]
output_padding = [max(output_shape[i] - out_shape[i], 0) for i in range(dims)]
else:
if isinstance(padding, int):
padding = [[padding, padding]] * dims
symmetric_padding = [max(pad) for pad in padding]
output_padding = [max(pad) - min(pad) for pad in padding]
return not_valid_pad, symmetric_padding, output_padding
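# Illustrative sketch (hypothetical values, not part of the original module):
# explicit asymmetric pads are split into a symmetric part plus output_padding.
#   _transpose_padding([8], [3], [2], [[1, 2]], 1, [1], None, "NWC")
#   # -> ([False], [2], [1])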
@with_unsupported_dtypes(
{"2.2 and below": ("float16", "bfloat16", "complex")},
backend_version,
)
# noinspection PyUnresolvedReferences
def conv1d(
x: torch.Tensor,
filters: torch.Tensor,
strides: Union[int, Tuple[int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
data_format: str = "NWC",
filter_format: str = "channel_last",
x_dilations: Union[int, Tuple[int]] = 1,
dilations: Union[int, Tuple[int]] = 1,
bias: Optional[torch.Tensor] = None,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if data_format == "NWC":
x = x.permute(0, 2, 1)
if filter_format == "channel_last":
filters = filters.permute(2, 1, 0)
x = _x_dil_before_conv(x, 1, x_dilations)
x, padding = _pad_before_conv(
x, filters, strides, padding, 1, dilations, "channel_first"
)
res = torch.nn.functional.conv1d(x, filters, bias, strides, padding, dilations)
if data_format == "NWC":
res = res.permute(0, 2, 1)
return res
@with_unsupported_dtypes(
{"2.2 and below": ("float16", "bfloat16", "complex")},
backend_version,
)
def conv1d_v_1p9p0_and_above(
x: torch.Tensor,
filters: torch.Tensor,
strides: Union[int, Tuple[int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
data_format: str = "NWC",
filter_format: str = "channel_last",
x_dilations: Union[int, Tuple[int]] = 1,
dilations: Union[int, Tuple[int]] = 1,
bias: Optional[torch.Tensor] = None,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if data_format == "NWC":
x = x.permute(0, 2, 1)
if filter_format == "channel_last":
filters = filters.permute(2, 1, 0)
x = _x_dil_before_conv(x, 1, x_dilations)
if padding != "SAME" or all(
s == 1 for s in ([strides] if isinstance(strides, int) else strides)
):
x, padding = _new_pad_before_conv(x, padding)
else:
x, padding = _pad_before_conv(
x, filters, strides, padding, 1, dilations, "channel_first"
)
res = torch.nn.functional.conv1d(x, filters, bias, strides, padding, dilations)
if data_format == "NWC":
res = res.permute(0, 2, 1)
return res
@with_unsupported_dtypes(
{
"2.2 and below": (
"float16",
"bfloat16",
"complex",
)
},
backend_version,
)
# noinspection PyUnresolvedReferences
def conv1d_transpose(
x: torch.Tensor,
filters: torch.Tensor,
strides: Union[int, Tuple[int]],
padding: str,
/,
*,
output_shape: Optional[Union[ivy.NativeShape, Sequence[int]]] = None,
filter_format: str = "channel_last",
data_format: str = "NWC",
dilations: Union[int, Tuple[int]] = 1,
bias: Optional[torch.Tensor] = None,
out: Optional[torch.Tensor] = None,
):
if data_format == "NWC":
x = x.permute(0, 2, 1)
if filter_format == "channel_last":
filters = filters.permute(2, 1, 0)
    not_valid_pad, symmetric_padding, output_padding = _transpose_padding(
x.shape[2:],
filters.shape[2:],
strides,
padding,
1,
dilations,
output_shape,
data_format,
)
res = torch.nn.functional.conv_transpose1d(
x,
filters,
bias,
strides,
symmetric_padding,
dilation=dilations,
output_padding=output_padding,
)
if not_valid_pad[0]:
res = res[:, :, 0:-1]
if data_format == "NWC":
res = res.permute(0, 2, 1)
return res
@with_unsupported_dtypes(
{"2.2 and below": ("float16", "bfloat16", "complex")},
backend_version,
)
# noinspection PyUnresolvedReferences
def conv2d(
x: torch.Tensor,
filters: torch.Tensor,
strides: Union[int, Tuple[int, int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
data_format: str = "NHWC",
filter_format: str = "channel_last",
x_dilations: Union[int, Tuple[int, int]] = 1,
dilations: Union[int, Tuple[int, int]] = 1,
bias: Optional[torch.Tensor] = None,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if data_format == "NHWC":
x = x.permute(0, 3, 1, 2)
if filter_format == "channel_last":
filters = filters.permute(3, 2, 0, 1)
x = _x_dil_before_conv(x, 2, x_dilations)
x, padding = _pad_before_conv(
x, filters, strides, padding, 2, dilations, "channel_first"
)
res = torch.nn.functional.conv2d(x, filters, bias, strides, padding, dilations)
if data_format == "NHWC":
return res.permute(0, 2, 3, 1)
return res
@with_unsupported_dtypes(
{"2.2 and below": ("float16", "bfloat16", "complex")},
backend_version,
)
def conv2d_v_1p9p0_and_above(
x: torch.Tensor,
filters: torch.Tensor,
strides: Union[int, Tuple[int, int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
data_format: str = "NHWC",
filter_format: str = "channel_last",
x_dilations: Union[int, Tuple[int, int]] = 1,
dilations: Union[int, Tuple[int, int]] = 1,
bias: Optional[torch.Tensor] = None,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if data_format == "NHWC":
x = x.permute(0, 3, 1, 2)
if filter_format == "channel_last":
filters = filters.permute(3, 2, 0, 1)
x = _x_dil_before_conv(x, 2, x_dilations)
if padding != "SAME" or all(
s == 1 for s in ([strides] if isinstance(strides, int) else strides)
):
x, padding = _new_pad_before_conv(x, padding)
else:
x, padding = _pad_before_conv(
x, filters, strides, padding, 2, dilations, "channel_first"
)
res = torch.nn.functional.conv2d(x, filters, bias, strides, padding, dilations)
if data_format == "NHWC":
return res.permute(0, 2, 3, 1)
return res
@with_unsupported_dtypes(
{
"2.2 and below": (
"float16",
"bfloat16",
"complex",
)
},
backend_version,
)
# noinspection PyUnresolvedReferences
def conv2d_transpose(
x: torch.Tensor,
filters: torch.Tensor,
strides: Union[int, Tuple[int, int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
output_shape: Optional[Union[ivy.NativeShape, Sequence[int]]] = None,
filter_format: str = "channel_last",
data_format: str = "NHWC",
dilations: Union[int, Tuple[int, int]] = 1,
bias: Optional[torch.Tensor] = None,
out: Optional[torch.Tensor] = None,
):
if data_format == "NHWC":
x = x.permute(0, 3, 1, 2)
if filter_format == "channel_last":
filters = filters.permute(3, 2, 0, 1)
    not_valid_pad, symmetric_padding, output_padding = _transpose_padding(
x.shape[2:],
filters.shape[2:],
strides,
padding,
2,
dilations,
output_shape,
data_format,
)
res = torch.nn.functional.conv_transpose2d(
x,
filters,
bias,
strides,
symmetric_padding,
dilation=dilations,
output_padding=output_padding,
)
if not_valid_pad[0]:
res = res[..., :-1, :]
if not_valid_pad[1]:
res = res[..., :-1]
if data_format == "NHWC":
res = res.permute(0, *range(2, 4), 1)
return res
@with_unsupported_dtypes(
{
"2.2 and below": (
"float16",
"bfloat16",
"complex",
)
},
backend_version,
)
# noinspection PyUnresolvedReferences
def depthwise_conv2d(
x: torch.Tensor,
filters: torch.Tensor,
strides: Union[int, Tuple[int, int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
data_format: str = "NHWC",
dilations: Union[int, Tuple[int, int]] = 1,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
strides = [strides] * 2 if isinstance(strides, int) else strides
dilations = [dilations] * 2 if isinstance(dilations, int) else dilations
if data_format == "NHWC":
x = x.permute(0, 3, 1, 2)
filters = ivy.squeeze(filters, axis=3).to_native() if filters.ndim == 4 else filters
filters = torch.unsqueeze(filters, -1)
dims_in = filters.shape[-2]
filters = filters.permute(2, 3, 0, 1)
x, padding = _pad_before_conv(
x, filters, strides, padding, 2, dilations, "channel_first"
)
# noinspection PyArgumentEqualDefault
res = torch.nn.functional.conv2d(
x, filters, None, strides, padding, dilations, dims_in
)
if data_format == "NHWC":
return res.permute(0, 2, 3, 1)
return res
@with_unsupported_dtypes(
{"2.2 and below": ("float16", "bfloat16", "complex")}, backend_version
)
# noinspection PyUnresolvedReferences
def conv3d(
x: torch.Tensor,
filters: torch.Tensor,
strides: Union[int, Tuple[int, int, int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
data_format: str = "NDHWC",
filter_format: str = "channel_last",
x_dilations: Union[int, Tuple[int, int, int]] = 1,
dilations: Union[int, Tuple[int, int, int]] = 1,
bias: Optional[torch.Tensor] = None,
out: Optional[torch.Tensor] = None,
):
if data_format == "NDHWC":
x = x.permute(0, 4, 1, 2, 3)
if filter_format == "channel_last":
filters = filters.permute(4, 3, 0, 1, 2)
x = _x_dil_before_conv(x, 3, x_dilations)
x, padding = _pad_before_conv(
x, filters, strides, padding, 3, dilations, "channel_first"
)
res = torch.nn.functional.conv3d(x, filters, bias, strides, padding, dilations)
if data_format == "NDHWC":
res = res.permute(0, 2, 3, 4, 1)
return res
@with_unsupported_dtypes(
{"2.2 and below": ("float16", "bfloat16", "complex")}, backend_version
)
def conv3d_v_1p9p0_and_above(
x: torch.Tensor,
filters: torch.Tensor,
strides: Union[int, Tuple[int, int, int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
data_format: str = "NDHWC",
filter_format: str = "channel_last",
x_dilations: Union[int, Tuple[int, int, int]] = 1,
dilations: Union[int, Tuple[int, int, int]] = 1,
bias: Optional[torch.Tensor] = None,
out: Optional[torch.Tensor] = None,
):
if data_format == "NDHWC":
x = x.permute(0, 4, 1, 2, 3)
if filter_format == "channel_last":
filters = filters.permute(4, 3, 0, 1, 2)
x = _x_dil_before_conv(x, 3, x_dilations)
if padding != "SAME" or all(
s == 1 for s in ([strides] if isinstance(strides, int) else strides)
):
x, padding = _new_pad_before_conv(x, padding)
else:
x, padding = _pad_before_conv(
x, filters, strides, padding, 3, dilations, "channel_first"
)
res = torch.nn.functional.conv3d(x, filters, bias, strides, padding, dilations)
if data_format == "NDHWC":
res = res.permute(0, 2, 3, 4, 1)
return res
@with_unsupported_dtypes(
{"2.2 and below": ("float16", "bfloat16", "complex")},
backend_version,
)
# noinspection PyUnresolvedReferences
def conv3d_transpose(
x: torch.Tensor,
filters: torch.Tensor,
strides: Union[int, Tuple[int, int, int]],
padding: str,
/,
*,
output_shape: Optional[Union[ivy.NativeShape, Sequence[int]]] = None,
filter_format: str = "channel_last",
data_format: str = "NDHWC",
dilations: Union[int, Tuple[int, int, int]] = 1,
bias: Optional[torch.Tensor] = None,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if data_format == "NDHWC":
x = x.permute(0, 4, 1, 2, 3)
if filter_format == "channel_last":
filters = filters.permute(4, 3, 0, 1, 2)
    not_valid_pad, symmetric_padding, output_padding = _transpose_padding(
x.shape[2:],
filters.shape[2:],
strides,
padding,
3,
dilations,
output_shape,
data_format,
)
res = torch.nn.functional.conv_transpose3d(
x,
filters,
bias,
strides,
symmetric_padding,
dilation=dilations,
output_padding=output_padding,
)
if not_valid_pad[0]:
res = res[:, :, 0:-1, :, :]
if not_valid_pad[1]:
res = res[:, :, :, 0:-1, :]
if not_valid_pad[2]:
res = res[:, :, :, :, 0:-1]
if data_format == "NDHWC":
res = res.permute(0, 2, 3, 4, 1)
return res
@with_unsupported_dtypes(
{"2.2 and below": ("float16", "bfloat16", "complex")},
backend_version,
)
def conv_general_dilated(
x: torch.Tensor,
filters: torch.Tensor,
strides: Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
dims: int = 2,
data_format: str = "channel_last",
filter_format: str = "channel_last",
feature_group_count: int = 1,
x_dilations: Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int]] = 1,
dilations: Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int]] = 1,
bias: Optional[torch.Tensor] = None,
out: Optional[torch.Tensor] = None,
):
# permuting dims based on formats
if data_format == "channel_last":
x = x.permute(0, dims + 1, *range(1, dims + 1))
if filter_format == "channel_last":
filters = filters.permute(-1, -2, *range(dims))
x = _x_dil_before_conv(x, dims, x_dilations)
x, padding = _pad_before_conv(
x, filters, strides, padding, dims, dilations, "channel_first"
)
if dims == 1:
res = torch.nn.functional.conv1d(
x, filters, bias, strides, padding, dilations, feature_group_count
)
elif dims == 2:
res = torch.nn.functional.conv2d(
x, filters, bias, strides, padding, dilations, feature_group_count
)
else:
res = torch.nn.functional.conv3d(
x, filters, bias, strides, padding, dilations, feature_group_count
)
if data_format == "channel_last":
return res.permute(0, *range(2, dims + 2), 1)
return res
@with_unsupported_dtypes(
{"2.2 and below": ("float16", "bfloat16", "complex")},
backend_version,
)
def conv_general_dilated_v_1p9p0_and_above(
x: torch.Tensor,
filters: torch.Tensor,
strides: Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
dims: int = 2,
data_format: str = "channel_last",
filter_format: str = "channel_last",
feature_group_count: int = 1,
x_dilations: Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int]] = 1,
dilations: Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int]] = 1,
bias: Optional[torch.Tensor] = None,
out: Optional[torch.Tensor] = None,
):
# permuting dims based on formats
if data_format == "channel_last":
x = x.permute(0, dims + 1, *range(1, dims + 1))
if filter_format == "channel_last":
filters = filters.permute(-1, -2, *range(dims))
x = _x_dil_before_conv(x, dims, x_dilations)
if padding != "SAME" or all(
s == 1 for s in ([strides] if isinstance(strides, int) else strides)
):
x, padding = _new_pad_before_conv(x, padding)
else:
x, padding = _pad_before_conv(
x, filters, strides, padding, dims, dilations, "channel_first"
)
if dims == 1:
res = torch.nn.functional.conv1d(
x, filters, bias, strides, padding, dilations, feature_group_count
)
elif dims == 2:
res = torch.nn.functional.conv2d(
x, filters, bias, strides, padding, dilations, feature_group_count
)
else:
res = torch.nn.functional.conv3d(
x, filters, bias, strides, padding, dilations, feature_group_count
)
if data_format == "channel_last":
return res.permute(0, *range(2, dims + 2), 1)
return res
@with_unsupported_dtypes(
{"2.2 and below": ("float16", "bfloat16", "complex")},
backend_version,
)
def conv_general_transpose(
x: torch.Tensor,
filters: torch.Tensor,
strides: Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int]],
padding: str,
/,
*,
dims: int = 2,
output_shape: Optional[Union[ivy.NativeShape, Sequence[int]]] = None,
filter_format: str = "channel_last",
data_format: str = "channel_first",
dilations: Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int]] = 1,
feature_group_count: int = 1,
bias: Optional[torch.Tensor] = None,
out: Optional[torch.Tensor] = None,
):
if data_format == "channel_last":
x = x.permute(0, dims + 1, *range(1, dims + 1))
if filter_format == "channel_last":
filters = filters.permute(dims + 1, dims, *range(dims))
    not_valid_pad, symmetric_padding, output_padding = _transpose_padding(
x.shape[2:],
filters.shape[2:],
strides,
padding,
dims,
dilations,
output_shape,
data_format,
)
if dims == 1:
res = torch.nn.functional.conv_transpose1d(
x,
filters,
bias,
strides,
symmetric_padding,
dilation=dilations,
output_padding=output_padding,
groups=feature_group_count,
)
if not_valid_pad[0]:
res = res[:, :, :-1]
elif dims == 2:
res = torch.nn.functional.conv_transpose2d(
x,
filters,
bias,
strides,
symmetric_padding,
dilation=dilations,
output_padding=output_padding,
groups=feature_group_count,
)
if not_valid_pad[0]:
res = res[..., :-1, :]
if not_valid_pad[1]:
res = res[..., :-1]
else:
res = torch.nn.functional.conv_transpose3d(
x,
filters,
bias,
strides,
symmetric_padding,
dilation=dilations,
output_padding=output_padding,
groups=feature_group_count,
)
if not_valid_pad[0]:
res = res[..., :-1, :, :]
if not_valid_pad[1]:
res = res[..., :, :-1, :]
if not_valid_pad[2]:
res = res[..., :, :, :-1]
if data_format == "channel_last":
res = res.permute(0, *range(2, dims + 2), 1)
return res
def scaled_dot_product_attention_v_2p0p0_and_above(
q,
k,
v,
scale: float,
/,
*,
mask=None,
out=None,
):
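    # a boolean / 0-1 mask is converted to an additive float mask below:
    # masked (0) positions become -inf so they vanish under softmax, while
    # kept positions contribute 0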
if isinstance(mask, torch.Tensor):
mask = torch.where(mask == 0, -torch.inf, 0)
return torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask)
def lstm(
input: torch.Tensor,
initial_states: Tuple[torch.Tensor],
all_weights: Tuple[torch.Tensor],
num_layers: int,
dropout: float,
train: bool,
bidirectional: bool,
batch_first: bool = False,
batch_sizes: Sequence = None,
weights_transposed: bool = False,
has_ih_bias: bool = True,
has_hh_bias: bool = True,
):
if weights_transposed:
# transpose the weights if they are in the wrong format
all_weights = [
torch.transpose(weight, 1, 0).contiguous() if weight.dim() == 2 else weight
for weight in all_weights
]
else:
all_weights = list(all_weights)
if (has_ih_bias and not has_hh_bias) or (has_hh_bias and not has_ih_bias):
# insert zero biases into the weights where one set of biases is not
# used, to avoid stride errors in lstm
shapes = []
for i in range(2, len(all_weights), 3):
shapes.append(tuple(all_weights[i].shape))
for i, shape in enumerate(shapes):
idx = (i + 1) * 4 - (1 if has_ih_bias else 2)
all_weights.insert(idx, torch.zeros(shape))
has_ih_bias = True
has_hh_bias = True
    # initial_states may arrive as a tuple, which does not support item
    # assignment, so convert it to a list before adding the num_layers axis
    initial_states = list(initial_states)
    if initial_states[0].dim() == 2:
        initial_states[0] = ivy.expand_dims(initial_states[0])
    if initial_states[1].dim() == 2:
        initial_states[1] = ivy.expand_dims(initial_states[1])
ret = torch.lstm(
input,
initial_states,
all_weights,
has_ih_bias,
num_layers,
dropout,
train,
bidirectional,
batch_first,
)
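    # torch.lstm returns (output, h_n, c_n); expose the last time step, the
    # full output sequence, and the final state tuple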
return ret[0][:, -1], ret[0], (ret[1], ret[2])
| ivy/ivy/functional/backends/torch/layers.py/0 | {
"file_path": "ivy/ivy/functional/backends/torch/layers.py",
"repo_id": "ivy",
"token_count": 13829
} | 24 |
import importlib
versions = {
"torch": "2.2",
"tensorflow": "2.15.0",
"numpy": "1.25.2",
"jax": "0.4.24",
"scipy": "1.10.1",
"paddle": "2.6.0",
"sklearn": "1.3.0",
"xgboost": "1.7.6",
"torchvision": "0.15.2.",
"mindspore": "2.0.0",
}
def fn_name_from_version_specific_fn_name(name, version):
"""
Parameters
----------
name
the version specific name of the function for which the version support is to be
provided.
    version
        the version of the current framework for which the support is to be
        provided. For frontend version support, the version is inferred by
        importing the framework; if the import fails, it defaults to the
        version pinned in the versions dict defined above.
Returns
-------
the name of the original function which will then point to the version specific
function
"""
version = str(version)
if version.find("+") != -1:
version = tuple(map(int, version[: version.index("+")].split(".")))
# version = int(version[: version.index("+")].replace(".", ""))
else:
version = tuple(map(int, version.split(".")))
# version = int(version.replace(".", ""))
if "_to_" in name:
i = name.index("_v_")
e = name.index("_to_")
version_start = name[i + 3 : e]
version_start = tuple(map(int, version_start.split("p")))
version_end = name[e + 4 :]
version_end = tuple(map(int, version_end.split("p")))
if version_start <= version <= version_end:
return name[0:i]
elif "_and_above" in name:
i = name.index("_v_")
e = name.index("_and_")
version_start = name[i + 3 : e]
version_start = tuple(map(int, version_start.split("p")))
if version >= version_start:
return name[0:i]
else:
i = name.index("_v_")
e = name.index("_and_")
version_start = name[i + 3 : e]
version_start = tuple(map(int, version_start.split("p")))
if version <= version_start:
return name[0:i]
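# Illustrative mapping (hypothetical names, not part of the original module):
#   fn_name_from_version_specific_fn_name("conv1d_v_1p9p0_and_above", "2.2.0")
#   # -> "conv1d"  (2.2.0 >= 1.9.0)
#   fn_name_from_version_specific_fn_name("foo_v_1p0p0_to_1p5p0", "2.0.0")
#   # -> None      (2.0.0 lies outside [1.0.0, 1.5.0])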
def set_frontend_to_specific_version(frontend):
"""
Parameters
----------
frontend
the frontend module for which we provide the version support
Returns
-------
The function doesn't return anything and updates the frontend __dict__
to make the original function name to point to the version specific one
"""
f = str(frontend.__name__)
f = f[f.index("frontends") + 10 :]
str_f = str(f)
try:
f = importlib.import_module(f)
f_version = f.__version__
except (ImportError, AttributeError):
f_version = versions[str_f]
for i in list(frontend.__dict__):
if "_v_" in i:
orig_name = fn_name_from_version_specific_fn_name(i, f_version)
if orig_name:
frontend.__dict__[orig_name] = frontend.__dict__[i]
return f_version
| ivy/ivy/functional/frontends/__init__.py/0 | {
"file_path": "ivy/ivy/functional/frontends/__init__.py",
"repo_id": "ivy",
"token_count": 1294
} | 25 |
# global
from typing import Any
import itertools
import string
import builtins
# local
import ivy
from ivy.func_wrapper import with_supported_dtypes
from ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back
from ivy.func_wrapper import with_unsupported_dtypes, frontend_outputs_to_ivy_arrays
_slice = builtins.slice
# --- Helpers --- #
# --------------- #
def _argsort_tuple(the_tuple):
return tuple(i for i, _ in sorted(enumerate(the_tuple), key=lambda x: x[1]))
def _conv_transpose_padding(k, s, padding):
if padding == "SAME":
pad_len = k + s - 2
if s > k - 1:
pad_a = k - 1
else:
pad_a = int(ivy.to_scalar(ivy.ceil(pad_len / 2)))
elif padding == "VALID":
pad_len = k + s - 2 + ivy.to_scalar(ivy.maximum(k - s, 0))
pad_a = k - 1
else:
raise ValueError("Padding mode must be `SAME` or `VALID`.")
pad_b = pad_len - pad_a
return pad_a, pad_b
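# Illustrative values (hypothetical, not part of the original module) for a
# kernel of size 3 with stride 2:
#   _conv_transpose_padding(3, 2, "SAME")   # -> (2, 1)
#   _conv_transpose_padding(3, 2, "VALID")  # -> (2, 2)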
def _dimension_numbers(dimension_numbers, lhs_len, transp=False):
if dimension_numbers is None:
if transp:
iota = (0, lhs_len - 1, *range(1, lhs_len - 1))
iotb = (lhs_len - 1, lhs_len - 2, *range(0, lhs_len - 2))
return iota, iotb, iota
else:
iota = tuple(range(lhs_len))
return iota, iota, iota
elif isinstance(dimension_numbers[0], (tuple, list)):
return dimension_numbers
else:
lhs_spec, rhs_spec, out_spec = dimension_numbers
def getperm(spec, charpair):
spatial = (i for i, c in enumerate(spec) if c not in charpair)
if spec is not rhs_spec:
spatial = sorted(spatial, key=lambda i: rhs_spec.index(spec[i]))
return (spec.index(charpair[0]), spec.index(charpair[1])) + tuple(spatial)
charpairs = ("N", "C"), ("O", "I"), ("N", "C")
lhs_spec, rhs_spec, out_spec = map(getperm, dimension_numbers, charpairs)
return lhs_spec, rhs_spec, out_spec
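# As an illustration of the permutations produced above, the common 2D spec
# ("NHWC", "HWIO", "NHWC") resolves to
#   ((0, 3, 1, 2), (3, 2, 0, 1), (0, 3, 1, 2))
# i.e. index maps that bring the operands into NCHW / OIHW layouts.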
# --- Main --- #
# ------------ #
@to_ivy_arrays_and_back
def abs(x):
return ivy.abs(x)
@to_ivy_arrays_and_back
def acos(x):
return ivy.acos(x)
@to_ivy_arrays_and_back
def add(x, y):
return ivy.add(x, y)
@to_ivy_arrays_and_back
def argmax(operand, axis, index_dtype):
return ivy.astype(ivy.argmax(operand, axis=axis), index_dtype)
@to_ivy_arrays_and_back
def argmin(operand, axis, index_dtype):
return ivy.astype(ivy.argmin(operand, axis=axis), index_dtype)
@to_ivy_arrays_and_back
def asin(x):
return ivy.asin(x)
@to_ivy_arrays_and_back
def asinh(x):
return ivy.asinh(x)
@to_ivy_arrays_and_back
def atan(x):
return ivy.atan(x)
@to_ivy_arrays_and_back
def atan2(x, y):
return ivy.atan2(x, y)
@to_ivy_arrays_and_back
def atanh(x):
return ivy.atanh(x)
@to_ivy_arrays_and_back
def batch_matmul(lhs, rhs, precision=None):
if lhs.ndim < 2 or rhs.ndim < 2:
raise ValueError(
f"Arguments to batch_matmul must be at least 2D, got {lhs.ndim}, {rhs.ndim}"
)
if lhs.ndim != rhs.ndim:
raise ValueError(
f"Arguments to batch_matmul must have same ndim, got {lhs.ndim}, {rhs.ndim}"
)
return ivy.matmul(lhs, rhs).astype(lhs.dtype)
@to_ivy_arrays_and_back
def bitwise_and(x, y):
return ivy.bitwise_and(x, y)
@to_ivy_arrays_and_back
def bitwise_not(x):
return ivy.bitwise_invert(x)
@to_ivy_arrays_and_back
def bitwise_or(x, y):
return ivy.bitwise_or(x, y)
@to_ivy_arrays_and_back
def bitwise_xor(x, y):
return ivy.bitwise_xor(x, y)
@to_ivy_arrays_and_back
def broadcast(operand, sizes):
ret = ivy.zeros(tuple(sizes) + tuple(ivy.shape(operand)), dtype=ivy.dtype(operand))
return ret + operand
@with_supported_dtypes(
{
"0.4.24 and below": (
"float16",
"float32",
"float64",
)
},
"jax",
)
@to_ivy_arrays_and_back
def cbrt(x):
return ivy.pow(x, 1 / 3)
@to_ivy_arrays_and_back
def ceil(x):
return ivy.ceil(x)
@to_ivy_arrays_and_back
def clamp(min, x, max):
return ivy.clip(x, min, max)
@to_ivy_arrays_and_back
def complex(x, y):
return ivy.complex(x, y)
@to_ivy_arrays_and_back
def concatenate(operands, dimension):
return ivy.concat(operands, axis=dimension)
@to_ivy_arrays_and_back
def conj(x):
return ivy.conj(x)
@to_ivy_arrays_and_back
def conv(
lhs, rhs, window_strides, padding, precision=None, preferred_element_type=None
):
if preferred_element_type:
lhs = ivy.astype(lhs, preferred_element_type)
rhs = ivy.astype(rhs, preferred_element_type)
dims = len(lhs.shape) - 2
return ivy.conv_general_dilated(
lhs,
rhs,
window_strides,
padding,
dims=dims,
data_format="channel_first",
filter_format="channel_first",
)
@to_ivy_arrays_and_back
def conv_general_dilated(
lhs,
rhs,
window_strides,
padding,
lhs_dilation=None,
rhs_dilation=None,
dimension_numbers=None,
feature_group_count=1,
batch_group_count=1,
precision=None,
preferred_element_type=None,
):
# TODO: add support for batch_group_count
if preferred_element_type:
lhs = ivy.astype(lhs, preferred_element_type)
rhs = ivy.astype(rhs, preferred_element_type)
dims = len(lhs.shape) - 2
dim_nums = _dimension_numbers(dimension_numbers, dims + 2)
rhs_spec = tuple(dim_nums[1][i] for i in (*range(2, dims + 2), 1, 0))
return ivy.permute_dims(
ivy.conv_general_dilated(
ivy.permute_dims(lhs, axes=dim_nums[0]),
ivy.permute_dims(rhs, axes=rhs_spec),
window_strides,
padding,
dims=dims,
data_format="channel_first",
x_dilations=1 if lhs_dilation is None else lhs_dilation,
dilations=1 if rhs_dilation is None else rhs_dilation,
feature_group_count=feature_group_count,
),
axes=_argsort_tuple(dim_nums[2]),
)
@to_ivy_arrays_and_back
def conv_transpose(
lhs,
rhs,
strides,
padding,
rhs_dilation=None,
dimension_numbers=None,
transpose_kernel=False,
precision=None,
preferred_element_type=None,
):
# TODO: add support for transpose_kernel
if preferred_element_type:
lhs = ivy.astype(lhs, preferred_element_type)
rhs = ivy.astype(rhs, preferred_element_type)
dims = len(lhs.shape) - 2
dim_nums = _dimension_numbers(dimension_numbers, dims + 2, transp=True)
rhs_spec = tuple(dim_nums[1][i] for i in (*range(2, dims + 2), 1, 0))
rhs_dilation = 1 if rhs_dilation is None else rhs_dilation
if isinstance(padding, str):
k_sdims = [rhs.shape[i] for i in rhs_spec[:-2]]
effective_k_size = map(lambda k, r: (k - 1) * r + 1, k_sdims, rhs_dilation)
padding = [
_conv_transpose_padding(k, s, padding)
for k, s in zip(effective_k_size, strides)
]
return ivy.permute_dims(
ivy.conv_general_dilated(
ivy.permute_dims(lhs, axes=dim_nums[0]),
ivy.permute_dims(rhs, axes=rhs_spec),
1,
padding,
dilations=rhs_dilation,
x_dilations=strides,
dims=dims,
data_format="channel_first",
),
axes=_argsort_tuple(dim_nums[2]),
)
@to_ivy_arrays_and_back
def convert_element_type(operand, new_dtype):
return ivy.astype(operand, new_dtype, copy=False)
@to_ivy_arrays_and_back
def cos(x):
return ivy.cos(x)
@to_ivy_arrays_and_back
def cosh(x):
return ivy.cosh(x)
@with_unsupported_dtypes(
{"0.4.24 and below": ("bfloat16", "float16", "bool", "complex64", "complex128")},
"jax",
)
@to_ivy_arrays_and_back
def cummin(operand, axis=0, reverse=False):
return ivy.cummin(operand, axis=axis, reverse=reverse, dtype=operand.dtype)
@to_ivy_arrays_and_back
def cumprod(operand, axis=None, reverse=False):
dtype = ivy.dtype(operand)
return ivy.cumprod(operand, axis=axis, reverse=reverse).astype(dtype)
@to_ivy_arrays_and_back
def cumsum(operand, axis=None, reverse=False):
if reverse:
return ivy.flip(ivy.cumsum(ivy.flip(operand), axis=axis, dtype=operand.dtype))
return ivy.cumsum(operand, axis=axis, dtype=operand.dtype)
@to_ivy_arrays_and_back
def div(x, y):
return ivy.astype(ivy.divide(x, y), x.dtype)
@to_ivy_arrays_and_back
def dot(lhs, rhs, precision=None, preferred_element_type=None):
ret = ivy.matmul(lhs, rhs)
if preferred_element_type:
ret = ivy.astype(ret, preferred_element_type, copy=False)
return ret
@with_unsupported_dtypes({"0.4.5 and below": ("bool",)}, "jax")
@to_ivy_arrays_and_back
def dot_general(
lhs, rhs, dimension_numbers, precision=None, preferred_element_type=None
):
(lhs_contracting, rhs_contracting), (lhs_batch, rhs_batch) = dimension_numbers
ivy.utils.assertions.check_less(
len(lhs.shape),
52,
"number of dimensions greater than 52 is not supported",
as_array=False,
)
new_id = itertools.count()
lhs_axis_ids = [next(new_id) for _ in lhs.shape]
rhs_axis_ids = [next(new_id) for _ in rhs.shape]
lhs_out_axis_ids = lhs_axis_ids[:]
rhs_out_axis_ids = rhs_axis_ids[:]
for lhs_axis, rhs_axis in zip(lhs_contracting, rhs_contracting):
shared_id = next(new_id)
lhs_axis_ids[lhs_axis] = shared_id
rhs_axis_ids[rhs_axis] = shared_id
lhs_out_axis_ids[lhs_axis] = None
rhs_out_axis_ids[rhs_axis] = None
batch_ids = []
for lhs_axis, rhs_axis in zip(lhs_batch, rhs_batch):
shared_id = next(new_id)
lhs_axis_ids[lhs_axis] = shared_id
rhs_axis_ids[rhs_axis] = shared_id
lhs_out_axis_ids[lhs_axis] = None
rhs_out_axis_ids[rhs_axis] = None
batch_ids.append(shared_id)
out_axis_ids = list(
filter(lambda x: x is not None, batch_ids + lhs_out_axis_ids + rhs_out_axis_ids)
)
char_list = [*string.ascii_letters]
lhs_axis_ids = "".join(str(char_list[i]) for i in lhs_axis_ids)
rhs_axis_ids = "".join(str(char_list[i]) for i in rhs_axis_ids)
out_axis_ids = "".join(str(char_list[i]) for i in out_axis_ids)
equ_str = f"{lhs_axis_ids},{rhs_axis_ids}->{out_axis_ids}"
ret = ivy.einsum(equ_str, lhs, rhs)
if preferred_element_type:
ret = ivy.astype(ret, preferred_element_type, copy=False)
return ret
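# A sketch of the dimension_numbers layout consumed above: a pair of pairs,
# ((lhs_contracting, rhs_contracting), (lhs_batch, rhs_batch)). For 2D inputs,
#   dot_general(lhs, rhs, (((1,), (0,)), ((), ())))
# contracts lhs axis 1 with rhs axis 0 and is equivalent to a matmul.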
@to_ivy_arrays_and_back
def eq(x, y):
return ivy.equal(x, y)
@to_ivy_arrays_and_back
def erf(x):
return ivy.erf(x)
@with_supported_dtypes(
{
"0.4.24 and below": (
"float16",
"float32",
"float64",
)
},
"jax",
)
@to_ivy_arrays_and_back
def erfc(x):
    # complementary error function: erfc(x) = 1 - erf(x)
    return 1.0 - ivy.erf(x)
@to_ivy_arrays_and_back
def exp(x):
return ivy.exp(x)
@to_ivy_arrays_and_back
def expand_dims(array, dimensions):
return ivy.expand_dims(array, axis=dimensions)
@to_ivy_arrays_and_back
def expm1(x):
return ivy.expm1(x)
@to_ivy_arrays_and_back
def full(shape, fill_value, dtype=None):
return ivy.full(shape, fill_value, dtype=dtype)
@to_ivy_arrays_and_back
def full_like(x, fill_value, dtype=None, shape=None):
    if shape is None:
        return ivy.full_like(x, fill_value, dtype=dtype)
    # preserve the input dtype when an explicit one is not requested
    return ivy.full(shape, fill_value, dtype=x.dtype if dtype is None else dtype)
@with_unsupported_dtypes({"0.4.5 and below": ("complex",)}, "jax")
@to_ivy_arrays_and_back
def ge(x, y):
return ivy.greater_equal(x, y)
@with_unsupported_dtypes({"0.4.5 and below": ("complex",)}, "jax")
@to_ivy_arrays_and_back
def gt(x, y):
return ivy.greater(x, y)
@to_ivy_arrays_and_back
def igamma(a, x):
return ivy.igamma(a, x=x)
@to_ivy_arrays_and_back
def imag(x):
return ivy.imag(x)
@with_unsupported_dtypes(
{"0.4.24 and below": ("bool", "bfloat16")},
"jax",
)
@to_ivy_arrays_and_back
def iota(dtype, size):
return ivy.arange(0, size, dtype=dtype)
@to_ivy_arrays_and_back
def is_finite(x):
return ivy.isfinite(x)
@with_unsupported_dtypes({"0.4.5 and below": ("complex",)}, "jax")
@to_ivy_arrays_and_back
def le(x, y):
return ivy.less_equal(x, y)
@to_ivy_arrays_and_back
def log(x):
return ivy.log(x)
@to_ivy_arrays_and_back
def log1p(x):
return ivy.log1p(x)
@to_ivy_arrays_and_back
def lt(x, y):
return ivy.less(x, y)
@to_ivy_arrays_and_back
def max(x: Any, y: Any):
return ivy.maximum(x, y)
@to_ivy_arrays_and_back
def min(x, y):
return ivy.minimum(x, y)
@to_ivy_arrays_and_back
def mul(x, y):
return ivy.multiply(x, y)
@to_ivy_arrays_and_back
def ne(x, y):
return ivy.not_equal(x, y)
@to_ivy_arrays_and_back
def neg(x):
return ivy.negative(x)
@to_ivy_arrays_and_back
def nextafter(x1, x2):
return ivy.nextafter(x1, x2)
@to_ivy_arrays_and_back
def pad(operand, padding_value, padding_config):
return ivy.pad(
operand, padding_config, mode="dilated", constant_values=padding_value
)
@to_ivy_arrays_and_back
def pow(x, y):
return ivy.pow(x, y)
@to_ivy_arrays_and_back
def real(x):
return ivy.real(x)
@to_ivy_arrays_and_back
def reciprocal(x):
return ivy.reciprocal(x)
@to_ivy_arrays_and_back
def reduce_window(
operand,
init_value,
computation,
window_dimensions,
window_strides,
padding,
base_dilation=None,
window_dilation=None,
):
computation = frontend_outputs_to_ivy_arrays(computation)
return ivy.reduce_window(
operand,
init_value,
computation,
window_dimensions,
window_strides=window_strides,
padding=padding,
base_dilation=base_dilation,
window_dilation=window_dilation,
)
@to_ivy_arrays_and_back
def rem(x, y):
return ivy.remainder(ivy.abs(x), ivy.abs(y)) * ivy.sign(x)
@to_ivy_arrays_and_back
def reshape(operand, new_sizes, dimensions=None):
if dimensions:
operand = ivy.permute_dims(operand, dimensions)
return ivy.reshape(operand, new_sizes)
@to_ivy_arrays_and_back
def rev(operand, dimensions):
return ivy.flip(operand, axis=dimensions)
@to_ivy_arrays_and_back
def round(x, rounding_method=1):
    # rounding_method=0 rounds halves away from zero; rounding_method=1
    # rounds halves to the nearest even integer (the JAX default)
    if rounding_method == 0:
        ret = ivy.where(
            ivy.less(x, 0),
            ivy.floor(x),
            ivy.ceil(x),
        )
    elif rounding_method == 1:
        ret = ivy.ceil(x)
        ret = ivy.where(ivy.remainder(ret, 2) == 0, ret, ret - 1)
    # ``ret`` is only used for exact halves; all other values round normally
    return ivy.where(ivy.abs(x - ivy.floor(x) - 0.5) < 1e-7, ret, ivy.round(x))
@to_ivy_arrays_and_back
def rsqrt(x):
return ivy.reciprocal(ivy.sqrt(x))
@to_ivy_arrays_and_back
def select(pred, on_true, on_false):
return ivy.where(pred, on_true, on_false)
@to_ivy_arrays_and_back
def shift_left(x, y):
return ivy.bitwise_left_shift(x, y)
@to_ivy_arrays_and_back
def shift_right_logical(x, y):
return ivy.bitwise_right_shift(x, y)
@to_ivy_arrays_and_back
def sign(x):
return ivy.sign(x, np_variant=False)
@to_ivy_arrays_and_back
def sin(x):
return ivy.sin(x)
@to_ivy_arrays_and_back
def sinh(x):
return ivy.sinh(x)
@to_ivy_arrays_and_back
def slice(operand, start_indices, limit_indices, strides=None):
strides = [1] * len(operand.shape) if strides is None else strides
full_slice = ()
for i, _ in enumerate(operand.shape):
strides_i = int(strides[i])
start_i = int(start_indices[i])
limit_i = int(limit_indices[i])
full_slice += (_slice(start_i, limit_i, strides_i),)
return operand[full_slice]
@to_ivy_arrays_and_back
def slice_in_dim(operand, start_index, limit_index, stride=1, axis=0):
start_indices = [0] * operand.ndim
limit_indices = list(operand.shape)
strides = [1] * operand.ndim
len_axis = operand.shape[axis]
start_index_int = start_index if start_index is not None else 0
limit_index_int = limit_index if limit_index is not None else len_axis
if start_index_int < 0:
start_index_int = start_index_int + len_axis
if limit_index_int < 0:
limit_index_int = limit_index_int + len_axis
axis = int(axis)
start_indices[axis] = start_index_int
limit_indices[axis] = limit_index_int
strides[axis] = int(stride)
return slice(operand, start_indices, limit_indices, strides)
@to_ivy_arrays_and_back
def sort(operand, dimension=-1, is_stable=True, num_keys=1):
return ivy.sort(operand, axis=dimension, stable=is_stable)
@to_ivy_arrays_and_back
def sqrt(x):
return ivy.sqrt(x)
@to_ivy_arrays_and_back
def square(x):
return ivy.square(x)
@to_ivy_arrays_and_back
def squeeze(array, dimensions):
return ivy.squeeze(array, axis=dimensions)
@to_ivy_arrays_and_back
def sub(x, y):
return ivy.subtract(x, y)
@to_ivy_arrays_and_back
def tan(x):
return ivy.tan(x)
@to_ivy_arrays_and_back
def tie_in(x, y):
return y
# top_k
@to_ivy_arrays_and_back
def top_k(operand, k):
values, indices = ivy.top_k(operand, k, axis=-1)
indices = ivy.astype(indices, ivy.int32, copy=False)
    return values, indices
@to_ivy_arrays_and_back
def transpose(operand, permutation):
return ivy.permute_dims(operand, permutation)
| ivy/ivy/functional/frontends/jax/lax/operators.py/0 | {
"file_path": "ivy/ivy/functional/frontends/jax/lax/operators.py",
"repo_id": "ivy",
"token_count": 8336
} | 26 |
# global
import operator
# local
import ivy
from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
from ivy.functional.frontends.jax.func_wrapper import (
to_ivy_arrays_and_back,
handle_jax_dtype,
)
# --- Helpers --- #
# --------------- #
def _get_seed(key):
if "PRNGKeyArray" in repr(key):
key = key._base_array
key1, key2 = int(key[0]), int(key[1])
return ivy.to_scalar(int("".join(map(str, [key1, key2]))))
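# e.g. a key holding the words [0, 42] yields the scalar seed 42, while
# [1, 42] yields 142: the two words are concatenated as decimal digits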
def _remove_axis(shape, axis):
return shape[:axis] + shape[axis + 1 :]
# --- Main --- #
# ------------ #
@to_ivy_arrays_and_back
def PRNGKey(seed):
return ivy.array([0, seed % 4294967295 - (seed // 4294967295)], dtype=ivy.int64)
@handle_jax_dtype
@to_ivy_arrays_and_back
@with_supported_dtypes(
{
"0.4.24 and below": (
"float32",
"float64",
)
},
"jax",
)
def ball(key, d, p=2.0, shape=(), dtype="float64"):
seed = _get_seed(key)
d = operator.index(d)
g = ivy.gamma(1 / p, 1.0, shape=shape, dtype=dtype, seed=seed)
b = ivy.bernoulli(ivy.array([0.5]), shape=shape, dtype=dtype, seed=seed)
r = 2 * b - 1
gn = r * g ** (1 / p)
uniform = ivy.random_uniform(seed=seed, shape=shape, dtype=dtype)
exp = -ivy.log(1 - uniform)
return gn / (((ivy.abs(gn) ** p).sum(axis=-1) + exp) ** (1 / p))[..., None]
@to_ivy_arrays_and_back
def bernoulli(key, p=0.5, shape=None):
seed = _get_seed(key)
return ivy.bernoulli(p, shape=shape, seed=seed)
@handle_jax_dtype
@to_ivy_arrays_and_back
@with_unsupported_dtypes(
{
"0.4.24 and below": (
"float16",
"bfloat16",
)
},
"jax",
)
def beta(key, a, b, shape=None, dtype=None):
seed = _get_seed(key)
return ivy.beta(a, b, shape=shape, dtype=dtype, seed=seed)
@to_ivy_arrays_and_back
@with_unsupported_dtypes(
{
"0.4.24 and below": (
"float16",
"bfloat16",
)
},
"jax",
)
def categorical(key, logits, axis, shape=None):
logits_arr = ivy.asarray(logits)
if axis >= 0:
axis -= len(logits_arr.shape)
batch_shape = tuple(_remove_axis(logits_arr.shape, axis))
if shape is None:
shape = batch_shape
else:
shape = tuple(shape)
if shape != batch_shape:
            raise ValueError(
                f"Shape {shape} is not compatible with reference shape {batch_shape}"
            )
logits_shape = list(shape[len(shape) - len(batch_shape) :])
logits_shape.insert(axis % len(logits_arr.shape), logits_arr.shape[axis])
    gumbel_noise = gumbel(key, tuple(logits_shape), logits_arr.dtype)
expanded_logits = ivy.expand_dims(logits_arr, axis=axis)
noisy_logits = gumbel_noise + expanded_logits
# Use Ivy's argmax to get indices
indices = ivy.argmax(noisy_logits, axis=axis)
return indices
@handle_jax_dtype
@to_ivy_arrays_and_back
def cauchy(key, shape=(), dtype="float64"):
seed = _get_seed(key)
u = ivy.random_uniform(low=0.0, high=1.0, shape=shape, dtype=dtype, seed=seed)
return ivy.tan(ivy.pi * (u - 0.5))
@handle_jax_dtype
@to_ivy_arrays_and_back
@with_unsupported_dtypes(
{
"0.4.24 and below": (
"float16",
"bfloat16",
)
},
"jax",
)
def dirichlet(key, alpha, shape=None, dtype="float32"):
seed = _get_seed(key)
alpha = ivy.astype(alpha, dtype)
return ivy.dirichlet(alpha, size=shape, dtype=dtype, seed=seed)
@handle_jax_dtype
@to_ivy_arrays_and_back
@with_unsupported_dtypes(
{"0.4.24 and below": "uint32"},
"jax",
)
def double_sided_maxwell(key, loc, scale, shape=(), dtype="float64"):
    params_shapes = ivy.broadcast_shapes(ivy.shape(loc), ivy.shape(scale))
    # the result shape is the requested shape followed by the broadcast
    # parameter shape, so an empty ``shape`` yields exactly ``params_shapes``
    shape = tuple(shape) + tuple(params_shapes)
maxwell_rvs = maxwell(key, shape=shape, dtype=dtype)
random_sign = rademacher(key, shape=shape, dtype=dtype)
return random_sign * maxwell_rvs * scale + loc
@handle_jax_dtype
@to_ivy_arrays_and_back
@with_unsupported_dtypes(
{
"0.4.24 and below": (
"float16",
"bfloat16",
)
},
"jax",
)
def exponential(key, shape=(), dtype="float64"):
seed = _get_seed(key)
uniform = ivy.random_uniform(seed=seed, shape=shape, dtype=dtype)
exp = -ivy.log(1 - uniform)
return exp
@to_ivy_arrays_and_back
def fold_in(key, data):
if "PRNGKeyArray" in repr(key):
key = key._base_array
s = ivy.bitwise_left_shift(
ivy.asarray(data, dtype=ivy.uint32), ivy.array(32, dtype=ivy.uint32)
)
return ivy.bitwise_xor(key, s)
@handle_jax_dtype
@to_ivy_arrays_and_back
@with_unsupported_dtypes(
{
"0.4.24 and below": (
"float16",
"bfloat16",
)
},
"jax",
)
def gamma(key, a, shape=None, dtype="float64"):
seed = _get_seed(key)
return ivy.gamma(a, 1.0, shape=shape, dtype=dtype, seed=seed)
@handle_jax_dtype
@to_ivy_arrays_and_back
@with_unsupported_dtypes(
{
"0.4.24 and below": (
"float16",
"bfloat16",
)
},
"jax",
)
def generalized_normal(key, p, shape=(), dtype="float64"):
seed = _get_seed(key)
g = ivy.gamma(1 / p, 1.0, shape=shape, dtype=dtype, seed=seed)
b = ivy.bernoulli(ivy.array([0.5]), shape=shape, dtype=dtype, seed=seed)
r = 2 * b - 1
return r * g ** (1 / p)
@handle_jax_dtype
@to_ivy_arrays_and_back
@with_unsupported_dtypes(
{
"0.4.24 and below": (
"float16",
"bfloat16",
)
},
"jax",
)
def gumbel(key, shape=(), dtype="float64"):
seed = _get_seed(key)
uniform_x = ivy.random_uniform(
low=0.0,
high=1.0,
shape=shape,
dtype=dtype,
seed=seed,
)
return -ivy.log(-ivy.log(uniform_x))
# loggamma
@handle_jax_dtype
@to_ivy_arrays_and_back
@with_unsupported_dtypes(
{
"0.4.24 and below": (
"float16",
"bfloat16",
)
},
"jax",
)
def loggamma(key, a, shape=None, dtype="float64"):
seed = _get_seed(key)
return ivy.log(ivy.gamma(a, 1.0, shape=shape, dtype=dtype, seed=seed))
@handle_jax_dtype
@to_ivy_arrays_and_back
@with_unsupported_dtypes(
{"0.4.24 and below": ("float16", "bfloat16")},
"jax",
)
def logistic(key, shape=(), dtype="float64"):
seed = _get_seed(key)
uniform_x = ivy.random_uniform(seed=seed, shape=shape, dtype=dtype)
return ivy.log(ivy.divide(uniform_x, ivy.subtract(1.0, uniform_x)))
@handle_jax_dtype
@to_ivy_arrays_and_back
@with_unsupported_dtypes(
{
"0.3.14 and below": (
"float16",
"bfloat16",
)
},
"jax",
)
def maxwell(key, shape, dtype="float64"):
seed = _get_seed(key)
shape = shape + (3,)
random_normal = ivy.random_normal(seed=seed, shape=shape, dtype=dtype)
return ivy.vector_norm(random_normal, axis=-1)
@handle_jax_dtype
@to_ivy_arrays_and_back
@with_unsupported_dtypes(
{
"0.4.24 and below": (
"float16",
"bfloat16",
)
},
"jax",
)
def multivariate_normal(key, mean, cov, shape=None, dtype="float64", method="cholesky"):
if shape is None:
shape = ivy.broadcast_shapes(mean.shape[:-1], cov.shape[:-2])
if method == "cholesky":
cov_factor = ivy.cholesky(cov)
elif method == "eigh":
(w, v) = ivy.eigh(cov)
cov_factor = v * ivy.sqrt(w[..., None, :])
elif method == "svd":
(u, s, _) = ivy.svd(cov)
cov_factor = u * ivy.sqrt(s[..., None, :])
rand_normal = normal(key=key, shape=shape + mean.shape[-1:], dtype=dtype)
result = mean + ivy.einsum("...ij,...j->...i", cov_factor, rand_normal.ivy_array)
return result
@handle_jax_dtype
@to_ivy_arrays_and_back
def normal(key, shape=(), dtype=None):
seed = _get_seed(key)
return ivy.random_normal(shape=shape, dtype=dtype, seed=seed)
@handle_jax_dtype
@to_ivy_arrays_and_back
def orthogonal(key, n, shape=(), dtype=None):
seed = _get_seed(key)
flat_shape = (n, n)
if shape:
flat_shape = shape + flat_shape
# Generate a random matrix with the given shape and dtype
random_matrix = ivy.random_uniform(seed=seed, shape=flat_shape, dtype=dtype)
# Compute the QR decomposition of the random matrix
q, _ = ivy.linalg.qr(random_matrix)
# Reshape the resulting orthogonal matrix to the desired shape
if shape:
q = ivy.reshape(q, shape + (n, n))
return q
@handle_jax_dtype
@to_ivy_arrays_and_back
@with_unsupported_dtypes(
{
"0.4.24 and below": (
"float16",
"bfloat16",
)
},
"jax",
)
def pareto(key, b, shape=None, dtype="float64"):
seed = _get_seed(key)
if shape is None:
shape = b.shape
# Draw samples from exponential distribution
uniform = ivy.random_uniform(seed=seed, shape=shape, dtype=dtype)
e = -ivy.log(1 - uniform)
return ivy.exp(e / b)
@to_ivy_arrays_and_back
def permutation(key, x, axis=0, independent=False):
x = ivy.array(x)
seed = _get_seed(key)
if not ivy.get_num_dims(x):
r = int(x)
return ivy.shuffle(ivy.arange(r), axis, seed=seed)
if independent:
return ivy.shuffle(x, axis, seed=seed)
rand = ivy.arange(x.shape[axis])
ind = ivy.shuffle(rand, 0, seed=seed)
return ivy.gather(x, ind, axis=axis)
@handle_jax_dtype
@to_ivy_arrays_and_back
@with_unsupported_dtypes(
{"0.4.24 and below": ("unsigned", "int8", "int16")},
"jax",
)
def poisson(key, lam, shape=None, dtype=None):
seed = _get_seed(key)
return ivy.poisson(lam, shape=shape, dtype=dtype, seed=seed, fill_value=-1)
@handle_jax_dtype
@to_ivy_arrays_and_back
@with_unsupported_dtypes(
{"0.4.24 and below": ("unsigned", "int8", "int16")},
"jax",
)
def rademacher(key, shape, dtype="int64"):
seed = _get_seed(key)
prob = ivy.full(shape, 0.5, dtype="float32")
b = ivy.bernoulli(prob, shape=shape, dtype="float32", seed=seed)
b = ivy.astype(b, dtype)
return 2 * b - 1
@handle_jax_dtype
@to_ivy_arrays_and_back
@with_unsupported_dtypes(
{"0.4.24 and below": ("unsigned", "int8", "int16")},
"jax",
)
def randint(key, shape, minval, maxval, dtype="int64"):
seed = _get_seed(key)
return ivy.randint(minval, maxval, shape=shape, dtype=dtype, seed=seed)
@to_ivy_arrays_and_back
def shuffle(key, x, axis=0):
    seed = _get_seed(key)
    # permute the entries along the requested axis
    return ivy.shuffle(x, axis, seed=seed)
@handle_jax_dtype
@to_ivy_arrays_and_back
def t(key, df, shape=(), dtype="float64"):
seed = _get_seed(key)
n = ivy.random_normal(shape=shape, dtype=dtype, seed=seed)
half_df = df / 2.0
g = ivy.gamma(half_df, 1.0, shape=shape, dtype=dtype, seed=seed)
    # t = Z * sqrt(half_df / G) with G ~ Gamma(df / 2, 1), i.e. the standard
    # normal-over-chi-squared construction of the Student-t distribution
    return n * ivy.sqrt(ivy.divide(half_df, g))
@handle_jax_dtype
@to_ivy_arrays_and_back
def uniform(key, shape=(), dtype=None, minval=0.0, maxval=1.0):
seed = _get_seed(key)
return ivy.random_uniform(
low=minval, high=maxval, shape=shape, dtype=dtype, seed=seed
)
@handle_jax_dtype
@to_ivy_arrays_and_back
def weibull_min(key, scale, concentration, shape=(), dtype="float64"):
seed = _get_seed(key)
    uniform_x = ivy.random_uniform(seed=seed, shape=shape, dtype=dtype)
    # inverse-CDF sampling for the Weibull minimum extreme value distribution:
    # X = scale * (-log(1 - U)) ** (1 / concentration)
    return scale * (-ivy.log(1 - uniform_x)) ** (1 / concentration)
| ivy/ivy/functional/frontends/jax/random.py/0 | {
"file_path": "ivy/ivy/functional/frontends/jax/random.py",
"repo_id": "ivy",
"token_count": 5530
} | 27 |
# global
# local
import ivy
import ivy.functional.frontends.mxnet as mxnet_frontend
class ndarray:
def __init__(self, array):
self._ivy_array = (
ivy.array(array) if not isinstance(array, ivy.Array) else array
)
def __repr__(self):
return str(self.ivy_array.__repr__()).replace(
"ivy.array", "ivy.frontends.mxnet.numpy.array"
)
# Properties #
# ---------- #
@property
def ivy_array(self):
return self._ivy_array
@property
def dtype(self):
return self.ivy_array.dtype
@property
def shape(self):
return self.ivy_array.shape
# Instance Methods #
# ---------------- #
def __add__(self, other):
return mxnet_frontend.numpy.add(self, other)
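# Usage sketch: ndarray([1, 2]) + ndarray([3, 4]) dispatches to
# mxnet_frontend.numpy.add, which handles wrapping the result as a
# frontend ndarray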
| ivy/ivy/functional/frontends/mxnet/numpy/ndarray.py/0 | {
"file_path": "ivy/ivy/functional/frontends/mxnet/numpy/ndarray.py",
"repo_id": "ivy",
"token_count": 356
} | 28 |
# local
import ivy
def finfo(dtype):
return ivy.finfo(dtype)
def iinfo(dtype):
return ivy.iinfo(dtype)
| ivy/ivy/functional/frontends/numpy/data_type_routines/data_type_information.py/0 | {
"file_path": "ivy/ivy/functional/frontends/numpy/data_type_routines/data_type_information.py",
"repo_id": "ivy",
"token_count": 53
} | 29 |
# global
import ivy
from ivy.functional.frontends.numpy import promote_types_of_numpy_inputs
from ivy import with_unsupported_dtypes
from ivy.functional.frontends.numpy.func_wrapper import (
to_ivy_arrays_and_back,
handle_numpy_casting,
handle_numpy_dtype,
from_zero_dim_arrays_to_scalar,
handle_numpy_out,
)
# --- Helpers --- #
# --------------- #
@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
@handle_numpy_casting
@from_zero_dim_arrays_to_scalar
def _matmul(
x1, x2, /, out=None, *, casting="same_kind", order="K", dtype=None, subok=True
):
return ivy.matmul(x1, x2, out=out)
# --- Main --- #
# ------------ #
@to_ivy_arrays_and_back
def cross(a, b, *, axisa=-1, axisb=-1, axisc=-1, axis=None):
return ivy.cross(a, b, axisa=axisa, axisb=axisb, axisc=axisc, axis=axis)
@handle_numpy_out
@to_ivy_arrays_and_back
def dot(a, b, out=None):
a, b = promote_types_of_numpy_inputs(a, b)
return ivy.matmul(a, b, out=out)
@handle_numpy_out
@to_ivy_arrays_and_back
def einsum(
subscripts,
*operands,
out=None,
dtype=None,
order="K",
casting="safe",
optimize=False,
):
return ivy.einsum(subscripts, *operands, out=out)
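# e.g. a plain matrix product can be written as einsum("ij,jk->ik", a, b);
# note that only ``subscripts``, ``operands`` and ``out`` are forwarded to
# ivy, the remaining NumPy-compatible keywords are accepted for signature
# parity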
@to_ivy_arrays_and_back
@from_zero_dim_arrays_to_scalar
def inner(a, b, /):
a, b = promote_types_of_numpy_inputs(a, b)
return ivy.inner(a, b)
@to_ivy_arrays_and_back
def kron(a, b):
a, b = promote_types_of_numpy_inputs(a, b)
return ivy.kron(a, b)
@to_ivy_arrays_and_back
def matrix_power(a, n):
return ivy.matrix_power(a, n)
@with_unsupported_dtypes({"2.0.0 and below": ("float16",)}, "torch")
@handle_numpy_out
@to_ivy_arrays_and_back
def multi_dot(arrays, *, out=None):
return ivy.multi_dot(arrays, out=out)
@handle_numpy_out
@to_ivy_arrays_and_back
def outer(a, b, out=None):
a, b = promote_types_of_numpy_inputs(a, b)
return ivy.outer(a, b, out=out)
@to_ivy_arrays_and_back
def tensordot(a, b, axes=2):
return ivy.tensordot(a, b, axes=axes)
@to_ivy_arrays_and_back
def tensorsolve(a, b, axes=None):
    return ivy.tensorsolve(a, b, axes=axes)
| ivy/ivy/functional/frontends/numpy/linalg/matrix_and_vector_products.py/0 | {
"file_path": "ivy/ivy/functional/frontends/numpy/linalg/matrix_and_vector_products.py",
"repo_id": "ivy",
"token_count": 989
} | 30 |
# global
import ivy
from ivy.functional.frontends.numpy.func_wrapper import (
to_ivy_arrays_and_back,
handle_numpy_casting,
handle_numpy_dtype,
from_zero_dim_arrays_to_scalar,
handle_numpy_out,
)
# --- Helpers --- #
# --------------- #
@handle_numpy_out
@to_ivy_arrays_and_back
@from_zero_dim_arrays_to_scalar
def _fmax(
x1,
x2,
/,
out=None,
*,
where=True,
casting="same_kind",
order="K",
dtype=None,
subok=True,
):
ret = ivy.fmax(x1, x2, out=out)
if ivy.is_array(where):
ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)
return ret
@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
@handle_numpy_casting
@from_zero_dim_arrays_to_scalar
def _fmin(
x1,
x2,
/,
out=None,
*,
where=True,
casting="same_kind",
order="K",
dtype=None,
subok=True,
):
ret = ivy.fmin(x1, x2, out=out)
if ivy.is_array(where):
ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)
return ret
@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
@handle_numpy_casting
@from_zero_dim_arrays_to_scalar
def _maximum(
x1,
x2,
/,
out=None,
*,
where=True,
casting="same_kind",
order="K",
dtype=None,
subok=True,
):
ret = ivy.maximum(x1, x2, out=out)
if ivy.is_array(where):
ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)
return ret
@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
@handle_numpy_casting
@from_zero_dim_arrays_to_scalar
def _minimum(
x1,
x2,
/,
out=None,
*,
where=True,
casting="same_kind",
order="K",
dtype=None,
subok=True,
):
ret = ivy.minimum(x1, x2, out=out)
if ivy.is_array(where):
ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)
return ret
# --- Main --- #
# ------------ #
@handle_numpy_out
@to_ivy_arrays_and_back
@from_zero_dim_arrays_to_scalar
def amax(
a,
/,
*,
axis=None,
out=None,
keepdims=False,
initial=None,
where=True,
):
out_dtype = ivy.dtype(a)
where_mask = None
if initial is not None:
if ivy.is_array(where):
a = ivy.where(where, a, a.full_like(initial))
where_mask = ivy.all(ivy.logical_not(where), axis=axis, keepdims=keepdims)
s = ivy.shape(a, as_array=True)
if axis is not None:
if isinstance(axis, (tuple, list)) or ivy.is_array(axis):
# introducing the initial in one dimension is enough
ax = axis[0] % len(s)
s[ax] = 1
else:
ax = axis % len(s)
s[ax] = 1
header = ivy.full(ivy.Shape(s.to_list()), initial, dtype=ivy.dtype(a))
if axis:
if isinstance(axis, (tuple, list)) or ivy.is_array(axis):
a = ivy.concat([a, header], axis=axis[0])
else:
a = ivy.concat([a, header], axis=axis)
else:
a = ivy.concat([a, header], axis=0)
res = ivy.max(a, axis=axis, keepdims=keepdims, out=out)
if where_mask is not None and ivy.any(where_mask):
res = ivy.where(ivy.logical_not(where_mask), res, initial, out=out)
return ivy.astype(res, out_dtype, out=out, copy=False)
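# e.g. with a = np.array([1, 5, 3]) and where = np.array([True, False, True]),
# amax(a, initial=4, where=where) masks out the 5, pads the reduction with
# the initial value and returns 4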
@handle_numpy_out
@to_ivy_arrays_and_back
@from_zero_dim_arrays_to_scalar
def amin(
a,
/,
*,
axis=None,
out=None,
keepdims=False,
initial=None,
where=True,
):
out_dtype = ivy.dtype(a)
where_mask = None
if initial is not None:
if ivy.is_array(where):
a = ivy.where(where, a, a.full_like(initial))
where_mask = ivy.all(ivy.logical_not(where), axis=axis, keepdims=keepdims)
s = ivy.shape(a, as_array=True)
if axis is not None:
if isinstance(axis, (tuple, list)) or ivy.is_array(axis):
# introducing the initial in one dimension is enough
ax = axis[0] % len(s)
s[ax] = 1
else:
ax = axis % len(s)
s[ax] = 1
header = ivy.full(ivy.Shape(s.to_list()), initial, dtype=ivy.dtype(a))
if axis:
if isinstance(axis, (tuple, list)) or ivy.is_array(axis):
a = ivy.concat([a, header], axis=axis[0])
else:
a = ivy.concat([a, header], axis=axis)
else:
a = ivy.concat([a, header], axis=0)
res = ivy.min(a, axis=axis, keepdims=keepdims, out=out)
if where_mask is not None and ivy.any(where_mask):
res = ivy.where(ivy.logical_not(where_mask), res, initial, out=out)
return ivy.astype(res, out_dtype, out=out, copy=False)
@handle_numpy_out
@to_ivy_arrays_and_back
@from_zero_dim_arrays_to_scalar
def max(
a,
/,
*,
axis=None,
out=None,
keepdims=False,
initial=None,
where=True,
):
return amax(a, axis=axis, out=out, keepdims=keepdims, initial=initial, where=where)
@handle_numpy_out
@to_ivy_arrays_and_back
@from_zero_dim_arrays_to_scalar
def min(
a,
/,
*,
axis=None,
out=None,
keepdims=False,
initial=None,
where=True,
):
return amin(a, axis=axis, out=out, keepdims=keepdims, initial=initial, where=where)
@handle_numpy_out
@to_ivy_arrays_and_back
@from_zero_dim_arrays_to_scalar
def nanmax(
a,
axis=None,
out=None,
keepdims=False,
initial=None,
where=True,
):
out_dtype = ivy.dtype(a)
nan_mask = ivy.isnan(a)
a = ivy.where(ivy.logical_not(nan_mask), a, a.full_like(-ivy.inf))
where_mask = None
if initial is not None:
if ivy.is_array(where):
a = ivy.where(where, a, a.full_like(initial))
where_mask = ivy.all(ivy.logical_not(where), axis=axis, keepdims=keepdims)
s = ivy.shape(a, as_array=True)
if axis is not None:
if isinstance(axis, (tuple, list)) or ivy.is_array(axis):
# introducing the initial in one dimension is enough
ax = axis[0] % len(s)
s[ax] = 1
else:
ax = axis % len(s)
s[ax] = 1
header = ivy.full(ivy.Shape(s.to_list()), initial, dtype=ivy.dtype(a))
if axis:
if isinstance(axis, (tuple, list)) or ivy.is_array(axis):
a = ivy.concat([a, header], axis=axis[0])
else:
a = ivy.concat([a, header], axis=axis)
else:
a = ivy.concat([a, header], axis=0)
res = ivy.max(a, axis=axis, keepdims=keepdims, out=out)
if nan_mask is not None:
nan_mask = ivy.all(nan_mask, axis=axis, keepdims=keepdims, out=out)
if ivy.any(nan_mask):
res = ivy.where(
ivy.logical_not(nan_mask),
res,
initial if initial is not None else ivy.nan,
out=out,
)
if where_mask is not None and ivy.any(where_mask):
res = ivy.where(ivy.logical_not(where_mask), res, ivy.nan, out=out)
return ivy.astype(res, out_dtype, out=out, copy=False)
@handle_numpy_out
@to_ivy_arrays_and_back
@from_zero_dim_arrays_to_scalar
def nanmin(
a,
axis=None,
out=None,
keepdims=False,
initial=None,
where=True,
):
out_dtype = ivy.dtype(a)
nan_mask = ivy.isnan(a)
a = ivy.where(ivy.logical_not(nan_mask), a, a.full_like(+ivy.inf))
where_mask = None
if initial is not None:
if ivy.is_array(where):
a = ivy.where(where, a, a.full_like(initial))
where_mask = ivy.all(ivy.logical_not(where), axis=axis, keepdims=keepdims)
s = ivy.shape(a, as_array=True)
if axis is not None:
if isinstance(axis, (tuple, list)) or ivy.is_array(axis):
# introducing the initial in one dimension is enough
ax = axis[0] % len(s)
s[ax] = 1
else:
ax = axis % len(s)
s[ax] = 1
header = ivy.full(ivy.Shape(s.to_list()), initial, dtype=ivy.dtype(a))
if axis:
if isinstance(axis, (tuple, list)) or ivy.is_array(axis):
a = ivy.concat([a, header], axis=axis[0])
else:
a = ivy.concat([a, header], axis=axis)
else:
a = ivy.concat([a, header], axis=0)
res = ivy.min(a, axis=axis, keepdims=keepdims, out=out)
if nan_mask is not None:
nan_mask = ivy.all(nan_mask, axis=axis, keepdims=keepdims, out=out)
if ivy.any(nan_mask):
res = ivy.where(
ivy.logical_not(nan_mask),
res,
initial if initial is not None else ivy.nan,
out=out,
)
if where_mask is not None and ivy.any(where_mask):
res = ivy.where(ivy.logical_not(where_mask), res, ivy.nan, out=out)
return ivy.astype(res, out_dtype, out=out, copy=False)
| ivy/ivy/functional/frontends/numpy/mathematical_functions/extrema_finding.py/0 | {
"file_path": "ivy/ivy/functional/frontends/numpy/mathematical_functions/extrema_finding.py",
"repo_id": "ivy",
"token_count": 4749
} | 31 |
from .Generator import *
| ivy/ivy/functional/frontends/numpy/random/Generator/__init__.py/0 | {
"file_path": "ivy/ivy/functional/frontends/numpy/random/Generator/__init__.py",
"repo_id": "ivy",
"token_count": 7
} | 32 |
# global
import inspect
from math import inf
# local
import ivy.functional.frontends.numpy as np_frontend
identities = {
"abs": None,
"absolute": None,
"add": 0,
"arccos": None,
"arccosh": None,
"arcsin": None,
"arcsinh": None,
"arctan": None,
"arctan2": None,
"arctanh": None,
"bitwise_and": -1,
"bitwise_not": None,
"bitwise_or": 0,
"bitwise_xor": 0,
"cbrt": None,
"ceil": None,
"conj": None,
"conjugate": None,
"copysign": None,
"cos": None,
"cosh": None,
"deg2rad": None,
"degrees": None,
"divide": None,
"divmod": None,
"equal": None,
"exp": None,
"exp2": None,
"expm1": None,
"fabs": None,
"float_power": None,
"floor": None,
"floor_divide": None,
"fmax": None,
"fmin": None,
"fmod": None,
"frexp": None,
"gcd": 0,
"greater": None,
"greater_equal": None,
"heaviside": None,
"hypot": 0,
"invert": None,
"isfinite": None,
"isinf": None,
"isnan": None,
"isnat": None,
"lcm": None,
"ldexp": None,
"left_shift": None,
"less": None,
"less_equal": None,
"log": None,
"log10": None,
"log1p": None,
"log2": None,
"logaddexp": -inf,
"logaddexp2": -inf,
"logical_and": True,
"logical_not": None,
"logical_or": False,
"logical_xor": False,
"matmul": None,
"maximum": None,
"minimum": None,
"mod": None,
"modf": None,
"multiply": 1,
"negative": None,
"nextafter": None,
"not_equal": None,
"positive": None,
"power": None,
"rad2deg": None,
"radians": None,
"reciprocal": None,
"remainder": None,
"right_shift": None,
"rint": None,
"sign": None,
"signbit": None,
"sin": None,
"sinh": None,
"spacing": None,
"sqrt": None,
"square": None,
"subtract": None,
"tan": None,
"tanh": None,
"true_divide": None,
"trunc": None,
}
# constants #
# --------#
ufuncs = [
"abs",
"absolute",
"add",
"arccos",
"arccosh",
"arcsin",
"arcsinh",
"arctan",
"arctan2",
"arctanh",
"bitwise_and",
"bitwise_not",
"bitwise_or",
"bitwise_xor",
"cbrt",
"ceil",
"conj",
"conjugate",
"copysign",
"cos",
"cosh",
"deg2rad",
"degrees",
"divide",
"divmod",
"equal",
"exp",
"exp2",
"expm1",
"fabs",
"float_power",
"floor",
"floor_divide",
"fmax",
"fmin",
"fmod",
"frexp",
"gcd",
"greater",
"greater_equal",
"heaviside",
"hypot",
"invert",
"invert",
"isfinite",
"isinf",
"isnan",
"isnat",
"lcm",
"ldexp",
"left_shift",
"less",
"less_equal",
"log",
"log10",
"log1p",
"log2",
"logaddexp",
"logaddexp2",
"logical_and",
"logical_not",
"logical_or",
"logical_xor",
"matmul",
"maximum",
"minimum",
"mod",
"modf",
"multiply",
"negative",
"nextafter",
"not_equal",
"positive",
"power",
"rad2deg",
"radians",
"reciprocal",
"remainder",
"right_shift",
"rint",
"sign",
"signbit",
"sin",
"sinh",
"spacing",
"sqrt",
"square",
"subtract",
"tan",
"tanh",
"true_divide",
"trunc",
]
# Class #
# ----- #
class ufunc:
def __init__(self, name) -> None:
self.__frontend_name__ = name
# removing first underscore to get original ufunc name
self.__name__ = name[1:]
# getting the function from the frontend
self.func = getattr(np_frontend, self.__frontend_name__)
# properties #
# ------------#
@property
def nargs(self):
sig = inspect.signature(self.func)
return len(
[
param
for param in sig.parameters.values()
if param.kind in [param.POSITIONAL_ONLY, param.POSITIONAL_OR_KEYWORD]
]
)
@property
def nin(self):
sig = inspect.signature(self.func)
return len(
[
param
for param in sig.parameters.values()
if param.kind == param.POSITIONAL_ONLY
]
)
@property
def nout(self):
return self.nargs - self.nin
@property
def ntypes(self):
pass
@property
def signature(self):
pass
@property
def types(self):
pass
@property
def identity(self):
return identities[self.__name__]
# Methods #
# ---------#
def __call__(self, *args, **kwargs):
return self.func(*args, **kwargs)
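    # Usage sketch (assuming the wrapped frontend function exists):
    #   add = ufunc("_add")
    #   add(x1, x2)    # forwards to ivy.functional.frontends.numpy._add
    #   add.identity   # -> 0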
    def reduce(
        self, array, axis=0, dtype=None, out=None, keepdims=False, initial=None,
        where=True
    ):
        pass
    def accumulate(self, array, axis=0, dtype=None, out=None):
        pass
    def reduceat(self, array, indices, axis=0, dtype=None, out=None):
        pass
    def outer(self, A, B, /, **kwargs):
        pass
    def at(self, a, indices, b=None, /):
        pass
| ivy/ivy/functional/frontends/numpy/ufunc/methods.py/0 | {
"file_path": "ivy/ivy/functional/frontends/numpy/ufunc/methods.py",
"repo_id": "ivy",
"token_count": 2598
} | 33 |
# global
import ivy
from ivy.func_wrapper import (
with_unsupported_dtypes,
with_supported_dtypes,
with_supported_device_and_dtypes,
)
from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def abs(x, name=None):
return ivy.abs(x)
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def acos(x, name=None):
return ivy.acos(x)
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def acosh(x, name=None):
return ivy.acosh(x)
@with_unsupported_dtypes(
{"2.6.0 and below": ("bool", "unsigned", "int8", "float16", "bfloat16")}, "paddle"
)
@to_ivy_arrays_and_back
def add(x, y, name=None):
return ivy.add(x, y)
@with_unsupported_dtypes(
{"2.6.0 and below": ("bool", "unsigned", "int8", "float16", "bfloat16")}, "paddle"
)
@to_ivy_arrays_and_back
def add_(x, y, name=None):
return ivy.inplace_update(x, add(x, y))
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def addmm(input, x, y, beta=1.0, alpha=1.0, name=None):
value = alpha * ivy.matmul(x, y) + (beta * input)
return value
@with_supported_dtypes({"2.5.0 and below": "bool"}, "paddle")
@to_ivy_arrays_and_back
def all(x, axis, keepdim=False, name=None):
return ivy.all(x, axis=axis, keepdims=keepdim)
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def amax(x, axis=None, keepdim=False, name=None):
    if axis is None:
        return ivy.max(x)
    if isinstance(axis, int):
        axis = [axis]
    for i in range(len(axis)):
        if axis[i] < 0:
            axis[i] += x.ndim
    for i in axis:
        if i < 0 or i >= x.ndim:
            raise ValueError(f"axis {i} is out of range [-{x.ndim}, {x.ndim})")
    return ivy.max(x, axis=axis, keepdims=keepdim)
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def amin(x, axis=None, keepdim=False, name=None):
return ivy.min(x, axis=axis, keepdims=keepdim)
@with_supported_dtypes(
{"2.6.0 and below": ("complex64", "complex128", "float32", "float64")},
"paddle",
)
@to_ivy_arrays_and_back
def angle(x, name=None):
return ivy.angle(x)
@with_supported_dtypes({"2.5.0 and below": "bool"}, "paddle")
@to_ivy_arrays_and_back
def any(x, axis=None, keepdim=False, name=None):
return ivy.any(x, axis=axis, keepdims=keepdim)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def asin(x, name=None):
return ivy.asin(x)
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def asinh(x, name=None):
return ivy.asinh(x)
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def atan(x, name=None):
return ivy.atan(x)
@with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def atan2(x, y, name=None):
return ivy.atan2(x, y)
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def atanh(x, name=None):
return ivy.atanh(x)
@with_supported_dtypes({"2.6.0 and below": ("int32", "int64")}, "paddle")
@to_ivy_arrays_and_back
def broadcast_shape(x_shape, y_shape):
return ivy.broadcast_shapes(x_shape, y_shape)
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def ceil(x, name=None):
return ivy.ceil(x)
@with_unsupported_dtypes({"2.4.2 and below": ("int16", "float16")}, "paddle")
@to_ivy_arrays_and_back
def conj(x, name=None):
return ivy.conj(x)
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def cos(x, name=None):
return ivy.cos(x)
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def cosh(x, name=None):
return ivy.cosh(x)
@with_supported_dtypes(
{"2.6.0 and below": ("int32", "int64", "float16", "float32", "float64", "bool")},
"paddle",
)
@to_ivy_arrays_and_back
def count_nonzero(x, axis=None, keepdim=False, name=None):
return ivy.astype(ivy.count_nonzero(x, axis=axis, keepdims=keepdim), ivy.int64)
@with_supported_dtypes(
{
"2.6.0 and below": (
"int32",
"int64",
"float32",
"float64",
"complex64",
"complex128",
)
},
"paddle",
)
@to_ivy_arrays_and_back
def cumprod(x, dim=None, dtype=None, name=None):
return ivy.cumprod(x, axis=dim, dtype=dtype)
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def cumsum(x, axis=None, dtype=None, name=None):
return ivy.cumsum(x, axis=axis, dtype=dtype)
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def deg2rad(x, name=None):
return ivy.deg2rad(x)
@with_supported_dtypes(
{
"2.6.0 and below": (
"int32",
"int64",
"float64",
"complex128",
"float32",
"complex64",
"bool",
)
},
"paddle",
)
@to_ivy_arrays_and_back
def diagonal(x, offset=0, axis1=0, axis2=1, name=None):
return ivy.diagonal(x, offset=offset, axis1=axis1, axis2=axis2)
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def diff(x, n=1, axis=-1, prepend=None, append=None, name=None):
return ivy.diff(x, n=n, axis=axis, prepend=prepend, append=append)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def digamma(x, name=None):
digamma_fun = ivy.digamma
return ivy.array(digamma_fun(x), dtype=x.dtype)
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def divide(x, y, name=None):
return ivy.divide(x, y)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def erf(x, name=None):
return ivy.erf(x)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def exp(x, name=None):
return ivy.exp(x)
@with_supported_dtypes({"2.6.0 and below": ("float16", "float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def expm1(x, name=None):
return ivy.expm1(x)
@with_supported_dtypes(
{"2.6.0 and below": ("bfloat16", "float32", "float64")}, "paddle"
)
@to_ivy_arrays_and_back
def floor(x, name=None):
return ivy.floor(x)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def floor_divide(x, y, name=None):
return ivy.floor_divide(x, y)
@with_supported_device_and_dtypes(
{
"2.6.0 and below": {
"cpu": ("float32", "float64", "int32", "int64"),
"gpu": ("float16", "float32", "float64", "int32", "int64"),
}
},
"paddle",
)
@to_ivy_arrays_and_back
def floor_mod(x, y, name=None):
return ivy.remainder(x, y)
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def fmax(x, y, name=None):
return ivy.fmax(x, y)
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def fmin(x, y, name=None):
return ivy.fmin(x, y)
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def frac(x, name=None):
y = ivy.trunc(x)
return ivy.subtract(x, y)
@with_supported_dtypes({"2.6.0 and below": ("int32", "int64")}, "paddle")
@to_ivy_arrays_and_back
def gcd(x, y, name=None):
return ivy.gcd(x, y)
@with_supported_dtypes(
{"2.6.0 and below": ("float16", "float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def heaviside(x, y, name=None):
return ivy.heaviside(x, y)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def inner(x, y, name=None):
result = ivy.inner(x, y)
if (x.shape == () and y.shape == (1,)) or (x.shape == (1,) and y.shape == ()):
result = result.reshape((1,))
elif x.shape == (1,) and y.shape == (1,):
result = result.reshape((1,))
return result
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def inverse(x, name=None):
return ivy.inv(x)
@with_supported_dtypes(
{"2.6.0 and below": ("float16", "float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def isfinite(x, name=None):
return ivy.isfinite(x)
@with_supported_dtypes(
{"2.6.0 and below": ("float16", "float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def isinf(x, name=None):
return ivy.isinf(x)
@with_supported_dtypes(
{"2.6.0 and below": ("float16", "float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def isnan(x, name=None):
return ivy.isnan(x)
@with_supported_dtypes(
{"2.6.0 and below": ("float16", "float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def kron(x, y, name=None):
return ivy.kron(x, y)
@with_supported_dtypes({"2.6.0 and below": ("int32", "int64")}, "paddle")
@to_ivy_arrays_and_back
def lcm(x, y, name=None):
return ivy.lcm(x, y)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def lerp(x, y, weight, name=None):
return ivy.lerp(x, y, weight)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def lgamma(x, name=None):
return ivy.lgamma(x)
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def log(x, name=None):
return ivy.log(x)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def log10(x, name=None):
return ivy.log10(x)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def log1p(x, name=None):
return ivy.log1p(x)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def log2(x, name=None):
return ivy.log2(x)
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def logit(x, eps=None, name=None):
return ivy.logit(x, eps=eps)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def logsumexp(x, axis=None, y=None):
x = ivy.asarray(x)
if y is not None:
y = ivy.asarray(y)
x = ivy.where(y != 0, x, -ivy.inf)
if axis is None:
amax = ivy.max(x)
expsub = ivy.exp(x - amax)
sumexp = ivy.sum(expsub)
out = ivy.log(sumexp) + amax
else:
amax = ivy.max(x, axis=axis, keepdims=True)
expsub = ivy.exp(x - amax)
sumexp = ivy.sum(expsub, axis=axis, keepdims=True)
out = ivy.log(sumexp) + amax
if y is not None:
sign = ivy.stop_gradient(ivy.sign(sumexp))
out = ivy.where(sign < 0, ivy.nan, out)
return out
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def max(x, axis=None, keepdim=False, name=None):
return ivy.max(x, axis=axis, keepdims=keepdim)
# maximum
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def maximum(x, y, name=None):
return ivy.maximum(x, y)
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def min(x, axis=None, keepdim=False, name=None):
return ivy.min(x, axis=axis, keepdims=keepdim)
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def minimum(x, y, name=None):
return ivy.minimum(x, y)
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def mm(input, mat2, name=None):
return ivy.matmul(input, mat2)
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def multiply(x, y, name=None):
return ivy.multiply(x, y)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def nanmean(x, axis=None, keepdim=False, name=None):
    return ivy.nanmean(x, axis=axis, keepdims=keepdim)
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def nansum(x, axis=None, dtype=None, name=None):
return ivy.nansum(x, axis=axis, dtype=dtype)
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int8", "int16", "int32", "int64")},
"paddle",
)
@to_ivy_arrays_and_back
def neg(x, name=None):
return ivy.negative(x)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def outer(x, y, name=None):
return ivy.outer(x, y)
@with_supported_dtypes(
{"2.6.0 and below": ("float16", "float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def pow(x, y, name=None):
return ivy.pow(x, y)
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def prod(x, axis=None, keepdim=False, dtype=None, name=None):
return ivy.prod(x, axis=axis, keepdims=keepdim, dtype=dtype)
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def rad2deg(x, name=None):
return ivy.rad2deg(x)
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def reciprocal(x, name=None):
return ivy.reciprocal(x)
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def remainder(x, y, name=None):
return ivy.remainder(x, y)
@with_supported_device_and_dtypes(
{
"2.6.0 and below": {
"cpu": ("float32", "float64"),
"gpu": ("float16", "float32", "float64"),
}
},
"paddle",
)
@to_ivy_arrays_and_back
def remainder_(x, y, name=None):
return ivy.inplace_update(x, remainder(x, y))
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def round(x, name=None):
sign = ivy.sign(x)
x = sign * ivy.floor(ivy.abs(x) + 0.5)
return x
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def rsqrt(x, name=None):
return 1 / ivy.sqrt(x)
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def sgn(x, name=None):
return ivy.sign(x, np_variant=True)
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def sign(x, name=None):
return ivy.sign(x, np_variant=False)
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def sin(x, name=None):
return ivy.sin(x)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def sinh(x, name=None):
return ivy.sinh(x)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def sqrt(x, name=None):
return ivy.sqrt(x)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def square(x, name=None):
return ivy.square(x)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def stanh(x, scale_a=0.67, scale_b=1.7159, name=None):
    # TODO: this function computes scale_b * tanh(scale_a * x) and can be
    # simplified once ivy.stanh(x, a, b) is added
exp_ax = ivy.exp(ivy.multiply(scale_a, x))
exp_minus_ax = ivy.exp(ivy.multiply(-scale_a, x))
numerator = ivy.subtract(exp_ax, exp_minus_ax)
denominator = ivy.add(exp_ax, exp_minus_ax)
ret = ivy.multiply(scale_b, ivy.divide(numerator, denominator))
return ret
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def subtract(x, y, name=None):
return ivy.subtract(x, y)
@with_supported_dtypes(
{
"2.6.0 and below": (
"float64",
"int64",
)
},
"paddle",
)
@to_ivy_arrays_and_back
def sum(x, axis=None, dtype=None, keepdim=False, name=None):
return ivy.sum(
x,
axis=axis,
keepdims=keepdim,
dtype=dtype,
)
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int6")}, "paddle"
)
@to_ivy_arrays_and_back
def take(
x,
index,
mode="raise",
name=None,
):
if mode not in ["raise", "wrap", "clip"]:
raise ValueError(
f"'mode' in 'take' should be 'raise', 'wrap', 'clip', but received {mode}."
)
x = ivy.reshape(x, (-1,))
if mode == "clip":
index = ivy.clip(index, 0, x.shape[-1] - 1)
elif mode == "wrap":
index = ivy.where(index < 0, index % x.shape[-1], index)
index = ivy.where(index >= x.shape[-1], index % x.shape[-1], index)
return ivy.gather(x, index, axis=0)
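# e.g. with a flattened x of length 4, mode="wrap" maps index -1 to 3 and
# index 5 to 1, while mode="clip" maps them to 0 and 3 respectively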
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def tan(x, name=None):
return ivy.tan(x)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def tanh(x, name=None):
return ivy.tanh(x)
@with_supported_dtypes(
{"2.6.0 and below": ("int32", "int64", "float32", "float64")}, "paddle"
)
@to_ivy_arrays_and_back
def trace(x, offset=0, axis1=0, axis2=1, name=None):
return ivy.trace(x, offset=offset, axis1=axis1, axis2=axis2)
@with_supported_dtypes(
{"2.4.2 and below": ("float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def trunc(x, name=None):
return ivy.trunc(x)
mod = remainder
| ivy/ivy/functional/frontends/paddle/math.py/0 | {
"file_path": "ivy/ivy/functional/frontends/paddle/math.py",
"repo_id": "ivy",
"token_count": 8739
} | 34 |
from . import attribute
from .attribute import *
from . import creation
from .creation import *
from . import linalg
from .linalg import *
from . import logic
from .logic import *
from . import manipulation
from .manipulation import *
from . import math
from .math import *
from . import random
from .random import *
from . import search
from .search import *
from . import stat
from .stat import *
from . import tensor
from .tensor import Tensor
| ivy/ivy/functional/frontends/paddle/tensor/__init__.py/0 | {
"file_path": "ivy/ivy/functional/frontends/paddle/tensor/__init__.py",
"repo_id": "ivy",
"token_count": 120
} | 35 |
import ivy
import numpy as np
import copy as py_copy
from ivy.functional.frontends.pandas.func_wrapper import outputs_to_self_class
import ivy.functional.frontends.pandas.series as series
from ivy.functional.frontends.pandas.index import Index
class NDFrame:
def __init__(self, data, index, columns, dtype, name, copy, *args, **kwargs):
self.name = name
self.columns = columns
self.dtype = dtype
self.copy = copy
self.orig_data = py_copy.deepcopy(data)
if ivy.is_native_array(data):
self.array = ivy.array(data)
# repeatedly used checks
data_is_array = isinstance(data, (ivy.Array, np.ndarray))
data_is_array_or_like = data_is_array or isinstance(data, (list, tuple))
# setup a default index if none provided
orig_data_len = len(self.orig_data)
if index is None:
if data_is_array_or_like:
index = ivy.arange(orig_data_len)
elif isinstance(data, dict):
index = list(data.keys())
elif isinstance(data, series.Series):
index = data.index
elif isinstance(data, dict) and len(index) > orig_data_len:
for i in index:
if i not in data:
data[i] = ivy.nan
if data_is_array_or_like:
self.index = index
self.array = ivy.array(data)
elif isinstance(data, dict):
self.index = index
self.array = ivy.array(list(data.values()))
elif isinstance(data, (int, float)):
if len(index) > 1:
data = [data] * len(index)
self.index = index
self.array = ivy.array(data)
elif isinstance(data, series.Series):
self.array = data.array
self.index = index
elif isinstance(data, str):
pass # TODO: implement string series
else:
raise TypeError(
"Data must be one of array, dict, iterables, scalar value or Series."
f" Got {type(data)}"
)
self.index = (
Index(self.index) if not isinstance(self.index, Index) else self.index
)
@property
def data(self):
# return underlying data in the original format
ret = self.array.to_list()
if isinstance(self.orig_data, tuple):
ret = tuple(ret)
elif isinstance(self.orig_data, dict):
ret = dict(zip(self.orig_data.keys(), ret))
return ret
@outputs_to_self_class
def abs(self):
return ivy.abs(self.array)
def to_numpy(self, dtype=None, copy=False, na_value=None):
ret = self.array.to_numpy()
if na_value is not None:
ret = np.where(np.isnan(ret), na_value, ret)
if dtype is not None:
ret = ret.astype(dtype)
if copy:
return ret.copy()
return ret
def __array__(self):
return self.array.to_numpy()
@outputs_to_self_class
def __array_wrap__(self, array):
return array
def __getattr__(self, item):
raise NotImplementedError
| ivy/ivy/functional/frontends/pandas/generic.py/0 | {
"file_path": "ivy/ivy/functional/frontends/pandas/generic.py",
"repo_id": "ivy",
"token_count": 1523
} | 36 |
from .interpolate import *
| ivy/ivy/functional/frontends/scipy/interpolate/__init__.py/0 | {
"file_path": "ivy/ivy/functional/frontends/scipy/interpolate/__init__.py",
"repo_id": "ivy",
"token_count": 8
} | 37 |
class BaseEstimator:
def get_params(self, deep=True):
return {}
def set_params(self, **params):
return self
class ClassifierMixin:
def score(self, X, y, sample_weight=None):
raise NotImplementedError
def fit(self, X, y, **kwargs):
raise NotImplementedError
class TransformerMixin:
def fit_transform(self, X, y=None, **fit_params):
raise NotImplementedError
class RegressorMixin:
def score(self, X, y, sample_weight=None):
raise NotImplementedError
def fit(self, X, y, **kwargs):
raise NotImplementedError
def predict(self, X):
raise NotImplementedError
class MultiOutputMixin:
def _more_tags(self):
return {"multioutput": True}
| ivy/ivy/functional/frontends/sklearn/base.py/0 | {
"file_path": "ivy/ivy/functional/frontends/sklearn/base.py",
"repo_id": "ivy",
"token_count": 304
} | 38 |
import ivy
from ivy.functional.frontends.numpy.func_wrapper import to_ivy_arrays_and_back
from ivy.func_wrapper import with_unsupported_dtypes
@to_ivy_arrays_and_back
def as_float_array(X, *, copy=True, force_all_finite=True):
if X.dtype in [ivy.float32, ivy.float64]:
return X.copy_array() if copy else X
if ("bool" in X.dtype or "int" in X.dtype or "uint" in X.dtype) and ivy.itemsize(
X
) <= 4:
return_dtype = ivy.float32
else:
return_dtype = ivy.float64
return ivy.asarray(X, dtype=return_dtype)
@with_unsupported_dtypes({"1.3.0 and below": ("complex",)}, "sklearn")
@to_ivy_arrays_and_back
def column_or_1d(y, *, warn=False):
shape = y.shape
if len(shape) == 2 and shape[1] == 1:
y = ivy.reshape(y, (-1,))
elif len(shape) > 2:
raise ValueError("y should be a 1d array or a column vector")
return y
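# Hedged usage sketch (illustrative only, not part of the original frontend):
# column vectors are flattened, 1-d inputs pass through, higher ranks raise.
def _example_column_or_1d():
    col = ivy.array([[1.0], [2.0], [3.0]])  # shape (3, 1)
    flat = column_or_1d(col)  # -> shape (3,)
    same = column_or_1d(ivy.array([1.0, 2.0]))  # 1-d input returned as-is
    return flat, same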
| ivy/ivy/functional/frontends/sklearn/utils/validation.py/0 | {
"file_path": "ivy/ivy/functional/frontends/sklearn/utils/validation.py",
"repo_id": "ivy",
"token_count": 400
} | 39 |
import ivy
from ivy.functional.frontends.tensorflow.func_wrapper import to_ivy_arrays_and_back
# --- Helpers --- #
# --------------- #
def _binary_matches(y_true, y_pred, threshold=0.5):
threshold = ivy.astype(ivy.array(threshold), y_pred.dtype)
y_pred = ivy.astype(ivy.greater(y_pred, threshold), y_pred.dtype)
return ivy.astype(
ivy.equal(y_true, y_pred), ivy.default_float_dtype(as_native=True)
)
def _cond_convert_labels(y_true):
are_zeros = ivy.equal(y_true, 0.0)
are_ones = ivy.equal(y_true, 1.0)
is_binary = ivy.all(ivy.logical_or(are_zeros, are_ones))
# convert [0, 1] labels to [-1, 1]
if is_binary:
return 2.0 * y_true - 1
return y_true
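# Hedged sketch (illustrative only): when every label is 0 or 1, the helper
# above maps {0, 1} -> {-1, 1} via 2*y - 1, the convention hinge-style losses
# expect; any other label set is passed through unchanged.
def _example_cond_convert_labels():
    signed = _cond_convert_labels(ivy.array([0.0, 1.0, 1.0]))  # -> [-1., 1., 1.]
    unchanged = _cond_convert_labels(ivy.array([-1.0, 1.0]))  # not binary-coded
    return signed, unchanged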
@to_ivy_arrays_and_back
def _sparse_categorical_matches(y_true, y_pred):
reshape = False
y_true = ivy.array(y_true)
y_pred = ivy.array(y_pred)
y_true_org_shape = ivy.shape(y_true)
y_true_rank = y_true.ndim
y_pred_rank = y_pred.ndim
# y_true shape to (num_samples,)
if (
(y_true_rank is not None)
and (y_pred_rank is not None)
and (len(ivy.shape(y_true)) == len(ivy.shape(y_pred)))
):
y_true = ivy.squeeze(y_true, axis=-1)
reshape = True
y_pred = ivy.argmax(y_pred, axis=-1)
# cast prediction type to be the same as ground truth
y_pred = ivy.astype(y_pred, y_true.dtype, copy=False)
matches = ivy.astype(ivy.equal(y_true, y_pred), ivy.float32)
if reshape:
matches = ivy.reshape(matches, shape=y_true_org_shape)
return matches
@to_ivy_arrays_and_back
def _sparse_top_k_categorical_matches(y_true, y_pred, k=5):
# Temporary composition
def _in_top_k(targets, predictions, topk):
# Sanity check
ivy.utils.assertions.check_equal(
targets.ndim,
1,
message="targets must be 1-dimensional",
as_array=False,
)
ivy.utils.assertions.check_equal(
predictions.ndim,
2,
message="predictions must be 2-dimensional",
as_array=False,
)
targets_batch = ivy.shape(targets)[0]
pred_batch = ivy.shape(predictions)[0]
ivy.utils.assertions.check_equal(
targets_batch,
pred_batch,
message=(
f"first dim of predictions: {pred_batch} must match targets length:"
f" {targets_batch}"
),
as_array=False,
)
# return array of top k values from the input
def _top_k(input, topk):
x = ivy.array(input)
sort = ivy.argsort(x, descending=True)
topk = min(x.shape[-1], topk)
# Safety check for equal values
result = []
for ind, li in enumerate(sort):
temp = [x[ind, _] for _ in li[:topk]]
result.append(temp)
return ivy.array(result)
top_k = _top_k(predictions, topk)
labels = ivy.shape(predictions)[1]
# float comparison?
return ivy.array(
[
(
0 <= res < labels
and ivy.min(top_k[ind] - predictions[ind, res]) <= 1e-9
)
for ind, res in enumerate(targets)
]
)
reshape = False
y_true = ivy.array(y_true)
y_pred = ivy.array(y_pred)
y_true_org_shape = ivy.shape(y_true)
y_true_rank = y_true.ndim
y_pred_rank = y_pred.ndim
# y_pred shape to (batch_size, num_samples), y_true shape to (num_samples,)
if (y_true_rank is not None) and (y_pred_rank is not None):
if y_pred_rank > 2:
y_pred = ivy.reshape(y_pred, shape=[-1, y_pred.shape[-1]])
if y_true_rank > 1:
reshape = True
y_true = ivy.reshape(y_true, shape=[-1])
matches = ivy.astype(
_in_top_k(targets=ivy.astype(y_true, ivy.int32), predictions=y_pred, topk=k),
ivy.float32,
)
# return to original shape
if reshape:
return ivy.reshape(matches, shape=y_true_org_shape)
return matches
# --- Main --- #
# ------------ #
@to_ivy_arrays_and_back
def binary_accuracy(y_true, y_pred, threshold=0.5):
return ivy.mean(_binary_matches(y_true, y_pred, threshold), axis=-1)
@to_ivy_arrays_and_back
def binary_crossentropy(
y_true, y_pred, from_logits: bool = False, label_smoothing: float = 0.0
):
y_pred = ivy.asarray(y_pred)
y_true = ivy.asarray(y_true, dtype=y_pred.dtype)
label_smoothing = ivy.asarray(label_smoothing, dtype=y_pred.dtype)
y_true = y_true * (1.0 - label_smoothing) + 0.5 * label_smoothing
if from_logits:
zeros = ivy.zeros_like(y_pred, dtype=y_pred.dtype)
cond = y_pred >= zeros
relu_logits = ivy.where(cond, y_pred, zeros)
neg_abs_logits = ivy.where(cond, -y_pred, y_pred)
bce = ivy.add(relu_logits - y_pred * y_true, ivy.log1p(ivy.exp(neg_abs_logits)))
else:
epsilon_ = 1e-7
y_pred = ivy.clip(y_pred, epsilon_, 1.0 - epsilon_)
bce = y_true * ivy.log(y_pred + epsilon_)
bce += (1 - y_true) * ivy.log(1 - y_pred + epsilon_)
bce = -bce
return ivy.mean(bce, axis=-1).astype(y_pred.dtype)
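# Hedged numerical sketch (illustrative only): the from_logits branch above is
# the stable rewrite max(z, 0) - z*y + log1p(exp(-|z|)) of
# -y*log(sigmoid(z)) - (1 - y)*log(1 - sigmoid(z)), so both call paths should
# agree up to the clipping epsilon.
def _example_binary_crossentropy_paths():
    y_true = ivy.array([[0.0, 1.0]])
    logits = ivy.array([[2.0, -1.0]])
    via_logits = binary_crossentropy(y_true, logits, from_logits=True)
    via_probs = binary_crossentropy(y_true, ivy.sigmoid(logits))
    return via_logits, via_probs  # expected to match closely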
@to_ivy_arrays_and_back
def binary_focal_crossentropy(
y_true, y_pred, gamma=2.0, from_logits=False, label_smoothing=0.0, axis=-1
):
y_pred = ivy.asarray(y_pred)
y_true = ivy.asarray(y_true, dtype=y_pred.dtype)
label_smoothing = ivy.asarray(label_smoothing, dtype=y_pred.dtype)
gamma = ivy.asarray(gamma, dtype=y_pred.dtype)
if label_smoothing > 0.0:
y_true = y_true * (1.0 - label_smoothing) + 0.5 * label_smoothing
if from_logits:
sigmoidal = ivy.sigmoid(y_pred)
else:
sigmoidal = y_pred
p_t = (y_true * sigmoidal) + ((1 - y_true) * (1 - sigmoidal))
focal_factor = ivy.pow(1.0 - p_t, gamma)
if from_logits:
zeros = ivy.zeros_like(y_pred, dtype=y_pred.dtype)
cond = y_pred >= zeros
relu_logits = ivy.where(cond, y_pred, zeros)
neg_abs_logits = ivy.where(cond, -y_pred, y_pred)
bce = ivy.add(relu_logits - y_pred * y_true, ivy.log1p(ivy.exp(neg_abs_logits)))
else:
epsilon_ = 1e-7
y_pred = ivy.clip(y_pred, epsilon_, 1.0 - epsilon_)
bce = y_true * ivy.log(y_pred + epsilon_)
bce += (1 - y_true) * ivy.log(1 - y_pred + epsilon_)
bce = -bce
bfce = focal_factor * bce
return ivy.mean(bfce, axis=ivy.to_scalar(axis))
@to_ivy_arrays_and_back
def categorical_accuracy(y_true, y_pred):
return _sparse_categorical_matches(ivy.argmax(y_true, axis=-1), y_pred)
@to_ivy_arrays_and_back
def categorical_crossentropy(y_true, y_pred, from_logits=False, label_smoothing=0.0):
if from_logits:
y_pred = ivy.softmax(y_pred)
return ivy.mean(ivy.categorical_cross_entropy(y_true, y_pred, label_smoothing))
@to_ivy_arrays_and_back
def cosine_similarity(y_true, y_pred):
y_pred = ivy.asarray(y_pred)
y_true = ivy.asarray(y_true)
if len(y_true.shape) == len(y_pred.shape) and len(y_true.shape) == 2:
numerator = ivy.sum(y_true * y_pred, axis=1)
else:
numerator = ivy.vecdot(y_true, y_pred)
denominator = ivy.matrix_norm(y_true) * ivy.matrix_norm(y_pred)
return numerator / denominator
@to_ivy_arrays_and_back
def hinge(y_true, y_pred):
y_true = ivy.astype(ivy.array(y_true), y_pred.dtype, copy=False)
y_true = _cond_convert_labels(y_true)
return ivy.mean(ivy.maximum(1.0 - y_true * y_pred, 0.0), axis=-1)
@to_ivy_arrays_and_back
def kl_divergence(y_true, y_pred):
# clip to range but avoid div-0
y_true = ivy.clip(y_true, 1e-7, 1)
y_pred = ivy.clip(y_pred, 1e-7, 1)
return ivy.sum(y_true * ivy.log(y_true / y_pred), axis=-1).astype(y_true.dtype)
@to_ivy_arrays_and_back
def log_cosh(y_true, y_pred):
y_true = ivy.astype(y_true, y_pred.dtype)
diff = y_pred - y_true
log_val = ivy.astype(ivy.log(2.0), diff.dtype)
return ivy.mean(diff + ivy.softplus(-2.0 * diff) - log_val, axis=-1)
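# Hedged sketch (illustrative only): log_cosh above relies on the identity
# log(cosh(d)) = d + softplus(-2d) - log(2), which avoids overflowing cosh
# for large |d|; the naive form below is only safe for moderate values.
def _example_log_cosh_identity():
    d = ivy.array([0.5, 5.0, 20.0])
    stable = d + ivy.softplus(-2.0 * d) - ivy.log(ivy.array(2.0))
    naive = ivy.log(ivy.cosh(d))  # overflows far sooner than the stable form
    return stable, naive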
@to_ivy_arrays_and_back
def mean_absolute_error(y_true, y_pred):
return ivy.mean(ivy.abs(y_true - y_pred), axis=-1)
@to_ivy_arrays_and_back
def mean_absolute_percentage_error(y_true, y_pred):
y_true = ivy.astype(y_true, y_pred.dtype, copy=False)
diff = ivy.abs((y_true - y_pred) / ivy.maximum(ivy.abs(y_true), 1e-7))
return 100.0 * ivy.mean(diff, axis=-1)
@to_ivy_arrays_and_back
def mean_squared_error(y_true, y_pred):
return ivy.mean(ivy.square(ivy.subtract(y_true, y_pred)), axis=-1)
@to_ivy_arrays_and_back
def mean_squared_logarithmic_error(y_true, y_pred):
y_true = ivy.astype(y_true, y_pred.dtype)
first_log = ivy.log(ivy.maximum(y_pred, 1e-7) + 1.0)
second_log = ivy.log(ivy.maximum(y_true, 1e-7) + 1.0)
return ivy.mean(ivy.square(ivy.subtract(first_log, second_log)), axis=-1)
@to_ivy_arrays_and_back
def poisson(y_true, y_pred):
y_true = ivy.astype(y_true, y_pred.dtype, copy=False)
return ivy.mean(y_pred - y_true * ivy.log(y_pred + 1e-7), axis=-1)
@to_ivy_arrays_and_back
def sparse_categorical_crossentropy(y_true, y_pred, from_logits=False, axis=-1):
if from_logits:
y_pred = ivy.softmax(y_pred)
return ivy.sparse_cross_entropy(y_true, y_pred, axis=axis)
@to_ivy_arrays_and_back
def sparse_top_k_categorical_accuracy(y_true, y_pred, k=5):
return _sparse_top_k_categorical_matches(y_true, y_pred, k)
@to_ivy_arrays_and_back
def squared_hinge(y_true, y_pred):
y_true = ivy.astype(ivy.array(y_true), y_pred.dtype)
y_true = _cond_convert_labels(y_true)
return ivy.mean(ivy.square(ivy.maximum(1.0 - y_true * y_pred, 0.0)), axis=-1)
kld = kl_divergence
kullback_leibler_divergence = kl_divergence
logcosh = log_cosh
mae = mean_absolute_error
mape = mean_absolute_percentage_error
mse = mean_squared_error
msle = mean_squared_logarithmic_error
| ivy/ivy/functional/frontends/tensorflow/keras/metrics.py/0 | {
"file_path": "ivy/ivy/functional/frontends/tensorflow/keras/metrics.py",
"repo_id": "ivy",
"token_count": 4983
} | 40 |
# global
import weakref
# local
import ivy
import ivy.functional.frontends.tensorflow as tf_frontend
from ivy.functional.frontends.tensorflow import EagerTensor
class TensorArray:
def __init__(
self,
dtype,
size=None,
dynamic_size=None,
clear_after_read=None,
tensor_array_name=None,
handle=None,
flow=None,
infer_shape=True,
element_shape=None,
colocate_with_first_write_call=True,
name=None,
):
del (flow, tensor_array_name, name)
self._handle = None
self._flow = tf_frontend.constant(0, dtype=tf_frontend.int32)
self._infer_shape = infer_shape
self._element_shape = (
ivy.Shape(element_shape) if element_shape is not None else element_shape
)
self._colocate_with_first_write_call = colocate_with_first_write_call
self._dtype = tf_frontend.as_dtype(dtype)
self._dynamic_size = dynamic_size or False
self._clear_after_read = True if clear_after_read is None else clear_after_read
self._previously_read_indices = []
if isinstance(size, EagerTensor):
size = size.ivy_array
self._tensor_array = [None for _ in range(size)]
self._parent = weakref.ref(self)
@property
def flow(self):
return self._flow
@property
def dtype(self):
return self._dtype
@property
def handle(self):
return self._handle
@property
def element_shape(self):
return self._element_shape
def identity(self):
return self._parent()
def grad(self, source, flow=None, name=None):
raise NotImplementedError(
"TensorArray.grad is not supported when executing eagerly; eager's "
"gradient implementation does not use/need this function to compute "
"gradients of operations that use TensorArrays."
)
@property
def dynamic_size(self):
return self._dynamic_size
@property
def infer_shape(self):
return self._infer_shape
def read(self, index, name=None):
if isinstance(index, EagerTensor):
index = ivy.to_scalar(index.ivy_array)
if index < 0:
raise IndexError(f"Reading from negative indices {index} is not allowed.")
if index >= len(self._tensor_array):
raise IndexError(
f"Tried to read from index {index} but array size is:"
f" {len(self._tensor_array)} "
)
tensor = self._tensor_array[index]
if tensor is None:
if index in self._previously_read_indices:
raise ValueError(
f"Could not read index {index} twice because it was cleared after a"
" previous read (perhaps try setting clear_after_read = false?)"
)
else:
tensor = self._tensor_array[index] = tf_frontend.zeros(
shape=self._element_shape, dtype=self._dtype
)
if self._clear_after_read:
self._tensor_array[index] = None
self._previously_read_indices.append(index)
return tensor
def _write(self, index, value, name=None):
if isinstance(index, EagerTensor):
index = ivy.to_scalar(index.ivy_array)
if index < 0:
raise IndexError(f"Reading from negative indices {index} is not allowed.")
size = len(self._tensor_array)
if index >= size:
if not self._dynamic_size:
raise IndexError(
f"Tried to write to index {index} but array is not resizeable and"
f" size is: {size}"
)
self._tensor_array.extend(None for _ in range(index - size + 1))
if not isinstance(value, EagerTensor):
value = tf_frontend.cast(value, self.dtype)
if self._dtype != value.dtype:
raise ValueError(
f"TensorArray dtype is {self._dtype} but Op is trying to write dtype"
f" {value.dtype} "
)
if self._infer_shape:
self._element_shape = self._merge_shape(value)
self._tensor_array[index] = value
def _merge_shape(self, value):
if self._element_shape is None:
return value.shape
if len(self._element_shape) != len(value.shape):
raise ValueError("Shapes not compatible")
shape = []
for a, b in zip(self._element_shape, value.shape):
if a == b or a is None:
shape.append(b)
else:
raise ValueError("Shapes not compatible")
return tuple(shape)
def write(self, index, value, name=None):
self._write(index, value)
return self._parent()
def stack(self, name=None):
if self._tensor_array:
for ix in range(len(self._tensor_array)):
if self._tensor_array[ix] is None:
self._tensor_array[ix] = tf_frontend.zeros(
shape=self._element_shape, dtype=self._dtype
)
if not self._tensor_array and self._element_shape.is_fully_defined():
return tf_frontend.constant(
[0] + list(self.element_shape), dtype=self._dtype
)
else:
return tf_frontend.stack(self._tensor_array)
def _maybe_zero(self, ix):
val = self._tensor_array[ix]
if val is None:
val = self._tensor_array[ix] = tf_frontend.zeros(
shape=self._element_shape, dtype=self._dtype
)
return val
def gather(self, indices, name=None):
if isinstance(indices, EagerTensor):
indices = indices.ivy_array
return tf_frontend.stack([self._maybe_zero(i) for i in indices])
def concat(self, name=None):
return tf_frontend.concat(
[self._maybe_zero(ix) for ix in range(len(self._tensor_array))],
0,
name=name,
)
def unstack(self, value, name=None):
tensors = tf_frontend.unstack(value, name=name)
if len(tensors) > len(self._tensor_array) and not self._dynamic_size:
raise ValueError(
f"Cannot unstack {len(tensors)} tensors into a TensorArray of static"
f" size {len(self._tensor_array)} "
)
self._tensor_array = tensors
return self._parent()
def scatter(self, indices, value, name=None):
if isinstance(indices, EagerTensor):
indices = indices.ivy_array
for index, val in zip(indices, tf_frontend.unstack(value)):
self._write(index, val)
return self._parent()
def size(self, name=None):
return tf_frontend.constant(len(self._tensor_array))
def close(self, name=None):
del self._tensor_array[:]
def split(self, value, lengths, name=None):
value = tf_frontend.cast(value, self.dtype)
lengths = (
tf_frontend.constant(lengths)
if not isinstance(lengths, EagerTensor)
else lengths
)
self._tensor_array = tf_frontend.split(value, lengths, name=name)
return self._parent()
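# Hedged usage sketch (illustrative only, not part of the original frontend):
# a write/read/stack round trip, assuming the frontend exposes
# tf_frontend.float32 and tf_frontend.constant as used elsewhere in this file.
def _example_tensor_array_round_trip():
    ta = TensorArray(dtype=tf_frontend.float32, size=2, clear_after_read=False)
    ta = ta.write(0, tf_frontend.constant([1.0, 2.0]))
    ta = ta.write(1, tf_frontend.constant([3.0, 4.0]))
    first = ta.read(0)  # retained because clear_after_read=False
    stacked = ta.stack()  # shape (2, 2)
    return first, stacked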
| ivy/ivy/functional/frontends/tensorflow/tensorarray.py/0 | {
"file_path": "ivy/ivy/functional/frontends/tensorflow/tensorarray.py",
"repo_id": "ivy",
"token_count": 3462
} | 41 |
import ivy
import ivy.functional.frontends.torch as torch_frontend
from ivy.functional.frontends.torch.func_wrapper import to_ivy_arrays_and_back
from ivy.func_wrapper import with_unsupported_dtypes
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, "torch")
@to_ivy_arrays_and_back
def cosine_similarity(x1, x2, *, dim=1, eps=1e-08):
x1, x2 = torch_frontend.promote_types_of_torch_inputs(x1, x2)
if len(x1.shape) == len(x2.shape) and len(x2.shape) >= 2:
numerator = ivy.sum(x1 * x2, axis=dim)
x1_squared_norm = ivy.sum(ivy.square(x1), axis=dim)
x2_squared_norm = ivy.sum(ivy.square(x2), axis=dim)
else:
numerator = ivy.sum(x1 * x2)
x1_squared_norm = ivy.sum(ivy.square(x1))
x2_squared_norm = ivy.sum(ivy.square(x2))
x1_norm = ivy.sqrt(x1_squared_norm)
x2_norm = ivy.sqrt(x2_squared_norm)
norm_mm = x1_norm * x2_norm
norm_mm, eps = torch_frontend.promote_types_of_torch_inputs(norm_mm, eps)
denominator = ivy.maximum(norm_mm, eps)
cosine = numerator / denominator
return cosine
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, "torch")
@to_ivy_arrays_and_back
def pairwise_distance(x1, x2, *, p=2.0, eps=1e-06, keepdim=False):
x1, x2 = torch_frontend.promote_types_of_torch_inputs(x1, x2)
x1_dim = len(x1.shape)
x2_dim = len(x2.shape)
if x1_dim > x2_dim:
output_dim = x1_dim
else:
output_dim = x2_dim
return ivy.vector_norm(x1 - x2 + eps, ord=p, axis=output_dim - 1, keepdims=keepdim)
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, "torch")
@to_ivy_arrays_and_back
def pdist(input, p=2):
x = ivy.array(
[
abs(input[i] - input[j])
for i in range(len(input) - 1)
for j in range(i + 1, len(input))
]
)
return ivy.vector_norm(x, ord=p, axis=1)
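# Hedged sketch (illustrative only): pdist above returns the condensed upper
# triangle of the pairwise distance matrix, i.e. n*(n-1)/2 entries ordered
# (0,1), (0,2), ..., (n-2,n-1) for n input rows.
def _example_pdist_shape():
    points = ivy.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])
    # three 2-d points -> 3 distances: d(0,1)=5, d(0,2)=10, d(1,2)=5
    return pdist(points)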
| ivy/ivy/functional/frontends/torch/nn/functional/distance_functions.py/0 | {
"file_path": "ivy/ivy/functional/frontends/torch/nn/functional/distance_functions.py",
"repo_id": "ivy",
"token_count": 940
} | 42 |
from .special_funcs import *
| ivy/ivy/functional/frontends/torch/special/__init__.py/0 | {
"file_path": "ivy/ivy/functional/frontends/torch/special/__init__.py",
"repo_id": "ivy",
"token_count": 9
} | 43 |
import ivy
class LogisticRegression:
@staticmethod
def pred_transform(x):
return ivy.sigmoid(x)
@staticmethod
def first_order_gradient(predt, label):
return predt - label
@staticmethod
def second_order_gradient(predt, label):
return ivy.fmax(predt * (1.0 - predt), 1e-16)
@staticmethod
def prob_to_margin(base_score):
return ivy.logit(base_score)
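# Hedged usage sketch (illustrative only): in a boosting step, the objective
# above supplies the first- and second-order terms of the logistic loss with
# respect to the transformed (sigmoid) prediction.
def _example_logistic_objective_step():
    raw_margin = ivy.array([0.2, -1.3])
    label = ivy.array([1.0, 0.0])
    prob = LogisticRegression.pred_transform(raw_margin)  # sigmoid(margin)
    grad = LogisticRegression.first_order_gradient(prob, label)  # p - y
    hess = LogisticRegression.second_order_gradient(prob, label)  # p(1-p)
    return grad, hess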
| ivy/ivy/functional/frontends/xgboost/objective/regression_loss.py/0 | {
"file_path": "ivy/ivy/functional/frontends/xgboost/objective/regression_loss.py",
"repo_id": "ivy",
"token_count": 179
} | 44 |
from typing import Optional, Union, Tuple, Sequence
import ivy
from ivy.func_wrapper import (
handle_array_function,
handle_out_argument,
to_native_arrays_and_back,
handle_array_like_without_promotion,
handle_nestable,
infer_dtype,
handle_device,
handle_backend_invalid,
)
from ivy.utils.exceptions import handle_exceptions
# TODO: Make bins optional by offering an automatic bins creation like numpy.
# Make density argument work in tensorflow
# Bins as str is not defined (check Numpy implementation).
# Permit multiple axis.
# Modify documentation to match the above modifications.
@handle_exceptions
@handle_backend_invalid
@handle_nestable
@handle_out_argument
@to_native_arrays_and_back
@handle_device
def histogram(
a: Union[ivy.Array, ivy.NativeArray],
/,
*,
bins: Optional[Union[int, ivy.Array, ivy.NativeArray]] = None,
axis: Optional[int] = None,
extend_lower_interval: Optional[bool] = False,
extend_upper_interval: Optional[bool] = False,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype]] = None,
range: Optional[Tuple[float]] = None,
weights: Optional[Union[ivy.Array, ivy.NativeArray]] = None,
density: Optional[bool] = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Compute the histogram of the array ``a``.
.. note::
Given bins = [c0, ..., cK], defining intervals I0 = [c0, c1), I1 = [c1, c2),
..., I_{K-1} = [c_{K-1}, cK].
Parameters
----------
a
input array.
bins
if ``bins`` is an int, it defines the number of equal-width bins in the given
range.
if ``bins`` is an array, it defines a monotonically increasing array of bin
edges, including the rightmost edge, allowing for non-uniform bin widths.
axis
dimension along which the histogram must be computed. By default, the
histogram must be computed over the entire array. Default: ``None``.
extend_lower_interval
if True, extend the lowest interval I0 to (-inf, c1].
extend_upper_interval
if True, extend the upper interval I_{K-1} to [c_{K-1}, +inf).
dtype
the output type.
range
the lower and upper range of the bins. The first element of the range must be
less than or equal to the second.
weights
each value in ``a`` only contributes its associated weight towards the bin count
(instead of 1). Must be of the same shape as a.
density
if True, the result is the value of the probability density function at the
bin, normalized such that the integral over the range of bins is 1.
out
optional output array, for writing the result to. It must have a shape that the
inputs broadcast to.
Returns
-------
ret
a tuple containing the values of the histogram and the bin edges.
Both the description and the type hints above assume an array input for simplicity,
but this function is *nestable*, and therefore also accepts :class:`ivy.Container`
instances in place of any of the arguments.
Examples
--------
With :class:`ivy.Array` input:
>>> x = ivy.array([0, 1, 2])
>>> y = ivy.array([0., 0.5, 1., 1.5, 2.])
>>> z = ivy.histogram(x, bins=y)
>>> print(z)
ivy.array([1., 0., 1., 1.])
>>> x = ivy.array([[1.1, 2.2, 3.3],
... [4.4, 5.5, .6]])
>>> bins = 4
>>> range = (0., 5.)
>>> dtype = ivy.int32
>>> y = ivy.histogram(x, bins=bins, range=range, dtype=dtype)
>>> print(y)
ivy.array([2, 1, 1, 1])
>>> x = ivy.array([[1.1, 2.2, 3.3],
... [-4.4, -5.5, -6.6]])
>>> y = ivy.array([0., 1., 2., 3., 4., 5.])
>>> axis = 1
>>> extend_lower_interval = True
>>> extend_upper_interval = True
>>> dtype = ivy.float32
>>> weights = ivy.array([[1., 1., 1.], [1., 1., 1.]])
>>> z = ivy.histogram(
... x,
... bins=y,
... axis=axis,
... extend_lower_interval=extend_lower_interval,
... extend_upper_interval=extend_upper_interval,
... dtype=dtype,
... weights=weights)
>>> print(z)
ivy.array([[0., 3.],
[1., 0.],
[1., 0.],
[1., 0.],
[0., 0.]])
>>> x = ivy.Container(a=ivy.array([0., 1., 2.]), b=ivy.array([3., 4., 5.]))
>>> y = ivy.array([0., 1., 2., 3., 4., 5.])
>>> dtype = ivy.int32
>>> z = ivy.histogram(x, bins=y, dtype=dtype)
>>> print(z)
{
a: ivy.array([1, 1, 1, 0, 0]),
b: ivy.array([0, 0, 0, 1, 2])
}
"""
return ivy.current_backend(a).histogram(
a,
bins=bins,
axis=axis,
extend_lower_interval=extend_lower_interval,
extend_upper_interval=extend_upper_interval,
dtype=dtype,
range=range,
weights=weights,
density=density,
out=out,
)
@handle_exceptions
@handle_backend_invalid
@handle_nestable
@handle_out_argument
@to_native_arrays_and_back
@handle_device
def median(
input: ivy.Array,
/,
*,
axis: Optional[Union[Tuple[int], int]] = None,
keepdims: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Compute the median along the specified axis.
Parameters
----------
input
Input array.
axis
Axis or axes along which the medians are computed. The default is to compute
the median along a flattened version of the array.
keepdims
If this is set to True, the axes which are reduced are left in the result
as dimensions with size one.
out
optional output array, for writing the result to.
Returns
-------
ret
The median of the array elements.
Examples
--------
>>> a = ivy.array([[10, 7, 4], [3, 2, 1]])
>>> ivy.median(a)
3.5
>>> ivy.median(a, axis=0)
ivy.array([6.5, 4.5, 2.5])
"""
return ivy.current_backend().median(input, axis=axis, keepdims=keepdims, out=out)
@handle_exceptions
@handle_backend_invalid
@handle_nestable
@handle_out_argument
@to_native_arrays_and_back
@infer_dtype
@handle_device
def nanmean(
a: ivy.Array,
/,
*,
axis: Optional[Union[Tuple[int], int]] = None,
keepdims: bool = False,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype]] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Compute the mean of all non-NaN elements along the specified dimensions.
Parameters
----------
a
Input array.
axis
Axis or axes along which the means are computed.
The default is to compute the mean of the flattened array.
keepdims
If this is set to True, the axes which are reduced are left in the result
as dimensions with size one. With this option, the result will broadcast
correctly against the original a. If the value is anything but the default,
then keepdims will be passed through to the mean or sum methods of sub-classes
of ndarray. If the sub-classes' methods do not implement keepdims, any
exceptions will be raised.
dtype
The desired data type of returned tensor. Default is None.
out
optional output array, for writing the result to.
Returns
-------
ret
The nanmean of the array elements.
Examples
--------
>>> a = ivy.array([[1, ivy.nan], [3, 4]])
>>> ivy.nanmean(a)
2.6666666666666665
>>> ivy.nanmean(a, axis=0)
ivy.array([2., 4.])
"""
return ivy.current_backend(a).nanmean(
a, axis=axis, keepdims=keepdims, dtype=dtype, out=out
)
@handle_out_argument
@handle_nestable
@handle_backend_invalid
@handle_exceptions
@to_native_arrays_and_back
@handle_device
def nanmin(
x: ivy.Array,
/,
*,
axis: Optional[Union[Tuple[int], int]] = None,
keepdims: Optional[bool] = False,
out: Optional[ivy.Array] = None,
initial: Optional[Union[int, float, complex]] = None,
where: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Return minimum of an array or minimum along an axis, ignoring any NaNs.
Parameters
----------
x
Input array.
axis
Axis or axes along which the minimum is computed.
The default is to compute the minimum of the flattened array.
out
optional output array, for writing the result to.
keepdims
If this is set to True, the axes which are reduced are left in the result
as dimensions with size one. With this option, the result will broadcast
correctly against the original a.
initial
The maximum value of an output element.
where
Elements to compare for the minimum
Returns
-------
ret
Return minimum of an array or minimum along an axis, ignoring any NaNs
Functional Examples
-------------------
>>> a = ivy.array([[1, ivy.nan], [3, 4]])
>>> ivy.nanmin(a)
1.0
>>> ivy.nanmin(a, axis=1)
[1. 3.]
>>> ivy.nanmin(a, axis=0, keepdims=True)
[[1. 2.]]
"""
return ivy.current_backend(x).nanmin(
x,
axis=axis,
keepdims=keepdims,
out=out,
initial=initial,
where=where,
)
@handle_exceptions
@handle_backend_invalid
@handle_nestable
@handle_out_argument
@to_native_arrays_and_back
@infer_dtype
@handle_device
def nanprod(
a: ivy.Array,
/,
*,
axis: Optional[Union[Tuple[int], int]] = None,
keepdims: Optional[bool] = False,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype]] = None,
out: Optional[ivy.Array] = None,
initial: Optional[Union[int, float, complex]] = None,
where: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Compute the product of array elements over a given axis treating Not a
Numbers (NaNs) as ones.
Parameters
----------
a
Input array.
axis
Axis or axes along which the product is computed.
The default is to compute the product of the flattened array.
dtype
The desired data type of returned array. Default is None.
out
optional output array, for writing the result to.
keepdims
If this is set to True, the axes which are reduced are left in the result
as dimensions with size one. With this option, the result will broadcast
correctly against the original a.
initial
The starting value for this product.
where
Elements to include in the product
Returns
-------
ret
The product of array elements over a given axis treating
Not a Numbers (NaNs) as ones
Functional Examples
-------------------
>>> a = ivy.array([[1, ivy.nan], [3, 4]])
>>> ivy.nanprod(a)
12.0
>>> ivy.nanprod(a, axis=0)
[3. 4.]
>>> ivy.nanprod(a, axis=0, keepdims=True)
[[3. 4.]]
"""
return ivy.current_backend(a).nanprod(
a,
axis=axis,
keepdims=keepdims,
dtype=dtype,
out=out,
initial=initial,
where=where,
)
@handle_exceptions
@handle_nestable
@handle_out_argument
@to_native_arrays_and_back
@handle_device
def quantile(
a: ivy.Array,
q: Union[ivy.Array, float],
/,
*,
axis: Optional[Union[Sequence[int], int]] = None,
keepdims: bool = False,
interpolation: str = "linear",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Compute the q-th quantile of the data along the specified axis.
Parameters
----------
a
Input array.
q
Quantile or sequence of quantiles to compute, which must be
between 0 and 1 inclusive.
axis
Axis or axes along which the quantiles are computed. The default
is to compute the quantile(s) along a flattened version of the array.
keepdims
If this is set to True, the axes which are reduced are left in the result
as dimensions with size one. With this option, the result will broadcast
correctly against the original array a.
interpolation
{'nearest', 'linear', 'lower', 'higher', 'midpoint', 'nearest_jax'}.
Default value: 'linear'.
This specifies the interpolation method to use when the desired quantile lies
between two data points i < j:
- linear: i + (j - i) * fraction, where fraction is the fractional part of the
index surrounded by i and j.
- lower: i.
- higher: j.
- nearest: i or j, whichever is nearest.
- midpoint: (i + j) / 2. linear and midpoint interpolation do not work with
integer dtypes.
- nearest_jax: provides jax-like computation for interpolation='nearest'.
out
optional output array, for writing the result to.
Returns
-------
ret
A (rank(q) + N - len(axis)) dimensional array of same dtype as a, or, if axis
is None, a rank(q) array. The first rank(q) dimensions index quantiles for
different values of q.
Examples
--------
>>> a = ivy.array([[10., 7., 4.], [3., 2., 1.]])
>>> q = ivy.array(0.5)
>>> ivy.quantile(a, q)
ivy.array(3.5)
>>> a = ivy.array([[10., 7., 4.], [3., 2., 1.]])
>>> q = 0.5
>>> ivy.quantile(a, q)
ivy.array(3.5)
>>> ivy.quantile(a, q, axis=0)
ivy.array([6.5, 4.5, 2.5])
>>> ivy.quantile(a, q, axis=1)
ivy.array([7., 2.])
>>> ivy.quantile(a, q, axis=1, keepdims=True)
ivy.array([[7.],[2.]])
>>> a = ivy.array([1., 2., 3., 4.])
>>> q = ivy.array([0.3, 0.7])
>>> ivy.quantile(a, q, interpolation='lower')
ivy.array([1., 3.])
"""
return ivy.current_backend(a).quantile(
a, q, axis=axis, keepdims=keepdims, interpolation=interpolation, out=out
)
@handle_exceptions
@handle_backend_invalid
@handle_nestable
@handle_out_argument
@to_native_arrays_and_back
@handle_device
def corrcoef(
x: ivy.Array,
/,
*,
y: Optional[ivy.Array] = None,
rowvar: bool = True,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Compute the Pearson product-moment correlation coefficients of the
variables in ``x`` (and, optionally, ``y``)."""
return ivy.current_backend(x).corrcoef(x, y=y, rowvar=rowvar, out=out)
@handle_exceptions
@handle_backend_invalid
@handle_nestable
@handle_out_argument
@to_native_arrays_and_back
@handle_device
def nanmedian(
input: ivy.Array,
/,
*,
axis: Optional[Union[Tuple[int], int]] = None,
keepdims: bool = False,
overwrite_input: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.nanmedian. This method simply
wraps the function, and so the docstring for ivy.nanmedian also applies to
this method with minimal changes.
Parameters
----------
self
Input array.
axis
Axis or axes along which the means are computed.
The default is to compute the mean of the flattened array.
keepdims
If this is set to True, the axes which are reduced are left in the result
as dimensions with size one. With this option, the result will broadcast
correctly against the original a. If the value is anything but the default,
then keepdims will be passed through to the mean or sum methods of
sub-classes of ndarray. If the sub-classes' methods do not implement
keepdims, any exceptions will be raised.
overwrite_input
If True, then allow use of memory of input array a for calculations.
The input array will be modified by the call to median. This will
save memory when you do not need to preserve the contents of the input array.
Treat the input as undefined, but it will probably be fully or partially sorted.
Default is False. If overwrite_input is True and a is not already an ndarray,
an error will be raised.
out
optional output array, for writing the result to.
Returns
-------
ret
A new array holding the result. If the input contains integers or floats
smaller than ``float64``, then the output data-type is ``np.float64``.
Otherwise, the data-type of the output is the same as that of the input.
This function is *nestable*, and therefore also accepts :class:`ivy.Container`
instances in place of the argument.
Examples
--------
With :class:`ivy.Array` input:
>>> x = ivy.array([[12.0, 10.0, 34.0], [45.0, 23.0, ivy.nan]])
>>> ivy.nanmedian(x)
ivy.array(23.)
With a mix of :class:`ivy.Container` and :class:`ivy.Array` input:
>>> x = ivy.Container(a=ivy.array([[10.0, ivy.nan, 4], [3, 2, 1]]),
b=ivy.array([[12, 10, 34], [45, 23, ivy.nan]]))
>>> ivy.nanmedian(x)
{
a: ivy.array(3.),
b: ivy.array(23.)
}
"""
return ivy.current_backend().nanmedian(
input, axis=axis, keepdims=keepdims, overwrite_input=overwrite_input, out=out
)
@handle_exceptions
@handle_backend_invalid
@handle_nestable
@handle_out_argument
@to_native_arrays_and_back
@handle_device
def bincount(
x: ivy.Array,
/,
*,
weights: Optional[ivy.Array] = None,
minlength: int = 0,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Count the number of occurrences of each value in an integer array.
Parameters
----------
x
Input array.
weights
An optional input array.
minlength
A minimum number of bins for the output array.
Returns
-------
ret
The bincount of the array elements.
Examples
--------
>>> x = ivy.array([0, 1, 1, 3, 2, 1, 7])
>>> ivy.bincount(x)
ivy.array([1, 3, 1, 1, 0, 0, 0, 1])
>>> ivy.bincount(x, minlength=10)
ivy.array([1, 3, 1, 1, 0, 0, 0, 1, 0, 0])
"""
return ivy.current_backend(x).bincount(
x, weights=weights, minlength=minlength, out=out
)
@handle_exceptions
@handle_backend_invalid
@handle_nestable
@handle_out_argument
@to_native_arrays_and_back
@handle_device
def igamma(
a: Union[ivy.Array, ivy.NativeArray],
/,
*,
x: Optional[Union[ivy.Array, ivy.NativeArray]] = None,
out: Optional[Union[ivy.Array, ivy.NativeArray]] = None,
) -> ivy.Array:
"""Compute the regularized lower gamma function of ``a`` and ``x``.
Parameters
----------
a
Input array.
x
An additional input array.
`x` has the same type as `a`.
out
optional output array, for writing the result to.
Returns
-------
ret
The lower incomplete gamma function of the array elements.
Examples
--------
>>> a = ivy.array([2.5])
>>> x = ivy.array([1.7, 1.2])
>>> ivy.igamma(a, x=x)
ivy.array([0.3614, 0.2085])
"""
return ivy.current_backend().igamma(a, x=x, out=out)
@handle_exceptions
@handle_backend_invalid
@handle_nestable
@handle_array_like_without_promotion
@to_native_arrays_and_back
def cov(
x1: Union[ivy.Array, ivy.NativeArray],
x2: Union[ivy.Array, ivy.NativeArray] = None,
/,
*,
rowVar: bool = True,
bias: bool = False,
ddof: Optional[int] = None,
fweights: Optional[ivy.Array] = None,
aweights: Optional[ivy.Array] = None,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype]] = None,
) -> ivy.Array:
"""Compute the covariance of matrix x1, or variables x1 and x2.
Parameters
----------
x1
a 1D or 2D input array, with a numeric data type.
x2
optional second 1D or 2D input array, with a numeric data type.
Must have the same shape as ``self``.
rowVar
optional variable where each row of input is interpreted as a variable
(default = True). If set to False, each column is instead interpreted
as a variable.
bias
optional variable for normalizing input (default = False) by (N - 1) where
N is the number of given observations. If set to True, then normalization
is instead by N. Can be overridden by keyword ``ddof``.
ddof
optional variable to override ``bias`` (default = None). ddof=1 will return
the unbiased estimate, even with fweights and aweights given. ddof=0 will
return the simple average.
fweights
optional 1D array of integer frequency weights; the number of times each
observation vector should be repeated.
aweights
optional 1D array of observation vector weights. These relative weights are
typically large for observations considered "important" and smaller for
observations considered less "important". If ddof=0 is specified, the array
of weights can be used to assign probabilities to observation vectors.
dtype
optional variable to set data-type of the result. By default, data-type
will have at least ``numpy.float64`` precision.
out
optional output array, for writing the result to. It must have a shape that
the inputs broadcast to.
Returns
-------
ret
an array containing the covariance matrix of an input matrix, or the
covariance matrix of two variables. The returned array must have a
floating-point data type determined by Type Promotion Rules and must be
a square matrix of shape (N, N), where N is the number of variables in the
input(s).
This function conforms to the `Array API Standard
<https://data-apis.org/array-api/latest/>`_. This docstring is an extension of the
`docstring <https://data-apis.org/array-api/latest/
extensions/generated/signatures.linalg.cov.html>`_
in the standard.
Both the description and the type hints above assume an array input for simplicity,
but this function is *nestable*, and therefore also accepts :class:`ivy.Container`
instances in place of any of the arguments.
Examples
--------
With :class:`ivy.Array` input:
>>> x = ivy.array([[1, 2, 3],
... [4, 5, 6]])
>>> y = x[0].cov(x[1])
>>> print(y)
ivy.array([[1., 1.],
[1., 1.]])
With :class:`ivy.Container` inputs:
>>> x = ivy.Container(a=ivy.array([1., 2., 3.]), b=ivy.array([1., 2., 3.]))
>>> y = ivy.Container(a=ivy.array([3., 2., 1.]), b=ivy.array([3., 2., 1.]))
>>> z = ivy.Container.static_cov(x, y)
>>> print(z)
{
a: ivy.array([[1., -1.],
[-1., 1.]]),
b: ivy.array([[1., -1.],
[-1., 1.]])
}
With a combination of :class:`ivy.Array` and :class:`ivy.Container` inputs:
>>> x = ivy.array([1., 2., 3.])
>>> y = ivy.Container(a=ivy.array([3. ,2. ,1.]), b=ivy.array([-1., -2., -3.]))
>>> z = ivy.cov(x, y)
>>> print(z)
{
a: ivy.array([[1., -1.],
[-1., 1.]]),
b: ivy.array([[1., -1.],
[-1., 1.]])
}
With :class:`ivy.Array` input and rowVar flag set to False (True by default):
>>> x = ivy.array([[1,2,3],
... [4,5,6]])
>>> y = x[0].cov(x[1], rowVar=False)
>>> print(y)
ivy.array([[1., 1.],
[1., 1.]])
With :class:`ivy.Array` input and bias flag set to True (False by default):
>>> x = ivy.array([[1,2,3],
... [4,5,6]])
>>> y = x[0].cov(x[1], bias=True)
>>> print(y)
ivy.array([[0.66666667, 0.66666667],
[0.66666667, 0.66666667]])
With :class:`ivy.Array` input with both fweights and aweights given:
>>> x = ivy.array([[1,2,3],
... [4,5,6]])
>>> fw = ivy.array([1,2,3])
>>> aw = ivy.array([ 1.2, 2.3, 3.4 ])
>>> y = x[0].cov(x[1], fweights=fw, aweights=aw)
>>> print(y)
ivy.array([[0.48447205, 0.48447205],
[0.48447205, 0.48447205]])
"""
return ivy.current_backend(x1).cov(
x1,
x2,
rowVar=rowVar,
bias=bias,
ddof=ddof,
fweights=fweights,
aweights=aweights,
dtype=dtype,
)
@handle_exceptions
@handle_backend_invalid
@handle_nestable
@handle_array_like_without_promotion
@handle_out_argument
@to_native_arrays_and_back
@handle_array_function
def cummax(
x: Union[ivy.Array, ivy.NativeArray],
/,
*,
axis: int = 0,
exclusive: bool = False,
reverse: bool = False,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype]] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Return a tuple containing the cumulative maximum of elements of input
along the given axis and index location of each maximum value found along
the given axis.
Parameters
----------
x
Input array.
axis
Axis along which the cumulative maximum is computed. Default is ``0``.
exclusive
Whether to perform cummax exclusively. Default is ``False``.
reverse
Whether to perform the cummax from last to first element in the selected
axis. Default is ``False`` (from first to last element)
out
Optional output array, for writing the result to. It must have a shape that the
inputs broadcast to.
Returns
-------
ret
Array which holds the result of applying cummax at each
original array elements along the specified axis.
Examples
--------
With :class:`ivy.Array` input:
>>> x = ivy.array([-86, -19, 41, 88, -5, 80, 32, 87, -90, -12])
>>> y = ivy.cummax(x, exclusive=False, reverse=False)
>>> print(y)
(ivy.array([-86, -19, 41, 88, 88, 88, 88, 88, 88, 88]),
ivy.array([0, 1, 2, 3, 3, 3, 3, 3, 3, 3]))
>>> x = ivy.array([ 14, 15, 49, -24, -39])
>>> y = ivy.cummax(x, axis=0, exclusive=False, reverse=False)
>>> print(y)
(ivy.array([14, 15, 49, 49, 49]), ivy.array([0, 1, 2, 2, 2]))
>>> x = ivy.array([[ 63, 43, -16, -4],[ 21, 82, 59, 33]])
>>> ivy.cummax(x, axis=0, reverse=False, dtype='int64', out=x)
>>> print(x)
ivy.array([[0, 0, 0, 0],
[0, 1, 1, 1]])
>>> x = ivy.array([[-36, 83, -81],
... [ 23, 29, 63],
... [-83, 85, 2],
... [ 31, 25, -86],
... [-10, -52, 0],
... [ 22, 38, 55],
... [ 33, 54, -16]])
>>> y = ivy.cummax(x, axis=1, exclusive=True, reverse=False)
>>> print(y)
(ivy.array([[ 0, 0, 83],
[ 0, 23, 29],
[ 0, 0, 85],
[ 0, 31, 31],
[ 0, 0, 0],
[ 0, 22, 38],
[ 0, 33, 54]]), ivy.array([[0, 0, 2],
[0, 1, 2],
[0, 0, 2],
[0, 1, 1],
[0, 0, 0],
[0, 1, 2],
[0, 1, 2]]))
>>> x = ivy.array([73, 15, 47])
>>> y = ivy.cummax(x, axis=0, reverse=True, exclusive=True)
>>> print(y)
(ivy.array([47, 47, 0]), ivy.array([0, 0, 0]))
>>> x = ivy.array([-47, -14, -67, 15, -23, -45])
>>> y = ivy.cummax(x, axis=0, reverse=True, exclusive=False)
>>> print(y)
(ivy.array([ 15, 15, 15, 15, -23, -45]), ivy.array([2, 2, 2, 2, 1, 0]))
"""
return ivy.current_backend(x).cummax(
x, axis=axis, exclusive=exclusive, reverse=reverse, dtype=dtype, out=out
)
@handle_exceptions
@handle_backend_invalid
@handle_nestable
@handle_array_like_without_promotion
@handle_out_argument
@to_native_arrays_and_back
@handle_array_function
def cummin(
x: Union[ivy.Array, ivy.NativeArray],
/,
*,
axis: int = 0,
exclusive: bool = False,
reverse: bool = False,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype]] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Return the cumulative minimum of the elements along a given axis.
Parameters
----------
x
Input array.
axis
Axis along which the cumulative minimum is computed. Default is ``0``.
reverse
Whether to perform the cummin from last to first element in the selected
axis. Default is ``False`` (from first to last element)
dtype
Data type of the returned array. Default is ``None``.
If None, if the default data type corresponding to the data type "kind"
(integer or floating-point) of x has a smaller range of values than the
data type of x (e.g., x has data type int64 and the default data type
is int32, or x has data type uint64 and the default data type is int64),
the returned array must have the same data type as x.
If x has a floating-point data type, the returned array must have the
default floating-point data type.
If x has a signed integer data type (e.g., int16), the returned array
must have the default integer data type.
If x has an unsigned integer data type (e.g., uint16), the returned
array must have an unsigned integer data type having the same number of
bits as the default integer data type (e.g., if the default integer data
type is int32, the returned array must have a uint32 data type).
If the data type (either specified or resolved) differs from the data type
of x, the input array should be cast to the specified data type before
computing the product.
out
Optional output array, for writing the result to. It must have a shape that the
inputs broadcast to.
Returns
-------
ret
Array which holds the result of applying cummin at each
original array elements along the specified axis.
Examples
--------
With :class:`ivy.Array` input:
>>> x = ivy.array([1, 5, 2, 0])
>>> y = ivy.cummin(x)
>>> print(y)
ivy.array([1, 1, 1, 0])
>>> x = ivy.array([[6, 4, 2],
... [1, 3, 0]])
>>> y = ivy.zeros((2,3))
>>> ivy.cummin(x, axis=0, reverse=True, out=y)
>>> print(y)
ivy.array([[1., 3., 0.],
[1., 3., 0.]])
>>> x = ivy.array([[2, 4, 5],
... [3, 6, 5],
... [1, 3, 10]])
>>> ivy.cummin(x,axis=1,reverse=True, dtype='int64', out=x)
>>> print(x)
ivy.array([[ 2, 4, 5],
[ 3, 5, 5],
[ 1, 3, 10]])
With :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([[1, 3, 5]]),
... b=ivy.array([[3, 5, 7]]))
>>> y = ivy.cummin(x, axis= 0)
>>> print(y)
{
a: ivy.array([[1, 3, 5]]),
b: ivy.array([[3, 5, 7]])
}
>>> x = ivy.Container(a=ivy.array([[1, 3, 4]]),
... b=ivy.array([[3, 5, 8],
... [5, 6, 5]]),
... c=ivy.array([[2, 4, 1],
... [3, 6, 9],
... [0, 2, 3]]))
>>> y = ivy.Container(a = ivy.zeros((1, 3)),
... b = ivy.zeros((2, 3)),
... c = ivy.zeros((3,3)))
>>> ivy.cummin(x,axis=1,reverse=True, out=y)
>>> print(y)
{
a: ivy.array([[1., 3., 4.]]),
b: ivy.array([[3., 5., 8.],
[5., 5., 5.]]),
c: ivy.array([[1., 1., 1.],
[3., 6., 9.],
[0., 2., 3.]])
}
>>> x = ivy.Container(a=ivy.array([[0],[5]]),
... b=ivy.array([[6, 8, 7],
... [4, 2, 3]]),
... c=ivy.array([[1, 2],
... [3, 4],
... [6, 4]]))
>>> ivy.cummin(x,axis=0,out=x)
>>> print(x)
{
a: ivy.array([[0],
[0]]),
b: ivy.array([[6, 8, 7],
[4, 2, 3]]),
c: ivy.array([[1, 2],
[1, 2],
[1, 2]])
}
"""
return ivy.current_backend(x).cummin(
x, axis=axis, exclusive=exclusive, reverse=reverse, dtype=dtype, out=out
)
| ivy/ivy/functional/ivy/experimental/statistical.py/0 | {
"file_path": "ivy/ivy/functional/ivy/experimental/statistical.py",
"repo_id": "ivy",
"token_count": 13918
} | 45 |
# global
from typing import Union, Optional, Sequence
# local
import ivy
from ivy.func_wrapper import (
handle_array_function,
to_native_arrays_and_back,
handle_out_argument,
handle_nestable,
handle_array_like_without_promotion,
handle_backend_invalid,
)
from ivy.utils.exceptions import handle_exceptions
# Array API Standard #
# -------------------#
@handle_exceptions
@handle_backend_invalid
@handle_nestable
@handle_array_like_without_promotion
@handle_out_argument
@to_native_arrays_and_back
@handle_array_function
def all(
x: Union[ivy.Array, ivy.NativeArray],
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Test whether all input array elements evaluate to ``True`` along a
specified axis.
.. note::
Positive infinity, negative infinity, and NaN must evaluate to ``True``.
.. note::
If ``x`` is an empty array or the size of the axis (dimension) along which to
evaluate elements is zero, the test result must be ``True``.
Parameters
----------
x
input array.
axis
axis or axes along which to perform a logical AND reduction. By default, a
logical AND reduction must be performed over the entire array. If a tuple of
integers, logical AND reductions must be performed over multiple axes. A valid
``axis`` must be an integer on the interval ``[-N, N)``, where ``N`` is the rank
(number of dimensions) of ``x``. If an ``axis`` is specified as a negative
integer, the function must determine the axis along which to perform a reduction
by counting backward from the last dimension (where ``-1`` refers to the last
dimension). If provided an invalid ``axis``, the function must raise an
exception. Default ``None``.
keepdims
If ``True``, the reduced axes (dimensions) must be included in the result as
singleton dimensions, and, accordingly, the result must be compatible with the
input array (see :ref:`broadcasting`). Otherwise, if ``False``, the reduced axes
(dimensions) must not be included in the result. Default: ``False``.
out
optional output array, for writing the result to. It must have a shape that the
inputs broadcast to.
Returns
-------
ret
if a logical AND reduction was performed over the entire array, the returned
array must be a zero-dimensional array containing the test result; otherwise,
the returned array must be a non-zero-dimensional array containing the test
results. The returned array must have a data type of ``bool``.
This method conforms to the `Array API Standard
<https://data-apis.org/array-api/latest/>`_. This docstring is an extension of the
`docstring <https://data-apis.org/array-api/latest/
API_specification/generated/array_api.all.html>`_
in the standard.
Both the description and the type hints above assume an array input for
simplicity, but this function is *nestable*, and therefore also accepts
:class:`ivy.Container` instances in place of any of the arguments.
Examples
--------
With :class:`ivy.Array` input:
>>> x = ivy.array([1, 2, 3])
>>> y = ivy.all(x)
>>> print(y)
ivy.array(True)
>>> x = ivy.array([[0],[1]])
>>> y = ivy.zeros((1,1), dtype='bool')
>>> a = ivy.all(x, axis=0, out = y, keepdims=True)
>>> print(a)
ivy.array([[False]])
>>> x = ivy.array(False)
>>> y = ivy.all(ivy.array([[0, 4],[1, 5]]), axis=(0,1), out=x, keepdims=False)
>>> print(y)
ivy.array(False)
>>> x = ivy.array(False)
>>> y = ivy.all(ivy.array([[[0], [1]], [[1], [1]]]), axis=(0,1,2), out=x,
... keepdims=False)
>>> print(y)
ivy.array(False)
With :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([0, 1, 2]), b=ivy.array([3, 4, 5]))
>>> y = ivy.all(x)
>>> print(y)
{
a: ivy.array(False),
b: ivy.array(True)
}
>>> x = ivy.Container(a=ivy.native_array([0, 1, 2]),b=ivy.array([3, 4, 5]))
>>> y = ivy.all(x)
>>> print(y)
{
a: ivy.array(False),
b: ivy.array(True)
}
"""
return ivy.current_backend(x).all(x, axis=axis, keepdims=keepdims, out=out)
@handle_exceptions
@handle_backend_invalid
@handle_nestable
@handle_array_like_without_promotion
@handle_out_argument
@to_native_arrays_and_back
@handle_array_function
def any(
x: Union[ivy.Array, ivy.NativeArray],
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Test whether any input array element evaluates to ``True`` along a
specified axis.
.. note::
Positive infinity, negative infinity, and NaN must evaluate to ``True``.
.. note::
If ``x`` is an empty array or the size of the axis (dimension) along which to
evaluate elements is zero, the test result must be ``False``.
Parameters
----------
x
input array.
axis
axis or axes along which to perform a logical OR reduction. By default, a
logical OR reduction must be performed over the entire array. If a tuple of
integers, logical OR reductions must be performed over multiple axes. A valid
``axis`` must be an integer on the interval ``[-N, N)``, where ``N`` is the rank
(number of dimensions) of ``x``. If an ``axis`` is specified as a negative
integer, the function must determine the axis along which to perform a reduction
by counting backward from the last dimension (where ``-1`` refers to the last
dimension). If provided an invalid ``axis``, the function must raise an
exception. Default: ``None``.
keepdims
If ``True``, the reduced axes (dimensions) must be included in the result as
singleton dimensions, and, accordingly, the result must be compatible with the
input array (see :ref:`broadcasting`). Otherwise, if ``False``, the reduced axes
(dimensions) must not be included in the result. Default: ``False``.
out
optional output array, for writing the result to. It must have a shape that the
inputs broadcast to.
Returns
-------
ret
if a logical OR reduction was performed over the entire array, the returned
array must be a zero-dimensional array containing the test result; otherwise,
the returned array must be a non-zero-dimensional array containing the test
results. The returned array must have a data type of ``bool``.
This method conforms to the `Array API Standard
<https://data-apis.org/array-api/latest/>`_. This docstring is an extension of the
`docstring <https://data-apis.org/array-api/latest/
API_specification/generated/array_api.any.html>`_
in the standard.
Both the description and the type hints above assume an array input for
simplicity, but this function is *nestable*, and therefore also accepts
:class:`ivy.Container` instances in place of any of the arguments.
Examples
--------
With :class:`ivy.Array` input:
>>> x = ivy.array([2, 3, 4])
>>> y = ivy.any(x)
>>> print(y)
ivy.array(True)
>>> x = ivy.array([[0],[1]])
>>> y = ivy.zeros((1,1), dtype='bool')
>>> a = ivy.any(x, axis=0, out = y, keepdims=True)
>>> print(a)
ivy.array([[True]])
>>> x=ivy.array(False)
>>> y=ivy.any(ivy.array([[0, 3],[1, 4]]), axis=(0,1), out=x, keepdims=False)
>>> print(y)
ivy.array(True)
>>> x=ivy.array(False)
>>> y=ivy.any(ivy.array([[[0],[1]],[[1],[1]]]),axis=(0,1,2), out=x, keepdims=False)
>>> print(y)
ivy.array(True)
With :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([0, 1, 2]), b=ivy.array([3, 4, 5]))
>>> y = ivy.any(x)
>>> print(y)
{
a: ivy.array(True),
b: ivy.array(True)
}
"""
return ivy.current_backend(x).any(x, axis=axis, keepdims=keepdims, out=out)
# Extra #
# ----- #
def save(item, filepath, format=None):
if isinstance(item, ivy.Container):
if format is not None:
item.cont_save(filepath, format=format)
else:
item.cont_save(filepath)
elif isinstance(item, ivy.Module):
item.save(filepath)
else:
raise ivy.utils.exceptions.IvyException("Unsupported item type for saving.")
def load(filepath, format=None, type="module"):
if type == "module":
return ivy.Module.load(filepath)
elif type == "container":
if format is not None:
return ivy.Container.cont_load(filepath, format=format)
else:
return ivy.Container.cont_load(filepath)
else:
raise ivy.utils.exceptions.IvyException("Unsupported item type for loading.")
| ivy/ivy/functional/ivy/utility.py/0 | {
"file_path": "ivy/ivy/functional/ivy/utility.py",
"repo_id": "ivy",
"token_count": 3436
} | 46 |
import ast
import os
import sys
import traceback
from ast import parse
from string import Template
from importlib.util import spec_from_file_location
from importlib.abc import Loader, MetaPathFinder
# AST helpers ##################
# TODO add assertion to make sure module path exists
importlib_module_path = "ivy.utils._importlib"
importlib_abs_import_fn = "_absolute_import"
importlib_from_import_fn = "_from_import"
_global_import_template = Template(f"from {importlib_module_path} import $name")
_local_import_template = Template(
"$name = "
"ivy.utils.backend.handler._compiled_backends_ids[$ivy_id].utils._importlib.$name"
)
_unmodified_ivy_path = sys.modules["ivy"].__path__[0].rpartition(os.path.sep)[0]
_compiled_modules_cache = {}
def _retrieve_local_modules():
ret = ["ivy"] # TODO temporary hacky solution for finder
# Get Ivy package root
wd = sys.modules["ivy"].__path__[0]
for entry in os.scandir(wd):
if entry.is_file() and entry.name.endswith(".py"):
ret.append(entry.name[:-3])
continue
if entry.is_dir() and "__init__.py" in os.listdir(f"{wd}/{entry.name}"):
ret.append(entry.name)
return ret
local_modules = _retrieve_local_modules()
def _parse_absolute_fromimport(node: ast.ImportFrom):
# Not to override absolute imports to other packages
if node.module.partition(".")[0] not in local_modules:
return node
to_import = []
for entry in node.names:
to_import.append((entry.name, entry.asname))
# Return a function call
return ast.Expr(
value=ast.Call(
func=ast.Name(id=importlib_from_import_fn, ctx=ast.Load()),
args=[
ast.Constant(value=node.module, kind=None),
ast.Constant(value=None, kind=None),
ast.Call(
func=ast.Name(id="globals", ctx=ast.Load()), args=[], keywords=[]
),
_create_list(to_import),
],
keywords=[],
),
)
def _parse_relative_fromimport(node: ast.ImportFrom):
if node.module is None:
name = ""
else:
name = node.module
to_import = []
for entry in node.names:
to_import.append((entry.name, entry.asname))
# Return a function call
return ast.Expr(
value=ast.Call(
func=ast.Name(id=importlib_from_import_fn, ctx=ast.Load()),
args=[
ast.Constant(value=name, kind=None),
ast.Name(id="__package__", ctx=ast.Load()),
ast.Call(
func=ast.Name(id="globals", ctx=ast.Load()), args=[], keywords=[]
),
_create_list(to_import),
ast.Constant(value=node.level, kind=None),
],
keywords=[],
),
)
def _create_list(elements):
_elts = [ast.Constant(value=element, kind=None) for element in elements]
return ast.List(elts=_elts, ctx=ast.Load())
def _create_assign_to_variable(target, value):
return ast.Assign(
targets=[ast.Name(id=target, ctx=ast.Store())],
value=value,
)
def _create_fromimport_call(name):
return ast.Call(
func=ast.Name(id=importlib_from_import_fn, ctx=ast.Load()),
args=[
ast.Constant(value=name, kind=None),
],
keywords=[],
)
def _parse_import(node: ast.Import):
_local_modules = []
# We don't want to override imports for outside packages
for entry in node.names.copy():
if entry.name.partition(".")[0] in local_modules:
node.names.remove(entry)
_local_modules.append(entry)
return_nodes = []
# Not to include empty import
if len(node.names) > 0:
return_nodes.append(node)
for node in _local_modules:
return_nodes.append(
ast.Expr(
ast.Call(
func=ast.Name(id=importlib_abs_import_fn, ctx=ast.Load()),
args=[
ast.Constant(value=node.name, kind=None),
ast.Constant(value=node.asname, kind=None),
ast.Call(
func=ast.Name(id="globals", ctx=ast.Load()),
args=[],
keywords=[],
),
],
keywords=[],
)
)
)
return return_nodes, len(_local_modules) > 0
def _create_attrs_from_node(node, attrs=()):
# Attrs must be in order
last_node = node
for attr in attrs:
last_node = ast.Attribute(
value=last_node,
attr=attr,
ctx=ast.Load(),
)
return last_node
def _create_node(stmnt: str):
"""Create an AST node from a given statement.
Parameters
----------
stmnt
The statement to be parsed and represented as an AST node.
Returns
-------
The resulting AST node representing the given statement.
"""
return ast.parse(stmnt).body[0]
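# Minimal illustrative sketch: _create_node parses a one-line statement into
# a single AST node that can be spliced into a module body.
def _example_create_node():
    node = _create_node("import ivy")
    assert isinstance(node, ast.Import)
    return node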
# End AST helpers ##############
class ImportTransformer(ast.NodeTransformer):
def __init__(self):
self.insert_index = 0 # TODO hacky solution for __future__
self.include_ivy_import = False
def visit_Import(self, node):
ret, should_impersonate = _parse_import(node)
if should_impersonate and not self.include_ivy_import:
self.include_ivy_import = True
return ret
def visit_ImportFrom(self, node):
self.include_ivy_import = True
if node.level == 0:
if node.module is not None and node.module == "__future__":
self.insert_index = 1
return _parse_absolute_fromimport(node)
else:
return _parse_relative_fromimport(node)
def impersonate_import(self, tree: ast.Module, local_ivy_id=None):
if not self.include_ivy_import:
return tree
        # Convenience function that parses an import statement and inserts the
        # resulting AST node into the tree body
def insert_import(node):
return tree.body.insert(self.insert_index, _create_node(node))
if local_ivy_id is None:
insert_import(
_global_import_template.substitute(name=importlib_abs_import_fn)
)
insert_import(
_global_import_template.substitute(name=importlib_from_import_fn)
)
else:
insert_import(
_local_import_template.substitute(
name=importlib_abs_import_fn, ivy_id=local_ivy_id
)
)
insert_import(
_local_import_template.substitute(
name=importlib_from_import_fn, ivy_id=local_ivy_id
)
)
insert_import("import ivy")
return tree
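# Illustrative sketch (assumed source snippet): visiting a module that imports
# from ivy rewrites the import into a _from_import call and then prepends the
# helper imports, mirroring what IvyLoader.exec_module does below.
# ast.unparse requires Python 3.9+.
def _example_transform_source():
    tree = ast.parse("from ivy import functional")
    transformer = ImportTransformer()
    transformer.visit(tree)
    transformer.impersonate_import(tree)
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)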
class IvyPathFinder(MetaPathFinder):
def find_spec(self, fullname, path, target=None):
if fullname.partition(".")[0] not in local_modules:
return None
# We're local
if path is None or path == "":
path = [_unmodified_ivy_path]
if "." in fullname:
*_, name = fullname.split(".")
else:
name = fullname
for entry in path:
if os.path.isdir(os.path.join(entry, name)):
# this module has child modules
filename = os.path.join(entry, name, "__init__.py")
submodule_locations = [os.path.join(entry, name)]
else:
filename = os.path.join(entry, f"{name}.py")
submodule_locations = None
if not os.path.exists(filename):
continue
return spec_from_file_location(
fullname,
filename,
loader=IvyLoader(filename),
submodule_search_locations=submodule_locations,
)
return None
class IvyLoader(Loader):
def __init__(self, filename):
self.filename = filename
def exec_module(self, module, local_ivy_id=None):
if self.filename in _compiled_modules_cache:
compiled_obj = _compiled_modules_cache[self.filename]
else:
# enforce UTF-8 for compiling when installed as a package
# according to PEP 686
with open(self.filename, encoding="utf-8") as f:
data = f.read()
ast_tree = parse(data)
transformer = ImportTransformer()
transformer.visit(ast_tree)
transformer.impersonate_import(ast_tree, local_ivy_id)
ast.fix_missing_locations(ast_tree)
compiled_obj = compile(ast_tree, filename=self.filename, mode="exec")
_compiled_modules_cache[self.filename] = compiled_obj
try:
exec(compiled_obj, module.__dict__)
except Exception as e:
print(e)
traceback.print_exc()
raise e
| ivy/ivy/utils/backend/ast_helpers.py/0 | {
"file_path": "ivy/ivy/utils/backend/ast_helpers.py",
"repo_id": "ivy",
"token_count": 4367
} | 47 |
# global
from hypothesis import assume, strategies as st
from typing import Tuple
from functools import lru_cache
import math
import numpy as np
# local
import ivy
from . import array_helpers, number_helpers, dtype_helpers
from ..pipeline_helper import WithBackendContext
from ivy.functional.ivy.layers import _deconv_length
from ..globals import mod_backend
def matrix_is_stable(x, cond_limit=30):
"""Check if a matrix is numerically stable or not.
Used to avoid numerical instabilities in further computationally heavy calculations.
Parameters
----------
x
The original matrix whose condition number is to be determined.
    cond_limit
        The greater the condition number, the more ill-conditioned the matrix
        is and the more prone it becomes to numerical instabilities.
        There is no strict rule of thumb for the exact condition number at
        which a matrix should be considered ill-conditioned (prone to
        numerical errors), but a condition number of 1 describes a perfectly
        well-conditioned matrix that will not cause numerical instabilities
        in further calculations; such a matrix would, however, be very simple.
        cond_limit should start at 30 and be gradually decreased to suit the
        use case; a lower cond_limit yields more numerically stable, but
        simpler, matrices.
        The limit should always lie in the range 1-30: the greater the number,
        the greater the computational instability. It should not exceed 30,
        as that leads to strong multi-collinearity, which in turn leads to
        singularity.
Returns
-------
ret
If True, the matrix is suitable for further numerical computations.
"""
return np.all(np.linalg.cond(x.astype("float64")) <= cond_limit)
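# Illustrative sketch with assumed inputs: the identity matrix has condition
# number 1 and passes the check, while a nearly singular matrix has a huge
# condition number and fails it.
def _example_matrix_is_stable():
    assert matrix_is_stable(np.eye(3))
    assert not matrix_is_stable(np.array([[1.0, 1.0], [1.0, 1.0 + 1e-12]]))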
@lru_cache(None)
def apply_safety_factor(
dtype,
*,
backend: str,
min_value=None,
max_value=None,
abs_smallest_val=None,
small_abs_safety_factor=1.1,
large_abs_safety_factor=1.1,
safety_factor_scale="linear",
):
"""Apply safety factor scaling to numeric data type.
Parameters
----------
dtype
the data type to apply safety factor scaling to.
min_value
the minimum value of the data type.
max_value
the maximum value of the data type.
abs_smallest_val
the absolute smallest representable value of the data type.
large_abs_safety_factor
the safety factor to apply to the maximum value.
small_abs_safety_factor
the safety factor to apply to the minimum value.
safety_factor_scale
the scale to apply the safety factor to, either 'linear' or 'log'.
Returns
-------
    A tuple of the minimum value, maximum value, and absolute smallest
    representable value after safety factor scaling.
"""
assert small_abs_safety_factor >= 1, "small_abs_safety_factor must be >= 1"
    assert large_abs_safety_factor >= 1, "large_abs_safety_factor must be >= 1"
if "float" in dtype or "complex" in dtype:
kind_dtype = "float"
if mod_backend[backend]:
proc, input_queue, output_queue = mod_backend[backend]
input_queue.put(("dtype_info_helper", backend, kind_dtype, dtype))
dtype_info = output_queue.get()
else:
dtype_info = general_helpers_dtype_info_helper(
backend=backend, kind_dtype=kind_dtype, dtype=dtype
)
elif "int" in dtype:
kind_dtype = "int"
if mod_backend[backend]:
proc, input_queue, output_queue = mod_backend[backend]
input_queue.put(("dtype_info_helper", backend, kind_dtype, dtype))
dtype_info = output_queue.get()
else:
dtype_info = general_helpers_dtype_info_helper(
backend=backend, kind_dtype=kind_dtype, dtype=dtype
)
else:
        raise TypeError(
            f"{dtype} is not a valid numeric data type; only integers and"
            " floats are supported"
        )
if min_value is None:
# 0th index is max, 1st is min, 2nd is smallest_normal
min_value = dtype_info[1]
if max_value is None:
max_value = dtype_info[0]
if safety_factor_scale == "linear":
min_value = min_value / large_abs_safety_factor
max_value = max_value / large_abs_safety_factor
if kind_dtype == "float" and not abs_smallest_val:
abs_smallest_val = dtype_info[2] * small_abs_safety_factor
elif safety_factor_scale == "log":
min_sign = math.copysign(1, min_value)
min_value = abs(min_value) ** (1 / large_abs_safety_factor) * min_sign
max_sign = math.copysign(1, max_value)
max_value = abs(max_value) ** (1 / large_abs_safety_factor) * max_sign
if kind_dtype == "float" and not abs_smallest_val:
m, e = math.frexp(dtype_info[2])
abs_smallest_val = m * (2 ** (e / small_abs_safety_factor))
else:
        raise ValueError(
            f"{safety_factor_scale} is not a valid safety factor scale."
            " Use 'log' or 'linear'."
        )
if kind_dtype == "int":
return int(min_value), int(max_value), None
return min_value, max_value, abs_smallest_val
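# Sketch of the "linear" scaling rule above with assumed example numbers: a
# dtype range of [-100, 100] and large_abs_safety_factor=2 shrinks the usable
# range to [-50, 50].
def _example_linear_safety_scaling():
    min_value, max_value = -100.0, 100.0
    large_abs_safety_factor = 2.0
    return (
        min_value / large_abs_safety_factor,
        max_value / large_abs_safety_factor,
    )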
def general_helpers_dtype_info_helper(backend, kind_dtype, dtype):
with WithBackendContext(backend) as ivy_backend:
if kind_dtype == "float":
return (
ivy_backend.finfo(dtype).max,
ivy_backend.finfo(dtype).min,
getattr(ivy_backend.finfo(dtype), "smallest_normal", None),
)
elif kind_dtype == "int":
return (
ivy_backend.iinfo(dtype).max,
ivy_backend.iinfo(dtype).min,
getattr(ivy_backend.iinfo(dtype), "smallest_normal", None),
)
# Hypothesis #
# -----------#
# from array-api repo
class BroadcastError(ValueError):
"""Shapes do not broadcast with each other."""
# from array-api repo
def _broadcast_shapes(
shape1: Tuple[int, ...], shape2: Tuple[int, ...]
) -> Tuple[int, ...]:
N1 = len(shape1)
N2 = len(shape2)
N = max(N1, N2)
shape = [None for _ in range(N)]
i = N - 1
while i >= 0:
n1 = N1 - N + i
if N1 - N + i >= 0:
d1 = shape1[n1]
else:
d1 = 1
n2 = N2 - N + i
if N2 - N + i >= 0:
d2 = shape2[n2]
else:
d2 = 1
if d1 == 1:
shape[i] = d2
elif d2 == 1:
shape[i] = d1
elif d1 == d2:
shape[i] = d1
else:
raise BroadcastError()
i = i - 1
return tuple(shape)
# from array-api repo
def broadcast_shapes(*shapes: Tuple[int, ...]):
if len(shapes) == 0:
raise ValueError("shapes=[] must be non-empty")
elif len(shapes) == 1:
return shapes[0]
result = _broadcast_shapes(shapes[0], shapes[1])
for i in range(2, len(shapes)):
result = _broadcast_shapes(result, shapes[i])
return result
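# Illustrative checks of the broadcasting rules implemented above: size-1
# dimensions stretch to match, and mismatched non-1 dimensions raise
# BroadcastError.
def _example_broadcast_shapes():
    assert broadcast_shapes((3, 1), (1, 4)) == (3, 4)
    assert broadcast_shapes((2, 3, 4), (3, 4), (4,)) == (2, 3, 4)
    try:
        broadcast_shapes((2, 3), (4, 3))
    except BroadcastError:
        pass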
# from array-api repo
@st.composite
def two_broadcastable_shapes(draw):
shape1, shape2 = draw(array_helpers.mutually_broadcastable_shapes(2))
assume(broadcast_shapes(shape1, shape2) == shape1)
return (shape1, shape2)
# taken from
# https://github.com/data-apis/array-api-tests/array_api_tests/test_manipulation_functions.py
@st.composite
def reshape_shapes(draw, *, shape):
"""Draws a random shape with the same number of elements as the given
shape.
Parameters
----------
draw
special function that draws data randomly (but is reproducible) from a given
data-set (ex. list).
shape
list/strategy/tuple of integers representing an array shape.
Returns
-------
A strategy that draws a tuple.
"""
if isinstance(shape, st._internal.SearchStrategy):
shape = draw(shape)
size = 1 if len(shape) == 0 else math.prod(shape)
rshape = draw(
st.lists(number_helpers.ints(min_value=0)).filter(
lambda s: math.prod(s) == size
)
)
return tuple(rshape)
# taken from https://github.com/HypothesisWorks/hypothesis/issues/1115
@st.composite
def subsets(draw, *, elements):
"""Draws a subset of elements from the given elements.
Parameters
----------
draw
special function that draws data randomly (but is reproducible) from a given
data-set (ex. list).
elements
set of elements to be drawn from.
Returns
-------
A strategy that draws a subset of elements.
"""
return tuple(e for e in elements if draw(st.booleans()))
@st.composite
def get_shape(
draw,
*,
allow_none=False,
min_num_dims=0,
max_num_dims=5,
min_dim_size=1,
max_dim_size=10,
):
"""Draws a tuple of integers drawn randomly from [min_dim_size,
max_dim_size] of size drawn from min_num_dims to max_num_dims. Useful for
randomly drawing the shape of an array.
Parameters
----------
draw
special function that draws data randomly (but is reproducible) from a given
data-set (ex. list).
allow_none
if True, allow for the result to be None.
min_num_dims
minimum size of the tuple.
max_num_dims
maximum size of the tuple.
min_dim_size
minimum value of each integer in the tuple.
max_dim_size
maximum value of each integer in the tuple.
Returns
-------
A strategy that draws a tuple.
"""
if allow_none:
shape = draw(
st.none()
| st.lists(
number_helpers.ints(min_value=min_dim_size, max_value=max_dim_size),
min_size=min_num_dims,
max_size=max_num_dims,
)
)
else:
shape = draw(
st.lists(
number_helpers.ints(min_value=min_dim_size, max_value=max_dim_size),
min_size=min_num_dims,
max_size=max_num_dims,
)
)
if shape is None:
return shape
return tuple(shape)
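# Hedged usage sketch (assumed bounds): composing get_shape inside another
# composite strategy, here to draw a random square shape.
@st.composite
def _example_square_shape(draw):
    side = draw(get_shape(min_num_dims=1, max_num_dims=1, max_dim_size=4))[0]
    return (side, side)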
@st.composite
def get_mean_std(draw, *, dtype):
"""Draws two integers representing the mean and standard deviation for a
given data type.
Parameters
----------
draw
special function that draws data randomly (but is reproducible) from a given
data-set (ex. list).
dtype
data type.
Returns
-------
A strategy that can be used in the @given hypothesis decorator.
"""
    none_or_float = number_helpers.floats(dtype=dtype) | st.none()
values = draw(array_helpers.list_of_size(x=none_or_float, size=2))
values[1] = abs(values[1]) if values[1] else None
return values[0], values[1]
@st.composite
def get_bounds(draw, *, dtype):
"""Draws two numbers; low and high, for a given data type such that low <
high.
Parameters
----------
draw
special function that draws data randomly (but is reproducible) from a given
data-set (ex. list).
dtype
data type.
Returns
-------
A strategy that draws a list of two numbers.
"""
if "int" in dtype:
values = draw(array_helpers.array_values(dtype=dtype, shape=2))
values[0], values[1] = abs(values[0]), abs(values[1])
low, high = min(values), max(values)
if low == high:
return draw(get_bounds(dtype=dtype))
else:
none_or_float = number_helpers.floats(dtype=dtype) | st.none()
values = draw(array_helpers.list_of_size(x=none_or_float, size=2))
if values[0] is not None and values[1] is not None:
low, high = min(values), max(values)
else:
low, high = values[0], values[1]
if ivy.default(low, 0.0) >= ivy.default(high, 1.0):
return draw(get_bounds(dtype=dtype))
return [low, high]
@st.composite
def get_axis(
draw,
*,
shape,
allow_neg=True,
allow_none=False,
sort_values=True,
unique=True,
min_size=1,
max_size=None,
force_tuple=False,
force_int=False,
):
"""Draws one or more axis for the given shape.
Parameters
----------
draw
special function that draws data randomly (but is reproducible) from a given
data-set (ex. list).
shape
shape of the array as a tuple, or a hypothesis strategy from which the shape
will be drawn
allow_neg
boolean; if True, allow negative axes to be drawn
allow_none
boolean; if True, allow None to be drawn
sort_values
boolean; if True, and a tuple of axes is drawn, tuple is sorted in increasing
fashion
unique
boolean; if True, and a tuple of axes is drawn, all axes drawn will be unique
min_size
int or hypothesis strategy; if a tuple of axes is drawn, the minimum number of
axes drawn
max_size
int or hypothesis strategy; if a tuple of axes is drawn, the maximum number of
axes drawn.
If None and unique is True, then it is set to the number of axes in the shape
    force_tuple
        boolean, if True, the axes will always be returned as a tuple. If both
        force_tuple and force_int are True, an AssertionError is raised
    force_int
        boolean, if True, the axis will always be returned as an int. If both
        force_tuple and force_int are True, an AssertionError is raised
Returns
-------
A strategy that draws an axis or axes.
"""
assert not (
force_int and force_tuple
), "Cannot return an int and a tuple. If both are valid then set both to False."
# Draw values from any strategies given
if isinstance(shape, st._internal.SearchStrategy):
shape = draw(shape)
if isinstance(min_size, st._internal.SearchStrategy):
min_size = draw(min_size)
if isinstance(max_size, st._internal.SearchStrategy):
max_size = draw(max_size)
axes = len(shape)
lower_axes_bound = axes if allow_neg else 0
if max_size is None and unique:
max_size = max(axes, min_size)
valid_strategies = []
if allow_none:
valid_strategies.append(st.none())
if min_size > 1:
force_tuple = True
if not force_tuple:
if axes == 0:
valid_strategies.append(st.just(0))
else:
valid_strategies.append(st.integers(-lower_axes_bound, axes - 1))
if not force_int:
if axes == 0:
valid_strategies.append(
st.lists(st.just(0), min_size=min_size, max_size=max_size)
)
else:
valid_strategies.append(
st.lists(
st.integers(-lower_axes_bound, axes - 1),
min_size=min_size,
max_size=max_size,
unique=unique,
)
)
axis = draw(
st.one_of(*valid_strategies).filter(
lambda x: (
all(i != axes + j for i in x for j in x)
if (isinstance(x, list) and unique and allow_neg)
else True
)
)
)
if isinstance(axis, list):
if sort_values:
def sort_key(ele, max_len):
if ele < 0:
return ele + max_len
return ele
axis.sort(key=(lambda ele: sort_key(ele, axes)))
axis = tuple(axis)
return axis
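# Usage sketch (assumed parameters): draw a single non-negative integer axis
# for a fixed 3-D shape, the way other helpers in this file consume get_axis.
@st.composite
def _example_single_axis(draw):
    return draw(get_axis(shape=(2, 3, 4), allow_neg=False, force_int=True))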
@st.composite
def x_and_filters(
draw,
dim: int = 2,
transpose: bool = False,
depthwise=False,
mixed_fn_compos=True,
):
"""Draws a random x and filters for a convolution.
Parameters
----------
draw
special function that draws data randomly (but is reproducible) from a given
data-set (ex. list).
dim
the dimension of the convolution
transpose
if True, draw a transpose convolution
depthwise
if True, draw a depthwise convolution
Returns
-------
A strategy that draws a random x and filters for a convolution.
"""
strides = draw(st.integers(min_value=1, max_value=2))
padding = draw(st.sampled_from(["SAME", "VALID"]))
batch_size = draw(st.integers(1, 5))
filter_shape = draw(
get_shape(min_num_dims=dim, max_num_dims=dim, min_dim_size=1, max_dim_size=5)
)
input_channels = draw(st.integers(1, 5))
output_channels = draw(st.integers(1, 5))
dilations = draw(st.integers(1, 2))
dtype = draw(
dtype_helpers.get_dtypes("float", mixed_fn_compos=mixed_fn_compos, full=False)
)
if dim == 2:
data_format = draw(st.sampled_from(["NCHW"]))
elif dim == 1:
data_format = draw(st.sampled_from(["NWC", "NCW"]))
else:
data_format = draw(st.sampled_from(["NDHWC", "NCDHW"]))
x_dim = []
if transpose:
output_shape = []
x_dim = draw(
get_shape(
min_num_dims=dim, max_num_dims=dim, min_dim_size=1, max_dim_size=20
)
)
for i in range(dim):
output_shape.append(
_deconv_length(x_dim[i], strides, filter_shape[i], padding, dilations)
)
else:
for i in range(dim):
min_x = filter_shape[i] + (filter_shape[i] - 1) * (dilations - 1)
x_dim.append(draw(st.integers(min_x, 100)))
x_dim = tuple(x_dim)
if not depthwise:
filter_shape = filter_shape + (input_channels, output_channels)
else:
filter_shape = filter_shape + (input_channels,)
if data_format in ["NHWC", "NWC", "NDHWC"]:
x_shape = (batch_size,) + x_dim + (input_channels,)
else:
x_shape = (batch_size, input_channels) + x_dim
vals = draw(
array_helpers.array_values(
shape=x_shape,
dtype=dtype[0],
large_abs_safety_factor=3,
small_abs_safety_factor=4,
safety_factor_scale="log",
)
)
filters = draw(
array_helpers.array_values(
shape=filter_shape,
dtype=dtype[0],
large_abs_safety_factor=3,
small_abs_safety_factor=4,
safety_factor_scale="log",
)
)
if transpose:
return (
dtype,
vals,
filters,
dilations,
data_format,
strides,
padding,
output_shape,
)
return dtype, vals, filters, dilations, data_format, strides, padding
@st.composite
def embedding_helper(draw, mixed_fn_compos=True):
"""Obtain weights for embeddings, the corresponding indices, the padding
indices.
Parameters
----------
draw
special function that draws data randomly (but is reproducible) from a given
data-set (ex. list).
Returns
-------
    A strategy for generating a tuple of (dtypes, indices, weights, padding_idx)
"""
dtype_weight, weight = draw(
array_helpers.dtype_and_values(
available_dtypes=[
x
for x in draw(
dtype_helpers.get_dtypes("numeric", mixed_fn_compos=mixed_fn_compos)
)
if "float" in x or "complex" in x
],
min_num_dims=2,
max_num_dims=2,
min_dim_size=1,
min_value=-1e04,
max_value=1e04,
)
)
num_embeddings, embedding_dim = weight[0].shape
dtype_indices, indices = draw(
array_helpers.dtype_and_values(
available_dtypes=["int32", "int64"],
min_num_dims=2,
min_dim_size=1,
min_value=0,
max_value=num_embeddings - 1,
).filter(lambda x: x[1][0].shape[-1] == embedding_dim)
)
padding_idx = draw(st.integers(min_value=0, max_value=num_embeddings - 1))
return dtype_indices + dtype_weight, indices[0], weight[0], padding_idx
def sizes_(shape, axis):
def factorization(n):
factors = [1]
def get_factor(n):
x_fixed = 2
cycle_size = 2
x = 2
factor = 1 if n % 2 else 2
while factor == 1:
for count in range(cycle_size):
if factor > 1:
break
x = (x * x + 1) % n
factor = math.gcd(x - x_fixed, n)
cycle_size *= 2
x_fixed = x
return factor
        while n > 1:
            next_factor = get_factor(n)
            factors.append(next_factor)
            n //= next_factor
if len(factors) > 1:
factors.remove(1)
return factors
    factors = tuple(factorization(shape[axis]))
    shape_ = factors if factors else shape
return shape_
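# Illustrative sketch: sizes_ factorises the dimension at `axis`; e.g. for
# shape (12, 5) and axis 0 it returns the prime factors of 12, i.e. (2, 2, 3).
def _example_sizes():
    return sizes_((12, 5), 0)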
@st.composite
def dims_and_offset(draw, shape, ensure_dim_unique=False):
shape_actual = draw(shape)
dim1 = draw(get_axis(shape=shape, force_int=True))
dim2 = draw(get_axis(shape=shape, force_int=True))
if ensure_dim_unique:
while dim1 == dim2:
dim2 = draw(get_axis(shape=shape, force_int=True))
offset = draw(
st.integers(min_value=-shape_actual[dim1], max_value=shape_actual[dim1])
)
return dim1, dim2, offset
| ivy/ivy_tests/test_ivy/helpers/hypothesis_helpers/general_helpers.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/helpers/hypothesis_helpers/general_helpers.py",
"repo_id": "ivy",
"token_count": 9687
} | 48 |
# global
import sys
import numpy as np
from hypothesis import strategies as st
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import assert_all_close
from ivy_tests.test_ivy.helpers import handle_frontend_test, BackendHandler
# cholesky
@handle_frontend_test(
fn_tree="jax.lax.linalg.cholesky",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_value=0,
max_value=10,
shape=helpers.ints(min_value=2, max_value=5).map(lambda x: (x, x)),
).filter(
lambda x: "float16" not in x[0]
and "bfloat16" not in x[0]
and np.linalg.cond(x[1][0]) < 1 / sys.float_info.epsilon
and np.linalg.det(np.asarray(x[1][0])) != 0
),
symmetrize_input=st.booleans(),
test_with_out=st.just(False),
)
def test_jax_cholesky(
*,
dtype_and_x,
symmetrize_input,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
dtype, x = dtype_and_x
x = np.asarray(x[0], dtype=dtype[0])
# make symmetric positive-definite beforehand
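    # (x.T @ x is symmetric positive semi-definite, and adding a small
    # multiple of the identity pushes its eigenvalues away from zero, so the
    # Cholesky factorisation is well defined.)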
x = np.matmul(x.T, x) + np.identity(x.shape[0]) * 1e-3
fw_ret, gt_ret = helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
rtol=1e-02,
x=x,
symmetrize_input=symmetrize_input,
test_values=False,
)
    # ToDo: turn value test on when jax cholesky is fixed in issue
    # https://github.com/google/jax/issues/16185
helpers.assertions.assert_same_type_and_shape([fw_ret, gt_ret])
# eigh
@handle_frontend_test(
fn_tree="jax.lax.linalg.eigh",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_value=0,
max_value=10,
shape=helpers.ints(min_value=2, max_value=5).map(lambda x: (x, x)),
).filter(
lambda x: "float16" not in x[0]
and "bfloat16" not in x[0]
and np.linalg.cond(x[1][0]) < 1 / sys.float_info.epsilon
and np.linalg.det(np.asarray(x[1][0])) != 0
),
lower=st.booleans(),
symmetrize_input=st.booleans(),
test_with_out=st.just(False),
)
def test_jax_eigh(
*,
dtype_and_x,
lower,
symmetrize_input,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
dtype, x = dtype_and_x
x = np.array(x[0], dtype=dtype[0])
# make symmetric positive-definite beforehand
x = np.matmul(x.T, x) + np.identity(x.shape[0]) * 1e-3
ret, frontend_ret = helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
x=x,
lower=lower,
symmetrize_input=symmetrize_input,
)
with BackendHandler.update_backend(backend_fw) as ivy_backend:
ret = [ivy_backend.to_numpy(x) for x in ret]
frontend_ret = [np.asarray(x) for x in frontend_ret]
L, Q = ret
frontend_Q, frontend_L = frontend_ret
assert_all_close(
ret_np=Q @ np.diag(L) @ Q.T,
ret_from_gt_np=frontend_Q @ np.diag(frontend_L) @ frontend_Q.T,
atol=1e-2,
backend=backend_fw,
ground_truth_backend=frontend,
)
# qr
@handle_frontend_test(
fn_tree="jax.lax.linalg.qr",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float", index=1),
min_num_dims=3,
max_num_dims=5,
min_dim_size=2,
max_dim_size=5,
min_value=2,
max_value=5,
),
mode=st.sampled_from((True, False)),
test_with_out=st.just(False),
)
def test_jax_qr(
*,
dtype_and_x,
mode,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
dtype, x = dtype_and_x
ret, frontend_ret = helpers.test_frontend_function(
input_dtypes=dtype,
test_values=False,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
x=np.asarray(x[0], dtype[0]),
full_matrices=mode,
)
# svd
@handle_frontend_test(
fn_tree="jax.lax.linalg.svd",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_value=0,
max_value=10,
shape=helpers.ints(min_value=2, max_value=5).map(lambda x: (x, x)),
).filter(
lambda x: "float16" not in x[0]
and "bfloat16" not in x[0]
and np.linalg.cond(x[1][0]) < 1 / sys.float_info.epsilon
and np.linalg.det(np.asarray(x[1][0])) != 0
),
full_matrices=st.booleans(),
compute_uv=st.booleans(),
test_with_out=st.just(False),
)
def test_jax_svd(
*,
dtype_and_x,
full_matrices,
compute_uv,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
dtype, x = dtype_and_x
x = np.asarray(x[0], dtype=dtype[0])
# make symmetric positive-definite beforehand
x = np.matmul(x.T, x) + np.identity(x.shape[0]) * 1e-3
ret, frontend_ret = helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
x=x,
full_matrices=full_matrices,
compute_uv=compute_uv,
)
if compute_uv:
with BackendHandler.update_backend(backend_fw) as ivy_backend:
ret = [ivy_backend.to_numpy(x) for x in ret]
frontend_ret = [np.asarray(x) for x in frontend_ret]
u, s, vh = ret
frontend_u, frontend_s, frontend_vh = frontend_ret
assert_all_close(
ret_np=u @ np.diag(s) @ vh,
ret_from_gt_np=frontend_u @ np.diag(frontend_s) @ frontend_vh,
rtol=1e-2,
atol=1e-2,
backend=backend_fw,
ground_truth_backend=frontend,
)
else:
with BackendHandler.update_backend(backend_fw) as ivy_backend:
ret = ivy_backend.to_numpy(ret)
assert_all_close(
ret_np=ret,
ret_from_gt_np=np.asarray(frontend_ret[0]),
rtol=1e-2,
atol=1e-2,
backend=backend_fw,
ground_truth_backend=frontend,
)
| ivy/ivy_tests/test_ivy/test_frontends/test_jax/test_lax/test_linalg.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_jax/test_lax/test_linalg.py",
"repo_id": "ivy",
"token_count": 3395
} | 49 |
# global
import pytest
from hypothesis import strategies as st
import ivy
import numpy as np
import sys
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_test
# --- Helpers --- #
# --------------- #
@st.composite
def _all_gamma_params(draw):
shape = draw(
helpers.get_shape(
min_dim_size=1, max_dim_size=5, min_num_dims=2, max_num_dims=2
)
| st.just(None)
)
if shape is None:
a = draw(
helpers.array_values(
min_value=0.0,
max_value=100.0,
dtype=helpers.get_dtypes("float", full=False),
exclude_min=True,
shape=helpers.get_shape(
min_dim_size=1, max_dim_size=5, min_num_dims=1, max_num_dims=2
),
)
)
return a[0], shape
a = draw(st.floats(min_value=0, max_value=5, exclude_min=True))
return a, shape
# ToDo: find a solution for torch and paddle not running with uint32 and
# uint64, then remove the xfail fixture
@st.composite
def _get_minval_maxval(draw):
interval = draw(st.integers(min_value=1, max_value=50))
minval = draw(st.floats(min_value=-100, max_value=100))
    maxval = draw(
        st.floats(min_value=minval + interval, max_value=minval + interval + 100)
    )
return minval, maxval
@st.composite
def dtype_p_shape(draw):
dtype = draw(helpers.array_dtypes(available_dtypes=("float32", "float64")))
shape = draw(helpers.get_shape(allow_none=False, min_num_dims=1, max_num_dims=3))
dtype_and_probs = draw(
helpers.dtype_and_values(
available_dtypes=dtype, min_value=0, max_value=1, shape=shape
)
)
return dtype_and_probs, shape
@st.composite
def get_mean_cov_vector(draw):
input_dtype = draw(
st.shared(
st.sampled_from(draw(helpers.get_dtypes("float"))),
key="shared_dtype",
)
)
shared_size = draw(
st.shared(helpers.ints(min_value=2, max_value=4), key="shared_size")
)
# Generate shape for mean vector (..., n)
dtype_mean = draw(
helpers.array_values(
dtype=input_dtype,
shape=(shared_size,),
min_value=2,
max_value=5,
)
)
# Generate shape for covariance matrix (..., n, n)
dtype_cov = draw(
helpers.array_values(
dtype=input_dtype,
shape=(shared_size, shared_size),
min_value=2,
max_value=5,
).filter(lambda x: np.linalg.cond(x.tolist()) < 1 / sys.float_info.epsilon)
)
batch_shape = dtype_cov.shape[:-2]
return input_dtype, dtype_mean, dtype_cov, batch_shape
@st.composite
def get_shape_and_arrays(draw):
b_shapes = draw(
helpers.array_and_broadcastable_shape(dtype=helpers.get_dtypes("float"))
)
b, shapes = b_shapes
shapes = draw(st.sampled_from([None, shapes]))
return b, shapes
# --- Main --- #
# ------------ #
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.ball",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
shape=helpers.get_shape(
min_num_dims=1, max_num_dims=6, min_dim_size=1, max_dim_size=6
),
dtype=helpers.get_dtypes("float", full=False),
d=st.integers(min_value=1, max_value=100),
p=st.floats(min_value=1e-5, max_value=100, exclude_min=True),
test_with_out=st.just(False),
)
def test_jax_ball(
*,
dtype_key,
d,
p,
shape,
dtype,
on_device,
fn_tree,
frontend,
backend_fw,
test_flags,
):
input_dtype, key = dtype_key
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
d=d,
p=p,
shape=shape,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.bernoulli",
dtype_key=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("integer", full=False),
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
dtype_p_shape_=dtype_p_shape(),
)
def test_jax_bernoulli(
*,
dtype_key,
dtype_p_shape_,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
dtype_p, shape = dtype_p_shape_
dtype, p = dtype_p
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype + dtype,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
backend_to_test=backend_fw,
on_device=on_device,
test_values=False,
key=key[0],
p=p[0],
shape=shape,
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.beta",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
alpha=st.floats(min_value=0, max_value=5, exclude_min=True),
beta=st.floats(min_value=0, max_value=5, exclude_min=True),
shape=helpers.get_shape(
min_num_dims=2, max_num_dims=2, min_dim_size=1, max_dim_size=5
),
dtype=helpers.get_dtypes("float", full=False),
test_with_out=st.just(False),
)
def test_jax_beta(
*,
dtype_key,
alpha,
beta,
shape,
dtype,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
a=alpha,
b=beta,
shape=shape,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.categorical",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
shape=helpers.get_shape(
min_dim_size=1, max_num_dims=6, max_dim_size=6, min_num_dims=1, allow_none=False
),
dtype=helpers.get_dtypes("float", full=False),
)
def test_jax_categorical(
*,
dtype_key,
shape,
dtype,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
shape=shape,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.cauchy",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
shape=helpers.get_shape(),
dtype=helpers.get_dtypes("float", full=False),
)
def test_jax_cauchy(
*,
dtype_key,
shape,
dtype,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
shape=shape,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.dirichlet",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
dtype_alpha=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float", full=False),
shape=st.tuples(
st.integers(min_value=2, max_value=5),
),
min_value=1.1,
max_value=100.0,
exclude_min=True,
),
shape=helpers.get_shape(
min_num_dims=2, max_num_dims=2, min_dim_size=2, max_dim_size=5
),
dtype=helpers.get_dtypes("float", full=False),
test_with_out=st.just(False),
)
def test_jax_dirichlet(
*,
dtype_key,
dtype_alpha,
shape,
dtype,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
_, alpha = dtype_alpha
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
alpha=alpha[0],
shape=shape,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.double_sided_maxwell",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=1,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
shape=helpers.get_shape(),
dtype=helpers.get_dtypes("float", full=False),
loc=st.integers(min_value=10, max_value=100),
scale=st.floats(min_value=0, max_value=100, exclude_min=True),
test_with_out=st.just(False),
)
def test_jax_double_sided_maxwell(
*,
dtype_key,
loc,
scale,
shape,
dtype,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
backend_to_test=backend_fw,
key=key[0],
loc=loc,
scale=scale,
shape=shape,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(backend=backend_fw, ret=ret_np)
ret_from_np = helpers.flatten_and_to_np(backend=backend_fw, ret=ret_from_np)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.exponential",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
shape=helpers.get_shape(allow_none=False, min_num_dims=1, min_dim_size=1),
dtype=helpers.get_dtypes("float", full=False),
)
def test_jax_exponential(
*,
dtype_key,
shape,
dtype,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
shape=shape,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.fold_in",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
data=helpers.ints(),
)
def test_jax_fold_in(
*,
dtype_key,
data,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
data=data,
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.gamma",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
a_shape=_all_gamma_params(),
dtype=helpers.get_dtypes("float", full=False),
test_with_out=st.just(False),
)
def test_jax_gamma(
*,
dtype_key,
a_shape,
dtype,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
a, shape = a_shape
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
backend_to_test=backend_fw,
on_device=on_device,
test_values=False,
key=key[0],
a=a,
shape=shape,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.generalized_normal",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
p=st.floats(min_value=1e-5, max_value=100, exclude_min=True),
shape=helpers.get_shape(
min_num_dims=1, max_num_dims=6, min_dim_size=1, max_dim_size=6
),
dtype=helpers.get_dtypes("float", full=False),
test_with_out=st.just(False),
)
def test_jax_generalized_normal(
*,
dtype_key,
p,
shape,
dtype,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
p=p,
shape=shape,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.gumbel",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
shape=helpers.get_shape(
min_dim_size=1, max_num_dims=6, max_dim_size=6, min_num_dims=1, allow_none=False
),
dtype=helpers.get_dtypes("float", full=False),
)
def test_jax_gumbel(
*,
dtype_key,
shape,
dtype,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
shape=shape,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
# loggamma
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.loggamma",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
a_shape=_all_gamma_params(),
dtype=helpers.get_dtypes("float", full=False),
test_with_out=st.just(False),
)
def test_jax_loggamma(
*,
dtype_key,
a_shape,
dtype,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
a, shape = a_shape
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
a=a,
shape=shape,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.logistic",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
shape=helpers.get_shape(allow_none=False, min_num_dims=1, min_dim_size=1),
dtype=helpers.get_dtypes("float", full=False),
test_with_out=st.just(False),
)
def test_jax_logistic(
*,
dtype_key,
shape,
dtype,
on_device,
fn_tree,
frontend,
backend_fw,
test_flags,
):
input_dtype, key = dtype_key
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
key=key[0],
shape=shape,
dtype=dtype[0],
test_values=False,
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.maxwell",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
shape=helpers.get_shape(),
dtype=helpers.get_dtypes("float", full=False),
)
def test_jax_maxwell(
*,
dtype_key,
shape,
dtype,
on_device,
fn_tree,
frontend,
backend_fw,
test_flags,
):
input_dtype, key = dtype_key
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
shape=shape,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.multivariate_normal",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
dtype=helpers.get_dtypes("float", full=False),
mean_cov_vector=get_mean_cov_vector(),
method=st.sampled_from(["cholesky", "eigh", "svd"]),
test_with_out=st.just(False),
)
def test_jax_multivariate_normal(
*,
dtype_key,
mean_cov_vector,
dtype,
method,
frontend,
backend_fw,
test_flags,
fn_tree,
):
input_dtype, key = dtype_key
shared_dtype, mean, cov, shape = mean_cov_vector
spd = np.matmul(cov.T, cov) + np.identity(cov.shape[0])
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype + [shared_dtype],
frontend=frontend,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_tree=fn_tree,
test_values=False,
key=key[0],
mean=mean,
cov=spd,
shape=shape,
dtype=dtype[0],
method=method,
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.normal",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
shape=helpers.get_shape(),
dtype=helpers.get_dtypes("float", full=False),
)
def test_jax_normal(
*,
dtype_key,
shape,
dtype,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
shape=shape,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.orthogonal",
dtype_key=helpers.dtype_and_values(
available_dtypes=["float32", "float64"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=3,
min_dim_size=2,
max_dim_size=5,
),
    n=st.integers(min_value=2, max_value=5),
shape=helpers.get_shape(),
dtype=helpers.get_dtypes("float", full=False),
)
def test_jax_orthogonal(
*,
dtype_key,
n,
shape,
dtype,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
n=n,
shape=shape,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
    # Check that the sampled matrices are orthogonal: Q^T Q ~ I over the last
    # two dimensions (u holds the last backend output from the loop above)
    assert np.allclose(
        np.eye(n), np.matmul(np.swapaxes(u, -1, -2), u), atol=1e-3
    )
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.pareto",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
b_shapes=get_shape_and_arrays(),
dtype=helpers.get_dtypes("float", full=False),
)
def test_jax_pareto(
*,
dtype_key,
b_shapes,
dtype,
on_device,
fn_tree,
frontend,
backend_fw,
test_flags,
):
input_dtype, key = dtype_key
b, shape = b_shapes
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
b=b,
shape=shape,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
if shape is not None:
assert ret_np.shape == shape
else:
assert ret_np.shape == b.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.permutation",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
x=st.integers(min_value=0, max_value=10),
axis=st.integers(min_value=0, max_value=0),
)
def test_jax_permutation(
*,
dtype_key,
x,
axis,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
x=x,
axis=axis,
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.poisson",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
lam=st.floats(min_value=0, max_value=5, exclude_min=True),
shape=helpers.get_shape(
min_num_dims=2, max_num_dims=2, min_dim_size=1, max_dim_size=5
),
dtype=helpers.get_dtypes("integer", full=False),
test_with_out=st.just(False),
)
def test_jax_poisson(
*,
dtype_key,
lam,
shape,
dtype,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
lam=lam,
shape=shape,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(
ret=ret_np,
backend=backend_fw,
)
ret_from_np = helpers.flatten_and_to_np(
ret=ret_from_np,
backend=backend_fw,
)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.rademacher",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
shape=helpers.get_shape(allow_none=False, min_num_dims=1, min_dim_size=1),
dtype=helpers.get_dtypes("integer", full=False),
)
def test_jax_rademacher(
*,
dtype_key,
shape,
dtype,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
shape=shape,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.randint",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
shape=helpers.get_shape(allow_none=False, min_num_dims=1, min_dim_size=1),
dtype=helpers.get_dtypes("integer", full=False),
min_max=helpers.general_helpers.get_bounds(dtype="int16"),
)
def test_jax_randint(
*,
dtype_key,
shape,
dtype,
min_max,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
minval, maxval = min_max
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
shape=shape,
minval=minval,
maxval=maxval,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.shuffle",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
dtype_x_axis=helpers.dtype_values_axis(
available_dtypes=("float32", "float64"),
valid_axis=True,
max_axes_size=1,
force_int_axis=True,
),
)
def test_jax_shuffle(
*,
dtype_key,
dtype_x_axis,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
key_dtype, key = dtype_key
x_dtypes, x, axis = dtype_x_axis
def call():
return helpers.test_frontend_function(
input_dtypes=key_dtype + x_dtypes,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
x=x[0],
axis=axis,
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.t",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
df=st.floats(min_value=0, max_value=5, exclude_min=True),
shape=helpers.get_shape(
min_num_dims=1, max_num_dims=6, min_dim_size=1, max_dim_size=6
),
dtype=helpers.get_dtypes("float", full=False),
test_with_out=st.just(False),
)
def test_jax_t(
*,
dtype_key,
df,
shape,
dtype,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
df=df,
shape=shape,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.uniform",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
shape=helpers.get_shape(),
dtype=helpers.get_dtypes("float", full=False),
dtype_minval_maxval=_get_minval_maxval(),
)
def test_jax_uniform(
*,
dtype_key,
shape,
dtype,
dtype_minval_maxval,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, key = dtype_key
minval, maxval = dtype_minval_maxval
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
key=key[0],
shape=shape,
dtype=dtype[0],
minval=minval,
maxval=maxval,
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
@pytest.mark.xfail
@handle_frontend_test(
fn_tree="jax.random.weibull_min",
dtype_key=helpers.dtype_and_values(
available_dtypes=["uint32"],
min_value=0,
max_value=2000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
    shape=helpers.get_shape(allow_none=False, min_num_dims=1, min_dim_size=1),
    scale=st.floats(min_value=0, max_value=5, exclude_min=True),
    concentration=st.floats(min_value=0, max_value=5, exclude_min=True),
    dtype=helpers.get_dtypes("float", full=False),
)
def test_jax_weibull_min(
*,
dtype_key,
shape,
scale,
concentration,
dtype,
on_device,
fn_tree,
frontend,
backend_fw,
test_flags,
):
input_dtype, key = dtype_key
def call():
return helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
backend_to_test=backend_fw,
on_device=on_device,
test_values=False,
key=key[0],
shape=shape,
scale=scale,
concentration=concentration,
dtype=dtype[0],
)
ret = call()
if not ivy.exists(ret):
return
ret_np, ret_from_np = ret
ret_np = helpers.flatten_and_to_np(ret=ret_np, backend=backend_fw)
ret_from_np = helpers.flatten_and_to_np(ret=ret_from_np, backend=backend_fw)
for u, v in zip(ret_np, ret_from_np):
assert u.dtype == v.dtype
assert u.shape == v.shape
| ivy/ivy_tests/test_ivy/test_frontends/test_jax/test_random.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_jax/test_random.py",
"repo_id": "ivy",
"token_count": 22221
} | 50 |
# global
from hypothesis import strategies as st
# local
import ivy_tests.test_ivy.helpers as helpers
import ivy_tests.test_ivy.helpers.globals as test_globals
from ivy_tests.test_ivy.helpers import handle_frontend_test, BackendHandler
# --- Helpers --- #
# --------------- #
# full and full_like helper
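# draws a float dtype, array values of that dtype, a fill value safely inside the
# dtype's range, and a separate float dtype to cast the result to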
@st.composite
def _input_fill_and_dtype(draw):
dtype = draw(helpers.get_dtypes("float", full=False))
dtype_and_input = draw(helpers.dtype_and_values(dtype=dtype))
with BackendHandler.update_backend(test_globals.CURRENT_BACKEND) as ivy_backend:
if ivy_backend.is_uint_dtype(dtype[0]):
fill_values = draw(st.integers(min_value=0, max_value=5))
elif ivy_backend.is_int_dtype(dtype[0]):
fill_values = draw(st.integers(min_value=-5, max_value=5))
else:
fill_values = draw(
helpers.floats(
min_value=-5,
max_value=5,
large_abs_safety_factor=10,
small_abs_safety_factor=10,
safety_factor_scale="log",
)
)
dtype_to_cast = draw(helpers.get_dtypes("float", full=False))
return dtype, dtype_and_input[1], fill_values, dtype_to_cast[0]
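# fromfunction helper: draws a shape together with a callable to evaluate over the
# index grid (an element-wise sum lambda, an identity function, or a comparison)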
@st.composite
def _shape_and_function(
draw,
*,
allow_none=False,
min_num_dims=1,
max_num_dims=5,
min_dim_size=1,
max_dim_size=10,
):
shape = draw(
helpers.get_shape(
allow_none=allow_none,
min_num_dims=min_num_dims,
max_num_dims=max_num_dims,
min_dim_size=min_dim_size,
max_dim_size=max_dim_size,
)
)
VARS = "abcdefghijklmnopqrstuvw"
args = ""
out = ""
for i in range(len(shape)):
args += f"{VARS[i]},"
out += f"{VARS[i]}+"
fn_str = f"lambda {args[:-1]}: {out[:-1]}"
def fn2(*args):
return args[0]
if len(shape) > 1:
def fn3(*args):
return args[0] == args[1]
else:
def fn3(*args):
return args[0] > 10
function = draw(st.sampled_from([eval(fn_str), fn2, fn3]))
return shape, function
# --- Main --- #
# ------------ #
# empty
@handle_frontend_test(
fn_tree="numpy.empty",
shape=helpers.get_shape(
allow_none=False,
min_num_dims=1,
max_num_dims=5,
min_dim_size=1,
max_dim_size=10,
),
dtype=helpers.get_dtypes("valid", full=False),
test_with_out=st.just(False),
)
def test_numpy_empty(
shape,
dtype,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
shape=shape,
dtype=dtype[0],
)
# empty_like
@handle_frontend_test(
fn_tree="numpy.empty_like",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
),
shape=helpers.get_shape(
allow_none=True,
min_num_dims=1,
max_num_dims=5,
min_dim_size=1,
max_dim_size=10,
),
dtype=helpers.get_dtypes("valid", full=False),
test_with_out=st.just(False),
)
def test_numpy_empty_like(
dtype_and_x,
shape,
dtype,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
prototype=x[0],
dtype=dtype[0],
order="K",
subok=True,
shape=shape,
)
# eye
@handle_frontend_test(
fn_tree="numpy.eye",
rows=helpers.ints(min_value=3, max_value=10),
cols=helpers.ints(min_value=3, max_value=10),
k=helpers.ints(min_value=0, max_value=2),
dtype=helpers.get_dtypes("valid", full=False),
test_with_out=st.just(False),
)
def test_numpy_eye(
rows,
cols,
k,
dtype,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
N=rows,
M=cols,
k=k,
dtype=dtype[0],
)
@handle_frontend_test(
fn_tree="numpy.fromfunction",
shape_and_function=_shape_and_function(
allow_none=False,
min_num_dims=1,
max_num_dims=5,
min_dim_size=1,
max_dim_size=5,
),
    # not using "valid" since bool is a problematic dtype here
dtype=helpers.get_dtypes("numeric", full=False),
test_with_out=st.just(False),
)
def test_numpy_fromfunction(
shape_and_function,
dtype,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
shape, function = shape_and_function
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
function=function,
shape=shape,
dtype=dtype[0],
)
# full
@handle_frontend_test(
fn_tree="numpy.full",
shape=helpers.get_shape(
allow_none=False,
min_num_dims=1,
max_num_dims=5,
min_dim_size=1,
max_dim_size=10,
),
input_fill_dtype=_input_fill_and_dtype(),
test_with_out=st.just(False),
)
def test_numpy_full(
shape,
input_fill_dtype,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
input_dtype, x, fill, dtype_to_cast = input_fill_dtype
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
shape=shape,
fill_value=fill,
dtype=dtype_to_cast,
)
# full_like
@handle_frontend_test(
fn_tree="numpy.full_like",
input_fill_dtype=_input_fill_and_dtype(),
shape=helpers.get_shape(
allow_none=True,
min_num_dims=1,
max_num_dims=5,
min_dim_size=1,
max_dim_size=10,
),
test_with_out=st.just(False),
)
def test_numpy_full_like(
input_fill_dtype,
shape,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
input_dtype, x, fill, dtype_to_cast = input_fill_dtype
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
a=x[0],
fill_value=fill,
dtype=dtype_to_cast,
order="K",
subok=True,
shape=shape,
)
# identity
@handle_frontend_test(
fn_tree="numpy.identity",
n=helpers.ints(min_value=1, max_value=10),
dtype=helpers.get_dtypes("valid", full=False),
test_with_out=st.just(False),
)
def test_numpy_identity(
n,
dtype,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
n=n,
dtype=dtype[0],
)
# ones
@handle_frontend_test(
fn_tree="numpy.ones",
shape=helpers.get_shape(
allow_none=False,
min_num_dims=1,
max_num_dims=5,
min_dim_size=1,
max_dim_size=10,
),
dtype=helpers.get_dtypes("valid", full=False),
test_with_out=st.just(False),
)
def test_numpy_ones(
shape,
dtype,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
shape=shape,
dtype=dtype[0],
)
# ones_like
@handle_frontend_test(
fn_tree="numpy.ones_like",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
),
shape=helpers.get_shape(
allow_none=True,
min_num_dims=1,
max_num_dims=5,
min_dim_size=1,
max_dim_size=10,
),
dtype=helpers.get_dtypes("valid", full=False),
test_with_out=st.just(False),
)
def test_numpy_ones_like(
dtype_and_x,
shape,
dtype,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
a=x[0],
dtype=dtype[0],
order="K",
subok=True,
shape=shape,
)
# zeros
@handle_frontend_test(
fn_tree="numpy.zeros",
shape=helpers.get_shape(
allow_none=False,
min_num_dims=1,
max_num_dims=5,
min_dim_size=1,
max_dim_size=10,
),
dtype=helpers.get_dtypes("valid", full=False),
test_with_out=st.just(False),
)
def test_numpy_zeros(
shape,
dtype,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
shape=shape,
dtype=dtype[0],
)
# zeros_like
@handle_frontend_test(
fn_tree="numpy.zeros_like",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
),
shape=helpers.get_shape(
allow_none=True,
min_num_dims=1,
max_num_dims=5,
min_dim_size=1,
max_dim_size=10,
),
dtype=helpers.get_dtypes("valid", full=False),
test_with_out=st.just(False),
)
def test_numpy_zeros_like(
dtype_and_x,
dtype,
shape,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
a=x[0],
dtype=dtype[0],
order="K",
subok=True,
shape=shape,
)
| ivy/ivy_tests/test_ivy/test_frontends/test_numpy/test_creation_routines/test_from_shape_or_value.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_numpy/test_creation_routines/test_from_shape_or_value.py",
"repo_id": "ivy",
"token_count": 5737
} | 51 |
# global
from hypothesis import strategies as st
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_test
import ivy_tests.test_ivy.test_frontends.test_numpy.helpers as np_frontend_helpers
# rollaxis
@handle_frontend_test(
fn_tree="numpy.rollaxis",
dtype_and_a=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
min_num_dims=3,
min_dim_size=2,
),
axis=helpers.ints(min_value=-2, max_value=2),
start=helpers.ints(min_value=-2, max_value=2),
test_with_out=st.just(False),
)
def test_numpy_rollaxis(
*,
dtype_and_a,
axis,
start,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, a = dtype_and_a
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
a=a[0],
axis=axis,
start=start,
)
# swapaxes
@handle_frontend_test(
fn_tree="numpy.swapaxes",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid", full=True),
shape=st.shared(helpers.get_shape(min_num_dims=2), key="shape"),
),
axis1=helpers.get_axis(
shape=st.shared(helpers.get_shape(min_num_dims=2), key="shape"), force_int=True
),
axis2=helpers.get_axis(
shape=st.shared(helpers.get_shape(min_num_dims=2), key="shape"), force_int=True
),
test_with_out=st.just(False),
)
def test_numpy_swapaxes(
*,
dtype_and_x,
axis1,
axis2,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
a=x[0],
axis1=axis1,
axis2=axis2,
)
# transpose
@handle_frontend_test(
fn_tree="numpy.transpose",
array_and_axes=np_frontend_helpers._array_and_axes_permute_helper(
min_num_dims=0,
max_num_dims=5,
min_dim_size=0,
max_dim_size=10,
),
test_with_out=st.just(False),
)
def test_numpy_transpose(
*,
array_and_axes,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
array, dtype, axes = array_and_axes
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
array=array,
axes=axes,
)
| ivy/ivy_tests/test_ivy/test_frontends/test_numpy/test_manipulation_routines/test_transpose_like_operations.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_numpy/test_manipulation_routines/test_transpose_like_operations.py",
"repo_id": "ivy",
"token_count": 1396
} | 52 |
# global
from hypothesis import assume, strategies as st, given
import numpy as np
# local
import ivy.functional.frontends.numpy as np_frontend
from ivy.functional.frontends.numpy.ufunc import (
ufuncs,
)
# --- Helpers --- #
# --------------- #
# strategy to sample a ufunc name from the given list
@st.composite
def generate_ufunc(draw, ufuncs=ufuncs):
return draw(st.sampled_from(ufuncs))
# identity
@given(
ufunc_name=generate_ufunc(),
)
def test_numpy_identity(
ufunc_name,
):
assume(hasattr(np_frontend, ufunc_name))
frontend_ufunc = getattr(np_frontend, ufunc_name)
np_ufunc = getattr(np, ufunc_name)
assert frontend_ufunc.identity == np_ufunc.identity
# nargs
@given(
ufunc_name=generate_ufunc(),
)
def test_numpy_nargs(
ufunc_name,
):
assume(hasattr(np_frontend, ufunc_name))
frontend_ufunc = getattr(np_frontend, ufunc_name)
np_ufunc = getattr(np, ufunc_name)
assert frontend_ufunc.nargs == np_ufunc.nargs
# nin
@given(
ufunc_name=generate_ufunc(),
)
def test_numpy_nin(
ufunc_name,
):
assume(hasattr(np_frontend, ufunc_name))
frontend_ufunc = getattr(np_frontend, ufunc_name)
np_ufunc = getattr(np, ufunc_name)
assert frontend_ufunc.nin == np_ufunc.nin
# nout
@given(
ufunc_name=generate_ufunc(),
)
def test_numpy_nout(
ufunc_name,
):
assume(hasattr(np_frontend, ufunc_name))
frontend_ufunc = getattr(np_frontend, ufunc_name)
np_ufunc = getattr(np, ufunc_name)
assert frontend_ufunc.nout == np_ufunc.nout
| ivy/ivy_tests/test_ivy/test_frontends/test_numpy/test_ufunc/test_methods.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_numpy/test_ufunc/test_methods.py",
"repo_id": "ivy",
"token_count": 646
} | 53 |
# global
from hypothesis import strategies as st
# local
import ivy
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_test
from ivy_tests.test_ivy.test_frontends.test_torch.test_nn.test_functional.test_linear_functions import ( # noqa: E501
_x_and_linear,
)
# --- Helpers --- #
# --------------- #
# interpolate
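# draws (dtype, x, mode, size, align_corners, scale_factor, recompute_scale_factor),
# restricting the candidate modes to those the current backend supports unless the
# test runs as a mixed-function composition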
@st.composite
def _interp_args(draw, mode=None, mode_list=None):
mixed_fn_compos = draw(st.booleans())
curr_backend = ivy.current_backend_str()
torch_modes = [
"linear",
"bilinear",
"trilinear",
"nearest",
"nearest-exact",
"area",
]
tf_modes = [
"linear",
"bilinear",
"trilinear",
"nearest-exact",
"tf_area",
"tf_bicubic",
"lanczos3",
"lanczos5",
"mitchellcubic",
"gaussian",
]
jax_modes = [
"linear",
"bilinear",
"trilinear",
"nearest-exact",
"tf_bicubic",
"lanczos3",
"lanczos5",
]
if not mode and not mode_list:
if curr_backend == "torch" and not mixed_fn_compos:
mode = draw(st.sampled_from(torch_modes))
elif curr_backend == "tensorflow" and not mixed_fn_compos:
mode = draw(st.sampled_from(tf_modes))
elif curr_backend == "jax" and not mixed_fn_compos:
mode = draw(st.sampled_from(jax_modes))
else:
mode = draw(
st.sampled_from(
[
"linear",
"bilinear",
"trilinear",
"nearest",
"nearest-exact",
"area",
"tf_area",
"tf_bicubic",
"lanczos3",
"lanczos5",
"mitchellcubic",
"gaussian",
]
)
)
elif mode_list:
mode = draw(st.sampled_from(mode_list))
align_corners = draw(st.booleans())
if curr_backend in ["tensorflow", "jax"] and not mixed_fn_compos:
align_corners = False
if mode == "linear":
num_dims = 3
elif mode in [
"bilinear",
"tf_bicubic",
"bicubic",
"mitchellcubic",
"gaussian",
]:
num_dims = 4
elif mode == "trilinear":
num_dims = 5
elif mode in [
"nearest",
"area",
"tf_area",
"lanczos3",
"lanczos5",
"nearest-exact",
]:
num_dims = (
draw(
helpers.ints(min_value=1, max_value=3, mixed_fn_compos=mixed_fn_compos)
)
+ 2
)
align_corners = False
if curr_backend == "tensorflow" and not mixed_fn_compos:
num_dims = 3
dtype, x = draw(
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes(
"float", mixed_fn_compos=mixed_fn_compos
),
min_num_dims=num_dims,
max_num_dims=num_dims,
min_dim_size=2,
max_dim_size=5,
large_abs_safety_factor=50,
small_abs_safety_factor=50,
safety_factor_scale="log",
)
)
if draw(st.booleans()):
scale_factor = draw(
st.one_of(
helpers.lists(
x=helpers.floats(
min_value=1.0, max_value=2.0, mixed_fn_compos=mixed_fn_compos
),
min_size=num_dims - 2,
max_size=num_dims - 2,
),
helpers.floats(
min_value=1.0, max_value=2.0, mixed_fn_compos=mixed_fn_compos
),
)
)
recompute_scale_factor = draw(st.booleans())
size = None
else:
size = draw(
st.one_of(
helpers.lists(
x=helpers.ints(
min_value=1, max_value=3, mixed_fn_compos=mixed_fn_compos
),
min_size=num_dims - 2,
max_size=num_dims - 2,
),
st.integers(min_value=1, max_value=3),
)
)
recompute_scale_factor = False
scale_factor = None
if curr_backend in ["tensorflow", "jax"] and not mixed_fn_compos:
if not recompute_scale_factor:
recompute_scale_factor = True
return (dtype, x, mode, size, align_corners, scale_factor, recompute_scale_factor)
# zeropad2d
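# draws a 4-D float input and a per-dimension padding list bounded by its smallest dim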
@st.composite
def _zero2pad(draw):
dtype, input, shape = draw(
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
ret_shape=True,
min_num_dims=4,
max_num_dims=4,
min_value=-100,
max_value=100,
)
)
ndim = len(shape)
min_dim = min(shape)
padding = draw(
st.lists(
st.integers(min_value=0, max_value=min_dim),
min_size=ndim,
max_size=ndim,
)
)
return dtype, input, padding
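# unfold helper: draws an NCHW batch with values in [0, 1] plus integer kernel,
# stride, padding and dilation sizes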
@st.composite
def paddle_unfold_handler(draw, dtype):
dtype = draw(dtype)
h_size = draw(helpers.ints(min_value=10, max_value=30))
w_size = draw(helpers.ints(min_value=10, max_value=30))
channels = draw(helpers.ints(min_value=1, max_value=3))
batch = draw(helpers.ints(min_value=1, max_value=10))
x = draw(
helpers.array_values(
dtype=dtype[0],
shape=[batch, channels, h_size, w_size],
min_value=0,
max_value=1,
)
)
kernel_sizes = draw(helpers.ints(min_value=1, max_value=3))
strides = draw(helpers.ints(min_value=1, max_value=3))
paddings = draw(helpers.ints(min_value=1, max_value=3))
dilations = draw(helpers.ints(min_value=1, max_value=3))
return dtype, x, kernel_sizes, strides, paddings, dilations
# --- Main --- #
# ------------ #
# Cosine Similarity
@handle_frontend_test(
fn_tree="paddle.nn.functional.common.cosine_similarity",
d_type_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
num_arrays=2,
shared_dtype=True,
min_value=2,
max_value=5,
min_dim_size=2,
shape=(4, 4),
),
axis=st.integers(min_value=-1, max_value=1),
)
def test_paddle_cosine_similarity(
*,
d_type_and_x,
axis,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
dtype, x = d_type_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
rtol=1e-01,
x1=x[0],
x2=x[1],
axis=axis,
)
# dropout
@handle_frontend_test(
fn_tree="paddle.nn.functional.common.dropout",
d_type_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
num_arrays=1,
shared_dtype=True,
min_value=2,
max_value=5,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
),
p=st.floats(min_value=0.0, max_value=1.0),
axis=st.integers(min_value=0, max_value=1),
training=st.booleans(),
mode=st.sampled_from(["upscale_in_train", "downscale_in_infer"]),
)
def test_paddle_dropout(
*,
d_type_and_x,
p,
on_device,
fn_tree,
backend_fw,
frontend,
test_flags,
training,
axis,
mode,
):
dtype, x = d_type_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
p=p,
frontend=frontend,
backend_to_test=backend_fw,
fn_tree=fn_tree,
test_flags=test_flags,
on_device=on_device,
x=x[0],
training=training,
axis=axis,
mode=mode,
)
# Dropout2d
@handle_frontend_test(
fn_tree="paddle.nn.functional.common.dropout2d",
d_type_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
num_arrays=1,
shared_dtype=True,
min_value=2,
max_value=5,
min_dim_size=4,
shape=(
st.integers(min_value=2, max_value=10),
4,
st.integers(min_value=12, max_value=64),
st.integers(min_value=12, max_value=64),
),
),
p=st.floats(min_value=0.0, max_value=1.0),
training=st.booleans(),
data_format=st.sampled_from(["NCHW", "NHWC"]),
)
def test_paddle_dropout2d(
*,
d_type_and_x,
p,
training,
data_format,
backend_fw,
on_device,
fn_tree,
frontend,
test_flags,
):
dtype, x = d_type_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
x=x[0],
p=p,
training=training,
data_format=data_format,
)
# Dropout3d
@handle_frontend_test(
fn_tree="paddle.nn.functional.common.dropout3d",
d_type_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
min_num_dims=5,
max_num_dims=5,
),
p=st.floats(min_value=0.0, max_value=1.0),
training=st.booleans(),
data_format=st.sampled_from(["NCDHW", "NDHWC"]),
)
def test_paddle_dropout3d(
*,
d_type_and_x,
p,
training,
data_format,
on_device,
backend_fw,
fn_tree,
frontend,
test_flags,
):
dtype, x = d_type_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
x=x[0],
p=p,
training=training,
data_format=data_format,
)
@handle_frontend_test(
fn_tree="paddle.nn.functional.common.interpolate",
dtype_x_mode=_interp_args(),
)
def test_paddle_interpolate(
dtype_x_mode,
on_device,
fn_tree,
frontend,
backend_fw,
test_flags,
):
(
input_dtype,
x,
mode,
size,
align_corners,
scale_factor,
recompute_scale_factor,
) = dtype_x_mode
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
x=x[0],
size=size,
scale_factor=scale_factor,
mode=mode,
align_corners=align_corners,
)
# linear
@handle_frontend_test(
fn_tree="paddle.nn.functional.common.linear",
dtype_x_weight_bias=_x_and_linear(
dtypes=helpers.get_dtypes("valid", full=False),
),
)
def test_paddle_linear(
*,
dtype_x_weight_bias,
on_device,
fn_tree,
backend_fw,
frontend,
test_flags,
):
dtype, x, weight, bias = dtype_x_weight_bias
weight = ivy.swapaxes(weight, -1, -2)
helpers.test_frontend_function(
input_dtypes=dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
x=x,
weight=weight,
bias=bias,
)
@handle_frontend_test(
fn_tree="paddle.nn.functional.common.unfold",
dtype_inputs=paddle_unfold_handler(dtype=helpers.get_dtypes("valid", full=False)),
)
def test_paddle_unfold(
*,
dtype_inputs,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
dtype, x, kernel_sizes, strides, paddings, dilations = dtype_inputs
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
x=x,
kernel_sizes=kernel_sizes,
strides=strides,
paddings=paddings,
dilations=dilations,
)
@handle_frontend_test(
fn_tree="paddle.nn.functional.common.zeropad2d",
d_type_and_x_paddings=_zero2pad(),
dataformat=st.sampled_from(["NCHW", "NHWC"]),
)
def test_paddle_zeropad2d(
*,
d_type_and_x_paddings,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
dataformat,
):
dtype, x, padding = d_type_and_x_paddings
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
x=x[0],
padding=padding,
data_format=dataformat,
)
| ivy/ivy_tests/test_ivy/test_frontends/test_paddle/test_nn/test_functional/test_common.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_paddle/test_nn/test_functional/test_common.py",
"repo_id": "ivy",
"token_count": 7084
} | 54 |
# global
from hypothesis import strategies as st
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_test
@handle_frontend_test(
fn_tree="paddle.tensor.random.exponential_",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_value=0,
max_value=1000,
min_num_dims=1,
max_num_dims=10,
min_dim_size=2,
max_dim_size=10,
),
)
def test_paddle_exponential_(
fn_tree,
dtype_and_x,
frontend,
backend_fw,
test_flags,
):
dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
test_values=False,
x=x[0],
)
@handle_frontend_test(
fn_tree="paddle.tensor.random.uniform_",
min=helpers.floats(min_value=-1, max_value=0),
max=helpers.floats(min_value=0.1, max_value=1),
seed=st.integers(min_value=2, max_value=5),
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_value=0,
max_value=1000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
)
def test_paddle_uniform_(
fn_tree,
min,
max,
seed,
dtype_and_x,
frontend,
backend_fw,
test_flags,
):
dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
test_values=False,
x=x[0],
min=min,
max=max,
seed=seed,
)
| ivy/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_random.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_random.py",
"repo_id": "ivy",
"token_count": 920
} | 55 |
# global
from hypothesis import strategies as st
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers.testing_helpers import handle_frontend_test
# add
@handle_frontend_test(
fn_tree="tensorflow.__operators__.add",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
num_arrays=2,
shared_dtype=True,
),
test_with_out=st.just(False),
)
def test_tensorflow___operators___add(
*,
dtype_and_x,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
x=x[0],
y=x[1],
)
| ivy/ivy_tests/test_ivy/test_frontends/test_tensorflow/test___operators__.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_tensorflow/test___operators__.py",
"repo_id": "ivy",
"token_count": 419
} | 56 |
# global
import numpy as np
from hypothesis import assume, strategies as st
import sys
import ivy
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_test, assert_all_close
from ivy_tests.test_ivy.test_functional.test_core.test_linalg import (
_get_dtype_value1_value2_axis_for_tensordot,
)
from ivy_tests.test_ivy.helpers.hypothesis_helpers.general_helpers import (
matrix_is_stable,
)
from ivy_tests.test_ivy.test_functional.test_core.test_linalg import _matrix_rank_helper
# --- Helpers --- #
# --------------- #
# cholesky_solve
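# builds a symmetric positive-definite matrix and returns its lower Cholesky factor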
@st.composite
def _get_cholesky_matrix(draw):
    # dtype and size come from shared strategies so the factor pairs with _get_second_matrix
input_dtype = draw(
st.shared(
st.sampled_from(draw(helpers.get_dtypes("float"))),
key="shared_dtype",
)
)
shared_size = draw(
st.shared(helpers.ints(min_value=2, max_value=4), key="shared_size")
)
gen = draw(
helpers.array_values(
dtype=input_dtype,
shape=(shared_size, shared_size),
min_value=2,
max_value=5,
).filter(lambda x: np.linalg.cond(x.tolist()) < 1 / sys.float_info.epsilon)
)
spd = np.matmul(gen.T, gen) + np.identity(gen.shape[0])
spd_chol = np.linalg.cholesky(spd)
return input_dtype, spd_chol
@st.composite
def _get_dtype_and_matrix(draw):
arbitrary_dims = draw(helpers.get_shape(max_dim_size=5))
random_size = draw(st.integers(min_value=1, max_value=4))
shape = (*arbitrary_dims, random_size, random_size)
return draw(
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
shape=shape,
min_value=-10,
max_value=10,
)
)
@st.composite
def _get_dtype_and_matrix_and_num(draw):
arbitrary_dims = draw(helpers.get_shape(max_dim_size=5))
random_size = draw(st.integers(min_value=1, max_value=4))
shape = (*arbitrary_dims, random_size, random_size)
dtype_and_values = draw(
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
shape=shape,
min_value=-10,
max_value=10,
)
)
num_lower = draw(st.integers(min_value=-1, max_value=random_size - 1))
num_upper = draw(st.integers(min_value=-1, max_value=random_size - 1))
return (*dtype_and_values, num_lower, num_upper)
@st.composite
def _get_dtype_and_rank_2k_tensors(draw):
arbitrary_dims = draw(helpers.get_shape(max_dim_size=5))
shape = arbitrary_dims + arbitrary_dims
return draw(
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
shape=shape,
min_value=-10,
max_value=10,
)
)
@st.composite
def _get_dtype_and_sequence_of_arrays(draw):
array_dtype = draw(helpers.get_dtypes("float", full=False))
arbitrary_size = draw(st.integers(min_value=2, max_value=10))
values = []
for i in range(arbitrary_size):
values.append(
draw(
helpers.array_values(
dtype=array_dtype[0], shape=helpers.get_shape(), allow_nan=True
)
)
)
return array_dtype, values
# logdet
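# builds a Hermitian positive-definite matrix as gen^H @ gen + I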
@st.composite
def _get_hermitian_pos_def_matrix(draw):
    # dtype and size are drawn from shared strategies
input_dtype = draw(
st.shared(
st.sampled_from(draw(helpers.get_dtypes("float"))),
key="shared_dtype",
)
)
shared_size = draw(
st.shared(helpers.ints(min_value=2, max_value=4), key="shared_size")
)
gen = draw(
helpers.array_values(
dtype=input_dtype,
shape=(shared_size, shared_size),
min_value=2,
max_value=5,
).filter(lambda x: np.linalg.cond(x.tolist()) < 1 / sys.float_info.epsilon)
)
hpd = np.matmul(np.matrix(gen).getH(), np.matrix(gen)) + np.identity(gen.shape[0])
return [input_dtype], hpd
@st.composite
def _get_second_matrix(draw):
    # dtype and size reuse the shared strategies above so the rhs matches the matrix
input_dtype = draw(
st.shared(
st.sampled_from(draw(helpers.get_dtypes("float"))),
key="shared_dtype",
)
)
shared_size = draw(
st.shared(helpers.ints(min_value=2, max_value=4), key="shared_size")
)
return input_dtype, draw(
helpers.array_values(
dtype=input_dtype, shape=(shared_size, 1), min_value=2, max_value=5
)
)
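# tridiagonal_solve helper: draws a well-conditioned tridiagonal system in one of
# the three diagonals formats ("compact", "sequence" or "matrix")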
@st.composite
def _get_tridiagonal_dtype_matrix_format(draw):
input_dtype_strategy = st.shared(
st.sampled_from(draw(helpers.get_dtypes("float_and_complex"))),
key="shared_dtype",
)
input_dtype = draw(input_dtype_strategy)
shared_size = draw(
st.shared(helpers.ints(min_value=2, max_value=4), key="shared_size")
)
diagonals_format = draw(st.sampled_from(["compact", "sequence", "matrix"]))
if diagonals_format == "matrix":
matrix = draw(
helpers.array_values(
dtype=input_dtype,
shape=(shared_size, shared_size),
min_value=2,
max_value=5,
).filter(tridiagonal_matrix_filter)
)
elif diagonals_format in ["compact", "sequence"]:
matrix = draw(
helpers.array_values(
dtype=input_dtype,
shape=(3, shared_size),
min_value=2,
max_value=5,
).filter(tridiagonal_compact_filter)
)
if diagonals_format == "sequence":
matrix = list(matrix)
return input_dtype, matrix, diagonals_format
# --- Main --- #
# ------------ #
# adjoint
@handle_frontend_test(
fn_tree="tensorflow.linalg.adjoint",
dtype_and_x=_get_dtype_and_matrix().filter(
lambda x: "float16" not in x[0] and "bfloat16" not in x[0]
    ),  # TODO: remove this filter when paddle.conj supports float16
test_with_out=st.just(False),
)
def test_tensorflow_adjoint(
*,
dtype_and_x,
backend_fw,
frontend,
test_flags,
fn_tree,
on_device,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
matrix=x[0],
)
# band_part
@handle_frontend_test(
fn_tree="tensorflow.linalg.band_part",
dtype_and_input=_get_dtype_and_matrix_and_num(),
test_with_out=st.just(False),
)
def test_tensorflow_band_part(
*,
dtype_and_input,
frontend,
backend_fw,
test_flags,
fn_tree,
on_device,
):
input_dtype, x, num_lower, num_upper = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=x[0],
num_lower=num_lower,
num_upper=num_upper,
)
@handle_frontend_test(
fn_tree="tensorflow.linalg.cholesky_solve",
x=_get_cholesky_matrix(),
y=_get_second_matrix(),
test_with_out=st.just(False),
)
def test_tensorflow_cholesky_solve(
*,
x,
y,
frontend,
backend_fw,
test_flags,
fn_tree,
on_device,
):
input_dtype1, x1 = x
input_dtype2, x2 = y
helpers.test_frontend_function(
input_dtypes=[input_dtype1, input_dtype2],
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
rtol=1e-3,
atol=1e-3,
chol=x1,
rhs=x2,
)
@handle_frontend_test(
fn_tree="tensorflow.linalg.det",
dtype_and_input=_get_dtype_and_matrix(),
test_with_out=st.just(False),
)
def test_tensorflow_det(
*,
dtype_and_input,
frontend,
backend_fw,
test_flags,
fn_tree,
on_device,
):
input_dtype, x = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_tree=fn_tree,
on_device=on_device,
input=x[0],
)
# diag
@handle_frontend_test(
fn_tree="tensorflow.linalg.diag",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=["int64", "int32"],
min_num_dims=1,
max_num_dims=2,
min_dim_size=5,
max_dim_size=10,
min_value=0,
max_value=10,
),
k=st.just(0),
)
def test_tensorflow_diag(
dtype_and_x,
k,
frontend,
backend_fw,
test_flags,
fn_tree,
on_device,
):
dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
v=x[0],
k=k,
)
@handle_frontend_test(
fn_tree="tensorflow.linalg.eigh",
dtype_and_input=_get_dtype_and_matrix(),
test_with_out=st.just(False),
)
def test_tensorflow_eigh(
*,
dtype_and_input,
frontend,
backend_fw,
test_flags,
fn_tree,
on_device,
):
input_dtype, x = dtype_and_input
assume(matrix_is_stable(x[0]))
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
tensor=x[0],
)
@handle_frontend_test(
fn_tree="tensorflow.linalg.eigvals",
dtype_and_input=_get_dtype_and_matrix(),
test_with_out=st.just(False),
)
def test_tensorflow_eigvals(
*,
dtype_and_input,
frontend,
test_flags,
fn_tree,
on_device,
backend_fw,
):
input_dtype, x = dtype_and_input
assume(matrix_is_stable(x[0]))
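    # upcast float32 inputs to float64 so the eigenvalue comparison below is not
    # dominated by single-precision noise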
if x[0].dtype == ivy.float32:
x[0] = x[0].astype("float64")
input_dtype = [ivy.float64]
ret, frontend_ret = helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
tensor=x[0],
test_values=False,
)
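    # eigenvalues come back in no guaranteed order, so round and sort both results
    # before the element-wise comparison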
ret = ivy.to_numpy(ret)
ret = ret.round(6)
ret = np.sort(ret)
frontend_ret = frontend_ret[0].numpy()
frontend_ret = frontend_ret.round(6)
frontend_ret = np.sort(frontend_ret)
assert_all_close(
ret_np=ret,
ret_from_gt_np=frontend_ret,
rtol=1e-06,
atol=1e-06,
ground_truth_backend=frontend,
backend=backend_fw,
)
@handle_frontend_test(
fn_tree="tensorflow.linalg.eigvalsh",
dtype_and_input=_get_dtype_and_matrix(),
test_with_out=st.just(False),
)
def test_tensorflow_eigvalsh(
*,
dtype_and_input,
frontend,
backend_fw,
test_flags,
fn_tree,
on_device,
):
input_dtype, x = dtype_and_input
assume(matrix_is_stable(x[0]))
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
tensor=x[0],
)
@handle_frontend_test(
fn_tree="tensorflow.linalg.expm",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
num_arrays=1,
min_value=1,
max_value=10,
shape=helpers.ints(min_value=3, max_value=3).map(lambda x: (x, x)),
).filter(lambda x: "float16" not in x[0]),
test_with_out=st.just(False),
)
def test_tensorflow_expm(
*,
dtype_and_x,
on_device,
fn_tree,
frontend,
backend_fw,
test_flags,
):
dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=x[0],
atol=1,
rtol=1e-01,
)
@handle_frontend_test(
fn_tree="tensorflow.linalg.global_norm",
dtype_and_input=_get_dtype_and_sequence_of_arrays(),
test_with_out=st.just(False),
)
def test_tensorflow_global_norm(
*,
dtype_and_input,
backend_fw,
frontend,
test_flags,
fn_tree,
on_device,
):
input_dtype, x = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
t_list=x,
)
# inv
@handle_frontend_test(
fn_tree="tensorflow.linalg.inv",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
min_value=-100,
max_value=100,
shape=helpers.ints(min_value=1, max_value=20).map(lambda x: (x, x)),
).filter(
lambda x: "bfloat16" not in x[0]
and np.linalg.cond(x[1][0]) < 1 / sys.float_info.epsilon
and np.linalg.det(np.asarray(x[1][0])) != 0
),
adjoint=st.booleans(),
test_with_out=st.just(False),
)
def test_tensorflow_inv(
*,
dtype_and_x,
on_device,
fn_tree,
frontend,
backend_fw,
test_flags,
adjoint,
):
dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
rtol=1e-01,
atol=1e-01,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=x[0],
adjoint=adjoint,
)
# l2_normalize
@handle_frontend_test(
fn_tree="tensorflow.linalg.l2_normalize",
dtype_values_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("valid"),
min_num_dims=3,
max_num_dims=5,
min_dim_size=1,
max_dim_size=4,
min_axis=-3,
max_axis=2,
),
)
def test_tensorflow_l2_normalize(
*,
dtype_values_axis,
backend_fw,
frontend,
test_flags,
fn_tree,
on_device,
):
input_dtype, x, axis = dtype_values_axis
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
x=x[0],
axis=axis,
)
# cholesky
@handle_frontend_test(
fn_tree="tensorflow.linalg.cholesky",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_value=0,
max_value=10,
shape=helpers.ints(min_value=2, max_value=5).map(lambda x: (x, x)),
).filter(
lambda x: "float16" not in x[0]
and "bfloat16" not in x[0]
and np.linalg.cond(x[1][0]) < 1 / sys.float_info.epsilon
and np.linalg.det(np.asarray(x[1][0])) != 0
),
test_with_out=st.just(False),
)
def test_tensorflow_linalg_cholesky(
*,
dtype_and_x,
backend_fw,
on_device,
fn_tree,
frontend,
test_flags,
):
dtype, x = dtype_and_x
x = np.asarray(x[0], dtype=dtype[0])
# make symmetric positive-definite beforehand
x = np.matmul(x.T, x) + np.identity(x.shape[0]) * 1e-3
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
rtol=1e-02,
input=x,
)
@handle_frontend_test(
fn_tree="tensorflow.linalg.cross",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("integer"),
num_arrays=2,
min_num_dims=1,
max_num_dims=5,
min_dim_size=3,
max_dim_size=3,
shared_dtype=True,
),
)
def test_tensorflow_linalg_cross(
frontend,
on_device,
dtype_and_x,
*,
fn_tree,
test_flags,
backend_fw,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
a=x[0],
b=x[1],
)
# einsum
@handle_frontend_test(
fn_tree="tensorflow.linalg.einsum",
eq_n_op_n_shp=helpers.einsum_helper(),
dtype=helpers.get_dtypes("numeric", full=False),
)
def test_tensorflow_linalg_einsum(
*,
eq_n_op_n_shp,
dtype,
on_device,
fn_tree,
backend_fw,
frontend,
test_flags,
):
eq, operands, dtypes = eq_n_op_n_shp
kw = {}
for i, x_ in enumerate(operands):
dtype = dtypes[i][0]
kw[f"x{i}"] = np.array(x_).astype(dtype)
test_flags.num_positional_args = len(operands) + 1
helpers.test_frontend_function(
input_dtypes=dtypes,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
equation=eq,
**kw,
)
@handle_frontend_test(
fn_tree="tensorflow.linalg.logdet",
dtype_and_x=_get_hermitian_pos_def_matrix(),
)
def test_tensorflow_logdet(
*,
dtype_and_x,
frontend,
backend_fw,
test_flags,
fn_tree,
on_device,
):
dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
matrix=x,
)
@handle_frontend_test(
fn_tree="tensorflow.linalg.matmul",
dtype_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
shape=(3, 3),
num_arrays=2,
shared_dtype=True,
min_value=-1,
max_value=100,
),
transpose_a=st.booleans(),
transpose_b=st.booleans(),
test_with_out=st.just(False),
)
def test_tensorflow_matmul(
*,
dtype_x,
transpose_a,
transpose_b,
frontend,
backend_fw,
test_flags,
fn_tree,
on_device,
):
input_dtype, x = dtype_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
a=x[0],
b=x[1],
transpose_a=transpose_a,
transpose_b=transpose_b,
)
@handle_frontend_test(
fn_tree="tensorflow.linalg.matrix_rank",
dtype_x_hermitian_atol_rtol=_matrix_rank_helper(),
test_with_out=st.just(False),
)
def test_tensorflow_matrix_rank(
*,
dtype_x_hermitian_atol_rtol,
frontend,
test_flags,
backend_fw,
fn_tree,
on_device,
):
dtype, x, hermitian, atol, rtol = dtype_x_hermitian_atol_rtol
assume(matrix_is_stable(x, cond_limit=10))
helpers.test_frontend_function(
input_dtypes=dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
a=x,
tol=atol,
)
# matrix_transpose
@handle_frontend_test(
fn_tree="tensorflow.linalg.matrix_transpose",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
min_num_dims=2,
),
conjugate=st.booleans(),
test_with_out=st.just(False),
)
def test_tensorflow_matrix_transpose(
dtype_and_input,
conjugate,
backend_fw,
frontend,
test_flags,
fn_tree,
):
input_dtype, x = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
a=x[0],
conjugate=conjugate,
)
# norm
@handle_frontend_test(
fn_tree="tensorflow.linalg.norm",
aliases=["tensorflow.norm"],
dtype_values_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("valid"),
min_num_dims=3,
max_num_dims=5,
min_dim_size=1,
max_dim_size=4,
min_axis=-3,
max_axis=2,
),
ord=st.sampled_from([1, 2, np.inf]),
keepdims=st.booleans(),
)
def test_tensorflow_norm(
*,
dtype_values_axis,
ord,
keepdims,
backend_fw,
frontend,
test_flags,
fn_tree,
on_device,
):
input_dtype, x, axis = dtype_values_axis
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
tensor=x[0],
ord=ord,
axis=axis,
keepdims=keepdims,
)
# normalize
@handle_frontend_test(
fn_tree="tensorflow.linalg.normalize",
dtype_values_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("valid"),
large_abs_safety_factor=24,
small_abs_safety_factor=24,
safety_factor_scale="log",
min_num_dims=3,
max_num_dims=5,
min_dim_size=1,
max_dim_size=4,
min_axis=-3,
max_axis=2,
),
ord=st.sampled_from([1, 2, np.inf]),
test_with_out=st.just(False),
)
def test_tensorflow_normalize(
*,
dtype_values_axis,
ord,
backend_fw,
frontend,
test_flags,
fn_tree,
on_device,
):
input_dtype, x, axis = dtype_values_axis
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
tensor=x[0],
ord=ord,
axis=axis,
atol=1e-08,
)
# pinv
@handle_frontend_test(
fn_tree="tensorflow.linalg.pinv",
dtype_and_input=_get_dtype_and_matrix(),
)
def test_tensorflow_pinv(
*,
dtype_and_input,
frontend,
backend_fw,
test_flags,
fn_tree,
on_device,
):
input_dtype, x = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
rtol=1e-3,
atol=1e-3,
a=x[0],
rcond=1e-15,
)
# qr
@handle_frontend_test(
fn_tree="tensorflow.linalg.qr",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_value=0,
max_value=10,
shape=helpers.ints(min_value=2, max_value=5).map(lambda x: (x, x)),
),
)
def test_tensorflow_qr(
*,
dtype_and_x,
frontend,
test_flags,
fn_tree,
on_device,
backend_fw,
):
dtype, x = dtype_and_x
x = np.asarray(x[0], dtype=dtype[0])
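    # condition the matrix (symmetric positive-definite) before factorising,
    # mirroring the cholesky and svd tests in this file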
x = np.matmul(x.T, x) + np.identity(x.shape[0]) * 1e-3
ret, frontend_ret = helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
atol=1e-03,
rtol=1e-05,
input=x,
)
ret = [ivy.to_numpy(x) for x in ret]
frontend_ret = [np.asarray(x) for x in frontend_ret]
assert_all_close(
ret_np=ret[0],
ret_from_gt_np=frontend_ret[0],
rtol=1e-2,
atol=1e-2,
ground_truth_backend=frontend,
backend=backend_fw,
)
# set_diag
@handle_frontend_test(
fn_tree="tensorflow.linalg.set_diag",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
min_num_dims=2,
max_num_dims=3,
min_dim_size=3,
max_dim_size=6,
min_value=-10.0,
max_value=10.0,
),
)
def test_tensorflow_set_diag(
dtype_and_x,
frontend,
backend_fw,
test_flags,
fn_tree,
on_device,
):
dtype, x = dtype_and_x
x = ivy.squeeze(x)
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=x,
diagonal=x[0],
)
# slogdet
@handle_frontend_test(
fn_tree="tensorflow.linalg.slogdet",
dtype_and_x=_get_dtype_and_matrix(),
test_with_out=st.just(False),
)
def test_tensorflow_slogdet(
*,
dtype_and_x,
frontend,
backend_fw,
test_flags,
fn_tree,
on_device,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=x[0],
)
# solve
@handle_frontend_test(
fn_tree="tensorflow.linalg.solve",
x=helpers.get_first_solve_batch_matrix(choose_adjoint=True),
y=helpers.get_second_solve_batch_matrix(allow_simplified=False),
test_with_out=st.just(False),
)
def test_tensorflow_solve(
*,
x,
y,
frontend,
backend_fw,
test_flags,
fn_tree,
on_device,
):
input_dtype1, x1, adjoint = x
input_dtype2, x2, _ = y
helpers.test_frontend_function(
input_dtypes=[input_dtype1, input_dtype2],
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
rtol=1e-3,
atol=1e-3,
matrix=x1,
rhs=x2,
adjoint=adjoint,
)
@handle_frontend_test(
fn_tree="tensorflow.linalg.svd",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_value=0,
max_value=10,
shape=helpers.ints(min_value=2, max_value=5).map(lambda x: (x, x)),
),
full_matrices=st.booleans(),
compute_uv=st.just(True),
)
def test_tensorflow_svd(
*,
dtype_and_x,
backend_fw,
full_matrices,
compute_uv,
frontend,
test_flags,
fn_tree,
on_device,
):
dtype, x = dtype_and_x
x = np.asarray(x[0], dtype=dtype[0])
# make symmetric positive definite beforehand
x = np.matmul(x.T, x) + np.identity(x.shape[0]) * 1e-3
ret, frontend_ret = helpers.test_frontend_function(
input_dtypes=dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
atol=1e-03,
rtol=1e-05,
a=x,
full_matrices=full_matrices,
compute_uv=compute_uv,
)
ret = [ivy.to_numpy(x) for x in ret]
frontend_ret = [np.asarray(x) for x in frontend_ret]
u, s, vh = ret
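    # tf.linalg.svd returns (s, u, v) rather than (u, s, vh), hence the different
    # unpacking order and the transpose in the comparison below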
frontend_s, frontend_u, frontend_vh = frontend_ret
assert_all_close(
ret_np=u @ np.diag(s) @ vh,
ret_from_gt_np=frontend_u @ np.diag(frontend_s) @ frontend_vh.T,
rtol=1e-2,
atol=1e-2,
ground_truth_backend=frontend,
backend=backend_fw,
)
# tensor_diag
@handle_frontend_test(
fn_tree="tensorflow.linalg.tensor_diag",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
min_num_dims=1,
max_num_dims=1,
min_dim_size=5,
max_dim_size=10,
min_value=1,
max_value=10,
),
test_with_out=st.just(False),
)
def test_tensorflow_tensor_diag(
*,
dtype_and_x,
frontend,
test_flags,
fn_tree,
on_device,
backend_fw,
):
dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
diagonal=x[0],
)
# tensor_diag_part
@handle_frontend_test(
fn_tree="tensorflow.linalg.tensor_diag_part",
dtype_and_input=_get_dtype_and_rank_2k_tensors(),
test_with_out=st.just(False),
)
def test_tensorflow_tensor_diag_part(
*,
dtype_and_input,
frontend,
test_flags,
fn_tree,
on_device,
backend_fw,
):
dtype, input = dtype_and_input
helpers.test_frontend_function(
input_dtypes=dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
)
# tensordot
@handle_frontend_test(
fn_tree="tensorflow.linalg.tensordot",
dtype_x_y_axes=_get_dtype_value1_value2_axis_for_tensordot(
available_dtypes=helpers.get_dtypes("numeric"),
),
)
def test_tensorflow_tensordot(
*,
dtype_x_y_axes,
backend_fw,
frontend,
test_flags,
fn_tree,
on_device,
):
(
dtype,
x,
y,
axes,
) = dtype_x_y_axes
helpers.test_frontend_function(
input_dtypes=dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
a=x,
b=y,
axes=axes,
)
# trace
@handle_frontend_test(
fn_tree="tensorflow.linalg.trace",
dtype_and_input=_get_dtype_and_matrix(),
test_with_out=st.just(False),
)
def test_tensorflow_trace(
dtype_and_input,
backend_fw,
frontend,
test_flags,
fn_tree,
):
input_dtype, x = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
x=x[0],
)
# tridiagonal_solve
@handle_frontend_test(
fn_tree="tensorflow.linalg.tridiagonal_solve",
x=_get_tridiagonal_dtype_matrix_format(),
y=_get_second_matrix(),
transpose_rhs=st.just(False),
conjugate_rhs=st.booleans(),
)
def test_tensorflow_tridiagonal_solve(
*,
x,
y,
transpose_rhs,
conjugate_rhs,
frontend,
backend_fw,
test_flags,
fn_tree,
on_device,
):
input_dtype1, x1, diagonals_format = x
input_dtype2, x2 = y
helpers.test_frontend_function(
input_dtypes=[input_dtype1, input_dtype2],
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
rtol=1e-3,
atol=1e-3,
diagonals=x1,
rhs=x2,
diagonals_format=diagonals_format,
transpose_rhs=transpose_rhs,
conjugate_rhs=conjugate_rhs,
)
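# rebuilds a full matrix from the compact three-row diagonals layout (zeroing the
# unused corner entries), then applies the same validity check as the matrix format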
def tridiagonal_compact_filter(x):
diagonals = ivy.array(x)
dim = diagonals[0].shape[0]
diagonals[[0, -1], [-1, 0]] = 0
dummy_idx = [0, 0]
indices = ivy.array(
[
[(i, i + 1) for i in range(dim - 1)] + [dummy_idx],
[(i, i) for i in range(dim)],
[dummy_idx] + [(i + 1, i) for i in range(dim - 1)],
]
)
matrix = ivy.scatter_nd(
indices, diagonals, ivy.array([dim, dim]), reduction="replace"
)
return tridiagonal_matrix_filter(matrix)
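# accepts only non-singular matrices whose non-zero entries all lie on the main,
# super- and sub-diagonals, with every entry on those diagonals non-zero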
def tridiagonal_matrix_filter(x):
dim = x.shape[0]
if ivy.abs(ivy.det(x)) < 1e-3:
return False
for i in range(dim):
for j in range(dim):
cell = x[i][j]
if i in [j, j - 1, j + 1]:
if cell == 0:
return False
else:
if cell != 0:
return False
return True
| ivy/ivy_tests/test_ivy/test_frontends/test_tensorflow/test_linalg.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_tensorflow/test_linalg.py",
"repo_id": "ivy",
"token_count": 16615
} | 57 |
# global
import numpy as np
from hypothesis import assume, strategies as st
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers.testing_helpers import handle_frontend_test
# --- Helpers --- #
# --------------- #
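# top-k helper: draws (dtype, x, axis, k) with k bounded by the size of the chosen axis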
@st.composite
def _topk_helper(draw):
dtype, x, axis = draw(
helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("numeric"),
min_num_dims=1,
force_int_axis=True,
valid_axis=True,
)
)
k = draw(st.integers(min_value=1, max_value=x[0].shape[axis]))
return dtype, x, axis, k
# --- Main --- #
# ------------ #
# allclose
@handle_frontend_test(
fn_tree="torch.allclose",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
num_arrays=2,
shared_dtype=True,
),
equal_nan=st.booleans(),
)
def test_torch_allclose(
*,
dtype_and_input,
equal_nan,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
rtol=1e-05,
atol=1e-08,
input=input[0],
other=input[1],
equal_nan=equal_nan,
)
# argsort
@handle_frontend_test(
fn_tree="torch.argsort",
dtype_input_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("numeric"),
min_num_dims=1,
max_num_dims=5,
min_dim_size=1,
max_dim_size=5,
min_axis=-1,
max_axis=0,
),
descending=st.booleans(),
)
def test_torch_argsort(
*,
dtype_input_axis,
descending,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input, axis = dtype_input_axis
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
dim=axis,
descending=descending,
)
# eq
@handle_frontend_test(
fn_tree="torch.eq",
dtype_and_inputs=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
num_arrays=2,
allow_inf=False,
shared_dtype=True,
),
)
def test_torch_eq(
*,
dtype_and_inputs,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
inputs_dtypes, inputs = dtype_and_inputs
helpers.test_frontend_function(
input_dtypes=inputs_dtypes,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=inputs[0],
other=inputs[1],
)
# equal
@handle_frontend_test(
fn_tree="torch.equal",
dtype_and_inputs=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid", full=False),
num_arrays=2,
allow_inf=False,
shared_dtype=True,
),
)
def test_torch_equal(
*,
dtype_and_inputs,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
inputs_dtypes, inputs = dtype_and_inputs
helpers.test_frontend_function(
input_dtypes=inputs_dtypes,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=inputs[0],
other=inputs[1],
)
# fmax
@handle_frontend_test(
fn_tree="torch.fmax",
dtype_and_inputs=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
num_arrays=2,
shared_dtype=True,
min_value=-np.inf,
max_value=np.inf,
),
)
def test_torch_fmax(
*,
dtype_and_inputs,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, inputs = dtype_and_inputs
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=inputs[0],
other=inputs[1],
)
# fmin
@handle_frontend_test(
fn_tree="torch.fmin",
dtype_and_inputs=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
num_arrays=2,
shared_dtype=True,
min_value=-np.inf,
max_value=np.inf,
),
)
def test_torch_fmin(
*,
dtype_and_inputs,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, inputs = dtype_and_inputs
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=inputs[0],
other=inputs[1],
)
# greater
@handle_frontend_test(
fn_tree="torch.gt",
aliases=["torch.greater"],
dtype_and_inputs=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
num_arrays=2,
allow_inf=False,
shared_dtype=True,
),
)
def test_torch_greater(
*,
dtype_and_inputs,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, inputs = dtype_and_inputs
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=inputs[0],
other=inputs[1],
)
# greater_equal
@handle_frontend_test(
fn_tree="torch.ge",
aliases=["torch.greater_equal"],
dtype_and_inputs=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
num_arrays=2,
allow_inf=False,
shared_dtype=True,
),
)
def test_torch_greater_equal(
*,
dtype_and_inputs,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, inputs = dtype_and_inputs
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=inputs[0],
other=inputs[1],
)
# isclose
@handle_frontend_test(
fn_tree="torch.isclose",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
num_arrays=2,
shared_dtype=True,
),
equal_nan=st.booleans(),
)
def test_torch_isclose(
*,
dtype_and_input,
equal_nan,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
rtol=1e-05,
atol=1e-08,
input=input[0],
other=input[1],
equal_nan=equal_nan,
)
# isfinite
@handle_frontend_test(
fn_tree="torch.isfinite",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
min_value=-np.inf,
max_value=np.inf,
),
)
def test_torch_isfinite(
*,
dtype_and_input,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
)
@handle_frontend_test(
fn_tree="torch.isin",
dtype_and_inputs=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
num_arrays=2,
shared_dtype=True,
),
assume_unique=st.booleans(),
invert=st.booleans(),
)
def test_torch_isin(
*,
dtype_and_inputs,
assume_unique,
invert,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, inputs = dtype_and_inputs
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
elements=inputs[0],
test_elements=inputs[1],
assume_unique=assume_unique,
invert=invert,
)
# isinf
@handle_frontend_test(
fn_tree="torch.isinf",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
min_value=-np.inf,
max_value=np.inf,
),
)
def test_torch_isinf(
*,
dtype_and_input,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
)
# isnan
@handle_frontend_test(
fn_tree="torch.isnan",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid", full=False),
min_value=-np.inf,
max_value=np.inf,
),
)
def test_torch_isnan(
*,
dtype_and_input,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
)
# isneginf
@handle_frontend_test(
fn_tree="torch.isneginf",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
min_value=-np.inf,
max_value=np.inf,
),
)
def test_torch_isneginf(
*,
dtype_and_input,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
)
# isposinf
@handle_frontend_test(
fn_tree="torch.isposinf",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
min_value=-np.inf,
max_value=np.inf,
),
)
def test_torch_isposinf(
*,
dtype_and_input,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
)
# isreal
@handle_frontend_test(
fn_tree="torch.isreal",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
min_value=-np.inf,
max_value=np.inf,
),
)
def test_torch_isreal(
*,
dtype_and_input,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
)
# kthvalue
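# (the strategy below keeps only arrays whose values are all distinct, so the
# k-th smallest value is unambiguous)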
@handle_frontend_test(
fn_tree="torch.kthvalue",
dtype_input_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("valid"),
min_num_dims=1,
valid_axis=True,
force_int_axis=True,
).filter(lambda v: len(np.unique(v[1][0])) == len(np.ravel(v[1][0]))),
k=st.integers(min_value=1),
keepdim=st.booleans(),
)
def test_torch_kthvalue(
*,
dtype_input_axis,
k,
keepdim,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input, dim = dtype_input_axis
assume(k <= input[0].shape[dim])
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
k=k,
dim=dim,
keepdim=keepdim,
)
# less
@handle_frontend_test(
fn_tree="torch.less",
aliases=["torch.lt"],
dtype_and_inputs=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
num_arrays=2,
shared_dtype=True,
),
)
def test_torch_less(
*,
dtype_and_inputs,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, inputs = dtype_and_inputs
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=inputs[0],
other=inputs[1],
)
# less_equal
@handle_frontend_test(
fn_tree="torch.less_equal",
aliases=["torch.le"],
dtype_and_inputs=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
num_arrays=2,
shared_dtype=True,
),
)
def test_torch_less_equal(
*,
dtype_and_inputs,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, inputs = dtype_and_inputs
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=inputs[0],
other=inputs[1],
)
# maximum
@handle_frontend_test(
fn_tree="torch.maximum",
dtype_and_inputs=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
num_arrays=2,
shared_dtype=True,
),
)
def test_torch_maximum(
*,
dtype_and_inputs,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, inputs = dtype_and_inputs
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=inputs[0],
other=inputs[1],
)
@handle_frontend_test(
fn_tree="torch.minimum",
dtype_and_inputs=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
num_arrays=2,
shared_dtype=True,
),
)
def test_torch_minimum(
*,
dtype_and_inputs,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, inputs = dtype_and_inputs
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=inputs[0],
other=inputs[1],
)
# msort
@handle_frontend_test(
fn_tree="torch.msort",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
min_num_dims=2,
min_dim_size=2,
),
)
def test_torch_msort(
*,
dtype_and_input,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, x = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=x[0],
)
# not_equal
@handle_frontend_test(
fn_tree="torch.not_equal",
aliases=["torch.ne"],
dtype_and_inputs=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid", full=False),
num_arrays=2,
shared_dtype=True,
),
)
def test_torch_not_equal(
*,
dtype_and_inputs,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, inputs = dtype_and_inputs
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=inputs[0],
other=inputs[1],
)
# sort
@handle_frontend_test(
fn_tree="torch.sort",
dtype_input_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("numeric"),
min_num_dims=1,
min_dim_size=1,
min_axis=-1,
max_axis=0,
),
descending=st.booleans(),
stable=st.booleans(),
)
def test_torch_sort(
*,
dtype_input_axis,
descending,
stable,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input, axis = dtype_input_axis
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
dim=axis,
descending=descending,
stable=stable,
)
# topk
# TODO: add value test after the stable sorting is added to torch
# https://github.com/pytorch/pytorch/issues/88184
@handle_frontend_test(
fn_tree="torch.topk",
dtype_x_axis_k=_topk_helper(),
largest=st.booleans(),
sorted=st.booleans(),
)
def test_torch_topk(
dtype_x_axis_k,
largest,
sorted,
frontend,
test_flags,
fn_tree,
backend_fw,
):
input_dtype, input, axis, k = dtype_x_axis_k
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
input=input[0],
k=k,
dim=axis,
largest=largest,
sorted=sorted,
test_values=False,
)
| ivy/ivy_tests/test_ivy/test_frontends/test_torch/test_comparison_ops.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_torch/test_comparison_ops.py",
"repo_id": "ivy",
"token_count": 9522
} | 58 |
# global
import ivy
from hypothesis import assume, strategies as st
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_test
# --- Helpers --- #
# --------------- #
def _filter_dtypes(input_dtype):
assume(("bfloat16" not in input_dtype) and ("float16" not in input_dtype))
@st.composite
def _generate_prelu_arrays(draw):
arr_size = draw(helpers.ints(min_value=2, max_value=5))
dtype = draw(helpers.get_dtypes("float", index=1, full=False))
input = draw(
helpers.array_values(
            dtype=dtype[0], shape=(arr_size,), min_value=0, max_value=10
)
)
weight = draw(
helpers.array_values(dtype=dtype[0], shape=(1,), min_value=0, max_value=1.0)
)
input_weight = input, weight
return dtype, input_weight
@st.composite
def _glu_arrays(draw):
dtype = draw(helpers.get_dtypes("float", index=1, full=False))
shape = draw(st.shared(helpers.ints(min_value=1, max_value=5)))
shape = shape * 2
input = draw(helpers.array_values(dtype=dtype[0], shape=(shape, shape)))
dim = draw(st.shared(helpers.get_axis(shape=(shape,), force_int=True)))
return dtype, input, dim
@st.composite
def _x_and_scaled_attention(draw, dtypes):
dtype = draw(dtypes)
num_queries = draw(helpers.ints(min_value=2, max_value=4))
num_keys = draw(helpers.ints(min_value=2, max_value=4))
feat_dim = draw(helpers.ints(min_value=2, max_value=4))
batch_size = draw(helpers.ints(min_value=1, max_value=2))
q_shape = (batch_size,) + (num_queries,) + (feat_dim,)
k_shape = (batch_size,) + (num_keys,) + (feat_dim,)
v_shape = (batch_size,) + (num_keys,) + (feat_dim,)
mask_shape = (batch_size,) + (num_queries,) + (num_keys,)
query = draw(
helpers.array_values(
dtype=dtype[0],
shape=q_shape,
min_value=0,
max_value=1e2,
large_abs_safety_factor=7,
small_abs_safety_factor=7,
safety_factor_scale="linear",
)
)
key = draw(
helpers.array_values(
dtype=dtype[0],
shape=k_shape,
min_value=0,
max_value=1e2,
large_abs_safety_factor=7,
small_abs_safety_factor=7,
safety_factor_scale="linear",
)
)
value = draw(
helpers.array_values(
dtype=dtype[0],
shape=v_shape,
min_value=0,
max_value=1e2,
large_abs_safety_factor=7,
small_abs_safety_factor=7,
safety_factor_scale="linear",
)
)
mask = draw(
helpers.array_values(
dtype="bool",
shape=mask_shape,
)
| st.none()
)
return dtype, query, key, value, mask
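# A minimal reference sketch (not invoked by the tests) of the computation the
# strategy above feeds into scaled_dot_product_attention, ignoring dropout and
# masking: softmax(Q @ K^T / sqrt(d)) @ V, with query/key/value assumed to be
# ivy arrays of the shapes drawn above.
def _naive_scaled_dot_product_attention(query, key, value):
    d = query.shape[-1]
    scores = ivy.matmul(query, ivy.matrix_transpose(key)) / (d**0.5)
    return ivy.matmul(ivy.softmax(scores, axis=-1), value)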
# --- Main --- #
# ------------ #
# celu
@handle_frontend_test(
fn_tree="torch.nn.functional.celu",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_num_dims=1,
),
alpha=helpers.floats(min_value=0.1, max_value=1.0),
test_inplace=st.booleans(),
test_with_out=st.just(False),
)
def test_torch_celu(
*,
dtype_and_input,
alpha,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, x = dtype_and_input
_filter_dtypes(input_dtype)
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=x[0],
rtol=1e-02,
atol=1e-02,
alpha=alpha,
)
# celu_
@handle_frontend_test(
fn_tree="torch.nn.functional.celu_",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_num_dims=1,
),
alpha=helpers.floats(min_value=0.1, max_value=1.0),
test_inplace=st.just(True),
test_with_out=st.just(False),
)
def test_torch_celu_(
*,
dtype_and_input,
alpha,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, x = dtype_and_input
_filter_dtypes(input_dtype)
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=x[0],
alpha=alpha,
)
# elu
@handle_frontend_test(
fn_tree="torch.nn.functional.elu",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
alpha=helpers.floats(min_value=0.1, max_value=1.0, exclude_min=True),
test_inplace=st.booleans(),
test_with_out=st.just(False),
)
def test_torch_elu(
*,
dtype_and_input,
alpha,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
_filter_dtypes(input_dtype)
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
alpha=alpha,
)
# elu_
@handle_frontend_test(
fn_tree="torch.nn.functional.elu_",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
alpha=helpers.floats(min_value=0.1, max_value=1.0, exclude_min=True),
)
def test_torch_elu_(
*,
dtype_and_input,
alpha,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
_filter_dtypes(input_dtype)
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
input=input[0],
alpha=alpha,
)
# gelu
@handle_frontend_test(
fn_tree="torch.nn.functional.gelu",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
max_value=1e04,
),
approximate=st.sampled_from(["none", "tanh"]),
)
def test_torch_gelu(
*,
dtype_and_x,
approximate,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=x[0],
rtol=1e-02,
atol=1e-02,
approximate=approximate,
)
# glu
@handle_frontend_test(
fn_tree="torch.nn.functional.glu",
dtype_input_dim=_glu_arrays(),
)
def test_torch_glu(
*,
dtype_input_dim,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input, dim = dtype_input_dim
_filter_dtypes(input_dtype)
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
dim=dim,
)
# gumbel_softmax
@handle_frontend_test(
fn_tree="torch.nn.functional.gumbel_softmax",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
tau=st.floats(min_value=0),
hard=st.booleans(),
eps=st.floats(min_value=0, max_value=1),
dim=st.integers(),
test_with_out=st.just(False),
test_inplace=st.booleans(),
)
def test_torch_gumbel_softmax(
*,
dtype_and_x,
tau,
hard,
eps,
dim,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
logits=x[0],
tau=tau,
hard=hard,
eps=eps,
dim=dim,
)
# hardshrink
@handle_frontend_test(
fn_tree="torch.nn.functional.hardshrink",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
lambd=helpers.floats(min_value=0, max_value=1, exclude_min=True),
)
def test_torch_hardshrink(
*,
dtype_and_input,
lambd,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
_filter_dtypes(input_dtype)
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
lambd=lambd,
)
# hardsigmoid
@handle_frontend_test(
fn_tree="torch.nn.functional.hardsigmoid",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
test_with_out=st.just(False),
test_inplace=st.booleans(),
)
def test_torch_hardsigmoid(
*,
dtype_and_input,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
_filter_dtypes(input_dtype)
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
)
# hardswish
@handle_frontend_test(
fn_tree="torch.nn.functional.hardswish",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
safety_factor_scale="log",
),
test_with_out=st.just(False),
test_inplace=st.booleans(),
)
def test_torch_hardswish(
*,
dtype_and_input,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
_filter_dtypes(input_dtype)
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
)
# hardtanh
@handle_frontend_test(
fn_tree="torch.nn.functional.hardtanh",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
max_val=st.floats(min_value=0, max_value=1, exclude_min=True),
test_with_out=st.just(False),
test_inplace=st.booleans(),
)
def test_torch_hardtanh(
*,
dtype_and_x,
max_val,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, x = dtype_and_x
max_min = max_val, -max_val
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=x[0],
min_val=max_min[1],
max_val=max_min[0],
)
# hardtanh_
@handle_frontend_test(
fn_tree="torch.nn.functional.hardtanh_",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
max_val=st.floats(min_value=0, max_value=1, exclude_min=True),
test_with_out=st.just(False),
test_inplace=st.booleans(),
)
def test_torch_hardtanh_(
*,
dtype_and_x,
max_val,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, x = dtype_and_x
max_min = max_val, -max_val
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
input=x[0],
min_val=max_min[1],
max_val=max_min[0],
)
# leaky_relu
@handle_frontend_test(
fn_tree="torch.nn.functional.leaky_relu",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
alpha=st.floats(min_value=0.0, max_value=1.0, exclude_min=True),
test_inplace=st.booleans(),
test_with_out=st.just(False),
)
def test_torch_leaky_relu(
*,
dtype_and_x,
alpha,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=x[0],
rtol=1e-02,
atol=1e-02,
negative_slope=alpha,
)
# leaky_relu_
# ToDo: add value test once inplace testing is implemented
@handle_frontend_test(
fn_tree="torch.nn.functional.leaky_relu_",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
alpha=st.floats(min_value=0, max_value=1, exclude_min=True),
test_with_out=st.just(False),
test_inplace=st.booleans(),
)
def test_torch_leaky_relu_(
*,
dtype_and_x,
alpha,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
input=x[0],
negative_slope=alpha,
)
# local_response_norm
@handle_frontend_test(
fn_tree="torch.nn.functional.local_response_norm",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
min_num_dims=3,
max_num_dims=4,
large_abs_safety_factor=2,
small_abs_safety_factor=2,
safety_factor_scale="log",
),
size=helpers.ints(min_value=3, max_value=10),
alpha=helpers.floats(min_value=1e-4, max_value=1e-3),
beta=helpers.floats(min_value=0.5, max_value=2.0),
k=helpers.ints(min_value=0, max_value=1),
)
def test_torch_local_response_norm(
*,
dtype_and_x,
size,
alpha,
beta,
k,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
dtype, x = dtype_and_x
_filter_dtypes(dtype)
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=x[0],
size=size,
alpha=alpha,
beta=beta,
k=k,
)
# log_softmax
@handle_frontend_test(
fn_tree="torch.nn.functional.log_softmax",
dtype_x_and_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("float"),
min_num_dims=1,
max_axes_size=1,
force_int_axis=True,
valid_axis=True,
),
dtypes=helpers.get_dtypes("float", none=False, full=False),
)
def test_torch_log_softmax(
*,
dtype_x_and_axis,
dtypes,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, x, axis = dtype_x_and_axis
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=x[0],
dim=axis,
_stacklevel=3,
dtype=dtypes[0],
)
# logsigmoid
@handle_frontend_test(
fn_tree="torch.nn.functional.logsigmoid",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
)
def test_torch_logsigmoid(
*,
dtype_and_x,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=x[0],
)
# mish
@handle_frontend_test(
fn_tree="torch.nn.functional.mish",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
test_inplace=st.booleans(),
test_with_out=st.just(False),
)
def test_torch_mish(
*,
dtype_and_input,
fn_tree,
frontend,
on_device,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
_filter_dtypes(input_dtype)
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
)
# normalize
@handle_frontend_test(
fn_tree="torch.nn.functional.normalize",
dtype_x_and_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("float"),
min_num_dims=1,
max_axes_size=1,
force_int_axis=True,
valid_axis=True,
),
p=helpers.ints(min_value=2, max_value=5),
)
def test_torch_normalize(
*,
dtype_x_and_axis,
p,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
dtype, x, axis = dtype_x_and_axis
_filter_dtypes(dtype)
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=x[0],
p=p,
dim=axis,
eps=1e-12,
)
# prelu
@handle_frontend_test(
fn_tree="torch.nn.functional.prelu",
dtype_input_and_weight=_generate_prelu_arrays(),
)
def test_torch_prelu(
*,
dtype_input_and_weight,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
dtype, inputs = dtype_input_and_weight
_filter_dtypes(dtype)
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=inputs[0],
weight=inputs[1],
)
# relu
@handle_frontend_test(
fn_tree="torch.nn.functional.relu",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
test_inplace=st.booleans(),
test_with_out=st.just(False),
)
def test_torch_relu(
dtype_and_input,
frontend,
test_flags,
fn_tree,
backend_fw,
):
input_dtype, input = dtype_and_input
_filter_dtypes(input_dtype)
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
input=input[0],
)
# relu6
@handle_frontend_test(
fn_tree="torch.nn.functional.relu6",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
),
test_inplace=st.booleans(),
test_with_out=st.just(False),
)
def test_torch_relu6(
*,
dtype_and_input,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
_filter_dtypes(input_dtype)
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
)
# relu_
@handle_frontend_test(
fn_tree="torch.nn.functional.relu_",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
test_with_out=st.just(False),
)
def test_torch_relu_(
dtype_and_input,
frontend,
test_flags,
fn_tree,
backend_fw,
):
input_dtype, input = dtype_and_input
_filter_dtypes(input_dtype)
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
input=input[0],
)
# rrelu
@handle_frontend_test(
fn_tree="torch.nn.functional.rrelu",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
lower=helpers.floats(min_value=0, max_value=0.5, exclude_min=True),
upper=helpers.floats(min_value=0.5, max_value=1.0, exclude_min=True),
test_with_out=st.just(False),
test_inplace=st.booleans(),
)
def test_torch_rrelu(
*,
dtype_and_input,
lower,
upper,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
lower=lower,
upper=upper,
)
# rrelu_
@handle_frontend_test(
fn_tree="torch.nn.functional.rrelu_",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
lower=helpers.floats(min_value=0, max_value=0.5, exclude_min=True),
upper=helpers.floats(min_value=0.5, max_value=1.0, exclude_min=True),
test_with_out=st.just(False),
test_inplace=st.booleans(),
)
def test_torch_rrelu_(
*,
dtype_and_input,
lower,
upper,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
input=input[0],
lower=lower,
upper=upper,
)
# scaled_dot_product_attention
@handle_frontend_test(
fn_tree="torch.nn.functional.scaled_dot_product_attention",
dtype_q_k_v_mask=_x_and_scaled_attention(
dtypes=helpers.get_dtypes("float"),
),
dropout_p=st.floats(min_value=0, max_value=0.99),
is_causal=st.booleans(),
)
def test_torch_scaled_dot_product_attention(
*,
dtype_q_k_v_mask,
dropout_p,
is_causal,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
(dtype, query, key, value, mask) = dtype_q_k_v_mask
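    # torch disallows combining an explicit attn_mask with is_causal=True, so
    # is_causal is forced off whenever a mask was drawn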
is_causal = is_causal if mask is None else False
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=dropout_p == 0.0,
rtol=1e-05,
atol=1e-05,
query=query,
key=key,
value=value,
attn_mask=mask,
dropout_p=dropout_p,
is_causal=is_causal,
)
# selu
@handle_frontend_test(
fn_tree="torch.nn.functional.selu",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
test_inplace=st.booleans(),
test_with_out=st.just(False),
)
def test_torch_selu(
*,
dtype_and_input,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
_filter_dtypes(input_dtype)
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
)
# sigmoid
@handle_frontend_test(
fn_tree="torch.nn.functional.sigmoid",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
)
def test_torch_sigmoid(
*,
dtype_and_x,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
atol=1e-2,
input=x[0],
)
# silu
@handle_frontend_test(
fn_tree="torch.nn.functional.silu",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
test_with_out=st.just(False),
test_inplace=st.booleans(),
)
def test_torch_silu(
*,
dtype_and_input,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
rtol=1e-2,
atol=1e-2,
input=input[0],
)
# softmax
@handle_frontend_test(
fn_tree="torch.nn.functional.softmax",
dtype_x_and_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("float"),
min_num_dims=1,
max_axes_size=1,
force_int_axis=True,
valid_axis=True,
),
dtypes=helpers.get_dtypes("float", full=False),
)
def test_torch_softmax(
*,
dtype_x_and_axis,
dtypes,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, x, axis = dtype_x_and_axis
ivy.set_backend(backend_fw)
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=x[0],
dim=axis,
_stacklevel=3,
dtype=dtypes[0],
atol=1e-03,
)
ivy.previous_backend()
# softmin
@handle_frontend_test(
fn_tree="torch.nn.functional.softmin",
dtype_x_and_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("float"),
min_num_dims=1,
max_axes_size=1,
force_int_axis=True,
valid_axis=True,
),
dtypes=helpers.get_dtypes("float", full=False),
)
def test_torch_softmin(
*,
dtype_x_and_axis,
dtypes,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, x, axis = dtype_x_and_axis
ivy.set_backend(backend_fw)
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=x[0],
dim=axis,
dtype=ivy.as_ivy_dtype(dtypes[0]),
)
ivy.previous_backend()
# softplus
@handle_frontend_test(
fn_tree="torch.nn.functional.softplus",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
beta=st.integers(min_value=1, max_value=20),
threshold=st.integers(min_value=0, max_value=40),
test_with_out=st.just(False),
)
def test_torch_softplus(
*,
dtype_and_x,
beta,
threshold,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=x[0],
beta=beta,
threshold=threshold,
)
# softshrink
@handle_frontend_test(
fn_tree="torch.nn.functional.softshrink",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
lambd=helpers.floats(min_value=0, max_value=1, exclude_min=True),
)
def test_torch_softshrink(
*,
dtype_and_input,
lambd,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
_filter_dtypes(input_dtype)
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
lambd=lambd,
)
# softsign
@handle_frontend_test(
fn_tree="torch.nn.functional.softsign",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
),
)
def test_torch_softsign(
*,
dtype_and_input,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
)
# tanh
@handle_frontend_test(
fn_tree="torch.nn.functional.tanh",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
)
def test_torch_tanh(
*,
dtype_and_x,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
atol=1e-2,
input=x[0],
)
# tanhshrink
@handle_frontend_test(
fn_tree="torch.nn.functional.tanhshrink",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
)
def test_torch_tanhshrink(
*,
dtype_and_input,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
)
# threshold
@handle_frontend_test(
fn_tree="torch.nn.functional.threshold",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
threshold=helpers.floats(min_value=0.0, max_value=1.0),
value=helpers.ints(min_value=5, max_value=20),
test_with_out=st.just(False),
test_inplace=st.booleans(),
)
def test_torch_threshold(
*,
dtype_and_input,
threshold,
value,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
threshold=threshold,
value=value,
)
# threshold_
@handle_frontend_test(
fn_tree="torch.nn.functional.threshold_",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
threshold=helpers.floats(min_value=0.0, max_value=1.0),
value=helpers.ints(min_value=5, max_value=20),
test_with_out=st.just(False),
test_inplace=st.booleans(),
)
def test_torch_threshold_(
*,
dtype_and_input,
threshold,
value,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
input_dtype, input = dtype_and_input
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=input[0],
threshold=threshold,
value=value,
)
| ivy/ivy_tests/test_ivy/test_frontends/test_torch/test_nn/test_functional/test_non_linear_activation_functions.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_torch/test_nn/test_functional/test_non_linear_activation_functions.py",
"repo_id": "ivy",
"token_count": 16318
} | 59 |
"""Collection of tests for searching functions."""
# Global
from hypothesis import strategies as st
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_test
# --- Helpers --- #
# --------------- #
@st.composite
def _broadcastable_trio(draw):
shape = draw(helpers.get_shape(min_num_dims=1, min_dim_size=1))
dtype = draw(st.one_of(st.just(["bool"]), helpers.get_dtypes("valid", full=False)))
cond = draw(helpers.array_values(dtype=dtype[0], shape=shape))
dtypes, xs = draw(
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
num_arrays=2,
shape=shape,
shared_dtype=True,
large_abs_safety_factor=16,
small_abs_safety_factor=16,
safety_factor_scale="log",
)
)
return cond, xs, dtypes
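# A minimal sketch (hand-written values, not drawn from the strategy above) of
# the trio that test_where feeds to ivy.where: entries of x1 are selected
# where the condition holds and entries of x2 elsewhere.
def _demo_broadcastable_trio():
    import ivy  # local import; this module does not import ivy at top level

    cond = ivy.array([True, False, True])
    x1 = ivy.array([1.0, 2.0, 3.0])
    x2 = ivy.array([10.0, 20.0, 30.0])
    return ivy.where(cond, x1, x2)  # -> ivy.array([1., 20., 3.])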
# Helpers #
###########
@st.composite
def _dtype_x_limited_axis(draw, *, allow_none=False):
dtype, x, shape = draw(
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_num_dims=1,
min_dim_size=1,
ret_shape=True,
)
)
if allow_none and draw(st.booleans()):
return dtype, x, None
axis = draw(helpers.ints(min_value=0, max_value=len(shape) - 1))
return dtype, x, axis
# --- Main --- #
# ------------ #
# Functions #
#############
@handle_test(
fn_tree="functional.ivy.argmax",
dtype_x_axis=_dtype_x_limited_axis(allow_none=True),
keepdims=st.booleans(),
dtype=helpers.get_dtypes("numeric", full=False, none=True),
select_last_index=st.booleans(),
test_gradients=st.just(False),
)
def test_argmax(
*,
dtype_x_axis,
keepdims,
dtype,
select_last_index,
test_flags,
backend_fw,
fn_name,
on_device
):
input_dtype, x, axis = dtype_x_axis
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
x=x[0],
axis=axis,
keepdims=keepdims,
dtype=dtype[0],
select_last_index=select_last_index,
)
@handle_test(
fn_tree="functional.ivy.argmin",
dtype_x_axis=_dtype_x_limited_axis(allow_none=True),
keepdims=st.booleans(),
output_dtype=helpers.get_dtypes("integer", full=False, none=True),
select_last_index=st.booleans(),
)
def test_argmin(
*,
dtype_x_axis,
keepdims,
output_dtype,
select_last_index,
test_flags,
backend_fw,
fn_name,
on_device
):
input_dtype, x, axis = dtype_x_axis
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
x=x[0],
axis=axis,
keepdims=keepdims,
dtype=output_dtype[0],
select_last_index=select_last_index,
)
# argwhere
@handle_test(
fn_tree="functional.ivy.argwhere",
dtype_and_x=helpers.dtype_and_values(available_dtypes=("bool",)),
ground_truth_backend="torch",
)
def test_argwhere(*, dtype_and_x, test_flags, backend_fw, fn_name, on_device):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
x=x[0],
)
@handle_test(
fn_tree="functional.ivy.nonzero",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
min_num_dims=1,
min_dim_size=1,
),
as_tuple=st.booleans(),
size=st.integers(min_value=1, max_value=5),
fill_value=st.one_of(st.integers(0, 5), helpers.floats()),
test_with_out=st.just(False),
)
def test_nonzero(
*,
dtype_and_x,
as_tuple,
size,
fill_value,
test_flags,
backend_fw,
fn_name,
on_device
):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
x=x[0],
as_tuple=as_tuple,
size=size,
fill_value=fill_value,
)
@handle_test(
fn_tree="functional.ivy.where",
broadcastables=_broadcastable_trio(),
)
def test_where(*, broadcastables, test_flags, backend_fw, fn_name, on_device):
cond, xs, dtypes = broadcastables
helpers.test_function(
input_dtypes=[str(cond.dtype)] + dtypes,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
condition=cond,
x1=xs[0],
x2=xs[1],
)
| ivy/ivy_tests/test_ivy/test_functional/test_core/test_searching.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_functional/test_core/test_searching.py",
"repo_id": "ivy",
"token_count": 2354
} | 60 |
# global
from hypothesis import strategies as st
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_test
@handle_test(
fn_tree="functional.ivy.experimental.l2_normalize",
dtype_and_x=helpers.arrays_and_axes(
available_dtypes=helpers.get_dtypes("float"),
num=1,
return_dtype=True,
force_int_axis=True,
),
test_gradients=st.just(False),
)
def test_l2_normalize(*, dtype_and_x, test_flags, backend_fw, fn_name, on_device):
input_dtype, x, axis = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
rtol_=1e-1,
x=x[0],
axis=axis,
)
# lp_normalize
@handle_test(
fn_tree="functional.ivy.experimental.lp_normalize",
dtype_and_x=helpers.arrays_and_axes(
available_dtypes=helpers.get_dtypes("float"),
num=1,
return_dtype=True,
force_int_axis=True,
),
p=st.floats(min_value=0.1, max_value=2),
test_gradients=st.just(False),
)
def test_lp_normalize(*, dtype_and_x, test_flags, backend_fw, fn_name, on_device, p):
input_dtype, x, axis = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
rtol_=1e-1,
atol_=1e-1,
x=x[0],
axis=axis,
p=p,
)
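# For reference, both tests above exercise the same normalisation family:
# lp_normalize divides by the p-norm along `axis`, out = x / ||x||_p, and
# l2_normalize is the p = 2 special case.  Whether a small epsilon floor is
# applied to the denominator is an implementation detail not asserted here.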
| ivy/ivy_tests/test_ivy/test_functional/test_experimental/test_core/test_norms.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_functional/test_experimental/test_core/test_norms.py",
"repo_id": "ivy",
"token_count": 781
} | 61 |
# global
from hypothesis import strategies as st
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_test
# binary_cross_entropy
@handle_test(
fn_tree="functional.ivy.binary_cross_entropy",
dtype_and_true=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("integer"),
min_value=1e-04,
max_value=1,
allow_inf=False,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
shape=(5,),
),
dtype_and_pred=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_value=1e-04,
max_value=1,
allow_inf=False,
exclude_min=True,
exclude_max=True,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
shape=(5,),
),
dtype_and_pos=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_value=1e-04,
max_value=1,
allow_inf=False,
exclude_min=True,
exclude_max=True,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
shape=(5,),
),
reduction=st.sampled_from(["none", "sum", "mean"]),
axis=helpers.ints(min_value=-1, max_value=0),
epsilon=helpers.floats(min_value=0, max_value=1.0),
from_logits=st.booleans(),
)
def test_binary_cross_entropy(
dtype_and_true,
dtype_and_pred,
dtype_and_pos,
from_logits,
reduction,
axis,
epsilon,
test_flags,
backend_fw,
fn_name,
on_device,
):
dtype_true, true = dtype_and_true
dtype_pred, pred = dtype_and_pred
dtype_pos_weight, pos_weight = dtype_and_pos
if from_logits:
helpers.test_function(
input_dtypes=dtype_true + dtype_pred + dtype_pos_weight,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
rtol_=1e-02,
atol_=1e-02,
true=true[0],
pred=pred[0],
axis=axis,
epsilon=epsilon,
reduction=reduction,
from_logits=from_logits,
pos_weight=pos_weight[0],
)
else:
helpers.test_function(
input_dtypes=dtype_true + dtype_pred,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
rtol_=1e-02,
atol_=1e-02,
true=true[0],
pred=pred[0],
axis=axis,
epsilon=epsilon,
reduction=reduction,
from_logits=from_logits,
)
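# For reference, the quantity under test above (standard binary cross-entropy,
# before any reduction) is loss_i = -(t_i * log(p_i) + (1 - t_i) * log(1 - p_i));
# exactly how `epsilon` and `pos_weight` enter is an ivy implementation detail
# that the test checks only numerically against the ground-truth backend.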
# cross_entropy
@handle_test(
fn_tree="functional.ivy.cross_entropy",
dtype_true_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("integer"),
min_value=1e-04,
max_value=1,
allow_inf=False,
valid_axis=True,
allow_neg_axes=True,
force_int_axis=True,
),
dtype_and_pred=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_value=1e-04,
max_value=1,
allow_inf=False,
),
reduction=st.sampled_from(["none", "sum", "mean"]),
epsilon=helpers.floats(min_value=0.0, max_value=1.0),
)
def test_cross_entropy(
dtype_true_axis,
dtype_and_pred,
reduction,
epsilon,
test_flags,
backend_fw,
fn_name,
on_device,
):
pred_dtype, pred = dtype_and_pred
true_dtype, true, axis = dtype_true_axis
helpers.test_function(
input_dtypes=true_dtype + pred_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
rtol_=1e-02,
atol_=1e-02,
true=true[0],
pred=pred[0],
axis=axis,
epsilon=epsilon,
reduction=reduction,
)
# sparse_cross_entropy
@handle_test(
fn_tree="functional.ivy.sparse_cross_entropy",
dtype_and_true=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("integer"),
min_value=0,
max_value=2,
allow_inf=False,
min_num_dims=1,
max_num_dims=1,
min_dim_size=3,
),
dtype_and_pred=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
small_abs_safety_factor=4,
safety_factor_scale="log",
max_value=1,
allow_inf=False,
exclude_min=True,
exclude_max=True,
min_num_dims=1,
max_num_dims=1,
min_dim_size=3,
),
reduction=st.sampled_from(["none", "sum", "mean"]),
axis=helpers.ints(min_value=-1, max_value=0),
epsilon=helpers.floats(min_value=0.01, max_value=0.49),
)
def test_sparse_cross_entropy(
dtype_and_true,
dtype_and_pred,
reduction,
axis,
epsilon,
test_flags,
backend_fw,
fn_name,
on_device,
):
true_dtype, true = dtype_and_true
pred_dtype, pred = dtype_and_pred
helpers.test_function(
input_dtypes=true_dtype + pred_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
true=true[0],
pred=pred[0],
axis=axis,
epsilon=epsilon,
reduction=reduction,
)
| ivy/ivy_tests/test_ivy/test_functional/test_nn/test_losses.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_functional/test_nn/test_losses.py",
"repo_id": "ivy",
"token_count": 2867
} | 62 |
import numpy as np
import ivy
import pytest
from unittest.mock import patch
from ivy.func_wrapper import handle_array_like_without_promotion
from typing import Union, Tuple, List, Sequence
# --- Helpers --- #
# --------------- #
def _fn1(x: Union[ivy.Array, Tuple[int, int]]):
return x
def _fn2(x: Union[ivy.Array, ivy.NativeArray]):
return x
def _fn3(x: List[ivy.Array]):
return x
def _fn4(x: Union[Sequence[ivy.Array], ivy.Array]):
return x
def _fn5(x):
    # Assert input was converted to a native array
assert isinstance(x, ivy.NativeArray)
def _fn6(x):
# Assert input was converted to Ivy Array
assert isinstance(x, ivy.Array)
def _fn7(x):
# Assert input was converted to native array
assert isinstance(x, ivy.NativeArray)
return x
def _fn8(x):
return ivy.ones_like(x)
def _jl(x, *args, fn_original, **kwargs):
return fn_original(x) * 3j
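# When attached as _fn8.jax_like below, this callable takes over the complex
# handling entirely: ivy.handle_complex_input passes the wrapped function in
# as fn_original, so the output becomes ones_like(x) * 3j elementwise,
# matching the [3j, 3j, 3j] expectation in the parametrisation.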
# --- Main --- #
# ------------ #
@pytest.mark.parametrize(
("fn", "x", "expected_type"),
[
(_fn1, (1, 2), tuple),
(_fn2, (1, 2), ivy.Array),
(_fn2, [1, 2], ivy.Array),
(_fn3, [1, 2], list),
(_fn4, [1, 2], list),
],
)
def test_handle_array_like_without_promotion(fn, x, expected_type, backend_fw):
ivy.set_backend(backend_fw)
assert isinstance(handle_array_like_without_promotion(fn)(x), expected_type)
ivy.previous_backend()
@pytest.mark.parametrize(
("x", "mode", "jax_like", "expected"),
[
([3.0, 7.0, -5.0], None, None, [1.0, 1.0, 1.0]),
([3 + 4j, 7 - 6j, -5 - 2j], None, None, [1 + 0j, 1 + 0j, 1 + 0j]),
([3 + 4j, 7 - 6j, -5 - 2j], "split", None, [1 + 1j, 1 + 1j, 1 + 1j]),
(
[3 + 4j, 7 - 6j, -5 - 2j],
"magnitude",
None,
[0.6 + 0.8j, 0.75926 - 0.65079j, -0.92848 - 0.37139j],
),
([3 + 4j, 7 - 6j, -5 - 2j], "jax", None, [1 + 0j, 1 + 0j, 1 + 0j]),
([3 + 4j, 7 - 6j, -5 - 2j], "jax", "entire", [1 + 0j, 1 + 0j, 1 + 0j]),
([3 + 4j, 7 - 6j, -5 - 2j], "jax", "split", [1 + 1j, 1 + 1j, 1 + 1j]),
(
[3 + 4j, 7 - 6j, -5 - 2j],
"jax",
"magnitude",
[0.6 + 0.8j, 0.75926 - 0.65079j, -0.92848 - 0.37139j],
),
([3 + 4j, 7 - 6j, -5 - 2j], "jax", _jl, [3j, 3j, 3j]),
],
)
def test_handle_complex_input(x, mode, jax_like, expected, backend_fw):
ivy.set_backend(backend_fw)
x = ivy.array(x)
expected = ivy.array(expected)
if jax_like is not None:
_fn8.jax_like = jax_like
elif hasattr(_fn8, "jax_like"):
# _fn8 might have the jax_like attribute still attached from previous tests
delattr(_fn8, "jax_like")
test_fn = ivy.handle_complex_input(_fn8)
out = test_fn(x) if mode is None else test_fn(x, complex_mode=mode)
if "float" in x.dtype:
assert ivy.all(out == expected)
else:
        # check |out - expected| < 1e-4 elementwise; this needs logical_and
        # (with logical_or every element satisfies at least one inequality,
        # making the assertion vacuously true)
        assert ivy.all(
            ivy.logical_and(
                ivy.real(out) > ivy.real(expected) - 1e-4,
                ivy.real(out) < ivy.real(expected) + 1e-4,
            )
        )
        assert ivy.all(
            ivy.logical_and(
                ivy.imag(out) > ivy.imag(expected) - 1e-4,
                ivy.imag(out) < ivy.imag(expected) + 1e-4,
            )
        )
ivy.previous_backend()
@pytest.mark.parametrize(
("x", "weight", "expected"),
[
([[1, 1], [1, 1]], [[1, 1], [1, 1], [1, 1]], True),
(
[[1, 1], [1, 1]],
[
[[1, 1], [1, 1], [1, 1]],
[[1, 1], [1, 1], [1, 1]],
[[1, 1], [1, 1], [1, 1]],
],
False,
),
],
)
def test_handle_partial_mixed_function(x, weight, expected, backend_fw):
ivy.set_backend(backend_fw)
test_fn = "torch.nn.functional.linear"
if ivy.current_backend_str() != "torch":
# ivy.matmul is used inside the compositional implementation
test_fn = "ivy.matmul"
expected = True
with patch(test_fn) as test_mock_function:
ivy.linear(ivy.array(x), ivy.array(weight))
assert test_mock_function.called == expected
ivy.previous_backend()
def test_inputs_to_ivy_arrays(backend_fw):
ivy.set_backend(backend_fw)
ivy.inputs_to_ivy_arrays(_fn6)(ivy.native_array(1))
ivy.previous_backend()
def test_inputs_to_native_arrays(backend_fw):
ivy.set_backend(backend_fw)
ivy.inputs_to_native_arrays(_fn5)(ivy.array(1))
ivy.previous_backend()
def test_outputs_to_ivy_arrays(backend_fw):
ivy.set_backend(backend_fw)
assert isinstance(
ivy.outputs_to_ivy_arrays(_fn1)(ivy.to_native(ivy.array([2.0]))), ivy.Array
)
assert ivy.outputs_to_ivy_arrays(_fn1)(ivy.array(1)) == ivy.array(1)
ivy.previous_backend()
def test_to_native_arrays_and_back(backend_fw):
ivy.set_backend(backend_fw)
x = ivy.array(1.0)
res = ivy.func_wrapper.to_native_arrays_and_back(_fn7)(x)
assert isinstance(res, ivy.Array)
ivy.previous_backend()
@pytest.mark.parametrize(
"array_to_update",
[0, 1, 2, 3, 4],
)
def test_views(array_to_update, backend_fw):
ivy.set_backend(backend_fw)
a = ivy.random.random_normal(shape=(6,))
a_copy = ivy.copy_array(a)
b = a.reshape((2, 3))
b_copy = ivy.copy_array(b)
c = ivy.flip(b)
c_copy = ivy.copy_array(c)
d = ivy.rot90(c, k=3)
d_copy = ivy.copy_array(d)
e = ivy.split(d)
e_copy = ivy.copy_array(e[0])
array = (a, b, c, d, e)[array_to_update]
if array_to_update == 4:
for arr in array:
arr += 1
else:
array += 1
assert np.allclose(a, a_copy + 1)
assert np.allclose(b, b_copy + 1)
assert np.allclose(c, c_copy + 1)
assert np.allclose(d, d_copy + 1)
assert np.allclose(e[0], e_copy + 1)
ivy.previous_backend()
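# A minimal sketch (separate from the parametrised test above) of the view
# behaviour being exercised: reshape returns a view, so an in-place update
# made through the view is also visible through the base array.
def _demo_view_aliasing():
    base = ivy.arange(6, dtype="float32")
    view = base.reshape((2, 3))
    view += 1
    # both base and view now hold the values 1..6
    return bool(ivy.all(base == ivy.arange(1, 7, dtype="float32")))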
| ivy/ivy_tests/test_ivy/test_misc/test_func_wrapper.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_misc/test_func_wrapper.py",
"repo_id": "ivy",
"token_count": 3034
} | 63 |
"""Collection of tests for Ivy sequential."""
# global
import itertools
from hypothesis import strategies as st
# local
import ivy
from ivy_tests.test_ivy import helpers
from ivy_tests.test_ivy.helpers.testing_helpers import handle_method
class TrainableModule(ivy.Module):
def __init__(self, in_size, hidden_size, out_size):
self._linear0 = ivy.Linear(in_size, hidden_size)
self._linear1 = ivy.Linear(hidden_size, out_size)
ivy.Module.__init__(self)
def _forward(self, x):
x = self._linear0(x)
return self._linear1(x)
# --- Helpers --- #
# --------------- #
def _copy_weights(v1, v2):
    # copy the weight and bias arrays from layer container v1 into v2
v2.w = ivy.copy_array(v1.w)
v2.b = ivy.copy_array(v1.b)
# Helpers #
###########
def _train(module, input_arr):
def loss_fn(_v):
return ivy.abs(ivy.mean(input_arr) - ivy.mean(module(input_arr, v=_v)))
# initial loss
loss_tm1, grads = ivy.execute_with_gradients(loss_fn, module.v)
loss = None
losses = []
for i in range(5):
loss, grads = ivy.execute_with_gradients(loss_fn, module.v)
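        # vanilla update rule: v <- v - lr * grad, with lr = 1e-5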
module.v = ivy.gradient_descent_update(module.v, grads, 1e-5)
losses.append(loss)
# loss is lower or very close to initial loss
assert loss <= loss_tm1 or ivy.abs(loss - loss_tm1) < 1e-5
return losses
# --- Main --- #
# ------------ #
@handle_method(
method_tree="Sequential.__call__",
input_array=st.lists(
helpers.floats(
min_value=-1,
max_value=1,
allow_nan=False,
allow_inf=False,
small_abs_safety_factor=1.5,
safety_factor_scale="log",
),
min_size=1,
max_size=5,
),
dims=st.lists(st.integers(1, 10), min_size=1, max_size=5),
use_activation=st.booleans(),
)
def test_sequential_construction_and_value(
input_array, dims, use_activation, on_device, backend_fw
):
with ivy.utils.backend.ContextManager(backend_fw):
dims = [len(input_array)] + dims
layer_count = len(dims)
layers = [
ivy.Linear(dims[i], dims[i + 1], device=on_device)
for i in range(layer_count - 1)
]
if use_activation:
activations = [ivy.GELU() for _ in range(layer_count - 1)]
layers = itertools.chain.from_iterable(zip(layers, activations))
module = ivy.Sequential(*layers)
input_array = ivy.array(input_array, dtype="float32", device=on_device)
if backend_fw != "numpy":
_train(module, input_array)
@handle_method(
method_tree="Sequential.__call__",
input_array=st.lists(
helpers.floats(
min_value=0,
max_value=1,
allow_nan=False,
allow_inf=False,
small_abs_safety_factor=1.5,
safety_factor_scale="log",
),
min_size=1,
max_size=5,
),
dims=st.lists(st.integers(1, 10), min_size=2, max_size=2),
)
def test_sequential_same_as_class(input_array, dims, backend_fw):
with ivy.utils.backend.ContextManager(backend_fw):
dims = [len(input_array)] + dims
layer_count = len(dims)
layers = [ivy.Linear(dims[i], dims[i + 1]) for i in range(layer_count - 1)]
m_sequential = ivy.Sequential(*layers)
m_class = TrainableModule(dims[0], dims[1], dims[2])
# copy weights
_copy_weights(m_class.v.linear0, m_sequential.v.submodules.v0)
_copy_weights(m_class.v.linear1, m_sequential.v.submodules.v1)
input_array = ivy.array(input_array, dtype="float32")
if backend_fw != "numpy":
sequential_loss = _train(m_sequential, input_array)
class_loss = _train(m_class, input_array)
assert sequential_loss == class_loss
| ivy/ivy_tests/test_ivy/test_stateful/test_sequential.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_stateful/test_sequential.py",
"repo_id": "ivy",
"token_count": 1793
} | 64 |
import pickle # noqa
import subprocess
from pydriller import Repository
import os # noqa
import bz2
import _pickle as cPickle
import sys
from get_all_tests import get_all_tests
MAX_TESTS = 10
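# cap on how many tests may map to a single line; lines at or above this are ignored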
def get_tests(_tests_file, _line):
tests_file_line = set()
if 0 <= _line < len(_tests_file):
tests_file_line = _tests_file[_line]
return set() if len(tests_file_line) >= MAX_TESTS else tests_file_line
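# collect the tests mapped to a changed line and to its immediate neighbours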
def determine_tests_line(_tests_file, _line, _tests_to_run):
tests_file_line = get_tests(_tests_file, _line)
tests_file_prev = get_tests(_tests_file, _line - 1)
tests_file_next = get_tests(_tests_file, _line + 1)
_tests_to_run.update(tests_file_line)
_tests_to_run.update(tests_file_prev)
_tests_to_run.update(tests_file_next)
return _tests_to_run
def main():
tests = bz2.BZ2File("tests.pbz2", "rb")
tests = cPickle.load(tests)
ref_commit_hash = tests["commit"]
print("Reference Commit: ", ref_commit_hash)
tests_to_run = set()
for commit in Repository(".", single=ref_commit_hash).traverse_commits():
ref_commit = commit._c_object
break
for commit in Repository(".", order="reverse").traverse_commits():
tests["commit"] = commit.hash
diff_index = ref_commit.diff(commit._c_object, create_patch=True)
modified_files = commit._parse_diff(diff_index)
for file in modified_files:
try:
file_name = f"{file.new_path},cover"
except Exception: # noqa
continue
if file_name not in tests.keys():
continue
tests_file = tests[file_name]
change = file.diff_parsed
added = {x - 1 for (x, _) in change["added"]}
deleted = {x - 1 for (x, _) in change["deleted"]}
updated = added.intersection(deleted)
added = added.difference(updated)
deleted = deleted.difference(updated)
# Now Update the Mapping and compute the tests to run
for line in deleted:
tests_to_run = determine_tests_line(tests_file, line, tests_to_run)
for line in sorted(deleted, reverse=True):
if line < len(tests_file):
del tests_file[line]
for line in added:
top = -1
bottom = -1
if 0 <= line - 1 < len(tests_file):
top = tests_file[line - 1]
if 0 <= line + 1 < len(tests_file):
bottom = tests_file[line + 1]
tests_line = set()
if top != -1 and bottom != -1:
tests_line = top.intersection(bottom)
elif top != -1:
tests_line = top
elif bottom != -1:
tests_line = bottom
tests_file.insert(line, tests_line)
tests[file_name] = tests_file
# Now Compute the Tests to Run
for line in updated:
tests_to_run = determine_tests_line(tests_file, line, tests_to_run)
for line in added:
tests_to_run = determine_tests_line(tests_file, line, tests_to_run)
break
if len(sys.argv) >= 2 and sys.argv[1] == "1":
print("Checking for any new tests added!")
new_tests = get_all_tests()
print("Done!")
# Check for any new tests present
old_tests = tests["index_mapping"]
added_tests = set(new_tests) - set(old_tests)
removed_tests = set(old_tests) - set(new_tests)
with open("tests_to_remove", "w") as f:
for test in removed_tests:
f.write(test + "\n")
added_tests = list(added_tests)
# if it is a PR, we must check that the tests added were in the files changed
if len(sys.argv) >= 3 and sys.argv[2] == "pr":
relevant_added_tests = []
subprocess.run(
["git", "remote", "add", "upstream", "https://github.com/unifyai/ivy"]
)
subprocess.run(["git", "fetch", "upstream"])
lca_sha = subprocess.check_output(
["git", "merge-base", "HEAD", "upstream/main"]
)
lca_hash = lca_sha.decode().strip()
for commit in Repository(".", single=lca_hash).traverse_commits():
lca_commit = commit._c_object
break
for commit in Repository(".", order="reverse").traverse_commits():
diff_index = lca_commit.diff(commit._c_object, create_patch=True)
modified_files = commit._parse_diff(diff_index)
break
for test in added_tests:
for file in modified_files:
if file.new_path.strip() in test:
relevant_added_tests.append(test)
break
added_tests = relevant_added_tests
elif len(added_tests) > 50:
added_tests = added_tests[:50]
# Add these new_tests in the Mapping
old_num_tests = len(old_tests)
tests["index_mapping"] += added_tests
new_tests = tests["index_mapping"]
num_tests = len(new_tests)
for i in range(old_num_tests, num_tests):
tests["tests_mapping"][new_tests[i]] = i
directories = (
[x[0] for x in os.walk("ivy")]
+ [x[0] for x in os.walk("ivy_tests/test_ivy")]
+ ["ivy_tests"]
)
directories_filtered = [
x
for x in directories
if not x.endswith("__pycache__") and "hypothesis" not in x
]
directories = set(directories_filtered)
for test_backend in new_tests[old_num_tests:num_tests]:
tests_to_run.add(tests["tests_mapping"][test_backend])
if len(sys.argv) < 3:
print("Computing Coverage:", test_backend)
test_name, backend = test_backend.split(",")
command = (
f'docker run -v "$(pwd)":/ivy unifyai/ivy:latest /bin/bash -c "coverage run --source=ivy,' # noqa
f"ivy_tests -m pytest {test_name} --backend {backend} --disable-warnings > coverage_output;coverage " # noqa
f'annotate > coverage_output" '
)
os.system(command)
for directory in directories:
for file_name in os.listdir(directory):
if file_name.endswith("cover"):
file_name = f"{directory}/{file_name}"
if file_name not in tests:
tests[file_name] = []
with open(file_name) as f:
for line in f:
tests[file_name].append(set())
with open(file_name) as f:
i = 0
for line in f:
if i >= len(tests[file_name]):
tests[file_name].append(set())
if line[0] == ">":
tests[file_name][i].add(
tests["tests_mapping"][test_backend]
)
i += 1
os.system("find . -name \\*cover -type f -delete")
with bz2.BZ2File("tests.pbz2", "w") as f:
cPickle.dump(tests, f)
print("----- Determined Tests -----")
print(len(tests_to_run))
for test_index in tests_to_run:
print(tests["index_mapping"][test_index])
print("----------------------------")
with open("tests_to_run", "w") as f:
for test_index in tests_to_run:
test = tests["index_mapping"][test_index]
f.write(test + "\n")
if __name__ == "__main__":
main()
| ivy/scripts/determine_tests/determine_tests.py/0 | {
"file_path": "ivy/scripts/determine_tests/determine_tests.py",
"repo_id": "ivy",
"token_count": 4192
} | 65 |
import sys
from get_all_tests import get_all_tests
torch_req = ["torch/2.0.0", "torch/2.0.1"]
tensorflow_req = [
"tensorflow/2.13.0",
"tensorflow/2.14.0",
]
jax_req = [
"jax/0.4.10",
"jax/0.4.14",
]
numpy_req = [
"numpy/1.25.0",
"numpy/1.24.0",
]
framework_versions = {
"numpy": numpy_req,
"torch": torch_req,
"jax": jax_req,
"tensorflow": tensorflow_req,
}
run_iter = int(sys.argv[1])
all_tests = get_all_tests()
test_names_without_backend = [test.split(",")[0].strip() for test in all_tests]
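# expand every test to run once per framework version listed above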
test_names = []
for test_name in test_names_without_backend:
for backend, backend_versions in framework_versions.items():
for backend_version in backend_versions:
test_backend = test_name + "," + backend_version
test_names.append(test_backend)
# Run `tests_per_run` tests in each iteration of the cron job
num_tests = len(test_names)
tests_per_run = 5
start = run_iter * tests_per_run
end = (run_iter + 1) * tests_per_run
print("Running Tests:")
with open("tests_to_run", "w") as f:
for i in range(start, end):
i = i % num_tests
test = test_names[i]
if "test_frontends" in test:
continue # skip frontend tests (No support from testing)
print(test)
f.write(test + "\n")
| ivy/scripts/setup_tests/cron_tests_multi_version.py/0 | {
"file_path": "ivy/scripts/setup_tests/cron_tests_multi_version.py",
"repo_id": "ivy",
"token_count": 577
} | 66 |
#!/bin/bash
# shellcheck disable=SC2046
docker run --rm -v "$(pwd)":/ivy unifyai/ivy:latest python3 ivy/test_dependencies.py -fp ivy/requirements.txt,ivy/optional.txt
| ivy/scripts/shell/test_dependencies.sh/0 | {
"file_path": "ivy/scripts/shell/test_dependencies.sh",
"repo_id": "ivy",
"token_count": 67
} | 67 |
{
"tensorflow": [
{
"tensorflow-probability": {
"2.13.0": "0.21.0",
"2.12.0": "0.20.0",
"2.11.0": "0.19.0"
}
}
],
"jax": [
"dm-haiku",
"flax",
{
"jaxlib": {
"0.4.17": "0.4.17",
"0.4.14": "0.4.14",
"0.4.10": "0.4.10",
"0.4.8": "0.4.7"
}
},
{
"ml_dtypes": {
"0.4.10": "0.2.0"
}
}
],
"numpy": [
"numpy"
],
"paddlepaddle": [
"paddlepaddle"
],
"mxnet": [
"mxnet"
],
"torch": [
"torch-scatter",
"torchvision"
]
}
| ivy/docker/requirement_mappings_multiversion.json/0 | {
"file_path": "ivy/docker/requirement_mappings_multiversion.json",
"repo_id": "ivy",
"token_count": 399
} | 0 |
Setting Up
==========
.. _`repo`: https://github.com/unifyai/ivy
.. _`discord`: https://discord.gg/sXyFF8tDtm
.. _`pycharm thread`: https://discord.com/channels/799879767196958751/1186628916522262629
.. _`docker thread`: https://discord.com/channels/799879767196958751/1186629067966009424
.. _`pre-commit thread`: https://discord.com/channels/799879767196958751/1186629635694399539
.. _`pip packages thread`: https://discord.com/channels/799879767196958751/1186629837515935765
.. _`miniconda`: https://docs.conda.io/en/latest/miniconda.html
.. _`venv`: https://docs.python.org/3/library/venv.html
.. _`ivy/scripts`: https://github.com/unifyai/ivy/tree/bcddc79978afe447958dfa3ea660716845c85846/scripts
.. _`platform compatibility tags`: https://packaging.python.org/en/latest/specifications/platform-compatibility-tags/
.. _`logging level`: https://docs.python.org/3/library/logging.html#logging.Logger.setLevel
We're really happy you'd like to learn how to contribute towards Ivy!
This page explains the main steps to get started!
Forking and cloning the repo
----------------------------
#. You will first need to fork the Ivy repository from the repository page here `repo`_ by using the fork button on the top right. This creates a copy of the Ivy repository in your GitHub account.
#. Clone your forked repo to your local machine.
Depending on your preferred mode of cloning, any of the below should work:
.. code-block:: none
git clone --recurse-submodules [email protected]:YOUR_USERNAME/ivy.git
.. code-block:: none
git clone --recurse-submodules https://github.com/YOUR_USERNAME/ivy.git
.. code-block:: none
gh repo clone YOUR_USERNAME/ivy your_folder -- --recurse-submodules
Then enter your cloned ivy folder, for example :code:`cd ~/ivy`, and add the original Ivy repository as upstream, to easily sync with the latest changes.
.. code-block:: none
git remote add upstream https://github.com/unifyai/ivy.git
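With the upstream remote in place, you can pull in the latest changes whenever needed; a typical (standard git) workflow for syncing your fork's main branch would be:
.. code-block:: none
git fetch upstream
git checkout main
git merge upstream/main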
Pre-Commit
----------
Our development team also makes use of the :code:`pre-commit` PyPI `package <https://pypi.org/project/pre-commit/>`_.
Check out their `page <https://pre-commit.com/>`_ for more details.
In a nutshell, this enables us to add pre-commit hooks which check for lint errors before a commit is accepted, and then also (in most cases) automatically make the necessary fixes.
If the lint tests fail when a commit is attempted, then the commit will not succeed, and the problematic lines are printed to the terminal.
Fixes are then applied automatically where possible.
To proceed with the commit, the modified files must be re-added using git, and the commit will then succeed on the next attempt.
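As a sketch, a typical failed-then-fixed commit cycle looks like this (the file name here is just a placeholder):
.. code-block:: none
git commit -m "add new feature" # hooks run, black reformats a file, commit is blocked
git add ivy/some_modified_file.py # re-add the file(s) the hooks modified
git commit -m "add new feature" # hooks pass and the commit succeeds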
In order to install and properly set up pre-commit, these steps should be followed:
1. Run :code:`python3 -m pip install pre-commit`
2. Enter into your cloned ivy folder, for example :code:`cd ~/ivy`
3. Run :code:`pre-commit install`
That's it! Now when you make a commit, the pre-commit hooks will all be run correctly, as explained above.
For questions, please reach out on `discord`_ in the `pre-commit thread`_!
PyCharm
-------
`Pycharm <https://www.jetbrains.com/pycharm/>`_ is the main IDE of choice for our development team.
However, you are of course welcome to use whatever Integrated Development Environment (IDE) you're most familiar with.
If you do decide to use PyCharm, you should make sure to check whether you are eligible for a `free student license <https://www.jetbrains.com/community/education/#students>`_.
Many people seem to miss this option, so we thought we would add an explicit reminder here in the setting up guide!
**Important Points**
#. Once you no longer have a student account, the student license will expire and you won't be able to access PyCharm Professional.
#. To continue using PyCharm Professional, you can use the trial version by making a JetBrains account, but that is only valid for 1 month.
#. After the trial expires, you have to buy the paid version of PyCharm Professional.
For questions, please reach out on `discord`_ in the `pycharm thread`_!
Virtual environments - No Docker
--------------------------------
Due to the rapid pace of updates in Ivy, developers are strongly advised to use the latest Ivy package from the GitHub source, as explained below.
This is to ensure that contributors' code and examples are as aligned with the latest changes as possible.
The stable version of Ivy from PyPI may be used for personal projects and experiments, but should be avoided for development, for now.
If you want to use the stable version, you are welcome to use the docker container or pip install ivy.
Below is a guide to creating your own virtual environment.
The benefit of creating a python environment is the ability to install certain packages for a project and then other packages (perhaps different versions) in a new environment for another project.
This makes it very easy to keep track of installed packages and their versions.
Below is a guide for setting up a developing environment for Ivy.
You can either use `miniconda`_ or `venv`_:
Using miniconda
***************
#. Install `miniconda`_
#. Open conda terminal
#. Create the environment by running the command (:code:`ivy_dev` is the name of the environment)
.. code-block:: none
conda create --name ivy_dev python=3.10.0
#. Activate the environment by:
.. code-block:: none
conda activate ivy_dev
#. Now install the ivy package for development by running the command below:
.. code-block:: none
pip install -e .
#. Setup the interpreter by:
#. Pycharm
a. Going to settings -> project -> Python Interpreter
b. Clicking add interpreter (currently by clicking the ⚙ icon on the right side) which should open a new window.
c. Choosing "conda environment" from the left panel. Choose "Existing environment", open the dropdown, and you should find the path to python in the environment.
#. VSCode
a. Go to the command palette (Ctrl+Shift+P) or (⌘+Shift+P) for Mac and type "Python: Select Interpreter" and select the environment you created.
If you don't find a path to your created python environment, you can run :code:`where python` in the conda command line while the environment is active, and it should give the path, which can be added manually.
#. Installing the development dependencies.
a. On Linux, Windows, or Intel Mac, you will need to use the `optional.txt` requirements file. To install the dependencies, run:
.. code-block:: none
pip install -r requirements/optional.txt
b. On M1 Mac, you will need to use the optional_apple_silicon_1 and optional_apple_silicon_2 requirements files. To install the dependencies, run:
.. code-block:: none
pip install -r requirements/optional_apple_silicon_1.txt
pip install -r requirements/optional_apple_silicon_2.txt
#. Installing array API testing dependencies.
To make sure you have all the packages for running tests available, change the directory to :code:`ivy/ivy_tests/array_api_testing/test_array_api` in your cloned fork using the :code:`cd` command and run the command below (while your :code:`ivy_dev` environment is active):
.. code-block:: none
pip install -r requirements.txt
This will install the packages required for running the tests in the Array API suite.
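At this point, a quick sanity check that the environment works might look like the following sketch (any backend you installed can be substituted for :code:`numpy`):
.. code-block:: none
python -c "import ivy; ivy.set_backend('numpy'); print(ivy.array([1.0, 2.0]))"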
Using venv
**********
This is a builtin package and doesn't require explicit installation.
#. Open your terminal/cmd in the directory where you would like to have the folder with the environment files.
#. Create the environment by running the command below with a new environment name.
We named it :code:`ivy_dev` like above.
.. code-block:: none
python -m venv ivy_dev
Try :code:`python3` if :code:`python` doesn't work.
#. Activate the created environment by running (in the same working directory as the environment folder):
.. code-block:: none
ivy_dev\Scripts\activate.bat
(on Windows)
OR
.. code-block:: none
source ivy_dev/bin/activate
(on Mac/Linux)
#. Now install the ivy package for development by running the command below:
.. code-block:: none
pip install -e .
#. Setup the interpreter by:
#. Pycharm
a. Going to settings -> project -> Python Interpreter
b. Clicking add interpreter (currently by clicking the ⚙ icon on the right side) which should open a new window.
c. Choosing "virtualenv environment" from the left panel. Choose an existing environment and add the path to python. The path to python can be found by :code:`where python` on Windows and :code:`which python` in Linux/Mac OS.
Note: You may tick "Make available to all projects" so you will be able to find the interpreter from the conda/venv environment in any future projects.
#. VSCode
a. Go to the command palette (Ctrl+Shift+P) or (⌘+Shift+P) for Mac and type `Python: Select Interpreter` and select the environment you created.
#. Installing the development dependencies.
a. On Linux, Windows, or Intel Mac, you will need to use the `optional.txt` requirements file. To install the dependencies, run:
.. code-block:: none
pip install -r requirements/optional.txt
Note: In case you are using Ubuntu 22.04, PaddlePaddle won't install properly; you have to install its :code:`libssl1.1` dependency manually:
.. code-block:: none
wget http://nz2.archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2.20_amd64.deb
sudo dpkg -i libssl1.1_1.1.1f-1ubuntu2.20_amd64.deb
PS: If the link expires at some point in the future, check http://nz2.archive.ubuntu.com/ubuntu/pool/main/o/openssl/?C=M;O=D for a valid one.
b. On M1 Mac, you will need to use the optional_apple_silicon_1 and optional_apple_silicon_2 requirements files. To install the dependencies, run:
.. code-block:: none
pip install -r requirements/optional_apple_silicon_1.txt
pip install -r requirements/optional_apple_silicon_2.txt
#. Installing array API testing dependencies.
To make sure you have all the packages for running tests available, change the directory to :code:`ivy/ivy_tests/array_api_testing/test_array_api` in your cloned fork using the :code:`cd` command and run the command below (while your :code:`ivy_dev` environment is active):
.. code-block:: none
pip install -r requirements.txt
This will install packages required for running the tests in the Array API suite.
Here are the visual guides for setting up a `virtualenv environment <https://www.jetbrains.com/help/pycharm/creating-virtual-environment.html#0>`_ OR `conda environment <https://www.jetbrains.com/help/pycharm/conda-support-creating-conda-virtual-environment.html>`_ in pycharm from JetBrains.
For VSCode, you can follow the instructions `virtual environments <https://code.visualstudio.com/docs/python/environments#_creating-environments>`_.
**Installing Ivy from source**
You can also install Ivy from source if you want to take advantage of the latest changes, but we can't ensure everything will work as expected. All the steps remain the same for miniconda and venv as described above; only the command in point 4 for venv and point 5 for miniconda changes. You have to run the following instead:
.. code-block:: none
pip install git+https://github.com/unifyai/ivy.git
Docker Interpreter with PyCharm
-------------------------------
Setting up and using the same remote python interpreter provided as a docker container helps make sure we are all using the same packages (same environment) and helps to mitigate any potential version conflicts etc.
In addition, it makes it possible to use modules not yet available for a particular operating system, such as :code:`jaxlib` on a Windows machine.
Below, we provide instructions for setting up a docker interpreter for `Pycharm <https://www.jetbrains.com/pycharm/>`_, which, as mentioned above, is the IDE of choice for our development team:
Windows
*******
#. Install `Docker Desktop <https://www.docker.com/products/docker-desktop>`_
#. Install `WSL 2 <https://docs.microsoft.com/en-us/windows/wsl/install>`_.
For most, it will only require running the command :code:`wsl --install` in PowerShell admin mode.
Visit the link if it doesn't work.
#. Install `Pycharm Professional Version <https://www.jetbrains.com/pycharm/>`_, make sure to only install the Professional version of PyCharm, not the Community version.
#. Open pycharm with your cloned Ivy repository.
Add the remote python interpreter by:
a. Going to the settings -> Build, Execution, Deployment -> Docker
Click the "+" on the top left and it should add a docker connection.
b. Going to settings -> project -> Python Interpreter
c. Clicking add interpreter (currently by clicking the ⚙ icon on the right side) which should open a new small drop down menu. Select "On Docker...". A window will open which will have three steps.
#. It will ask to create a new Docker target, at this step you have to select the following:
a. Docker image -> Docker
b. Image -> Pull
c. Image tag -> unifyai/ivy:latest
d. Select "Next"
#. The image will start pulling. It will take a considerable amount of time to complete. Once you see the "Introspection Completed" message, select "Next".
#. Another window will appear, at this step select the following:
a. In the left panel select "System Interpreter".
b. For Interpreter, select the default option which will be "/usr/bin/python3" the select "Create".
#. Opening the "Edit Run/Debug configurations" dialog -> "Edit Configurations..." and making sure that "Working directory" is empty, in case you get the "Can't run process: the working directory '\ivy' is invalid, it needs to be an absolute path" error.
#. If you are using PyCharm with the latest docker image and facing issues after setting everything up, all you need to do is add the paths here once, and then go to :code:`File --> Save all` for this configuration to persist, just as shown in the image below. The paths would be:
.. code-block:: none
/opt/fw/numpy
/opt/fw/jax
/opt/fw/tensorflow
/opt/fw/torch
/opt/fw/paddle
/opt/fw/mxnet
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/setting_up/pycharm_with_docker/docker_newimage_fix.png?raw=true
:width: 420
Once these steps are finished, your interpreter should be set up correctly!
If Docker's latest version causes an error, try using an earlier version by visiting the `Docker release notes <https://docs.docker.com/desktop/release-notes/>`_.
For some Windows users, it might be necessary to enable virtualisation from the BIOS setup.
**Video**
.. raw:: html
<iframe width="420" height="315" allow="fullscreen;"
src="https://www.youtube.com/embed/7I_46c2AvJg" class="video" allowfullscreen="true">
</iframe>
MacOS
*****
#. Install `Docker Desktop <https://www.docker.com/products/docker-desktop>`_.
#. Get the latest Docker Image for Ivy by:
a. Running Docker desktop.
b. Opening the terminal, and running the command: :code:`docker pull unifyai/ivy:latest`
#. Install `Pycharm Professional Version <https://www.jetbrains.com/pycharm/>`_
#. Open pycharm with your cloned Ivy repository.
Add the remote python interpreter by:
a. Going to the settings -> Build, Execution, Deployment -> Docker.
Click the "+" on the top left and it should add a docker connection.
b. Going to settings -> project -> Python Interpreter
c. Clicking add interpreter (currently by clicking the ⚙ icon on the right side) which should open a new window.
d. Choosing "On Docker" from the dropdown menu.
e. Choosing "Docker" from the "Docker server" dropdown menu, choosing "Pull" if you want to use a remote interpreter, and using :code:`unifyai/ivy:latest` as the image tag.
f. If you don't want to use a remote interpreter, choose "Build" and use the suitable Dockerfile; then choosing :code:`docker/Dockerfile` to be the Dockerfile.
g. Clicking next and navigating to the system interpreter tab from the menu on the left.
h. Choosing the built interpreter from the dropdown menu.
Once these steps are finished, your interpreter should be set up correctly!
If Docker's latest version causes an error, try using an earlier version by visiting the `Docker release notes <https://docs.docker.com/desktop/release-notes/>`_.
**Important Note**
When setting up on an M1 Mac, you would have to update the Dockerfile to install libraries from :code:`requirements/optional_apple_silicon_1.txt` and :code:`requirements/optional_apple_silicon_2.txt` instead of :code:`requirements/optional.txt`.
**Video**
.. raw:: html
<iframe width="420" height="315" allow="fullscreen;"
src="https://www.youtube.com/embed/5BxizBIC-GQ" class="video" allowfullscreen="true">
</iframe>
Ubuntu
******
#. Install Docker by running the commands below one by one in the Linux terminal.
You may visit `Docker Ubuntu Installation Page <https://docs.docker.com/engine/install/ubuntu/>`_ for the details.
.. code-block:: none
sudo apt-get update
.. code-block:: none
sudo apt-get install \
ca-certificates \
curl \
gnupg \
lsb-release
.. code-block:: none
sudo mkdir -p /etc/apt/keyrings
.. code-block:: none
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
.. code-block:: none
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
.. code-block:: none
sudo apt-get update
.. code-block:: none
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
#. Get the latest Docker Image for Ivy by:
a. Opening terminal and running :code:`systemctl start docker`
b. Running the command: :code:`docker pull unifyai/ivy:latest`
Note: If you get permission-related errors, please visit the simple steps at the `Linux post-installation page <https://docs.docker.com/engine/install/linux-postinstall/>`_.
#. Install Pycharm Professional Version.
You may use Ubuntu Software for this.
#. Open pycharm with your cloned Ivy repository.
Add the remote python interpreter by:
a. Going to the settings -> Build, Execution, Deployment -> Docker.
Click the "+" on the top left and it should add a docker connection.
b. Going to settings -> project -> Python Interpreter
c. Clicking add interpreter (currently by clicking the ⚙ icon on the right side) which should open a new window.
d. Choosing "Docker" from the left panel.
Type :code:`python3` (with the number) in the python interpreter path field and press OK.
**Docker Connection not Successful**
This is a common error which you might face. If you are not able to connect Docker with PyCharm (point 4a) even though Docker is running, the issue is that you don't have permission to use the Docker socket. Executing the two commands below should solve this.
.. code-block:: none
sudo chmod a+rwx /var/run/docker.sock
.. code-block:: none
sudo chmod a+rwx /var/run/docker.pid
For questions, please reach out on `discord`_ in the `docker thread`_!
**Video**
.. raw:: html
<iframe width="420" height="315" allow="fullscreen;"
src="https://www.youtube.com/embed/UHeSnZu0pAI" class="video" allowfullscreen="true">
</iframe>
Setting Up Testing in PyCharm
-----------------------------
There are a couple of options to choose from when running ivy tests in PyCharm.
To run a single unit test, e.g. `test_abs`, you can use the context menu in the PyCharm code editor by pressing the green ▶️ symbol which appears to the left of `def test_abs(`.
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/setting_up/setting_up_testing/pycharm_test_run_1.png?raw=true
:width: 420
You can then click 'Run pytest for...' or 'Debug pytest for...'.
Keyboard shortcuts for running the tests are also displayed.
These screenshots are from a Mac, hence the shortcut for running a test is :code:`ctrl - shift - R`.
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/setting_up/setting_up_testing/pycharm_test_run_2.png?raw=true
:width: 420
The test run should pop up in a window at the bottom of the screen (or elsewhere, depending on your settings).
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/setting_up/setting_up_testing/pycharm_test_run_3.png?raw=true
:width: 420
To run all the tests in a file, press :code:`ctrl` - right click (on Mac) on the :code:`test_elementwise.py` open tab.
A menu will appear in which you can find 'Run pytest in test_elementwise.py...'
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/setting_up/setting_up_testing/pycharm_run_all_1.png?raw=true
:width: 420
Click this and you should see a progress bar of all the tests running in the file.
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/setting_up/setting_up_testing/pycharm_run_all_2.png?raw=true
:width: 420
It is also possible to run the entire set of ivy tests or the array api test suite using pre-written shell scripts that can be run from the 'Terminal' tab in PyCharm.
There are a number of such shell scripts in `ivy/scripts`_:
.. code-block:: bash
:emphasize-lines: 4,5,8,9,10
scripts/setup_tests/run_ivy_core_test.py
scripts/setup_tests/run_ivy_nn_test.py
scripts/setup_tests/run_ivy_stateful_test.py
scripts/shell/run_tests.sh
scripts/shell/test_array_api.sh
scripts/test_dependencies.py
scripts/shell/test_dependencies.sh
scripts/shell/test_ivy_core.sh
scripts/shell/test_ivy_nn.sh
scripts/shell/test_ivy_stateful.sh
**For Unix-based systems (Linux and macOS):**
* :code:`scripts/shell/run_tests.sh` is run by typing :code:`./scripts/shell/run_tests.sh` in the :code:`/ivy` directory.
This runs all tests in :code:`ivy/ivy_tests`.
* :code:`scripts/shell/test_array_api.sh` is run by typing :code:`./scripts/shell/test_array_api.sh [backend] test_[submodule]`.
This runs all array-api tests for a certain submodule in a certain backend.
* :code:`scripts/shell/test_ivy_core.sh` is run by typing :code:`./scripts/shell/test_ivy_core.sh [backend] test_[submodule]` in the ivy directory.
This runs all ivy tests for a certain submodule in a certain backend in :code:`test_ivy/test_functional/test_core`.
* :code:`scripts/shell/test_ivy_nn.sh`, :code:`scripts/shell/test_ivy_stateful.sh` are run in a similar manner to :code:`scripts/shell/test_ivy_core.sh`.
Make sure to check the submodule names in the source code before running; example invocations are shown below.
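For instance, hypothetical invocations of the scripts above could look like this (the submodule names here are only illustrative):
.. code-block:: none
./scripts/shell/test_array_api.sh numpy test_linalg
./scripts/shell/test_ivy_core.sh torch test_elementwise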
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/setting_up/setting_up_testing/pycharm_run_array_api_tests.png?raw=true
:width: 420
**For Windows users:**
For Windows users, you may need to specify that the shell scripts should be run by :code:`sh`, which comes with Git. In the Terminal, prepend sh to the script commands like so:
* To run :code:`scripts/shell/run_tests.sh` on Windows, type :code:`sh ./scripts/shell/run_tests.sh` in the :code:`/ivy` directory.
This runs all tests in :code:`ivy/ivy_tests`.
* To run :code:`scripts/shell/test_array_api.sh` on Windows, type :code:`sh ./scripts/shell/test_array_api.sh [backend] test_[submodule]`.
This runs all array-api tests for a certain submodule in a certain backend.
* To run :code:`scripts/shell/test_ivy_core.sh` on Windows, type :code:`sh ./scripts/shell/test_ivy_core.sh [backend] test_[submodule]` in the ivy directory.
This runs all ivy tests for a certain submodule in a certain backend in :code:`test_ivy/test_functional/test_core`.
* :code:`scripts/shell/test_ivy_nn.sh`, :code:`scripts/shell/test_ivy_stateful.sh` are run in a similar manner to :code:`scripts/shell/test_ivy_core.sh` on Windows.
Make sure to check the submodule names in the source code before running.
The above instructions for running tests on Windows assume that you have installed Git and have access to the Git Bash terminal. If you do not have Git Bash, you can download it from the `official Git website <https://git-scm.com/downloads>`_.
If you wish to run tests of all submodules of `ivy_core`, `ivy_nn` or `ivy_stateful`, there are :code:`.py` scripts available in :code:`scripts/setup_tests`.
All are run like: :code:`python scripts/setup_tests/run_ivy_nn_test.py 1`, where 1 = numpy, 2 = torch, 3 = jax, and 4 = tensorflow.
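For example, to run all :code:`ivy_nn` submodule tests, first with the torch backend and then with jax, you would run:
.. code-block:: none
python scripts/setup_tests/run_ivy_nn_test.py 2
python scripts/setup_tests/run_ivy_nn_test.py 3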
More Detailed Hypothesis Logs in PyCharm
---------------------------------------------
For testing, we use the `Hypothesis <https://hypothesis.readthedocs.io/en/latest/#>`_ module for data generation.
During testing, if Hypothesis detects an error, it will do its best to find the simplest values that are causing the error.
However, when using PyCharm, if Hypothesis detects two or more distinct errors, it will return the number of errors found and not return much more information.
This is because PyCharm by default turns off headers and summaries while running tests.
To get more detailed information on errors in the code, we recommend doing the following:
#. Going to the settings -> Advanced
#. Using the search bar to search for 'Pytest'
#. Make sure that the checkbox for 'Pytest: do not add "--no-header --no-summary -q"' is checked.
a. .. image:: https://raw.githubusercontent.com/unifyai/unifyai.github.io/main/img/externally_linked/contributing/setting_up/more_detailed_hypothesis_logs/detailed_hypothesis_setting.png?raw=true
:width: 420
Now, if Hypothesis detects an error in the code it will return more detailed information on each of the failing examples:
.. image:: https://raw.githubusercontent.com/unifyai/unifyai.github.io/main/img/externally_linked/contributing/setting_up/more_detailed_hypothesis_logs/detailed_hypothesis_example.png?raw=true
:width: 420
For questions, please reach out on `discord`_ in the `docker thread`_!
**"Empty Suite" error fix:**
Click on the "green arrow button" from where you run the function in PyCharm and open "Modify Run Configuration...". Under "Target:" on the right side, click on "..."; it'll open a new window where you can manually add the path to the specific function. For instance, for stateful tests use "test_stateful.test_submodule_name.test_function_name", and for functional tests use "test_submodule_name.test_function_name". The function will pop up below; select it, then click on "Apply" and "OK". Now, do not run the test from the "green arrow button" in the left panel; run it from the "green arrow button" at the top, on the left side of the "debugger button", making sure you've selected the latest modified configuration of the specific test you want to run.
Setting up for Free
-------------------
Visual Studio Code is a recommended free alternative for setting up, especially if you're not eligible for a student license with PyCharm Professional.
The easiest and most efficient way is to use Visual Studio Code with the Docker extension.
You'll hopefully be done with this in no time.
The steps to be followed are listed below:
Windows
*******
#. Install `Docker Desktop <https://www.docker.com/products/docker-desktop>`_
#. Install `Visual Studio Code here <https://code.visualstudio.com/>`_
#. Open the Docker desktop, make sure it's running while following the process below.
You can close the Docker desktop window afterwards, Docker will continue to run in the background.
#. Open Visual Studio Code, open the Ivy repo folder, and follow the steps listed below:
a. At the bottom right, a window will pop up asking to install the "Dev Containers" extension; install it.
In case the window doesn't pop up, search for the "Dev Containers" extension in Visual Studio Code and install it.
b. Install the "Docker" extension for Visual Studio Code, you'll easily find that by searching "docker" in the extensions tab.
c. Once done, restart Visual Studio Code; at the bottom left corner there will be an icon that looks like "><" overlapped on each other.
d. Clicking on that will open a bar at the top which will give you an option "Open Folder in Container...", click on that.
e. You'll be inside the container now, where you can locally run the tests that you've modified by running the command, "pytest test_file_path::test_fn_name". Opening the container may take a long time, as the Docker image is very large (5+ GB).
Ubuntu
******
#. Install `Docker Engine <https://docs.docker.com/engine/install/ubuntu/>`_
#. Install `Visual Studio Code <https://code.visualstudio.com/>`_
#. Clone your fork of the Ivy repository.
#. Open Visual Studio Code, open the Ivy repo folder, and follow the steps listed below:
a. Install the :code:`Dev Containers` and :code:`Docker` extensions.
b. Open the :code:`.devcontainer/devcontainer.json` file.
c. Add a comma (:code:`,`) to the end of the entry :code:`"postCreateCommand": "bash .devcontainer/post_create_commands.sh"`, making it :code:`"postCreateCommand": "bash .devcontainer/post_create_commands.sh",`.
d. Add in the line :code:`"postStartCommand": "git config --global --add safe.directory ${containerWorkspaceFolder}"` on the line immediately after the :code:`postCreateCommand` line.
e. Click the remote explorer icon in the bottom left. It looks roughly like "><" overlapped on each other.
f. Click :code:`Reopen in Container` in the dropdown menu.
g. You'll be inside the container now, where you can locally run the tests by running the command :code:`pytest test_file_path::test_fn_name`. Opening the container may take a long time, as the Docker image is very large (5+ GB).
**Important Note**
For Windows users, the file path should be entered with "/" (forward-slashes) instead of the regular "\\" (back-slashes) used on Windows.
WSL
***
It is understandable that working with computationally heavy tools like Docker and PyCharm is not always comfortable for developers.
By utilizing WSL, you can run a Linux distribution on your Windows machine, and venv can be leveraged to create
isolated Python environments, eliminating the need for a full-fledged containerization solution like Docker. With VSCode being an appropriate alternative to PyCharm,
the steps explained below will help you set up a less resource-intensive Ivy environment.
#. Install `WSL <https://learn.microsoft.com/en-us/windows/wsl/install>`_.
#. Install `Visual Studio Code <https://code.visualstudio.com/>`_.
You can follow `this guide <https://learn.microsoft.com/en-us/windows/wsl/tutorials/wsl-vscode>`_ to integrate WSL into VSCode.
#. Open the WSL terminal by typing in the name of your Linux distribution in the windows start menu (e.g. :code:`Ubuntu`).
#. Create a virtual environment by following the steps below:
a. Install the python virtual environment package :code:`venv`.
.. code-block:: none
sudo apt install python3-venv
b. Create your virtual environment named :code:`ivy_dev`.
.. code-block:: none
python3 -m venv ivy_dev
c. Activate your environment.
.. code-block:: none
source ivy_dev/bin/activate
#. You can now install the Ivy package from Github by running:
.. code-block:: none
pip install git+https://github.com/unifyai/ivy.git
#. Or else, if you want to set up a local repository, you can do so by following :ref:`this guide <overview/contributing/setting_up:Forking and cloning the repo>`
as explained above and install the required development dependencies by running:
.. code-block:: none
cd ivy/
.. code-block:: none
pip install -r requirements/requirements.txt
pip install -r requirements/optional.txt
#. Once done, you can now open VSCode right from your terminal and get started with your development by just running:
.. code-block:: none
code .
#. To set up the Python Interpreter in VSCode, go to the command palette (Ctrl+Shift+P) and type **Python: Select Interpreter** and select the environment you created.
For a more detailed explanation, you can follow `this guide <https://code.visualstudio.com/docs/python/environments#_working-with-python-interpreters>`_.
#. Now that your development environment is set up, you can run tests locally by running :code:`pytest test_file_path::test_fn_name` in the terminal, or
if you want to set up testing in VSCode, you may follow the guide **Setting Up Testing** for VSCode as explained below, next to this subsection.
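As an illustrative example (the exact path and test name will depend on what you are working on), a single test could be run from the WSL terminal like so:
.. code-block:: none
pytest ivy_tests/test_ivy/test_functional/test_core/test_elementwise.py::test_abs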
GitHub Codespaces
*****************
It can be a headache to install Docker and setup the PyCharm development environment, especially on recent ARM architectures like the new M1 Macs.
Instead, we could make use of the GitHub Codespaces feature provided; this feature creates a VM (Virtual Machine) on the Azure cloud (which means no local computation) with the same configuration as defined by :code:`ivy/Dockerfile`.
Since it's a VM, we no longer have to worry about installing the right packages, modules etc.
We can develop as we usually do on Visual Studio Code with all your favourite extensions and themes available in Codespaces too.
With all the computations being done on the cloud, we could contribute to Ivy using unsupported hardware or old/slow systems, even from an iPad, as long as you have Visual Studio Code or a browser installed.
**Important Note**
There are several versions of GitHub.
If you are using the free one you will have *limited* access to GitHub Codespaces, you can read the exact quotas available `here <https://docs.github.com/en/billing/managing-billing-for-github-codespaces/about-billing-for-github-codespaces#monthly-included-storage-and-core-hours-for-personal-accounts>`_.
**Pre-requisites**
1. Before we setup GitHub Codespaces, we need to have Visual Studio Code installed (you can get it from `here <https://code.visualstudio.com/>`_).
2. Once the Visual Studio Code is installed, head over to the extension page (it's icon is on the left pane), search "Codespaces" and then install the extension locally.
.. image:: https://raw.githubusercontent.com/unifyai/unifyai.github.io/main/img/externally_linked/contributing/setting_up/github_codespaces/extension_install.png?raw=true
:width: 420
Now we are ready to begin!
**Setting up Codespaces**
Just follow the steps outlined below:
1. Go to your fork of :code:`ivy`, and then click on the green "Code" dropdown, go to the Codespaces tab, and then click on three dots, then click ``new with options...``.
.. image:: https://raw.githubusercontent.com/unifyai/unifyai.github.io/main/img/externally_linked/contributing/setting_up/github_codespaces/fork_create_codespace.png?raw=true
:width: 420
2. You will get the following screen, then you will select the branch.
.. image:: https://raw.githubusercontent.com/unifyai/unifyai.github.io/main/img/externally_linked/contributing/setting_up/github_codespaces/select_branch.png?raw=true
:width: 420
3. Then you will head to the dropdown of "Dev container configuration", then select an image to set up with. There are six options available as of now:
- :code:`Default project configuration` - This is the default option, it will set up with the default codespaces environment.
- :code:`Ivy Development Environment (build)` - This will set up the development environment of ivy for CPU and build image from :code:`ivy/docker/Dockerfile`.
- :code:`Ivy GPU Development Environment (build)` - This will set up the development environment of ivy for GPU and build image from :code:`ivy/docker/DockerfileGPU`.
- :code:`Ivy Development Environment for Multiver...` - This will set up the development environment for multiversion support with ivy and build the image from :code:`ivy/docker/DockerfileMultiversion`.
- :code:`Ivy Development Environment (image)` - This will set up the development environment of ivy for CPU and build image from the latest image from dockerhub.
- :code:`Ivy GPU Development Environment (image)` - This will set up the development environment of ivy for GPU and build image from the latest image from dockerhub.
For now, we will select :code:`Ivy Development Environment (image)`.
Select your region and preferred machine type, then click on "Create Codespace".
.. image:: https://raw.githubusercontent.com/unifyai/unifyai.github.io/main/img/externally_linked/contributing/setting_up/github_codespaces/devcontainer_config.png?raw=true
:width: 420
4. This will open up a new tab, where you click on "Open this codespaces on VS code desktop".
Give the relevant permissions to the browser to open up Visual Studio Code.
.. image:: https://raw.githubusercontent.com/unifyai/unifyai.github.io/main/img/externally_linked/contributing/setting_up/github_codespaces/open_vscode_desktop.png?raw=true
:width: 420
5. Once visual studio code opens up, it will start building the remote container.
In order to view the logs while the container is being built, you may click on "Building Codespace..." on the bottom right box.
Please be patient while the container is being built; it may take up to 10-15 minutes, but it's a one-time process.
Any subsequent connections to your ivy codespace will launch in 10-12 seconds.
.. image:: https://raw.githubusercontent.com/unifyai/unifyai.github.io/main/img/externally_linked/contributing/setting_up/github_codespaces/building_codespace.png?raw=true
:width: 420
The Log of the container being built would look like the below:
.. image:: https://raw.githubusercontent.com/unifyai/unifyai.github.io/main/img/externally_linked/contributing/setting_up/github_codespaces/log_codespace.png?raw=true
:width: 420
6. Once the container is built, you would see the following output log saying "Finished configuring codespace".
.. image:: https://raw.githubusercontent.com/unifyai/unifyai.github.io/main/img/externally_linked/contributing/setting_up/github_codespaces/codespace_built.png?raw=true
:width: 420
7. That's it, you have just setup GitHub codespaces and can start developing Ivy.
The configuration files install all the required packages, and extensions for you to get started quickly.
**Setting up Codespaces with a GPU**
If you want to setup a GPU instance on codespaces and also have access to it, kindly follow the guidelines below:
1. Points 1 and 2 are the same as in the **Setting up Codespaces** section above. You will be on the screen shown below. Just select the Machine Type to be "6-Core (1 GPU)".
.. image:: https://raw.githubusercontent.com/unifyai/unifyai.github.io/main/img/externally_linked/contributing/setting_up/github_codespaces/Selecting_the_GPU.png?raw=true
:width: 420
2. Refer to the **Setting up Codespaces** section for the other configurations such as the "Dev container configuration". Your Machine Type section will look like the image shown below. Feel free to click on the green button to create the instance.
.. image:: https://raw.githubusercontent.com/unifyai/unifyai.github.io/main/img/externally_linked/contributing/setting_up/github_codespaces/Interface_after_selecting_the_GPU_1.png?raw=true
:width: 420
**Opening an existing Codespace**
If you have already setup codespaces, refer to the following to open your previously setup codespaces environment.
There are 3 ways to connect your existing codespaces, you can use any of the approaches mentioned below.
1. Go to your fork of ivy, click on the green coloured dropdown "Code", go to the codespaces tab, then select your codespace.
This will open up a new tab, from there either you can develop on the browser itself, or click on "Open this codespaces on VS code desktop" to open up the visual studio code application and develop from there.
.. image:: https://raw.githubusercontent.com/unifyai/unifyai.github.io/main/img/externally_linked/contributing/setting_up/github_codespaces/existing_codespace_fork.png?raw=true
:width: 420
2. Another way to connect is to open up the visual studio code application.
There is a good chance that you would see :code:`ivy [Codespaces]` or :code:`ivy [vscode-remote]` on your recently opened projects.
If you click either of those, it will open up your codespace.
.. image:: https://raw.githubusercontent.com/unifyai/unifyai.github.io/main/img/externally_linked/contributing/setting_up/github_codespaces/recent_projects.png?raw=true
:width: 420
3. If in any case, it doesn't show your codespace on recent projects, go to the "Remote Connection Explorer" extension tab on the left pane, from there make sure you have selected "Github Codespaces" on the top-left dropdown.
Once you find your codespace, right click on it and then select "Connect to codespace in current window".
.. image:: https://raw.githubusercontent.com/unifyai/unifyai.github.io/main/img/externally_linked/contributing/setting_up/github_codespaces/connect_existing.png?raw=true
:width: 420
**Troubleshooting**
Sometimes, Visual Studio Code is not able to select the Python interpreter.
However, you can do that manually if that ever happens.
Open up any python file, then click on the bottom right where it is written "Select Python Interpreter".
From there, select :code:`Python 3.8.10 64-bit usr/bin/python3`.
**Setting Up Testing**
The steps are as following to setup testing on VS Code when using a new Codespace.
1. Under the flask icon in the toolbar, select "Configure Python Tests" and select PyTest as the test framework.
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/setting_up/vs_code_testing_setup/vs_testing_01.png?raw=true
:width: 420
2. Select ivy_tests as the root directory for testing.
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/setting_up/vs_code_testing_setup/vs_testing_02.png?raw=true
:width: 420
3. Configure the :code:`_array_module.py` file in the :code:`array_api_tests` directory to use one of the supported frameworks.
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/contributing/setting_up/vs_code_testing_setup/vs_testing_03.png?raw=true
:width: 420
4. Following all of this, you should refresh the test suite and you should now be able to run tests right from VS Code!
5. To simply run the tests using the play button in the toolbar, you will need to add the .vscode folder to your workspace. Then add the ``settings.json`` file containing the following:
.. code-block:: json
{
"python.testing.pytestArgs": [
"./ivy_tests/test_ivy/",
"./ivy_tests/array_api_testing/test_array_api/",
"--continue-on-collection-errors",
],
"python.testing.unittestEnabled": false,
"python.testing.pytestEnabled": true,
"python.testing.autoTestDiscoverOnSaveEnabled": true,
}
Note: Currently you do not need to comment out the :code:`conftest.py` file in the :code:`array_api_tests` directory.
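With this configuration in place, an equivalent run from the integrated terminal (useful for checking that the setup works; a sketch using the arguments above) would be:
.. code-block:: none
pytest ./ivy_tests/test_ivy/ ./ivy_tests/array_api_testing/test_array_api/ --continue-on-collection-errors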
The Binaries
------------
Some features in :code:`ivy` are served as compiled binaries, such as the transpiler.
These binaries aren't maintained in the :code:`ivy` repository directly, but on a separate :code:`binaries` repository.
All the binaries that are required to make use of the full potential of :code:`ivy` are recorded in :code:`binaries.json`.
The supported configurations (Python version - OS - Architecture) are recorded in :code:`available_configs.json`.
The format of representing a configuration is based on PyPI's `platform compatibility tags`_,
meaning :code:`cp310-none-manylinux_2_17_x86_64` represents a configuration that can be used in a Python 3.10 environment on a linux system with x86-64.
We continue to add support for many more configurations, so that our binaries work with various Python versions, operating systems, and architectures.
On installing :code:`ivy` with :code:`pip install -e .`, all the required binaries with a configuration supported by your system get downloaded.
Just to have another check on whether all binaries are present, a warning of the following form is thrown when you :code:`import ivy` if any binaries are missing:
.. code-block:: none
WARNING:root: Some binaries seem to be missing in your system. This could be either because we don't have compatible binaries for your system or that newer binaries were available.
In the latter case, calling ivy.utils.cleanup_and_fetch_binaries() should fetch the binaries binaries. Feel free to create an issue on https://github.com/unifyai/ivy.git in case of the former
WARNING:root:
Following are the supported configurations :
compiler : cp38-none-manylinux_2_17_x86_64, cp310-none-manylinux_2_17_x86_64
engines : cp310-none-manylinux_2_17_x86_64
WARNING:root: /workspaces/ivy/ivy/compiler/_compiler.so not found.
In case there are no supported binaries for your configuration, feel free to create an issue on the :code:`ivy` repo asking for support to be added.
Feel free to ignore the warning in the meantime, or set a `logging level`_ to avoid receiving the warning.
In case you are using a supported configuration and are still receiving this warning, it indicates that you are yet to run :code:`pip install -e .` as mentioned in the previous sections.
Running a :code:`pip install -e .` is sufficient to download the binaries if they're supported but the :func:`ivy.utils.cleanup_and_fetch_binaries` function is provided just in case you want to download the binaries without a local installation.
.. code-block:: python
import ivy
ivy.utils.cleanup_and_fetch_binaries()
.. note:: Bear in mind that the binaries are **not** required for working on the open tasks for the most part, so it's totally fine to not have the binaries downloaded on your system for working on any of the open tasks.
**Video**
.. raw:: html
<iframe width="420" height="315" allow="fullscreen;"
src="https://www.youtube.com/embed/8rDcMMIl8dM" class="video" allowfullscreen="true">
</iframe>
**Round Up**
This should have hopefully given you a good understanding of how to get things properly set up.
If you have any questions, please feel free to reach out on `discord`_ in the `pycharm thread`_, `docker thread`_, `pre-commit thread`_, `pip packages thread`_ depending on the question!
| ivy/docs/overview/contributing/setting_up.rst/0 | {
"file_path": "ivy/docs/overview/contributing/setting_up.rst",
"repo_id": "ivy",
"token_count": 13806
} | 1 |
Formatting
==========
.. _`flake8`: https://flake8.pycqa.org/en/latest/index.html
.. _`black`: https://black.readthedocs.io/en/stable/index.html
.. _`formatting thread`: https://discord.com/channels/799879767196958751/1190247322626572408
.. _`discord`: https://discord.gg/sXyFF8tDtm
Currently, Ivy follows the `black`_ code style and uses the `flake8`_ linter in order to ensure that our code is consistent,
readable, and bug-free. This deep dive will explain how to use these tools to ensure that your code is formatted
correctly.
Please make sure to conform to the formatting rules before submitting a pull request. You are encouraged to take a look
at these coding style guides before you start contributing to Ivy.
Lint Checks
-----------
In addition to `black`_ and `flake8`_, Ivy uses other linters to help automate the formatting process, especially for
issues `flake8`_ detects but doesn't fix automatically. In addition to that, we validate docstrings as part of our
linting process. You can learn more about our docstring formatting in the `Docstrings <docstrings.rst>`_ section.
We use the following linters:
* `black`_
* `flake8`_
* `autoflake <https://github.com/PyCQA/autoflake>`_
* `docformatter <https://github.com/PyCQA/docformatter>`_
* `pydocstyle <https://github.com/pycqa/pydocstyle>`_
* `ivy-lint <https://github.com/unifyai/lint-hook>`_
You can also take a look at our configuration for linting in the `setup.cfg <https://github.com/unifyai/ivy/blob/main/setup.cfg>`_
file.
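If you prefer to invoke some of the tools directly rather than through pre-commit, a rough sketch is shown below (the exact flags and versions we pin live in the configs above, so outputs may differ):
.. code-block:: bash
black ivy/
flake8 ivy/
pydocstyle ivy/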
Setup Formatting Locally
------------------------
Pre-commit
~~~~~~~~~~
To centralize the formatting process, we use `pre-commit <https://pre-commit.com/>`_. This tool allows us to run all
the checks written in the `.pre-commit-config.yaml <https://github.com/unifyai/ivy/blob/main/.pre-commit-config.yaml>`_
file.
Pre-commit can run alone or as a git hook. To install it, you can run the following command:
.. code-block:: bash
pip install pre-commit
Once you have installed pre-commit, you can install the git hook by running the following command:
.. code-block:: bash
pre-commit install
This will install the git hook and will run the checks before you commit your code. If you want to run the checks
manually, you can run the following command:
.. code-block:: bash
pre-commit run --all-files
This will run all the required checks and will show you the output of each check.
Also when you make a commit, pre-commit will run the required checks and will show you the output of each check. If
there are any errors, it will not allow you to commit your code. You can fix the errors and commit again.
You should expect to see something similar to the following output when you run the checks:
.. code-block:: text
[INFO] Stashing unstaged files to ~/.cache/pre-commit/patch1687898304-8072.
black....................................................................Passed
autoflake................................................................Passed
flake8...................................................................Passed
docformatter.............................................................Passed
pydocstyle...............................................................Passed
ivy-lint.................................................................Passed
[INFO] Restored changes from ~/.cache/pre-commit/patch1687898304-8072.
[formatting-docs 3516aed563] Test commit
1 file changed, 1 insertion(+)
If something goes wrong, you will see the following output:
.. code-block:: text
[INFO] Stashing unstaged files to ~/.cache/pre-commit/patch1687898304-8072.
black....................................................................Failed
- hook id: black
- files were modified by this hook
reformatted ivy/stateful/activations.py
All done! ✨ 🍰 ✨
1 file reformatted.
autoflake................................................................Passed
flake8...................................................................Passed
docformatter.............................................................Passed
pydocstyle...............................................................Passed
ivy-lint.................................................................Passed
[INFO] Restored changes from ~/.cache/pre-commit/patch1687898304-8072.
You will notice that some files have changed if you checked ``git status``, you'll need to add them and commit again.
VS Code
~~~~~~~
There are some helpful extensions for VS Code that can detect and format your code according to our style guide. Here
is the list of extensions that we recommend:
* `Black Formatter <https://marketplace.visualstudio.com/items?itemName=ms-python.black-formatter>`_
* `Flake8 Extension <https://marketplace.visualstudio.com/items?itemName=ms-python.flake8>`_
PyCharm
~~~~~~~
Unfortunately, PyCharm doesn't have formatting extensions like VS Code. We don't have specific instructions for PyCharm
but you can use the following links to set up the formatting:
* `Akshay Jain's article on Pycharm + Black with Formatting on Auto-save
<https://akshay-jain.medium.com/pycharm-black-with-formatting-on-auto-save-4797972cf5de>`_
Common Issues with Pre-Commit
-----------------------------
As the pre-commit hook runs before each commit, when it fails it provides an error message that's readable on terminals
but not on IDE GUIs. So you might see a cryptic error message like one of the following:
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/deep_dive/formatting/vscode_error.png?raw=true
:alt: git commit error in VS Code
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/deep_dive/formatting/pycharm_error.png?raw=true
:alt: git commit error in PyCharm
We recommend you commit your code from the terminal when you contribute to Ivy. But if you want to commit from your IDE,
you can always either click on "Show Command Output" or "Show details in console" to see the error message.
And be aware that some of the linters we use format your code automatically like ``black`` and ``autoflake``. So you
will need to add the changes to your commit and commit again.
Continuous Integration
----------------------
We have multiple GitHub actions to check and fix the formatting of the code. They can be divided into lint checks and
lint formatting (or lint-bot).
All the checks we run are made by pre-commit, so you don't need to worry about lint errors arising from CI checks that
are not caught by pre-commit.
Lint Checks
~~~~~~~~~~~
We have a GitHub action that runs:
1. Every commit
2. Every pull request
The important check is the one that runs on every pull request. You should expect this check to pass if you have
pre-commit correctly set up. Note that you can also reformat your code directly from GitHub by making a comment with
``ivy-gardener``, we will go through more details about it in the next section.
Lint Formatting
~~~~~~~~~~~~~~~
We have a GitHub action that runs:
1. Every day at 08:00 UTC
2. Manually invoked by making a comment with ``ivy-gardener`` on a PR
The first action is to ensure that the code in the whole codebase is always formatted correctly. The second action
is to reformat the files you changed in your PR directly on GitHub. This is useful in case you didn't set up
pre-commit correctly, or if you or one of our maintainers want to reformat your code remotely.
Under the hood, when ``ivy-gardener`` is found in a comment, an ivy bot will trigger the same set of lint checks
as in the pre-commit process. Then the suggested changes produced in the checks will be applied automatically as
a new commit if there is any.
However, it is possible for the linters run by ``ivy-gardener`` and the daily GitHub action to face
formatting errors that need human intervention, like typos and uninitialized arguments. In this case, errors will
be thrown by the linters and by the lint checks that run later, while fixes to other, simpler errors will still
be applied by ``ivy-gardener`` properly.
On the other hand, ``ivy-gardener`` itself can fail if the bot handling it (ivy-branch) cannot apply the changes
suggested by the linters, for example, when it does not have access to edit the target branch. In this case, you
should try to give the maintainer bot access to your branch (which is an option shown in the GitHub UI) and give it
another try, or manually resolve the formatting errors by committing the changes yourself.
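If you end up resolving the errors by hand, a typical flow (assuming you have pre-commit installed locally) looks like the following:
.. code-block:: bash
    pre-commit run --all-files   # apply the automatic fixes locally
    git add -u                   # stage the reformatted files
    git commit -m "Apply lint fixes"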
**Round Up**
This should have hopefully given you a good feel for what our coding style is and how to format your code when contributing
to Ivy.
If you have any questions, please feel free to reach out on `discord`_ in the `formatting thread`_!
**Video**
.. raw:: html
<iframe width="420" height="315" allow="fullscreen;"
src="https://www.youtube.com/embed/JXQ8aI8vJ_8" class="video">
</iframe>
| ivy/docs/overview/deep_dive/formatting.rst/0 | {
"file_path": "ivy/docs/overview/deep_dive/formatting.rst",
"repo_id": "ivy",
"token_count": 2419
} | 2 |
Ivy Array
=========
Here, we explain the :class:`ivy.Array` class, which is the class used to represent all arrays in Ivy.
Every Ivy method returns :class:`ivy.Array` instances for all returned arrays.
The Array Class
---------------
Let's dive straight in and check out what the :class:`ivy.Array` constructor looks like.
.. code-block:: python
# ivy/array/array.py
class Array(
ArrayWithActivations,
ArrayWithCreation,
ArrayWithDataTypes,
ArrayWithDevice,
ArrayWithElementwise,
ArrayWithGeneral,
ArrayWithGradients,
ArrayWithImage,
ArrayWithLayers,
ArrayWithLinearAlgebra,
ArrayWithLosses,
ArrayWithManipulation,
ArrayWithNorms,
ArrayWithRandom,
ArrayWithSearching,
ArrayWithSet,
ArrayWithSorting,
ArrayWithStatistical,
ArrayWithUtility,
):
def __init__(self, data):
ArrayWithActivations.__init__(self)
ArrayWithCreation.__init__(self)
ArrayWithDataTypes.__init__(self)
ArrayWithDevice.__init__(self)
ArrayWithElementwise.__init__(self)
ArrayWithGeneral.__init__(self)
ArrayWithGradients.__init__(self)
ArrayWithImage.__init__(self)
ArrayWithLayers.__init__(self)
ArrayWithLinearAlgebra.__init__(self)
ArrayWithLosses.__init__(self)
ArrayWithManipulation.__init__(self)
ArrayWithNorms.__init__(self)
ArrayWithRandom.__init__(self)
ArrayWithSearching.__init__(self)
ArrayWithSet.__init__(self)
ArrayWithSorting.__init__(self)
ArrayWithStatistical.__init__(self)
ArrayWithUtility.__init__(self)
self._init(data)
def _init(self, data):
if ivy.is_ivy_array(data):
self._data = data.data
else:
assert ivy.is_native_array(data)
self._data = data
self._shape = self._data.shape
self._size = (
functools.reduce(mul, self._data.shape) if len(self._data.shape) > 0 else 0
)
self._dtype = ivy.dtype(self._data)
self._device = ivy.dev(self._data)
self._dev_str = ivy.as_ivy_dev(self._device)
self._pre_repr = "ivy."
if "gpu" in self._dev_str:
self._post_repr = f", dev={self._dev_str})"
else:
self._post_repr = ")"
self.framework_str = ivy.current_backend_str()
# Properties #
# -----------#
# noinspection PyPep8Naming
@property
def mT(self):
assert len(self._data.shape) >= 2
return ivy.matrix_transpose(self._data)
@property
def data(self):
return self._data
@property
def shape(self):
return ivy.Shape(self._shape)
We can see that the :class:`ivy.Array` class is a simple wrapper around an :class:`ivy.NativeArray` class (such as :class:`np.ndarray`, :class:`torch.Tensor` etc), stored in the :code:`self._data` attribute.
This all makes sense, but the first question you might ask is, why do we need a dedicated :class:`ivy.Array` class at all?
Can't we just operate with the native arrays directly such as :class:`np.ndarray`, :class:`torch.Tensor` etc. when calling ivy methods?
This is a great question, and has a couple of answers with varying importance.
Perhaps the most important motivation for having a dedicated :class:`ivy.Array` class is the unification of array operators, which we discuss next!
Unifying Operators
------------------
Let's assume that there is no such thing as the :class:`ivy.Array` class,
and we are just returning native arrays from all Ivy methods.
Consider the code below:
.. code-block:: python
ivy.set_backend(...)
x = ivy.array([1, 2, 3])
x[0] = 0
print(x)
Let's first assume we use numpy in the backend by calling :code:`ivy.set_backend('numpy')` in the first line.
:code:`x` would then be a :class:`np.ndarray` instance.
In this case, the code will execute without error, printing :code:`array([0, 2, 3])` to the console.
Now consider we use JAX in the backend by calling :code:`ivy.set_backend('jax')` in the first line.
:code:`x` would then be a :code:`jax.numpy.ndarray` instance.
The code will now throw the error :code:`TypeError: '<class 'jaxlib.xla_extension.DeviceArray'>' object does not support item assignment.` :code:`JAX arrays are immutable.` :code:`Instead of x[idx] = y, use x = x.at[idx].set(y) or another .at[] method` when we try to set index 0 to the value 0.
As can be seen from the error message, the reason for this is that JAX does not support inplace updates for arrays.
This is a problem.
The code written above is **pure Ivy code** which means it should behave identically irrespective of the backend, but as we've just seen it behaves **differently** with different backends.
Therefore, in this case, we could not claim that the Ivy code was truly framework-agnostic.
For the purposes of explanation, we can re-write the above code as follows:
.. code-block:: python
ivy.set_backend(...)
x = ivy.array([1, 2, 3])
x.__setitem__(0, 0)
print(x)
If :code:`x` is an :class:`ivy.NativeArray` instance, such as :class:`torch.Tensor` or :class:`np.ndarray`,
then the :meth:`__setitem__` method is defined in the native array class, which is completely outside of our control.
However, if :code:`x` is an :class:`ivy.Array` instance then the :meth:`__setitem__` method is defined in the :class:`ivy.Array` class, which we do have control over.
Let's take a look at how that method is implemented in the :class:`ivy.Array` class:
.. code-block:: python
@_native_wrapper
def __setitem__(self, query, val):
try:
self._data.__setitem__(query, val)
except (AttributeError, TypeError):
self._data = ivy.scatter_nd(
query, val, tensor=self._data, reduction="replace"
)._data
self._dtype = ivy.dtype(self._data)
We can implement inplace updates in the :class:`ivy.Array` class without requiring inplace updates in the backend array classes.
If the backend does not support inplace updates, then we can use the :func:`ivy.scatter_nd` method to return a new array and store this in the :code:`self._data` attribute.
Now, with :class:`ivy.Array` instances, our code will run without error, regardless of which backend is selected.
We can genuinely say our code is fully framework-agnostic.
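For illustration, assuming the JAX backend is installed, the earlier snippet now runs without error:
.. code-block:: python
    ivy.set_backend('jax')
    x = ivy.array([1, 2, 3])  # an ivy.Array wrapping a JAX array
    x[0] = 0                  # dispatched to ivy.Array.__setitem__
    print(x)                  # ivy.array([0, 2, 3])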
The same logic applies to all python operators.
For example, if :code:`x` and :code:`y` are both :class:`ivy.NativeArray` instances then the following code **might** execute identically for all backend frameworks:
.. code-block:: python
x = ivy.some_method(...)
y = ivy.some_method(...)
z = ((x + y) * 3) ** 0.5
print(z)
Similarly, for demonstration purposes, this code can be rewritten as:
.. code-block:: python
x = ivy.some_method(...)
y = ivy.some_method(...)
z = x.__add__(y).__mul__(3).__pow__(0.5)
print(z)
Even if this works fine for all backend frameworks now, what if Ivy is updated to support new backends in the future, and one of them behaves a little bit differently?
For example, maybe one framework makes the strange decision to return rounded integer data types when integer arrays are raised to floating point powers.
Without enforcing the use of the :class:`ivy.Array` class for arrays returned from Ivy methods, we would have no way to control this behaviour and unify the output :code:`z` for all backends.
Therefore, with the design of Ivy, we have made the decision to require all arrays returned from Ivy methods to be instances of the :class:`ivy.Array` class.
API Monkey Patching
-------------------
All ivy functions with array inputs/outputs have been wrapped to return :class:`ivy.Array` instances while accepting both :class:`ivy.Array` and :class:`ivy.NativeArray` instances.
This allows for the control required to provide a unified array interface.
For more details on wrapping, see the `Function Wrapping <../../deep_dive/function_wrapping.rst>`_ page in deep dive.
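As a small sketch of this behaviour, assuming the NumPy backend is set, a native array passed into an Ivy function comes back wrapped:
.. code-block:: python
    import numpy as np
    ivy.set_backend('numpy')
    x = np.array([1., 2., 3.])  # a native array is accepted as input...
    y = ivy.sum(x)
    print(ivy.is_ivy_array(y))  # True: ...but an ivy.Array is returned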
Instance Methods
----------------
Taking a look at the class definition, you may wonder why there are so many parent classes!
The only reason the Array class derives from so many different Array classes is so we can compartmentalize the different array functions into separate classes for better code readability.
All methods in the Ivy functional API are implemented as public instance methods in the :class:`ivy.Array` class via inheritance.
For example, a few functions in :class:`ivy.ArrayWithGeneral` are shown below.
.. code-block:: python
# ivy/array/general.py
class ArrayWithGeneral(abc.ABC):
    def reshape(self, new_shape):
        return ivy.reshape(self, new_shape)
def transpose(self, axes=None):
return ivy.transpose(self, axes)
def flip(self, axis=None, batch_shape=None):
return ivy.flip(self, axis, batch_shape)
One benefit of these instance methods is that they can help to tidy up code.
For example:
.. code-block:: python
x = ivy.ones((1, 2, 3, 4, 5))
# without ivy.Array
y = ivy.reshape(ivy.flip(ivy.matrix_transpose(
ivy.reshape(x, (6, 20))), axis=0), (2, 10, 6))
# with ivy.Array
y = x.reshape((6, 20)).matrix_transpose().flip(axis=0).reshape((2, 10, 6))
In the example above, not only is the :class:`ivy.Array` approach shorter to write, but more importantly there is much better alignment between each function and the function arguments.
It's hard to work out which shape parameters align with which method in the first case, but in the second case this is crystal clear.
In addition to the functions in the topic-specific parent classes, there are about 50 builtin methods implemented directly in the :class:`ivy.Array` class, most of which directly wrap a method in Ivy's functional API.
Some examples are given below.
.. code-block:: python
# ivy/array/array.py
def __add__(self, other):
return ivy.add(self, other)
def __sub__(self, other):
return ivy.sub(self, other)
def __mul__(self, other):
return ivy.mul(self, other)
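For instance, these dunder methods are what make plain Python operators work on :class:`ivy.Array` instances (a minimal sketch, assuming any backend is set):
.. code-block:: python
    x = ivy.array([1., 2.])
    y = ivy.array([3., 4.])
    z = x + y  # dispatches to ivy.Array.__add__, which calls ivy.add
    print(z)   # ivy.array([4., 6.])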
**Round Up**
That should hopefully be enough to get you started with the Ivy Array!
Please reach out on `discord <https://discord.gg/sXyFF8tDtm>`_ if you have any questions!
| ivy/docs/overview/design/ivy_as_a_framework/ivy_array.rst/0 | {
"file_path": "ivy/docs/overview/design/ivy_as_a_framework/ivy_array.rst",
"repo_id": "ivy",
"token_count": 3922
} | 3 |
Related Work
============
.. _`RWorks API Standards`: related_work/api_standards.rst
.. _`RWorks Wrapper Frameworks`: related_work/wrapper_frameworks.rst
.. _`RWorks Frameworks`: related_work/frameworks.rst
.. _`RWorks Graph Tracers`: related_work/graph_tracers.rst
.. _`RWorks Exchange Formats`: related_work/exchange_formats.rst
.. _`RWorks Compiler Infrastructure`: related_work/compiler_infrastructure.rst
.. _`RWorks Multi-Vendor Compiler Frameworks`: related_work/multi_vendor_compiler_frameworks.rst
.. _`RWorks Vendor-Specific APIs`: related_work/vendor_specific_apis.rst
.. _`RWorks Vendor-Specific Compilers`: related_work/vendor_specific_compilers.rst
.. _`RWorks ML-Unifying Companies`: related_work/ml_unifying_companies.rst
.. _`RWorks What does Ivy Add?`: related_work/what_does_ivy_add.rst
In this section, we explain how Ivy compares to many other very important and related pieces of work, which also address fragmentation, but in other areas within the ML stack.
Firstly, we need to look at the overall ML stack, and understand how the high level frameworks relate to the low level components.
In order to conceptualize this rather complex hierarchy, we have broken the ML stack into 9 groups, which are: `RWorks API Standards`_, `RWorks Wrapper Frameworks`_, `RWorks Frameworks`_, `RWorks Graph Tracers`_, `RWorks Exchange Formats`_, `RWorks Compiler Infrastructure`_, `RWorks Multi-Vendor Compiler Frameworks`_, `RWorks Vendor-Specific APIs`_ and `RWorks Vendor-Specific Compilers`_, going from high level to low level respectively.
.. image:: https://github.com/unifyai/unifyai.github.io/blob/main/img/externally_linked/related_work/ml_stack.png?raw=true
:width: 100%
Each of these groups within the ML stack has its own sub-section, linked below, within which we discuss various related projects which operate at that particular level within the stack.
We then compare Ivy to some other ML-unifying companies, which are working on very important problems and are helping to unify the lower levels of the ML stack.
We see these efforts as being very complementary to Ivy's vision for high-level unification.
Finally, we discuss how Ivy compares to each of these important works at all levels within the ML stack.
| (a) `RWorks API Standards`_
| Standardized APIs which similar libraries should adhere to
|
| (b) `RWorks Wrapper Frameworks`_
| Frameworks which wrap other ML frameworks
|
| (c) `RWorks Frameworks`_
| Standalone ML Frameworks
|
| (d) `RWorks Graph Tracers`_
| Extracting acyclic directed computation graphs from code
|
| (e) `RWorks Exchange Formats`_
| File formats to exchange neural networks between frameworks
|
| (f) `RWorks Compiler Infrastructure`_
| Infrastructure and standards to simplify the lives of compiler designers
|
| (g) `RWorks Multi-Vendor Compiler Frameworks`_
| Executing ML code on a variety of hardware targets
|
| (h) `RWorks Vendor-Specific APIs`_
| Interfacing with specific hardware in an intuitive manner
|
| (i) `RWorks Vendor-Specific Compilers`_
| Compiling code to specific hardware
|
| (j) `RWorks ML-Unifying Companies`_
| Companies working towards unification in ML
|
| (k) `RWorks What does Ivy Add?`_
| How does Ivy fit into all of this?
.. toctree::
:hidden:
:maxdepth: -1
:caption: Related Work
related_work/api_standards.rst
related_work/wrapper_frameworks.rst
related_work/frameworks.rst
related_work/graph_tracers.rst
related_work/exchange_formats.rst
related_work/compiler_infrastructure.rst
related_work/multi_vendor_compiler_frameworks.rst
related_work/vendor_specific_apis.rst
related_work/vendor_specific_compilers.rst
related_work/ml_unifying_companies.rst
related_work/what_does_ivy_add.rst
| ivy/docs/overview/related_work.rst/0 | {
"file_path": "ivy/docs/overview/related_work.rst",
"repo_id": "ivy",
"token_count": 1179
} | 4 |
# global
import copy
import re
import warnings
import logging
import builtins
import numpy as np
import sys
import inspect
import importlib
import os
from collections.abc import Sequence
import ivy.utils.backend.handler
from ivy.utils import check_for_binaries
from ivy._version import __version__ as __version__
_not_imported_backends = list(ivy.utils.backend.handler._backend_dict.keys())
try:
    # Skip numpy from frameworks installed
    _not_imported_backends.remove("numpy")
except ValueError:
    # list.remove raises ValueError when the element is absent
    pass
for backend_framework in _not_imported_backends.copy():
# If a framework was already imported before our init execution
if backend_framework in sys.modules:
_not_imported_backends.remove(backend_framework)
warnings.filterwarnings("ignore", module="^(?!.*ivy).*$")
# Local Ivy
import_module_path = "ivy.utils._importlib"
def is_local():
return hasattr(ivy, "_is_local_pkg")
# class placeholders
class FrameworkStr(str):
def __new__(cls, fw_str):
ivy.utils.assertions.check_elem_in_list(
fw_str, ivy.utils.backend.handler._backend_dict.keys()
)
return str.__new__(cls, fw_str)
class Framework:
pass
class NativeArray:
pass
class NativeDevice:
pass
class NativeDtype:
pass
class NativeShape:
pass
class NativeModule:
pass
class Container:
pass
class Array:
pass
class TuckerTensor:
pass
class CPTensor:
pass
class TRTensor:
pass
class Parafac2Tensor:
pass
class TTTensor:
pass
class Device(str):
def __new__(cls, dev_str):
if dev_str != "":
ivy.utils.assertions.check_elem_in_list(dev_str[0:3], ["gpu", "tpu", "cpu"])
if dev_str != "cpu":
# ivy.assertions.check_equal(dev_str[3], ":")
ivy.utils.assertions.check_true(
dev_str[4:].isnumeric(),
message=f"{dev_str[4:]} must be numeric",
)
return str.__new__(cls, dev_str)
class Dtype(str):
def __new__(cls, dtype_str):
if dtype_str is builtins.int:
dtype_str = default_int_dtype()
if dtype_str is builtins.float:
dtype_str = default_float_dtype()
if dtype_str is builtins.complex:
dtype_str = default_complex_dtype()
if dtype_str is builtins.bool:
dtype_str = "bool"
if not isinstance(dtype_str, str):
raise ivy.utils.exceptions.IvyException("dtype must be type str")
if dtype_str not in _all_ivy_dtypes_str:
raise ivy.utils.exceptions.IvyException(
f"{dtype_str} is not supported by ivy"
)
return str.__new__(cls, dtype_str)
def __ge__(self, other):
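        # dtype ordering is defined via type promotion: self >= other
        # holds iff promoting the two dtypes yields self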
if isinstance(other, str):
other = Dtype(other)
if not isinstance(other, Dtype):
raise ivy.utils.exceptions.IvyException(
"Attempted to compare a dtype with something which"
"couldn't be interpreted as a dtype"
)
return self == ivy.promote_types(self, other)
def __gt__(self, other):
if isinstance(other, str):
other = Dtype(other)
if not isinstance(other, Dtype):
raise ivy.utils.exceptions.IvyException(
"Attempted to compare a dtype with something which"
"couldn't be interpreted as a dtype"
)
return self >= other and self != other
def __lt__(self, other):
if isinstance(other, str):
other = Dtype(other)
if not isinstance(other, Dtype):
raise ivy.utils.exceptions.IvyException(
"Attempted to compare a dtype with something which"
"couldn't be interpreted as a dtype"
)
return self != ivy.promote_types(self, other)
def __le__(self, other):
if isinstance(other, str):
other = Dtype(other)
if not isinstance(other, Dtype):
raise ivy.utils.exceptions.IvyException(
"Attempted to compare a dtype with something which"
"couldn't be interpreted as a dtype"
)
return self < other or self == other
@property
def is_bool_dtype(self):
return is_bool_dtype(self)
@property
def is_int_dtype(self):
return is_int_dtype(self)
@property
def is_float_dtype(self):
return is_float_dtype(self)
@property
def is_uint_dtype(self):
return is_uint_dtype(self)
@property
def is_complex_dtype(self):
return is_complex_dtype(self)
@property
def dtype_bits(self):
return dtype_bits(self)
@property
def as_native_dtype(self):
return as_native_dtype(self)
@property
def name(self) -> str:
return str(self)
@property
def info(self):
if self.is_int_dtype or self.is_uint_dtype:
return iinfo(self)
elif self.is_float_dtype:
return finfo(self)
else:
raise ivy.utils.exceptions.IvyError(f"{self} is not supported by info")
def can_cast(self, to):
return can_cast(self, to)
class Shape(Sequence):
def __init__(self, shape_tup):
valid_types = (int, list, tuple, ivy.Array, ivy.Shape)
if len(backend_stack) != 0:
valid_types += (ivy.NativeShape, ivy.NativeArray)
else:
valid_types += (
current_backend(shape_tup).NativeShape,
current_backend(shape_tup).NativeArray,
)
ivy.utils.assertions.check_isinstance(shape_tup, valid_types)
if len(backend_stack) == 0:
if isinstance(shape_tup, np.ndarray):
shape_tup = tuple(shape_tup.tolist())
self._shape = shape_tup
elif isinstance(shape_tup, valid_types):
self._shape = ivy.to_native_shape(shape_tup)
else:
self._shape = None
@staticmethod
def _shape_casting_helper(ivy_shape, other):
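        # mirror the container type of `other` so the rich comparisons
        # below compare like with like (tuple vs tuple, list vs list)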
if isinstance(other, tuple) and not isinstance(ivy_shape, tuple):
return tuple(ivy_shape)
elif isinstance(other, list) and not isinstance(ivy_shape, list):
return list(ivy_shape)
else:
return ivy_shape
def __repr__(self):
pattern = r"\d+(?:,\s*\d+)*"
shape_repr = re.findall(pattern, self._shape.__str__())
shape_repr = ", ".join([str(i) for i in shape_repr])
shape_repr = f"{shape_repr}," if len(shape_repr) == 1 else shape_repr
return (
f"ivy.Shape({shape_repr})" if self._shape is not None else "ivy.Shape(None)"
)
def __deepcopy__(self, memo):
ret = self.__class__.__new__(self.__class__)
ret._shape = self.shape
return ret
def __iter__(self):
return iter(self._shape)
def __add__(self, other):
try:
self._shape = self._shape + other
except TypeError:
self._shape = self._shape + list(other)
return self
def __radd__(self, other):
try:
self._shape = other + self._shape
except TypeError:
self._shape = list(other) + self._shape
return self
def __mul__(self, other):
if ivy.current_backend_str() == "tensorflow":
shape_tup = builtins.tuple(self._shape) * other
self._shape = ivy.to_native_shape(shape_tup)
else:
self._shape = self._shape * other
return self
def __rmul__(self, other):
# handle tensorflow case as tf.TensorShape doesn't support multiplications
if ivy.current_backend_str() == "tensorflow":
shape_tup = other * builtins.tuple(self._shape)
self._shape = ivy.to_native_shape(shape_tup)
else:
self._shape = other * self._shape
return self
def __bool__(self):
return builtins.bool(self._shape)
def __div__(self, other):
return self._shape // other
def __floordiv__(self, other):
return self._shape // other
def __mod__(self, other):
return self._shape % other
def __rdiv__(self, other):
return other // self._shape
def __rmod__(self, other):
return other % self._shape
def __reduce__(self):
return (self.__class__, (self._shape,))
    def as_dimension(self, other):
        # self._shape is not a type, so isinstance must test against Shape
        if isinstance(other, Shape):
            return other
        else:
            return self._shape
def __sub__(self, other):
try:
self._shape = self._shape - other
except TypeError:
self._shape = self._shape - list(other)
return self
def __rsub__(self, other):
try:
self._shape = other - self._shape
except TypeError:
self._shape = list(other) - self._shape
return self
def __eq__(self, other):
self._shape = Shape._shape_casting_helper(self._shape, other)
return self._shape == other
def __int__(self):
if hasattr(self._shape, "__int__"):
res = self._shape.__int__()
else:
res = int(self._shape)
if res is NotImplemented:
return res
return to_ivy(res)
def __ge__(self, other):
self._shape = Shape._shape_casting_helper(self._shape, other)
return self._shape >= other
def __gt__(self, other):
self._shape = Shape._shape_casting_helper(self._shape, other)
return self._shape > other
def __le__(self, other):
self._shape = Shape._shape_casting_helper(self._shape, other)
return self._shape <= other
def __lt__(self, other):
self._shape = Shape._shape_casting_helper(self._shape, other)
return self._shape < other
def __getattribute__(self, item):
return super().__getattribute__(item)
def __getitem__(self, key):
try:
return self._shape[key]
except (TypeError, IndexError):
return None
def __len__(self):
return len(self._shape) if self._shape is not None else 0
def __delattr__(self, item):
return super().__delattr__(item)
def __hash__(self):
return hash(self._shape)
def __sizeof__(self):
return len(self._shape) if self._shape is not None else 0
def __dir__(self):
return self._shape.__dir__()
@property
def shape(self):
return self._shape
@property
def value(self):
return self._value
def concatenate(self, other):
if self._shape is None or other.dims is None:
raise ValueError("Unknown Shape")
else:
return Shape(self.dims + other.dims)
def index(self, index):
assert isinstance(self._shape, Shape)
if self._shape.rank is None:
return Shape(None)
else:
return self._shape[index]
def as_dimension(self):
if isinstance(self._shape, Shape):
return self._shape
else:
return Shape(self._shape)
def is_compatible_with(self, other):
return self._shape is None or other.value is None or self._shape == other.value
@property
def rank(self):
"""Returns the rank of this shape, or None if it is unspecified."""
if self._shape is not None:
return len(self._shape)
return None
def assert_same_rank(self, other):
other = Shape(other)
if self.rank != other.rank:
raise ValueError(f"Shapes {self} and {other} must have the same rank")
def assert_has_rank(self, rank):
if self.rank not in (None, rank):
raise ValueError(f"Shape {self} must have rank {rank}")
@staticmethod
def unknown_shape(rank=None, **kwargs):
if rank is None and "ndims" in kwargs:
rank = kwargs.pop("ndims")
if kwargs:
raise TypeError(f"Unknown argument: {kwargs}")
if rank is None:
return Shape(None)
else:
return Shape([Shape(None)] * rank)
def with_rank(self, rank):
try:
return self.merge_with(self.unknown_shape(rank=rank))
except ValueError as e:
raise ValueError(f"Shape {self} must have rank {rank}") from e
def with_rank_at_least(self, rank):
if self.rank is not None and self.rank < rank:
raise ValueError(f"Shape {self} must have rank at least {rank}")
else:
return self
def with_rank_at_most(self, rank):
if self.rank is not None and self.rank > rank:
raise ValueError(f"Shape {self} must have rank at most {rank}")
else:
return self
@staticmethod
def as_shape(shape):
if isinstance(shape, Shape):
return shape
else:
return Shape(shape)
@property
def dims(self):
        if self._shape is None:
            return None
        # return the raw dimension entries as a list
        return list(self._shape)
@property
def ndims(self):
"""Deprecated accessor for `rank`."""
return self.rank
@property
def is_fully_defined(self):
return self._shape is not None and all(
shape is not None for shape in self._shape
)
    @property
    def num_elements(self):
        # is_fully_defined is a property, so it must not be called
        if not self.is_fully_defined:
            return None
        # the element count is the product of all dimensions
        return int(np.prod(self._shape))
    @property
    def assert_is_fully_defined(self):
        if not self.is_fully_defined:
            raise ValueError(f"Shape {self} is not fully defined")
def as_list(self):
if self._shape is None:
raise ivy.utils.exceptions.IvyException(
"Cannot convert a partially known Shape to a list"
)
return list(self._shape)
class IntDtype(Dtype):
def __new__(cls, dtype_str):
if dtype_str is builtins.int:
dtype_str = default_int_dtype()
if not isinstance(dtype_str, str):
raise ivy.utils.exceptions.IvyException("dtype_str must be type str")
if "int" not in dtype_str:
raise ivy.utils.exceptions.IvyException(
"dtype must be string and starts with int"
)
if dtype_str not in _all_ivy_dtypes_str:
raise ivy.utils.exceptions.IvyException(
f"{dtype_str} is not supported by ivy"
)
return str.__new__(cls, dtype_str)
@property
def info(self):
return iinfo(self)
class FloatDtype(Dtype):
def __new__(cls, dtype_str):
if dtype_str is builtins.float:
dtype_str = default_float_dtype()
if not isinstance(dtype_str, str):
raise ivy.utils.exceptions.IvyException("dtype_str must be type str")
if "float" not in dtype_str:
raise ivy.utils.exceptions.IvyException(
"dtype must be string and starts with float"
)
if dtype_str not in _all_ivy_dtypes_str:
raise ivy.utils.exceptions.IvyException(
f"{dtype_str} is not supported by ivy"
)
return str.__new__(cls, dtype_str)
@property
def info(self):
return finfo(self)
class UintDtype(IntDtype):
def __new__(cls, dtype_str):
if not isinstance(dtype_str, str):
raise ivy.utils.exceptions.IvyException("dtype_str must be type str")
if "uint" not in dtype_str:
raise ivy.utils.exceptions.IvyException(
"dtype must be string and starts with uint"
)
if dtype_str not in _all_ivy_dtypes_str:
raise ivy.utils.exceptions.IvyException(
f"{dtype_str} is not supported by ivy"
)
return str.__new__(cls, dtype_str)
@property
def info(self):
return iinfo(self)
class ComplexDtype(Dtype):
def __new__(cls, dtype_str):
if not isinstance(dtype_str, str):
raise ivy.utils.exceptions.IvyException("dtype_str must be type str")
if "complex" not in dtype_str:
raise ivy.utils.exceptions.IvyException(
"dtype must be string and starts with complex"
)
if dtype_str not in _all_ivy_dtypes_str:
raise ivy.utils.exceptions.IvyException(
f"{dtype_str} is not supported by ivy"
)
return str.__new__(cls, dtype_str)
@property
def info(self):
return finfo(self)
class Node(str):
# ToDo: add formatting checks once multi-node is supported
pass
array_significant_figures_stack = []
array_decimal_values_stack = []
warning_level_stack = []
nan_policy_stack = []
dynamic_backend_stack = []
warn_to_regex = {"all": "!.*", "ivy_only": "^(?!.*ivy).*$", "none": ".*"}
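# module-name regexes fed to warnings.filterwarnings("ignore", module=...):
# "all" silences nothing, "ivy_only" silences non-ivy modules,
# "none" silences every module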
cython_wrappers_stack = []
# local
import threading
# devices
# ToDo: add gpu and tpu for valid devices when we test for them
all_devices = ("cpu", "gpu", "tpu")
valid_devices = ("cpu", "gpu")
invalid_devices = ("tpu",)
# data types as string (to be used by Dtype classes)
# any changes here should also be reflected in the data type initialisation underneath
_all_ivy_dtypes_str = (
"int8",
"int16",
"int32",
"int64",
"uint8",
"uint16",
"uint32",
"uint64",
"bfloat16",
"float16",
"float32",
"float64",
"complex64",
"complex128",
"bool",
)
# data types
# any changes here should also be reflected in the data type string tuple above
int8 = IntDtype("int8")
int16 = IntDtype("int16")
int32 = IntDtype("int32")
int64 = IntDtype("int64")
uint8 = UintDtype("uint8")
uint16 = UintDtype("uint16")
uint32 = UintDtype("uint32")
uint64 = UintDtype("uint64")
bfloat16 = FloatDtype("bfloat16")
float16 = FloatDtype("float16")
float32 = FloatDtype("float32")
float64 = FloatDtype("float64")
double = float64
complex64 = ComplexDtype("complex64")
complex128 = ComplexDtype("complex128")
bool = Dtype("bool")
# native data types
native_int8 = IntDtype("int8")
native_int16 = IntDtype("int16")
native_int32 = IntDtype("int32")
native_int64 = IntDtype("int64")
native_uint8 = UintDtype("uint8")
native_uint16 = UintDtype("uint16")
native_uint32 = UintDtype("uint32")
native_uint64 = UintDtype("uint64")
native_bfloat16 = FloatDtype("bfloat16")
native_float16 = FloatDtype("float16")
native_float32 = FloatDtype("float32")
native_float64 = FloatDtype("float64")
native_double = native_float64
native_complex64 = ComplexDtype("complex64")
native_complex128 = ComplexDtype("complex128")
native_bool = Dtype("bool")
# all
all_dtypes = (
int8,
int16,
int32,
int64,
uint8,
uint16,
uint32,
uint64,
bfloat16,
float16,
float32,
float64,
complex64,
complex128,
bool,
)
all_numeric_dtypes = (
int8,
int16,
int32,
int64,
uint8,
uint16,
uint32,
uint64,
bfloat16,
float16,
float32,
float64,
complex64,
complex128,
)
all_int_dtypes = (
int8,
int16,
int32,
int64,
uint8,
uint16,
uint32,
uint64,
)
all_float_dtypes = (
bfloat16,
float16,
float32,
float64,
)
all_uint_dtypes = (
uint8,
uint16,
uint32,
uint64,
)
all_complex_dtypes = (
complex64,
complex128,
)
# valid data types
valid_dtypes = all_dtypes
valid_numeric_dtypes = all_numeric_dtypes
valid_int_dtypes = all_int_dtypes
valid_float_dtypes = all_float_dtypes
valid_uint_dtypes = all_uint_dtypes
valid_complex_dtypes = all_complex_dtypes
# invalid data types
invalid_dtypes = ()
invalid_numeric_dtypes = ()
invalid_int_dtypes = ()
invalid_float_dtypes = ()
invalid_uint_dtypes = ()
invalid_complex_dtypes = ()
locks = {"backend_setter": threading.Lock()}
from .wrappers import *
from .func_wrapper import *
from .data_classes.array import Array, add_ivy_array_instance_methods
from .data_classes.array.conversions import *
from .data_classes.array import conversions as arr_conversions
from .data_classes.container import conversions as cont_conversions
from .data_classes.container import (
ContainerBase,
Container,
add_ivy_container_instance_methods,
)
from .data_classes.nested_array import NestedArray
from .data_classes.factorized_tensor import (
TuckerTensor,
CPTensor,
TRTensor,
TTTensor,
Parafac2Tensor,
)
from ivy.utils.backend import (
current_backend,
compiled_backends,
with_backend,
set_backend,
set_numpy_backend,
set_jax_backend,
set_tensorflow_backend,
set_torch_backend,
set_paddle_backend,
set_mxnet_backend,
previous_backend,
backend_stack,
choose_random_backend,
unset_backend,
)
from . import wrappers
from . import func_wrapper
from .utils import assertions, exceptions, verbosity
from .utils.backend import handler
from . import functional
from .functional import *
from . import stateful
from .stateful import *
from ivy.utils.inspection import fn_array_spec, add_array_specs
add_array_specs()
_imported_frameworks_before_compiler = list(sys.modules.keys())
try:
from .engines import XLA as xla
from .engines import ivy2xla
except: # noqa: E722
pass
try:
from .compiler.compiler import transpile, trace_graph, unify
except: # noqa: E722
pass # Added for the finally statement
try:
from .compiler.replace_with import replace_with, transform_function
except: # noqa: E722
pass
finally:
# Skip framework imports done by Ivy compiler for now
for backend_framework in _not_imported_backends.copy():
if backend_framework in sys.modules:
if backend_framework not in _imported_frameworks_before_compiler:
_not_imported_backends.remove(backend_framework)
# add instance methods to Ivy Array and Container
from ivy.functional.ivy import (
activations,
creation,
data_type,
device,
elementwise,
general,
gradients,
layers,
linear_algebra,
losses,
manipulation,
norms,
random,
searching,
set,
sorting,
statistical,
utility,
)
add_ivy_array_instance_methods(
Array,
[
activations,
arr_conversions,
creation,
data_type,
device,
elementwise,
general,
gradients,
layers,
linear_algebra,
losses,
manipulation,
norms,
random,
searching,
set,
sorting,
statistical,
utility,
],
)
add_ivy_container_instance_methods(
Container,
[
activations,
cont_conversions,
creation,
data_type,
device,
elementwise,
general,
gradients,
layers,
linear_algebra,
losses,
manipulation,
norms,
random,
searching,
set,
sorting,
statistical,
utility,
],
)
add_ivy_container_instance_methods(
Container,
[
activations,
cont_conversions,
creation,
data_type,
device,
elementwise,
general,
gradients,
layers,
linear_algebra,
losses,
manipulation,
norms,
random,
searching,
set,
sorting,
statistical,
utility,
],
static=True,
)
class GlobalsDict(dict):
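    # a dict whose items are also exposed as attributes, so the global
    # parameter stacks can be read and written with attribute access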
__getattr__ = dict.__getitem__
__setattr__ = dict.__setitem__
__delattr__ = dict.__delitem__
__name__ = dict.__name__
def __deepcopy__(self, memo):
ret = self.__class__.__new__(self.__class__)
for k, v in self.items():
ret[k] = copy.deepcopy(v)
return ret
# defines ivy.globals attribute
globals_vars = GlobalsDict(
{
"backend_stack": backend_stack,
"default_device_stack": device.default_device_stack,
"valid_dtypes": valid_dtypes,
"valid_numeric_dtypes": valid_numeric_dtypes,
"valid_int_dtypes": valid_int_dtypes,
"valid_uint_dtypes": valid_uint_dtypes,
"valid_complex_dtypes": valid_complex_dtypes,
"valid_devices": valid_devices,
"invalid_dtypes": invalid_dtypes,
"invalid_numeric_dtypes": invalid_numeric_dtypes,
"invalid_int_dtypes": invalid_int_dtypes,
"invalid_float_dtypes": invalid_float_dtypes,
"invalid_uint_dtypes": invalid_uint_dtypes,
"invalid_complex_dtypes": invalid_complex_dtypes,
"invalid_devices": invalid_devices,
"array_significant_figures_stack": array_significant_figures_stack,
"array_decimal_values_stack": array_decimal_values_stack,
"warning_level_stack": warning_level_stack,
"queue_timeout_stack": general.queue_timeout_stack,
"array_mode_stack": general.array_mode_stack,
"inplace_mode_stack": general.inplace_mode_stack,
"soft_device_mode_stack": device.soft_device_mode_stack,
"shape_array_mode_stack": general.shape_array_mode_stack,
"show_func_wrapper_trace_mode_stack": general.show_func_wrapper_trace_mode_stack,
"min_denominator_stack": general.min_denominator_stack,
"min_base_stack": general.min_base_stack,
"tmp_dir_stack": general.tmp_dir_stack,
"precise_mode_stack": general.precise_mode_stack,
"nestable_mode_stack": general.nestable_mode_stack,
"exception_trace_mode_stack": general.exception_trace_mode_stack,
"default_dtype_stack": data_type.default_dtype_stack,
"default_float_dtype_stack": data_type.default_float_dtype_stack,
"default_int_dtype_stack": data_type.default_int_dtype_stack,
"default_uint_dtype_stack": data_type.default_uint_dtype_stack,
"nan_policy_stack": nan_policy_stack,
"dynamic_backend_stack": dynamic_backend_stack,
"cython_wrappers_stack": cython_wrappers_stack,
}
)
_default_globals = copy.deepcopy(globals_vars)
def reset_globals():
global globals_vars
globals_vars = copy.deepcopy(_default_globals)
def set_global_attr(attr_name, attr_val):
setattr(globals_vars, attr_name, attr_val)
def del_global_attr(attr_name):
delattr(globals_vars, attr_name)
backend = ""
backend_version = {}
native_inplace_support = None
supports_gradients = None
# Array Significant Figures #
def _assert_array_significant_figures_formatting(sig_figs):
ivy.utils.assertions.check_isinstance(sig_figs, int)
ivy.utils.assertions.check_greater(sig_figs, 0, as_array=False)
# ToDo: SF formatting for complex number
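# round each finite, non-zero float element of `x` to `sig_fig` significant
# figures by scaling with a per-element power of ten derived from log10(|x|)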
def vec_sig_fig(x, sig_fig=3):
if isinstance(x, np.bool_):
return x
if isinstance(x, complex):
return complex(x)
if np.issubdtype(x.dtype, np.floating):
x_positive = np.where(np.isfinite(x) & (x != 0), np.abs(x), 10 ** (sig_fig - 1))
mags = 10 ** (sig_fig - 1 - np.floor(np.log10(x_positive)))
return np.round(x * mags) / mags
return x
ivy.array_significant_figures = (
array_significant_figures_stack[-1] if array_significant_figures_stack else 10
)
def set_array_significant_figures(sig_figs):
"""Summary.
Parameters
----------
sig_figs
optional int, number of significant figures to be shown when printing
"""
_assert_array_significant_figures_formatting(sig_figs)
global array_significant_figures_stack
array_significant_figures_stack.append(sig_figs)
ivy.__setattr__("array_significant_figures", sig_figs, True)
def unset_array_significant_figures():
"""Unset the currently set array significant figures."""
global array_significant_figures_stack
if array_significant_figures_stack:
array_significant_figures_stack.pop(-1)
sig_figs = (
array_significant_figures_stack[-1]
if array_significant_figures_stack
else 10
)
ivy.__setattr__("array_significant_figures", sig_figs, True)
# Decimal Values #
def _assert_array_decimal_values_formatting(dec_vals):
ivy.utils.assertions.check_isinstance(dec_vals, int)
ivy.utils.assertions.check_greater(dec_vals, 0, allow_equal=True, as_array=False)
ivy.array_decimal_values = (
array_decimal_values_stack[-1] if array_decimal_values_stack else 8
)
def set_array_decimal_values(dec_vals):
"""Summary.
Parameters
----------
dec_vals
optional int, number of significant figures to be shown when printing
"""
_assert_array_decimal_values_formatting(dec_vals)
global array_decimal_values_stack
array_decimal_values_stack.append(dec_vals)
ivy.__setattr__("array_decimal_values", dec_vals, True)
def unset_array_decimal_values():
"""Unset the currently set array decimal values."""
global array_decimal_values_stack
if array_decimal_values_stack:
array_decimal_values_stack.pop(-1)
dec_vals = array_decimal_values_stack[-1] if array_decimal_values_stack else 8
ivy.__setattr__("array_decimal_values", dec_vals, True)
ivy.warning_level = warning_level_stack[-1] if warning_level_stack else "ivy_only"
def set_warning_level(warn_level):
"""Summary.
Parameters
----------
warn_level
string for the warning level to be set, one of "none", "ivy_only", "all"
"""
global warning_level_stack
warning_level_stack.append(warn_level)
ivy.__setattr__("warning_level", warn_level, True)
def unset_warning_level():
"""Unset the currently set warning level."""
global warning_level_stack
if warning_level_stack:
warning_level_stack.pop(-1)
warn_level = warning_level_stack[-1] if warning_level_stack else "ivy_only"
ivy.__setattr__("warning_level", warn_level, True)
def warn(warning_message, stacklevel=0):
warn_level = ivy.warning_level
warnings.filterwarnings("ignore", module=warn_to_regex[warn_level])
warnings.warn(warning_message, stacklevel=stacklevel)
# nan policy #
ivy.nan_policy = nan_policy_stack[-1] if nan_policy_stack else "nothing"
def set_nan_policy(warn_level):
"""Summary.
Parameters
----------
nan_policy
string for the nan policy to be set, one of
"nothing", "warns", "raise_exception"
"""
if warn_level not in ["nothing", "warns", "raise_exception"]:
raise ivy.utils.exceptions.IvyException(
"nan_policy must be one of 'nothing', 'warns', 'raise_exception'"
)
global nan_policy_stack
nan_policy_stack.append(warn_level)
ivy.__setattr__("nan_policy", warn_level, True)
def unset_nan_policy():
"""Unset the currently set nan policy."""
global nan_policy_stack
if nan_policy_stack:
nan_policy_stack.pop(-1)
warn_level = nan_policy_stack[-1] if nan_policy_stack else "nothing"
ivy.__setattr__("nan_policy", warn_level, True)
# Dynamic Backend
ivy.dynamic_backend = dynamic_backend_stack[-1] if dynamic_backend_stack else True
def set_dynamic_backend(flag): # noqa: D209
"""Set the global dynamic backend setting to the provided flag (True or
False)"""
global dynamic_backend_stack
if flag not in [True, False]:
raise ValueError("dynamic_backend must be a boolean value (True or False)")
dynamic_backend_stack.append(flag)
ivy.__setattr__("dynamic_backend", flag, True)
def unset_dynamic_backend():
"""Remove the current dynamic backend setting.
Also restore the previous setting (if any)
"""
global dynamic_backend_stack
if dynamic_backend_stack:
dynamic_backend_stack.pop()
flag = dynamic_backend_stack[-1] if dynamic_backend_stack else True
ivy.__setattr__("dynamic_backend", flag, True)
# Cython wrappers
ivy.cython_wrappers_mode = cython_wrappers_stack[-1] if cython_wrappers_stack else False
@handle_exceptions
def set_cython_wrappers_mode(flag: bool = True) -> None:
"""Set the mode of whether to use cython wrappers for functions.
    Parameters
    ----------
flag
boolean whether to use cython wrappers for functions
Examples
--------
>>> ivy.set_cython_wrappers_mode(False)
>>> ivy.cython_wrappers_mode
False
>>> ivy.set_cython_wrappers_mode(True)
>>> ivy.cython_wrappers_mode
True
"""
global cython_wrappers_stack
if flag not in [True, False]:
raise ValueError("cython_wrappers_mode must be a boolean value (True or False)")
cython_wrappers_stack.append(flag)
ivy.__setattr__("cython_wrappers_mode", flag, True)
# Context Managers
class DynamicBackendContext:
def __init__(self, value):
self.value = value
self.original = None
def __enter__(self):
self.original = ivy.dynamic_backend
set_dynamic_backend(self.value)
def __exit__(self, type, value, traceback):
unset_dynamic_backend()
if self.original is not None:
set_dynamic_backend(self.original)
def dynamic_backend_as(value):
return DynamicBackendContext(value)
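# usage sketch:
#     with ivy.dynamic_backend_as(False):
#         ...  # dynamic backend conversion is disabled within this block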
for backend_framework in _not_imported_backends:
if backend_framework in sys.modules:
warnings.warn(
f"{backend_framework} module has been imported while ivy doesn't "
"import it without setting a backend, ignore if that's intended"
)
# sub_backends
from ivy.utils.backend.sub_backend_handler import (
set_sub_backend,
unset_sub_backend,
clear_sub_backends,
)
available_sub_backends = []
current_sub_backends = []
# casting modes
downcast_dtypes = False
upcast_dtypes = False
crosscast_dtypes = False
def cast_dtypes():
return downcast_dtypes and upcast_dtypes and crosscast_dtypes
def downcast_data_types(val=True):
global downcast_dtypes
downcast_dtypes = val
def upcast_data_types(val=True):
global upcast_dtypes
upcast_dtypes = val
def crosscast_data_types(val=True):
global crosscast_dtypes
crosscast_dtypes = val
def cast_data_types(val=True):
global upcast_dtypes
global downcast_dtypes
global crosscast_dtypes
upcast_dtypes = val
downcast_dtypes = val
crosscast_dtypes = val
# Promotion Tables #
# ---------------- #
# data type promotion
array_api_promotion_table = {
(bool, bool): bool,
(int8, int8): int8,
(int8, int16): int16,
(int8, int32): int32,
(int8, int64): int64,
(int16, int16): int16,
(int16, int32): int32,
(int16, int64): int64,
(int32, int32): int32,
(int32, int64): int64,
(int64, int64): int64,
(uint8, int8): int16,
(uint8, int16): int16,
(uint8, int32): int32,
(uint8, int64): int64,
(uint8, uint8): uint8,
(uint8, uint16): uint16,
(uint8, uint32): uint32,
(uint8, uint64): uint64,
(uint16, int8): int32,
(uint16, int16): int32,
(uint16, int32): int32,
(uint16, int64): int64,
(uint16, uint16): uint16,
(uint16, uint32): uint32,
(uint16, uint64): uint64,
(uint32, int8): int64,
(uint32, int16): int64,
(uint32, int32): int64,
(uint32, int64): int64,
(uint32, uint32): uint32,
(uint32, uint64): uint64,
(uint64, uint64): uint64,
(float16, float16): float16,
(float16, float32): float32,
(float16, float64): float64,
(float32, float32): float32,
(float32, float64): float64,
(float64, float64): float64,
}
# the extra promotion table follows numpy safe casting convention
# the following link discusses the different approaches to dtype promotions
# https://jax.readthedocs.io/en/latest/jep/9407-type-promotion.html
common_extra_promotion_table = {
(bool, int8): int8,
(bool, int16): int16,
(bool, int32): int32,
(bool, int64): int64,
(bool, uint8): uint8,
(bool, uint16): uint16,
(bool, uint32): uint32,
(bool, uint64): uint64,
(bool, float16): float16,
(bool, float32): float32,
(bool, float64): float64,
(bool, bfloat16): bfloat16,
(bool, complex64): complex64,
(bool, complex128): complex128,
(int8, float16): float16,
(int8, float32): float32,
(int8, float64): float64,
(int8, bfloat16): bfloat16,
(int8, complex64): complex64,
(int8, complex128): complex128,
(int16, float32): float32,
(int16, float64): float64,
(int16, complex64): complex64,
(int16, complex128): complex128,
(int32, float64): float64,
(int32, complex128): complex128,
(int64, float64): float64,
(int64, complex128): complex128,
(uint8, float16): float16,
(uint8, float32): float32,
(uint8, float64): float64,
(uint8, bfloat16): bfloat16,
(uint8, complex64): complex64,
(uint8, complex128): complex128,
(uint16, float32): float32,
(uint16, float64): float64,
(uint16, complex64): complex64,
(uint16, complex128): complex128,
(uint32, float64): float64,
(uint32, complex128): complex128,
(uint64, int8): float64,
(uint64, int16): float64,
(uint64, int32): float64,
(uint64, int64): float64,
(uint64, float64): float64,
(uint64, complex128): complex128,
(float16, bfloat16): float32,
(float16, complex64): complex64,
(float16, complex128): complex128,
(float32, complex64): complex64,
(float32, complex128): complex128,
(float64, complex64): complex128,
(float64, complex128): complex128,
(bfloat16, float16): float32,
(bfloat16, float32): float32,
(bfloat16, float64): float64,
(bfloat16, bfloat16): bfloat16,
(bfloat16, complex64): complex64,
(bfloat16, complex128): complex128,
(complex64, float64): complex128,
(complex64, complex64): complex64,
(complex64, complex128): complex128,
(complex128, complex128): complex128,
}
# Avoiding All Precision Loss (Numpy Approach)
precise_extra_promotion_table = {
(float16, int16): float32,
(float16, int32): float64,
(float16, int64): float64,
(float16, uint16): float32,
(float16, uint32): float64,
(float16, uint64): float64,
(float32, int32): float64,
(float32, int64): float64,
(float32, uint32): float64,
(float32, uint64): float64,
(bfloat16, int16): float32,
(bfloat16, int32): float64,
(bfloat16, int64): float64,
(bfloat16, uint16): float32,
(bfloat16, uint32): float64,
(bfloat16, uint64): float64,
(complex64, int32): complex128,
(complex64, int64): complex128,
(complex64, uint32): complex128,
(complex64, uint64): complex128,
}
extra_promotion_table = {
(float16, int16): float16,
(float16, int32): float16,
(float16, int64): float16,
(float16, uint16): float16,
(float16, uint32): float16,
(float16, uint64): float16,
(float32, int32): float32,
(float32, int64): float32,
(float32, uint32): float32,
(float32, uint64): float32,
(bfloat16, int16): bfloat16,
(bfloat16, int32): bfloat16,
(bfloat16, int64): bfloat16,
(bfloat16, uint16): bfloat16,
(bfloat16, uint32): bfloat16,
(bfloat16, uint64): bfloat16,
(complex64, int32): complex64,
(complex64, int64): complex64,
(complex64, uint32): complex64,
(complex64, uint64): complex64,
}
# TODO: change when it's not the default mode anymore
promotion_table = {
**array_api_promotion_table,
**common_extra_promotion_table,
**precise_extra_promotion_table,
}
# global parameter properties
GLOBAL_PROPS = [
"array_significant_figures",
"array_decimal_values",
"warning_level",
"nan_policy",
"array_mode",
"nestable_mode",
"inplace_mode",
"exception_trace_mode",
"show_func_wrapper_trace_mode",
"min_denominator",
"min_base",
"queue_timeout",
"tmp_dir",
"shape_array_mode",
"dynamic_backend",
"precise_mode",
"soft_device_mode",
"logging_mode",
"default_dtype",
"default_float_dtype",
"default_int_dtype",
"default_complex_dtype",
"default_uint_dtype",
"cython_wrappers_mode",
]
INTERNAL_FILENAMES = [
os.path.join("ivy", "compiler"),
os.path.join("ivy", "functional"),
os.path.join("ivy", "data_classes"),
os.path.join("ivy", "stateful"),
os.path.join("ivy", "utils"),
os.path.join("ivy_tests", "test_ivy"),
os.path.join("ivy", "func_wrapper.py"),
os.path.join("ivy", "__init__.py"),
]
def _is_from_internal(filename):
return builtins.any([fn in filename for fn in INTERNAL_FILENAMES])
class LoggingMode:
logging_modes = ["DEBUG", "INFO", "WARNING", "ERROR"]
logging_mode_stack = []
def __init__(self):
# Set up the initial logging mode
logging.basicConfig(level=logging.WARNING)
self.logging_mode_stack.append(logging.WARNING)
def set_logging_mode(self, mode):
"""Set the current logging mode for Ivy.
Possible modes are 'DEBUG', 'INFO', 'WARNING', 'ERROR'.
"""
assert (
mode in self.logging_modes
), "Invalid logging mode. Choose from: " + ", ".join(self.logging_modes)
# Update the logging level
logging.getLogger().setLevel(mode)
self.logging_mode_stack.append(mode)
def unset_logging_mode(self): # noqa: D209
"""Remove the most recently set logging mode, returning to the previous
one."""
if len(self.logging_mode_stack) > 1:
# Remove the current mode
self.logging_mode_stack.pop()
# Set the previous mode
logging.getLogger().setLevel(self.logging_mode_stack[-1])
class IvyWithGlobalProps(sys.modules[__name__].__class__):
def __setattr__(self, name, value, internal=False):
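        # only writes originating from files inside the ivy package itself
        # may be flagged as internal; all other code must use the matching
        # set_<name>() setter to modify entries listed in GLOBAL_PROPS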
previous_frame = inspect.currentframe().f_back
filename = inspect.getframeinfo(previous_frame)[0]
internal = internal and _is_from_internal(filename)
if not internal and name in GLOBAL_PROPS:
raise ivy.utils.exceptions.IvyException(
f"Property: {name} is read only! Please use the setter: set_{name}()"
" for setting its value!"
)
self.__dict__[name] = value
def __reduce__(self):
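        # support pickling of the module object: on unpickle, re-import the
        # module by name and re-apply this subclass to it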
def _get_module_and_replace_name(module_name: str):
module = importlib.import_module(module_name)
module.__class__ = self.__class__
return module
return (_get_module_and_replace_name, (self.__name__,))
if (
"ivy" in sys.modules
and sys.modules["ivy"].utils._importlib.IS_COMPILING_WITH_BACKEND
):
# Required for ivy.with_backend internal compilation
sys.modules["ivy"].utils._importlib.import_cache[
__name__
].__class__ = IvyWithGlobalProps
else:
sys.modules[__name__].__class__ = IvyWithGlobalProps
# check if all expected binaries are present
# in this else block to avoid raising the same warning again
# on using with_backend
check_for_binaries()
| ivy/ivy/__init__.py/0 | {
"file_path": "ivy/ivy/__init__.py",
"repo_id": "ivy",
"token_count": 18614
} | 5 |
# global
import abc
class _ArrayWithConversionsExperimental(abc.ABC):
pass
| ivy/ivy/data_classes/array/experimental/conversions.py/0 | {
"file_path": "ivy/ivy/data_classes/array/experimental/conversions.py",
"repo_id": "ivy",
"token_count": 27
} | 6 |
# global
import abc
from typing import Optional
# local
import ivy
class _ArrayWithSortingExperimental(abc.ABC):
def lexsort(
self: ivy.Array,
/,
*,
axis: int = -1,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.lexsort. This method simply
wraps the function, and so the docstring for ivy.lexsort also applies
to this method with minimal changes.
Parameters
----------
self
input array.
axis
axis of each key to be indirectly sorted.
By default, sort over the last axis of each key.
out
optional output array, for writing the result to.
Returns
-------
ret
array of integer indices with shape N, that sort the input array as keys.
Examples
--------
>>> a = [1,5,1,4,3,4,4] # First column
>>> b = [9,4,0,4,0,2,1] # Second column
>>> keys = ivy.asarray([b,a])
>>> keys.lexsort() # Sort by a, then by b
array([2, 0, 4, 6, 5, 3, 1])
"""
return ivy.lexsort(self._data, axis=axis, out=out)
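# A hedged note on key ordering (mirroring numpy.lexsort semantics): the last
# key passed is the primary sort key, so ivy.asarray([b, a]) in the example
# above sorts by ``a`` first and breaks ties with ``b``.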
| ivy/ivy/data_classes/array/experimental/sorting.py/0 | {
"file_path": "ivy/ivy/data_classes/array/experimental/sorting.py",
"repo_id": "ivy",
"token_count": 553
} | 7 |
# global
from typing import Optional, Union, Sequence
import abc
# local
import ivy
class _ArrayWithUtility(abc.ABC):
def all(
self: ivy.Array,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.all. This method simply
wraps the function, and so the docstring for ivy.all also applies to
this method with minimal changes.
Parameters
----------
self
input array.
axis
axis or axes along which to perform a logical AND reduction. By default, a
logical AND reduction must be performed over the entire array. If a tuple of
integers, logical AND reductions must be performed over multiple axes. A
valid ``axis`` must be an integer on the interval ``[-N, N)``, where ``N``
is the rank(number of dimensions) of ``self``. If an ``axis`` is specified
as a negative integer, the function must determine the axis along which to
perform a reduction by counting backward from the last dimension (where
``-1`` refers to the last dimension). If provided an invalid ``axis``, the
function must raise an exception. Default ``None``.
keepdims
If ``True``, the reduced axes (dimensions) must be included in the result as
singleton dimensions, and, accordingly, the result must be compatible with
the input array (see :ref:`broadcasting`). Otherwise, if ``False``, the
reduced axes(dimensions) must not be included in the result.
Default: ``False``.
out
optional output array, for writing the result to. It must have a shape that
the inputs broadcast to.
Returns
-------
ret
if a logical AND reduction was performed over the entire array, the returned
array must be a zero-dimensional array containing the test result;
otherwise, the returned array must be a non-zero-dimensional array
containing the test results. The returned array must have a data type of
``bool``.
Examples
--------
>>> x = ivy.array([0, 1, 2])
>>> y = x.all()
>>> print(y)
ivy.array(False)
>>> x = ivy.array([[[0, 1], [0, 0]], [[1, 2], [3, 4]]])
>>> y = x.all(axis=1)
>>> print(y)
ivy.array([[False, False],
[ True, True]])
"""
return ivy.all(self._data, axis=axis, keepdims=keepdims, out=out)
def any(
self: ivy.Array,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.any. This method simply
wraps the function, and so the docstring for ivy.any also applies to
this method with minimal changes.
Parameters
----------
self
input array.
axis
axis or axes along which to perform a logical OR reduction. By default, a
logical OR reduction must be performed over the entire array. If a tuple of
integers, logical OR reductions must be performed over multiple axes. A
valid ``axis`` must be an integer on the interval ``[-N, N)``, where ``N``
is the rank(number of dimensions) of ``self``. If an ``axis`` is specified
as a negative integer, the function must determine the axis along which to
perform a reduction by counting backward from the last dimension (where
``-1`` refers to the last dimension). If provided an invalid ``axis``, the
function must raise an exception. Default: ``None``.
keepdims
If ``True``, the reduced axes (dimensions) must be included in the result as
singleton dimensions, and, accordingly, the result must be compatible with
the input array (see :ref:`broadcasting`). Otherwise, if ``False``, the
reduced axes(dimensions) must not be included in the result.
Default: ``False``.
out
optional output array, for writing the result to. It must have a shape that
the inputs broadcast to.
Returns
-------
ret
if a logical OR reduction was performed over the entire array, the returned
array must be a zero-dimensional array containing the test result;
otherwise, the returned array must be a non-zero-dimensional array
containing the test results. The returned array must have a data type of
``bool``.
Examples
--------
>>> x = ivy.array([0, 1, 2])
>>> y = x.any()
>>> print(y)
ivy.array(True)
>>> x = ivy.array([[[0, 1], [0, 0]], [[1, 2], [3, 4]]])
>>> y = x.any(axis=2)
>>> print(y)
ivy.array([[ True, False],
[ True, True]])
"""
return ivy.any(self._data, axis=axis, keepdims=keepdims, out=out)
| ivy/ivy/data_classes/array/utility.py/0 | {
"file_path": "ivy/ivy/data_classes/array/utility.py",
"repo_id": "ivy",
"token_count": 2163
} | 8 |
from ivy.data_classes.container.base import ContainerBase
class _ContainerWithDeviceExperimental(ContainerBase):
pass
| ivy/ivy/data_classes/container/experimental/device.py/0 | {
"file_path": "ivy/ivy/data_classes/container/experimental/device.py",
"repo_id": "ivy",
"token_count": 33
} | 9 |
# global
from numbers import Number
from typing import Any, Union, List, Dict, Iterable, Optional, Callable
# local
from ivy.data_classes.container.base import ContainerBase
import ivy
# ToDo: implement all methods here as public instance methods
# noinspection PyMissingConstructor
class _ContainerWithGeneral(ContainerBase):
@staticmethod
def _static_is_native_array(
x: ivy.Container,
/,
*,
exclusive: Union[bool, ivy.Container] = False,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.is_native_array. This
method simply wraps the function, and so the docstring for
ivy.is_native_array also applies to this method with minimal changes.
Parameters
----------
x
The input to check
exclusive
Whether to check if the data type is exclusively an array, rather than a
variable or traced array.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
Boolean, whether or not x is a native array.
Examples
--------
>>> x = ivy.Container(a=ivy.array([1]), b=ivy.native_array([2, 3]))
>>> y = ivy.Container.static_is_native_array(x)
>>> print(y)
{
a: false,
b: true
}
"""
return ContainerBase.cont_multi_map_in_function(
"is_native_array",
x,
exclusive=exclusive,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def is_native_array(
self: ivy.Container,
/,
*,
exclusive: Union[bool, ivy.Container] = False,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.is_native_array. This
method simply wraps the function, and so the docstring for
ivy.is_native_array also applies to this method with minimal
changes.
Parameters
----------
self
The input to check
exclusive
Whether to check if the data type is exclusively an array, rather than a
variable or traced array.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
Boolean, whether or not x is a native array.
Examples
--------
>>> x = ivy.Container(a=ivy.array([1]), b=ivy.native_array([2, 3]))
>>> y = x.is_native_array()
>>> print(y)
{
a: False,
b: True
}
"""
return self._static_is_native_array(
self,
exclusive=exclusive,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_is_ivy_array(
x: ivy.Container,
/,
*,
exclusive: Union[bool, ivy.Container] = False,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.is_ivy_array. This method
simply wraps the function, and so the docstring for ivy.is_ivy_array
also applies to this method with minimal changes.
Parameters
----------
x
The input to check
exclusive
Whether to check if the data type is exclusively an array, rather than a
variable or traced array.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
Boolean, whether or not x is an ivy array.
Examples
--------
>>> x = ivy.Container(a=ivy.array([1]), b=ivy.native_array([2, 3]))
>>> y = ivy.Container.static_is_ivy_array(x)
>>> print(y)
{
a: true,
b: false
}
"""
return ContainerBase.cont_multi_map_in_function(
"is_ivy_array",
x,
exclusive=exclusive,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def is_ivy_array(
self: ivy.Container,
/,
*,
exclusive: Union[bool, ivy.Container] = False,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.is_native_array. This
method simply wraps the function, and so the docstring for
ivy.ivy.is_native_array also applies to this method with minimal
changes.
Parameters
----------
self
The input to check
exclusive
Whether to check if the data type is exclusively an array, rather than a
variable or traced array.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
Boolean, whether or not x is an ivy array.
Examples
--------
>>> x = ivy.Container(a=ivy.array([1]), b=ivy.native_array([2, 3]))
>>> y = x.is_ivy_array()
>>> print(y)
{
a: True,
b: False
}
"""
return self._static_is_ivy_array(
self,
exclusive=exclusive,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_is_array(
x: ivy.Container,
/,
*,
exclusive: Union[bool, ivy.Container] = False,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.is_array. This method
simply wraps the function, and so the docstring for ivy.is_array
also applies to this method with minimal changes.
Parameters
----------
x
The input to check
exclusive
Whether to check if the data type is exclusively an array, rather than a
variable or traced array.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
Boolean, whether or not x is an array.
Examples
--------
>>> x = ivy.Container(a=ivy.array([1]), b=ivy.native_array([2, 3]))
>>> y = ivy.Container.static_is_array(x)
>>> print(y)
{
a: true,
b: true
}
"""
return ContainerBase.cont_multi_map_in_function(
"is_array",
x,
exclusive=exclusive,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def is_array(
self: ivy.Container,
/,
*,
exclusive: Union[bool, ivy.Container] = False,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.is_array. This method
simply wraps the function, and so the docstring for ivy.is_array also
applies to this method with minimal changes.
Parameters
----------
self
The input to check
exclusive
Whether to check if the data type is exclusively an array, rather than a
variable or traced array.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
Boolean, whether or not x is an array.
Examples
--------
>>> x = ivy.Container(a=ivy.array([1]), b=ivy.native_array([2, 3]))
>>> y = x.is_array()
>>> print(y)
{
a: True,
b: True
}
"""
return self._static_is_array(
self,
exclusive=exclusive,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_clip_vector_norm(
x: Union[ivy.Container, ivy.Array, ivy.NativeArray],
max_norm: Union[float, ivy.Container],
/,
*,
p: Union[float, ivy.Container] = 2.0,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.clip_vector_norm. This
method simply wraps the function, and so the docstring for
ivy.clip_vector_norm also applies to this method with minimal changes.
Parameters
----------
x
input array
max_norm
float, the maximum value of the array norm.
p
optional float, the p-value for computing the p-norm.
Default is 2.
key_chains
The key-chains to apply or not apply the method to.
Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped.
Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output array, for writing the result to. It must
have a shape that the inputs broadcast to.
Returns
-------
ret
An array with the vector norm downscaled to the max norm if needed.
Examples
--------
With :class:`ivy.Container` static method:
>>> x = ivy.Container(a=ivy.array([0., 1., 2.]),b=ivy.array([3., 4., 5.]))
>>> y = ivy.Container.static_clip_vector_norm(x, 2.0)
>>> print(y)
{
a: ivy.array([0., 0.894, 1.79]),
b: ivy.array([0.849, 1.13, 1.41])
}
"""
return ContainerBase.cont_multi_map_in_function(
"clip_vector_norm",
x,
max_norm,
p=p,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def clip_vector_norm(
self: ivy.Container,
max_norm: Union[float, ivy.Container],
/,
*,
p: Union[float, ivy.Container] = 2.0,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.clip_vector_norm. This
method simply wraps the function, and so the docstring for
ivy.clip_vector_norm also applies to this method with minimal changes.
Parameters
----------
self
input array
max_norm
float, the maximum value of the array norm.
p
optional float, the p-value for computing the p-norm.
Default is 2.
key_chains
The key-chains to apply or not apply the method to.
Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped.
Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output array, for writing the result to. It must
have a shape that the inputs broadcast to.
Returns
-------
ret
An array with the vector norm downscaled to the max norm if needed.
Examples
--------
With :class:`ivy.Container` instance method:
>>> x = ivy.Container(a=ivy.array([0., 1., 2.]),
... b=ivy.array([3., 4., 5.]))
>>> y = x.clip_vector_norm(2.0, p=1.0)
>>> print(y)
{
a: ivy.array([0., 0.667, 1.33]),
b: ivy.array([0.5, 0.667, 0.833])
}
"""
return self._static_clip_vector_norm(
self,
max_norm,
p=p,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def _static_inplace_update(
x: Union[ivy.Container, ivy.Array, ivy.NativeArray],
val: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
ensure_in_backend: Union[bool, ivy.Container] = False,
keep_input_dtype: Union[bool, ivy.Container] = False,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.inplace_update. This
method simply wraps the function, and so the docstring for
ivy.inplace_update also applies to this method with minimal changes.
Parameters
----------
x
input container to be updated inplace
val
value to update the input container with
ensure_in_backend
Whether to ensure that the `ivy.NativeArray` is also inplace updated.
In cases where it should be, backends which do not natively support inplace
updates will raise an exception.
keep_input_dtype
Whether or not to preserve `x` data type after the update, otherwise `val`
data type will be applied. Defaults to False.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output array, for writing the result to. It must
have a shape that the inputs broadcast to.
Returns
-------
ret
The container ``x``, updated in place with the values from ``val``.
"""
# inplace update the leaves
cont = x
cont = ContainerBase.cont_multi_map_in_function(
"inplace_update",
cont,
val,
ensure_in_backend=ensure_in_backend,
keep_input_dtype=keep_input_dtype,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
# inplace update the container
x.cont_inplace_update(cont)
return x
def inplace_update(
self: ivy.Container,
val: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
ensure_in_backend: Union[bool, ivy.Container] = False,
keep_input_dtype: Union[bool, ivy.Container] = False,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.inplace_update. This
method simply wraps the function, and so the docstring for
ivy.inplace_update also applies to this method with minimal changes.
Parameters
----------
self
input container to be updated inplace
val
value to update the input container with
ensure_in_backend
Whether to ensure that the `ivy.NativeArray` is also inplace updated.
In cases where it should be, backends which do not natively support inplace
updates will raise an exception.
keep_input_dtype
Whether or not to preserve the input container's data type after the
update, otherwise `val` data type will be applied. Defaults to False.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output array, for writing the result to. It must
have a shape that the inputs broadcast to.
Returns
-------
ret
The container, updated in place with the values from ``val``.
Examples
--------
With :class:`ivy.Container` input and default backend set as `numpy`:
>>> x = ivy.Container(a=ivy.array([5, 6]), b=ivy.array([7, 8]))
>>> y = ivy.Container(a=ivy.array([1]), b=ivy.array([2]))
>>> x.inplace_update(y)
>>> print(x)
{
a: ivy.array([1]),
b: ivy.array([2])
}
"""
return self._static_inplace_update(
self,
val,
ensure_in_backend=ensure_in_backend,
keep_input_dtype=keep_input_dtype,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def _static_inplace_decrement(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
val: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.inplace_decrement. This
method simply wraps the function, and so the docstring for
ivy.inplace_decrement also applies to this method with minimal changes.
Parameters
----------
x
The input array to be decremented by the defined value.
val
The value of decrement.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains will
be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied. Default
is False.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
The array following an in-place decrement.
Examples
--------
Decrement by a value
>>> x = ivy.Container(a=ivy.array([0.5, -5., 30.]),b=ivy.array([0., -25., 50.]))
>>> y = ivy.inplace_decrement(x, 1.5)
>>> print(y)
{
a: ivy.array([-1., -6.5, 28.5]),
b: ivy.array([-1.5, -26.5, 48.5])
}
Decrement by a Container
>>> x = ivy.Container(a=ivy.array([0., 15., 30.]), b=ivy.array([0., 25., 50.]))
>>> y = ivy.Container(a=ivy.array([0., 15., 30.]), b=ivy.array([0., 25., 50.]))
>>> z = ivy.inplace_decrement(x, y)
>>> print(z)
{
a: ivy.array([0., 0., 0.]),
b: ivy.array([0., 0., 0.])
}
>>> x = ivy.Container(a=ivy.array([3., 7., 10.]), b=ivy.array([0., 75., 5.5]))
>>> y = ivy.Container(a=ivy.array([2., 5.5, 7.]), b=ivy.array([0., 25., 2.]))
>>> z = ivy.inplace_decrement(x, y)
>>> print(z)
{
a: ivy.array([1., 1.5, 3.]),
b: ivy.array([0., 50., 3.5])
}
"""
return ContainerBase.cont_multi_map_in_function(
"inplace_decrement",
x,
val,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def inplace_decrement(
self: ivy.Container,
val: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.inplace_decrement. This
method simply wraps the function, and so the docstring for
ivy.inplace_decrement also applies to this method with minimal changes.
Parameters
----------
self
Input container to apply an in-place decrement.
val
The value of decrement.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
A container with the array following the in-place decrement.
Examples
--------
Using :class:`ivy.Container` instance method:
>>> x = ivy.Container(a=ivy.array([-6.7, 2.4, -8.5]),
... b=ivy.array([1.5, -0.3, 0]),
... c=ivy.array([-4.7, -5.4, 7.5]))
>>> y = x.inplace_decrement(2)
>>> print(y)
{
a: ivy.array([-8.7, 0.4, -10.5]),
b: ivy.array([-0.5, -2.3, -2]),
c: ivy.array([-6.7, -7.4, 5.5])
}
"""
return self._static_inplace_decrement(
self,
val,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_inplace_increment(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
val: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.inplace_increment. This
method simply wraps the function, and so the docstring for
ivy.inplace_increment also applies to this method with minimal changes.
Parameters
----------
x
The input array to be incremented by the defined value.
val
The value of increment.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
The array following an in-place increment.
Examples
--------
Increment by a value
>>> x = ivy.Container(a=ivy.array([0.5, -5., 30.]),b=ivy.array([0., -25., 50.]))
>>> y = ivy.inplace_increment(x, 1.5)
>>> print(y)
{
a: ivy.array([2., -3.5, 31.5]),
b: ivy.array([1.5, -23.5, 51.5])
}
Increment by a Container
>>> x = ivy.Container(a=ivy.array([0., 15., 30.]), b=ivy.array([0., 25., 50.]))
>>> y = ivy.Container(a=ivy.array([0., 15., 30.]), b=ivy.array([0., 25., 50.]))
>>> z = ivy.inplace_increment(x, y)
>>> print(z)
{
a: ivy.array([0., 30., 60.]),
b: ivy.array([0., 50., 100.])
}
>>> x = ivy.Container(a=ivy.array([3., 7., 10.]), b=ivy.array([0., 75., 5.5]))
>>> y = ivy.Container(a=ivy.array([2., 5.5, 7.]), b=ivy.array([0., 25., 2.]))
>>> z = ivy.inplace_increment(x, y)
>>> print(z)
{
a: ivy.array([5., 12.5, 17.]),
b: ivy.array([0., 100., 7.5])
}
"""
return ContainerBase.cont_multi_map_in_function(
"inplace_increment",
x,
val,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def inplace_increment(
self: ivy.Container,
val: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.inplace_increment. This
method simply wraps the function, and so the docstring for
ivy.inplace_increment also applies to this method with minimal changes.
Parameters
----------
self
Input container to apply an in-place increment.
val
The value of increment.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
A container with the array following the in-place increment.
Examples
--------
Using :class:`ivy.Container` instance method:
>>> x = ivy.Container(a=ivy.array([-6.7, 2.4, -8.5]),
... b=ivy.array([1.5, -0.3, 0]),
... c=ivy.array([-4.7, -5.4, 7.5]))
>>> y = x.inplace_increment(2)
>>> print(y)
{
a: ivy.array([-4.7, 4.4, -6.5]),
b: ivy.array([3.5, 1.7, 2.]),
c: ivy.array([-2.7, -3.4, 9.5])
}
"""
return self._static_inplace_increment(
self,
val,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_assert_supports_inplace(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.assert_supports_inplace.
This method simply wraps the function, and so the docstring for
ivy.assert_supports_inplace also applies to this method with minimal
changes.
Parameters
----------
x
input container to check for in-place support.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
True if the leaves support in-place operations, otherwise an
exception is raised.
"""
return ContainerBase.cont_multi_map_in_function(
"assert_supports_inplace",
x,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def assert_supports_inplace(
self: ivy.Container,
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of
ivy.assert_supports_inplace. This method simply wraps the function, and
so the docstring for ivy.assert_supports_inplace also applies to this
method with minimal changes.
Parameters
----------
self
input container to check for in-place support.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
An ivy.Container instance of True bool values if the nodes of the
container support in-place operations, otherwise an
IvyBackendException is raised.
Examples
--------
>>> ivy.set_backend("numpy")
>>> x = ivy.Container(a=ivy.array([5, 6]), b=ivy.array([7, 8]))
>>> print(x.assert_supports_inplace())
{
a: True,
b: True
}
"""
return self._static_assert_supports_inplace(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_all_equal(
x1: ivy.Container,
*xs: Union[Iterable[Any], ivy.Container],
equality_matrix: Union[bool, ivy.Container] = False,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.all_equal. This method
simply wraps the function, and so the docstring for ivy.all_equal also
applies to this method with minimal changes.
Parameters
----------
x1
input container.
xs
arrays or containers to be compared to ``x1``.
equality_matrix
Whether to return a matrix of equalities comparing each input with every
other. Default is ``False``.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
Boolean, whether or not the inputs are equal, or matrix container of
booleans if equality_matrix=True is set.
Examples
--------
With one :class:`ivy.Container` input:
>>> x1 = ivy.Container(a=ivy.array([1, 0, 1, 1]), b=ivy.array([1, -1, 0, 0]))
>>> x2 = ivy.array([1, 0, 1, 1])
>>> y = ivy.Container.static_all_equal(x1, x2, equality_matrix=False)
>>> print(y)
{
a: ivy.array([True, True, True, True]),
b: ivy.array([True, False, False, False])
}
With multiple :class:`ivy.Container` input:
>>> x1 = ivy.Container(a=ivy.array([1, 0, 1, 1]),
... b=ivy.native_array([1, 0, 0, 1]))
>>> x2 = ivy.Container(a=ivy.native_array([1, 0, 1, 1]),
... b=ivy.array([1, 0, -1, -1]))
>>> y = ivy.Container.static_all_equal(x1, x2, equality_matrix=False)
>>> print(y)
{
a: ivy.array([True, True, True, True]),
b: ivy.array([True, True, False, False])
}
"""
return ContainerBase.cont_multi_map_in_function(
"all_equal",
x1,
*xs,
equality_matrix=equality_matrix,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def all_equal(
self: ivy.Container,
*xs: Union[Iterable[Any], ivy.Container],
equality_matrix: Union[bool, ivy.Container] = False,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.all_equal. This method
simply wraps the function, and so the docstring for ivy.all_equal also
applies to this method with minimal changes.
Parameters
----------
self
input container.
xs
arrays or containers to be compared to ``self``.
equality_matrix
Whether to return a matrix of equalities comparing each input with every
other. Default is ``False``.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
Boolean, whether or not the inputs are equal, or matrix container of
booleans if equality_matrix=True is set.
Examples
--------
With one :class:`ivy.Container` instance:
>>> x1 = ivy.Container(a=ivy.array([1, 0, 1, 1]), b=ivy.array([1, -1, 0, 0]))
>>> x2 = ivy.array([1, 0, 1, 1])
>>> y = x1.all_equal(x2, equality_matrix=False)
>>> print(y)
{
a: True,
b: False
}
With multiple :class:`ivy.Container` instances:
>>> x1 = ivy.Container(a=ivy.native_array([1, 0, 0]),
... b=ivy.array([1, 2, 3]))
>>> x2 = ivy.Container(a=ivy.native_array([1, 0, 1]),
... b=ivy.array([1, 2, 3]))
>>> y = x1.all_equal(x2, equality_matrix=False)
>>> print(y)
{
a: False,
b: True
}
"""
return self._static_all_equal(
self,
*xs,
equality_matrix=equality_matrix,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_fourier_encode(
x: Union[ivy.Container, ivy.Array, ivy.NativeArray],
max_freq: Union[float, ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
num_bands: Union[int, ivy.Container] = 4,
linear: Union[bool, ivy.Container] = False,
flatten: Union[bool, ivy.Container] = False,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.fourier_encode. This
method simply wraps the function, and so the docstring for
ivy.fourier_encode also applies to this method with minimal changes.
Parameters
----------
x
Input container to apply fourier_encode.
max_freq
The maximum frequency of the encoding.
num_bands
The number of frequency bands for the encoding. Default is 4.
linear
Whether to space the frequency bands linearly as opposed to geometrically.
Default is ``False``.
flatten
Whether to flatten the position dimension into the batch dimension.
Default is ``False``.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
New container with the final dimension expanded of arrays at its leaves,
and the encodings stored in this channel.
Examples
--------
>>> x = ivy.Container(a = ivy.array([1,2]),
... b = ivy.array([3,4]))
>>> y = 1.5
>>> z = ivy.Container.static_fourier_encode(x, y)
>>> print(z)
{
a: (<class ivy.array.array.Array> shape=[2, 9]),
b: (<class ivy.array.array.Array> shape=[2, 9])
}
>>> x = ivy.Container(a = ivy.array([3,10]),
... b = ivy.array([4,8]))
>>> y = 2.5
>>> z = ivy.Container.static_fourier_encode(x, y, num_bands=3)
>>> print(z)
{
a: ivy.array([[ 3.0000000e+00, 3.6739404e-16, 3.6739404e-16,
3.6739404e-16, -1.0000000e+00, -1.0000000e+00, -1.0000000e+00],
[ 1.0000000e+01, -1.2246468e-15, -1.2246468e-15, -1.2246468e-15,
1.0000000e+00, 1.0000000e+00, 1.0000000e+00]]),
b: ivy.array([[ 4.00000000e+00, -4.89858720e-16, -4.89858720e-16,
-4.89858720e-16, 1.00000000e+00, 1.00000000e+00, 1.00000000e+00],
[ 8.00000000e+00, -9.79717439e-16, -9.79717439e-16, -9.79717439e-16,
1.00000000e+00, 1.00000000e+00, 1.00000000e+00]])
}
"""
return ContainerBase.cont_multi_map_in_function(
"fourier_encode",
x,
max_freq,
num_bands=num_bands,
linear=linear,
concat=True,
flatten=flatten,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def fourier_encode(
self: ivy.Container,
max_freq: Union[float, ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
num_bands: Union[int, ivy.Container] = 4,
linear: Union[bool, ivy.Container] = False,
flatten: Union[bool, ivy.Container] = False,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.fourier_encode. This
method simply wraps the function, and so the docstring for
ivy.fourier_encode also applies to this method with minimal changes.
Parameters
----------
self
Input container to apply fourier_encode at leaves.
max_freq
The maximum frequency of the encoding.
num_bands
The number of frequency bands for the encoding. Default is 4.
linear
Whether to space the frequency bands linearly as opposed to geometrically.
Default is ``False``.
flatten
Whether to flatten the position dimension into the batch dimension.
Default is ``False``.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
New container with the final dimension expanded of arrays at its leaves,
and the encodings stored in this channel.
Examples
--------
>>> x = ivy.Container(a = ivy.array([1,2]),
... b = ivy.array([3,4]))
>>> y = 1.5
>>> z = x.fourier_encode(y)
>>> print(z)
{
a: (<class ivy.data_classes.array.array.Array> shape=[2, 9]),
b: (<class ivy.data_classes.array.array.Array> shape=[2, 9])
}
>>> x = ivy.Container(a = ivy.array([3,10]),
... b = ivy.array([4,8]))
>>> y = 2.5
>>> z = x.fourier_encode(y, num_bands=3)
>>> print(z)
{
a: (<class ivy.data_classes.array.array.Array> shape=[2, 7]),
b: (<class ivy.data_classes.array.array.Array> shape=[2, 7])
}
"""
return self._static_fourier_encode(
self,
max_freq,
num_bands=num_bands,
linear=linear,
flatten=flatten,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
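# A hedged note on the example shapes above: with ``concat=True`` (as passed
# by the static variant above), the final dimension appears to expand to
# 2 * num_bands + 1 (the raw value plus sin and cos terms per band), matching
# shape=[2, 9] for num_bands=4 and shape=[2, 7] for num_bands=3.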
@staticmethod
def _static_gather(
params: Union[ivy.Container, ivy.Array, ivy.NativeArray],
indices: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
axis: Union[int, ivy.Container] = -1,
batch_dims: Union[int, ivy.Container] = 0,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.gather. This method
simply wraps the function, and so the docstring for ivy.gather also
applies to this method with minimal changes.
Parameters
----------
params
The container from which to gather values.
indices
The container or array which indicates the indices that will be
gathered along the specified axis.
axis
The axis from which the indices will be gathered. Default is ``-1``.
batch_dims
optional int, lets you gather different items from each element of a batch.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains will
be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied. Default
is False.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
Optional array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
New container with the values gathered at the specified indices
along the specified axis.
Examples
--------
With :class:`ivy.Container` input:
>>> x = ivy.Container(a = ivy.array([0., 1., 2.]),
... b = ivy.array([4., 5., 6.]))
>>> y = ivy.Container(a = ivy.array([0, 1]),
... b = ivy.array([1, 2]))
>>> print(ivy.Container.static_gather(x, y))
{
a: ivy.array([0., 1.]),
b: ivy.array([5., 6.])
}
With a mix of :class:`ivy.Array` and :class:`ivy.Container` inputs:
>>> x = ivy.Container(a = ivy.array([0., 1., 2.]),
... b = ivy.array([4., 5., 6.]))
>>> y = ivy.array([0, 1])
>>> z = ivy.Container.static_gather(x, y)
>>> print(z)
{
a: ivy.array([0., 1.]),
b: ivy.array([4., 5.])
}
"""
return ContainerBase.cont_multi_map_in_function(
"gather",
params,
indices,
axis=axis,
batch_dims=batch_dims,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def gather(
self: ivy.Container,
indices: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
axis: Union[int, ivy.Container] = -1,
batch_dims: Union[int, ivy.Container] = 0,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.gather. This method
simply wraps the function, and so the docstring for ivy.gather also
applies to this method with minimal changes.
Parameters
----------
self
The container from which to gather values.
indices
The container or array which indicates the indices that will be
gathered along the specified axis.
axis
The axis from which the indices will be gathered. Default is ``-1``.
batch_dims
optional int, lets you gather different items from each element of a batch.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples). Default is
False.
out
Optional array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
New container with the values gathered at the specified indices
along the specified axis.
Examples
--------
>>> x = ivy.Container(a = ivy.array([0., 1., 2.]),
... b = ivy.array([4., 5., 6.]))
>>> y = ivy.Container(a = ivy.array([0, 1]),
... b = ivy.array([1, 2]))
>>> z = x.gather(y)
>>> print(z)
{
a: ivy.array([0., 1.]),
b: ivy.array([5., 6.])
}
"""
return self._static_gather(
self,
indices,
axis=axis,
batch_dims=batch_dims,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
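# A hedged illustration of ``batch_dims`` (assuming semantics analogous to
# tf.gather; values are illustrative, not verified against every backend):
#   params = ivy.array([[1, 2, 3], [4, 5, 6]])  # shape (2, 3)
#   indices = ivy.array([[0], [2]])  # shape (2, 1)
#   ivy.gather(params, indices, axis=1, batch_dims=1)
#   # -> ivy.array([[1], [6]]), one element gathered per batch row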
@staticmethod
def _static_has_nans(
x: ivy.Container,
/,
*,
include_infs: Union[bool, ivy.Container] = True,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""
Determine whether arrays in the container contain any nans, as well as infs or
-infs if specified.
Parameters
----------
x
The container to check for nans.
include_infs
Whether to include infs and -infs in the check. Default is True.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
Whether the container has any nans, applied either leafwise or across the
entire container.
Examples
--------
>>> x = ivy.Container(a=ivy.array([1, 2]), b=ivy.array([float('nan'), 2]))
>>> y = ivy.Container.static_has_nans(x)
>>> print(y)
{
a: false,
b: true
}
"""
return ContainerBase.cont_multi_map_in_function(
"has_nans",
x,
include_infs=include_infs,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def has_nans(
self: ivy.Container,
/,
*,
include_infs: Union[bool, ivy.Container] = True,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""
Determine whether arrays in the container contain any nans, as well as infs or
-infs if specified.
Parameters
----------
self
The container to check for nans.
include_infs
Whether to include infs and -infs in the check. Default is True.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
Whether the container has any nans, applied across the entire container.
Examples
--------
>>> x = ivy.Container(a=ivy.array([1, 2]), b=ivy.array([float('nan'), 2]))
>>> y = x.has_nans()
>>> print(y)
{
a: False,
b: True
}
"""
return self._static_has_nans(
self,
include_infs=include_infs,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_scatter_nd(
indices: Union[ivy.Array, ivy.NativeArray, ivy.Container],
updates: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
shape: Optional[Union[ivy.Array, ivy.NativeArray, ivy.Container]] = None,
*,
reduction: Union[str, ivy.Container] = "sum",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.scatter_nd. This method
simply wraps the function, and so the docstring for ivy.scatter_nd also
applies to this method with minimal changes.
Parameters
----------
indices
Index array or container.
updates
values to update input tensor with
shape
The shape of the result. Default is ``None``, in which case the ``out``
argument must be provided.
reduction
The reduction method for the scatter, one of 'sum', 'min', 'max'
or 'replace'
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
New container of given shape, with the values updated at the indices.
Examples
--------
scatter into an empty array
>>> indices = ivy.Container(a=ivy.array([[5],[6],[7]]),
... b=ivy.array([[2],[3],[4]]))
>>> updates = ivy.Container(a=ivy.array([50, 60, 70]),
... b=ivy.array([20, 30, 40]))
>>> shape = ivy.Container(a=ivy.array([10]),
... b=ivy.array([10]))
>>> z = ivy.Container.static_scatter_nd(indices, updates, shape=shape)
>>> print(z)
{
a: ivy.array([0, 0, 0, 0, 0, 50, 60, 70, 0, 0]),
b: ivy.array([0, 0, 20, 30, 40, 0, 0, 0, 0, 0])
}
scatter into a container
>>> indices = ivy.Container(a=ivy.array([[5],[6],[7]]),
... b=ivy.array([[2],[3],[4]]))
>>> updates = ivy.Container(a=ivy.array([50, 60, 70]),
... b=ivy.array([20, 30, 40]))
>>> z = ivy.Container(a=ivy.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]),
... b=ivy.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))
>>> ivy.Container.static_scatter_nd(indices, updates,
... reduction='replace', out=z)
>>> print(z)
{
a: ivy.array([1, 2, 3, 4, 5, 50, 60, 70, 9, 10]),
b: ivy.array([1, 2, 20, 30, 40, 6, 7, 8, 9, 10])
}
"""
return ContainerBase.cont_multi_map_in_function(
"scatter_nd",
indices,
updates,
shape=shape,
reduction=reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def scatter_nd(
self: ivy.Container,
updates: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
shape: Optional[Union[ivy.Array, ivy.NativeArray, ivy.Container]] = None,
*,
reduction: Union[str, ivy.Container] = "sum",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.scatter_nd. This method
simply wraps the function, and so the docstring for ivy.scatter_nd also
applies to this method with minimal changes.
Parameters
----------
self
Index array or container.
updates
values to update input tensor with
shape
The shape of the result. Default is ``None``, in which case the ``out``
argument must be provided.
reduction
The reduction method for the scatter, one of 'sum', 'min', 'max'
or 'replace'
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
New container of given shape, with the values updated at the indices.
Examples
--------
scatter into an empty container
>>> indices = ivy.Container(a=ivy.array([[4],[3],[6]]),
... b=ivy.array([[5],[1],[2]]))
>>> updates = ivy.Container(a=ivy.array([100, 200, 200]),
... b=ivy.array([20, 30, 40]))
>>> shape = ivy.Container(a=ivy.array([10]),
... b=ivy.array([10]))
>>> z = indices.scatter_nd(updates, shape=shape)
>>> print(z)
{
a: ivy.array([0, 0, 0, 200, 100, 0, 200, 0, 0, 0]),
b: ivy.array([0, 30, 40, 0, 0, 20, 0, 0, 0, 0])
}
scatter into a container
>>> indices = ivy.Container(a=ivy.array([[5],[6],[7]]),
... b=ivy.array([[2],[3],[4]]))
>>> updates = ivy.Container(a=ivy.array([50, 60, 70]),
... b=ivy.array([20, 30, 40]))
>>> z = ivy.Container(a=ivy.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]),
... b=ivy.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))
>>> indices.scatter_nd(updates, reduction='replace', out=z)
>>> print(z)
{
a: ivy.array([1, 2, 3, 4, 5, 50, 60, 70, 9, 10]),
b: ivy.array([1, 2, 20, 30, 40, 6, 7, 8, 9, 10])
}
"""
return self._static_scatter_nd(
self,
updates,
shape=shape,
reduction=reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def _static_scatter_flat(
indices: Union[ivy.Array, ivy.NativeArray, ivy.Container],
updates: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
size: Optional[Union[int, ivy.Container]] = None,
reduction: Union[str, ivy.Container] = "sum",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.scatter_flat. This method
simply wraps the function, and so the docstring for ivy.scatter_flat
also applies to this method with minimal changes.
Parameters
----------
indices
Index array or container.
updates
values to update input tensor with
size
The size of the result. Default is ``None``, in which case the ``out``
argument must be provided.
reduction
The reduction method for the scatter, one of 'sum', 'min', 'max'
or 'replace'
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
New container of given shape, with the values updated at the indices.
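Examples
--------
A minimal sketch with hypothetical values; the indices are unique, so the
result is the same under any reduction:
>>> indices = ivy.Container(a=ivy.array([0, 2, 1, 3]))
>>> updates = ivy.Container(a=ivy.array([10, 20, 30, 40]))
>>> z = ivy.Container._static_scatter_flat(indices, updates, size=4)
>>> print(z)
{
    a: ivy.array([10, 30, 20, 40])
}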
"""
return ContainerBase.cont_multi_map_in_function(
"scatter_flat",
indices,
updates,
size=size,
reduction=reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def scatter_flat(
self: ivy.Container,
updates: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
size: Optional[Union[int, ivy.Container]] = None,
reduction: Union[str, ivy.Container] = "sum",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.scatter_flat. This
method simply wraps the function, and so the docstring for
ivy.scatter_flat also applies to this method with minimal changes.
Parameters
----------
self
Index array or container.
updates
values to update input tensor with
size
The size of the result. Default is ``None``, in which case the ``out``
argument must be provided.
reduction
The reduction method for the scatter, one of 'sum', 'min', 'max'
or 'replace'
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
New container of given shape, with the values updated at the indices.
Examples
--------
With :class:`ivy.Container` input:
>>> indices = ivy.Container(a=ivy.array([1, 0, 1, 0, 2, 2, 3, 3]),
... b=ivy.array([0, 0, 1, 0, 2, 2, 3, 3]))
>>> updates = ivy.Container(a=ivy.array([9, 2, 0, 2, 3, 2, 1, 8]),
... b=ivy.array([5, 1, 7, 2, 3, 2, 1, 3]))
>>> size = 8
>>> print(indices.scatter_flat(updates, size=size))
{
a: ivy.array([2, 0, 2, 8, 0, 0, 0, 0]),
b: ivy.array([2, 7, 2, 3, 0, 0, 0, 0])
}
"""
return self._static_scatter_flat(
self,
updates,
size=size,
reduction=reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def _static_gather_nd(
params: Union[ivy.Container, ivy.Array, ivy.NativeArray],
indices: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
batch_dims: Union[int, ivy.Container] = 0,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""Gather slices from all container params into a arrays with shape
specified by indices.
Parameters
----------
params
The container from which to gather values.
indices
Index array.
batch_dims
optional int, lets you gather different items from each element of a batch.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains will
be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied. Default
is False.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
Container object with all sub-array dimensions gathered.
Examples
--------
>>> x = ivy.Container(a=ivy.array([[0., 10., 20.],[30.,40.,50.]]),
... b=ivy.array([[0., 100., 200.],[300.,400.,500.]]))
>>> y = ivy.Container(a=ivy.array([1,0]),
... b=ivy.array([0]))
>>> print(ivy.Container._static_gather_nd(x, y))
{
a: ivy.array(30.),
b: ivy.array([0., 100., 200.])
}
"""
return ContainerBase.cont_multi_map_in_function(
"gather_nd",
params,
indices,
batch_dims=batch_dims,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def gather_nd(
self: ivy.Container,
indices: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
batch_dims: Union[int, ivy.Container] = 0,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.gather_nd. This method
simply wraps the function, and so the docstring for ivy.gather_nd also
applies to this method with minimal changes.
Parameters
----------
self
The container from which to gather values.
indices
Index array or container.
batch_dims
optional int, lets you gather different items from each element of a batch.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
New container of given shape, with the values gathered at the indices.
Examples
--------
>>> x = ivy.Container(a=ivy.array([[[0., 10.], [20.,30.]],
... [[40.,50.], [60.,70.]]]),
... b=ivy.array([[[0., 100.], [200.,300.]],
... [[400.,500.],[600.,700.]]]))
>>> y = ivy.Container(a=ivy.array([1,0]),
... b=ivy.array([0]))
>>> z = x.gather_nd(y)
>>> print(z)
{
a: ivy.array([40., 50.]),
b: ivy.array([[0., 100.],
[200., 300.]])
}
"""
return self._static_gather_nd(
self,
indices,
batch_dims=batch_dims,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def _static_einops_reduce(
x: ivy.Container,
pattern: Union[str, ivy.Container],
reduction: Union[str, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
**axes_lengths: Union[Dict[str, int], ivy.Container],
) -> ivy.Container:
"""Perform einops reduce operation on each sub array in the container.
Parameters
----------
x
input container.
pattern
Reduction pattern.
reduction
One of the available reductions ('min', 'max', 'sum', 'mean', 'prod'), or
a callable.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains will
be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied. Default
is False.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
axes_lengths
Any additional specifications for dimensions.
out
optional array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ivy.Container with each array having einops.reduce applied.
Examples
--------
>>> x = ivy.Container(a=ivy.array([[-4.47, 0.93, -3.34],
... [3.66, 24.29, 3.64]]),
... b=ivy.array([[4.96, 1.52, -10.67],
... [4.36, 13.96, 0.3]]))
>>> reduced = ivy.Container._static_einops_reduce(x, 'a b -> a', 'mean')
>>> print(reduced)
{
a: ivy.array([-2.29333329, 10.53000069]),
b: ivy.array([-1.39666676, 6.20666695])
}
"""
return ContainerBase.cont_multi_map_in_function(
"einops_reduce",
x,
pattern,
reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
**axes_lengths,
)
def einops_reduce(
self: ivy.Container,
pattern: Union[str, ivy.Container],
reduction: Union[str, Callable, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
**axes_lengths: Union[Dict[str, int], ivy.Container],
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.einops_reduce. This
method simply wraps the function, and so the docstring for
ivy.einops_reduce also applies to this method with minimal changes.
Parameters
----------
self
Input container to be reduced.
pattern
Reduction pattern.
reduction
One of the available reductions ('min', 'max', 'sum', 'mean', 'prod'), or
a callable.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains will
be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied. Default
is False.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to. It must have a
shape that the inputs broadcast to.
axes_lengths
Any additional specifications for dimensions.
Returns
-------
ret
New container with einops.reduce having been applied.
Examples
--------
>>> x = ivy.Container(a=ivy.array([[[5, 4, 3],
... [11, 2, 9]],
... [[3, 5, 7],
... [9, 7, 1]]]),
... b=ivy.array([[[9,7,6],
... [5,2,1]],
... [[4,1,2],
... [2,3,6]],
... [[1, 9, 6],
... [0, 2, 1]]]))
>>> reduced = x.einops_reduce('a b c -> a b', 'sum')
>>> print(reduced)
{
a: ivy.array([[12, 22],
[15, 17]]),
b: ivy.array([[22, 8],
[7, 11],
[16, 3]])
}
"""
return self._static_einops_reduce(
self,
pattern,
reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
**axes_lengths,
)
@staticmethod
def _static_einops_repeat(
x: ivy.Container,
pattern: Union[str, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
**axes_lengths: Union[Dict[str, int], ivy.Container],
) -> ivy.Container:
"""Perform einops repeat operation on each sub array in the container.
Parameters
----------
x
input container.
pattern
Rearrangement pattern.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains will
be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied. Default
is False.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
axes_lengths
Any additional specifications for dimensions.
out
optional output container, for writing the result to. It must have a
shape that the inputs broadcast to.
Returns
-------
ivy.Container with each array having einops.repeat applied.
Examples
--------
>>> x = ivy.Container(a=ivy.array([[30, 40], [50, 75]]),
... b=ivy.array([[1, 2], [4, 5]]))
>>> repeated = ivy.Container._static_einops_repeat(
... x, 'h w -> (tile h) w', tile=2)
>>> print(repeated)
{
a: ivy.array([[30, 40],
[50, 75],
[30, 40],
[50, 75]]),
b: ivy.array([[1, 2],
[4, 5],
[1, 2],
[4, 5]])
}
"""
return ContainerBase.cont_multi_map_in_function(
"einops_repeat",
x,
pattern,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
**axes_lengths,
)
def einops_repeat(
self: ivy.Container,
pattern: Union[str, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
**axes_lengths: Union[Dict[str, int], ivy.Container],
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.einops_repeat. This
method simply wraps the function, and so the docstring for
ivy.einops_repeat also applies to this method with minimal changes.
Parameters
----------
self
Input array or container to be repeated.
pattern
Rearrangement pattern.
axes_lengths
Any additional specifications for dimensions.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to. It must have a
shape that the inputs broadcast to.
Returns
-------
ret
New container with einops.repeat having been applied.
Examples
--------
>>> x = ivy.Container(a=ivy.array([[30, 40], [50, 75]]),
... b=ivy.array([[1, 2], [4, 5]]))
>>> repeated = x.einops_repeat('h w -> h (w tile)', tile=2)
>>> print(repeated)
{
a: ivy.array([[30, 30, 40, 40],
[50, 50, 75, 75]]),
b: ivy.array([[1, 1, 2, 2],
[4, 4, 5, 5]])
}
"""
return self._static_einops_repeat(
self,
pattern,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
**axes_lengths,
)
@staticmethod
def _static_value_is_nan(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
include_infs: Union[bool, ivy.Container] = True,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.value_is_nan. This method
simply wraps the function, and so the docstring for ivy.value_is_nan
also applies to this method with minimal changes.
Parameters
----------
x
input container.
include_infs
Whether to include infs and -infs in the check. Default is ``True``.
key_chains
The key-chains to apply or not apply the method to. Default is
None.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains will
be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied. Default
is False.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
Container of booleans indicating whether each leaf value is a nan or not.
Examples
--------
With :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([452]), b=ivy.array([float('inf')]))
>>> y = ivy.Container._static_value_is_nan(x)
>>> print(y)
{
a: False,
b: True
}
With :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([float('nan')]), b=ivy.array([0]))
>>> y = ivy.Container._static_value_is_nan(x)
>>> print(y)
{
a: True,
b: False
}
With :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([float('inf')]), b=ivy.array([22]))
>>> y = ivy.Container._static_value_is_nan(x, include_infs=False)
>>> print(y)
{
a: False,
b: False
}
"""
return ContainerBase.cont_multi_map_in_function(
"value_is_nan",
x,
include_infs=include_infs,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def value_is_nan(
self: ivy.Container,
/,
*,
include_infs: Union[bool, ivy.Container] = True,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.value_is_nan. This
method simply wraps the function, and so the docstring for
ivy.value_is_nan also applies to this method with minimal changes.
Parameters
----------
self
input container.
include_infs
Whether to include infs and -infs in the check. Default is ``True``.
key_chains
The key-chains to apply or not apply the method to. Default is
None.
to_apply
If True, the method will be applied to key_chains, otherwise
key_chains will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
Container of booleans indicating whether each leaf value is a nan or not.
Examples
--------
>>> x = ivy.Container(a=ivy.array([425]), b=ivy.array([float('nan')]))
>>> y = x.value_is_nan()
>>> print(y)
{
a: False,
b: True
}
>>> x = ivy.Container(a=ivy.array([float('inf')]), b=ivy.array([0]))
>>> y = x.value_is_nan()
>>> print(y)
{
a: True,
b: False
}
>>> x = ivy.Container(a=ivy.array([float('inf')]), b=ivy.array([22]))
>>> y = x.value_is_nan(include_infs=False)
>>> print(y)
{
a: False,
b: False
}
"""
return self._static_value_is_nan(
self,
include_infs=include_infs,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_to_numpy(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
copy: Union[bool, ivy.Container] = True,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.to_numpy. This method
simply wraps the function, and so the docstring for ivy.to_numpy also
applies to this method with minimal changes.
Parameters
----------
x
input container.
copy
Whether to copy the input. Default is ``True``.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
a container of numpy arrays copying all the elements of the container
``x``.
Examples
--------
With one :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([1, 0, 1, 1]),
... b=ivy.array([1, -1, 0, 0]))
>>> y = ivy.Container._static_to_numpy(x)
>>> print(y)
{
a: array([1, 0, 1, 1], dtype=int32),
b: array([1, -1, 0, 0], dtype=int32)
}
>>> x = ivy.Container(a=ivy.array([1., 0., 0., 1.]),
... b=ivy.native_array([1, 1, -1, 0]))
>>> y = ivy.Container._static_to_numpy(x)
>>> print(y)
{
a: array([1., 0., 0., 1.], dtype=float32),
b: array([1, 1, -1, 0], dtype=int32)
}
"""
return ContainerBase.cont_multi_map_in_function(
"to_numpy",
x,
copy=copy,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def to_numpy(
self: ivy.Container,
/,
*,
copy: Union[bool, ivy.Container] = True,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.to_numpy. This method
simply wraps the function, and so the docstring for ivy.to_numpy also
applies to this method with minimal changes.
Parameters
----------
self
input container.
copy
Whether to copy the input. Default is ``True``.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
a container of numpy arrays copying all the elements of the container
``self``.
Examples
--------
With one :class:`ivy.Container` instance:
>>> x = ivy.Container(a=ivy.array([-1, 0, 1]), b=ivy.array([1, 0, 1, 1]))
>>> y = x.to_numpy()
>>> print(y)
{
a: array([-1, 0, 1], dtype=int32),
b: array([1, 0, 1, 1], dtype=int32)
}
>>> x = ivy.Container(a=ivy.native_array([[-1, 0, 1], [-1, 0, 1], [1, 0, -1]]),
... b=ivy.native_array([[-1, 0, 0], [1, 0, 1], [1, 1, 1]]))
>>> y = x.to_numpy()
>>> print(y)
{
a: array([[-1, 0, 1],
[-1, 0, 1],
[1, 0, -1]], dtype=int32),
b: array([[-1, 0, 0],
[1, 0, 1],
[1, 1, 1]], dtype=int32)
}
"""
return self._static_to_numpy(
self,
copy=copy,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_to_scalar(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.to_scalar. This method
simply wraps the function, and so the docstring for ivy.to_scalar also
applies to this method with minimal changes.
Parameters
----------
x
input container.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
a container of scalar values copying all the elements of the container
``x``.
Examples
--------
With one :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([-1]), b=ivy.array([3]))
>>> y = ivy.Container._static_to_scalar(x)
>>> print(y)
{
a: -1,
b: 3
}
"""
return ContainerBase.cont_multi_map_in_function(
"to_scalar",
x,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def to_scalar(
self: ivy.Container,
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.to_scalar. This method
simply wraps the function, and so the docstring for ivy.to_scalar also
applies to this method with minimal changes.
Parameters
----------
self
input container.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
a container of scalar values copying all the elements of the container
``self``.
Examples
--------
With one :class:`ivy.Container` instance:
>>> x = ivy.Container(a=ivy.array([1]), b=ivy.array([0]),
... c=ivy.array([-1]))
>>> y = x.to_scalar()
>>> print(y)
{
a: 1,
b: 0,
c: -1
}
"""
return self._static_to_scalar(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_to_list(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.to_list. This method
simply wraps the function, and so the docstring for ivy.to_list also
applies to this method with minimal changes.
Parameters
----------
x
input container.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
A container with list representations of the leaf arrays.
Examples
--------
With one :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([0, 1, 2]))
>>> y = ivy.Container._static_to_list(x)
>>> print(y)
{a:[0,1,2]}
"""
return ContainerBase.cont_multi_map_in_function(
"to_list",
x,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def to_list(
self: ivy.Container,
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.to_list. This method
simply wraps the function, and so the docstring for ivy.to_list also
applies to this method with minimal changes.
Parameters
----------
self
input container.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
A container with list representations of the leaf arrays.
Examples
--------
With one :class:`ivy.Container` instance:
>>> x = ivy.Container(a=ivy.array([0, 1, 2]))
>>> y = x.to_list()
>>> print(y)
{a:[0,1,2]}
"""
return self._static_to_list(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_stable_divide(
numerator: ivy.Container,
denominator: Union[Number, ivy.Array, ivy.Container],
/,
*,
min_denominator: Optional[
Union[Number, ivy.Array, ivy.NativeArray, ivy.Container]
] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.stable_divide. This
method simply wraps the function, and so the docstring for
ivy.stable_divide also applies to this method with minimal changes.
Parameters
----------
numerator
Container of the numerators of the division.
denominator
Container of the denominators of the division.
min_denominator
Container of the minimum denominator to use,
use global ivy.min_denominator by default.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
A container of elements containing the new items following the numerically
stable division.
Examples
--------
>>> x = ivy.Container(a=ivy.asarray([10., 15.]), b=ivy.asarray([20., 25.]))
>>> y = ivy.Container.stable_divide(x, 0.5)
>>> print(y)
{
a: ivy.array([20., 30.]),
b: ivy.array([40., 50.])
}
>>> x = ivy.Container(a=1, b=10)
>>> y = ivy.asarray([4, 5])
>>> z = ivy.Container.stable_divide(x, y)
>>> print(z)
{
a: ivy.array([0.25, 0.2]),
b: ivy.array([2.5, 2.])
}
>>> x = ivy.Container(a=1, b=10)
>>> import numpy as np
>>> y = np.array((4.5, 9))
>>> z = ivy.Container.stable_divide(x, y)
>>> print(z)
{
a: array([0.22222222, 0.11111111]),
b: array([2.22222222, 1.11111111])
}
>>> x = ivy.Container(a=ivy.asarray([1., 2.]), b=ivy.asarray([3., 4.]))
>>> y = ivy.Container(a=ivy.asarray([0.5, 2.5]), b=ivy.asarray([3.5, 0.4]))
>>> z = ivy.Container.stable_divide(x, y)
>>> print(z)
{
a: ivy.array([2., 0.8]),
b: ivy.array([0.857, 10.])
}
>>> x = ivy.Container(a=ivy.asarray([1., 2.]),
... b=ivy.asarray([5., 6.]))
>>> y = ivy.Container(a=ivy.asarray([0.5, 2.5]), b=ivy.asarray([3.5, 0.4]))
>>> z = ivy.Container.stable_divide(x, y, min_denominator=2)
>>> print(z)
{
a: ivy.array([0.4, 0.444]),
b: ivy.array([0.909, 2.5])
}
"""
return ContainerBase.cont_multi_map_in_function(
"stable_divide",
numerator,
denominator,
min_denominator=min_denominator,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def stable_divide(
self,
denominator: Union[Number, ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
min_denominator: Optional[
Union[Number, ivy.Array, ivy.NativeArray, ivy.Container]
] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.stable_divide. This
method simply wraps the function, and so the docstring for
ivy.stable_divide also applies to this method with minimal changes.
Parameters
----------
self
input container.
denominator
Container of the denominators of the division.
min_denominator
Container of the minimum denominator to use,
use global ivy.min_denominator by default.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
A container of elements containing the new items following the numerically
stable division, using ``self`` as the numerator.
Examples
--------
>>> x = ivy.Container(a=ivy.asarray([3., 6.]), b=ivy.asarray([9., 12.]))
>>> y = x.stable_divide(5)
>>> print(y)
{
a: ivy.array([0.6, 1.2]),
b: ivy.array([1.8, 2.4])
}
>>> x = ivy.Container(a=ivy.asarray([[2., 4.], [6., 8.]]),
... b=ivy.asarray([[10., 12.], [14., 16.]]))
>>> z = x.stable_divide(2, min_denominator=2)
>>> print(z)
{
a: ivy.array([[0.5, 1.],
[1.5, 2.]]),
b: ivy.array([[2.5, 3.],
[3.5, 4.]])
}
>>> x = ivy.Container(a=ivy.asarray([3., 6.]), b=ivy.asarray([9., 12.]))
>>> y = ivy.Container(a=ivy.asarray([6., 9.]), b=ivy.asarray([12., 15.]))
>>> z = x.stable_divide(y)
>>> print(z)
{
a: ivy.array([0.5, 0.667]),
b: ivy.array([0.75, 0.8])
}
"""
return self._static_stable_divide(
self,
denominator,
min_denominator=min_denominator,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
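# Note: judging by the examples above, the numerically stable division
# computes numerator / (denominator + min_denominator), with
# min_denominator falling back to the global ivy.min_denominator.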
@staticmethod
def _static_stable_pow(
base: ivy.Container,
exponent: Union[Number, ivy.Array, ivy.Container],
/,
*,
min_base: Optional[Union[float, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.stable_pow. This method
simply wraps the function, and so the docstring for ivy.stable_pow also
applies to this method with minimal changes.
Parameters
----------
base
Container of the base.
exponent
Container of the exponent.
min_base
The minimum base to use, use global ivy.min_base by default.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise
key_chains will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples). Default is
False.
Returns
-------
ret
A container of elements containing the new items following the
numerically stable power.
Examples
--------
>>> x = ivy.Container(a=ivy.asarray([2, 4]), b=ivy.asarray([6, 8]))
>>> y = ivy.Container.stable_pow(x, 2)
>>> print(y)
{
a: ivy.array([4.00004, 16.00008]),
b: ivy.array([36.00012, 64.00016])
}
>>> x = ivy.Container(a=4, b=8)
>>> y = ivy.Container.stable_pow(x, 2)
>>> print(y)
{
a: ivy.array(16.00008),
b: ivy.array(64.00016)
}
>>> x = ivy.Container(a=4, b=8)
>>> y = ivy.asarray([1, 2])
>>> z = ivy.Container.stable_pow(x, y)
>>> print(z)
{
a: ivy.array([4.00001, 16.00008]),
b: ivy.array([8.00001, 64.00016])
}
>>> x = ivy.Container(a=ivy.asarray([2, 4]), b=ivy.asarray([6, 8]))
>>> y = ivy.Container(a=4, b=8)
>>> z = ivy.Container.stable_pow(x, y)
>>> print(z)
{
a: ivy.array([16.00032, 256.00256]),
b: ivy.array([1679638.395, 16777383.77])
}
"""
return ContainerBase.cont_multi_map_in_function(
"stable_pow",
base,
exponent,
min_base=min_base,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def stable_pow(
self,
exponent: Union[Number, ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
min_base: Optional[Union[float, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.stable_pow. This method
simply wraps the function, and so the docstring for ivy.stable_pow also
applies to this method with minimal changes.
Parameters
----------
self
Container of the base.
exponent
Container of the exponent.
min_base
The minimum base to use, use global ivy.min_base by default.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise
key_chains will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples). Default is
False.
Returns
-------
ret
A container of elements containing the new items following the
numerically stable power.
Examples
--------
>>> x = ivy.Container(a=ivy.asarray([2, 4]), b=ivy.asarray([6, 8]))
>>> y = x.stable_pow(2)
>>> print(y)
{
a: ivy.array([4.00004, 16.00008]),
b: ivy.array([36.00012, 64.00016])
}
>>> x = ivy.Container(a=4, b=8)
>>> y = x.stable_pow(2)
>>> print(y)
{
a: ivy.array(16.00008),
b: ivy.array(64.00016)
}
>>> x = ivy.Container(a=4, b=8)
>>> y = ivy.asarray([1, 2])
>>> z = x.stable_pow(y)
>>> print(z)
{
a: ivy.array([4.00001, 16.00008]),
b: ivy.array([8.00001, 64.00016])
}
>>> x = ivy.Container(a=ivy.asarray([2, 4]), b=ivy.asarray([6, 8]))
>>> y = ivy.Container(a=4, b=8)
>>> z = x.stable_pow(y)
>>> print(z)
{
a: ivy.array([16.00032, 256.00256]),
b: ivy.array([1679638.395, 16777383.77])
}
"""
return self._static_stable_pow(
self,
exponent,
min_base=min_base,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
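# Note: as the printed outputs above suggest (e.g. 2 ** 2 -> 4.00004), the
# numerically stable power computes (base + min_base) ** exponent, with
# min_base falling back to the global ivy.min_base (about 1e-05).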
@staticmethod
def _static_einops_rearrange(
x: ivy.Container,
pattern: Union[str, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
**axes_lengths: Union[Dict[str, int], ivy.Container],
) -> ivy.Container:
"""ivy.Container static method variant of ivy.einops_rearrange. This
method simply wraps the function, and so the docstring for
ivy.einops_rearrange also applies to this method with minimal changes.
Parameters
----------
x
Input container to be rearranged.
pattern
Rearrangement pattern.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains will
be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied. Default
is False.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
axes_lengths
Any additional specifications for dimensions.
out
optional output container, for writing the result to. It must have a
shape that the inputs broadcast to.
Returns
-------
ivy.Container with each array having einops.rearrange applied.
Examples
--------
With :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([[1, 2, 3],
... [-4, -5, -6]]),
... b=ivy.array([[7, 8, 9],
... [10, 11, 12]]))
>>> y = ivy.Container._static_einops_rearrange(x, "height width -> width height")
>>> print(y)
{
a: ivy.array([[1, -4],
[2, -5],
[3, -6]]),
b: ivy.array([[7, 10],
[8, 11],
[9, 12]])
}
>>> x = ivy.Container(a=ivy.array([[[ 1, 2, 3],
... [ 4, 5, 6]],
... [[ 7, 8, 9],
... [10, 11, 12]]]))
>>> y = ivy.Container._static_einops_rearrange(x, "c h w -> c (h w)")
>>> print(y)
{
a: (<class ivy.data_classes.array.array.Array> shape=[2, 6])
}
>>> x = ivy.Container(a=ivy.array([[1, 2, 3, 4, 5, 6],
... [7, 8, 9, 10, 11, 12]]))
>>> y = ivy.Container._static_einops_rearrange(x, "c (h w) -> (c h) w", h=2, w=3)
>>> print(y)
{
a: (<class ivy.data_classes.array.array.Array> shape=[4, 3])
}
"""
return ContainerBase.cont_multi_map_in_function(
"einops_rearrange",
x,
pattern,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
**axes_lengths,
)
def einops_rearrange(
self: ivy.Container,
pattern: Union[str, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
**axes_lengths: Union[Dict[str, int], ivy.Container],
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.einops_rearrange. This
method simply wraps the function, and so the docstring for
ivy.einops_rearrange also applies to this method with minimal changes.
Parameters
----------
self
Input container to be rearranged.
pattern
Rearrangement pattern.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains will
be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied. Default
is False.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
axes_lengths
Any additional specifications for dimensions.
out
optional output container, for writing the result to. It must have a
shape that the inputs broadcast to.
Returns
-------
ivy.Container with each array having einops.rearrange applied.
Examples
--------
>>> x = ivy.Container(a=ivy.array([[1, 2, 3],
... [-4, -5, -6]]),
... b=ivy.array([[7, 8, 9],
... [10, 11, 12]]))
>>> y = x.einops_rearrange("height width -> width height")
>>> print(y)
{
a: ivy.array([[1, -4],
[2, -5],
[3, -6]]),
b: ivy.array([[7, 10],
[8, 11],
[9, 12]])
}
>>> x = ivy.Container(a=ivy.array([[[ 1, 2, 3],
... [ 4, 5, 6]],
... [[ 7, 8, 9],
... [10, 11, 12]]]))
>>> y = x.einops_rearrange("c h w -> c (h w)")
>>> print(y)
{
a: (<class ivy.data_classes.array.array.Array> shape=[2, 6])
}
>>> x = ivy.Container(a=ivy.array([[1, 2, 3, 4, 5, 6],
... [7, 8, 9, 10, 11, 12]]))
>>> y = x.einops_rearrange("c (h w) -> (c h) w", h=2, w=3)
>>> print(y)
{
a: (<class ivy.data_classes.array.array.Array> shape=[4, 3])
}
"""
return self._static_einops_rearrange(
self,
pattern,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
**axes_lengths,
)
@staticmethod
def _static_clip_matrix_norm(
x: Union[ivy.Container, ivy.Array, ivy.NativeArray],
max_norm: Union[float, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
p: Union[float, ivy.Container] = 2.0,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.clip_matrix_norm. This
method simply wraps the function, and so the docstring for
ivy.clip_matrix_norm also applies to this method with minimal changes.
Parameters
----------
x
Input array containing elements to clip.
max_norm
The maximum value of the array norm.
p
The p-value for computing the p-norm. Default is 2.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output array, for writing the result to. It must
have a shape that the inputs broadcast to.
Returns
-------
ret
An array with the matrix norm downscaled to the max norm if needed.
Examples
--------
With :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([[0., 1., 2.]]),
... b=ivy.array([[3., 4., 5.]]))
>>> y = ivy.Container._static_clip_matrix_norm(x, 2.0)
>>> print(y)
{
a: ivy.array([[0., 0.894, 1.79]]),
b: ivy.array([[0.849, 1.13, 1.41]])
}
"""
return ContainerBase.cont_multi_map_in_function(
"clip_matrix_norm",
x,
max_norm,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
p=p,
out=out,
)
def clip_matrix_norm(
self: ivy.Container,
max_norm: Union[float, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
p: Union[float, ivy.Container] = 2.0,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.clip_matrix_norm. This
method simply wraps the function, and so the docstring for
ivy.clip_matrix_norm also applies to this method with minimal changes.
Parameters
----------
self
Input array containing elements to clip.
max_norm
The maximum value of the array norm.
p
The p-value for computing the p-norm. Default is 2.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output array, for writing the result to. It must
have a shape that the inputs broadcast to.
Returns
-------
ret
An array with the matrix norm downscaled to the max norm if needed.
Examples
--------
With :class:`ivy.Container` instance method:
>>> x = ivy.Container(a=ivy.array([[0., 1., 2.]]),
... b=ivy.array([[3., 4., 5.]]))
>>> y = x.clip_matrix_norm(2.0, p=1.0)
>>> print(y)
{
a: ivy.array([[0., 1., 2.]]),
b: ivy.array([[1.2, 1.6, 2.]])
}
"""
return self._static_clip_matrix_norm(
self,
max_norm,
p=p,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
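# Note: consistent with the examples above, clipping rescales each matrix by
# max_norm / p_norm whenever its p-norm exceeds max_norm, and leaves it
# unchanged otherwise.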
@staticmethod
def _static_supports_inplace_updates(
x: ivy.Container,
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.supports_inplace_updates.
This method simply wraps the function, and so the docstring for
ivy.supports_inplace_updates also applies to this method with minimal
changes.
Parameters
----------
x
An ivy.Container.
key_chains
The key-chains to apply or not apply the method to.
Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped.
Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
An ivy.Container instance of bool values.
True if nodes of x support in-place operations. False otherwise.
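Examples
--------
A minimal sketch; the results are backend-dependent (the ``True`` values
shown here assume a torch backend, as in the instance example below):
>>> x = ivy.Container(a=ivy.array([5., 6.]), b=ivy.array([7., 8.]))
>>> ret = ivy.Container._static_supports_inplace_updates(x)
>>> print(ret)
{
    a: True,
    b: True
}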
"""
return ContainerBase.cont_multi_map_in_function(
"supports_inplace_updates",
x,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def supports_inplace_updates(
self: ivy.Container,
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of
ivy.supports_inplace_updates. This method simply wraps the static
function, and so the docstring for the static variant also applies to
this method with minimal changes.
Parameters
----------
self
An ivy.Container whose elements are data types supported by Ivy.
key_chains
The key-chains to apply or not apply the method to.
Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped.
Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
An ivy.Container instance of bool values.
True if nodes of the Container support in-place operations. False otherwise.
Examples
--------
With :class:`ivy.Container` input and backend set as `torch`:
>>> x = ivy.Container(a=ivy.array([5., 6.]), b=ivy.array([7., 8.]))
>>> ret = x.supports_inplace_updates()
>>> print(ret)
{
a: True,
b: True
}
With :class:`ivy.Container` input and backend set as `jax`:
>>> x = ivy.Container(a=ivy.array([5.]), b=ivy.array([7.]))
>>> ret = x.supports_inplace_updates()
>>> print(ret)
{
a: False,
b: False
}
"""
return _ContainerWithGeneral._static_supports_inplace_updates(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_get_num_dims(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
as_array: Union[bool, ivy.Container] = False,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.get_num_dims. This
method simply wraps the function, and so the docstring for
ivy.get_num_dims also applies to this method with minimal changes.
Parameters
----------
x
ivy.Container to infer the number of dimensions for
as_array
Whether to return the number of dimensions as an array. Default is ``False``.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
Number of dimensions of the array
Examples
--------
>>> x = ivy.Container(b = ivy.asarray([[0.,1.,1.],[1.,0.,0.],[8.,2.,3.]]))
>>> ivy.Container._static_get_num_dims(x)
{
b: 2
}
>>> x = ivy.Container(b = ivy.array([[[0,0,0],[0,0,0],[0,0,0]],
... [[0,0,0],[0,0,0],[0,0,0]],
... [[0,0,0],[0,0,0],[0,0,0]]]))
>>> ivy.Container._static_get_num_dims(x)
{
b: 3
}
>>> x = ivy.Container(b = ivy.array([[[0,0,0],[0,0,0],[0,0,0]],
... [[0,0,0],[0,0,0],[0,0,0]]]),
... c = ivy.asarray([[0.,1.,1.],[8.,2.,3.]]))
>>> ivy.Container._static_get_num_dims(x)
{
b: 3,
c: 2
}
>>> ivy.Container._static_get_num_dims(x, as_array=True)
{
b: ivy.array(3),
c: ivy.array(2)
}
"""
return ContainerBase.cont_multi_map_in_function(
"get_num_dims",
x,
as_array=as_array,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def get_num_dims(
self: ivy.Container,
/,
*,
as_array: Union[bool, ivy.Container] = False,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.get_num_dims. This
method simply wraps the function, and so the docstring for
ivy.get_num_dims also applies to this method with minimal changes.
Parameters
----------
self
ivy.Container to infer the number of dimensions for
as_array
Whether to return the number of dimensions as an array. Default is ``False``.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
Number of dimensions of the array
Examples
--------
>>> a = ivy.Container(b = ivy.asarray([[0.,1.,1.],[1.,0.,0.],[8.,2.,3.]]))
>>> a.get_num_dims()
{
b: 2
}
>>> a = ivy.Container(b = ivy.array([[[0,0,0],[0,0,0],[0,0,0]],
... [[0,0,0],[0,0,0],[0,0,0]],
... [[0,0,0],[0,0,0],[0,0,0]]]))
>>> a.get_num_dims()
{
b: 3
}
>>> a = ivy.Container(b = ivy.array([[[0,0,0],[0,0,0],[0,0,0]],
... [[0,0,0],[0,0,0],[0,0,0]]]),
... c = ivy.asarray([[0.,1.,1.],[8.,2.,3.]]))
>>> a.get_num_dims()
{
b: 3,
c: 2
}
>>> a.get_num_dims(as_array=True)
{
b: ivy.array(3),
c: ivy.array(2)
}
"""
return _ContainerWithGeneral._static_get_num_dims(
self,
as_array=as_array,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_array_equal(
x0: Union[ivy.Array, ivy.NativeArray, ivy.Container],
x1: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.array_equal. This
method simply wraps the function, and so the docstring for
ivy.array_equal also applies to this method with minimal changes.
Parameters
----------
x0
The first input container to compare.
x1
The second input container to compare.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
A boolean container indicating whether the two containers are
equal at each level.
Examples
--------
>>> a = ivy.array([[0., 1.], [1. ,0.]])
>>> b = ivy.array([[-2., 1.], [1. ,2.]])
>>> c = ivy.array([[0., 1.], [1. ,0.]])
>>> d = ivy.array([[2., 1.], [1. ,2.]])
>>> a0 = ivy.Container(a = a, b = b)
>>> a1 = ivy.Container(a = c, b = d)
>>> y = ivy.Container._static_array_equal(a0, a1)
>>> print(y)
{
a: True,
b: False
}
"""
return ContainerBase.cont_multi_map_in_function(
"array_equal",
x0,
x1,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def array_equal(
self: Union[ivy.Array, ivy.NativeArray, ivy.Container],
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.array_equal. This
method simply wraps the function, and so the docstring for
ivy.array_equal also applies to this method with minimal changes.
Parameters
----------
self
The first input container to compare.
x
The second input container to compare.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
A boolean container indicating whether the two containers are
equal at each level.
Examples
--------
>>> a = ivy.array([[0., 1.], [1. ,0.]])
>>> b = ivy.array([[-2., 1.], [1. ,2.]])
>>> c = ivy.array([[0., 1.], [1. ,0.]])
>>> d = ivy.array([[2., 1.], [1. ,2.]])
>>> a1 = ivy.Container(a = a, b = b)
>>> a2 = ivy.Container(a = c, b = d)
>>> y = a1.array_equal(a2)
>>> print(y)
{
a: True,
b: False
}
>>> x1 = ivy.Container(a=ivy.native_array([1, 0, 0]),
... b=ivy.array([1, 2, 3]))
>>> x2 = ivy.Container(a=ivy.native_array([1, 0, 1]),
... b=ivy.array([1, 2, 3]))
>>> y = x1.array_equal(x2)
>>> print(y)
{
a: False,
b: True
}
"""
return _ContainerWithGeneral._static_array_equal(
self,
x,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def static_isin(
element: ivy.Container,
test_elements: ivy.Container,
/,
*,
assume_unique: Union[bool, ivy.Container] = False,
invert: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""Container instance method variant of ivy.isin. This method simply
wraps the function, and so the docstring for ivy.isin also applies to
this method with minimal changes.
Parameters
----------
element
input container
test_elements
values against which to test for each input element
assume_unique
If True, assumes both elements and test_elements contain unique elements,
which can speed up the calculation. Default value is False.
invert
If True, inverts the boolean return array, resulting in True values for
elements not in test_elements. Default value is False.
Returns
-------
ret
output a boolean container of the same shape as elements that is True for
elements in test_elements and False otherwise.
Examples
--------
>>> x = ivy.Container(a=[[10, 7, 4], [3, 2, 1]],
... b=[3, 2, 1, 0])
>>> y = ivy.Container(a=[1, 2, 3],
... b=[1, 0, 3])
>>> ivy.Container.static_isin(x, y)
ivy.Container(a=[[False, False, False], [ True, True, True]],
b=[ True, False, True])
>>> ivy.Container.static_isin(x, y, invert=True)
ivy.Container(a=[[ True, True, True], [False, False, False]],
b=[False, True, False])
"""
return ContainerBase.cont_multi_map_in_function(
"isin", element, test_elements, assume_unique=assume_unique, invert=invert
)
def isin(
self: ivy.Container,
test_elements: ivy.Container,
/,
*,
assume_unique: Union[bool, ivy.Container] = False,
invert: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""Container instance method variant of ivy.isin. This method simply
wraps the function, and so the docstring for ivy.isin also applies to
this method with minimal changes.
Parameters
----------
self
            input container
test_elements
values against which to test for each input element
assume_unique
If True, assumes both elements and test_elements contain unique elements,
which can speed up the calculation. Default value is False.
invert
If True, inverts the boolean return array, resulting in True values for
elements not in test_elements. Default value is False.
Returns
-------
ret
            a boolean container of the same shape as ``self``, True where an
            element is in ``test_elements`` and False otherwise.
Examples
--------
>>> x = ivy.Container(a=[[10, 7, 4], [3, 2, 1]],\
b=[3, 2, 1, 0])
>>> y = ivy.Container(a=[1, 2, 3],\
b=[1, 0, 3])
>>> x.isin(y)
ivy.Container(a=[[False, False, False], [ True, True, True]],\
b=[ True, False, True])
"""
return self.static_isin(
self, test_elements, assume_unique=assume_unique, invert=invert
)
@staticmethod
def static_itemsize(
x: ivy.Container,
/,
) -> ivy.Container:
"""Container instance method variant of ivy.itemsize. This method
simply wraps the function, and so the docstring for ivy.itemsize also
applies to this method with minimal changes.
Parameters
----------
x
The input container.
Returns
-------
ret
Integers specifying the element size in bytes.
Examples
--------
>>> x = ivy.Container(a=ivy.array([1,2,3], dtype=ivy.float64),\
b=ivy.array([1,2,3], dtype=ivy.complex128))
        >>> ivy.Container.static_itemsize(x)
ivy.Container(a=8, b=16)
"""
return ContainerBase.cont_multi_map_in_function("itemsize", x)
def itemsize(
self: ivy.Container,
/,
) -> ivy.Container:
"""Container instance method variant of ivy.itemsize. This method
simply wraps the function, and so the docstring for ivy.itemsize also
applies to this method with minimal changes.
Parameters
----------
self
The input container.
Returns
-------
ret
Integers specifying the element size in bytes.
"""
return self.static_itemsize(self)
@staticmethod
def static_strides(
x: ivy.Container,
/,
) -> ivy.Container:
"""Container instance method variant of ivy.strides. This method simply
wraps the function, and so the docstring for ivy.strides also applies
to this method with minimal changes.
Parameters
----------
x
The input container.
Returns
-------
ret
A tuple containing the strides.
Examples
--------
>>> x = ivy.Container(a=ivy.array([[1, 5, 9], [2, 6, 10]]),\
b=ivy.array([[1, 2, 3, 4], [5, 6, 7, 8]]))
        >>> ivy.Container.static_strides(x)
        ivy.Container(a=(12, 4), b=(16, 4))
"""
return ContainerBase.cont_multi_map_in_function("strides", x)
def strides(
self: ivy.Container,
/,
) -> ivy.Container:
"""Container instance method variant of ivy.strides. This method simply
wraps the function, and so the docstring for ivy.strides also applies
to this method with minimal changes.
Parameters
----------
self
The input container.
Returns
-------
ret
A tuple containing the strides.
"""
return self.static_strides(self)
@staticmethod
def _static_exists(
x: ivy.Container,
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.exists. This method
simply wraps the function, and so the docstring for ivy.exists also
applies to this method with minimal changes.
Parameters
----------
x
The input container.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
A boolean container detailing if any of the leaf nodes are None.
True if not None, False if None.
Examples
--------
>>> x = ivy.Container(a=ivy.array([0,4,5]), b=ivy.array([2,2,0]))
        >>> y = ivy.Container._static_exists(x)
>>> print(y)
{ a: True, b: True }
>>> x = ivy.Container(a=[1,2], b=None)
        >>> y = ivy.Container._static_exists(x)
>>> print(y)
{ a: True, b: False }
>>> x = ivy.Container(a={"d": 1, "c": 3}, b={"d": 20, "c": None})
        >>> y = ivy.Container._static_exists(x)
>>> print(y)
{ a: { c: True, d: True }, b: { c: False, d: True } }
"""
return ContainerBase.cont_multi_map_in_function(
"exists",
x,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def exists(
self: ivy.Container,
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.exists. This method
simply wraps the function, and so the docstring for ivy.exists also
applies to this method with minimal changes.
Parameters
----------
self
The input container.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
A boolean container detailing if any of the leaf nodes are None.
True if not None, False if None.
Examples
--------
>>> x = ivy.Container(a=[1,2,3,4], b=[])
>>> y = x.exists()
>>> print(y)
{ a: True, b: True }
>>> x = ivy.Container(a=None, b=[1,2])
>>> y = x.exists()
>>> print(y)
{ a: False, b: True }
>>> x = ivy.Container(a={"d": 1, "c": 3}, b=None)
>>> y = x.exists()
>>> print(y)
{ a: { c: True, d: True }, b: False }
"""
return self._static_exists(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
| ivy/ivy/data_classes/container/general.py/0 | {
"file_path": "ivy/ivy/data_classes/container/general.py",
"repo_id": "ivy",
"token_count": 75761
} | 10 |
import ivy
from collections.abc import Mapping
from abc import ABCMeta
class FactorizedTensor(Mapping, metaclass=ABCMeta):
"""Base Class for Tensors in Factorized form."""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
    def to_tensor(self):
        raise NotImplementedError
    def to_unfolded(self, mode):
        raise NotImplementedError
    def to_vec(self):
        raise NotImplementedError
    def norm(self):
        """L2-normalize the reconstructed tensor (note: this normalizes the
        dense form rather than returning a scalar norm)."""
        return ivy.l2_normalize(self.to_tensor())
def mode_dot(self, matrix_or_tensor, mode):
return ivy.mode_dot(self.to_tensor(), matrix_or_tensor, mode)
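    # Hedged sketch (not part of ivy): a concrete subclass stores its factors
    # and implements the reconstruction hooks above. A hypothetical diagonal
    # factorization could look like:
    #
    #     class DiagTensor(FactorizedTensor):
    #         def __init__(self, diag):
    #             self.diag = diag                      # 1-D array of factors
    #         def to_tensor(self):
    #             return ivy.diag(self.diag)            # dense reconstruction
    #         def to_vec(self):
    #             return ivy.reshape(self.to_tensor(), (-1,))
    #
    # (a real subclass must also satisfy the Mapping protocol:
    # __getitem__, __iter__ and __len__)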
| ivy/ivy/data_classes/factorized_tensor/base.py/0 | {
"file_path": "ivy/ivy/data_classes/factorized_tensor/base.py",
"repo_id": "ivy",
"token_count": 275
} | 11 |
mod c_lib;
mod error;
mod wrappers;
use std::rc::Rc;
pub use error::{Error, Result};
pub use wrappers::*;
use pyo3::prelude::*;
use ndarray::{ArrayD};
use numpy::{PyArrayDyn, ToPyArray};
use half::{f16, bf16};
use pyo3::exceptions::PyTypeError;
use pyo3::{exceptions, wrap_pyfunction};
#[derive(Debug, Copy, Clone)]
pub enum TfLogLevel {
Info,
Warning,
Error,
Fatal,
}
impl TfLogLevel {
fn as_env_variable_str(&self) -> &'static str {
match self {
Self::Info => "0",
Self::Warning => "1",
Self::Error => "2",
Self::Fatal => "3",
}
}
}
pub fn set_tf_min_log_level(log_level: TfLogLevel) {
std::env::set_var("TF_CPP_MIN_LOG_LEVEL", log_level.as_env_variable_str())
}
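// Hedged usage sketch: this only sets the TF_CPP_MIN_LOG_LEVEL environment
// variable, so it must be called before the XLA/TF runtime is initialised:
//
//     set_tf_min_log_level(TfLogLevel::Error);
//     let client = cpu_client()?; // logging below Error is now suppressed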
#[derive(Debug)]
enum ArrayDyn {
Pred(ArrayD<bool>),
I8(ArrayD<i8>),
I16(ArrayD<i16>),
I32(ArrayD<i32>),
I64(ArrayD<i64>),
U8(ArrayD<u8>),
U16(ArrayD<u16>),
U32(ArrayD<u32>),
U64(ArrayD<u64>),
Bf16(ArrayD<bf16>),
F16(ArrayD<f16>),
F32(ArrayD<f32>),
F64(ArrayD<f64>),
}
#[derive(Debug)]
#[pyclass(unsendable)]
pub struct Tensor {
x: ArrayDyn
}
impl From<ArrayD<bool>> for Tensor {
fn from(x: ArrayD<bool>) -> Self {
Tensor {
x: ArrayDyn::Pred(x),
}
}
}
impl From<ArrayD<i8>> for Tensor {
fn from(x: ArrayD<i8>) -> Self {
Tensor {
x: ArrayDyn::I8(x),
}
}
}
impl From<ArrayD<i16>> for Tensor {
fn from(x: ArrayD<i16>) -> Self {
Tensor {
x: ArrayDyn::I16(x),
}
}
}
impl From<ArrayD<i32>> for Tensor {
fn from(x: ArrayD<i32>) -> Self {
Tensor {
x: ArrayDyn::I32(x),
}
}
}
impl From<ArrayD<i64>> for Tensor {
fn from(x: ArrayD<i64>) -> Self {
Tensor {
x: ArrayDyn::I64(x),
}
}
}
impl From<ArrayD<u8>> for Tensor {
fn from(x: ArrayD<u8>) -> Self {
Tensor {
x: ArrayDyn::U8(x),
}
}
}
impl From<ArrayD<u16>> for Tensor {
fn from(x: ArrayD<u16>) -> Self {
Tensor {
x: ArrayDyn::U16(x),
}
}
}
impl From<ArrayD<u32>> for Tensor {
fn from(x: ArrayD<u32>) -> Self {
Tensor {
x: ArrayDyn::U32(x),
}
}
}
impl From<ArrayD<u64>> for Tensor {
fn from(x: ArrayD<u64>) -> Self {
Tensor {
x: ArrayDyn::U64(x),
}
}
}
impl From<ArrayD<bf16>> for Tensor {
fn from(x: ArrayD<bf16>) -> Self {
Tensor {
x: ArrayDyn::Bf16(x),
}
}
}
impl From<ArrayD<f16>> for Tensor {
fn from(x: ArrayD<f16>) -> Self {
Tensor {
x: ArrayDyn::F16(x),
}
}
}
impl From<ArrayD<f32>> for Tensor {
fn from(x: ArrayD<f32>) -> Self {
Tensor {
x: ArrayDyn::F32(x),
}
}
}
impl From<ArrayD<f64>> for Tensor {
fn from(x: ArrayD<f64>) -> Self {
Tensor {
x: ArrayDyn::F64(x),
}
}
}
#[pymethods]
impl Tensor {
fn __repr__(&self) -> PyResult<String> {
let desc = match &self.x {
ArrayDyn::Pred(array) => format!("{:?}", array),
ArrayDyn::I8(array) => format!("{:?}", array),
ArrayDyn::I16(array) => format!("{:?}", array),
ArrayDyn::I32(array) => format!("{:?}", array),
ArrayDyn::I64(array) => format!("{:?}", array),
ArrayDyn::U8(array) => format!("{:?}", array),
ArrayDyn::U16(array) => format!("{:?}", array),
ArrayDyn::U32(array) => format!("{:?}", array),
ArrayDyn::U64(array) => format!("{:?}", array),
ArrayDyn::Bf16(array) => format!("{:?}", array),
ArrayDyn::F16(array) => format!("{:?}", array),
ArrayDyn::F32(array) => format!("{:?}", array),
ArrayDyn::F64(array) => format!("{:?}", array),
};
Ok(format!("Tensor({})", desc))
}
}
#[derive(Clone, Debug)]
#[pyclass(unsendable)]
struct Bf16Array {
x: Py<PyArrayDyn<f32>>
}
impl From<Py<PyArrayDyn<f32>>> for Bf16Array {
fn from(x: Py<PyArrayDyn<f32>>) -> Self {
Bf16Array {
x
}
}
}
#[derive(Clone, Debug)]
#[pyclass(unsendable)]
struct F16Array {
x: Py<PyArrayDyn<f32>>
}
impl From<Py<PyArrayDyn<f32>>> for F16Array {
fn from(x: Py<PyArrayDyn<f32>>) -> Self {
F16Array {
x
}
}
}
#[pyfunction]
fn create_bf16_array(x: Py<PyArrayDyn<f32>>) -> PyResult<Bf16Array> {
let x = Bf16Array{x};
Ok(x)
}
#[pyfunction]
fn create_f16_array(x: Py<PyArrayDyn<f32>>) -> PyResult<F16Array> {
let x = F16Array{x};
Ok(x)
}
#[derive(Debug)]
enum DynamicPyArray {
Pred(Py<PyArrayDyn<bool>>),
I8(Py<PyArrayDyn<i8>>),
I16(Py<PyArrayDyn<i16>>),
I32(Py<PyArrayDyn<i32>>),
I64(Py<PyArrayDyn<i64>>),
U8(Py<PyArrayDyn<u8>>),
U16(Py<PyArrayDyn<u16>>),
U32(Py<PyArrayDyn<u32>>),
U64(Py<PyArrayDyn<u64>>),
Bf16(Bf16Array),
F16(F16Array),
F32(Py<PyArrayDyn<f32>>),
F64(Py<PyArrayDyn<f64>>),
}
impl<'source> FromPyObject<'source> for DynamicPyArray {
fn extract(obj: &'source PyAny) -> PyResult<Self> {
if let Ok(arr) = obj.extract::<Py<PyArrayDyn<bool>>>() {
Ok(DynamicPyArray::Pred(arr))
}
else if let Ok(arr) = obj.extract::<Py<PyArrayDyn<i8>>>() {
Ok(DynamicPyArray::I8(arr))
}
else if let Ok(arr) = obj.extract::<Py<PyArrayDyn<i16>>>() {
Ok(DynamicPyArray::I16(arr))
}
else if let Ok(arr) = obj.extract::<Py<PyArrayDyn<i32>>>() {
Ok(DynamicPyArray::I32(arr))
}
else if let Ok(arr) = obj.extract::<Py<PyArrayDyn<i64>>>() {
Ok(DynamicPyArray::I64(arr))
}
else if let Ok(arr) = obj.extract::<Py<PyArrayDyn<u8>>>() {
Ok(DynamicPyArray::U8(arr))
}
else if let Ok(arr) = obj.extract::<Py<PyArrayDyn<u16>>>() {
Ok(DynamicPyArray::U16(arr))
}
else if let Ok(arr) = obj.extract::<Py<PyArrayDyn<u32>>>() {
Ok(DynamicPyArray::U32(arr))
}
else if let Ok(arr) = obj.extract::<Py<PyArrayDyn<u64>>>() {
Ok(DynamicPyArray::U64(arr))
}
else if let Ok(arr) = obj.extract::<Bf16Array>() {
Ok(DynamicPyArray::Bf16(arr))
}
else if let Ok(arr) = obj.extract::<F16Array>() {
Ok(DynamicPyArray::F16(arr))
}
else if let Ok(arr) = obj.extract::<Py<PyArrayDyn<f32>>>() {
Ok(DynamicPyArray::F32(arr))
}
else if let Ok(arr) = obj.extract::<Py<PyArrayDyn<f64>>>() {
Ok(DynamicPyArray::F64(arr))
}
else {
Err(PyErr::from(PyTypeError::new_err(
"Expected a numpy array of one of the valid types",
)))
}
}
}
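// Note on the chain above: extraction of `Py<PyArrayDyn<T>>` is dtype-exact,
// so the branches never coerce (an f32 ndarray will not match the f64 arm),
// and the Bf16Array/F16Array wrappers are tried before plain f32 so that
// explicitly wrapped arrays keep their intended low-precision type.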
#[pyfunction]
fn constant_array(py: Python, array: DynamicPyArray, builder: XlaBuilder) -> PyResult<XlaOp> {
match array {
DynamicPyArray::Pred(py_array) => {
let x = Literal::vec1(unsafe { py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice() });
let x = builder.constant_literal(&x)?;
Ok(x)
},
DynamicPyArray::I8(py_array) => {
let x = Literal::vec1(unsafe { py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice() });
let x = builder.constant_literal(&x)?;
Ok(x)
},
DynamicPyArray::I16(py_array) => {
let x = Literal::vec1(unsafe { py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice() });
let x = builder.constant_literal(&x)?;
Ok(x)
},
DynamicPyArray::I32(py_array) => {
let x = Literal::vec1(unsafe { py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice() });
let x = builder.constant_literal(&x)?;
Ok(x)
},
DynamicPyArray::I64(py_array) => {
let x = Literal::vec1(unsafe { py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice() });
let x = builder.constant_literal(&x)?;
Ok(x)
},
DynamicPyArray::U8(py_array) => {
let x = Literal::vec1(unsafe { py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice() });
let x = builder.constant_literal(&x)?;
Ok(x)
},
DynamicPyArray::U16(py_array) => {
let x = Literal::vec1(unsafe { py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice() });
let x = builder.constant_literal(&x)?;
Ok(x)
},
DynamicPyArray::U32(py_array) => {
let x = Literal::vec1(unsafe { py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice() });
let x = builder.constant_literal(&x)?;
Ok(x)
},
DynamicPyArray::U64(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
let x = builder.constant_literal(&x)?;
Ok(x)
},
DynamicPyArray::Bf16(py_array) => {
let x = Literal::vec1(unsafe {py_array.x.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()}).convert(PrimitiveType::Bf16)?;
let x = builder.constant_literal(&x)?;
Ok(x)
},
DynamicPyArray::F16(py_array) => {
let x = Literal::vec1(unsafe {py_array.x.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()}).convert(PrimitiveType::F16)?;
let x = builder.constant_literal(&x)?;
Ok(x)
},
DynamicPyArray::F32(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
let x = builder.constant_literal(&x)?;
Ok(x)
},
DynamicPyArray::F64(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
let x = builder.constant_literal(&x)?;
Ok(x)
},
}
}
#[pyfunction]
fn gather_params(py: Python, arrays: Vec<DynamicPyArray>) -> PyResult<Vec<Literal>> {
let mut literals = Vec::with_capacity(arrays.len());
for array in arrays {
match array {
DynamicPyArray::Pred(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
literals.push(x);
},
DynamicPyArray::I8(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
literals.push(x);
},
DynamicPyArray::I16(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
literals.push(x);
},
DynamicPyArray::I32(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
literals.push(x);
},
DynamicPyArray::I64(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
literals.push(x);
},
DynamicPyArray::U8(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
literals.push(x);
},
DynamicPyArray::U16(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
literals.push(x);
},
DynamicPyArray::U32(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
literals.push(x);
},
DynamicPyArray::U64(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
literals.push(x);
},
DynamicPyArray::Bf16(py_array) => {
let x = Literal::vec1(unsafe {py_array.x.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()}).convert(PrimitiveType::Bf16)?;
literals.push(x);
},
DynamicPyArray::F16(py_array) => {
let x = Literal::vec1(unsafe {py_array.x.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()}).convert(PrimitiveType::F16)?;
literals.push(x);
},
DynamicPyArray::F32(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
literals.push(x);
},
DynamicPyArray::F64(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
literals.push(x);
},
}
}
Ok(literals)
}
#[pyfunction]
fn new_input(py: Python, input: DynamicPyArray) -> PyResult<Literal> {
match input {
DynamicPyArray::Pred(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
Ok(x)
},
DynamicPyArray::I8(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
Ok(x)
},
DynamicPyArray::I16(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
Ok(x)
},
DynamicPyArray::I32(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
Ok(x)
},
DynamicPyArray::I64(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
Ok(x)
},
DynamicPyArray::U8(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
Ok(x)
},
DynamicPyArray::U16(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
Ok(x)
},
DynamicPyArray::U32(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
Ok(x)
},
DynamicPyArray::U64(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
Ok(x)
},
DynamicPyArray::Bf16(py_array) => {
let x = Literal::vec1(unsafe {py_array.x.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()}).convert(PrimitiveType::Bf16)?;
Ok(x)
},
DynamicPyArray::F16(py_array) => {
let x = Literal::vec1(unsafe {py_array.x.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()}).convert(PrimitiveType::F16)?;
Ok(x)
},
DynamicPyArray::F32(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
Ok(x)
},
DynamicPyArray::F64(py_array) => {
let x = Literal::vec1(unsafe {py_array.as_ref(py).as_array().to_owned().into_raw_vec().as_slice()});
Ok(x)
},
}
}
#[pyfunction]
fn swap_param(x: Literal, mut params: Vec<Literal>) -> PyResult<Vec<Literal>> {
params[0] = x;
Ok(params)
}
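// Replaces params[0] and hands the Vec back by value; presumably a cheap way
// to reuse a pre-gathered parameter list while swapping in one fresh input.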
#[pyfunction]
fn to_tensor(literal: Literal) -> PyResult<Tensor> {
let shape = literal.shape().unwrap();
let shape = ArrayShape::try_from(&shape).unwrap();
let shape: Vec<usize> = shape.dims().iter().map(|&x| x as usize).collect();
match literal.ty().unwrap() {
ElementType::Pred => {
let data: Vec<bool> = literal.to_vec().unwrap();
let array = ArrayD::from_shape_vec(shape, data).unwrap();
Ok(Tensor::from(array))
}
ElementType::S8 => {
let data: Vec<i8> = literal.to_vec().unwrap();
let array = ArrayD::from_shape_vec(shape, data).unwrap();
Ok(Tensor::from(array))
}
ElementType::S16 => {
let data: Vec<i16> = literal.to_vec().unwrap();
let array = ArrayD::from_shape_vec(shape, data).unwrap();
Ok(Tensor::from(array))
}
ElementType::S32 => {
let data: Vec<i32> = literal.to_vec().unwrap();
let array = ArrayD::from_shape_vec(shape, data).unwrap();
Ok(Tensor::from(array))
}
ElementType::S64 => {
let data: Vec<i64> = literal.to_vec().unwrap();
let array = ArrayD::from_shape_vec(shape, data).unwrap();
Ok(Tensor::from(array))
}
ElementType::U8 => {
let data: Vec<u8> = literal.to_vec().unwrap();
let array = ArrayD::from_shape_vec(shape, data).unwrap();
Ok(Tensor::from(array))
}
ElementType::U16 => {
let data: Vec<u16> = literal.to_vec().unwrap();
let array = ArrayD::from_shape_vec(shape, data).unwrap();
Ok(Tensor::from(array))
}
ElementType::U32 => {
let data: Vec<u32> = literal.to_vec().unwrap();
let array = ArrayD::from_shape_vec(shape, data).unwrap();
Ok(Tensor::from(array))
}
ElementType::U64 => {
let data: Vec<u64> = literal.to_vec().unwrap();
let array = ArrayD::from_shape_vec(shape, data).unwrap();
Ok(Tensor::from(array))
}
ElementType::Bf16 => {
let data: Vec<f32> = literal.to_vec().unwrap();
let array = ArrayD::from_shape_vec(shape, data).unwrap();
Ok(Tensor::from(array))
}
ElementType::F16 => {
let data: Vec<f32> = literal.to_vec().unwrap();
let array = ArrayD::from_shape_vec(shape, data).unwrap();
Ok(Tensor::from(array))
}
ElementType::F32 => {
let data: Vec<f32> = literal.to_vec().unwrap();
let array = ArrayD::from_shape_vec(shape, data).unwrap();
Ok(Tensor::from(array))
}
ElementType::F64 => {
let data: Vec<f64> = literal.to_vec().unwrap();
let array = ArrayD::from_shape_vec(shape, data).unwrap();
Ok(Tensor::from(array))
}
_ => Err(PyErr::from(PyTypeError::new_err(
"Unsupported date type",
)))
}
}
#[pyfunction]
fn to_numpy(py: Python, literal: Literal) -> PyResult<PyObject> {
let shape = literal.shape().unwrap();
let shape = ArrayShape::try_from(&shape).unwrap();
let shape: Vec<usize> = shape.dims().iter().map(|&x| x as usize).collect();
match literal.ty().unwrap() {
ElementType::Pred => {
let data: Vec<bool> = literal.to_vec()?;
let array = ArrayD::from_shape_vec(shape, data).unwrap().to_pyarray(py);
Ok(array.to_object(py))
}
ElementType::S8 => {
let data: Vec<i8> = literal.to_vec()?;
let array = ArrayD::from_shape_vec(shape, data).unwrap().to_pyarray(py);
Ok(array.to_object(py))
}
ElementType::S16 => {
let data: Vec<i16> = literal.to_vec()?;
let array = ArrayD::from_shape_vec(shape, data).unwrap().to_pyarray(py);
Ok(array.to_object(py))
}
ElementType::S32 => {
let data: Vec<i32> = literal.to_vec()?;
let array = ArrayD::from_shape_vec(shape, data).unwrap().to_pyarray(py);
Ok(array.to_object(py))
}
ElementType::S64 => {
let data: Vec<i64> = literal.to_vec()?;
let array = ArrayD::from_shape_vec(shape, data).unwrap().to_pyarray(py);
Ok(array.to_object(py))
}
ElementType::U8 => {
let data: Vec<u8> = literal.to_vec()?;
let array = ArrayD::from_shape_vec(shape, data).unwrap().to_pyarray(py);
Ok(array.to_object(py))
}
ElementType::U16 => {
let data: Vec<u16> = literal.to_vec()?;
let array = ArrayD::from_shape_vec(shape, data).unwrap().to_pyarray(py);
Ok(array.to_object(py))
}
ElementType::U32 => {
let data: Vec<u32> = literal.to_vec()?;
let array = ArrayD::from_shape_vec(shape, data).unwrap().to_pyarray(py);
Ok(array.to_object(py))
}
ElementType::U64 => {
let data: Vec<u64> = literal.to_vec()?;
let array = ArrayD::from_shape_vec(shape, data).unwrap().to_pyarray(py);
Ok(array.to_object(py))
}
ElementType::Bf16 | ElementType::F16 => {
let literal = literal.convert(PrimitiveType::F32)?;
let data: Vec<f32> = literal.to_vec()?;
let array = ArrayD::from_shape_vec(shape, data).unwrap().to_pyarray(py);
Ok(array.to_object(py))
}
ElementType::F32 => {
let data: Vec<f32> = literal.to_vec()?;
let array = ArrayD::from_shape_vec(shape, data).unwrap().to_pyarray(py);
Ok(array.to_object(py))
}
ElementType::F64 => {
let data: Vec<f64> = literal.to_vec()?;
let array = ArrayD::from_shape_vec(shape, data).unwrap().to_pyarray(py);
Ok(array.to_object(py))
}
_ => Err(PyErr::from(PyTypeError::new_err(
"Unsupported data type",
)))
}
}
#[pyfunction]
fn to_tuple(literal: Literal) -> PyResult<Vec<Literal>> {
let y = literal.to_tuple()?;
Ok(y)
}
macro_rules! param_gen {
($name:ident, $type:ty) => {
#[pyfunction]
fn $name(builder: XlaBuilder, param_number: i64, dims: Vec<i64>, name: &str) -> PyResult<XlaOp> {
let shape = &Shape::array::<$type>(dims);
let param = builder.parameter_s(param_number, shape, name)?;
Ok(param)
}
}
}
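// Each invocation below stamps out one dtype-specific wrapper; e.g.
// `param_gen!(param_f32, f32)` expands to roughly:
//
//     #[pyfunction]
//     fn param_f32(builder: XlaBuilder, param_number: i64,
//                  dims: Vec<i64>, name: &str) -> PyResult<XlaOp> {
//         let shape = &Shape::array::<f32>(dims);
//         Ok(builder.parameter_s(param_number, shape, name)?)
//     }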
param_gen!(param_pred, bool);
param_gen!(param_i8, i8);
param_gen!(param_i16, i16);
param_gen!(param_i32, i32);
param_gen!(param_i64, i64);
param_gen!(param_u8, u8);
param_gen!(param_u16, u16);
param_gen!(param_u32, u32);
param_gen!(param_u64, u64);
param_gen!(param_bf16, Bf16);
param_gen!(param_f16, F16);
param_gen!(param_f32, f32);
param_gen!(param_f64, f64);
macro_rules! constant {
($name:ident, $type:ty) => {
#[pyfunction]
fn $name(b: XlaBuilder, v: $type) -> PyResult<XlaOp> {
let c = b.c0(v)?;
Ok(c)
}
};
}
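// e.g. `constant_f32(builder, 1.5)` builds a rank-0 (scalar) constant via
// XlaBuilder::c0. Note no bf16/f16 variants are generated here; presumably
// low-precision constants go through `constant_array` with the wrapper types.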
constant!(constant_bool, bool);
constant!(constant_i8, i8);
constant!(constant_i16, i16);
constant!(constant_i32, i32);
constant!(constant_i64, i64);
constant!(constant_u8, u8);
constant!(constant_u16, u16);
constant!(constant_u32, u32);
constant!(constant_u64, u64);
constant!(constant_f32, f32);
constant!(constant_f64, f64);
macro_rules! astype {
($name:ident, $primitive:ident) => {
#[pyfunction]
fn $name(x: XlaOp) -> PyResult<XlaOp> {
let y = x.astype(PrimitiveType::$primitive)?;
Ok(y)
}
};
}
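// e.g. `astype_f16(x)` is shorthand for `x.astype(PrimitiveType::F16)`; the
// generic `astype` pyfunction further below covers the case where the target
// type is only known at runtime.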
astype!(astype_bool, Pred);
astype!(astype_i8, S8);
astype!(astype_i16, S16);
astype!(astype_i32, S32);
astype!(astype_i64, S64);
astype!(astype_u8, U8);
astype!(astype_u16, U16);
astype!(astype_u32, U32);
astype!(astype_u64, U64);
astype!(astype_bf16, Bf16);
astype!(astype_f16, F16);
astype!(astype_f32, F32);
astype!(astype_f64, F64);
#[pyfunction]
fn cpu_client() -> PyResult<PjRtClient> {
let client = PjRtClient::cpu()?;
Ok(client)
}
#[pyfunction]
fn gpu_client(memory_fraction: f64, preallocate: bool) -> PyResult<PjRtClient> {
let client = PjRtClient::gpu(memory_fraction, preallocate)?;
Ok(client)
}
#[pyfunction]
fn xla_builder(name: &str) -> PyResult<XlaBuilder> {
let builder = XlaBuilder::new(name);
Ok(builder)
}
#[pyfunction]
fn build(op: XlaOp) -> PyResult<XlaComputation> {
let computation = op.build()?;
Ok(computation)
}
#[pyfunction]
fn get_hlo_proto(comp: &XlaComputation) -> PyResult<HloModuleProto> {
let hlo_proto = comp.proto();
Ok(hlo_proto)
}
#[pyfunction]
fn hlo_module_from_proto(proto: &HloModuleProto) -> PyResult<HloModule> {
let hlo_module = HloModule::from_proto(proto)?;
Ok(hlo_module)
}
#[pyfunction]
fn hlo_module_to_string(module: &HloModule) -> PyResult<String> {
let module_str = module.to_string()?;
Ok(module_str)
}
#[pyfunction]
fn get_hlo_module_entry_computation(module: &HloModule) -> PyResult<HloComputation> {
let hlo_comp = module.get_entry_computation()?;
Ok(hlo_comp)
}
#[pyfunction]
fn computation_count(module: &HloModule) -> PyResult<i64> {
let comp_count = module.computation_count()?;
Ok(comp_count)
}
#[pyfunction]
fn instruction_count(module: &HloModule) -> PyResult<i64> {
let instruct_count = module.instruction_count()?;
Ok(instruct_count)
}
#[pyfunction]
fn compile(client: PjRtClient, computation: &XlaComputation) -> PyResult<PjRtLoadedExecutable> {
let executable = client.compile(computation)?;
Ok(executable)
}
#[pyfunction]
fn execute(executable: &PjRtLoadedExecutable, args: Vec<Literal>) -> PyResult<PjRtBuffer> {
let buffer = executable.execute::<Literal>(args.as_slice())?[0].remove(0);
Ok(buffer)
}
#[pyfunction]
fn to_literal(buffer: &PjRtBuffer) -> PyResult<Literal> {
let literal = buffer.to_literal_sync()?;
Ok(literal)
}
#[pyfunction]
fn add(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.add_(rhs)?;
Ok(y)
}
#[pyfunction]
fn sub(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.sub_(rhs)?;
Ok(y)
}
#[pyfunction]
fn mul(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.mul_(rhs)?;
Ok(y)
}
#[pyfunction]
fn div(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.div_(rhs)?;
Ok(y)
}
#[pyfunction]
fn rem(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.rem_(rhs)?;
Ok(y)
}
#[pyfunction]
fn pow(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.pow(rhs)?;
Ok(y)
}
#[pyfunction]
fn max(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.max(rhs)?;
Ok(y)
}
#[pyfunction]
fn min(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.min(rhs)?;
Ok(y)
}
#[pyfunction]
fn _and(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.and(rhs)?;
Ok(y)
}
#[pyfunction]
fn _or(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.or(rhs)?;
Ok(y)
}
#[pyfunction]
fn xor(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.xor(rhs)?;
Ok(y)
}
#[pyfunction]
fn eq(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.eq(rhs)?;
Ok(y)
}
#[pyfunction]
fn ne(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.ne(rhs)?;
Ok(y)
}
#[pyfunction]
fn ge(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.ge(rhs)?;
Ok(y)
}
#[pyfunction]
fn gt(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.gt(rhs)?;
Ok(y)
}
#[pyfunction]
fn le(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.le(rhs)?;
Ok(y)
}
#[pyfunction]
fn lt(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.lt(rhs)?;
Ok(y)
}
#[pyfunction]
fn lshift(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.lshift(rhs)?;
Ok(y)
}
#[pyfunction]
fn rshift(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.rshift_arith(rhs)?;
Ok(y)
}
#[pyfunction]
fn atan2(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.atan2(rhs)?;
Ok(y)
}
#[pyfunction]
fn dot(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.dot(rhs)?;
Ok(y)
}
#[pyfunction]
fn matmul(lhs: XlaOp, rhs: &XlaOp) -> PyResult<XlaOp> {
let y = lhs.matmul(rhs)?;
Ok(y)
}
#[pyfunction]
fn population_count(x: XlaOp) -> PyResult<XlaOp> {
let y = x.population_count()?;
Ok(y)
}
#[pyfunction]
fn _not(x: XlaOp) -> PyResult<XlaOp> {
let y = x.not()?;
Ok(y)
}
#[pyfunction]
fn neg(x: XlaOp) -> PyResult<XlaOp> {
let y = x.neg()?;
Ok(y)
}
#[pyfunction]
fn abs(x: XlaOp) -> PyResult<XlaOp> {
let y = x.abs()?;
Ok(y)
}
#[pyfunction]
fn floor(x: XlaOp) -> PyResult<XlaOp> {
let y = x.floor()?;
Ok(y)
}
#[pyfunction]
fn ceil(x: XlaOp) -> PyResult<XlaOp> {
let y = x.ceil()?;
Ok(y)
}
#[pyfunction]
fn round(x: XlaOp) -> PyResult<XlaOp> {
let y = x.round()?;
Ok(y)
}
#[pyfunction]
fn round_nearest_even(x: XlaOp) -> PyResult<XlaOp> {
let y = x.round_nearest_even()?;
Ok(y)
}
#[pyfunction]
fn exp(x: XlaOp) -> PyResult<XlaOp> {
let y = x.exp()?;
Ok(y)
}
#[pyfunction]
fn expm1(x: XlaOp) -> PyResult<XlaOp> {
let y = x.expm1()?;
Ok(y)
}
#[pyfunction]
fn log(x: XlaOp) -> PyResult<XlaOp> {
let y = x.log()?;
Ok(y)
}
#[pyfunction]
fn log1p(x: XlaOp) -> PyResult<XlaOp> {
let y = x.log1p()?;
Ok(y)
}
#[pyfunction]
fn logistic(x: XlaOp) -> PyResult<XlaOp> {
let y = x.logistic()?;
Ok(y)
}
#[pyfunction]
fn sign(x: XlaOp) -> PyResult<XlaOp> {
let y = x.sign()?;
Ok(y)
}
#[pyfunction]
fn clz(x: XlaOp) -> PyResult<XlaOp> {
let y = x.clz()?;
Ok(y)
}
#[pyfunction]
fn sin(x: XlaOp) -> PyResult<XlaOp> {
let y = x.sin()?;
Ok(y)
}
#[pyfunction]
fn cos(x: XlaOp) -> PyResult<XlaOp> {
let y = x.cos()?;
Ok(y)
}
#[pyfunction]
fn tanh(x: XlaOp) -> PyResult<XlaOp> {
let y = x.tanh()?;
Ok(y)
}
#[pyfunction]
fn real(x: XlaOp) -> PyResult<XlaOp> {
let y = x.real()?;
Ok(y)
}
#[pyfunction]
fn imag(x: XlaOp) -> PyResult<XlaOp> {
let y = x.imag()?;
Ok(y)
}
#[pyfunction]
fn conj(x: XlaOp) -> PyResult<XlaOp> {
let y = x.conj()?;
Ok(y)
}
#[pyfunction]
fn square(x: XlaOp) -> PyResult<XlaOp> {
let y = x.square()?;
Ok(y)
}
#[pyfunction]
fn sqrt(x: XlaOp) -> PyResult<XlaOp> {
let y = x.sqrt()?;
Ok(y)
}
#[pyfunction]
fn rsqrt(x: XlaOp) -> PyResult<XlaOp> {
let y = x.rsqrt()?;
Ok(y)
}
#[pyfunction]
fn cbrt(x: XlaOp) -> PyResult<XlaOp> {
let y = x.cbrt()?;
Ok(y)
}
#[pyfunction]
fn upper_triangle(x: XlaOp) -> PyResult<XlaOp> {
let y = x.upper_triangle()?;
Ok(y)
}
#[pyfunction]
fn lower_triangle(x: XlaOp) -> PyResult<XlaOp> {
let y = x.lower_triangle()?;
Ok(y)
}
#[pyfunction]
fn erf(x: XlaOp) -> PyResult<XlaOp> {
let y = x.erf()?;
Ok(y)
}
#[pyfunction]
fn is_finite(x: XlaOp) -> PyResult<XlaOp> {
let y = x.is_finite()?;
Ok(y)
}
#[pyfunction]
fn zeros_like(x: XlaOp) -> PyResult<XlaOp> {
let y = x.zeros_like()?;
Ok(y)
}
#[pyfunction]
fn copy(x: XlaOp) -> PyResult<XlaOp> {
let y = x.copy()?;
Ok(y)
}
#[pyfunction]
fn sigmoid(x: XlaOp) -> PyResult<XlaOp> {
let y = x.sigmoid()?;
Ok(y)
}
#[pyfunction]
fn silu(x: XlaOp) -> PyResult<XlaOp> {
let y = x.silu()?;
Ok(y)
}
#[pyfunction]
fn relu(x: XlaOp) -> PyResult<XlaOp> {
let y = x.relu()?;
Ok(y)
}
#[pyfunction]
fn gelu(x: XlaOp) -> PyResult<XlaOp> {
let y = x.gelu()?;
Ok(y)
}
#[pyfunction]
fn gelu_approx(x: XlaOp) -> PyResult<XlaOp> {
let y = x.gelu_approx()?;
Ok(y)
}
#[pyfunction]
fn einsum1(x: XlaOp, config: &str) -> PyResult<XlaOp> {
let y = x.einsum1(config)?;
Ok(y)
}
#[pyfunction]
fn einsum2(x: XlaOp, rhs: &XlaOp, config: &str) -> PyResult<XlaOp> {
let y = x.einsum2(rhs, config)?;
Ok(y)
}
#[pyfunction]
fn reshape(x: XlaOp, dims: Vec<i64>) -> PyResult<XlaOp> {
let dims = dims.as_slice();
let y = x.reshape(dims)?;
Ok(y)
}
#[pyfunction]
fn dynamic_reshape(
x: XlaOp,
dim_sizes: Vec<XlaOp>,
new_size_bounds: Vec<i64>,
dims_are_dynamic: Vec<bool>
) -> PyResult<XlaOp> {
let dim_sizes = dim_sizes.as_slice();
let new_size_bounds = new_size_bounds.as_slice();
let y = x.dynamic_reshape(dim_sizes, new_size_bounds, dims_are_dynamic)?;
Ok(y)
}
#[pyfunction]
fn broadcast(x: XlaOp, dims: Vec<i64>) -> PyResult<XlaOp> {
let dims = dims.as_slice();
let y = x.broadcast(dims)?;
Ok(y)
}
#[pyfunction]
fn broadcast_in_dim(x: XlaOp, out_dims: Vec<i64>, broadcast_dims: Vec<i64>) -> PyResult<XlaOp> {
let out_dims = out_dims.as_slice();
let broadcast_dims = broadcast_dims.as_slice();
let y = x.broadcast_in_dim(out_dims, broadcast_dims)?;
Ok(y)
}
#[pyfunction]
fn collapse(x: XlaOp, dims: Vec<i64>) -> PyResult<XlaOp> {
let dims = dims.as_slice();
let y = x.collapse(dims)?;
Ok(y)
}
#[pyfunction]
fn transpose(x: XlaOp, index_perm: Vec<i64>) -> PyResult<XlaOp> {
let index_perm = index_perm.as_slice();
let y = x.transpose(index_perm)?;
Ok(y)
}
#[pyfunction]
fn swap_dims(x: XlaOp, index1: i64, index2: i64) -> PyResult<XlaOp> {
let y = x.swap_dims(index1, index2)?;
Ok(y)
}
#[pyfunction]
fn pad(x: XlaOp, padding_value: &XlaOp, padding_config:Vec<(i64, i64, i64)> ) -> PyResult<XlaOp> {
let y = x.pad(padding_value, padding_config)?;
Ok(y)
}
#[pyfunction]
fn pad_in_dim(x: XlaOp, padding_value: &XlaOp, dimno: i64, pad_low: i64, pad_high: i64) -> PyResult<XlaOp> {
    let y = x.pad_in_dim(padding_value, dimno, pad_low, pad_high)?;
Ok(y)
}
#[pyfunction]
fn slice(x: XlaOp, start_indices: Vec<i64>, limit_indices: Vec<i64>, strides: Vec<i64>) -> PyResult<XlaOp> {
let start_indices = start_indices.as_slice();
let limit_indices = limit_indices.as_slice();
let strides = strides.as_slice();
let y = x.slice(start_indices, limit_indices, strides)?;
Ok(y)
}
#[pyfunction]
fn slice_in_dim(x: XlaOp, start_index: i64, stop_index: i64, stride: i64, dim: i64) -> PyResult<XlaOp> {
let y = x.slice_in_dim(start_index, stop_index, stride, dim)?;
Ok(y)
}
#[pyfunction]
fn dynamic_slice(x: XlaOp, start_indices: Vec<XlaOp>, slice_indices: Vec<i64>) -> PyResult<XlaOp> {
let start_indices = start_indices.as_slice();
let slice_indices = slice_indices.as_slice();
let y = x.dynamic_slice(start_indices, slice_indices)?;
Ok(y)
}
#[pyfunction]
fn dynamic_update_slice(x: XlaOp, update: &XlaOp, start_indices: Vec<XlaOp>) -> PyResult<XlaOp> {
let start_indices = start_indices.as_slice();
let y = x.dynamic_update_slice(update, start_indices)?;
Ok(y)
}
#[pyfunction]
fn at(x: XlaOp, index_in_dim: i64, dim_index: i64) -> PyResult<XlaOp> {
let y = x.at(index_in_dim, dim_index)?;
Ok(y)
}
#[pyfunction]
fn squeeze(x: XlaOp, index: i64) -> PyResult<XlaOp> {
let y = x.squeeze(index)?;
Ok(y)
}
#[pyfunction]
fn clamp(x: XlaOp, min: &XlaOp, max: &XlaOp) -> PyResult<XlaOp> {
let y = x.clamp(min, max)?;
Ok(y)
}
#[pyfunction]
fn concat(x: XlaOp, args: Vec<XlaOp>, dim: i64) -> PyResult<XlaOp> {
let args = args.as_slice();
let y = x.concat_in_dim(args, dim)?;
Ok(y)
}
#[pyfunction]
fn get_tuple_element(x: XlaOp, index: i64) -> PyResult<XlaOp> {
let y = x.get_tuple_element(index)?;
Ok(y)
}
#[pyfunction]
fn rng_uniform(min: &XlaOp, max: &XlaOp, shape: &ArrayShape) -> PyResult<XlaOp> {
let y = XlaOp::rng_uniform(min, max, shape)?;
Ok(y)
}
#[pyfunction]
fn rng_normal(mu: &XlaOp, sigma: &XlaOp, shape: &ArrayShape) -> PyResult<XlaOp> {
let y = XlaOp::rng_normal(mu, sigma, shape)?;
Ok(y)
}
#[pyfunction]
fn astype(x: XlaOp, ty: PrimitiveType) -> PyResult<XlaOp> {
let y = x.astype(ty)?;
Ok(y)
}
#[pyfunction]
fn dimension_size(x: XlaOp, index: i64) -> PyResult<XlaOp> {
let y = x.dimensions_size(index)?;
Ok(y)
}
#[pyfunction]
fn reduce(
x: XlaOp,
init_value: XlaOp,
comp: &XlaComputation,
dims: Vec<i64>,
keep_dims: bool,
) -> PyResult<XlaOp> {
let dims = dims.as_slice();
let y = x.reduce(init_value, comp, dims, keep_dims)?;
Ok(y)
}
#[pyfunction]
fn call(builder: XlaBuilder, computation: &XlaComputation, operands: Vec<XlaOp>) -> PyResult<XlaOp> {
let operands = operands.as_slice();
let y = builder.call(computation, operands)?;
Ok(y)
}
#[pyfunction]
fn map(builder: XlaBuilder,
operands: Vec<XlaOp>,
computation: &XlaComputation,
dims: Vec<i64>,
static_operands: Vec<XlaOp>
) -> PyResult<XlaOp> {
let operands = operands.as_slice();
let dims = dims.as_slice();
let static_operands = static_operands.as_slice();
let y = builder.map(operands, computation, dims, static_operands)?;
Ok(y)
}
#[pyfunction]
fn select(x: XlaOp, on_true: &XlaOp, on_false: &XlaOp) -> PyResult<XlaOp> {
let y = x.select(on_true, on_false)?;
Ok(y)
}
#[pyfunction]
fn while_loop(cond: &XlaComputation, body: &XlaComputation, init: XlaOp) -> PyResult<XlaOp> {
let y = XlaOp::while_(cond, body, init)?;
Ok(y)
}
#[pyfunction]
fn conditional(
x: XlaOp,
true_op: XlaOp,
true_comp: &XlaComputation,
false_op: XlaOp,
false_comp: &XlaComputation,
) -> PyResult<XlaOp> {
    let y = x.conditional(true_op, true_comp, false_op, false_comp)?;
Ok(y)
}
#[pyfunction]
fn conv(
x: XlaOp,
rhs: &XlaOp,
window_strides: Vec<i64>,
padding: &str,
feature_group_count: i64,
batch_group_count: i64,
) -> PyResult<XlaOp> {
let window_strides = window_strides.as_slice();
let y = x.conv(rhs, window_strides, padding, feature_group_count, batch_group_count)?;
Ok(y)
}
#[pyfunction]
fn conv_general_dilated(
x: XlaOp,
rhs: &XlaOp,
window_strides: Vec<i64>,
padding: Vec<(i64, i64)>,
lhs_dilations: Vec<i64>,
rhs_dilations: Vec<i64>,
input_batch_dim: i64,
input_feature_dim: i64,
input_spatial_dims: Vec<i64>,
output_batch_dim: i64,
output_feature_dim: i64,
output_spatial_dims: Vec<i64>,
kernel_input_feature_dim: i64,
kernel_output_feature_dim: i64,
kernel_spatial_dims: Vec<i64>,
feature_group_count: i64,
batch_group_count: i64
) -> PyResult<XlaOp> {
let window_strides = window_strides.as_slice();
let padding = padding.as_slice();
let lhs_dilations = lhs_dilations.as_slice();
let rhs_dilations = rhs_dilations.as_slice();
let input_spatial_dims = input_spatial_dims.as_slice();
let output_spatial_dims = output_spatial_dims.as_slice();
let kernel_spatial_dims = kernel_spatial_dims.as_slice();
let y = x.conv_general_dilated(
rhs,
window_strides,
padding,
lhs_dilations,
rhs_dilations,
&input_batch_dim,
&input_feature_dim,
input_spatial_dims,
&output_batch_dim,
&output_feature_dim,
output_spatial_dims,
&kernel_input_feature_dim,
&kernel_output_feature_dim,
kernel_spatial_dims,
feature_group_count,
batch_group_count,
)?;
Ok(y)
}
#[pyfunction]
fn batch_norm_inference(
x: XlaOp,
scale: &XlaOp,
offset: &XlaOp,
mean: &XlaOp,
variance: &XlaOp,
epsilon: f32,
feature_index: i64,
) -> PyResult<XlaOp> {
let y = x.batch_norm_inference(
scale, offset, mean, variance, epsilon, feature_index
)?;
Ok(y)
}
#[pyfunction]
fn dot_general(
x: XlaOp,
rhs: &XlaOp,
lhs_contracting_dims: Vec<i64>,
rhs_contracting_dims: Vec<i64>,
lhs_batch_dims: Vec<i64>,
rhs_batch_dims: Vec<i64>,
) -> PyResult<XlaOp> {
let lhs_contracting_dims = lhs_contracting_dims.as_slice();
let rhs_contracting_dims = rhs_contracting_dims.as_slice();
let lhs_batch_dims = lhs_batch_dims.as_slice();
let rhs_batch_dims = rhs_batch_dims.as_slice();
let y = x.dot_general(
rhs,
lhs_contracting_dims,
rhs_contracting_dims,
lhs_batch_dims,
rhs_batch_dims
)?;
Ok(y)
}
#[pyfunction]
fn gather(
x: XlaOp,
start_indices: &XlaOp,
offset_dims: Vec<i64>,
collapsed_slice_dims: Vec<i64>,
start_index_map: Vec<i64>,
slice_sizes: Vec<i64>,
set_index_vector_dim: Option<i64>,
) -> PyResult<XlaOp> {
let offset_dims = offset_dims.as_slice();
let collapsed_slice_dims = collapsed_slice_dims.as_slice();
let start_index_map = start_index_map.as_slice();
let slice_sizes = slice_sizes.as_slice();
let y = x.gather(
start_indices,
offset_dims,
collapsed_slice_dims,
start_index_map,
set_index_vector_dim,
slice_sizes,
)?;
Ok(y)
}
#[pyfunction]
fn scatter(
operands: Vec<XlaOp>,
scatter_indices: &XlaOp,
updates: Vec<XlaOp>,
update_computation: &XlaComputation,
update_window_dims: Vec<i64>,
inserted_window_dims: Vec<i64>,
scatter_dims_to_operand_dims: Vec<i64>,
index_vector_dim: i64
) -> PyResult<XlaOp> {
let operands = operands.as_slice();
let updates = updates.as_slice();
let update_window_dims = update_window_dims.as_slice();
let inserted_window_dims = inserted_window_dims.as_slice();
let scatter_dims_to_operand_dims = scatter_dims_to_operand_dims.as_slice();
let y = XlaOp::scatter(
operands,
scatter_indices,
updates,
update_computation,
update_window_dims,
inserted_window_dims,
scatter_dims_to_operand_dims,
index_vector_dim
)?;
Ok(y)
}
#[pyfunction]
fn take(x: XlaOp, indices: &XlaOp, axis: i64) -> PyResult<XlaOp> {
let y = x.take(indices, axis)?;
Ok(y)
}
#[pyfunction]
fn reduce_sum(x: XlaOp, dims: Vec<i64>, keep_dims: bool) -> PyResult<XlaOp> {
let dims = dims.as_slice();
let y = x.reduce_sum(dims, keep_dims)?;
Ok(y)
}
#[pyfunction]
fn reduce_mean(x: XlaOp, dims: Vec<i64>, keep_dims: bool) -> PyResult<XlaOp> {
let dims = dims.as_slice();
let y = x.reduce_mean(dims, keep_dims)?;
Ok(y)
}
#[pyfunction]
fn reduce_max(x: XlaOp, dims: Vec<i64>, keep_dims: bool) -> PyResult<XlaOp> {
let dims = dims.as_slice();
let y = x.reduce_max(dims, keep_dims)?;
Ok(y)
}
#[pyfunction]
fn reduce_min(x: XlaOp, dims: Vec<i64>, keep_dims: bool) -> PyResult<XlaOp> {
let dims = dims.as_slice();
let y = x.reduce_min(dims, keep_dims)?;
Ok(y)
}
#[pyfunction]
fn softmax(x: XlaOp, axis: i64) -> PyResult<XlaOp> {
let y = x.softmax(axis)?;
Ok(y)
}
#[pyfunction]
fn layer_norm(x: XlaOp, dims: Vec<i64>, scale: &XlaOp, bias: &XlaOp, eps: f64) -> PyResult<XlaOp> {
let dims = dims.as_slice();
let y = x.layer_norm(dims, scale, bias, eps)?;
Ok(y)
}
#[pyfunction]
fn primitive_type(x: XlaOp) -> PyResult<PrimitiveType> {
let prim_type = x.primitive_type()?;
Ok(prim_type)
}
#[pyfunction]
fn element_type(x: XlaOp) -> PyResult<ElementType> {
let elem_type = PrimitiveType::element_type(x.ty()?)?;
Ok(elem_type)
}
#[pyfunction]
fn dims(x: XlaOp) -> PyResult<Vec<usize>> {
let dims = x.dims()?;
Ok(dims)
}
#[pyfunction]
fn rank(x: XlaOp) -> PyResult<usize> {
let rank = x.rank()?;
Ok(rank)
}
#[pyfunction]
fn shape(x: XlaOp) -> PyResult<Vec<usize>> {
let shape = x.shape()?;
let shape = ArrayShape::try_from(&shape)?;
let shape: Vec<usize> = shape.dims().iter().map(|&x| x as usize).collect();
Ok(shape)
}
#[pyfunction]
fn array_shape(x: XlaOp) -> PyResult<ArrayShape> {
let shape = x.array_shape()?;
Ok(shape)
}
#[pyfunction]
fn create_array_shape(ty: ElementType, dims: Vec<i64>) -> PyResult<ArrayShape> {
let shape = ArrayShape::new_with_type(ty, dims);
Ok(shape)
}
#[pyfunction]
fn last_dim(x: XlaOp) -> PyResult<i64> {
let shape = x.shape()?;
let shape = ArrayShape::try_from(&shape)?;
let last_dim = shape.last_dim().ok_or_else(|| PyErr::new::<exceptions::PyValueError, _>("Shape has no dimensions"))?;
Ok(last_dim)
}
#[pyfunction]
fn tuple(builder: XlaBuilder, args: Vec<XlaOp>) -> PyResult<XlaOp> {
let y = builder.tuple(&args)?;
Ok(y)
}
#[pyfunction]
fn get_builder(x: XlaOp) -> PyResult<XlaBuilder> {
let b = Rc::new(x.builder().clone());
match Rc::try_unwrap(b) {
Ok(builder) => Ok(builder),
Err(_) => Err(PyErr::new::<exceptions::PyException, _>("Could not unwrap XlaBuilder")),
}
}
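// The Rc round-trip above is just a move-out-by-value trick: the Rc is
// created with a single owner, so try_unwrap should always succeed here.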
#[pymodule]
#[pyo3(name="xlar")]
fn module(_py: Python, m: &PyModule) -> PyResult<()> {
m.add_function(wrap_pyfunction!(xla_builder, m)?)?;
m.add_function(wrap_pyfunction!(constant_array, m)?)?;
m.add_function(wrap_pyfunction!(gather_params, m)?)?;
m.add_function(wrap_pyfunction!(swap_param, m)?)?;
m.add_function(wrap_pyfunction!(new_input, m)?)?;
m.add_function(wrap_pyfunction!(create_bf16_array, m)?)?;
m.add_function(wrap_pyfunction!(create_f16_array, m)?)?;
m.add_function(wrap_pyfunction!(to_tensor, m)?)?;
m.add_function(wrap_pyfunction!(to_numpy, m)?)?;
m.add_function(wrap_pyfunction!(to_tuple, m)?)?;
m.add_function(wrap_pyfunction!(param_pred, m)?)?;
m.add_function(wrap_pyfunction!(param_i8, m)?)?;
m.add_function(wrap_pyfunction!(param_i16, m)?)?;
m.add_function(wrap_pyfunction!(param_i32, m)?)?;
m.add_function(wrap_pyfunction!(param_i64, m)?)?;
m.add_function(wrap_pyfunction!(param_u8, m)?)?;
m.add_function(wrap_pyfunction!(param_u16, m)?)?;
m.add_function(wrap_pyfunction!(param_u32, m)?)?;
m.add_function(wrap_pyfunction!(param_u64, m)?)?;
m.add_function(wrap_pyfunction!(param_bf16, m)?)?;
m.add_function(wrap_pyfunction!(param_f16, m)?)?;
m.add_function(wrap_pyfunction!(param_f32, m)?)?;
m.add_function(wrap_pyfunction!(param_f64, m)?)?;
m.add_function(wrap_pyfunction!(cpu_client, m)?)?;
m.add_function(wrap_pyfunction!(gpu_client, m)?)?;
m.add_function(wrap_pyfunction!(build, m)?)?;
m.add_function(wrap_pyfunction!(get_hlo_proto, m)?)?;
m.add_function(wrap_pyfunction!(hlo_module_from_proto, m)?)?;
m.add_function(wrap_pyfunction!(hlo_module_to_string, m)?)?;
m.add_function(wrap_pyfunction!(get_hlo_module_entry_computation, m)?)?;
m.add_function(wrap_pyfunction!(computation_count, m)?)?;
m.add_function(wrap_pyfunction!(instruction_count, m)?)?;
m.add_function(wrap_pyfunction!(compile, m)?)?;
m.add_function(wrap_pyfunction!(execute, m)?)?;
m.add_function(wrap_pyfunction!(to_literal, m)?)?;
m.add_function(wrap_pyfunction!(add, m)?)?;
m.add_function(wrap_pyfunction!(sub, m)?)?;
m.add_function(wrap_pyfunction!(mul, m)?)?;
m.add_function(wrap_pyfunction!(div, m)?)?;
m.add_function(wrap_pyfunction!(rem, m)?)?;
m.add_function(wrap_pyfunction!(pow, m)?)?;
m.add_function(wrap_pyfunction!(max, m)?)?;
m.add_function(wrap_pyfunction!(min, m)?)?;
m.add_function(wrap_pyfunction!(_and, m)?)?;
m.add_function(wrap_pyfunction!(_or, m)?)?;
m.add_function(wrap_pyfunction!(xor, m)?)?;
m.add_function(wrap_pyfunction!(eq, m)?)?;
m.add_function(wrap_pyfunction!(ne, m)?)?;
m.add_function(wrap_pyfunction!(ge, m)?)?;
m.add_function(wrap_pyfunction!(gt, m)?)?;
m.add_function(wrap_pyfunction!(le, m)?)?;
m.add_function(wrap_pyfunction!(lt, m)?)?;
m.add_function(wrap_pyfunction!(lshift, m)?)?;
m.add_function(wrap_pyfunction!(rshift, m)?)?;
m.add_function(wrap_pyfunction!(atan2, m)?)?;
m.add_function(wrap_pyfunction!(dot, m)?)?;
m.add_function(wrap_pyfunction!(matmul, m)?)?;
m.add_function(wrap_pyfunction!(population_count, m)?)?;
m.add_function(wrap_pyfunction!(_not, m)?)?;
m.add_function(wrap_pyfunction!(neg, m)?)?;
m.add_function(wrap_pyfunction!(abs, m)?)?;
m.add_function(wrap_pyfunction!(floor, m)?)?;
m.add_function(wrap_pyfunction!(ceil, m)?)?;
m.add_function(wrap_pyfunction!(round, m)?)?;
m.add_function(wrap_pyfunction!(round_nearest_even, m)?)?;
m.add_function(wrap_pyfunction!(exp, m)?)?;
m.add_function(wrap_pyfunction!(expm1, m)?)?;
m.add_function(wrap_pyfunction!(log, m)?)?;
m.add_function(wrap_pyfunction!(log1p, m)?)?;
m.add_function(wrap_pyfunction!(logistic, m)?)?;
m.add_function(wrap_pyfunction!(sign, m)?)?;
m.add_function(wrap_pyfunction!(clz, m)?)?;
m.add_function(wrap_pyfunction!(sin, m)?)?;
m.add_function(wrap_pyfunction!(cos, m)?)?;
m.add_function(wrap_pyfunction!(tanh, m)?)?;
m.add_function(wrap_pyfunction!(real, m)?)?;
m.add_function(wrap_pyfunction!(imag, m)?)?;
m.add_function(wrap_pyfunction!(conj, m)?)?;
m.add_function(wrap_pyfunction!(square, m)?)?;
m.add_function(wrap_pyfunction!(sqrt, m)?)?;
m.add_function(wrap_pyfunction!(rsqrt, m)?)?;
m.add_function(wrap_pyfunction!(cbrt, m)?)?;
m.add_function(wrap_pyfunction!(upper_triangle, m)?)?;
m.add_function(wrap_pyfunction!(lower_triangle, m)?)?;
m.add_function(wrap_pyfunction!(erf, m)?)?;
m.add_function(wrap_pyfunction!(is_finite, m)?)?;
m.add_function(wrap_pyfunction!(zeros_like, m)?)?;
m.add_function(wrap_pyfunction!(copy, m)?)?;
m.add_function(wrap_pyfunction!(sigmoid, m)?)?;
m.add_function(wrap_pyfunction!(silu, m)?)?;
m.add_function(wrap_pyfunction!(relu, m)?)?;
m.add_function(wrap_pyfunction!(gelu, m)?)?;
m.add_function(wrap_pyfunction!(gelu_approx, m)?)?;
m.add_function(wrap_pyfunction!(einsum1, m)?)?;
m.add_function(wrap_pyfunction!(einsum2, m)?)?;
m.add_function(wrap_pyfunction!(reshape, m)?)?;
m.add_function(wrap_pyfunction!(dynamic_reshape, m)?)?;
m.add_function(wrap_pyfunction!(broadcast, m)?)?;
m.add_function(wrap_pyfunction!(broadcast_in_dim, m)?)?;
m.add_function(wrap_pyfunction!(collapse, m)?)?;
m.add_function(wrap_pyfunction!(transpose, m)?)?;
m.add_function(wrap_pyfunction!(swap_dims, m)?)?;
m.add_function(wrap_pyfunction!(pad, m)?)?;
m.add_function(wrap_pyfunction!(pad_in_dim, m)?)?;
m.add_function(wrap_pyfunction!(slice, m)?)?;
m.add_function(wrap_pyfunction!(slice_in_dim, m)?)?;
m.add_function(wrap_pyfunction!(dynamic_slice, m)?)?;
m.add_function(wrap_pyfunction!(dynamic_update_slice, m)?)?;
m.add_function(wrap_pyfunction!(at, m)?)?;
m.add_function(wrap_pyfunction!(squeeze, m)?)?;
m.add_function(wrap_pyfunction!(clamp, m)?)?;
m.add_function(wrap_pyfunction!(concat, m)?)?;
m.add_function(wrap_pyfunction!(get_tuple_element, m)?)?;
m.add_function(wrap_pyfunction!(rng_uniform, m)?)?;
m.add_function(wrap_pyfunction!(rng_normal, m)?)?;
m.add_function(wrap_pyfunction!(astype, m)?)?;
m.add_function(wrap_pyfunction!(dimension_size, m)?)?;
m.add_function(wrap_pyfunction!(reduce, m)?)?;
m.add_function(wrap_pyfunction!(call, m)?)?;
m.add_function(wrap_pyfunction!(map, m)?)?;
m.add_function(wrap_pyfunction!(select, m)?)?;
m.add_function(wrap_pyfunction!(while_loop, m)?)?;
m.add_function(wrap_pyfunction!(conditional, m)?)?;
m.add_function(wrap_pyfunction!(conv, m)?)?;
m.add_function(wrap_pyfunction!(conv_general_dilated, m)?)?;
m.add_function(wrap_pyfunction!(batch_norm_inference, m)?)?;
m.add_function(wrap_pyfunction!(dot_general, m)?)?;
m.add_function(wrap_pyfunction!(gather, m)?)?;
m.add_function(wrap_pyfunction!(scatter, m)?)?;
m.add_function(wrap_pyfunction!(take, m)?)?;
m.add_function(wrap_pyfunction!(reduce_sum, m)?)?;
m.add_function(wrap_pyfunction!(reduce_mean, m)?)?;
m.add_function(wrap_pyfunction!(reduce_max, m)?)?;
m.add_function(wrap_pyfunction!(reduce_min, m)?)?;
m.add_function(wrap_pyfunction!(softmax, m)?)?;
m.add_function(wrap_pyfunction!(layer_norm, m)?)?;
m.add_function(wrap_pyfunction!(primitive_type, m)?)?;
m.add_function(wrap_pyfunction!(element_type, m)?)?;
m.add_function(wrap_pyfunction!(rank, m)?)?;
m.add_function(wrap_pyfunction!(shape, m)?)?;
m.add_function(wrap_pyfunction!(array_shape, m)?)?;
m.add_function(wrap_pyfunction!(dims, m)?)?;
m.add_function(wrap_pyfunction!(last_dim, m)?)?;
m.add_function(wrap_pyfunction!(tuple, m)?)?;
m.add_function(wrap_pyfunction!(get_builder, m)?)?;
    m.add_function(wrap_pyfunction!(create_array_shape, m)?)?;
m.add_function(wrap_pyfunction!(constant_bool, m)?)?;
m.add_function(wrap_pyfunction!(constant_i8, m)?)?;
m.add_function(wrap_pyfunction!(constant_i16, m)?)?;
m.add_function(wrap_pyfunction!(constant_i32, m)?)?;
m.add_function(wrap_pyfunction!(constant_i64, m)?)?;
m.add_function(wrap_pyfunction!(constant_u8, m)?)?;
m.add_function(wrap_pyfunction!(constant_u16, m)?)?;
m.add_function(wrap_pyfunction!(constant_u32, m)?)?;
m.add_function(wrap_pyfunction!(constant_u64, m)?)?;
m.add_function(wrap_pyfunction!(constant_f32, m)?)?;
m.add_function(wrap_pyfunction!(constant_f64, m)?)?;
m.add_function(wrap_pyfunction!(astype_bool, m)?)?;
m.add_function(wrap_pyfunction!(astype_i8, m)?)?;
m.add_function(wrap_pyfunction!(astype_i16, m)?)?;
m.add_function(wrap_pyfunction!(astype_i32, m)?)?;
m.add_function(wrap_pyfunction!(astype_i64, m)?)?;
m.add_function(wrap_pyfunction!(astype_u8, m)?)?;
m.add_function(wrap_pyfunction!(astype_u16, m)?)?;
m.add_function(wrap_pyfunction!(astype_u32, m)?)?;
m.add_function(wrap_pyfunction!(astype_u64, m)?)?;
m.add_function(wrap_pyfunction!(astype_bf16, m)?)?;
m.add_function(wrap_pyfunction!(astype_f16, m)?)?;
m.add_function(wrap_pyfunction!(astype_f32, m)?)?;
m.add_function(wrap_pyfunction!(astype_f64, m)?)?;
Ok(())
}
| ivy/ivy/engines/XLA/rust_api/src/lib.rs/0 | {
"file_path": "ivy/ivy/engines/XLA/rust_api/src/lib.rs",
"repo_id": "ivy",
"token_count": 27233
} | 12 |
# global
import jax
from typing import Callable
# local
import ivy
from ivy.func_wrapper import inputs_to_native_arrays
def bind_custom_gradient_function(func, custom_grad_fn):
def custom_forward(x):
ret = func(x)
return ivy.to_native((ret, (x, ret)), nested=True, include_derived=True)
def custom_backward(*args):
return (custom_grad_fn(*args),)
func = jax.custom_vjp(func)
func.defvjp(custom_forward, custom_backward)
return inputs_to_native_arrays(func)
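# Hedged usage sketch (names are illustrative, not part of the API): the
# custom gradient receives the saved residuals (x, ret) and the upstream
# cotangent, and returns the cotangent w.r.t. x.
#
#     def fwd(x):
#         return ivy.exp(x)
#
#     def grad(res, upstream):
#         x, ret = res
#         return upstream * ret        # d/dx exp(x) == exp(x)
#
#     fn = bind_custom_gradient_function(fwd, grad)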
def vjp(func: Callable, *primals):
def grad_fn(*x_in):
return ivy.to_native(
func(*ivy.to_ivy(x_in, nested=True)), nested=True, include_derived=True
)
primals_out, _vjpfun = ivy.outputs_to_ivy_arrays(jax.vjp)(
grad_fn, *ivy.to_native(primals, nested=True)
)
def vjpfun(x_in):
return ivy.to_ivy(
_vjpfun(ivy.to_native(x_in, nested=True)), nested=True, include_derived=True
)
return (primals_out, vjpfun)
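# Example (assumed values): for f(x) = x ** 2 at x = [3.],
#     y, f_vjp = vjp(lambda x: x ** 2, ivy.array([3.0]))
#     (dx,) = f_vjp(ivy.array([1.0]))   # dx == 2 * x == [6.0]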
def jvp(func: Callable, primals, tangents):
def grad_fn(*x_in):
return ivy.to_native(
func(*ivy.to_ivy(x_in, nested=True)), nested=True, include_derived=True
)
primals_out, tangents_out = ivy.outputs_to_ivy_arrays(jax.jvp)(
grad_fn,
ivy.to_native(primals, nested=True),
ivy.to_native(tangents, nested=True),
)
return (primals_out, tangents_out)
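# Example (assumed values): forward-mode counterpart of vjp above,
#     y, dy = jvp(lambda x: x ** 2, (ivy.array([3.0]),), (ivy.array([1.0]),))
#     # y == [9.0], dy == 2 * x * v == [6.0]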
| ivy/ivy/functional/backends/jax/experimental/gradients.py/0 | {
"file_path": "ivy/ivy/functional/backends/jax/experimental/gradients.py",
"repo_id": "ivy",
"token_count": 656
} | 13 |
# global
from collections import namedtuple
from typing import Union, Optional, Tuple, Literal, Sequence, NamedTuple
import jax.numpy as jnp
# local
import ivy
from ivy import inf
from ivy.func_wrapper import with_unsupported_dtypes
from ivy.functional.backends.jax import JaxArray
from . import backend_version
from ivy import promote_types_of_inputs
# Array API Standard #
# -------------------#
@with_unsupported_dtypes(
{"0.4.24 and below": ("bfloat16", "float16", "complex")},
backend_version,
)
def cholesky(
x: JaxArray, /, *, upper: bool = False, out: Optional[JaxArray] = None
) -> JaxArray:
if not upper:
ret = jnp.linalg.cholesky(x)
else:
axes = list(range(len(x.shape) - 2)) + [len(x.shape) - 1, len(x.shape) - 2]
ret = jnp.transpose(jnp.linalg.cholesky(jnp.transpose(x, axes=axes)), axes=axes)
return ret
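# Note: jnp.linalg.cholesky always returns the lower-triangular factor L with
# x == L @ L.T, so the upper=True branch above obtains U == L.T by transposing
# the trailing two axes around the factorisation (valid for symmetric input,
# as complex dtypes are excluded by the decorator).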
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def cross(
x1: JaxArray,
x2: JaxArray,
/,
*,
axisa: int = -1,
axisb: int = -1,
axisc: int = -1,
axis: Optional[int] = None,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = promote_types_of_inputs(x1, x2)
return jnp.cross(a=x1, b=x2, axisa=axisa, axisb=axisb, axisc=axisc, axis=axis)
@with_unsupported_dtypes(
{"0.4.24 and below": ("bfloat16", "float16", "complex")},
backend_version,
)
def det(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.linalg.det(x)
@with_unsupported_dtypes({"0.4.24 and below": ("float16", "bfloat16")}, backend_version)
def eig(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> Tuple[JaxArray]:
result_tuple = NamedTuple(
"eig", [("eigenvalues", JaxArray), ("eigenvectors", JaxArray)]
)
eigenvalues, eigenvectors = jnp.linalg.eig(x)
return result_tuple(eigenvalues, eigenvectors)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def diagonal(
x: JaxArray,
/,
*,
offset: int = 0,
axis1: int = -2,
axis2: int = -1,
out: Optional[JaxArray] = None,
) -> JaxArray:
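    # For non-boolean, non-integer inputs the branch below additionally
    # preserves the sign of negative zero on the diagonal: positions where
    # 1 / x == -inf (i.e. x == -0.0) are flagged and written back as -0.0,
    # since -0.0 == 0.0 would otherwise make them indistinguishable.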
if x.dtype != bool and not jnp.issubdtype(x.dtype, jnp.integer):
ret = jnp.diagonal(x, offset=offset, axis1=axis1, axis2=axis2)
ret_edited = jnp.diagonal(
x.at[1 / x == -jnp.inf].set(-jnp.inf),
offset=offset,
axis1=axis1,
axis2=axis2,
)
ret_edited = ret_edited.at[ret_edited == -jnp.inf].set(-0.0)
ret = ret.at[ret == ret_edited].set(ret_edited[ret == ret_edited])
else:
ret = jnp.diagonal(x, offset=offset, axis1=axis1, axis2=axis2)
return ret
def tensorsolve(
x1: JaxArray,
x2: JaxArray,
/,
*,
axes: Optional[Union[int, Tuple[Sequence[int], Sequence[int]]]] = None,
out: Optional[JaxArray] = None,
) -> JaxArray:
return jnp.linalg.tensorsolve(x1, x2, axes)
@with_unsupported_dtypes(
{"0.4.24 and below": ("bfloat16", "float16", "complex")},
backend_version,
)
def eigh(
x: JaxArray, /, *, UPLO: str = "L", out: Optional[JaxArray] = None
) -> Tuple[JaxArray]:
result_tuple = NamedTuple(
"eigh", [("eigenvalues", JaxArray), ("eigenvectors", JaxArray)]
)
eigenvalues, eigenvectors = jnp.linalg.eigh(x, UPLO=UPLO)
return result_tuple(eigenvalues, eigenvectors)
@with_unsupported_dtypes(
{"0.4.24 and below": ("bfloat16", "float16", "complex")},
backend_version,
)
def eigvalsh(
x: JaxArray, /, *, UPLO: str = "L", out: Optional[JaxArray] = None
) -> JaxArray:
return jnp.linalg.eigvalsh(x, UPLO=UPLO)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def inner(x1: JaxArray, x2: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return jnp.inner(x1, x2)
@with_unsupported_dtypes(
{"0.4.24 and below": ("bfloat16", "float16", "complex")},
backend_version,
)
def inv(
x: JaxArray,
/,
*,
adjoint: bool = False,
out: Optional[JaxArray] = None,
) -> JaxArray:
if adjoint:
if x.ndim < 2:
raise ValueError("Input must be at least 2D")
permutation = list(range(x.ndim))
permutation[-2], permutation[-1] = permutation[-1], permutation[-2]
x_adj = jnp.transpose(x, permutation).conj()
return jnp.linalg.inv(x_adj)
return jnp.linalg.inv(x)
@with_unsupported_dtypes(
{"0.4.24 and below": ("bfloat16", "float16", "complex")},
backend_version,
)
def matmul(
x1: JaxArray,
x2: JaxArray,
/,
*,
transpose_a: bool = False,
transpose_b: bool = False,
adjoint_a: bool = False,
adjoint_b: bool = False,
out: Optional[JaxArray] = None,
) -> JaxArray:
if transpose_a:
x1 = jnp.swapaxes(x1, -1, -2)
if transpose_b:
x2 = jnp.swapaxes(x2, -1, -2)
if adjoint_a:
x1 = jnp.swapaxes(jnp.conjugate(x1), -1, -2)
if adjoint_b:
x2 = jnp.swapaxes(jnp.conjugate(x2), -1, -2)
return jnp.matmul(x1, x2)
@with_unsupported_dtypes(
{"0.4.24 and below": ("bfloat16", "float16", "complex")},
backend_version,
)
def matrix_norm(
x: JaxArray,
/,
*,
ord: Union[int, float, Literal[inf, -inf, "fro", "nuc"]] = "fro",
axis: Tuple[int, int] = (-2, -1),
keepdims: bool = False,
dtype: Optional[jnp.dtype] = None,
out: Optional[JaxArray] = None,
) -> JaxArray:
if dtype is not None:
x = ivy.astype(x, dtype)
if hasattr(axis, "__iter__"):
if not isinstance(axis, tuple):
axis = tuple(axis)
else:
if not isinstance(axis, tuple):
axis = (axis,)
return jnp.linalg.norm(x, ord=ord, axis=axis, keepdims=keepdims)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def matrix_power(x: JaxArray, n: int, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.linalg.matrix_power(x, n)
@with_unsupported_dtypes(
{"0.4.24 and below": ("bfloat16", "float16", "complex")},
backend_version,
)
def matrix_rank(
x: JaxArray,
/,
*,
atol: Optional[Union[float, Tuple[float]]] = None,
rtol: Optional[Union[float, Tuple[float]]] = None,
hermitian: Optional[bool] = False,
out: Optional[JaxArray] = None,
) -> JaxArray:
if (x.ndim < 2) or (0 in x.shape):
return jnp.asarray(0, jnp.int64)
# we don't use the native matrix_rank function because the behaviour of the
# tolerance argument is difficult to unify,
# and the native implementation is compositional
svd_values = jnp.linalg.svd(x, hermitian=hermitian, compute_uv=False)
sigma = jnp.max(svd_values, axis=-1, keepdims=False)
atol = (
atol if atol is not None else jnp.finfo(x.dtype).eps * max(x.shape[-2:]) * sigma
)
rtol = rtol if rtol is not None else 0.0
tol = jnp.maximum(atol, rtol * sigma)
# make sure it's broadcastable again with svd_values
tol = jnp.expand_dims(tol, axis=-1)
ret = jnp.count_nonzero(svd_values > tol, axis=-1)
return ret
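# e.g. for a float32 matrix of shape (4, 3) with largest singular value s, the
# default tolerance above is finfo(float32).eps * 4 * s, mirroring numpy's
# matrix_rank default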
@with_unsupported_dtypes(
{"0.4.24 and below": ("int", "float16", "complex")},
backend_version,
)
def matrix_transpose(
x: JaxArray, /, *, conjugate: bool = False, out: Optional[JaxArray] = None
) -> JaxArray:
if conjugate:
x = jnp.conj(x)
return jnp.swapaxes(x, -1, -2)
@with_unsupported_dtypes(
{"0.4.24 and below": ("bfloat16", "float16", "complex")},
backend_version,
)
def outer(
x1: JaxArray,
x2: JaxArray,
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return jnp.outer(x1, x2)
@with_unsupported_dtypes(
{"0.4.24 and below": ("bfloat16", "float16", "complex")},
backend_version,
)
def pinv(
x: JaxArray,
/,
*,
rtol: Optional[Union[float, Tuple[float]]] = None,
out: Optional[JaxArray] = None,
) -> JaxArray:
if rtol is None:
ret = jnp.linalg.pinv(x)
else:
ret = jnp.linalg.pinv(x, rtol)
return ret
@with_unsupported_dtypes(
{"0.4.24 and below": ("bfloat16", "float16", "complex")},
backend_version,
)
def qr(
x: JaxArray, /, *, mode: str = "reduced", out: Optional[JaxArray] = None
) -> Tuple[JaxArray, JaxArray]:
res = namedtuple("qr", ["Q", "R"])
q, r = jnp.linalg.qr(x, mode=mode)
return res(q, r)
@with_unsupported_dtypes(
{"0.4.24 and below": ("bfloat16", "float16", "complex")},
backend_version,
)
def slogdet(
x: JaxArray,
/,
) -> Tuple[JaxArray, JaxArray]:
results = NamedTuple("slogdet", [("sign", JaxArray), ("logabsdet", JaxArray)])
sign, logabsdet = jnp.linalg.slogdet(x)
return results(sign, logabsdet)
@with_unsupported_dtypes(
{"0.4.24 and below": ("bfloat16", "float16", "complex")},
backend_version,
)
def solve(
x1: JaxArray,
x2: JaxArray,
/,
*,
adjoint: bool = False,
out: Optional[JaxArray] = None,
) -> JaxArray:
if adjoint:
x1 = jnp.swapaxes(jnp.conjugate(x1), -1, -2)
expanded_last = False
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if len(x2.shape) <= 1:
if x2.shape[-1] == x1.shape[-1]:
expanded_last = True
x2 = jnp.expand_dims(x2, axis=1)
# if any of the arrays are empty
is_empty_x1 = x1.size == 0
is_empty_x2 = x2.size == 0
if is_empty_x1 or is_empty_x2:
for i in range(len(x1.shape) - 2):
x2 = jnp.expand_dims(x2, axis=0)
output_shape = list(jnp.broadcast_shapes(x1.shape[:-2], x2.shape[:-2]))
output_shape.append(x2.shape[-2])
output_shape.append(x2.shape[-1])
ret = jnp.array([]).reshape(output_shape)
else:
output_shape = tuple(jnp.broadcast_shapes(x1.shape[:-2], x2.shape[:-2]))
x1 = jnp.broadcast_to(x1, output_shape + x1.shape[-2:])
x2 = jnp.broadcast_to(x2, output_shape + x2.shape[-2:])
ret = jnp.linalg.solve(x1, x2)
if expanded_last:
ret = jnp.squeeze(ret, axis=-1)
return jnp.asarray(ret, dtype=x1.dtype)
@with_unsupported_dtypes(
{"0.4.24 and below": ("bfloat16", "float16", "complex")},
backend_version,
)
def svd(
x: JaxArray, /, *, compute_uv: bool = True, full_matrices: bool = True
) -> Union[JaxArray, Tuple[JaxArray, ...]]:
if compute_uv:
results = namedtuple("svd", "U S Vh")
U, D, VT = jnp.linalg.svd(x, full_matrices=full_matrices, compute_uv=compute_uv)
return results(U, D, VT)
else:
results = namedtuple("svd", "S")
D = jnp.linalg.svd(x, full_matrices=full_matrices, compute_uv=compute_uv)
return results(D)
@with_unsupported_dtypes(
{"0.4.24 and below": ("bfloat16", "float16", "complex")},
backend_version,
)
def svdvals(
x: JaxArray, /, *, driver: Optional[str] = None, out: Optional[JaxArray] = None
) -> JaxArray:
# TODO: handling the driver argument
return jnp.linalg.svd(x, compute_uv=False)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def tensordot(
x1: JaxArray,
x2: JaxArray,
/,
*,
axes: Union[int, Tuple[Sequence[int], Sequence[int]]] = 2,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = promote_types_of_inputs(x1, x2)
return jnp.tensordot(x1, x2, axes)
@with_unsupported_dtypes(
{"0.4.24 and below": ("bfloat16", "float16", "complex")},
backend_version,
)
def trace(
x: JaxArray,
/,
*,
offset: int = 0,
axis1: int = 0,
axis2: int = 1,
out: Optional[JaxArray] = None,
) -> JaxArray:
return jnp.trace(x, offset=offset, axis1=axis1, axis2=axis2, out=out)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def vecdot(
x1: JaxArray, x2: JaxArray, /, *, axis: int = -1, out: Optional[JaxArray] = None
) -> JaxArray:
x1, x2 = promote_types_of_inputs(x1, x2)
return jnp.tensordot(x1, x2, axes=(axis, axis))
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def vector_norm(
x: JaxArray,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
ord: Union[int, float, Literal[inf, -inf]] = 2,
out: Optional[JaxArray] = None,
dtype: Optional[jnp.dtype] = None,
) -> JaxArray:
if dtype and x.dtype != dtype:
x = x.astype(dtype)
abs_x = jnp.abs(x)
if ord == 0:
return jnp.sum(
(abs_x != 0).astype(abs_x.dtype), axis=axis, keepdims=keepdims, out=out
)
elif ord == inf:
return jnp.max(abs_x, axis=axis, keepdims=keepdims, out=out)
elif ord == -inf:
return jnp.min(abs_x, axis=axis, keepdims=keepdims, out=out)
else:
return jnp.sum(abs_x**ord, axis=axis, keepdims=keepdims) ** (1.0 / ord)
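# note on the branches above: ord=0 counts non-zero entries, ord=+/-inf reduce
# to the max/min of |x|, and every other order uses (sum |x|**ord) ** (1/ord)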
# Extra #
# ------#
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def diag(
x: JaxArray,
/,
*,
k: int = 0,
out: Optional[JaxArray] = None,
) -> JaxArray:
return jnp.diag(x, k=k)
@with_unsupported_dtypes(
{"0.4.24 and below": ("bfloat16", "float16", "complex")},
backend_version,
)
def vander(
x: JaxArray,
/,
*,
N: Optional[int] = None,
increasing: bool = False,
out: Optional[JaxArray] = None,
) -> JaxArray:
return jnp.vander(x, N=N, increasing=increasing)
@with_unsupported_dtypes(
{
"0.4.24 and below": (
"complex",
"unsigned",
)
},
backend_version,
)
def vector_to_skew_symmetric_matrix(
vector: JaxArray, /, *, out: Optional[JaxArray] = None
) -> JaxArray:
batch_shape = list(vector.shape[:-1])
# BS x 3 x 1
vector_expanded = jnp.expand_dims(vector, -1)
# BS x 1 x 1
a1s = vector_expanded[..., 0:1, :]
a2s = vector_expanded[..., 1:2, :]
a3s = vector_expanded[..., 2:3, :]
# BS x 1 x 1
zs = jnp.zeros(batch_shape + [1, 1], dtype=vector.dtype)
# BS x 1 x 3
row1 = jnp.concatenate((zs, -a3s, a2s), -1)
row2 = jnp.concatenate((a3s, zs, -a1s), -1)
row3 = jnp.concatenate((-a2s, a1s, zs), -1)
# BS x 3 x 3
return jnp.concatenate((row1, row2, row3), -2)
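# e.g. a single input vector [a1, a2, a3] maps to
#     [[  0, -a3,  a2],
#      [ a3,   0, -a1],
#      [-a2,  a1,   0]]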
| ivy/ivy/functional/backends/jax/linear_algebra.py/0 | {
"file_path": "ivy/ivy/functional/backends/jax/linear_algebra.py",
"repo_id": "ivy",
"token_count": 6716
} | 14 |
import mxnet as mx
from typing import Union, Optional
from ivy.func_wrapper import with_supported_dtypes
from . import backend_version
from ivy.utils.exceptions import IvyNotImplementedException
import ivy
def abs(
x: Union[(float, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.abs(x)
def acos(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.arccos(x)
def acosh(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.arccosh(x)
def add(
x1: Union[(float, None, mx.ndarray.NDArray)],
x2: Union[(float, None, mx.ndarray.NDArray)],
/,
*,
alpha: Optional[Union[(int, float)]] = None,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
    if alpha is None or alpha == 1.0:
        return mx.nd.add(x1, x2)
    # alpha scales the second operand: ret = x1 + alpha * x2
    return mx.nd.add(x1, mx.nd.multiply(x2, alpha))
def asin(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.arcsin(x)
def asinh(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.arcsinh(x)
def atan(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.arctan(x)
def atan2(
x1: Union[(None, mx.ndarray.NDArray)],
x2: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.arctan2(x1, x2)
def atanh(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.arctanh(x)
def bitwise_and(
x1: Union[(int, None, mx.ndarray.NDArray)],
x2: Union[(int, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def bitwise_invert(
x: Union[(int, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def bitwise_left_shift(
x1: Union[(int, None, mx.ndarray.NDArray)],
x2: Union[(int, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def bitwise_or(
x1: Union[(int, None, mx.ndarray.NDArray)],
x2: Union[(int, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def bitwise_right_shift(
x1: Union[(int, None, mx.ndarray.NDArray)],
x2: Union[(int, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def bitwise_xor(
x1: Union[(int, None, mx.ndarray.NDArray)],
x2: Union[(int, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def ceil(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.ceil(x)
def cos(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.cos(x)
def cosh(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.cosh(x)
def divide(
x1: Union[(float, None, mx.ndarray.NDArray)],
x2: Union[(float, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
ret = mx.nd.divide(x1, x2)
if ivy.is_float_dtype(x1.dtype) or ivy.is_complex_dtype(x1.dtype):
ret = mx.nd.array(ret, dtype=x1.dtype)
else:
ret = mx.nd.array(ret, dtype=ivy.default_float_dtype(as_native=True))
return ret
def equal(
x1: Union[(float, None, mx.ndarray.NDArray)],
x2: Union[(float, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def exp(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.exp(x)
def expm1(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
    # exp(x) - 1 loses precision for small |x| relative to a dedicated expm1
    return (mx.nd.exp(x) - 1).astype(x.dtype)
def floor(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.floor(x)
def floor_divide(
x1: Union[(float, None, mx.ndarray.NDArray)],
x2: Union[(float, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.floor(mx.nd.divide(x1, x2))
def fmin(
x1: Union[(None, mx.ndarray.NDArray)],
x2: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def greater(
x1: Union[(float, None, mx.ndarray.NDArray)],
x2: Union[(float, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.greater(x1, x2)
def greater_equal(
x1: Union[(float, None, mx.ndarray.NDArray)],
x2: Union[(float, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.greater_equal(x1, x2)
def isfinite(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def isinf(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
detect_positive: bool = True,
detect_negative: bool = True,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def isnan(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def lcm(
x1: Union[(None, mx.ndarray.NDArray)],
x2: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def less(
x1: Union[(float, None, mx.ndarray.NDArray)],
x2: Union[(float, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.less(x1, x2)
def less_equal(
x1: Union[(float, None, mx.ndarray.NDArray)],
x2: Union[(float, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.less_equal(x1, x2)
def log(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.log(x)
def log10(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def log1p(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.log1p(x)
def log2(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.log2(x)
def logaddexp(
x1: Union[(None, mx.ndarray.NDArray)],
x2: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.logaddexp(x1, x2)
def logaddexp2(
x1: Union[(None, mx.ndarray.NDArray, float, list, tuple)],
x2: Union[(None, mx.ndarray.NDArray, float, list, tuple)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def logical_and(
x1: Union[(None, mx.ndarray.NDArray)],
x2: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.logical_and(x1, x2)
def logical_not(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.logical_not(x)
def logical_or(
x1: Union[(None, mx.ndarray.NDArray)],
x2: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.logical_or(x1, x2)
def logical_xor(
x1: Union[(None, mx.ndarray.NDArray)],
x2: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.logical_xor(x1, x2)
def multiply(
x1: Union[(float, None, mx.ndarray.NDArray)],
x2: Union[(float, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.multiply(x1, x2)
def negative(
x: Union[(float, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.negative(x)
def not_equal(
x1: Union[(float, None, mx.ndarray.NDArray)],
x2: Union[(float, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def positive(
x: Union[(float, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def pow(
x1: Union[(None, mx.ndarray.NDArray)],
x2: Union[(int, float, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.power(x1, x2)
def remainder(
x1: Union[(float, None, mx.ndarray.NDArray)],
x2: Union[(float, None, mx.ndarray.NDArray)],
/,
*,
modulus: bool = True,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def round(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
decimals: int = 0,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def sign(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
@with_supported_dtypes(
{"1.9.1 and below": ("float",)},
backend_version,
)
def sin(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
x_dtype = x.dtype
# have to handle zero dim array separately from dtype
zero_dim = False
if x.shape == ():
ret = mx.nd.sin(mx.nd.array([x.asscalar()]))
zero_dim = True
else:
ret = mx.nd.sin(x)
if "int" in str(x_dtype):
ret = ret.astype("float32")
else:
ret = ret.astype(x_dtype)
if zero_dim:
return ret.reshape(())
return ret
def sinh(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.sinh(x)
def sqrt(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.sqrt(x)
def square(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.square(x)
def subtract(
x1: Union[(float, None, mx.ndarray.NDArray)],
x2: Union[(float, None, mx.ndarray.NDArray)],
/,
*,
alpha: Optional[Union[(int, float)]] = None,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
if alpha not in (1, None):
ivy.set_array_mode(False)
x2 = multiply(x2, alpha)
ivy.unset_array_mode()
return mx.nd.subtract(x1, x2)
def trapz(
y: Union[(None, mx.ndarray.NDArray)],
/,
*,
x: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
dx: float = 1.0,
axis: int = (-1),
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def tan(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.tan(x)
def tanh(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
complex_mode="jax",
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.tanh(x)
def trunc(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def imag(
val: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def angle(
input: Union[(None, mx.ndarray.NDArray)],
/,
*,
deg: Optional[bool] = None,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def exp2(
x: Union[(None, mx.ndarray.NDArray, float, list, tuple)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def erf(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def maximum(
x1: Union[(None, mx.ndarray.NDArray)],
x2: Union[(None, mx.ndarray.NDArray)],
/,
*,
use_where: bool = True,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def minimum(
x1: Union[(None, mx.ndarray.NDArray)],
x2: Union[(None, mx.ndarray.NDArray)],
/,
*,
use_where: bool = True,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def reciprocal(
x: Union[(float, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.reciprocal(x)
def deg2rad(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def rad2deg(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def isreal(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def fmod(
x1: Union[(None, mx.ndarray.NDArray)],
x2: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def real(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
| ivy/ivy/functional/backends/mxnet/elementwise.py/0 | {
"file_path": "ivy/ivy/functional/backends/mxnet/elementwise.py",
"repo_id": "ivy",
"token_count": 8528
} | 15 |
from typing import Union, Optional
import mxnet as mx
from ivy.utils.exceptions import IvyNotImplementedException
def lexsort(
keys: Union[(None, mx.ndarray.NDArray)],
/,
*,
axis: int = (-1),
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
| ivy/ivy/functional/backends/mxnet/experimental/sorting.py/0 | {
"file_path": "ivy/ivy/functional/backends/mxnet/experimental/sorting.py",
"repo_id": "ivy",
"token_count": 138
} | 16 |
from typing import Union, Optional, Sequence
import mxnet as mx
from ivy.utils.exceptions import IvyNotImplementedException
def all(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
axis: Optional[Union[(int, Sequence[int])]] = None,
keepdims: bool = False,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def any(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
axis: Optional[Union[(int, Sequence[int])]] = None,
keepdims: bool = False,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
| ivy/ivy/functional/backends/mxnet/utility.py/0 | {
"file_path": "ivy/ivy/functional/backends/mxnet/utility.py",
"repo_id": "ivy",
"token_count": 288
} | 17 |
"""Collection of Numpy network layers, wrapped to fit Ivy syntax and
signature."""
# global
import numpy as np
from typing import Union, Tuple, Optional, Sequence
# local
import ivy
from ivy.functional.ivy.layers import (
_handle_padding,
_deconv_length,
_get_x_data_format,
)
def _add_dilations(x, dilations, axis, values=0):
return np.insert(
x,
[i for i in range(1, x.shape[axis])] * (dilations - 1),
values=values,
axis=axis,
)
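# e.g. dilating [1, 2, 3] along axis 0 with dilations=2 inserts one zero
# between neighbouring elements: [1, 0, 2, 0, 3]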
def _dilate_pad_conv(x, filters, strides, padding, dims, dilations):
for j in range(dims):
if dilations[j] > 1:
filters = _add_dilations(filters, dilations[j], axis=j)
if isinstance(padding, str):
pad_specific = [
_handle_padding(x.shape[1 + i], strides[i], filters.shape[i], padding)
for i in range(dims)
]
pad_list = [
(pad_specific[i] // 2, pad_specific[i] - pad_specific[i] // 2)
for i in range(dims)
]
elif isinstance(padding, int):
pad_list = [(padding, padding)] * dims
else:
pad_list = [(_p, _p) if isinstance(_p, int) else _p for _p in padding]
pad_width = [(0, 0), *pad_list, (0, 0)]
x = np.pad(
x,
pad_width=pad_width,
mode="constant",
)
return x, filters
def _dilate_pad_conv_transpose(
x, filters, strides, padding, dims, dilations, output_shape
):
strides = [strides] * dims if isinstance(strides, int) else strides
dilations = [dilations] * dims if isinstance(dilations, int) else dilations
if output_shape is None:
new_shape = [
_deconv_length(
x.shape[i + 1], strides[i], filters.shape[i], padding, dilations[i]
)
for i in range(dims)
]
output_shape = [x.shape[0], *new_shape, filters.shape[-1]]
elif len(output_shape) == dims:
output_shape = [x.shape[0]] + list(output_shape) + [filters.shape[-1]]
for i in reversed(range(dims)):
if strides[i] > 1:
x = _add_dilations(x, strides[i], axis=i + 1)
if dilations[i] > 1:
filters = _add_dilations(filters, dilations[i], axis=i)
pad_specific = [
_handle_padding(output_shape[i + 1], strides[i], filters.shape[i], padding)
for i in range(dims)
]
extra_pad = [
max(
0,
output_shape[i + 1]
- (x.shape[i + 1] + filters.shape[i] - 1 - pad_specific[i]),
)
for i in range(dims)
]
pad_top = [filters.shape[i] - 1 - (pad_specific[i] // 2) for i in range(dims)]
pad_bot = [
filters.shape[i] - 1 - (pad_specific[i] - pad_specific[i] // 2)
for i in range(dims)
]
pad_list = [(pad_top[i], pad_bot[i] + extra_pad[i]) for i in range(dims)]
x = np.pad(
x,
[
(0, 0),
*pad_list,
(0, 0),
],
"constant",
)
return x, filters
def _ff_xd_before_conv(x, filters, dims, filter_format, x_dilations):
if filter_format == "channel_first":
filters = np.transpose(filters, (*range(2, dims + 2), 1, 0))
x_dilations = [x_dilations] * dims if isinstance(x_dilations, int) else x_dilations
for j in range(dims):
if x_dilations[j] > 1:
x = _add_dilations(x, x_dilations[j], axis=j + 1)
return x, filters
def conv1d(
x: np.ndarray,
filters: np.ndarray,
strides: Union[int, Tuple[int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
data_format: str = "NWC",
filter_format: str = "channel_last",
x_dilations: Union[int, Tuple[int]] = 1,
dilations: Union[int, Tuple[int]] = 1,
bias: Optional[np.ndarray] = None,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
strides = [strides] if isinstance(strides, int) else strides
dilations = [dilations] if isinstance(dilations, int) else dilations
if data_format == "NCW":
x = np.transpose(x, (0, 2, 1))
x, filters = _ff_xd_before_conv(x, filters, 1, filter_format, x_dilations)
x, filters = _dilate_pad_conv(x, filters, strides, padding, 1, dilations)
x_shape = x.shape
filter_shape = list(filters.shape[0:1])
input_dim = filters.shape[-2]
output_dim = filters.shape[-1]
new_w = (x_shape[1] - filter_shape[0]) // strides[0] + 1
new_shape = [x_shape[0], new_w] + filter_shape + [x_shape[-1]]
new_strides = (
x.strides[0],
x.strides[1] * strides[0],
x.strides[1],
x.strides[2],
)
# B x OW x KW x I
sub_matrices = np.lib.stride_tricks.as_strided(
x, new_shape, new_strides, writeable=False
)
# B x OW x KW x I x O
sub_matrices_w_output_dim = np.tile(
np.expand_dims(sub_matrices, -1), [1] * 4 + [output_dim]
)
# B x OW x KW x I x O
mult = sub_matrices_w_output_dim * filters.reshape(
[1] * 2 + filter_shape + [input_dim, output_dim]
)
# B x OW x O
res = np.sum(mult, (2, 3))
res = np.add(res, bias) if bias is not None else res
if data_format == "NCW":
res = np.transpose(res, (0, 2, 1))
return res
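# shape walkthrough for the NWC path above: x is (N, W, C_in) and filters is
# (KW, C_in, C_out); after padding/dilation the strided view is
# (N, OW, KW, C_in), and reducing over (KW, C_in) leaves (N, OW, C_out)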
def conv1d_transpose(
x: np.ndarray,
filters: np.ndarray,
strides: Union[int, Tuple[int]],
padding: str,
/,
*,
output_shape: Optional[Union[ivy.NativeShape, Sequence[int]]] = None,
filter_format: str = "channel_last",
data_format: str = "NWC",
dilations: Union[int, Tuple[int]] = 1,
bias: Optional[np.ndarray] = None,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
if data_format == "NCW":
x = np.transpose(x, (0, 2, 1))
if filter_format == "channel_last":
filters = np.transpose(filters, (0, 2, 1))
else:
filters = np.transpose(filters, (2, 0, 1))
    x, filters = _dilate_pad_conv_transpose(
x, filters, strides, padding, 1, dilations, output_shape
)
x = np.flip(x, (1,))
res = np.flip(
conv1d(x, filters, 1, "VALID", data_format="NWC", dilations=1),
(1,),
)
res = np.add(res, bias) if bias is not None else res
if data_format == "NCW":
res = np.transpose(res, (0, 2, 1))
return res
def conv2d(
x: np.ndarray,
filters: np.ndarray,
strides: Union[int, Tuple[int, int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
data_format: str = "NHWC",
filter_format: str = "channel_last",
x_dilations: Union[int, Tuple[int, int]] = 1,
dilations: Union[int, Tuple[int, int]] = 1,
bias: Optional[np.ndarray] = None,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
strides = [strides] * 2 if isinstance(strides, int) else strides
dilations = [dilations] * 2 if isinstance(dilations, int) else dilations
if data_format == "NCHW":
x = np.transpose(x, (0, 2, 3, 1))
x, filters = _ff_xd_before_conv(x, filters, 2, filter_format, x_dilations)
x, filters = _dilate_pad_conv(x, filters, strides, padding, 2, dilations)
x_shape = x.shape
filter_shape = list(filters.shape[0:2])
input_dim = filters.shape[-2]
output_dim = filters.shape[-1]
new_h = (x_shape[1] - filter_shape[0]) // strides[0] + 1
new_w = (x_shape[2] - filter_shape[1]) // strides[1] + 1
new_shape = [x_shape[0], new_h, new_w] + filter_shape + [x_shape[-1]]
new_strides = (
x.strides[0],
x.strides[1] * strides[0],
x.strides[2] * strides[1],
x.strides[1],
x.strides[2],
x.strides[3],
)
# B x OH x OW x KH x KW x I
sub_matrices = np.lib.stride_tricks.as_strided(
x, new_shape, new_strides, writeable=False
)
# B x OH x OW x KH x KW x I x O
sub_matrices_w_output_dim = np.tile(
np.expand_dims(sub_matrices, -1), [1] * 6 + [output_dim]
)
# B x OH x OW x KH x KW x I x O
mult = sub_matrices_w_output_dim * filters.reshape(
[1] * 3 + filter_shape + [input_dim, output_dim]
)
# B x OH x OW x O
res = np.sum(mult, (3, 4, 5))
res = np.add(res, bias) if bias is not None else res
if data_format == "NCHW":
return np.transpose(res, (0, 3, 1, 2))
return res
def conv2d_transpose(
x: np.ndarray,
filters: np.ndarray,
strides: Union[int, Tuple[int, int]],
padding: str,
/,
*,
output_shape: Optional[Union[ivy.NativeShape, Sequence[int]]] = None,
filter_format: str = "channel_last",
data_format: str = "NHWC",
dilations: Union[int, Tuple[int, int]] = 1,
bias: Optional[np.ndarray] = None,
out: Optional[np.ndarray] = None,
):
if data_format == "NCHW":
x = np.transpose(x, (0, 2, 3, 1))
if filter_format == "channel_last":
filters = np.transpose(filters, (0, 1, 3, 2))
else:
filters = np.transpose(filters, (2, 3, 0, 1))
    x, filters = _dilate_pad_conv_transpose(
x, filters, strides, padding, 2, dilations, output_shape
)
x = np.flip(x, (1, 2))
res = np.flip(
conv2d(x, filters, 1, "VALID", data_format="NHWC", dilations=1),
(1, 2),
)
res = np.add(res, bias) if bias is not None else res
if data_format == "NCHW":
res = np.transpose(res, (0, 3, 1, 2))
return res
def depthwise_conv2d(
x: np.ndarray,
filters: np.ndarray,
strides: Union[int, Tuple[int, int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
data_format: str = "NHWC",
dilations: Union[int, Tuple[int, int]] = 1,
out: Optional[np.ndarray] = None,
):
strides = [strides] * 2 if isinstance(strides, int) else strides
dilations = [dilations] * 2 if isinstance(dilations, int) else dilations
if isinstance(padding, int):
padding = [(padding, padding)] * 2
if data_format == "NHWC":
x = np.transpose(x, (3, 0, 1, 2))
else:
x = np.transpose(x, (1, 0, 2, 3))
filters = np.squeeze(filters, 3) if filters.ndim == 4 else filters
filters = np.transpose(filters, (2, 0, 1))
filters = np.expand_dims(filters, (-1, -2))
filter_h = filters.shape[1] + (filters.shape[1] - 1) * (dilations[0] - 1)
filter_w = filters.shape[2] + (filters.shape[2] - 1) * (dilations[1] - 1)
if isinstance(padding, str):
if padding == "VALID":
out_height = np.ceil(float(x.shape[2] - filter_h + 1) / float(strides[0]))
out_width = np.ceil(float(x.shape[3] - filter_w + 1) / float(strides[1]))
else:
out_height = np.ceil(float(x.shape[2]) / float(strides[0]))
out_width = np.ceil(float(x.shape[3]) / float(strides[1]))
else:
out_height = np.ceil(
float(x.shape[2] - filter_h + padding[0][0] + padding[0][1] + 1)
/ float(strides[0])
)
out_width = np.ceil(
float(x.shape[3] - filter_w + padding[1][0] + padding[1][1] + 1)
/ float(strides[1])
)
if data_format == "NHWC":
outputs = np.empty([x.shape[1], int(out_height), int(out_width), 0], x.dtype)
else:
outputs = np.empty([x.shape[1], 0, int(out_height), int(out_width)], x.dtype)
x = np.expand_dims(x, -1)
for i in range(x.shape[0]):
output = conv2d(
x[i], filters[i], strides, padding, data_format="NHWC", dilations=dilations
)
if data_format == "NHWC":
outputs = np.append(outputs, output, axis=-1)
else:
outputs = np.append(outputs, np.transpose(output, (0, 3, 1, 2)), axis=1)
return outputs
def conv3d(
x: np.ndarray,
filters: np.ndarray,
strides: Union[int, Tuple[int, int, int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
data_format: str = "NDHWC",
filter_format: str = "channel_last",
x_dilations: Union[int, Tuple[int, int, int]] = 1,
dilations: Union[int, Tuple[int, int, int]] = 1,
bias: Optional[np.ndarray] = None,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
strides = [strides] * 3 if isinstance(strides, int) else strides
dilations = [dilations] * 3 if isinstance(dilations, int) else dilations
if data_format == "NCDHW":
x = np.transpose(x, (0, 2, 3, 4, 1))
x, filters = _ff_xd_before_conv(x, filters, 3, filter_format, x_dilations)
x, filters = _dilate_pad_conv(x, filters, strides, padding, 3, dilations)
x_shape = x.shape
filter_shape = list(filters.shape[0:3])
input_dim = filters.shape[-2]
output_dim = filters.shape[-1]
new_d = (x_shape[1] - filter_shape[0]) // strides[0] + 1
new_h = (x_shape[2] - filter_shape[1]) // strides[1] + 1
new_w = (x_shape[3] - filter_shape[2]) // strides[2] + 1
new_shape = [x_shape[0], new_d, new_h, new_w] + filter_shape + [x_shape[-1]]
new_strides = (
x.strides[0],
x.strides[1] * strides[0],
x.strides[2] * strides[1],
x.strides[3] * strides[2],
x.strides[1],
x.strides[2],
x.strides[3],
x.strides[4],
)
# B x OD X OH x OW x KD X KH x KW x I
sub_matrices = np.lib.stride_tricks.as_strided(
x, new_shape, new_strides, writeable=False
)
# B x OD X OH x OW x KD X KH x KW x I x O
sub_matrices_w_output_dim = np.tile(
np.expand_dims(sub_matrices, -1), [1] * 8 + [output_dim]
)
# B x OD X OH x OW x KD X KH x KW x I x O
mult = sub_matrices_w_output_dim * filters.reshape(
[1] * 4 + filter_shape + [input_dim, output_dim]
)
# B x OD X OH x OW x O
res = np.sum(mult, (4, 5, 6, 7))
res = np.add(res, bias) if bias is not None else res
if data_format == "NCDHW":
return np.transpose(res, (0, 4, 1, 2, 3))
return res
def conv3d_transpose(
x: np.ndarray,
filters: np.ndarray,
strides: Union[int, Tuple[int, int, int]],
padding: str,
/,
*,
output_shape: Optional[Union[ivy.NativeShape, Sequence[int]]] = None,
filter_format: str = "channel_last",
data_format: str = "NDHWC",
dilations: Union[int, Tuple[int, int, int]] = 1,
bias: Optional[np.ndarray] = None,
out: Optional[np.ndarray] = None,
):
if data_format == "NCDHW":
x = np.transpose(x, (0, 2, 3, 4, 1))
if filter_format == "channel_last":
filters = np.transpose(filters, (0, 1, 2, 4, 3))
else:
filters = np.transpose(filters, (2, 3, 4, 0, 1))
    x, filters = _dilate_pad_conv_transpose(
x, filters, strides, padding, 3, dilations, output_shape
)
x = np.flip(x, (1, 2, 3))
res = np.flip(
conv3d(x, filters, 1, "VALID", data_format="NDHWC", dilations=1),
(1, 2, 3),
)
res = np.add(res, bias) if bias is not None else res
if data_format == "NCDHW":
res = np.transpose(res, (0, 4, 1, 2, 3))
return res
def conv_general_dilated(
x: np.ndarray,
filters: np.ndarray,
strides: Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int]],
padding: Union[str, int, Sequence[Tuple[int, int]]],
/,
*,
dims: int = 2,
data_format: str = "channel_last",
filter_format: str = "channel_last",
feature_group_count: int = 1,
x_dilations: Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int]] = 1,
dilations: Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int]] = 1,
bias: Optional[np.ndarray] = None,
out: np.ndarray = None,
) -> np.ndarray:
# permuting dims based on formats
if data_format == "channel_first":
x = np.transpose(x, (0, *range(2, dims + 2), 1))
if filter_format == "channel_first":
filters = np.transpose(filters, (*range(2, dims + 2), 1, 0))
strides = [strides] * dims if isinstance(strides, int) else strides
dilations = [dilations] * dims if isinstance(dilations, int) else dilations
x_dilations = [x_dilations] * dims if isinstance(x_dilations, int) else x_dilations
for j in range(dims):
if x_dilations[j] > 1:
x = _add_dilations(x, x_dilations[j], axis=j + 1)
x, filters = _dilate_pad_conv(x, filters, strides, padding, dims, dilations)
x_shape = x.shape
filter_shape = list(filters.shape[0:dims])
input_dim = filters.shape[-2]
output_dim = filters.shape[-1]
new_shape = [
(x_shape[i + 1] - filter_shape[i]) // strides[i] + 1 for i in range(dims)
]
res = []
new_shape = [x_shape[0], *new_shape] + filter_shape + [input_dim]
for i, j in zip(
range(0, x.shape[-1], input_dim),
range(0, output_dim, output_dim // feature_group_count),
):
sliced_x = x[..., i : i + input_dim]
sliced_filters = filters[..., j : j + output_dim // feature_group_count]
normal_strides = [sliced_x.strides[i] for i in range(1, dims + 2)]
changed_strides = [
sliced_x.strides[i] * strides[i - 1] for i in range(1, dims + 1)
]
new_strides = (x.strides[0], *changed_strides, *normal_strides)
# B x OH x OW x KH x KW x I
sub_matrices = np.lib.stride_tricks.as_strided(
sliced_x, new_shape, new_strides, writeable=False
)
# B x OH x OW x KH x KW x I x O
sub_matrices_w_output_dim = np.tile(
np.expand_dims(sub_matrices, -1),
[1] * (dims * 2 + 2) + [output_dim // feature_group_count],
)
# B x OH x OW x KH x KW x I x O
mult = sub_matrices_w_output_dim * sliced_filters.reshape(
[1] * (dims + 1)
+ filter_shape
+ [input_dim, output_dim // feature_group_count]
)
# B x OH x OW x O
res.append(np.sum(mult, tuple([i for i in range(dims + 1, dims * 2 + 2)])))
res = np.concatenate(res, axis=-1)
res = np.add(res, bias) if bias is not None else res
if data_format == "channel_first":
return np.transpose(res, (0, dims + 1, *range(1, dims + 1)))
return res
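# the slicing loop above implements grouped convolution: the input channels
# are split into feature_group_count slices of width input_dim, each slice is
# convolved with its own block of output_dim // feature_group_count filters,
# and the per-group results are concatenated along the channel axis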
def conv_general_transpose(
x: np.ndarray,
filters: np.ndarray,
strides: Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int]],
padding: str,
/,
*,
dims: int = 2,
output_shape: Optional[Union[ivy.NativeShape, Sequence[int]]] = None,
filter_format: str = "channel_last",
data_format: str = "channel_last",
dilations: Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int]] = 1,
feature_group_count: int = 1,
bias: Optional[np.ndarray] = None,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
if data_format == "channel_first":
x = np.transpose(x, (0, *range(2, dims + 2), 1))
if filter_format == "channel_last":
filters = np.transpose(filters, (*range(dims), dims + 1, dims))
else:
filters = np.transpose(filters, (*range(2, dims + 2), 0, 1))
    x, filters = _dilate_pad_conv_transpose(
x, filters, strides, padding, dims, dilations, output_shape
)
x = np.flip(x, (*range(1, dims + 1),))
res = np.concatenate(
[
np.flip(
conv_general_dilated(
x[..., j : j + filters.shape[-2] // feature_group_count],
filters[..., j : j + filters.shape[-2] // feature_group_count, :],
1,
"VALID",
dims=dims,
data_format=_get_x_data_format(dims, "channel_last"),
dilations=1,
),
(*range(1, dims + 1),),
)
for j in range(
0, filters.shape[-2], filters.shape[-2] // feature_group_count
)
],
axis=-1,
)
res = np.add(res, bias) if bias is not None else res
if data_format == "channel_first":
return np.transpose(res, (0, dims + 1, *range(1, dims + 1)))
return res
| ivy/ivy/functional/backends/numpy/layers.py/0 | {
"file_path": "ivy/ivy/functional/backends/numpy/layers.py",
"repo_id": "ivy",
"token_count": 9448
} | 18 |
"""Collection of Paddle general functions, wrapped to fit Ivy syntax and
signature."""
# global
import os
import paddle
from typing import Optional, Union
import time
import ivy
from ivy.functional.ivy.device import (
_shift_native_arrays_on_default_device,
Profiler as BaseProfiler,
)
from paddle.device import core
# API #
# ----#
def dev(
x: paddle.Tensor, /, *, as_native: bool = False
) -> Union[ivy.Device, core.Place]:
return x.place if as_native else as_ivy_dev(x.place)
def to_device(
x: paddle.Tensor,
device: core.Place,
/,
*,
stream: Optional[int] = None,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
device = as_native_dev(device)
if device.is_cpu_place() and not x.place.is_cpu_place():
return x.cpu()
elif (device.is_gpu_place() and not x.place.is_gpu_place()) or (
x.place.is_gpu_place()
and device.is_gpu_place()
and x.place.gpu_device_id() != device.gpu_device_id()
):
return x.cuda(device.gpu_device_id())
else:
return x
def as_ivy_dev(device: core.Place, /):
# TODO: add handling to string inputs without indices for gpu
if isinstance(device, str):
return ivy.Device(device)
# TODO: remove this once ivy.Device accepts native device inputs
if device.is_cpu_place():
return ivy.Device("cpu")
elif device.is_gpu_place():
dev_idx = device.gpu_device_id()
return ivy.Device(f"gpu:{str(dev_idx)}")
def as_native_dev(
device: Optional[Union[ivy.Device, core.Place]] = None,
/,
) -> core.Place:
if isinstance(device, core.Place):
return device
native_dev = core.Place()
if "cpu" in device:
native_dev.set_place(paddle.device.core.CPUPlace())
elif "gpu" in device:
if ":" in device:
gpu_idx = int(device.split(":")[-1])
assert (
gpu_idx < num_gpus()
), "The requested device is higher than the number of available devices"
else:
gpu_idx = 0
native_dev.set_place(paddle.device.core.CUDAPlace(gpu_idx))
return native_dev
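# e.g. as_native_dev("cpu") -> CPUPlace(), as_native_dev("gpu") -> CUDAPlace(0),
# and as_native_dev("gpu:1") -> CUDAPlace(1)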
def clear_mem_on_dev(device: core.Place, /):
device = as_native_dev(device)
if device.is_gpu_place():
paddle.device.cuda.empty_cache()
def clear_cached_mem_on_dev(device: str, /):
device = as_native_dev(device)
if device.is_gpu_place():
paddle.device.cuda.empty_cache()
def num_gpus() -> int:
return paddle.device.cuda.device_count()
def gpu_is_available() -> bool:
return bool(paddle.device.cuda.device_count())
# noinspection PyUnresolvedReferences
def tpu_is_available() -> bool:
return False
def handle_soft_device_variable(*args, fn, **kwargs):
args, kwargs, device_shifting_dev = _shift_native_arrays_on_default_device(
*args, **kwargs
)
# since there is no context manager for device in Paddle,
# we need to manually set the device
# then set it back to prev default device after the function call
prev_def_dev = paddle.get_device()
paddle.device.set_device(ivy.as_ivy_dev(device_shifting_dev))
ret = fn(*args, **kwargs)
paddle.device.set_device(ivy.as_ivy_dev(prev_def_dev))
return ret
class Profiler(BaseProfiler):
def __init__(self, save_dir: str):
# ToDO: add proper Paddle profiler
super().__init__(save_dir)
os.makedirs(save_dir, exist_ok=True)
self._start_time = None
def start(self):
self._start_time = time.perf_counter()
def stop(self):
time_taken = time.perf_counter() - self._start_time
with open(os.path.join(self._save_dir, "profile.log"), "w+") as f:
f.write(f"took {time_taken} seconds to complete")
def __enter__(self):
self.start()
def __exit__(self, exc_type, exc_val, exc_tb):
self.stop()
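# Illustrative usage (the workload below is hypothetical):
#
#     with Profiler("logs"):
#         run_model()
#     # -> writes logs/profile.log containing the elapsed wall-clock time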
| ivy/ivy/functional/backends/paddle/device.py/0 | {
"file_path": "ivy/ivy/functional/backends/paddle/device.py",
"repo_id": "ivy",
"token_count": 1634
} | 19 |
import paddle
import paddle.nn.functional as F
import ivy
from ivy.utils.exceptions import IvyNotImplementedException
from typing import Optional, Tuple
from ivy.func_wrapper import with_supported_dtypes
from ivy.func_wrapper import with_unsupported_device_and_dtypes
from . import backend_version
# TODO: add support for the rest of the dtypes
# use numpy implementation with ivy functions
@with_unsupported_device_and_dtypes(
{
"2.6.0 and below": {
"cpu": (
"int8",
"int16",
"int32",
"int64",
"uint8",
"float16",
"complex",
"bool",
)
}
},
backend_version,
)
def batch_norm(
x: paddle.Tensor,
mean: paddle.Tensor,
variance: paddle.Tensor,
/,
*,
scale: Optional[paddle.Tensor] = None,
offset: Optional[paddle.Tensor] = None,
training: Optional[bool] = False,
eps: Optional[float] = 1e-5,
momentum: Optional[float] = 1e-1,
data_format: Optional[str] = "NSC",
out: Optional[Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor]] = None,
) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor]:
if x.dtype not in [paddle.float32, paddle.float64]:
x, mean, variance, scale, offset = (
t.cast("float32") for t in [x, mean, variance, scale, offset]
)
runningmean = mean
runningvariance = variance
data_formats = ["NC", "NCL", "NCHW", "NCDHW", "NLC", "NHWC", "NDHWC"]
try:
data_format = (
data_formats[4:][x.ndim - 3]
if data_format[-1] == "C"
else data_formats[0:4][x.ndim - 2]
)
except IndexError as e:
raise IndexError(
"data_format must be one of 'NC', 'NCL', 'NCHW', 'NCDHW', 'NLC', 'NHWC',"
f" 'NDHWC' but receive {data_format}"
) from e
with ivy.ArrayMode(False):
if training:
x_shape = paddle.to_tensor(x.shape)
x_size = paddle.prod(x_shape)
n = (x_size if x.ndim == 1 else ivy.divide(x_size, x_shape[-1])).cast(
x.dtype
)
dims = (0, *range(1, x.ndim - 1))
mean = ivy.mean(x, axis=dims)
variance = ivy.var(x, axis=dims)
# runningmean = (1 - momentum) * runningmean + momentum * mean
runningmean = ivy.add(
ivy.multiply(ivy.subtract(1, momentum), runningmean),
ivy.multiply(momentum, mean),
)
# runningvariance = (
# 1 - momentum
# ) * runningvariance + momentum * variance * n / (n - 1)
runningvariance = ivy.add(
ivy.multiply(ivy.subtract(1, momentum), runningvariance),
ivy.divide(ivy.multiply(ivy.multiply(momentum, variance), n), n - 1),
)
xnormalized = F.batch_norm(
x,
running_mean=mean,
running_var=variance,
weight=scale,
bias=offset,
training=training,
momentum=momentum,
epsilon=eps,
data_format=data_format,
).cast(x.dtype)
return xnormalized, runningmean, runningvariance
batch_norm.partial_mixed_handler = lambda x, *args, scale, offset, **kwargs: (
(x.ndim > 1 and x.ndim < 6)
and (scale is not None and scale.ndim == 1)
and (offset is not None and offset.ndim == 1)
)
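# the handler above routes to this backend implementation only for 2- to 5-D
# inputs with 1-D, non-None scale and offset; anything else falls back to the
# compositional ivy implementation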
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, backend_version)
def l1_normalize(
x: paddle.Tensor, /, *, axis: Optional[int] = None, out: paddle.Tensor = None
) -> paddle.Tensor:
if not isinstance(x, paddle.Tensor):
x = paddle.to_tensor(x)
if axis is None:
axis = list(range(x.ndim))
elif isinstance(axis, int):
axis = [axis]
else:
axis = list(axis)
# Compute the L1 norm along the given axis
norm = paddle.norm(x, p=1, axis=axis, keepdim=True)
# Divide x by the L1 norm to obtain the normalized array
norm = paddle.where(norm == 0, paddle.to_tensor([1], dtype=x.dtype), norm)
if out is None:
return x / norm
else:
out[:] = x / norm
return out
def l2_normalize(
x: paddle.Tensor, /, *, axis: Optional[int] = None, out: paddle.Tensor = None
) -> paddle.Tensor:
raise IvyNotImplementedException()
def instance_norm(
x: paddle.Tensor,
mean: paddle.Tensor,
variance: paddle.Tensor,
/,
*,
scale: Optional[paddle.Tensor] = None,
offset: Optional[paddle.Tensor] = None,
training: Optional[bool] = False,
eps: Optional[float] = 1e-5,
momentum: Optional[float] = 1e-1,
data_format: Optional[str] = "NSC",
out: Optional[
Tuple[
paddle.Tensor,
paddle.Tensor,
paddle.Tensor,
]
] = None,
) -> Tuple[
paddle.Tensor,
paddle.Tensor,
paddle.Tensor,
]:
raise IvyNotImplementedException()
def lp_normalize(
x: paddle.Tensor,
/,
*,
p: float = 2,
axis: Optional[int] = None,
out: paddle.Tensor = None,
) -> paddle.Tensor:
raise IvyNotImplementedException()
| ivy/ivy/functional/backends/paddle/experimental/norms.py/0 | {
"file_path": "ivy/ivy/functional/backends/paddle/experimental/norms.py",
"repo_id": "ivy",
"token_count": 2466
} | 20 |
# global
import paddle
from typing import Tuple, Optional
from collections import namedtuple
import ivy.functional.backends.paddle as paddle_backend
from ivy.func_wrapper import with_supported_dtypes
# local
from . import backend_version
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int64")}, backend_version
)
def unique_all(
x: paddle.Tensor,
/,
*,
axis: Optional[int] = None,
by_value: bool = True,
) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor, paddle.Tensor]:
Results = namedtuple(
"Results",
["values", "indices", "inverse_indices", "counts"],
)
x_dtype = x.dtype
if axis is not None:
axis = axis % x.ndim
values, inverse_indices, counts = paddle.unique(
x,
return_index=False, # which occurrences of the unique values are picked is
# inconsistent in some cases, so calculate the indices manually below
return_counts=True,
return_inverse=True,
axis=axis,
)
unique_nan = paddle.isnan(values)
idx_dtype = inverse_indices.dtype
if paddle.any(unique_nan):
nan_index = paddle.where(paddle.isnan(x))
non_nan_index = [
x.tolist().index(val) for val in values if not paddle.isnan(val)
]
indices = values.clone().to(idx_dtype)
indices[unique_nan] = nan_index[0]
inverse_indices[paddle.isnan(x)] = paddle.where(unique_nan)[0][0]
counts[unique_nan] = 1
indices[~unique_nan] = paddle.to_tensor(non_nan_index, dtype=idx_dtype)
else:
decimals = paddle.arange(inverse_indices.numel()) / inverse_indices.numel()
inv_sorted = (inverse_indices.astype(decimals.dtype) + decimals).argsort()
tot_counts = paddle.concat(
(paddle.zeros((1,), dtype=counts.dtype), counts.cumsum(axis=0))
)[:-1]
indices = inv_sorted[tot_counts].astype(idx_dtype)
if not by_value:
sort_idx = paddle.argsort(indices)
else:
if axis is None:
axis = 0
values_ = paddle.moveaxis(values, axis, 0)
values_ = paddle.reshape(values_, (values_.shape[0], -1))
sort_idx = paddle.to_tensor(
[
i[0]
for i in sorted(
enumerate(values_.numpy().tolist()), key=lambda x: tuple(x[1])
)
]
)
values = paddle.gather(values, sort_idx, axis=axis)
counts = paddle.gather(counts, sort_idx)
indices = paddle.gather(indices, sort_idx)
inv_sort_idx = paddle_backend.invert_permutation(sort_idx)
inverse_indices = paddle_backend.vmap(lambda y: paddle.gather(inv_sort_idx, y))(
inverse_indices
)
return Results(
values.cast(x_dtype),
indices,
inverse_indices,
counts,
)
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int64")}, backend_version
)
def unique_counts(x: paddle.Tensor, /) -> Tuple[paddle.Tensor, paddle.Tensor]:
unique, counts = paddle.unique(x, return_counts=True)
nan_count = paddle.count_nonzero(paddle.where(paddle.isnan(x) > 0)).numpy()[0]
if nan_count > 0:
unique_nan = paddle.full(shape=[1, nan_count], fill_value=float("nan")).cast(
x.dtype
)
counts_nan = paddle.full(shape=[1, nan_count], fill_value=1).cast(x.dtype)
unique = paddle.concat(
[unique.astype(x.dtype), paddle.reshape(unique_nan, [nan_count])], axis=0
)
counts = paddle.concat(
[counts.astype(x.dtype), paddle.reshape(counts_nan, [nan_count])], axis=0
)
Results = namedtuple("Results", ["values", "counts"])
return Results(unique.cast(x.dtype), counts)
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int64")}, backend_version
)
def unique_inverse(
x: paddle.Tensor,
/,
*,
axis: Optional[int] = None,
) -> Tuple[paddle.Tensor, paddle.Tensor]:
if x.dtype not in [paddle.int32, paddle.int64, paddle.float32, paddle.float64]:
x = x.cast("float32")
if axis is not None:
unique, inverse_val = paddle.unique(x, return_inverse=True, axis=axis)
if axis is None:
axis = 0
nan_idx = paddle.where(paddle.isnan(x) > 0)
nan_count = paddle.count_nonzero(nan_idx).numpy()[0]
if nan_count > 0:
inverse_val[nan_idx] = len(unique)
unique_nan = paddle.full(shape=[1, nan_count], fill_value=float("nan")).cast(
x.dtype
)
unique = paddle.concat(
[unique.astype(x.dtype), paddle.reshape(unique_nan, [nan_count])],
axis=-1,
)
inverse_val = paddle.reshape(inverse_val, shape=x.shape)
Results = namedtuple("Results", ["values", "inverse_indices"])
return Results(unique.cast(x.dtype), inverse_val)
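# Usage sketch (illustrative): the inverse indices reconstruct the input
# from the sorted unique values, i.e. values[inverse_indices] == x:
#
# >>> unique_inverse(paddle.to_tensor([4, 1, 4, 2]))
# Results(values=Tensor([1, 2, 4]), inverse_indices=Tensor([2, 0, 2, 1]))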
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64", "int32", "int64")}, backend_version
)
def unique_values(
x: paddle.Tensor, /, *, out: Optional[paddle.Tensor] = None
) -> paddle.Tensor:
nan_count = paddle.sum(paddle.isnan(x))
unique = paddle.unique(x)
if nan_count > 0:
nans = paddle.full(shape=[nan_count], fill_value=float("nan")).cast(
unique.dtype
)
unique = paddle.concat([unique, nans])
return unique.cast(x.dtype)
| ivy/ivy/functional/backends/paddle/set.py/0 | {
"file_path": "ivy/ivy/functional/backends/paddle/set.py",
"repo_id": "ivy",
"token_count": 2450
} | 21 |
"""Tensorflow general functions.
Collection of TensorFlow general functions, wrapped to fit Ivy syntax
and signature.
"""
# global
from typing import Optional, Union, Sequence, Callable, Tuple
import numpy as np
import multiprocessing as _multiprocessing
from numbers import Number
import tensorflow as tf
# local
import ivy
from ivy.functional.ivy.gradients import _is_variable
from ivy.func_wrapper import with_unsupported_dtypes
from ivy.utils.exceptions import _check_inplace_update_support
from . import backend_version
from ...ivy.general import _broadcast_to
_round = round
def is_native_array(x, /, *, exclusive=False):
if isinstance(x, (tf.Tensor, tf.Variable, tf.TensorArray)):
if exclusive and isinstance(x, tf.Variable):
return False
return True
return False
def array_equal(
x0: Union[tf.Tensor, tf.Variable],
x1: Union[tf.Tensor, tf.Variable],
/,
) -> bool:
x0, x1 = ivy.promote_types_of_inputs(x0, x1)
return bool(tf.experimental.numpy.array_equal(x0, x1))
def container_types():
return []
def current_backend_str() -> str:
return "tensorflow"
def _check_query(query):
return not isinstance(query, list) and (
not (ivy.is_array(query) and bool(query.ndim > 0))
)
def get_item(
x: Union[tf.Tensor, tf.Variable],
/,
query: Union[tf.Tensor, tf.Variable, Tuple],
*,
copy: Optional[bool] = None,
) -> Union[tf.Tensor, tf.Variable]:
if ivy.is_array(query) and ivy.is_bool_dtype(query):
if not len(query.shape):
return tf.expand_dims(x, 0)
return x.__getitem__(query)
get_item.partial_mixed_handler = lambda x, query, **kwargs: (
all(_check_query(i) for i in query)
and len({i.shape for i in query if ivy.is_array(i)}) <= 1
if isinstance(query, tuple)
else _check_query(query)
)
def to_numpy(x: Union[tf.Tensor, tf.Variable], /, *, copy: bool = True) -> np.ndarray:
# TensorFlow fails to convert bfloat16 tensor when it has 0 dimensions
if (
ivy.is_array(x)
and get_num_dims(x) == 0
and ivy.as_native_dtype(x.dtype) is tf.bfloat16
):
x = tf.expand_dims(x, 0)
if copy:
return np.squeeze(np.array(tf.convert_to_tensor(x)), 0)
else:
return np.squeeze(np.asarray(tf.convert_to_tensor(x)), 0)
if copy:
return np.array(tf.convert_to_tensor(x))
else:
return np.asarray(tf.convert_to_tensor(x))
def to_scalar(x: Union[tf.Tensor, tf.Variable], /) -> Number:
ret = to_numpy(x).item()
if x.dtype == tf.bfloat16:
return float(ret)
return ret
def to_list(x: Union[tf.Tensor, tf.Variable], /) -> list:
return x.numpy().tolist()
def gather(
params: Union[tf.Tensor, tf.Variable],
indices: Union[tf.Tensor, tf.Variable],
/,
*,
axis: int = -1,
batch_dims: int = 0,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
axis %= len(params.shape)
batch_dims %= len(params.shape)
ivy.utils.assertions.check_gather_input_valid(params, indices, axis, batch_dims)
return tf.gather(params, indices, axis=axis, batch_dims=batch_dims)
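# Usage sketch (illustrative): with batch_dims=1 each row of `indices`
# selects from the matching row of `params` along `axis`:
#
# >>> gather(tf.constant([[1, 2, 3], [4, 5, 6]]),
# ...        tf.constant([[2, 0], [1, 1]]), axis=1, batch_dims=1)
# <tf.Tensor: ..., numpy=array([[3, 1], [5, 5]], dtype=int32)>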
def gather_nd_helper(params, indices):
indices_shape = tf.shape(indices)
params_shape = tf.shape(params)
num_index_dims = indices_shape[-1]
result_dim_sizes_list = [
tf.math.reduce_prod(params_shape[i + 1 :]) for i in range(len(params_shape) - 1)
] + [1]
result_dim_sizes = tf.convert_to_tensor(result_dim_sizes_list, dtype=indices.dtype)
implicit_indices_factor = result_dim_sizes[num_index_dims - 1]
flat_params = tf.reshape(params, (-1,))
new_shape = [1] * (len(indices_shape) - 1) + [num_index_dims]
indices_scales = tf.reshape(result_dim_sizes[0:num_index_dims], new_shape)
indices_for_flat_tiled = tf.reshape(
tf.reduce_sum(indices * indices_scales, -1, keepdims=True), (-1, 1)
)
indices_for_flat_tiled = tf.repeat(
indices_for_flat_tiled, implicit_indices_factor, axis=1
)
implicit_indices = tf.repeat(
tf.expand_dims(tf.range(implicit_indices_factor), 0),
indices_for_flat_tiled.shape[0],
axis=0,
)
indices_for_flat = indices_for_flat_tiled + implicit_indices
flat_indices_for_flat = tf.reshape(indices_for_flat, (-1,))
flat_gather = tf.gather(flat_params, flat_indices_for_flat)
res = tf.reshape(
flat_gather, tf.concat([indices_shape[:-1], params_shape[num_index_dims:]], 0)
)
return res
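# How the helper works, on a tiny case (illustrative): each index row is
# turned into a flat offset via the per-dimension strides, gathered from
# the flattened params, then reshaped back:
#
# >>> gather_nd_helper(tf.constant([[1, 2], [3, 4]]), tf.constant([[1, 0]]))
# <tf.Tensor: ..., numpy=array([3], dtype=int32)>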
def gather_nd(
params: Union[tf.Tensor, tf.Variable],
indices: Union[tf.Tensor, tf.Variable],
/,
*,
batch_dims: int = 0,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
ivy.utils.assertions.check_gather_nd_input_valid(params, indices, batch_dims)
try:
return tf.gather_nd(params, indices, batch_dims=batch_dims)
except Exception: # fall back to compositional implementation
batch_dims %= len(params.shape)
result = []
if batch_dims == 0:
result = gather_nd_helper(params, indices)
else:
for b in range(batch_dims):
if b == 0:
zip_list = list(zip(params, indices))
else:
zip_list = [
(p, i)
for z in [zip(p1, i1) for p1, i1 in zip_list]
for p, i in z
]
for z in zip_list:
p, i = z
r = gather_nd_helper(p, i)
result.append(r)
result = tf.stack(result)
result = tf.reshape(
result, tf.concat([params.shape[0:batch_dims], result.shape[1:]], 0)
)
return result
def get_num_dims(x, /, *, as_array=False):
return (
tf.cast(tf.shape(tf.shape(x))[0], tf.int64)
if as_array
else int(tf.shape(tf.shape(x)))
)
def inplace_arrays_supported():
return False
def inplace_decrement(
x: Union[ivy.Array, tf.Tensor], val: Union[ivy.Array, tf.Tensor]
) -> ivy.Array:
(x_native, val_native), _ = ivy.args_to_native(x, val)
if _is_variable(x_native):
x_native.assign(x_native - val_native)
if ivy.is_ivy_array(x):
x.data = x_native
else:
x = ivy.Array(x_native)
elif ivy.is_ivy_array(x):
x.data -= val_native
else:
        x = ivy.Array(x_native - val_native)
return x
def inplace_increment(
x: Union[ivy.Array, tf.Tensor], val: Union[ivy.Array, tf.Tensor]
) -> ivy.Array:
(x_native, val_native), _ = ivy.args_to_native(x, val)
if _is_variable(x_native):
x_native.assign(x_native + val_native)
if ivy.is_ivy_array(x):
x.data = x_native
else:
x = ivy.Array(x_native)
else:
x_native += val_native
if ivy.is_ivy_array(x):
x._data = x_native
else:
x = ivy.Array(x_native)
return x
def inplace_update(
x: Union[ivy.Array, tf.Tensor],
val: Union[ivy.Array, tf.Tensor],
/,
*,
ensure_in_backend: bool = False,
keep_input_dtype: bool = False,
) -> ivy.Array:
if ivy.is_array(x) and ivy.is_array(val):
_check_inplace_update_support(x, ensure_in_backend)
if keep_input_dtype:
val = ivy.astype(val, x.dtype)
(x_native, val_native), _ = ivy.args_to_native(x, val)
if _is_variable(x_native):
x_native.assign(val_native)
if ivy.is_ivy_array(x):
x.data = x_native
else:
x = ivy.Array(x_native)
elif ivy.is_ivy_array(x):
x.data = val_native
# Handle view updates
if ivy.exists(x._base):
base = x._base
base_idx = ivy.arange(base.size).reshape(base.shape)
for fn, args, kwargs, index in x._manipulation_stack:
kwargs["copy"] = True
base_idx = ivy.__dict__[fn](base_idx, *args, **kwargs)
base_idx = base_idx[index] if ivy.exists(index) else base_idx
base_flat = tf.reshape(base.data, -1)
base_flat = tf.tensor_scatter_nd_update(
base_flat,
tf.reshape(base_idx.data, (-1, 1)),
tf.reshape(val_native, -1),
)
base.data = tf.reshape(base_flat, base.shape)
for ref in base._view_refs:
view = ref()
if ivy.exists(view) and view is not x:
_update_view(view, base)
else:
for ref in x._view_refs:
view = ref()
if ivy.exists(view):
_update_view(view, x)
else:
x = ivy.to_ivy(x_native)
return x
else:
return val
def _update_view(view, base):
for fn, args, kwargs, index in view._manipulation_stack:
base = ivy.__dict__[fn](base, *args, **kwargs)
base = base[index] if ivy.exists(index) else base
view.data = base.data
return view
def inplace_variables_supported():
return True
def multiprocessing(context: Optional[str] = None):
return (
_multiprocessing if context is None else _multiprocessing.get_context(context)
)
def scatter_flat(
indices: Union[tf.Tensor, tf.Variable],
updates: Union[tf.Tensor, tf.Variable],
/,
*,
size: Optional[int] = None,
reduction: str = "sum",
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
    if indices.dtype not in (tf.int32, tf.int64):
if indices.dtype in [tf.int8, tf.int16, tf.uint8, tf.uint16]:
indices = tf.cast(indices, tf.int32)
else:
indices = tf.cast(indices, tf.int64)
target = out
target_given = ivy.exists(target)
if ivy.exists(size) and ivy.exists(target):
ivy.utils.assertions.check_equal(len(target.shape), 1, as_array=False)
ivy.utils.assertions.check_equal(target.shape[0], size, as_array=False)
    if not target_given:
        target = tf.zeros([size], dtype=updates.dtype)
    # apply the requested reduction whether or not a target was provided,
    # mirroring the structure of scatter_nd below
    if reduction == "max":
        res = tf.tensor_scatter_nd_max(target, tf.expand_dims(indices, -1), updates)
    elif reduction == "min":
        res = tf.tensor_scatter_nd_min(target, tf.expand_dims(indices, -1), updates)
    elif reduction == "replace":
        res = tf.tensor_scatter_nd_update(target, tf.expand_dims(indices, -1), updates)
    elif reduction == "sum":
        res = tf.tensor_scatter_nd_add(target, tf.expand_dims(indices, -1), updates)
else:
raise ivy.utils.exceptions.IvyException(
f'reduction is {reduction}, but it must be one of "sum", "min", "max" or'
' "replace"'
)
return ivy.inplace_update(out, res) if ivy.exists(out) else res
scatter_flat.support_native_out = True
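# Usage sketch (illustrative): with the default reduction="sum", duplicate
# indices accumulate into the same slot of a fresh zero tensor:
#
# >>> scatter_flat(tf.constant([0, 2, 0]), tf.constant([1, 2, 3]), size=4)
# <tf.Tensor: ..., numpy=array([4, 0, 2, 0], dtype=int32)>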
@with_unsupported_dtypes({"2.15.0 and below": ("bfloat16", "complex")}, backend_version)
def scatter_nd(
indices: Union[tf.Tensor, tf.Variable],
updates: Union[tf.Tensor, tf.Variable],
/,
shape: Optional[Union[ivy.NativeShape, Sequence[int]]] = None,
*,
reduction: str = "sum",
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
updates_dtype = updates.dtype
if ivy.exists(out):
dtype = ivy.promote_types(out.dtype, updates_dtype)
updates = tf.cast(
updates,
(ivy.as_native_dtype(dtype) if ivy.exists(out) else updates_dtype),
)
expected_shape = (
list(indices.shape[:-1]) + list(out.shape[indices.shape[-1] :])
if ivy.exists(out)
else list(indices.shape[:-1]) + list(shape[indices.shape[-1] :])
)
updates = _broadcast_to(updates, expected_shape)._data
if len(updates.shape) == 0:
indices = tf.expand_dims(indices, 0)
updates = tf.expand_dims(updates, 0)
# implementation
target = out
target_given = ivy.exists(target)
if ivy.exists(shape) and target_given:
ivy.utils.assertions.check_equal(
ivy.Shape(target.shape), ivy.Shape(shape), as_array=False
)
if not target_given:
shape = list(shape) if ivy.exists(shape) else list(out.shape)
target = tf.zeros(shape, dtype=updates.dtype)
if reduction == "sum":
res = tf.tensor_scatter_nd_add(target, indices, updates)
elif reduction == "min":
res = tf.tensor_scatter_nd_min(target, indices, updates)
elif reduction == "max":
res = tf.tensor_scatter_nd_max(target, indices, updates)
elif reduction == "mul":
updates = ivy.multiply(ivy.gather_nd(target, indices), updates).data
res = tf.tensor_scatter_nd_update(target, indices, updates)
elif reduction == "replace":
res = tf.tensor_scatter_nd_update(target, indices, updates)
else:
raise ivy.utils.exceptions.IvyException(
f'reduction is {reduction}, but it must be one of "sum", "min", "max",'
' "mul" or "replace"'
)
if ivy.exists(out):
return ivy.inplace_update(out, res)
return res
scatter_nd.support_native_out = True
def shape(
x: Union[tf.Tensor, tf.Variable],
/,
*,
as_array: bool = False,
) -> Union[tf.Tensor, ivy.Shape, ivy.Array]:
if as_array:
return ivy.array(tf.shape(x), dtype=ivy.default_int_dtype())
else:
return ivy.Shape(x.shape)
def vmap(
func: Callable,
in_axes: Union[int, Sequence[int], Sequence[None]] = 0,
out_axes: int = 0,
) -> Callable:
@ivy.output_to_native_arrays
@ivy.inputs_to_native_arrays
def _vmap(*args, **kwargs):
# convert args tuple to list to allow mutability using moveaxis ahead.
args = list(args)
# if in_axis is a non-integer, its length should be equal to pos args.
if isinstance(in_axes, (list, tuple)):
ivy.utils.assertions.check_equal(
len(args),
len(in_axes),
message="""in_axes should have a length equivalent to the number
of positional arguments to the function being vectorized or it
should be an integer""",
as_array=False,
)
# checking axis_size consistency
axis_size = set()
if isinstance(in_axes, int):
for arg in args:
axis_size.add(arg.shape[in_axes])
elif isinstance(in_axes, (list, tuple)):
for arg, axis in zip(args, in_axes):
if axis is not None:
axis_size.add(arg.shape[axis])
if len(axis_size) > 1:
raise ivy.utils.exceptions.IvyException(
"""Inconsistent sizes. All mapped axes should have the same size"""
)
# Making sure not all in_axes are None
if isinstance(in_axes, (list, tuple)):
ivy.utils.assertions.check_any(
[ivy.exists(ax) for ax in in_axes],
message="At least one of the axes should be specified (not None)",
as_array=False,
)
else:
ivy.utils.assertions.check_exists(
in_axes, message="single value in_axes should not be None"
)
# Handling None in in_axes by broadcasting the axis_size
if isinstance(in_axes, (tuple, list)) and None in in_axes:
none_axis_index = []
for index, axis in enumerate(in_axes):
if axis is None:
none_axis_index.append(index)
for none_mapped_axis in none_axis_index:
args[none_mapped_axis] = tf.broadcast_to(
args[none_mapped_axis],
(tuple(axis_size) + args[none_mapped_axis].shape),
)
# set up the axis to be mapped
if isinstance(in_axes, (tuple, list)):
for i in range(len(in_axes)):
args[i] = tf.experimental.numpy.moveaxis(args[i], in_axes[i], 0)
elif isinstance(in_axes, int):
args[0] = tf.experimental.numpy.moveaxis(args[0], in_axes, 0)
        # vectorisation: apply func to each slice along the mapped axis
        # and stack the results back together
arr_results = []
for arrays in zip(*args):
single_op = func(*arrays)
arr_results.append(single_op)
res = ivy.stack(arr_results)
if out_axes:
res = tf.experimental.numpy.moveaxis(res, 0, out_axes)
return res
return _vmap
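# Usage sketch (illustrative): mapping a dot product over the leading axis
# of both arguments:
#
# >>> a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
# >>> vmap(lambda x, y: tf.tensordot(x, y, axes=1))(a, a)
# <tf.Tensor: ..., numpy=array([ 5., 25.], dtype=float32)>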
@with_unsupported_dtypes({"2.15.0 and below": ("bfloat16", "complex")}, backend_version)
def isin(
elements: tf.Tensor,
test_elements: tf.Tensor,
/,
*,
assume_unique: bool = False,
invert: bool = False,
) -> tf.Tensor:
input_shape = elements.shape
if tf.rank(elements) == 0:
elements = tf.reshape(elements, [1])
if tf.rank(test_elements) == 0:
test_elements = tf.reshape(test_elements, [1])
if not assume_unique:
test_elements = tf.unique(tf.reshape(test_elements, [-1]))[0]
elements = tf.reshape(elements, [-1])
test_elements = tf.reshape(test_elements, [-1])
output = tf.reduce_any(
tf.equal(tf.expand_dims(elements, -1), test_elements), axis=-1
)
return tf.reshape(output, input_shape) ^ invert
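# Usage sketch (illustrative):
#
# >>> isin(tf.constant([1, 2, 3]), tf.constant([1, 3]))
# <tf.Tensor: ..., numpy=array([ True, False,  True])>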
def itemsize(x: Union[tf.Tensor, tf.Variable]) -> int:
return x.dtype.size
| ivy/ivy/functional/backends/tensorflow/general.py/0 | {
"file_path": "ivy/ivy/functional/backends/tensorflow/general.py",
"repo_id": "ivy",
"token_count": 8506
} | 22 |
# global
import tensorflow as tf
from typing import Union, Optional, Sequence
# local
import ivy
def all(
x: Union[tf.Tensor, tf.Variable],
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
if axis is None:
num_dims = len(x.shape)
axis = tuple(range(num_dims))
elif isinstance(axis, list):
axis = tuple(axis)
try:
return tf.reduce_all(tf.cast(x, tf.bool), axis=axis, keepdims=keepdims)
except tf.errors.InvalidArgumentError as e:
raise ivy.utils.exceptions.IvyIndexError(e) from e
def any(
x: Union[tf.Tensor, tf.Variable],
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
if axis is None:
num_dims = len(x.shape)
axis = tuple(range(num_dims))
elif isinstance(axis, list):
axis = tuple(axis)
try:
return tf.reduce_any(tf.cast(x, tf.bool), axis=axis, keepdims=keepdims)
except tf.errors.InvalidArgumentError as e:
raise ivy.utils.exceptions.IvyIndexError(e) from e
| ivy/ivy/functional/backends/tensorflow/utility.py/0 | {
"file_path": "ivy/ivy/functional/backends/tensorflow/utility.py",
"repo_id": "ivy",
"token_count": 551
} | 23 |
# global
import torch
from typing import Callable
# local
import ivy
from ivy.func_wrapper import inputs_to_native_arrays
from ivy.functional.ivy.gradients import (
_flatten_containers,
_rebuild_flattened_containers,
)
def bind_custom_gradient_function(func, custom_grad_fn):
class _CustomModule(torch.autograd.Function):
@staticmethod
def forward(ctx, x):
ret = ivy.to_native(func(x), nested=True, include_derived=True)
ctx.save_for_backward(x, ret)
return ret
@staticmethod
def backward(ctx, upstream):
grads = custom_grad_fn(
*ivy.to_ivy(
(ctx.saved_tensors, upstream), nested=True, include_derived=True
)
)
return ivy.to_native(grads, nested=True, include_derived=True)
custom_module = _CustomModule.apply
return inputs_to_native_arrays(custom_module)
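# Usage sketch (hypothetical callables, not part of this module): the
# backward callable receives the saved (input, output) pair and the
# upstream gradient, e.g. a straight-through estimator for round:
#
# >>> ste_round = bind_custom_gradient_function(
# ...     lambda x: ivy.round(x),            # forward pass
# ...     lambda saved, upstream: upstream,  # pass the gradient through
# ... )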
def vjp(func: Callable, *primals):
flattened_primals, ret_idxs = _flatten_containers(primals)
unique_keys = list(
{
ivy.index_nest(ret_idxs, i)
for i in ivy.nested_argwhere(ret_idxs, lambda x: isinstance(x, str))
}
)
def grad_fn(*x_in):
ret, idxs = _flatten_containers(
ivy.to_native(
func(
*ivy.to_ivy(
_rebuild_flattened_containers(x_in, ret_idxs), nested=True
)
),
nested=True,
include_derived=True,
)
)
        # replace the idxs with the unique keys
func_ret_idxs = torch.tensor(
ivy.nested_map(
lambda x: (
unique_keys.index(x)
if isinstance(x, str)
else -1 if x is None else x
),
idxs,
)
)
return (ret, func_ret_idxs)
primals_out, _vjpfun, func_ret_idxs = ivy.outputs_to_ivy_arrays(torch.func.vjp)(
grad_fn, *ivy.to_native(flattened_primals, nested=True), has_aux=True
)
func_ret_idxs = ivy.nested_map(
lambda x: unique_keys[x] if x >= 0 and x < len(unique_keys) else None,
func_ret_idxs.tolist(),
)
primals_out = _rebuild_flattened_containers(primals_out, func_ret_idxs)
def vjpfun(*x_in):
        ivy.utils.assertions.check_isinstance(x_in, tuple)
return _rebuild_flattened_containers(
ivy.to_ivy(
_vjpfun(ivy.to_native(_flatten_containers(x_in)[0], nested=True)),
nested=True,
include_derived=True,
),
ret_idxs,
)
return (primals_out, vjpfun)
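# Usage sketch (illustrative): for f(x) = x ** 2 the VJP at x with
# cotangent v is 2 * x * v:
#
# >>> primals_out, f_vjp = vjp(lambda x: x ** 2, ivy.array([3.0]))
# >>> f_vjp((ivy.array([1.0]),))  # vjpfun expects a tuple of cotangents
# (ivy.array([6.]),)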
def jvp(func: Callable, primals, tangents):
flattened_primals, ret_idxs = _flatten_containers(primals)
flattened_tangents, _ = _flatten_containers(tangents)
unique_keys = list(
{
ivy.index_nest(ret_idxs, i)
for i in ivy.nested_argwhere(ret_idxs, lambda x: isinstance(x, str))
}
)
def grad_fn(*x_in):
ret, idxs = _flatten_containers(
ivy.to_native(
func(
*ivy.to_ivy(
_rebuild_flattened_containers(x_in, ret_idxs), nested=True
)
),
nested=True,
include_derived=True,
)
)
        # replace the idxs with the unique keys
func_ret_idxs = torch.tensor(
ivy.nested_map(
lambda x: (
unique_keys.index(x)
if isinstance(x, str)
else -1 if x is None else x
),
idxs,
)
)
return (ret, func_ret_idxs)
primals_out, tangents_out, func_ret_idxs = ivy.outputs_to_ivy_arrays(
torch.func.jvp
)(
grad_fn,
ivy.to_native(flattened_primals, nested=True),
ivy.to_native(flattened_tangents, nested=True),
has_aux=True,
)
func_ret_idxs = ivy.nested_map(
lambda x: unique_keys[x] if x >= 0 and x < len(unique_keys) else None,
func_ret_idxs.tolist(),
)
primals_out = _rebuild_flattened_containers(primals_out, func_ret_idxs)
tangents_out = _rebuild_flattened_containers(tangents_out, func_ret_idxs)
return (primals_out, tangents_out)
| ivy/ivy/functional/backends/torch/experimental/gradients.py/0 | {
"file_path": "ivy/ivy/functional/backends/torch/experimental/gradients.py",
"repo_id": "ivy",
"token_count": 2407
} | 24 |
# global
import torch
from typing import Union, Optional, Tuple, Literal, List, NamedTuple, Sequence
from collections import namedtuple
# local
import ivy
from ivy import inf
from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
from . import backend_version
from .elementwise import _cast_for_unary_op
# Array API Standard #
# -------------------#
@with_unsupported_dtypes(
{"2.2 and below": ("bfloat16", "float16", "complex")},
backend_version,
)
def cholesky(
x: torch.Tensor, /, *, upper: bool = False, out: Optional[torch.Tensor] = None
) -> torch.Tensor:
if not upper:
return torch.linalg.cholesky(x, out=out)
else:
ret = torch.transpose(
torch.linalg.cholesky(
torch.transpose(x, dim0=len(x.shape) - 1, dim1=len(x.shape) - 2)
),
dim0=len(x.shape) - 1,
dim1=len(x.shape) - 2,
)
if ivy.exists(out):
return ivy.inplace_update(out, ret)
return ret
cholesky.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16", "complex")}, backend_version)
def cross(
x1: torch.Tensor,
x2: torch.Tensor,
/,
*,
axisa: int = -1,
axisb: int = -1,
axisc: int = -1,
axis: Optional[int] = None,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
    x1, x2 = ivy.promote_types_of_inputs(x1, x2)
    if axis is not None:
        # an explicit axis overrides axisa, axisb and axisc
        return torch.linalg.cross(input=x1, other=x2, dim=axis)
    if axisa == axisb == axisc:
        return torch.linalg.cross(input=x1, other=x2, dim=axisa)
    # move both vector axes to dim 1, take the cross product there,
    # then move the result axis to axisc
    x1 = torch.transpose(x1, axisa, 1)
    x2 = torch.transpose(x2, axisb, 1)
    return torch.transpose(
        torch.linalg.cross(input=x1, other=x2, dim=1, out=out), dim0=axisc, dim1=1
    )
cross.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, backend_version)
def det(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
return torch.linalg.det(x, out=out)
det.support_native_out = True
def diagonal(
x: torch.Tensor,
/,
*,
offset: int = 0,
axis1: int = -2,
axis2: int = -1,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
return torch.diagonal(x, offset=offset, dim1=axis1, dim2=axis2)
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, backend_version)
def eigh(
x: torch.Tensor, /, *, UPLO: str = "L", out: Optional[torch.Tensor] = None
) -> Tuple[torch.Tensor]:
result_tuple = NamedTuple(
"eigh", [("eigenvalues", torch.Tensor), ("eigenvectors", torch.Tensor)]
)
eigenvalues, eigenvectors = torch.linalg.eigh(x, UPLO=UPLO, out=out)
return result_tuple(eigenvalues, eigenvectors)
eigh.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, backend_version)
def eigvalsh(
x: torch.Tensor, /, *, UPLO: str = "L", out: Optional[torch.Tensor] = None
) -> torch.Tensor:
return torch.linalg.eigvalsh(x, UPLO=UPLO, out=out)
eigvalsh.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16",)}, backend_version)
def inner(
x1: torch.Tensor, x2: torch.Tensor, /, *, out: Optional[torch.Tensor] = None
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
ret_dtype = x1.dtype
if ivy.is_int_dtype(x1):
# https://github.com/pytorch/pytorch/issues/103366
x1 = x1.long()
x2 = x2.long()
ret = torch.inner(x1, x2).type(ret_dtype)
if ivy.exists(out):
return ivy.inplace_update(out, ret)
return ret
return torch.inner(x1, x2, out=out)
inner.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, backend_version)
def inv(
x: torch.Tensor,
/,
*,
adjoint: bool = False,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if adjoint:
if x.dim() < 2:
raise ValueError("Input must be at least 2D")
x_adj = x.transpose(-2, -1).conj()
ret = torch.linalg.inv(x_adj)
else:
ret = torch.linalg.inv(x)
if ivy.exists(out):
return ivy.inplace_update(out, ret)
return ret
inv.support_native_out = True
@with_unsupported_dtypes(
{"2.2 and below": ("float16", "bfloat16", "bool")}, backend_version
)
def matmul(
x1: torch.Tensor,
x2: torch.Tensor,
/,
*,
transpose_a: bool = False,
transpose_b: bool = False,
adjoint_a: bool = False,
adjoint_b: bool = False,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
# torch does not support inplace matmul (same storage in out=)
# https://github.com/pytorch/pytorch/issues/58742
# https://github.com/pytorch/pytorch/issues/48900
if out is x1 or out is x2:
out = None
if transpose_a:
x1 = torch.swapaxes(x1, -1, -2)
if transpose_b:
x2 = torch.swapaxes(x2, -1, -2)
if adjoint_a:
x1 = torch.adjoint(x1)
if adjoint_b:
x2 = torch.adjoint(x2)
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return torch.matmul(x1, x2, out=out)
matmul.support_native_out = True
@with_supported_dtypes({"2.2 and below": ("float", "complex")}, backend_version)
def matrix_norm(
x: torch.Tensor,
/,
*,
ord: Union[int, float, Literal[inf, -inf, "fro", "nuc"]] = "fro",
axis: Tuple[int, int] = (-2, -1),
keepdims: bool = False,
dtype: Optional[torch.dtype] = None,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
ret = torch.linalg.matrix_norm(
x, ord=ord, dim=axis, keepdim=keepdims, dtype=dtype, out=out
)
return ret
matrix_norm.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, backend_version)
def eig(
x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None
) -> Tuple[torch.Tensor]:
result_tuple = NamedTuple(
"eig", [("eigenvalues", torch.Tensor), ("eigenvectors", torch.Tensor)]
)
eigenvalues, eigenvectors = torch.linalg.eig(x, out=out)
return result_tuple(eigenvalues, eigenvectors)
eig.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, backend_version)
def matrix_power(
x: torch.Tensor, n: int, /, *, out: Optional[torch.Tensor] = None
) -> torch.Tensor:
return torch.linalg.matrix_power(x, n, out=out)
matrix_power.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, backend_version)
def matrix_rank(
x: torch.Tensor,
/,
*,
atol: Optional[Union[float, Tuple[float]]] = None,
rtol: Optional[Union[float, Tuple[float]]] = None,
hermitian: Optional[bool] = False,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if (x.ndim < 2) or (0 in x.shape):
return torch.tensor(0, dtype=torch.int64)
# we don't use the native matrix_rank function because the behaviour of the
# tolerance argument is difficult to unify
# return torch.linalg.matrix_rank(
# x, atol=atol, rtol=rtol, hermitian=hermitian, out=out
# )
if hermitian:
svd_values = torch.abs(torch.linalg.eigvalsh(x))
else:
svd_values = torch.linalg.svdvals(x)
sigma = torch.max(svd_values, axis=-1, keepdim=False)[0]
atol = (
atol
if atol is not None
else torch.finfo(x.dtype).eps * max(x.shape[-2:]) * sigma
)
rtol = rtol if rtol is not None else 0.0
atol = _cast_for_unary_op(atol)
rtol = _cast_for_unary_op(rtol)
tol = torch.maximum(atol, rtol * sigma)
# make sure it's broadcastable again with svd_values
tol = torch.unsqueeze(tol, dim=-1)
ret = torch.count_nonzero(svd_values > tol, dim=-1)
return ret
matrix_rank.support_native_out = True
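# Worked example (illustrative): [[1., 2.], [2., 4.]] has singular values
# ~[5., 0.]; the default atol is eps * max(m, n) * sigma_max, so only one
# singular value clears the tolerance:
#
# >>> matrix_rank(torch.tensor([[1.0, 2.0], [2.0, 4.0]]))
# tensor(1)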
def matrix_transpose(
x: torch.Tensor, /, *, conjugate: bool = False, out: Optional[torch.Tensor] = None
) -> torch.Tensor:
if conjugate:
x = torch.conj(x)
return torch.swapaxes(x, -1, -2)
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, backend_version)
def outer(
x1: torch.Tensor,
x2: torch.Tensor,
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return torch.outer(x1, x2, out=out)
outer.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, backend_version)
def pinv(
x: torch.Tensor,
/,
*,
rtol: Optional[Union[float, Tuple[float]]] = None,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if rtol is None:
return torch.linalg.pinv(x, out=out)
return torch.linalg.pinv(x, rtol, out=out)
pinv.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, backend_version)
def tensorsolve(
x1: torch.Tensor,
x2: torch.Tensor,
/,
*,
axes: Optional[Union[int, Tuple[List[int], List[int]]]] = None,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
return torch.linalg.tensorsolve(x1, x2, dims=axes)
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, backend_version)
def qr(
x: torch.Tensor,
/,
*,
mode: str = "reduced",
out: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
) -> Tuple[torch.Tensor, torch.Tensor]:
res = namedtuple("qr", ["Q", "R"])
if mode == "reduced":
q, r = torch.qr(x, some=True, out=out)
ret = res(q, r)
elif mode == "complete":
q, r = torch.qr(x, some=False, out=out)
ret = res(q, r)
else:
raise ivy.utils.exceptions.IvyException(
"Only 'reduced' and 'complete' qr modes are allowed for the torch backend."
)
return ret
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, backend_version)
def slogdet(
x: torch.Tensor,
/,
) -> Tuple[torch.Tensor, torch.Tensor]:
results = NamedTuple(
"slogdet", [("sign", torch.Tensor), ("logabsdet", torch.Tensor)]
)
sign, logabsdet = torch.linalg.slogdet(x)
return results(sign, logabsdet)
slogdet.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, backend_version)
def solve(
x1: torch.Tensor,
x2: torch.Tensor,
/,
*,
adjoint: bool = False,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if adjoint:
x1 = torch.adjoint(x1)
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
expanded_last = False
if len(x2.shape) <= 1:
if x2.shape[-1] == x1.shape[-1]:
expanded_last = True
x2 = torch.unsqueeze(x2, dim=1)
is_empty_x1 = x1.nelement() == 0
is_empty_x2 = x2.nelement() == 0
if is_empty_x1 or is_empty_x2:
for i in range(len(x1.shape) - 2):
x2 = torch.unsqueeze(x2, dim=0)
output_shape = list(torch.broadcast_shapes(x1.shape[:-2], x2.shape[:-2]))
output_shape.append(x2.shape[-2])
output_shape.append(x2.shape[-1])
ret = torch.Tensor([])
ret = torch.reshape(ret, output_shape)
else:
ret = torch.linalg.solve(x1, x2)
if expanded_last:
ret = torch.squeeze(ret, dim=-1)
return ret
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, backend_version)
def svd(
x: torch.Tensor, /, *, full_matrices: bool = True, compute_uv: bool = True
) -> Union[torch.Tensor, Tuple[torch.Tensor, ...]]:
if compute_uv:
results = namedtuple("svd", "U S Vh")
U, D, VT = torch.linalg.svd(x, full_matrices=full_matrices)
return results(U, D, VT)
else:
results = namedtuple("svd", "S")
svd = torch.linalg.svd(x, full_matrices=full_matrices)
# torch.linalg.svd returns a tuple with U, S, and Vh
D = svd[1]
return results(D)
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, backend_version)
def svdvals(
x: torch.Tensor,
/,
*,
driver: Optional[str] = None,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
return torch.linalg.svdvals(x, driver=driver, out=out)
svdvals.support_native_out = True
# ToDo: re-add int32 support once
# (https://github.com/pytorch/pytorch/issues/84530) is fixed
@with_supported_dtypes({"2.2 and below": ("float32",)}, backend_version)
def tensordot(
x1: torch.Tensor,
x2: torch.Tensor,
/,
*,
axes: Union[int, Tuple[List[int], List[int]]] = 2,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
dtype = ivy.as_native_dtype(ivy.promote_types(x1.dtype, x2.dtype))
# handle tensordot for axes==0
# otherwise call with axes
if axes == 0:
ret = (x1.reshape(x1.size() + (1,) * x2.dim()) * x2).type(dtype)
else:
ret = torch.tensordot(x1, x2, dims=axes).type(dtype)
return ret
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, backend_version)
def trace(
x: torch.Tensor,
/,
*,
offset: int = 0,
axis1: int = 0,
axis2: int = 1,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if len(x) == 0:
return ivy.array([])
ret = torch.diagonal(x, offset=offset, dim1=axis1, dim2=axis2)
ret = torch.sum(ret, dim=-1)
return ret
def vecdot(
x1: torch.Tensor,
x2: torch.Tensor,
/,
*,
axis: int = -1,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
dtype = ivy.as_native_dtype(ivy.promote_types(x1.dtype, x2.dtype))
if dtype != "float64":
x1, x2 = x1.to(dtype=torch.float32), x2.to(dtype=torch.float32)
if ivy.exists(out):
if ivy.as_ivy_dtype(out.dtype) == ivy.as_ivy_dtype(x1.dtype):
return torch.tensordot(x1, x2, dims=([axis], [axis]), out=out)
return ivy.inplace_update(
out, torch.tensordot(x1, x2, dims=([axis], [axis])).to(out.dtype)
)
return torch.tensordot(x1, x2, dims=([axis], [axis])).to(dtype)
vecdot.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("integer",)}, backend_version)
def vector_norm(
x: torch.Tensor,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
ord: Union[int, float, Literal[inf, -inf]] = 2,
dtype: Optional[torch.dtype] = None,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
# TODO: remove the as_native_dtype call once there are wrappers that handle dtype
# conversion automatically in the backends
dtype = ivy.as_native_dtype(dtype)
if dtype and x.dtype != dtype:
x = x.type(dtype)
return torch.linalg.vector_norm(x, ord, axis, keepdims, out=out)
vector_norm.support_native_out = True
# Extra #
# ----- #
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, backend_version)
def diag(
x: torch.Tensor,
/,
*,
k: int = 0,
out: Optional[torch.Tensor] = None,
) -> torch.tensor:
return torch.diag(x, diagonal=k)
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, backend_version)
def vander(
x: torch.tensor,
/,
*,
N: Optional[int] = None,
increasing: bool = False,
out: Optional[torch.tensor] = None,
) -> torch.tensor:
# torch.vander hasn't been used as it produces 0 gradients
N = ivy.default(N, x.shape[-1])
start, stop, step = N - 1, -1, -1
if increasing:
start, stop, step = 0, N, 1
ret = torch.pow(
torch.transpose(torch.unsqueeze(x, 0), 0, 1),
torch.arange(start, stop, step),
out=out,
)
if ret.dtype != x.dtype:
return ret.to(x.dtype)
return ret
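# Usage sketch (illustrative): with the default decreasing powers, column
# j holds x ** (N - 1 - j):
#
# >>> vander(torch.tensor([1.0, 2.0, 3.0]))
# tensor([[1., 1., 1.],
#         [4., 2., 1.],
#         [9., 3., 1.]])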
@with_unsupported_dtypes(
{
"2.2 and below": (
"complex",
"unsigned",
)
},
backend_version,
)
def vector_to_skew_symmetric_matrix(
vector: torch.Tensor, /, *, out: Optional[torch.Tensor] = None
) -> torch.Tensor:
batch_shape = list(vector.shape[:-1])
# BS x 3 x 1
vector_expanded = torch.unsqueeze(vector, -1)
# BS x 1 x 1
a1s = vector_expanded[..., 0:1, :]
a2s = vector_expanded[..., 1:2, :]
a3s = vector_expanded[..., 2:3, :]
# BS x 1 x 1
zs = torch.zeros(batch_shape + [1, 1], device=vector.device, dtype=vector.dtype)
# BS x 1 x 3
row1 = torch.cat((zs, -a3s, a2s), -1)
row2 = torch.cat((a3s, zs, -a1s), -1)
row3 = torch.cat((-a2s, a1s, zs), -1)
# BS x 3 x 3
return torch.cat((row1, row2, row3), -2, out=out)
vector_to_skew_symmetric_matrix.support_native_out = True
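# Worked example (illustrative): for v = [a1, a2, a3] the result is
# [[0, -a3, a2], [a3, 0, -a1], [-a2, a1, 0]], so skew(v) @ w equals
# cross(v, w) for any w:
#
# >>> vector_to_skew_symmetric_matrix(torch.tensor([1.0, 2.0, 3.0]))
# tensor([[ 0., -3.,  2.],
#         [ 3.,  0., -1.],
#         [-2.,  1.,  0.]])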
| ivy/ivy/functional/backends/torch/linear_algebra.py/0 | {
"file_path": "ivy/ivy/functional/backends/torch/linear_algebra.py",
"repo_id": "ivy",
"token_count": 7625
} | 25 |
# global
import sys
# local
import ivy
from ivy.functional.frontends import set_frontend_to_specific_version
from . import config
from . import array
from .array import *
from . import general_functions
from .general_functions import *
from . import lax
from . import nn
from . import numpy
from . import random
from . import _src
from ._src import tree_util
_frontend_array = numpy.array
# setting to specific version #
# --------------------------- #
if ivy.is_local():
module = ivy.utils._importlib.import_cache[__name__]
else:
module = sys.modules[__name__]
__version__ = set_frontend_to_specific_version(module)
| ivy/ivy/functional/frontends/jax/__init__.py/0 | {
"file_path": "ivy/ivy/functional/frontends/jax/__init__.py",
"repo_id": "ivy",
"token_count": 193
} | 26 |
from . import nn
from . import ops
from . import numpy
| ivy/ivy/functional/frontends/mindspore/__init__.py/0 | {
"file_path": "ivy/ivy/functional/frontends/mindspore/__init__.py",
"repo_id": "ivy",
"token_count": 17
} | 27 |
import ivy
from ivy.functional.frontends.mxnet.func_wrapper import to_ivy_arrays_and_back
@to_ivy_arrays_and_back
def beta(a, b, size=None, dtype=None, device=None):
return ivy.experimental.beta(a, b, shape=size, dtype=dtype, device=device)
@to_ivy_arrays_and_back
def chisquare(df, size=None, dtype=None, device=None):
return ivy.experimental.gamma(
df * 0.5,
0.5,
shape=size,
dtype=dtype,
device=device,
)
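# Note on the reduction above: a chi-square distribution with df degrees
# of freedom equals a Gamma distribution with shape df / 2 and rate 1 / 2
# (equivalently scale 2), which is presumably the parameterisation that
# ivy.experimental.gamma uses here.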
@to_ivy_arrays_and_back
def gamma(shape, scale=1.0, size=None, dtype=None, device=None, out=None):
return ivy.experimental.gamma(
shape, scale, shape=size, dtype=dtype, device=device, out=out
)
@to_ivy_arrays_and_back
def multinomial(n, pvals, size=None, **kwargs):
    assert not ivy.exists(size) or 0 < len(size) < 3
batch_size = 1
if ivy.exists(size):
if len(size) == 2:
batch_size = size[0]
num_samples = size[1]
else:
num_samples = size[0]
else:
num_samples = len(pvals)
return ivy.multinomial(n, num_samples, batch_size=batch_size, probs=pvals, **kwargs)
@to_ivy_arrays_and_back
def normal(loc=0.0, scale=1.0, size=None, dtype=None, device=None, out=None):
return ivy.random_normal(
mean=loc, std=scale, shape=size, device=device, dtype=dtype, out=out
)
@to_ivy_arrays_and_back
def power(a, size=None, dtype=None, device=None, out=None):
# special case of beta function
b = ivy.ones_like(a)
return ivy.experimental.beta(a, b, shape=size, dtype=dtype, device=device, out=out)
@to_ivy_arrays_and_back
def rand(*size, **kwargs):
return ivy.random_uniform(shape=size, **kwargs)
@to_ivy_arrays_and_back
def randint(low, high=None, size=None, dtype=None, device=None, out=None):
return ivy.randint(low, high, shape=size, device=device, dtype=dtype, out=out)
@to_ivy_arrays_and_back
def shuffle(x, axis=0):
    return ivy.shuffle(x, axis)
@to_ivy_arrays_and_back
def uniform(low=0.0, high=1.0, size=None, dtype=None, device=None, out=None):
return ivy.random_uniform(
low=low, high=high, shape=size, device=device, dtype=dtype, out=out
)
| ivy/ivy/functional/frontends/mxnet/numpy/random.py/0 | {
"file_path": "ivy/ivy/functional/frontends/mxnet/numpy/random.py",
"repo_id": "ivy",
"token_count": 1009
} | 28 |
# local
import ivy
from ivy.functional.frontends.numpy.func_wrapper import (
to_ivy_arrays_and_back,
from_zero_dim_arrays_to_scalar,
)
@to_ivy_arrays_and_back
def eig(a):
return ivy.eig(a)
@to_ivy_arrays_and_back
@from_zero_dim_arrays_to_scalar
def eigh(a, /, UPLO="L"):
return ivy.eigh(a, UPLO=UPLO)
@to_ivy_arrays_and_back
def eigvals(a):
return ivy.eig(a)[0]
@to_ivy_arrays_and_back
@from_zero_dim_arrays_to_scalar
def eigvalsh(a, /, UPLO="L"):
return ivy.eigvalsh(a, UPLO=UPLO)
| ivy/ivy/functional/frontends/numpy/linalg/matrix_eigenvalues.py/0 | {
"file_path": "ivy/ivy/functional/frontends/numpy/linalg/matrix_eigenvalues.py",
"repo_id": "ivy",
"token_count": 271
} | 29 |
from . import adding_and_removing_elements
from .adding_and_removing_elements import *
from . import basic_operations
from .basic_operations import *
from . import changing_array_shape
from .changing_array_shape import *
from . import changing_kind_of_array
from .changing_kind_of_array import *
from . import changing_number_of_dimensions
from .changing_number_of_dimensions import *
from . import padding_arrays
from .padding_arrays import *
from . import joining_arrays
from .joining_arrays import *
from . import rearranging_elements
from .rearranging_elements import *
from . import splitting_arrays
from .splitting_arrays import *
from . import tiling_arrays
from .tiling_arrays import *
from . import transpose_like_operations
from .transpose_like_operations import *
| ivy/ivy/functional/frontends/numpy/manipulation_routines/__init__.py/0 | {
"file_path": "ivy/ivy/functional/frontends/numpy/manipulation_routines/__init__.py",
"repo_id": "ivy",
"token_count": 225
} | 30 |
# global
import ivy
# local
from ivy.functional.frontends.numpy.func_wrapper import (
to_ivy_arrays_and_back,
handle_numpy_out,
handle_numpy_dtype,
from_zero_dim_arrays_to_scalar,
handle_numpy_casting,
)
# --- Helpers --- #
# --------------- #
@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
@handle_numpy_casting
@from_zero_dim_arrays_to_scalar
def _nextafter(
x1,
x2,
/,
out=None,
*,
where=True,
casting="same_kind",
order="K",
dtype=None,
subok=True,
):
return ivy.nextafter(x1, x2, out=out)
@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
@handle_numpy_casting
@from_zero_dim_arrays_to_scalar
def _signbit(
x,
/,
out=None,
*,
where=True,
casting="safe",
order="K",
dtype=None,
subok=True,
):
x = ivy.astype(x, ivy.float64)
return ivy.logical_or(ivy.less(x, 0), ivy.atan2(0.0, x) == ivy.pi, out=out)
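# Why the composition works (illustrative, assuming numpy imported as np):
# ivy.less(x, 0) misses negative zero, while atan2(0.0, x) equals pi
# exactly when x carries a negative sign bit, including -0.0, so the
# logical_or covers both cases:
#
# >>> _signbit(np.array([-2.0, -0.0, 0.0, 2.0]))
# array([ True,  True, False, False])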
@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
@handle_numpy_casting
@from_zero_dim_arrays_to_scalar
def _spacing(
x,
/,
out=None,
*,
where=True,
casting="same_kind",
order="K",
dtype=None,
subok=True,
):
# Implement the frontend function using Ivy compositions
if dtype is None:
dtype = ivy.dtype(x)
y = ivy.floor(ivy.log2(ivy.abs(x + 1)))
spacing = ivy.multiply(ivy.finfo(dtype).eps, ivy.pow(2, y))
if dtype != "float16":
spacing = ivy.sign(x) * spacing
return spacing
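# The composition above approximates numpy.spacing as
# eps(dtype) * 2 ** floor(log2(|x + 1|)), i.e. machine epsilon scaled by a
# (shifted) binary exponent of x; the + 1 shift presumably guards against
# log2(0) at x == 0, at the cost of accuracy away from zero.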
| ivy/ivy/functional/frontends/numpy/mathematical_functions/floating_point_routines.py/0 | {
"file_path": "ivy/ivy/functional/frontends/numpy/mathematical_functions/floating_point_routines.py",
"repo_id": "ivy",
"token_count": 742
} | 31 |