Column summary: repo (string, 856 classes), pull_number (int64, 3–127k), instance_id (string, length 12–58), issue_numbers (sequence, length 1–5), base_commit (string, length 40), patch (string, length 67–1.54M), test_patch (string, length 0–107M), problem_statement (string, length 3–307k), hints_text (string, length 0–908k), created_at (timestamp[s]).

repo | pull_number | instance_id | issue_numbers | base_commit | patch | test_patch | problem_statement | hints_text | created_at
---|---|---|---|---|---|---|---|---|---
OpenMined/PySyft | 1,320 | OpenMined__PySyft-1320 | ["938"] | ea0974a834cc3f621b2adb0b090903b848f77b95 | diff --git a/syft/tensor/int_tensor.py b/syft/tensor/int_tensor.py
--- a/syft/tensor/int_tensor.py
+++ b/syft/tensor/int_tensor.py
@@ -423,6 +423,19 @@ def view_(self, *args):
self.params_func("view_", new_dim, return_response=False)
return self
+ def exp(self):
+ """
+ Computes exponential of each element of the tensor.
+ Parameters
+ ----------
+ Returns
+ -------
+ IntTensor
+
+ Output tensor
+ """
+ return self.no_params_func("exp", return_response=True)
+
def unfold(self, dim, size, step):
"""
Returns a tensor which contains all slices of size `size` from `self` tensor in the dimension `dim`.
@@ -431,10 +444,7 @@ def unfold(self, dim, size, step):
dim (int) – dimension in which unfolding happens
size (int) – the size of each slice that is unfolded
step (int) – the step between each slice
- ----------
- Returns
- -------
- IntTensor
Output Tensor
"""
return self.params_func("unfold", [dim, size, step], return_response=True)
+
| Implement the Exp function for IntTensor on the CPU
As a Data Scientist using PySyft's IntTensor type, I want to leverage a wide range of methods which use our new Unity backend. For this ticket to be complete, the exp() should be added to our IntTensor class with the appropriate functionality, returning a new tensor.
If you want to take it to the next level, boost it implementing the operation on the GPU: Search for an issue titled like this but with "on the GPU" on the title!
HLSL (GPU language) tutorial here: [Direct Compute Programming Guide](https://github.com/OpenMined/OpenMined/blob/master/tutorials/DirectCompute_Programming_Guide.md)
Note, it is possible that when you look in the code you'll find that parts of this issue were completed on the backend while implementing another issue. This is normal as features do not live in isolation. If this is the case, just take it as a convenience that someone already built that part and press on!
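For orientation, a minimal usage sketch of the requested API (the `from syft import IntTensor` path and a running local Unity backend are assumptions; the constructor mirrors the one used in this dataset's test patches):
```python
import numpy as np
from syft import IntTensor  # import path is an assumption

a = IntTensor(np.array([0, 1, 2], dtype=np.int32))
b = a.exp()  # returns a new IntTensor; `a` is left unchanged
```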
### Every Reference You Might Need for this Issue:
- For a reference on the operation this performs check out [PyTorch](http://pytorch.org/docs/master/tensors.html)'s documentation.
- For a reference on how to program in Unity, check out [this basic tutorial](https://unity3d.com/learn/tutorials/projects/roll-ball-tutorial)
- For a reference on how to write HLSL code, check out [this basic tutorial](http://kylehalladay.com/blog/tutorial/2014/06/27/Compute-Shaders-Are-Nifty.html)
- For a complete tutorial on how to add functions to FloatTensor (step by step guide) see [this Google Document](https://docs.google.com/document/d/1WRd7gGLFN0Awtf86AICYIHtg3gfFWLBa5wYTthsB3i0/edit)
- For a reference on how other functions like this have been implemented check out the functions in [this notebook](https://github.com/OpenMined/OpenMined/blob/master/notebooks/Syft%20Tensor%20Example%20Notebook.ipynb) as well as the corresponding files that made it possible ([SyftController](https://github.com/OpenMined/OpenMined/blob/master/UnityProject/Assets/OpenMined/Network/Controllers/SyftController.cs), [FloatTensor.Ops](https://github.com/OpenMined/OpenMined/blob/master/UnityProject/Assets/OpenMined/Syft/Tensor/FloatTensor.Ops.cs), [FloatTensorShaders](https://github.com/OpenMined/OpenMined/blob/master/UnityProject/Assets/OpenMined/Syft/Tensor/Ops/Shaders/FloatTensorShaders.compute), [TensorOpsShaders](https://github.com/OpenMined/OpenMined/blob/master/UnityProject/Assets/OpenMined/Syft/Tensor/Ops/Shaders/TensorOpsShaders.compute), [FloatTensorTest](https://github.com/OpenMined/OpenMined/blob/master/UnityProject/Assets/OpenMined.Tests/Editor/FloatTensor/FloatTensorTest.cs) and [FloatTensorGpuTest](https://github.com/OpenMined/OpenMined/blob/master/UnityProject/Assets/OpenMined.Tests/Editor/FloatTensor/FloatTensorGpuTest.cs)).
- And of course, please consider our [Contributor Guidelines](https://github.com/OpenMined/Docs/blob/master/contributing/guidelines.md) for all contributions.
### Acceptance Criteria:
- [ ] comment below that you're picking up this project
- [ ] an example in a notebook in our [tests folder](https://github.com/OpenMined/OpenMined/tree/master/notebooks/tests) showing how to use the functionality from PySyft
- [ ] an integration test in PySyft demonstrating the correct CPU operation implemented over an IntTensor while connected to a Unity backend
- [ ] a Unit Test in OpenMined/OpenMined demonstrating the correct operation on a FloatTensor
- [ ] [inline](http://pytorch.org/docs/master/tensors.html) documentation in the python code. For inspiration on inline documentation, please check out PyTorch's documentation for this operator.
- [ ] Link your Pull Request back to this Issue so that it gets closed appropriately when the PR is merged.
| I'll take this | 2018-02-13T15:44:41 |
|
OpenMined/PySyft | 1,321 | OpenMined__PySyft-1321 | ["1266", "1266"] | 7584538af6ccb4afd2ed97c5e0ef88573688c896 | diff --git a/syft/tensor/int_tensor.py b/syft/tensor/int_tensor.py
--- a/syft/tensor/int_tensor.py
+++ b/syft/tensor/int_tensor.py
@@ -404,3 +404,19 @@ def view_(self, *args):
assert type(new_dim[0]) == int
self.params_func("view_", new_dim, return_response=False)
return self
+
+ def unfold(self, dim, size, step):
+ """
+ Returns a tensor which contains all slices of size `size` from `self` tensor in the dimension `dim`.
+
+ Parameters:
+ dim (int) – dimension in which unfolding happens
+ size (int) – the size of each slice that is unfolded
+ step (int) – the step between each slice
+ ----------
+ Returns
+ -------
+ IntTensor
+ Output Tensor
+ """
+ return self.params_func("unfold", [dim, size, step], return_response=True)
| diff --git a/tests/test_inttensor.py b/tests/test_inttensor.py
--- a/tests/test_inttensor.py
+++ b/tests/test_inttensor.py
@@ -41,3 +41,22 @@ def test_view():
a.view_(4, -1, 2)
c_v_ground = IntTensor(np.array([[[9, 3], [1, 0]], [[6, 8], [6, 6]], [[1, 6], [8, 6]], [[5, 0], [2, 0]]]))
assert(a.equal(c_v_ground))
+
+def test_unfold():
+ a = IntTensor(np.array([[-1, 2, 3, 5], [0, 4, 6, 7], [10, 3, 2, -5]], dtype=np.int32))
+
+ # Test1
+ expected_a = IntTensor(np.array([[[-1, 2, 3, 5], [0, 4, 6, 7]], [[0, 4, 6, 7], [10, 3, 2, -5]]], dtype=np.int32))
+ actual_a = a.unfold(0, 2, 1)
+ assert(actual_a.equal(expected_a))
+
+ # Test2
+ expected_a = IntTensor(np.array([[[-1, 2, 3], [0, 4, 6], [10, 3, 2]],
+ [[2, 3, 5], [4, 6, 7], [3, 2, -5]]], dtype=np.int32))
+ actual_a = a.unfold(1, 3, 1)
+ assert(actual_a.equal(expected_a))
+
+ # Test3
+ expected_a = IntTensor(np.array([[[-1, 2], [0, 4], [10, 3]], [[3, 5], [6, 7], [2, -5]]], dtype=np.int32))
+ actual_a = a.unfold(1, 2, 2)
+ assert(actual_a.equal(expected_a))
| Implement the Unfold function for IntTensor on the CPU
As a Data Scientist using PySyft's IntTensor type, I want to leverage a wide range of methods which use our new Unity backend. For this ticket to be complete, the unfold() should be added to our IntTensor class with the appropriate functionality, returning a new tensor.
If you want to take it to the next level, boost it implementing the operation on the GPU: Search for an issue titled like this but with "on the GPU" on the title!
HLSL (GPU language) tutorial here: [Direct Compute Programming Guide](https://github.com/OpenMined/OpenMined/blob/master/tutorials/DirectCompute_Programming_Guide.md)
Note, it is possible that when you look in the code you'll find that parts of this issue were completed on the backend while implementing another issue. This is normal as features do not live in isolation. If this is the case, just take it as a convenience that someone already built that part and press on!
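For intuition, here is the first case from this row's test patch worked out: unfolding a 3x4 tensor along `dim=0` with `size=2` and `step=1` yields the two overlapping pairs of rows (sketch assumes a running Unity backend and the imports used in the tests):
```python
a = IntTensor(np.array([[-1, 2, 3, 5],
                        [ 0, 4, 6, 7],
                        [10, 3, 2, -5]], dtype=np.int32))
b = a.unfold(0, 2, 1)
# slice over rows (0, 1), then rows (1, 2):
# [[[-1, 2, 3, 5], [ 0, 4, 6, 7]],
#  [[ 0, 4, 6, 7], [10, 3, 2, -5]]]
```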
### Every Reference You Might Need for this Issue:
- For a reference on the operation this performs check out [PyTorch](http://pytorch.org/docs/master/tensors.html)'s documentation.
- For a reference on how to program in Unity, check out [this basic tutorial](https://unity3d.com/learn/tutorials/projects/roll-ball-tutorial)
- For a reference on how to write HLSL code, check out [this basic tutorial](http://kylehalladay.com/blog/tutorial/2014/06/27/Compute-Shaders-Are-Nifty.html)
- For a complete tutorial on how to add functions to FloatTensor (step by step guide) see [this Google Document](https://docs.google.com/document/d/1WRd7gGLFN0Awtf86AICYIHtg3gfFWLBa5wYTthsB3i0/edit)
- For a reference on how other functions like this have been implemented check out the functions in [this notebook](https://github.com/OpenMined/OpenMined/blob/master/notebooks/Syft%20Tensor%20Example%20Notebook.ipynb) as well as the corresponding files that made it possible ([SyftController](https://github.com/OpenMined/OpenMined/blob/master/UnityProject/Assets/OpenMined/Network/Controllers/SyftController.cs), [FloatTensor.Ops](https://github.com/OpenMined/OpenMined/blob/master/UnityProject/Assets/OpenMined/Syft/Tensor/FloatTensor.Ops.cs), [FloatTensorShaders](https://github.com/OpenMined/OpenMined/blob/master/UnityProject/Assets/OpenMined/Syft/Tensor/Ops/Shaders/FloatTensorShaders.compute), [TensorOpsShaders](https://github.com/OpenMined/OpenMined/blob/master/UnityProject/Assets/OpenMined/Syft/Tensor/Ops/Shaders/TensorOpsShaders.compute), [FloatTensorTest](https://github.com/OpenMined/OpenMined/blob/master/UnityProject/Assets/OpenMined.Tests/Editor/FloatTensor/FloatTensorTest.cs) and [FloatTensorGpuTest](https://github.com/OpenMined/OpenMined/blob/master/UnityProject/Assets/OpenMined.Tests/Editor/FloatTensor/FloatTensorGpuTest.cs)).
- And of course, please consider our [Contributor Guidelines](https://github.com/OpenMined/Docs/blob/master/contributing/guidelines.md) for all contributions.
### Acceptance Criteria:
- [ ] comment below that you're picking up this project
- [ ] an example in a notebook in our [tests folder](https://github.com/OpenMined/OpenMined/tree/master/notebooks/tests) showing how to use the functionality from PySyft
- [ ] an integration test in PySyft demonstrating the correct CPU operation implemented over an IntTensor while connected to a Unity backend
- [ ] a Unit Test in OpenMined/OpenMined demonstrating the correct operation on a FloatTensor
- [ ] [inline](http://pytorch.org/docs/master/tensors.html) documentation in the python code. For inspiration on inline documentation, please check out PyTorch's documentation for this operator.
- [ ] Link your Pull Request back to this Issue so that it gets closed appropriately when the PR is merged.
| I am taking up this one. | 2018-02-15T21:02:02 |
OpenMined/PySyft | 1,324 | OpenMined__PySyft-1324 | ["424"] | 21061a3b3378e4d2234791a6539d1ce6fa33b9ef | diff --git a/syft/tensor/float_tensor.py b/syft/tensor/float_tensor.py
--- a/syft/tensor/float_tensor.py
+++ b/syft/tensor/float_tensor.py
@@ -149,6 +149,97 @@ def addmv(self, x, y):
copy.params_func("addmv_", [x.id, y.id])
return copy
+ def addr(self, *args):
+ """
+ Returns a new Tensor as the sum of beta*self and alpha*(vec1*vec2^T)
+
+ Parameters
+ ----------
+ beta: float
+ scalar to optionally multiply each element of mat
+ vec1: FloatTensor
+ first vector in outer product with vec2
+ vec2: FloatTensor
+ second vector in outer product with vec1
+ alpha: float
+ scalar to optionally multiply each element of the outer product of vec1, vec2
+
+ Returns
+ -------
+ FloatTensor
+ Output tensor
+ """
+
+ for arg in args:
+ if type(arg) == float:
+ if 'beta' not in locals() and 'vec1' not in locals():
+ beta = arg
+ elif 'alpha' not in locals() and 'vec2' in locals():
+ alpha = arg
+ else:
+ raise TypeException('Method `addr` accepts only 2 float params; they may be out of order with respect to the Tensors.')
+ elif type(arg) == FloatTensor:
+ if 'vec1' not in locals():
+ vec1 = arg
+ elif 'vec2' not in locals():
+ vec2 = arg
+ else:
+ raise TypeException('Method `addr` accepts only 2 FloatTensors')
+ else:
+ raise TypeException('Unexpected argument type')
+
+ if 'beta' not in locals():
+ beta = 1
+ if 'alpha' not in locals():
+ alpha = 1
+
+ return self.params_func("addr", [beta, vec1.id, vec2.id, alpha], return_response=True)
+
+ def addr_(self, *args):
+ """
+ Returns the sum of beta*self and alpha*(vec1*vec2') inline
+
+ Parameters
+ ----------
+ beta: float
+ scalar to optionally multiply each element of mat
+ vec1: FloatTensor
+ first vector in outer product with vec2
+ vec2: FloatTensor
+ second vector in outer product with vec1
+ alpha: float
+ scalar to optionally multiply each element of the outer product of vec1, vec2
+
+ Returns
+ -------
+ FloatTensor inline
+ """
+
+ for arg in args:
+ if type(arg) == float:
+ if 'beta' not in locals() and 'vec1' not in locals():
+ beta = arg
+ elif 'alpha' not in locals() and 'vec2' in locals():
+ alpha = arg
+ else:
+ raise TypeException('Method `addr` accepts only 2 float params; they may be out of order with respect to the Tensors.')
+ elif type(arg) == FloatTensor:
+ if 'vec1' not in locals():
+ vec1 = arg
+ elif 'vec2' not in locals():
+ vec2 = arg
+ else:
+ raise TypeException('Method `addr` accepts only 2 FloatTensors')
+ else:
+ raise TypeException('Unexpected argument type')
+
+ if 'beta' not in locals():
+ beta = 1
+ if 'alpha' not in locals():
+ alpha = 1
+
+ return self.params_func("addr_", [beta, vec1.id, vec2.id, alpha])
+
def asin(self):
"""
Returns a new Tensor with the arcsine of the elements of input.
| Implement addr Functionality in FloatTensor with CPU/GPU Backend Support
### User Story:
As a Data Scientist using PySyft's FloatTensor type, I want to leverage a wide range of methods which use our new Unity backend. For this ticket to be complete, the addr() should be added to our FloatTensor class with the appropriate functionality, returning a new tensor.
Furthermore, the function should automatically determine which backend to use (CPU/GPU) based on where the data is located. If the data is located on the CPU, a performant CPU implementation should run but if the data for a given FloatTensor is located on a GPU, it should be run using an HLSL kernel where appropriate. Obviously, if no GPU is available, it should automatically fall back to the CPU implementation.
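The semantics, matching the docstring in the merged patch above, are `beta*self + alpha*(vec1*vec2^T)`; a plain NumPy sketch of the same arithmetic (values are illustrative only):
```python
import numpy as np

beta, alpha = 1.0, 2.0
mat = np.zeros((2, 2))
vec1 = np.array([1.0, 2.0])
vec2 = np.array([3.0, 4.0])

# beta-scaled matrix plus alpha-scaled outer product of the two vectors
result = beta * mat + alpha * np.outer(vec1, vec2)
# [[ 6.,  8.],
#  [12., 16.]]
```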
### Every Reference You Might Need for this Issue:
- For a reference on the operation this performs check out [PyTorch](http://pytorch.org/docs/master/tensors.html)'s documentation.
- For a reference on how to program in Unity, check out [this basic tutorial](https://unity3d.com/learn/tutorials/projects/roll-ball-tutorial)
- For a reference on how to write HLSL code, check out [this basic tutorial](http://kylehalladay.com/blog/tutorial/2014/06/27/Compute-Shaders-Are-Nifty.html)
- For a complete tutorial on how to add functions to FloatTensor (step by step guide) see [this Google Document](https://docs.google.com/document/d/1WRd7gGLFN0Awtf86AICYIHtg3gfFWLBa5wYTthsB3i0/edit)
- For a reference on how other functions like this have been implemented check out the functions in [this notebook](https://github.com/OpenMined/OpenMined/blob/master/notebooks/Syft%20Tensor%20Example%20Notebook.ipynb) as well as the corresponding files that made it possible ([SyftController](https://github.com/OpenMined/OpenMined/blob/master/Assets/OpenMined/Network/Controllers/SyftController.cs), [FloatTensor.Ops](https://github.com/OpenMined/OpenMined/blob/master/Assets/OpenMined/Syft/Tensor/FloatTensor.Ops.cs), [FloatTensor.ShaderOps](https://github.com/OpenMined/OpenMined/blob/master/Assets/OpenMined/Syft/Tensor/FloatTensor.ShaderOps.cs), [FloatTensorShaders](https://github.com/OpenMined/OpenMined/blob/master/Assets/OpenMined/Syft/Math/Shaders/FloatTensorShaders.compute), and [FloatTensorTest](https://github.com/OpenMined/OpenMined/blob/master/Assets/OpenMined.Tests/Editor/FloatTensorTest.cs)).
- And of course, please consider our [Contributor Guidelines](https://github.com/OpenMined/Docs/blob/master/contributing/guidelines.md) for all contributions.
### Acceptance Criteria:
- [ ] an integration test in PySyft demonstrating the correct CPU and GPU operation implemented over a FloatTensor while connected to a Unity backend
- [ ] a Unit Test in OpenMined/OpenMined demonstrating the correct operation on a FloatTensor
- [ ] [inline](http://pytorch.org/docs/master/tensors.html) documentation in the python code. For inspiration on inline documentation, please check out PyTorch's documentation for this operator.
- [ ] Link your Pull Request back to this Issue so that it gets closed appropriately when the PR is merged.
| i'll take this and #425! | 2018-02-22T16:20:02 |
|
OpenMined/PySyft | 1,325 | OpenMined__PySyft-1325 | ["1267"] | 342a5ebb905091affc1ce60158b811682fb62eb4 | diff --git a/syft/tensor/int_tensor.py b/syft/tensor/int_tensor.py
--- a/syft/tensor/int_tensor.py
+++ b/syft/tensor/int_tensor.py
@@ -460,3 +460,15 @@ def unfold(self, dim, size, step):
"""
return self.params_func("unfold", [dim, size, step], return_response=True)
+ def unfold_(self, dim, size, step):
+ """
+ Computes all slices of size `size` from `self` tensor in the dimension `dim`. Inplace version of Unfold.
+
+ Parameters:
+ dim (int) – dimension in which unfolding happens
+ size (int) – the size of each slice that is unfolded
+ step (int) – the step between each slice
+ Output Tensor
+ """
+ return self.params_func("unfold_", [dim, size, step], return_response=True)
+
| diff --git a/tests/test_inttensor.py b/tests/test_inttensor.py
--- a/tests/test_inttensor.py
+++ b/tests/test_inttensor.py
@@ -69,3 +69,23 @@ def test_unfold():
actual_a = a.unfold(1, 2, 2)
assert(actual_a.equal(expected_a))
+def test_unfold_():
+ # Test1
+ a = IntTensor(np.array([[-1, 2, 3, 5], [0, 4, 6, 7], [10, 3, 2, -5]], dtype=np.int32))
+ expected_a = IntTensor(np.array([[[-1, 2, 3, 5], [0, 4, 6, 7]], [[0, 4, 6, 7], [10, 3, 2, -5]]], dtype=np.int32))
+ a.unfold_(0, 2, 1)
+ assert(a.equal(expected_a))
+
+ # Test2
+ a = IntTensor(np.array([[-1, 2, 3, 5], [0, 4, 6, 7], [10, 3, 2, -5]], dtype=np.int32))
+ expected_a = IntTensor(np.array([[[-1, 2, 3], [0, 4, 6], [10, 3, 2]],
+ [[2, 3, 5], [4, 6, 7], [3, 2, -5]]], dtype=np.int32))
+ a.unfold_(1, 3, 1)
+ assert(a.equal(expected_a))
+
+ # Test3
+ a = IntTensor(np.array([[-1, 2, 3, 5], [0, 4, 6, 7], [10, 3, 2, -5]], dtype=np.int32))
+ expected_a = IntTensor(np.array([[[-1, 2], [0, 4], [10, 3]], [[3, 5], [6, 7], [2, -5]]], dtype=np.int32))
+ a.unfold_(1, 2, 2)
+ assert(a.equal(expected_a))
+
| Implement the inline Unfold function for IntTensor on the CPU
As a Data Scientist using PySyft's IntTensor type, I want to leverage a wide range of methods which use our new Unity backend. For this ticket to be complete, the inline unfold() should be added to our IntTensor class with the appropriate functionality.
If you want to take it to the next level, boost it implementing the operation on the GPU: Search for an issue titled like this but with "on the GPU" on the title!
HLSL (GPU language) tutorial here: [Direct Compute Programming Guide](https://github.com/OpenMined/OpenMined/blob/master/tutorials/DirectCompute_Programming_Guide.md)
Note, it is possible that when you look in the code you'll find that parts of this issue were completed on the backend while implementing another issue. This is normal as features do not live in isolation. If this is the case, just take it as a convenience that someone already built that part and press on!
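The in-place variant mutates the tensor it is called on, as exercised in this row's test patch; a minimal sketch (same imports and Unity-backend assumption as the out-of-place `unfold` example earlier):
```python
a = IntTensor(np.array([[-1, 2, 3, 5], [0, 4, 6, 7], [10, 3, 2, -5]], dtype=np.int32))
a.unfold_(0, 2, 1)  # `a` itself now holds the two overlapping row-pair slices
```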
### Every Reference You Might Need for this Issue:
- For a reference on the operation this performs check out [PyTorch](http://pytorch.org/docs/master/tensors.html)'s documentation.
- For a reference on how to program in Unity, check out [this basic tutorial](https://unity3d.com/learn/tutorials/projects/roll-ball-tutorial)
- For a reference on how to write HLSL code, check out [this basic tutorial](http://kylehalladay.com/blog/tutorial/2014/06/27/Compute-Shaders-Are-Nifty.html)
- For a complete tutorial on how to add functions to FloatTensor (step by step guide) see [this Google Document](https://docs.google.com/document/d/1WRd7gGLFN0Awtf86AICYIHtg3gfFWLBa5wYTthsB3i0/edit)
- For a reference on how other functions like this have been implemented check out the functions in [this notebook](https://github.com/OpenMined/OpenMined/blob/master/notebooks/Syft%20Tensor%20Example%20Notebook.ipynb) as well as the corresponding files that made it possible ([SyftController](https://github.com/OpenMined/OpenMined/blob/master/UnityProject/Assets/OpenMined/Network/Controllers/SyftController.cs), [FloatTensor.Ops](https://github.com/OpenMined/OpenMined/blob/master/UnityProject/Assets/OpenMined/Syft/Tensor/FloatTensor.Ops.cs), [FloatTensorShaders](https://github.com/OpenMined/OpenMined/blob/master/UnityProject/Assets/OpenMined/Syft/Tensor/Ops/Shaders/FloatTensorShaders.compute), [TensorOpsShaders](https://github.com/OpenMined/OpenMined/blob/master/UnityProject/Assets/OpenMined/Syft/Tensor/Ops/Shaders/TensorOpsShaders.compute), [FloatTensorTest](https://github.com/OpenMined/OpenMined/blob/master/UnityProject/Assets/OpenMined.Tests/Editor/FloatTensor/FloatTensorTest.cs) and [FloatTensorGpuTest](https://github.com/OpenMined/OpenMined/blob/master/UnityProject/Assets/OpenMined.Tests/Editor/FloatTensor/FloatTensorGpuTest.cs)).
- And of course, please consider our [Contributor Guidelines](https://github.com/OpenMined/Docs/blob/master/contributing/guidelines.md) for all contributions.
### Acceptance Criteria:
- [ ] comment below that you're picking up this project
- [ ] an example in a notebook in our [tests folder](https://github.com/OpenMined/OpenMined/tree/master/notebooks/tests) showing how to use the functionality from PySyft
- [ ] an integration test in PySyft demonstrating the correct CPU operation implemented over an IntTensor while connected to a Unity backend
- [ ] a Unit Test in OpenMined/OpenMined demonstrating the correct operation on a FloatTensor
- [ ] [inline](http://pytorch.org/docs/master/tensors.html) documentation in the python code. For inspiration on inline documentation, please check out PyTorch's documentation for this operator.
- [ ] Link your Pull Request back to this Issue so that it gets closed appropriately when the PR is merged.
| Working on this one | 2018-02-26T10:22:52 |
OpenMined/PySyft | 1,361 | OpenMined__PySyft-1361 | ["1358"] | 7e2eecb95424995923e2492c8cbecb404215761c | diff --git a/syft/core/workers.py b/syft/core/workers.py
--- a/syft/core/workers.py
+++ b/syft/core/workers.py
@@ -1051,7 +1051,7 @@ def _listen(self):
message = self._process_buffer(connection)
# process message and generate response
- response = self.receive_msg(message)
+ response = self.receive_msg(message, False)
# send response back
connection.send(response.encode())
| SocketWorker error when decoding message
# SocketWorker error when decoding message
## Context
The issue appears when sending a Tensor to a remote SocketWorker, as in this example:
https://github.com/OpenMined/PySyft/blob/master/examples/SocketWorker%20Client.ipynb
**Test Configuration**:
A Docker container with:
Python 3.6.5
Jupyter Notebook 5.5.0
IPython 6.4.0
PyTorch 0.3.1
## Failure Information
An Exception occurs when calling the send method of a Tensor object.
The exact code is `x = torch.FloatTensor([1,2,3,4,5]).send(remote_client)`
And here is the exception (which appears in the 'remote' notebook):
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-2-2e80f7b137c2> in <module>()
3 port=8188,
4 is_pointer=False,
----> 5 is_client_worker=False)
/usr/local/lib/python3.6/dist-packages/syft-0.1.0-py3.6.egg/syft/core/workers.py in __init__(self, hook, hostname, port, max_connections, id, is_client_worker, objects, tmp_objects, known_workers, verbose, is_pointer, queue_size)
1033 if(not is_client_worker or self.is_pointer):
1034 print("Ready to receive commands...")
-> 1035 self._listen()
1036 else:
1037 print("Ready!")
/usr/local/lib/python3.6/dist-packages/syft-0.1.0-py3.6.egg/syft/core/workers.py in _listen(self)
1052
1053 # process message and generate response
-> 1054 response = self.receive_msg(message)
1055
1056 # send response back
/usr/local/lib/python3.6/dist-packages/syft-0.1.0-py3.6.egg/syft/core/workers.py in receive_msg(self, message_wrapper_json, is_binary)
194
195 if(is_binary):
--> 196 message_wrapper_json = message_wrapper_json.decode('utf-8')
197 message_wrapper = json.loads(message_wrapper_json)
198
AttributeError: 'str' object has no attribute 'decode'
```
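The root cause, which the one-line patch above addresses, is that `_listen` hands `receive_msg` a message that `_process_buffer` has already decoded to `str`, while `receive_msg` defaults to its binary path and calls `.decode('utf-8')` on it. The fix, exactly as in the diff:
```python
# message is already a decoded str here, so skip the binary-decode path
response = self.receive_msg(message, False)
```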
### Steps to Reproduce
1. Run the [SocketWorker Server notebook](https://github.com/OpenMined/PySyft/blob/master/examples/SocketWorker%20Server.ipynb)
2. Run the first three cells of the [SocketWorker Client notebook](https://github.com/OpenMined/PySyft/blob/master/examples/SocketWorker%20Client.ipynb)
| 2018-06-23T17:41:56 |
||
OpenMined/PySyft | 1,580 | OpenMined__PySyft-1580 | ["1532"] | 184e38405446fc26a108990253e714c55e255597 | diff --git a/syft/core/frameworks/torch/tensor.py b/syft/core/frameworks/torch/tensor.py
--- a/syft/core/frameworks/torch/tensor.py
+++ b/syft/core/frameworks/torch/tensor.py
@@ -783,7 +783,7 @@ def sum_get(self):
return res
def workers(self):
- return list(self.pointer_tensor_dict.keys())
+ return list(self.pointer_tensor_dict.keys())
def on(self, wrapper):
"""
@@ -1177,33 +1177,69 @@ def handle_call(cls, command, owner):
args = command['args']
kwargs = command['kwargs']
has_self = command['has_self']
-
if has_self:
self = command['self']
+ # Overriding prod, sum and cumsum or similar methods, which may have an argument but
+ # do not include the parameters when they're called. i.e. given an argument prod(
+ # dim=3) the value of args[0] is just 3, therefore it does not satisfy the second
+ # part of the if statement.
+ if attr in ('prod', 'sum', 'cumsum'):
+ if args == ():
+ raise AttributeError('Please provide a dimension')
+ if attr == 'prod':
+ response = cls.prod(self, *args, **kwargs)
+ elif attr == 'sum':
+ response = cls.sum(self, *args, **kwargs)
+ elif attr == 'cumsum':
+ response = cls.cumsum(self, *args, **kwargs)
+ return _FixedPrecisionTensor(response).wrap(True)
+
if attr == 'share':
response = self.share(*args, **kwargs)
return response
else:
- if attr in ('__add__', '__mul__', 'mm') and isinstance(args[0], sy._FixedPrecisionTensor):
+ if attr in ('__add__', '__mul__', '__sub__', '__div__', '__truediv__', 'mm',
+ 'matmul') and\
+ isinstance(args[0], sy._FixedPrecisionTensor):
+ # Compute the precision to keep
other = args[0]
assert (self.base == other.base) and (self.bits == other.bits), \
'Arguments should share the same base and field'
-
+ self_precision = self.precision_fractional
+ other_precision = other.precision_fractional
+
+ # If the precision fractional of self is different than other's raise an exception
+ # You may uncomment this line out if you do care about different precisions,
+ # the code will work either way.
+ # if not(self_precision == other_precision):
+ # raise ArithmeticError("The tensors have different precisions")
+ precision = max(self_precision, other_precision)
+ precision_loss = min(self_precision, other_precision)
# Perform the computation
torch_tensorvar = None
- if attr == '__add__':
- torch_tensorvar = cls.__add__(self, *args, **kwargs)
- torch_tensorvar = torch_tensorvar % self.field
- elif attr == '__mul__':
+ if attr == '__mul__':
torch_tensorvar = cls.__mul__(self, other)
- elif attr in ('mm',):
+ elif attr in ('mm',) or attr == 'matmul':
torch_tensorvar = cls.mm(self, other)
-
- # We could check overflow, but it is a pb for shared values.
- # if (torch_tensorvar > self.field).any():
- # torch_tensorvar = torch_tensorvar % self.field
- # logging.warning('{} on FixPrecision Tensor/Variable overflowed, '
- # 'try reducing precision_fractional.'.format(attr))
+ elif attr == '__add__':
+ torch_tensorvar = cls.__add__(self, *args, **kwargs)
+ elif attr == '__sub__':
+ torch_tensorvar = cls.__sub__(self, *args, **kwargs)
+ elif attr == '__div__' or '__truediv__':
+ torch_tensorvar = cls.__div__(self, *args, **kwargs)
+ if attr not in ('mm',) and attr != '__mul__':
+ response = torch_tensorvar.fix_precision(
+ already_encoded=True,
+ precision_fractional=precision
+ )
+ return response
+
+
+ # We could check overflow, but it is a pb for shared values.
+ # if (torch_tensorvar > self.field).any():
+ # torch_tensorvar = torch_tensorvar % self.field
+ # logging.warning('{} on FixPrecision Tensor/Variable overflowed, '
+ # 'try reducing precision_fractional.'.format(attr))
else: # Standard procedure for methods
# Get the next node type and update in command tensorvar with tensorvar.child
@@ -1220,7 +1256,7 @@ def handle_call(cls, command, owner):
# Compute the precision to keep
precision = self.precision_fractional
- if attr in ('__mul__', 'mm', 'matmul'):
+ if attr in ('mm', 'matmul', '__mul__'):
other = args[0]
if isinstance(other, sy._FixedPrecisionTensor):
self_precision = self.precision_fractional
@@ -1272,12 +1308,74 @@ def get(self, *args, **kwargs):
return self.parent
def __add__(self, other):
- response = self.child + other.child
- return response
+ # if other is not a fixed tensor, convert it to a fixed one
+ if (not hasattr(other, 'precision_fractional')):
+ other = other.fix_precision(precision_fractional = self.precision_fractional)
+ # check for inconsistencies in precision points
+ if (self.precision_fractional == other.precision_fractional):
+ gp_response = (self.child + other.child) % self.field
+ elif (self.precision_fractional > other.precision_fractional):
+ gp_response = (self.child + other.child * 10 ** (self.precision_fractional -
+ other.precision_fractional)) % self.field
+ elif (self.precision_fractional < other.precision_fractional):
+ gp_response = (self.child * 10 ** (other.precision_fractional -
+ self.precision_fractional)+ other.child) % self.field
+ return gp_response
+
+ def __div__(self, other):
+ # if other is not a fixed tensor, convert it to a fixed one
+ if (not hasattr(other, 'precision_fractional')):
+ other = other.fix_precision(precision_fractional = self.precision_fractional)
+
+ if (self.precision_fractional == other.precision_fractional):
+ gp_response = (self.child * 10 ** self.precision_fractional / other.child) % \
+ self.field
+ elif (self.precision_fractional > other.precision_fractional):
+ gp_response = (self.child / other.child * 10 ** other.precision_fractional) % \
+ self.field
+
+ elif (self.precision_fractional < other.precision_fractional):
+ gp_response = ((self.child *10 ** (2 * other.precision_fractional
+ - self.precision_fractional)) / other.child) % \
+ self.field
+ return gp_response
def __mul__(self, other):
- response = self.child * other.child
- return response
+ # if other is not a fixed tensor, convert it to a fixed one
+ if (not hasattr(other, 'precision_fractional')):
+ other = other.fix_precision(precision_fractional = self.precision_fractional)
+ return self.child * other.child
+
+ def __sub__(self, other):
+ # if other is not a fixed tensor, convert it to a fixed one
+ if (not hasattr(other, 'precision_fractional')):
+ other = other.fix_precision(precision_fractional = self.precision_fractional)
+
+ if (self.precision_fractional == other.precision_fractional):
+ gp_response = self.child - other.child
+ elif (self.precision_fractional > other.precision_fractional):
+ gp_response = (self.child - other.child * 10 ** (self.precision_fractional -
+ other.precision_fractional)) % self.field
+
+ other.precision_fractional = self.precision_fractional
+ elif (self.precision_fractional < other.precision_fractional):
+ gp_response = (self.child * 10 ** (other.precision_fractional -
+ self.precision_fractional) - other.child) % \
+ self.field
+
+ return gp_response
+
+ def prod(self, *args, **kwargs):
+ # getting the dimension of the tensor which prod will be applied to. (needed for fixing
+ # the precision precision problems)
+ dim = self.child.size()[args[0]]
+ return self.child.prod(*args, **kwargs) / 10 ** (self.precision_fractional * dim)
+
+ def sum(self, *args, **kwargs):
+ return (self.child.sum(*args, *kwargs) / 10 ** self.precision_fractional)
+
+ def cumsum(self, *args, **kwargs):
+ return (self.child.cumsum(*args, *kwargs) / 10 ** self.precision_fractional)
def mm(self, other):
response = self.child.mm(other.child)
@@ -2303,6 +2401,3 @@ def decode_(self):
self.data.child = self.data.child.child.child
self.child = self.child.child.child
torch_utils.fix_chain_ends(self)
-
-
-
| diff --git a/test/torch_test.py b/test/torch_test.py
--- a/test/torch_test.py
+++ b/test/torch_test.py
@@ -1176,13 +1176,138 @@ def test_fix_precision_decode(self):
assert torch_utils.chain_print(x, display=False) == display_chain.tensor.local
def test_fix_precision_mul(self):
- x = torch.FloatTensor([2.1, 1])
- y = torch.FloatTensor([1.2, 1.111])
+ x = torch.FloatTensor([1, 2, 0.4])
+ y = torch.FloatTensor([1, 1, 2])
+ x = x.fix_precision(precision_fractional=3)
+ y = y.fix_precision(precision_fractional=3)
+ z = x * y
+ z = z.decode()
+ assert torch.eq(z, torch.FloatTensor([1, 2, 0.8])).all()
+
+ # with different precision fractions x's > y's
+ x = torch.FloatTensor([1, 2, 0.4])
+ y = torch.FloatTensor([1, 1, 2])
+ x = x.fix_precision(precision_fractional=3)
+ y = y.fix_precision(precision_fractional=4)
+ z = x * y
+ z = z.decode()
+ assert torch.eq(z, torch.FloatTensor([1, 2, 0.8])).all()
+
+ # with different precision fractions x's < y's
+ x = torch.FloatTensor([1, 2, 0.4])
+ y = torch.FloatTensor([1, 1, 2])
+ x = x.fix_precision(precision_fractional=3)
+ y = y.fix_precision(precision_fractional=2)
+ z = x * y
+ z = z.decode()
+ assert torch.eq(z, torch.FloatTensor([1, 2, 0.8])).all()
+
+
+
+ def test_fix_precision_add(self):
+ x = torch.FloatTensor([[1, 0.2], [0.9, 11]])
+ y = torch.FloatTensor([[0.8, 1], [1, 3]])
x = x.fix_precision()
y = y.fix_precision()
z = x + y
z = z.decode()
- assert torch.eq(z, torch.FloatTensor([3.3, 2.111])).all()
+ assert torch.eq(z, torch.FloatTensor([[1.8, 1.2], [1.9, 14]])).all()
+
+ # with different precision fractions x's > y's
+ x = torch.FloatTensor([[1, 0.2], [0.9, 11]])
+ y = torch.FloatTensor([[0.8, 1], [1, 3]])
+ x = x.fix_precision(precision_fractional=4)
+ y = y.fix_precision(precision_fractional=3)
+ z = x + y
+ z = z.decode()
+ assert torch.eq(z, torch.FloatTensor([[1.8, 1.2], [1.9, 14]])).all()
+
+ # with different precision fractions x's < y's
+ x = torch.FloatTensor([[1, 0.2], [0.9, 11]])
+ y = torch.FloatTensor([[0.8, 1], [1, 3]])
+ x = x.fix_precision(precision_fractional=3)
+ y = y.fix_precision(precision_fractional=4)
+ z = x + y
+ z = z.decode()
+ assert torch.eq(z, torch.FloatTensor([[1.8, 1.2], [1.9, 14]])).all()
+
+ def test_fix_precision_sub(self):
+ x = torch.FloatTensor([[1, 1.2], [1.9, 11]])
+ y = torch.FloatTensor([[0.8, 1], [1, 3]])
+ x = x.fix_precision()
+ y = y.fix_precision()
+ z = x - y
+ z = z.decode()
+ assert torch.eq(z, torch.FloatTensor([[0.2, .2], [.9, 8]])).all()
+
+ # with different precision fractions x's > y's
+ x = torch.FloatTensor([[1, 1.2], [1.9, 11]])
+ y = torch.FloatTensor([[0.8, 1], [1, 3]])
+ x = x.fix_precision(precision_fractional=4)
+ y = y.fix_precision(precision_fractional=3)
+ z = x - y
+ z = z.decode()
+ assert torch.eq(z, torch.FloatTensor([[0.2, .2], [.9, 8]])).all()
+
+ # with different precision fractions x's < y's
+ x = torch.FloatTensor([[1, 1.2], [1.9, 11]])
+ y = torch.FloatTensor([[0.8, 1], [1, 3]])
+ x = x.fix_precision(precision_fractional=3)
+ y = y.fix_precision(precision_fractional=4)
+ z = x - y
+ z = z.decode()
+ assert torch.eq(z, torch.FloatTensor([[0.2, .2], [.9, 8]])).all()
+
+ def test_fix_precision_div(self):
+ x = torch.FloatTensor([[1, 1.2], [1.9, 12]])
+ y = torch.FloatTensor([[0.8, 0.4], [1, 3]])
+ x = x.fix_precision()
+ y = y.fix_precision()
+ z = x / y
+ z = z.decode()
+ assert torch.eq(z, torch.FloatTensor([[1.2500, 3], [1.9, 4]])).all()
+
+ # with different precision fractions x's > y's
+ x = torch.FloatTensor([[1, 1.2], [1.9, 12]])
+ y = torch.FloatTensor([[0.8, 0.4], [1, 3]])
+ x = x.fix_precision(precision_fractional=4)
+ y = y.fix_precision(precision_fractional=3)
+ z = x / y
+ z = z.decode()
+ assert torch.eq(z, torch.FloatTensor([[1.2000, 3], [1.9, 4]])).all()
+
+ # with different precision fractions x's < y's
+ x = torch.FloatTensor([[1, 1.2], [1.9, 12]])
+ y = torch.FloatTensor([[0.8, 0.4], [1, 3]])
+ x = x.fix_precision(precision_fractional=3)
+ y = y.fix_precision(precision_fractional=4)
+ z = x / y
+ z = z.decode()
+ assert torch.eq(z, torch.FloatTensor([[1.2500, 3], [1.9, 4]])).all()
+
+ def test_fix_precision_sum(self):
+ x = torch.FloatTensor([[1, 1.2], [1.9, 12]])
+ y = torch.FloatTensor([[0.8, 0.4], [1, 3]])
+ x = x.fix_precision(precision_fractional=4)
+ z = x.sum(0)
+ z = z.decode()
+ assert torch.eq(z, torch.FloatTensor([2, 13])).all()
+
+ def test_fix_precision_cumsum(self):
+ x = torch.FloatTensor([[1, 1.2], [1.9, 12]])
+ y = torch.FloatTensor([[0.8, 0.4], [1, 3]])
+ x = x.fix_precision(precision_fractional=4)
+ z = x.cumsum(0)
+ z = z.decode()
+ assert torch.eq(z, torch.FloatTensor([[1, 1], [2, 13]])).all()
+
+ def test_fix_precision_prod(self):
+ x = torch.FloatTensor([[1, 1.2], [1.9, 12]])
+ y = torch.FloatTensor([[0.8, 0.4], [1, 3]])
+ x = x.fix_precision(precision_fractional=4)
+ z = x.prod(0)
+ z = z.decode()
+ assert torch.eq(z, torch.FloatTensor([1, 14])).all()
def test_var_fix_precision_decode(self):
x = sy.Variable(torch.FloatTensor([0.1, 0.2, 0.1, 0.2]))
@@ -1540,28 +1665,28 @@ def test_addition_remote_fix_precision_share(self):
self.remote_fix_precision_share_operation([2.5, 3.2], [5.4, -1.1])
self.remote_fix_precision_share_operation([-2.8, -3.9], [-1, -1])
self.remote_fix_precision_share_operation([-2, 3.3], [-1.9, 1])
- self.remote_fix_precision_share_operation([-19000, 3.3], [-1.9, 17654])
+ self.remote_fix_precision_share_operation([-190, 3.3], [-1.9, 174])
def test_var_addition_remote_fix_precision_share(self):
self.remote_fix_precision_share_operation([3.3], [5.1], var=True)
self.remote_fix_precision_share_operation([2.5, 3.2], [5.4, -1.1], var=True)
self.remote_fix_precision_share_operation([-2.8, -3.9], [-1, -1], var=True)
self.remote_fix_precision_share_operation([-2, 3.3], [-1.9, 1], var=True)
- self.remote_fix_precision_share_operation([-19000, 3.3], [-1.9, 17654], var=True)
+ self.remote_fix_precision_share_operation([-190, 3.3], [-1.9, 174], var=True)
def test_mult_remote_fix_precision_share(self):
self.remote_fix_precision_share_operation([3.3], [5.1], op='mul')
self.remote_fix_precision_share_operation([2.5, 3.2], [5.4, -1.1], op='mul')
self.remote_fix_precision_share_operation([-2.8, -3.9], [-1, -1], op='mul')
self.remote_fix_precision_share_operation([-2, 3.3], [-1.9, 1], op='mul')
- #self.remote_fix_precision_share_operation([-19000, 3.3], [-1.9, 17654], op='mul')
+ self.remote_fix_precision_share_operation([-190, 3.3], [-1.9, 174], op='mul')
def test_var_mult_remote_fix_precision_share(self):
self.remote_fix_precision_share_operation([3.3], [5.1], var=True, op='mul')
self.remote_fix_precision_share_operation([2.5, 3.2], [5.4, -1.1], var=True, op='mul')
self.remote_fix_precision_share_operation([-2.8, -3.9], [-1, -1], var=True, op='mul')
self.remote_fix_precision_share_operation([-2, 3.3], [-1.9, 1], var=True, op='mul')
- #self.remote_fix_precision_share_operation([-19000, 3.3], [-1.9, 17654], var=True, op='mul')
+ self.remote_fix_precision_share_operation([-190, 3.3], [-1.9, 174], var=True, op='mul')
def test_matmul_remote_fix_precision_share(self):
self.remote_fix_precision_share_operation([[3.3, 2.1],
@@ -1635,7 +1760,7 @@ def test_gpc_unwrapped_add(self):
results = y.get()
assert (results[0] == (x.get() * 2)).all()
-
+
def test_gpc_workers(self):
x = torch.LongTensor([1, 2, 3, 4, 5])
y = torch.LongTensor([1, 2, 3, 4, 5])
@@ -1647,7 +1772,7 @@ def test_gpc_workers(self):
x_gp = _GeneralizedPointerTensor(x_pointer_tensor_dict)
results = x_gp.workers()
-
+
assert(results == [k.id for k in x_pointer_tensor_dict.keys()])
@@ -1657,4 +1782,4 @@ def test_gpc_workers(self):
if __name__ == '__main__':
- unittest.main()
\ No newline at end of file
+ unittest.main()
| Add Functionality to FixedPrecisionTensor
In https://github.com/OpenMined/PySyft/pull/1530 and https://github.com/OpenMined/PySyft/pull/1531, we created a new class called FixedPrecisionTensor which is able to take any pytorch tensor and compute over it using a limited range of precision. For example (https://github.com/OpenMined/PySyft/blob/master/examples/torch/Fixed%20Precision%20Tensor%20Testing.ipynb)
However, only addition has been implemented. In this work, we'd like to add the following functions (a usage sketch follows the list).
- \_\_mul__ - multiplication by another fixed precision tensor
- \_\_mul__ - multiplication by another non-fixed precision tensor
- prod() - multiplication across a dimension
- cumprod() - cumulative multiplication across a dimension
- \_\_add__ - addition by a non-fixed precision tensor
- sum() - addition across a dimension
- cumsum() - cumulative addition across a dimension
- inheritance - all undefined methods should inherit from the tensor underlying FixedPrecisionTensor (self.child). Test that this works using __getitem__
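A usage sketch exercising the requested operations, with the API and values taken from this row's test patch (assumes torch has been hooked by syft, as in the test setup):
```python
import torch

x = torch.FloatTensor([1, 2, 0.4]).fix_precision(precision_fractional=3)
y = torch.FloatTensor([1, 1, 2]).fix_precision(precision_fractional=3)
(x * y).decode()   # FloatTensor([1.0, 2.0, 0.8])

a = torch.FloatTensor([[1, 0.2], [0.9, 11]]).fix_precision()
b = torch.FloatTensor([[0.8, 1], [1, 3]]).fix_precision()
(a + b).decode()   # FloatTensor([[1.8, 1.2], [1.9, 14.0]])
```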
| I would like to work on this. I'm just getting started with this project though.
Please reach out if you get stuck on anything - I'm here to help :)
I got this | 2018-09-29T13:56:21 |
OpenMined/PySyft | 1,612 | OpenMined__PySyft-1612 | ["1592"] | 2290d654eba80fde72ee79f989b778d5ed63b08c | diff --git a/syft/core/frameworks/torch/tensor.py b/syft/core/frameworks/torch/tensor.py
--- a/syft/core/frameworks/torch/tensor.py
+++ b/syft/core/frameworks/torch/tensor.py
@@ -1276,8 +1276,8 @@ def encode(self, rational):
def decode(self):
save = self.child.child * 1
- self.child.child = None # <-- This is doing magic things
- value = self.child.long() % self.field
+ self.child.child = None # <-- This is doing magic things
+ value = torch.fmod(self.child.long(), self.field)
if len(value.size()) == 0:
# raise TypeError("Can't decode empty tensor")
return None
@@ -1541,15 +1541,15 @@ def check_and_scale_precision_if_needed(self, other):
def __add__(self, other):
a, b = self.check_and_scale_precision_if_needed(other)
- return (a + b) % self.field
+ return torch.fmod((a + b), self.field)
def __sub__(self, other):
a, b = self.check_and_scale_precision_if_needed(other)
- return (a - b) % self.field
+ return torch.fmod((a - b), self.field)
def __rsub__(self, other):
a, b = self.check_and_scale_precision_if_needed(other)
- return (b - a) % self.field
+ return torch.fmod((b - a), self.field)
def __mul__(self, other):
a, b = self.check_and_scale_precision_if_needed(other)
@@ -1559,95 +1559,88 @@ def __gt__(self, other):
a, b = self.check_and_scale_precision_if_needed(other)
result = (a > b).long() * self.base ** self.precision_fractional
- result = sy._FixedPrecisionTensor(
- result,
- base=self.base,
- field=self.field,
- precision_fractional=self.precision_fractional,
- precision_integral=self.precision_integral,
- already_encoded=True,
- ).wrap(True)
+
+ result = sy._FixedPrecisionTensor(result,
+ base=self.base,
+ field=self.field,
+ precision_fractional=self.precision_fractional,
+ precision_integral=self.precision_integral,
+ already_encoded=True).wrap(True)
+
return result
def __lt__(self, other):
a, b = self.check_and_scale_precision_if_needed(other)
result = (a < b).long() * self.base ** self.precision_fractional
- result = sy._FixedPrecisionTensor(
- result,
- base=self.base,
- field=self.field,
- precision_fractional=self.precision_fractional,
- precision_integral=self.precision_integral,
- already_encoded=True,
- ).wrap(True)
+
+ result = sy._FixedPrecisionTensor(result,
+ base=self.base,
+ field=self.field,
+ precision_fractional=self.precision_fractional,
+ precision_integral=self.precision_integral,
+ already_encoded=True).wrap(True)
+
return result
def __ge__(self, other):
a, b = self.check_and_scale_precision_if_needed(other)
result = (a >= b).long() * self.base ** self.precision_fractional
- result = sy._FixedPrecisionTensor(
- result,
- base=self.base,
- field=self.field,
- precision_fractional=self.precision_fractional,
- precision_integral=self.precision_integral,
- already_encoded=True,
- ).wrap(True)
+
+ result = sy._FixedPrecisionTensor(result,
+ base=self.base,
+ field=self.field,
+ precision_fractional=self.precision_fractional,
+ precision_integral=self.precision_integral,
+ already_encoded=True).wrap(True)
+
return result
def __le__(self, other):
a, b = self.check_and_scale_precision_if_needed(other)
result = (a <= b).long() * self.base ** self.precision_fractional
- result = sy._FixedPrecisionTensor(
- result,
- base=self.base,
- field=self.field,
- precision_fractional=self.precision_fractional,
- precision_integral=self.precision_integral,
- already_encoded=True,
- ).wrap(True)
+
+ result = sy._FixedPrecisionTensor(result,
+ base=self.base,
+ field=self.field,
+ precision_fractional=self.precision_fractional,
+ precision_integral=self.precision_integral,
+ already_encoded=True).wrap(True)
+
return result
def __eq__(self, other):
a, b = self.check_and_scale_precision_if_needed(other)
result = (a == b).long() * self.base ** self.precision_fractional
- result = sy._FixedPrecisionTensor(
- result,
- base=self.base,
- field=self.field,
- precision_fractional=self.precision_fractional,
- precision_integral=self.precision_integral,
- already_encoded=True,
- ).wrap(True)
+
+ result = sy._FixedPrecisionTensor(result,
+ base=self.base,
+ field=self.field,
+ precision_fractional=self.precision_fractional,
+ precision_integral=self.precision_integral,
+ already_encoded=True).wrap(True)
+
return result
def __div__(self, other):
# if other is not a fixed tensor, convert it to a fixed one
- if not hasattr(other, "precision_fractional"):
- other = other.fix_precision(precision_fractional=self.precision_fractional)
- if self.precision_fractional == other.precision_fractional:
- gp_response = (
- self.child * 10 ** self.precision_fractional / other.child
- ) % self.field
- elif self.precision_fractional > other.precision_fractional:
- gp_response = (
- self.child / other.child * 10 ** other.precision_fractional
- ) % self.field
-
- elif self.precision_fractional < other.precision_fractional:
- gp_response = (
- (
- self.child
- * 10 ** (2 * other.precision_fractional - self.precision_fractional)
- )
- / other.child
- ) % self.field
- return gp_response
+ if (not hasattr(other, 'precision_fractional')):
+ other = other.fix_precision(precision_fractional = self.precision_fractional)
+
+ if (self.precision_fractional > other.precision_fractional):
+ gp_response = (self.child / other.child * 10 ** other.precision_fractional)
+ elif (self.precision_fractional < other.precision_fractional):
+ gp_response = ((self.child *10 ** (2 * other.precision_fractional
+ - self.precision_fractional)) / other.child)
+ else:
+ gp_response = (self.child * 10 ** self.precision_fractional / other.child)
+
+ return torch.fmod(gp_response, self.field)
+
# def __mul__(self, other):
# # if other is not a fixed tensor, convert it to a fixed one
@@ -1957,7 +1950,7 @@ def sum(self, *args, **kwargs):
return gp_response
def cumsum(self, *args, **kwargs):
- gp_response = self.child.cumsum(*args, **kwargs) % spdz.field
+ gp_response = torch.fmod(self.child.cumsum(*args, **kwargs), spdz.field)
return gp_response
def __mul__(self, other):
@@ -2071,7 +2064,7 @@ def get(self, deregister_ptr=False):
var.child = None
if hasattr(self, "grad") and self.grad is not None:
var_grad = self.grad.shares.child.sum_get()
- value = var_grad.data % spdz.field
+ value = torch.fmod(var_grad.data, spdz.field)
# TODO: Add this thing for negative values
# gate = (value > spdz.torch_max_value).long()
# neg_nums = (value - spdz.torch_field) * gate
@@ -2082,7 +2075,7 @@ def get(self, deregister_ptr=False):
var.assign_grad_(var_grad)
return var
# TODO: have deregister_ptr do something
- value = self.shares.child.sum_get() % spdz.field
+ value = torch.fmod(self.shares.child.sum_get(), spdz.field)
gate = (value > spdz.torch_max_value).long()
diff --git a/syft/spdz/spdz.py b/syft/spdz/spdz.py
--- a/syft/spdz/spdz.py
+++ b/syft/spdz/spdz.py
@@ -20,7 +20,7 @@
def encode(rational, precision_fractional=PRECISION_FRACTIONAL, mod=field):
upscaled = (rational * BASE ** precision_fractional).long()
- field_element = upscaled % mod
+ field_element = torch.fmod(upscaled, mod)
return field_element
@@ -83,10 +83,10 @@ def swap_shares(shares):
def truncate(x, interface, amount=PRECISION_FRACTIONAL, mod=field):
- print("truncating")
- if interface.get_party() == 0:
- return (x / BASE ** amount) % mod
- return (mod - ((mod - x) / BASE ** amount)) % mod
+
+ if (interface.get_party() == 0):
+ return torch.fmod((x / BASE ** amount), mod)
+ return torch.fmod((mod - ((mod - x) / BASE ** amount)), mod)
def public_add(x, y, interface):
@@ -98,11 +98,11 @@ def public_add(x, y, interface):
def spdz_add(a, b, mod=field):
c = a + b
- return c % mod
+ return torch.fmod(c, mod)
def spdz_neg(a, mod=field):
- return (mod - a) % mod
+ return torch.fmod((mod - a), mod)
def spdz_mul(x, y, workers, mod=field):
@@ -112,18 +112,22 @@ def spdz_mul(x, y, workers, mod=field):
triple = generate_mul_triple_communication(shape, workers)
a, b, c = triple
- d = (x - a) % mod
- e = (y - b) % mod
+ d = torch.fmod((x - a), mod)
+ e = torch.fmod((y - b), mod)
- delta = d.child.sum_get() % mod
- epsilon = e.child.sum_get() % mod
+ delta = torch.fmod(d.child.sum_get(), mod)
+ epsilon = torch.fmod(e.child.sum_get(), mod)
epsilon_delta = epsilon * delta
delta = delta.broadcast(workers)
epsilon = epsilon.broadcast(workers)
- z = (c + (delta * b) % mod + (epsilon * a) % mod) % mod
+ z = torch.fmod((c
+ + torch.fmod((delta * b), mod)
+ + torch.fmod((epsilon * a), mod)
+ ), mod)
+
z.child.public_add_(epsilon_delta)
@@ -142,21 +146,21 @@ def spdz_matmul(x, y, workers, mod=field):
assert x_width == y_height, f"dimension mismatch: {x_width!r} != {y_height!r}"
a, b, c = generate_matmul_triple_communication(shapes, workers)
- r = (x - a) % mod
- s = (y - b) % mod
+ r = torch.fmod((x - a), mod)
+ s = torch.fmod((y - b), mod)
# Communication
- rho = r.child.sum_get() % mod
- sigma = s.child.sum_get() % mod
- rho_sigma = torch.mm(rho, sigma) % mod
+ rho = torch.fmod(r.child.sum_get(), mod)
+ sigma = torch.fmod(s.child.sum_get(), mod)
+ rho_sigma = torch.fmod(torch.mm(rho, sigma), mod)
rho = rho.broadcast(workers)
sigma = sigma.broadcast(workers)
- a_sigma = torch.mm(a, sigma) % mod
- rho_b = torch.mm(rho, b) % mod
+ a_sigma = torch.fmod(torch.mm(a, sigma), mod)
+ rho_b = torch.fmod(torch.mm(rho, b), mod)
- z = (a_sigma + rho_b + c) % mod
+ z = torch.fmod((a_sigma + rho_b + c), mod)
z.child.public_add_(rho_sigma)
return z
| URGENT: remove all % operators and replace with torch.fmod()
@channel - BUG IN PYTORCH 0.3.1
The modulus operator doesn't always work.
https://github.com/pytorch/pytorch/issues/1164
torch.fmod() works correctly
torch.remainder() and the % sign do NOT work.
We need to swap out all uses of % for torch.fmod() pronto
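For reference, `torch.fmod` and `torch.remainder` differ in sign convention even where both are implemented correctly: `fmod` takes the sign of the dividend (C `fmod`), `remainder` the sign of the divisor (Python `%`). The linked 0.3.1 bug is what makes the `%` path unreliable on tensors, hence the blanket swap:
```python
import torch

x = torch.LongTensor([-7, 7])
torch.fmod(x, 3)       # [-1,  1]  sign follows the dividend
torch.remainder(x, 3)  # [ 2,  1]  sign follows the divisor
```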
| I think I can take this. @iamtrask
Thank you @ionlights !!
Note that % works fine if it's just between two python ints... it's only when it's computing on one or more torch tensors that it's an issue
Hey @ionlights - how's it goin?
:wave: Sorry, got caught up finishing up some assignments due an hour ago. :joy:
I should be able to finish this up towards late afternoon today (Oct 08) in EST. | 2018-10-10T02:45:39 |
|
OpenMined/PySyft | 1,779 | OpenMined__PySyft-1779 | ["1760"] | da0b2e0ae0f53effdbcc53c22053023a62da4671 | diff --git a/syft/core/frameworks/torch/hook.py b/syft/core/frameworks/torch/hook.py
--- a/syft/core/frameworks/torch/hook.py
+++ b/syft/core/frameworks/torch/hook.py
@@ -686,7 +686,7 @@ def module_end_get_(self):
def module_move_(self, dest):
return self.send(dest).end_get()
- torch.nn.Module.move = module_move_
+ torch.nn.Module.move = module_move_
def module_get_(self):
"""Get model parameters"""
diff --git a/syft/core/frameworks/torch/tensor.py b/syft/core/frameworks/torch/tensor.py
--- a/syft/core/frameworks/torch/tensor.py
+++ b/syft/core/frameworks/torch/tensor.py
@@ -2632,7 +2632,7 @@ def send(self, *workers, ptr_id=None, as_list=False):
for worker in workers:
gpt_dict[worker] = (self * 1).send(worker).child
sy._GeneralizedPointerTensor(gpt_dict).on(self)
- if(as_list):
+ if as_list:
return self.pointers()
else:
return self
@@ -2737,7 +2737,7 @@ def send(
new_data_id=None,
new_grad_id=None,
new_grad_data_id=None,
- as_list=False
+ as_list=False,
):
"""Give the root of the chain held by self to worker self->alice->obj
[worker] => self->worker->alice->obj Because there are Variable
@@ -2761,7 +2761,7 @@ def send(
gpt_dict[worker] = (self * 1).send(worker).child
sy._GeneralizedPointerTensor(gpt_dict).on(self)
- if(as_list):
+ if as_list:
return self.child.pointers()
return self
@@ -2801,8 +2801,10 @@ def send(
wrapper.native_set_()
wrapper.child.id = id
pointer = wrapper.child.create_pointer(
- location=worker, id_at_location=remote_id, register=True,
- original_pointer=original_pointer
+ location=worker,
+ id_at_location=remote_id,
+ register=True,
+ original_pointer=original_pointer,
)
torch_utils.bind_tensor_nodes(wrapper, pointer)
diff --git a/syft/core/workers/base.py b/syft/core/workers/base.py
--- a/syft/core/workers/base.py
+++ b/syft/core/workers/base.py
@@ -607,10 +607,11 @@ def get_worker(self, id_or_worker):
if id_or_worker in self._known_workers:
return self._known_workers[id_or_worker]
else:
- logging.warning(
- "Worker", self.id, "couldnt recognize worker", id_or_worker
+ raise RuntimeWarning(
+ "Worker {} couldnt recognize worker {}".format(
+ self.id, id_or_worker
+ )
)
- return id_or_worker
else:
if id_or_worker.id not in self._known_workers:
self.add_worker(id_or_worker)
| diff --git a/test/torch_test.py b/test/torch_test.py
--- a/test/torch_test.py
+++ b/test/torch_test.py
@@ -359,6 +359,17 @@ def test_send_get_tensor(self):
# because .get_() was called, x should no longer be in the remote worker's objects dict
assert ptr_id not in bob._objects
+ def test_send_pointer_to_unknown_worker(self):
+ """Tests that sending a pointer to a unknown worker results on a
+ RuntimeWarning exception."""
+ # Create worker that doesn't know any other worker
+ carl = sy.VirtualWorker(id="carl", hook=hook, is_client_worker=False)
+ try:
+ sy.FloatTensor([1, 2, 3, 4, 5]).send(bob).send(carl)
+ assert False
+ except RuntimeWarning:
+ assert True
+
def test_multiple_pointers_to_same_target(self):
# There are two cases:
# - You're sending a var on a loc:id you're already pointing at -> should abort
@@ -658,7 +669,11 @@ def test_end_get_tensor(self):
bob_id = random.randint(0, 10e10)
alice_id = random.randint(0, 10e10)
- x = sy.FloatTensor([1, 2, 3, 4, 5]).send(bob, ptr_id=bob_id).send(alice, ptr_id=alice_id)
+ x = (
+ sy.FloatTensor([1, 2, 3, 4, 5])
+ .send(bob, ptr_id=bob_id)
+ .send(alice, ptr_id=alice_id)
+ )
x2 = x.end_get()
# Now alice will own the tensor that was in bob and bob won't have it anymore
@@ -1958,5 +1973,6 @@ def test_gpc_workers(self):
assert results == [k.id for k in x_pointer_tensor_dict.keys()]
+
if __name__ == "__main__":
unittest.main()
| More descriptive error (or warning) when workers can't see each other?
In Part 4 of the tutorial, available at examples/tutorials/, there is this block of code:
```
bob.add_workers([alice, secure_worker])
alice.add_workers([bob, secure_worker])
secure_worker.add_workers([alice, bob])
```
If you remove these lines in cell 6 and then try to perform a move operation, you see the following error:
```
/syft-0.1.0-py3.6.egg/syft/core/frameworks/torch/tensor.py in register_pointer(self)
1015 def register_pointer(self):
1016 worker = self.owner
-> 1017 location = self.location.id
1018 id_at_location = self.id_at_location
1019 # Add the remote address
AttributeError: 'str' object has no attribute 'id'
```
This is not a very descriptive message and does not indicate the real problem.
The same error message shows on Part 3 if you remove these lines:
```
# making sure that bob/alice know about each other
bob.add_worker(alice)
alice.add_worker(bob)
```
And try to run something like:
```
x = sy.FloatTensor([1,2,3,4,5]).send(bob).send(alice)
```
---
I think a more descriptive error message would help users debug this simple mistake; I'd be happy to add this.
Cheers!
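A minimal reproduction sketch based on the snippets above; with the patch, the chained send raises a `RuntimeWarning` naming both workers instead of the opaque `AttributeError` (the worker setup is illustrative):
```
import torch
import syft as sy

hook = sy.TorchHook(torch)
bob = sy.VirtualWorker(hook=hook, id="bob")
alice = sy.VirtualWorker(hook=hook, id="alice")

# bob and alice have not been introduced via add_worker
x = sy.FloatTensor([1, 2, 3, 4, 5]).send(bob).send(alice)  # -> RuntimeWarning
```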
| 2018-12-16T00:34:01 |
|
OpenMined/PySyft | 1,792 | OpenMined__PySyft-1792 | [
"1785"
] | c568bea944fbde5bfe28c1071fdfd07ac3c3d78b | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -35,12 +35,13 @@
# ones.
extensions = [
"sphinx.ext.autodoc",
- "sphinx.ext.todo",
+ "sphinx.ext.autosummary",
"sphinx.ext.coverage",
+ "sphinx.ext.githubpages",
"sphinx.ext.mathjax",
+ "sphinx.ext.napoleon",
+ "sphinx.ext.todo",
"sphinx.ext.viewcode",
- "sphinx.ext.githubpages",
- "sphinx.ext.autosummary",
]
# Add any paths that contain templates here, relative to this directory.
| Modify documentation generation code to use napoleon
Napoleon (https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html) allows us to use Google-style docstrings with Sphinx. This issue enables #1784
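For context, a sketch of the Google-style docstring format that `sphinx.ext.napoleon` parses (the function itself is illustrative):
```
def add(a, b):
    """Add two numbers.

    Args:
        a (int): First operand.
        b (int): Second operand.

    Returns:
        int: The sum of ``a`` and ``b``.
    """
    return a + b
```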
| 2018-12-23T14:27:37 |
||
OpenMined/PySyft | 1,908 | OpenMined__PySyft-1908 | [
"1896"
] | 0527cd54928c3174d387e7a8668da4a08c3c28d0 | diff --git a/syft/frameworks/torch/hook.py b/syft/frameworks/torch/hook.py
--- a/syft/frameworks/torch/hook.py
+++ b/syft/frameworks/torch/hook.py
@@ -524,7 +524,7 @@ def overloaded_syft_method(self, *args, **kwargs):
return overloaded_syft_method
- def get_hooked_method(hook_self, attr):
+ def get_hooked_method(hook_self, method_name):
"""
Hook a method in order to replace all args/kwargs syft/torch tensors with
their child attribute if they exist
@@ -535,45 +535,47 @@ def get_hooked_method(hook_self, attr):
:return: the hooked method
"""
- @wraps(attr)
+ @wraps(method_name)
def overloaded_native_method(self, *args, **kwargs):
"""
Operate the hooking
"""
if not hasattr(self, "child"): # means that it's not a wrapper
- cmd = getattr(self, f"native_{attr}")
+ method = getattr(self, f"native_{method_name}")
# Run the native function with the new args
try:
-
if isinstance(args, tuple):
- response = cmd(*args)
+ response = method(*args)
else:
- response = cmd(args)
+ response = method(args)
except BaseException as e:
# we can make some errors more descriptive with this method
raise route_method_exception(e, self, args, kwargs)
else: # means that there is a wrapper to remove
-
try:
# Replace all torch tensor with their child attribute
new_self, new_args = syft.frameworks.torch.hook_args.hook_method_args(
- attr, self, args
+ method_name, self, args
)
except BaseException as e:
# we can make some errors more descriptive with this method
raise route_method_exception(e, self, args, kwargs)
# Send the new command to the appropriate class and get the response
- cmd = getattr(new_self, attr)
- response = cmd(*new_args, **kwargs)
+ method = getattr(new_self, method_name)
+ response = method(*new_args, **kwargs)
+
+ # For inplace methods, just directly return self
+ if syft.torch.is_inplace_method(method_name):
+ return self
# Put back the wrappers where needed
response = syft.frameworks.torch.hook_args.hook_response(
- attr, response, wrap_type=type(self), new_self=self
+ method_name, response, wrap_type=type(self), new_self=self
)
return response
diff --git a/syft/frameworks/torch/torch_attributes.py b/syft/frameworks/torch/torch_attributes.py
--- a/syft/frameworks/torch/torch_attributes.py
+++ b/syft/frameworks/torch/torch_attributes.py
@@ -112,7 +112,6 @@ def __init__(self, torch: ModuleType, hook: ModuleType) -> None:
"is_tensor",
"isfinite",
"load",
- "zeros_like",
"randperm",
]
@@ -159,6 +158,9 @@ def __init__(self, torch: ModuleType, hook: ModuleType) -> None:
self.command_guard = self._command_guard
+ # Dict {method_name: <is_inplace:bool>
+ self.inplace_methods = {}
+
def _command_guard(
self, command: str, torch_domain: str, get_native: bool = False
) -> Union[Callable[..., Any], str]:
@@ -232,6 +234,19 @@ def get_native_torch_name(attr: str) -> str:
native_func_name = ".".join(parts)
return native_func_name
+ def is_inplace_method(self, method_name):
+ """
+ Determine whether a method is inplace by testing that it ends with `_` and is not a `__xx__` dunder
+ :param method_name: the name for the method
+ :return: boolean
+ """
+ try:
+ return self.inplace_methods[method_name]
+ except KeyError:
+ is_inplace = method_name[-1] == "_" and "__" not in method_name
+ self.inplace_methods[method_name] = is_inplace
+ return is_inplace
+
@staticmethod
def apply_fix16922(torch):
"""
diff --git a/syft/workers/base.py b/syft/workers/base.py
--- a/syft/workers/base.py
+++ b/syft/workers/base.py
@@ -264,8 +264,11 @@ def execute_command(self, message):
command = command.decode("utf-8")
# Handle methods
if _self is not None:
-
- tensor = getattr(_self, command)(*args, **kwargs)
+ if sy.torch.is_inplace_method(command):
+ getattr(_self, command)(*args, **kwargs)
+ return
+ else:
+ tensor = getattr(_self, command)(*args, **kwargs)
# Handle functions
else:
# At this point, the command is ALWAYS a path to a
@@ -304,7 +307,6 @@ def execute_command(self, message):
ptr_id=tensor.id,
garbage_collect_data=False,
)
-
return pointer
def send_command(self, recipient, message):
| diff --git a/test/torch/tensors/test_gc.py b/test/torch/tensors/test_gc.py
--- a/test/torch/tensors/test_gc.py
+++ b/test/torch/tensors/test_gc.py
@@ -132,6 +132,18 @@ def test_implicit_garbage_collect_double_pointer(workers):
# assert x.id not in workers["alice"]._objects
+# TESTING IN PLACE METHODS
+
+
+def test_inplace_method_on_pointer(workers):
+ bob = workers["bob"]
+ tensor = torch.tensor([[1.0, 2], [4.0, 2]])
+ pointer = tensor.send(bob)
+ pointer.add_(pointer)
+ tensor_back = pointer.get()
+ assert (tensor * 2 == tensor_back).all()
+
+
# TESTING LOGGING TENSORS
| Garbage Collection issue with in-place methods on tensors
This triggers a KeyError. I suspect unexpected garbage collection.
```
buf = torch.tensor([[1., 2], [4., 2]]).send(bob)
buf.add_(buf)
buf.get()
```
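A sketch of the name-based heuristic the patch above uses to detect in-place methods (mirrors `is_inplace_method`):
```
def is_inplace_method(method_name):
    # torch in-place methods end with a single trailing underscore
    # and are not dunders like __add__
    return method_name[-1] == "_" and "__" not in method_name

assert is_inplace_method("add_")
assert not is_inplace_method("add")
assert not is_inplace_method("__add__")
```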
| 2019-02-14T10:45:48 |
|
OpenMined/PySyft | 1,983 | OpenMined__PySyft-1983 | [
"1975"
] | d1b40c2b538b0a526006546dd310469e26e910ad | diff --git a/syft/frameworks/torch/hook_args.py b/syft/frameworks/torch/hook_args.py
--- a/syft/frameworks/torch/hook_args.py
+++ b/syft/frameworks/torch/hook_args.py
@@ -1,12 +1,16 @@
import torch
+import syft as sy
from syft.exceptions import RemoteTensorFoundError
from syft.exceptions import PureTorchTensorFoundError
+from .tensors.interpreters import AbstractTensor
from .tensors.interpreters import PointerTensor
-from .tensors.decorators import LoggingTensor
from .tensors.interpreters import TorchTensor
from .tensors.interpreters import FixedPrecisionTensor
from .tensors.interpreters import AdditiveSharingTensor
from .tensors.interpreters import MultiPointerTensor
+from .tensors.decorators import LoggingTensor
+
+from typing import Callable, Union, Tuple
hook_method_args_functions = {}
hook_method_response_functions = {}
@@ -425,80 +429,89 @@ def build_response_hook(response, rules, wrap_type, wrap_args, return_tuple=Fals
return lambda x: f(lambdas, x)
-def zero_fold(*a):
+def zero_fold(*a, **k):
return tuple()
-def one_fold(return_tuple):
- def _one_fold(lambdas, args):
- return lambdas[0](args[0])
+def one_fold(return_tuple, **kwargs):
+ def _one_fold(lambdas, args, **kwargs):
+ return lambdas[0](args[0], **kwargs)
def tuple_one_fold(lambdas, args):
- return (lambdas[0](args[0]),)
+ return (lambdas[0](args[0], **kwargs),)
return {False: _one_fold, True: tuple_one_fold}[return_tuple]
-def two_fold(lambdas, args):
- return lambdas[0](args[0]), lambdas[1](args[1])
+def two_fold(lambdas, args, **kwargs):
+ return lambdas[0](args[0], **kwargs), lambdas[1](args[1], **kwargs)
-def three_fold(lambdas, args):
- return lambdas[0](args[0]), lambdas[1](args[1]), lambdas[2](args[2])
+def three_fold(lambdas, args, **kwargs):
+ return (
+ lambdas[0](args[0], **kwargs),
+ lambdas[1](args[1], **kwargs),
+ lambdas[2](args[2], **kwargs),
+ )
-def four_fold(lambdas, args):
- return (lambdas[0](args[0]), lambdas[1](args[1]), lambdas[2](args[2]), lambdas[3](args[3]))
+def four_fold(lambdas, args, **kwargs):
+ return (
+ lambdas[0](args[0], **kwargs),
+ lambdas[1](args[1], **kwargs),
+ lambdas[2](args[2], **kwargs),
+ lambdas[3](args[3], **kwargs),
+ )
-def five_fold(lambdas, args):
+def five_fold(lambdas, args, **kwargs):
return (
- lambdas[0](args[0]),
- lambdas[1](args[1]),
- lambdas[2](args[2]),
- lambdas[3](args[3]),
- lambdas[4](args[4]),
+ lambdas[0](args[0], **kwargs),
+ lambdas[1](args[1], **kwargs),
+ lambdas[2](args[2], **kwargs),
+ lambdas[3](args[3], **kwargs),
+ lambdas[4](args[4], **kwargs),
)
-def six_fold(lambdas, args):
+def six_fold(lambdas, args, **kwargs):
return (
- lambdas[0](args[0]),
- lambdas[1](args[1]),
- lambdas[2](args[2]),
- lambdas[3](args[3]),
- lambdas[4](args[4]),
- lambdas[5](args[5]),
+ lambdas[0](args[0], **kwargs),
+ lambdas[1](args[1], **kwargs),
+ lambdas[2](args[2], **kwargs),
+ lambdas[3](args[3], **kwargs),
+ lambdas[4](args[4], **kwargs),
+ lambdas[5](args[5], **kwargs),
)
-def seven_fold(lambdas, args):
+def seven_fold(lambdas, args, **kwargs):
return (
- lambdas[0](args[0]),
- lambdas[1](args[1]),
- lambdas[2](args[2]),
- lambdas[3](args[3]),
- lambdas[4](args[4]),
- lambdas[5](args[5]),
- lambdas[6](args[6]),
+ lambdas[0](args[0], **kwargs),
+ lambdas[1](args[1], **kwargs),
+ lambdas[2](args[2], **kwargs),
+ lambdas[3](args[3], **kwargs),
+ lambdas[4](args[4], **kwargs),
+ lambdas[5](args[5], **kwargs),
+ lambdas[6](args[6], **kwargs),
)
-def eight_fold(lambdas, args):
+def eight_fold(lambdas, args, **kwargs):
return (
- lambdas[0](args[0]),
- lambdas[1](args[1]),
- lambdas[2](args[2]),
- lambdas[3](args[3]),
- lambdas[4](args[4]),
- lambdas[5](args[5]),
- lambdas[6](args[6]),
- lambdas[7](args[7]),
+ lambdas[0](args[0], **kwargs),
+ lambdas[1](args[1], **kwargs),
+ lambdas[2](args[2], **kwargs),
+ lambdas[3](args[3], **kwargs),
+ lambdas[4](args[4], **kwargs),
+ lambdas[5](args[5], **kwargs),
+ lambdas[6](args[6], **kwargs),
+ lambdas[7](args[7], **kwargs),
)
-def many_fold(lambdas, args):
- return tuple([lambdas[i](args[i]) for i in range(len(lambdas))])
+def many_fold(lambdas, args, **kwargs):
+ return tuple([lambdas[i](args[i], **kwargs) for i in range(len(lambdas))])
# Add the possibility to make a type check in the identity function applied
@@ -531,3 +544,150 @@ def number_identity(i):
else:
return lambda i: i
+
+
+# -- Fast way to register responses and transform tensors in pointers
+
+register_response_functions = {}
+
+
+def register_response(attr: str, response: object, owner: sy.workers.AbstractWorker) -> object:
+ """
+ When a remote worker executes a command sent by someone else, the response is
+ inspected: all tensors are stored by this worker and a Pointer tensor is
+ made for each of them.
+
+ To make this efficient, we cache how to handle the elements of the response (which can
+ be more complicated, with nested tuples for example) in the dict register_response_functions
+
+ However, sometimes a function (an attr) has multiple different response signatures.
+ This invalidates the cache, so we need to have a try/except which refreshes the
+ cache if the signature triggers an error.
+
+ Args:
+ attr (str): the name of the function being called
+ response (object): the response of this function
+ owner (BaseWorker): the worker which registers the tensors
+ """
+
+ # TODO: Why do we need to cast it in a tuple? this is a (small) time waste
+ response_is_tuple = isinstance(response, tuple)
+
+ # Add an artificial tuple
+ if not response_is_tuple:
+ response = (response, 1)
+
+ attr_id = "{}".format(attr)
+
+ try:
+ # Load the utility function to register the response and transform tensors with pointers
+ register_response_function = register_response_functions[attr_id]
+ # Try running it
+ new_response = register_response_function(response, owner=owner)
+
+ except (IndexError, KeyError, AssertionError): # Update the function in case of an error
+ register_response_function = build_register_response_function(response)
+ # Store this utility function in the registry
+ register_response_functions[attr_id] = register_response_function
+ # Run it
+ new_response = register_response_function(response, owner=owner)
+
+ # Remove the artificial tuple
+ if not response_is_tuple:
+ new_response, _ = new_response
+
+ return new_response
+
+
+def build_register_response_function(response: object) -> Callable:
+ """
+ Build the function that registers the response and replaces tensors with pointers.
+
+ Example:
+ (1, tensor([1, 2]) is the response
+ f is the register_response_function
+ then f(p) = (1, (Wrapper)>Pointer)
+ """
+ # Inspect the call to find tensor arguments and return a rule whose
+ # structure is the same as the response object, with 1 where there was
+ # (torch or syft) tensors and 0 when not (ex: number, str, ...)
+ rule = build_rule(response)
+ # Build a function with this rule to efficiently replace syft tensors
+ # (but not pointer) with their child in the args objects
+ response_hook_function = build_register_response(response, rule)
+ return response_hook_function
+
+
+def register_transform_tensor(
+ tensor: Union[torch.Tensor, AbstractTensor], owner: sy.workers.AbstractWorker = None
+) -> PointerTensor:
+ """
+ Register a tensor and create a pointer that references it
+
+ Args:
+ tensor: the tensor
+ owner: the owner making the registration
+ Returns:
+ the pointer
+ """
+ assert owner is not None
+ # FIXME: should be added automatically
+ tensor.owner = owner
+
+ owner.register_obj(tensor)
+
+ pointer = tensor.create_pointer(
+ location=owner,
+ id_at_location=tensor.id,
+ register=True,
+ owner=owner,
+ ptr_id=tensor.id,
+ garbage_collect_data=False,
+ )
+ return pointer
+
+
+def build_register_response(response: object, rules: Tuple, return_tuple: bool = False) -> Callable:
+ """
+ Build a function given some rules to efficiently replace in the response object
+ torch tensors with a pointer after they are registered, and do nothing for other
+ types of object, including str, numbers, bool, etc.
+
+ Args:
+ response: the response
+ rules: the rule specifying where the tensors are
+ return_tuple: force to return a tuple even with a single element
+ Returns:
+ The function to apply on generic responses
+ """
+
+ # get the transformation lambda for each args
+ lambdas = [
+ (lambda i, **kwargs: i) # return the same object
+ if not r # if the rule is a number == 0.
+ else build_register_response(a, r, True) # If not, call recursively build_response_hook
+ if isinstance(r, (list, tuple)) # if the rule is a list or tuple.
+ # Last if not, rule is probably == 1 so use type to return the right transformation.
+ else lambda i, **kwargs: register_transform_tensor(i, **kwargs)
+ for a, r in zip(response, rules) # And do this for all the responses / rules provided
+ ]
+
+ # Instead of iterating which is slow, we use trick to efficiently
+ # apply each lambda to each arg
+ folds = {
+ 0: zero_fold,
+ 1: one_fold(return_tuple),
+ 2: two_fold,
+ 3: three_fold,
+ 4: four_fold,
+ 5: five_fold,
+ 6: six_fold,
+ 7: seven_fold,
+ 8: eight_fold,
+ }
+ try:
+ f = folds[len(lambdas)]
+ except KeyError:
+ f = many_fold
+
+ return lambda x, **kwargs: f(lambdas, x, **kwargs)
diff --git a/syft/workers/base.py b/syft/workers/base.py
--- a/syft/workers/base.py
+++ b/syft/workers/base.py
@@ -285,57 +285,39 @@ def execute_command(self, message):
:return: a pointer to the result
"""
- command, _self, args, kwargs = message
+ command_name, _self, args, kwargs = message
# TODO add kwargs
kwargs = {}
- command = command.decode("utf-8")
+ command_name = command_name.decode("utf-8")
# Handle methods
if _self is not None:
- if sy.torch.is_inplace_method(command):
- getattr(_self, command)(*args, **kwargs)
+ if sy.torch.is_inplace_method(command_name):
+ getattr(_self, command_name)(*args, **kwargs)
return
else:
- tensor = getattr(_self, command)(*args, **kwargs)
+ response = getattr(_self, command_name)(*args, **kwargs)
# Handle functions
else:
# At this point, the command is ALWAYS a path to a
# function (i.e., torch.nn.functional.relu). Thus,
# we need to fetch this function and run it.
- sy.torch.command_guard(command, "torch_modules")
+ sy.torch.command_guard(command_name, "torch_modules")
- paths = command.split(".")
+ paths = command_name.split(".")
command = self
for path in paths:
command = getattr(command, path)
- tensor = command(*args, **kwargs)
+ response = command(*args, **kwargs)
# some functions don't return anything (such as .backward())
# so we need to check for that here.
- if tensor is not None:
-
- # FIXME: should be added automatically
- tensor.owner = self
-
- # TODO: Handle when the response is not simply a tensor
- # don't re-register tensors if the operation was inline
- # not only would this be inefficient, but it can cause
- # serious issues later on
- # if(_self is not None):
- # if(tensor.id != _self.id):
- self.register_obj(tensor)
-
- pointer = tensor.create_pointer(
- location=self,
- id_at_location=tensor.id,
- register=True,
- owner=self,
- ptr_id=tensor.id,
- garbage_collect_data=False,
- )
- return pointer
+ if response is not None:
+ # Register response et create pointers for tensor elements
+ response = sy.frameworks.torch.hook_args.register_response(command_name, response, self)
+ return response
def send_command(self, recipient, message):
"""
| diff --git a/test/torch/tensors/test_pointer.py b/test/torch/tensors/test_pointer.py
--- a/test/torch/tensors/test_pointer.py
+++ b/test/torch/tensors/test_pointer.py
@@ -259,3 +259,23 @@ def test_remote_to_cpu_device(workers):
x = th.tensor([1, 2, 3, 4, 5]).send(bob)
x.to(device)
+
+
+def test_remote_function_with_multi_ouput(workers):
+ """
+ Functions like .split return several tensors, registration and response
+ must be made carefully in this case
+ """
+ bob = workers["bob"]
+
+ tensor = torch.tensor([1, 2, 3, 4.0])
+ ptr = tensor.send(bob)
+ r_ptr = torch.split(ptr, 2)
+ assert (r_ptr[0].get() == torch.tensor([1, 2.0])).all()
+
+ tensor = torch.tensor([1, 2, 3, 4.0])
+ ptr = tensor.send(bob)
+ max_value, argmax_idx = torch.max(ptr, 0)
+
+ assert max_value.get().item() == 4.0
+ assert argmax_idx.get().item() == 3
diff --git a/test/workers/test_virtual.py b/test/workers/test_virtual.py
--- a/test/workers/test_virtual.py
+++ b/test/workers/test_virtual.py
@@ -1,5 +1,6 @@
-import syft as sy
+import random
+import syft as sy
from syft.workers.virtual import VirtualWorker
from syft.codes import MSGTYPE
from syft import serde
@@ -19,7 +20,8 @@ def test_send_msg():
me = sy.torch.hook.local_worker
# create a new worker (to send the object to)
- bob = VirtualWorker(sy.torch.hook)
+ worker_id = int(10e10 * random.random())
+ bob = VirtualWorker(sy.torch.hook, id=f"bob{worker_id}")
# initialize the object and save it's id
obj = torch.Tensor([100, 100])
@@ -40,7 +42,8 @@ def test_send_msg_using_tensor_api():
"""
# create worker to send object to
- bob = VirtualWorker(sy.torch.hook)
+ worker_id = int(10e10 * random.random())
+ bob = VirtualWorker(sy.torch.hook, id=f"bob{worker_id}")
# create a tensor to send (default on local_worker)
obj = torch.Tensor([100, 100])
@@ -66,7 +69,8 @@ def test_recv_msg():
# TEST 1: send tensor to alice
# create a worker to send data to
- alice = VirtualWorker(sy.torch.hook)
+ worker_id = int(10e10 * random.random())
+ alice = VirtualWorker(sy.torch.hook, id=f"alice{worker_id}")
# create object to send
obj = torch.Tensor([100, 100])
@@ -113,8 +117,10 @@ def tests_worker_convenience_methods():
"""
me = sy.torch.hook.local_worker
- bob = VirtualWorker(sy.torch.hook)
- alice = VirtualWorker(sy.torch.hook)
+ worker_id = int(10e10 * random.random())
+ bob = VirtualWorker(sy.torch.hook, id=f"bob{worker_id}")
+ worker_id = int(10e10 * random.random())
+ alice = VirtualWorker(sy.torch.hook, id=f"alice{worker_id}")
obj = torch.Tensor([100, 100])
# Send data to alice
@@ -142,7 +148,8 @@ def tests_worker_convenience_methods():
def test_search():
- bob = VirtualWorker(sy.torch.hook)
+ worker_id = int(10e10 * random.random())
+ bob = VirtualWorker(sy.torch.hook, id=f"bob{worker_id}")
x = (
torch.tensor([1, 2, 3, 4, 5])
diff --git a/test/workers/test_worker.py b/test/workers/test_worker.py
--- a/test/workers/test_worker.py
+++ b/test/workers/test_worker.py
@@ -1,5 +1,6 @@
import pytest
+import random
import torch
import syft as sy
from syft.exceptions import WorkerNotFoundException
@@ -11,15 +12,20 @@ def test___init__():
tensor = torch.tensor([1, 2, 3, 4])
- alice = VirtualWorker(hook, id="alice")
- bob = VirtualWorker(hook, id="bob")
- charlie = VirtualWorker(hook, id="charlie")
- dawson = VirtualWorker(hook, id="dawson", data=[tensor])
+ worker_id = int(10e10 * random.random())
+ alice_id = f"alice{worker_id}"
+ alice = VirtualWorker(hook, id=alice_id)
+ worker_id = int(10e10 * random.random())
+ bob = VirtualWorker(hook, id=f"bob{worker_id}")
+ worker_id = int(10e10 * random.random())
+ charlie = VirtualWorker(hook, id=f"charlie{worker_id}")
+ worker_id = int(10e10 * random.random())
+ dawson = VirtualWorker(hook, id=f"dawson{worker_id}", data=[tensor])
# Ensure adding data on signup functionality works as expected
assert tensor.owner == dawson
- assert bob.get_worker("alice").id == alice.id
+ assert bob.get_worker(alice_id).id == alice.id
assert bob.get_worker(alice).id == alice.id
assert bob.get_worker(charlie).id == charlie.id
| Get .max() working on PointerTensor; multiple ptrs in response
We can't return tuples of tensors when calling remote executions. This means that functions like .max() fail on PointerTensor objects.
| ```
import torch
import syft as sy
hook = sy.TorchHook(torch)
bob = sy.VirtualWorker(hook, id="bob")
x = torch.tensor([1,2,3,4,5])
x_ptr = x.send(bob)
print(x_ptr.max())
```
Currently, `x_ptr.max()` returns `(Wrapper)>[PointerTensor | me:73509977871 -> bob:73509977871]`
and instead should return `5`, right?
@iamtrask I was wondering if we should build an interface to directly enable all torch tensor functionality on pointer tensors. Is that something useful? Do we require all torch functionality for pointer tensors? Why do we specifically need max?
max is one among others (torch.split, etc.); basically, all functions that can return more than one tensor will fail if they are run remotely, as we explicitly always assume the response is a single tensor.
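After the fix, a multi-tensor response comes back as a tuple of pointers; a condensed sketch taken from the added tests (assumes a hooked torch and a worker `bob`):
```
import torch

ptr = torch.tensor([1, 2, 3, 4.0]).send(bob)
max_value, argmax_idx = torch.max(ptr, 0)

assert max_value.get().item() == 4.0
assert argmax_idx.get().item() == 3
```
| 2019-03-09T15:18:05 |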
OpenMined/PySyft | 2,022 | OpenMined__PySyft-2022 | [
"1992"
] | 3ec9428fcbbefa711ae78dc99654c295456b06bf | diff --git a/syft/workers/base.py b/syft/workers/base.py
--- a/syft/workers/base.py
+++ b/syft/workers/base.py
@@ -73,17 +73,6 @@ def __init__(
# objects where all objects are stored using their IDs as keys.
self._objects = {}
- # Declare workers as appropriate
- self._known_workers = {}
- if hook.local_worker is not None:
- if self.id not in self.hook.local_worker._known_workers:
- hook.local_worker.add_worker(self)
- for worker_id, worker in hook.local_worker._known_workers.items():
- if worker_id not in self._known_workers:
- self.add_worker(worker)
- if self.id not in worker._known_workers:
- worker.add_worker(self)
-
# For performance, we cache each
self._message_router = {
MSGTYPE.CMD: self.execute_command,
@@ -94,8 +83,29 @@ def __init__(
MSGTYPE.GET_SHAPE: self.get_tensor_shape,
MSGTYPE.SEARCH: self.search,
}
+
self.load_data(data)
+ # Declare workers as appropriate
+ self._known_workers = {}
+ if hook.local_worker is not None:
+ known_workers = self.hook.local_worker._known_workers
+ if self.id in known_workers:
+ if isinstance(known_workers[self.id], type(self)):
+ # If a worker with this id already exists and it has the
+ # same type as the one being created, we copy all the attributes
+ # of the existing worker to this one.
+ self.__dict__.update(known_workers[self.id].__dict__)
+ else:
+ raise RuntimeError("Worker initialized with the same id and different types.")
+ else:
+ hook.local_worker.add_worker(self)
+ for worker_id, worker in hook.local_worker._known_workers.items():
+ if worker_id not in self._known_workers:
+ self.add_worker(worker)
+ if self.id not in worker._known_workers:
+ worker.add_worker(self)
+
# SECTION: Methods which MUST be overridden by subclasses
@abstractmethod
def _send_msg(self, message: bin, location: "BaseWorker"):
| diff --git a/test/conftest.py b/test/conftest.py
--- a/test/conftest.py
+++ b/test/conftest.py
@@ -1,10 +1,27 @@
import pytest
import torch
+from multiprocessing import Process
import syft
from syft import TorchHook
[email protected]()
+def start_proc(): # pragma: no cover
+ """ helper function for spinning up a websocket participant """
+
+ def _start_proc(participant, kwargs):
+ def target():
+ server = participant(**kwargs)
+ server.start()
+
+ p = Process(target=target)
+ p.start()
+ return p
+
+ return _start_proc
+
+
@pytest.fixture(scope="session", autouse=True)
def hook():
hook = TorchHook(torch)
diff --git a/test/workers/test_base.py b/test/workers/test_base.py
new file mode 100644
--- /dev/null
+++ b/test/workers/test_base.py
@@ -0,0 +1,46 @@
+import pytest
+import time
+
+import syft as sy
+import torch as th
+
+from syft.workers import WebsocketClientWorker
+from syft.workers import WebsocketServerWorker
+
+
+def test_create_already_existing_worker(hook):
+ # Shares tensor with bob
+ bob = sy.VirtualWorker(hook, "bob")
+ x = th.tensor([1, 2, 3]).send(bob)
+
+ # Recreates bob and shares a new tensor
+ bob = sy.VirtualWorker(hook, "bob")
+ y = th.tensor([2, 2, 2]).send(bob)
+
+ # Recreates bob and shares a new tensor
+ bob = sy.VirtualWorker(hook, "bob")
+ z = th.tensor([2, 2, 10]).send(bob)
+
+ # Both workers should be the same, so the following operation should be valid
+ try:
+ _ = x + y * z
+ except KeyError:
+ assert False
+
+
+def test_create_already_existing_worker_with_different_type(hook, start_proc):
+ # Shares tensor with bob
+ bob = sy.VirtualWorker(hook, "bob")
+ _ = th.tensor([1, 2, 3]).send(bob)
+
+ kwargs = {"id": "fed1", "host": "localhost", "port": 8765, "hook": hook}
+ server = start_proc(WebsocketServerWorker, kwargs)
+
+ time.sleep(0.1)
+
+ # Recreates bob as a different type of worker
+ kwargs = {"id": "bob", "host": "localhost", "port": 8765, "hook": hook}
+ with pytest.raises(RuntimeError):
+ bob = WebsocketClientWorker(**kwargs)
+
+ server.terminate()
diff --git a/test/workers/test_websocket_worker.py b/test/workers/test_websocket_worker.py
--- a/test/workers/test_websocket_worker.py
+++ b/test/workers/test_websocket_worker.py
@@ -2,22 +2,9 @@
import time
from syft.workers import WebsocketClientWorker
from syft.workers import WebsocketServerWorker
-from multiprocessing import Process
-def start_proc(participant, kwargs): # pragma: no cover
- """ helper function for spinning up a websocket participant """
-
- def target():
- server = participant(**kwargs)
- server.start()
-
- p = Process(target=target)
- p.start()
- return p
-
-
-def test_websocket_worker(hook):
+def test_websocket_worker(hook, start_proc):
"""Evaluates that you can do basic tensor operations using
WebsocketServerWorker"""
| Urgent: fail gracefully when initializing two VirtualWorker objects with the same id/name
When people are introduced to PySyft, they often fall into the mistake (particularly in Jupyter Notebooks) of initializing a VirtualWorker with the same ID twice. This causes really strange errors because both workers end up co-existing together. We need to modify the VirtualWorker (or perhaps the BaseWorker) API to make it so that this fails gracefully (aka... that the new worker actually becomes the old worker, for all intents and purposes)
Aka - we want this code to function normally:
```
bob = sy.VirtualWorker(hook, "bob")
x = th.tensor([1,2,3,4,5]).send(bob)

bob = sy.VirtualWorker(hook, "bob")
y = th.tensor([1,2,3,4,5]).send(bob)

z = x + y
z.get()  # returns [2,4,6,8,10]
```
We should accomplish this by having both the new "bob" and the old "bob" point to the same underlying _objects dictionary.
| At any moment, you can check all the virtual workers present in your process by retrieving syft.hook.local_worker._known_workers, as they get automatically added to this list on __init__. Just making a search and returning the already existing one, if any, should fix the problem
@LaRiffle Tried this out. Trying to naively return the known worker with the same id causes over half of the tests to fail. I am of the impression it might be best to make this simply throw a verbose error for the moment. See #2000 for reference to my changes
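The core of the merged fix, paraphrased from the patch above: when a worker with the same id and type already exists, the new instance adopts its state instead of shadowing it.
```
known_workers = hook.local_worker._known_workers
if self.id in known_workers:
    if isinstance(known_workers[self.id], type(self)):
        # copy all attributes of the existing worker onto this one,
        # so both names refer to the same underlying state
        self.__dict__.update(known_workers[self.id].__dict__)
    else:
        raise RuntimeError("Worker initialized with the same id and different types.")
```
| 2019-03-28T23:21:41 |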
OpenMined/PySyft | 2,048 | OpenMined__PySyft-2048 | [
"2047"
] | 0a6063307022aac492d4c8351c98980daa45bc65 | diff --git a/syft/exceptions.py b/syft/exceptions.py
--- a/syft/exceptions.py
+++ b/syft/exceptions.py
@@ -16,8 +16,7 @@ class PureTorchTensorFoundError(BaseException):
message -- explanation of the error
"""
- def __init__(self, tensor):
- self.tensor = tensor
+ pass
class RemoteTensorFoundError(BaseException):
diff --git a/syft/frameworks/torch/hook_args.py b/syft/frameworks/torch/hook_args.py
--- a/syft/frameworks/torch/hook_args.py
+++ b/syft/frameworks/torch/hook_args.py
@@ -42,10 +42,10 @@
PointerTensor: lambda p: (_ for _ in ()).throw(RemoteTensorFoundError(p)),
torch.Tensor: lambda i: i.child
if hasattr(i, "child")
- else (_ for _ in ()).throw(PureTorchTensorFoundError(i)),
+ else (_ for _ in ()).throw(PureTorchTensorFoundError),
torch.nn.Parameter: lambda i: i.child
if hasattr(i, "child")
- else (_ for _ in ()).throw(PureTorchTensorFoundError(i)),
+ else (_ for _ in ()).throw(PureTorchTensorFoundError),
LoggingTensor: lambda i: i.child,
FixedPrecisionTensor: lambda i: i.child,
AdditiveSharingTensor: lambda i: i.child,
diff --git a/syft/frameworks/torch/torch_attributes.py b/syft/frameworks/torch/torch_attributes.py
--- a/syft/frameworks/torch/torch_attributes.py
+++ b/syft/frameworks/torch/torch_attributes.py
@@ -98,6 +98,7 @@ def __init__(self, torch: ModuleType, hook: ModuleType) -> None:
"rand",
"randint",
"randn_like",
+ "randn",
"range",
"save",
"short",
| Key error in hook_args when calling torch.randn()
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import syft as sy # <-- NEW: import the Pysyft library
hook = sy.TorchHook(torch) # <-- NEW: hook PyTorch ie add extra functionalities to support Federated Learning
a=torch.randn_like(torch.ones(1,2))
print(a)
a=torch.randn(1,2)
print(a)
```
`a=torch.randn_like(torch.ones(1,2))`
works properly, but an error occurs when executing torch.randn():
```
Traceback (most recent call last):
  File "/home/crd/.local/lib/python3.6/site-packages/syft/frameworks/torch/hook_args.py", line 134, in hook_function_args
    hook_args = hook_method_args_functions[attr]
KeyError: 'torch.randn'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test.py", line 11, in <module>
    a=torch.randn(1,2)
  File "/home/crd/.local/lib/python3.6/site-packages/syft/frameworks/torch/hook.py", line 690, in overloaded_func
    response = TorchTensor.handle_func_command(command)
  File "/home/crd/.local/lib/python3.6/site-packages/syft/frameworks/torch/tensors/interpreters/native.py", line 191, in handle_func_command
    cmd, args, kwargs, return_args_type=True
  File "/home/crd/.local/lib/python3.6/site-packages/syft/frameworks/torch/hook_args.py", line 141, in hook_function_args
    args, return_tuple=True
  File "/home/crd/.local/lib/python3.6/site-packages/syft/frameworks/torch/hook_args.py", line 171, in build_hook_args_function
```
| The thing is that we haven't hooked torch.randn yet. Check issue https://github.com/OpenMined/PySyft/issues/2046; we should hopefully get it ready soon. For now, what you could do is create a random array in numpy and convert it to a torch tensor.
| Thanks @peter9711 for reporting this issue; don't hesitate to put the full stack trace next time, the real message was in the part you cut. I'm issuing a fix in a few minutes!
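A sketch of the interim workaround suggested above, pending the fix (the float32 cast is an assumption):
```
import numpy as np
import torch

a = torch.from_numpy(np.random.randn(1, 2).astype("float32"))
print(a)
```
| 2019-04-08T15:35:37 |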
|
OpenMined/PySyft | 2,106 | OpenMined__PySyft-2106 | [
"2105"
] | 39ef35da493bafa51d1482b8eae23ac072ce7eac | diff --git a/syft/frameworks/torch/hook.py b/syft/frameworks/torch/hook.py
--- a/syft/frameworks/torch/hook.py
+++ b/syft/frameworks/torch/hook.py
@@ -713,7 +713,6 @@ def _add_registration_to___init__(hook_self, tensor_type: type, torch_tensor: bo
tensor_type.native___init__ = tensor_type.__init__
def new___init__(cls, *args, owner=None, id=None, register=True, **kwargs):
-
initialize_tensor(
hook_self=hook_self,
cls=cls,
@@ -723,9 +722,6 @@ def new___init__(cls, *args, owner=None, id=None, register=True, **kwargs):
init_kwargs=kwargs,
)
- # if register:
- # owner.register_object(cls, id=id)
-
tensor_type.__init__ = new___init__
def _hook_tensor(hook_self):
@@ -742,6 +738,9 @@ def _hook_tensor(hook_self):
def new_tensor(*args, owner=None, id=None, register=True, **kwargs):
current_tensor = hook_self.torch.native_tensor(*args, **kwargs)
_apply_args(hook_self, current_tensor, owner, id)
+ if register:
+ current_tensor.owner.register_obj(current_tensor)
+
return current_tensor
hook_self.torch.tensor = new_tensor
| diff --git a/test/test_local_worker.py b/test/test_local_worker.py
new file mode 100644
--- /dev/null
+++ b/test/test_local_worker.py
@@ -0,0 +1,16 @@
+import syft as sy
+import torch as th
+
+
+def test_is_client_true(hook):
+ me = hook.local_worker
+ me.is_client_worker = True
+ x = th.tensor([1, 2, 3])
+ assert x.id not in me._objects
+
+
+def test_is_client_false(hook):
+ me = hook.local_worker
+ me.is_client_worker = False
+ x = th.tensor([1, 2, 3])
+ assert x.id in me._objects
| A new Tensor should be automatically registered if local_worker is not a client worker
```
import syft as sy
import torch as th

hook = sy.TorchHook(th, is_client=False)
me = hook.local_worker
print(me._objects)  # {}
x = th.tensor([1, 2, 3])
print(me._objects)  # {}
```
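The added tests pin down the expected behavior on both sides of the flag; a standalone sketch:
```
import syft as sy
import torch as th

hook = sy.TorchHook(th)
me = hook.local_worker

me.is_client_worker = True
x = th.tensor([1, 2, 3])
assert x.id not in me._objects  # client workers stay unregistered

me.is_client_worker = False
y = th.tensor([4, 5, 6])
assert y.id in me._objects  # creation now registers the tensor
```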
| 2019-04-26T22:06:24 |
|
OpenMined/PySyft | 2,144 | OpenMined__PySyft-2144 | [
"2126"
] | a3dccbbff1cdf767f0d99a267aa3d41968da64a9 | diff --git a/syft/frameworks/torch/hook/hook.py b/syft/frameworks/torch/hook/hook.py
--- a/syft/frameworks/torch/hook/hook.py
+++ b/syft/frameworks/torch/hook/hook.py
@@ -365,7 +365,8 @@ def data(self, new_data):
if hasattr(self, "child"):
del self.child
- self.native_param_data.set_(new_data) # .wrap()
+ with torch.no_grad():
+ self.set_(new_data)
return self
torch.nn.Parameter.data = data
@@ -408,7 +409,8 @@ def grad(self, new_grad):
self.child.grad = new_grad # .wrap()
else:
if self.native_param_grad is not None:
- self.native_param_grad.set_(new_grad) # .wrap()
+ with torch.no_grad():
+ self.native_param_grad = new_grad
elif new_grad is not None:
self.native_param_grad = new_grad
return self
diff --git a/syft/frameworks/torch/hook/hook_args.py b/syft/frameworks/torch/hook/hook_args.py
--- a/syft/frameworks/torch/hook/hook_args.py
+++ b/syft/frameworks/torch/hook/hook_args.py
@@ -23,6 +23,8 @@
hook_method_response_functions = {}
get_tensor_type_functions = {}
+base_types = {int, float, str, bool, bytes, bytearray, complex}
+
one = lambda _args: 1
# dict to specify the action depending of the type found
@@ -274,9 +276,17 @@ def build_rule(args):
"""
type_args = type(args)
+ # for list, tuple but also tensors and syft tensors
if type_args in type_rule:
return type_rule[type_args](args)
+ # for int, float, str, etc
+ elif type_args in base_types:
+ return 0
else:
+ # New kind of return with pytorch 1.1
+ if "torch.return_types" in str(type_args):
+ return type_rule[tuple](args)
+ # Still remain ellipsis, slices, etc.
return 0
diff --git a/syft/frameworks/torch/tensors/interpreters/native.py b/syft/frameworks/torch/tensors/interpreters/native.py
--- a/syft/frameworks/torch/tensors/interpreters/native.py
+++ b/syft/frameworks/torch/tensors/interpreters/native.py
@@ -286,13 +286,15 @@ def send(
if self._is_parameter():
if inplace:
- self.data.set_()
+ with torch.no_grad():
+ self.set_()
self.data = ptr
output = self
else:
wrapper = torch.Tensor()
param_wrapper = torch.nn.Parameter(wrapper)
- param_wrapper.data.set_()
+ with torch.no_grad():
+ param_wrapper.set_()
param_wrapper.data = ptr
output = param_wrapper
else:
| diff --git a/test/conftest.py b/test/conftest.py
--- a/test/conftest.py
+++ b/test/conftest.py
@@ -32,7 +32,9 @@ def hook():
def workers(hook):
# To run a plan locally the local worker can't be a client worker,
# since it needs to register objects
- hook.local_worker.is_client_worker = False
+ # LaRiffle edit: doing this increases the reference count on pointers and
+ # breaks the auto garbage collection for pointer of pointers, see #2150
+ # hook.local_worker.is_client_worker = False
# reset the hook and the local worker
syft.local_worker.clear_objects()
diff --git a/test/torch/tensors/test_gc.py b/test/torch/tensors/test_gc.py
--- a/test/torch/tensors/test_gc.py
+++ b/test/torch/tensors/test_gc.py
@@ -11,14 +11,16 @@
def test_explicit_garbage_collect_pointer(workers):
"""Tests whether deleting a PointerTensor garbage collects the remote object too"""
+ alice, bob = workers["alice"], workers["bob"]
+
# create tensor
x = torch.Tensor([1, 2])
# send tensor to bob
- x_ptr = x.send(workers["bob"])
+ x_ptr = x.send(bob)
# ensure bob has tensor
- assert x.id in workers["bob"]._objects
+ assert x.id in bob._objects
# delete pointer to tensor, which should
# automatically garbage collect the remote
@@ -26,29 +28,31 @@ def test_explicit_garbage_collect_pointer(workers):
del x_ptr
# ensure bob's object was garbage collected
- assert x.id not in workers["bob"]._objects
+ assert x.id not in bob._objects
def test_explicit_garbage_collect_double_pointer(workers):
"""Tests whether deleting a pointer to a pointer garbage collects
the remote object too"""
+ alice, bob = workers["alice"], workers["bob"]
+
# create tensor
x = torch.Tensor([1, 2])
# send tensor to bob and then pointer to alice
- x_ptr = x.send(workers["bob"])
- x_ptr_ptr = x_ptr.send(workers["alice"])
+ x_ptr = x.send(bob)
+ x_ptr_ptr = x_ptr.send(alice)
# ensure bob has tensor
- assert x.id in workers["bob"]._objects
+ assert x.id in bob._objects
# delete pointer to pointer to tensor, which should automatically
# garbage collect the remote object on Bob's machine
del x_ptr_ptr
# ensure bob's object was garbage collected
- assert x.id not in workers["bob"]._objects
+ assert x.id not in bob._objects
# TODO: shouldn't we check that alice's object was
# garbage collected as well?
# assert x.id not in workers["alice"]._objects
@@ -57,13 +61,13 @@ def test_explicit_garbage_collect_double_pointer(workers):
x = torch.Tensor([1, 2])
x_id = x.id
# send tensor to bob and then pointer to alice
- x = x.send(workers["bob"]).send(workers["alice"])
+ x = x.send(bob).send(alice)
# ensure bob has tensor
- assert x_id in workers["bob"]._objects
+ assert x_id in bob._objects
# delete pointer to pointer to tensor
del x
# ensure bob's object was garbage collected
- assert x_id not in workers["bob"]._objects
+ assert x_id not in bob._objects
# TODO: shouldn't we check that alice's object was
# garbage collected as well?
# assert x.id not in workers["alice"]._objects
@@ -72,14 +76,16 @@ def test_explicit_garbage_collect_double_pointer(workers):
def test_implicit_garbage_collection_pointer(workers):
"""Tests whether GCing a PointerTensor GCs the remote object too."""
+ alice, bob = workers["alice"], workers["bob"]
+
# create tensor
x = torch.Tensor([1, 2])
# send tensor to bob
- x_ptr = x.send(workers["bob"])
+ x_ptr = x.send(bob)
# ensure bob has tensor
- assert x.id in workers["bob"]._objects
+ assert x.id in bob._objects
# delete pointer to tensor, which should
# automatically garbage collect the remote
@@ -87,47 +93,49 @@ def test_implicit_garbage_collection_pointer(workers):
x_ptr = "asdf"
# ensure bob's object was garbage collected
- assert x.id not in workers["bob"]._objects
+ assert x.id not in bob._objects
def test_implicit_garbage_collect_double_pointer(workers):
"""Tests whether GCing a pointer to a pointer garbage collects
the remote object too"""
+ alice, bob = workers["alice"], workers["bob"]
+
# create tensor
x = torch.Tensor([1, 2])
# send tensor to bob and then pointer to alice
- x_ptr = x.send(workers["bob"])
- x_ptr_ptr = x_ptr.send(workers["alice"])
+ x_ptr = x.send(bob)
+ x_ptr_ptr = x_ptr.send(alice)
# ensure bob has tensor
- assert x.id in workers["bob"]._objects
+ assert x.id in bob._objects
# delete pointer to pointer to tensor, which should automatically
# garbage collect the remote object on Bob's machine
x_ptr_ptr = "asdf"
# ensure bob's object was garbage collected
- assert x.id not in workers["bob"]._objects
+ assert x.id not in bob._objects
# TODO: shouldn't we check that alice's object was
# garbage collected as well?
- # assert x.id not in workers["alice"]._objects
+ # assert x.id not in alice._objects
# Chained version
x = torch.Tensor([1, 2])
x_id = x.id
# send tensor to bob and then pointer to alice
- x = x.send(workers["bob"]).send(workers["alice"])
+ x = x.send(bob).send(alice)
# ensure bob has tensor
- assert x_id in workers["bob"]._objects
+ assert x_id in bob._objects
# delete pointer to pointer to tensor
x = "asdf"
# ensure bob's object was garbage collected
- assert x_id not in workers["bob"]._objects
+ assert x_id not in bob._objects
# TODO: shouldn't we check that alice's object was
# garbage collected as well?
- # assert x.id not in workers["alice"]._objects
+ # assert x.id not in alice._objects
# TESTING IN PLACE METHODS
@@ -135,6 +143,7 @@ def test_implicit_garbage_collect_double_pointer(workers):
def test_inplace_method_on_pointer(workers):
bob = workers["bob"]
+
tensor = torch.tensor([[1.0, 2], [4.0, 2]])
pointer = tensor.send(bob)
pointer.add_(pointer)
@@ -150,16 +159,18 @@ def test_explicit_garbage_collect_logging_on_pointer(workers):
Tests whether deleting a LoggingTensor on a PointerTensor
garbage collects the remote object too
"""
+ alice, bob = workers["alice"], workers["bob"]
+
x = torch.Tensor([1, 2])
x_id = x.id
- x = x.send(workers["bob"])
+ x = x.send(bob)
x = LoggingTensor().on(x)
- assert x_id in workers["bob"]._objects
+ assert x_id in bob._objects
del x
- assert x_id not in workers["bob"]._objects
+ assert x_id not in bob._objects
def test_implicit_garbage_collect_logging_on_pointer(workers):
@@ -167,13 +178,15 @@ def test_implicit_garbage_collect_logging_on_pointer(workers):
Tests whether GCing a LoggingTensor on a PointerTensor
garbage collects the remote object too
"""
+ alice, bob = workers["alice"], workers["bob"]
+
x = torch.Tensor([1, 2])
x_id = x.id
- x = x.send(workers["bob"])
+ x = x.send(bob)
x = LoggingTensor().on(x)
- assert x_id in workers["bob"]._objects
+ assert x_id in bob._objects
x = "open-source"
- assert x_id not in workers["bob"]._objects
+ assert x_id not in bob._objects
| Torch 1.1 Integration
PyTorch just released torch 1.1, and it causes a lot of failing unit tests because of some breaking API updates. We need to address them.
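One recurring breakage visible in the patch, shown as a hedged sketch: on torch 1.1, calling `set_` on a Parameter (a leaf tensor that requires grad) must happen under `no_grad`.
```
import torch

param = torch.nn.Parameter(torch.zeros(2))
new_data = torch.ones(2)

with torch.no_grad():
    param.set_(new_data)  # raises on torch 1.1 without the no_grad guard
```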
| 2019-05-15T22:50:14 |
|
OpenMined/PySyft | 2,254 | OpenMined__PySyft-2254 | [
"2250"
] | 2b1b84e6c6144059cba31bc8faf8db89d72c2260 | diff --git a/syft/workers/tfe.py b/syft/workers/tfe.py
--- a/syft/workers/tfe.py
+++ b/syft/workers/tfe.py
@@ -1,12 +1,15 @@
"""To be extended in the near future."""
from collections import OrderedDict
import logging
+import os
import subprocess
+import tempfile
import tf_encrypted as tfe
logger = logging.getLogger("tf_encrypted")
+_TMP_DIR = tempfile.gettempdir()
class TFEWorker:
@@ -23,26 +26,24 @@ def start(self, player_name, *workers):
# we're running using a tfe.LocalConfig which doesn't require us to do anything
return
- config_filename = "/tmp/tfe.config"
+ config_filename = os.path.join(_TMP_DIR, "tfe.config")
config, _ = self.config_from_workers(workers)
config.save(config_filename)
+ launch_cmd = "python -m tf_encrypted.player --config {} {}".format(
+ config_filename, player_name
+ )
if self._auto_managed:
- cmd = "python -m tf_encrypted.player --config {} {}".format(
- config_filename, player_name
- )
- self._server_process = subprocess.Popen(cmd.split(" "))
+ self._server_process = subprocess.Popen(launch_cmd.split(" "))
else:
logger.info(
"If not done already, please launch the following "
- "command in a terminal on host '%s':\n"
- "'python -m tf_encrypted.player --config %s %s'\n"
+ "command in a terminal on host %s: '%s'\n"
"This can be done automatically in a local subprocess by "
- "setting `auto_managed=True` when instantiating a TFEWorker.",
+ "setting `auto_managed=True` when instantiating a TFEWorker.\n",
self.host,
- config_filename,
- player_name,
+ launch_cmd,
)
def stop(self):
| Syft Keras bug on Windows
Relevant slack discussion: https://openmined.slack.com/archives/C6DEWA4FR/p1559899875021800
Bug:

It looks like the problem here is that the `tfe.config` is being saved in a location that is not a valid filepath in Windows. As a result, there is likely a file with the name `/tmp/tfe.config` being saved in some folder on the machine, as opposed to a file with the name `tfe.config` being saved in the root subdirectory called `tmp`.
The fix for this should use `os.path` to figure out which filepath the tfe.config should be saved to, and then the logging messages should print the OS-specific CLI command for launching each `TFEWorker` process.
| 2019-06-07T17:37:45 |
||
OpenMined/PySyft | 2,262 | OpenMined__PySyft-2262 | [
"2260"
] | 9685467662d21bbf534af746640e19926b27c23f | diff --git a/syft/frameworks/torch/hook/hook_args.py b/syft/frameworks/torch/hook/hook_args.py
--- a/syft/frameworks/torch/hook/hook_args.py
+++ b/syft/frameworks/torch/hook/hook_args.py
@@ -74,10 +74,12 @@
"my_syft_tensor_type": lambda i, **kwargs: "my_syft_tensor_type(**kwargs).on(i, wrap=False)",
}
-# methods that we really don't want to hook, for example because they have an arbitrary
-# number of tensors in args signature response
-exclude_methods = {"__getitem__", "_getitem_public", "view", "permute"}
-exclude_functions = {"torch.unbind", "unbind", "torch.stack", "stack", "torch.mean", "torch.sum"}
+# Functions that we really don't want to hook because they don't have tensors in their signature
+exclude_functions = {"as_tensor", "torch.as_tensor"}
+# Methods or functions whose signature changes a lot and that we don't want to "cache", because
+# they have an arbitrary number of tensors in args which can trigger unexpected behaviour
+ambiguous_methods = {"__getitem__", "_getitem_public", "view", "permute"}
+ambiguous_functions = {"torch.unbind", "unbind", "torch.stack", "stack", "torch.mean", "torch.sum"}
def hook_method_args(attr, method_self, args, kwargs):
@@ -105,7 +107,7 @@ def hook_method_args(attr, method_self, args, kwargs):
attr_id = type(method_self).__name__ + "." + attr
try:
- assert attr not in exclude_methods
+ assert attr not in ambiguous_methods
# Load the utility function to transform the args
hook_args = hook_method_args_functions[attr_id]
@@ -139,7 +141,10 @@ def hook_function_args(attr, args, kwargs, return_args_type=False):
(- the type of the tensors in the arguments)
"""
try:
- assert attr not in exclude_functions
+ if attr in exclude_functions:
+ raise PureTorchTensorFoundError
+
+ assert attr not in ambiguous_functions
# Load the utility function to transform the args
# TODO rename registry or use another one than for methods
hook_args = hook_method_args_functions[attr]
@@ -224,7 +229,7 @@ def hook_response(attr, response, wrap_type, wrap_args={}, new_self=None):
attr_id = f"{attr}@{wrap_type.__name__}.{response_is_tuple}.{hash_wrap_args}"
try:
- assert attr not in exclude_functions
+ assert attr not in ambiguous_functions
# Load the utility function to transform the args
response_hook_function = hook_method_response_functions[attr_id]
@@ -636,7 +641,7 @@ def register_response(
attr_id = "{}".format(attr)
try:
- assert attr not in exclude_functions
+ assert attr not in ambiguous_functions
# Load the utility function to register the response and transform tensors with pointers
register_response_function = register_response_functions[attr_id]
| Part 6 and Part 11 Tutorial - List index out of range
**Describe the bug**
Part 11 of the tutorial breaks when running cell no. 5.
See also related issue #2243.
**Edit:** The error occurs with torchvision >0.2.2.
**To Reproduce**
Steps to reproduce the behavior:
1. Launch the Jupyter Notebook containing Part 11 - Secure Deep Learning tutorial
2. Run all cells up to the following one:
```
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=args.test_batch_size, shuffle=True)

private_test_loader = []
for data, target in test_loader:
    private_test_loader.append((
        data.fix_precision().share(alice, bob, crypto_provider=crypto_provider),
        target.fix_precision().share(alice, bob, crypto_provider=crypto_provider)
    ))
```
3. Run that cell
4. See the error generated:
https://pastebin.com/cYfEMJ2H
**Expected behavior**
The cell should run without errors.
**Screenshots**


**Desktop (please complete the following information):**
Ubuntu 18.04 LTS
PySyft 0.1.17
| Hello @DanyEle,
you can see the discussion here:
https://openmined.slack.com/archives/C6EEFN3A8/p1559737323042400?thread_ts=1559737323.042400
It relates to [issue 1893](https://github.com/OpenMined/PySyft/issues/1893).
I encounter the same problem as you do :).
> Hello @DanyEle,
> you can see the discussion here:
> https://openmined.slack.com/archives/C6EEFN3A8/p1559737323042400?thread_ts=1559737323.042400
> relate to [issue 1893](https://github.com/OpenMined/PySyft/issues/1893) here.
> I encounter the same problem as you do :).
I confirm that issue also happens to me in Tutorial Part 6.
And I was wondering if I am just too dumb to get everything right... I got the same problem with any tutorial which uses the MNIST dataset.
Hey, can you confirm you don't have a gpu or cuda?
I use Google Colab.
Indeed I experience the same on colab. I'm looking at it
> Hey, can you confirm you don't have a gpu or cuda?
I have a GPU and CUDA installed, but even when disabling CUDA, this still happens.
> Hey, can you confirm you don't have a gpu or cuda?
I do not have an external GPU.
Specs:
Memory: 15,4 GiB
Processor: Intel Core i7-7500U CPU @ 2.70GHz x 4
Graphics: Intel HD Graphics 620 (Kaby Lake GT2)
OS: Ubuntu 18.10 64 Bit
Completely fresh installation with Anaconda and pip.
The error is related to the torchvision `0.3.0` version.
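Given the `exclude_functions` entry in the patch, the minimal check below should pass after the fix (that torchvision 0.3.0's MNIST path goes through `torch.as_tensor` is an inference from the patch, not verified here):
```
import torch

# torch.as_tensor is excluded from hooking and runs natively,
# so the dataset transform path no longer hits the IndexError
t = torch.as_tensor([1, 2, 3])
```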
Will work on a fix for it | 2019-06-10T17:43:18 |
|
OpenMined/PySyft | 2,274 | OpenMined__PySyft-2274 | [
"2273"
] | aa8f5b660a27f37f836b7d7ecf391ad8cbbf29e1 | diff --git a/syft/frameworks/torch/tensors/interpreters/precision.py b/syft/frameworks/torch/tensors/interpreters/precision.py
--- a/syft/frameworks/torch/tensors/interpreters/precision.py
+++ b/syft/frameworks/torch/tensors/interpreters/precision.py
@@ -96,6 +96,10 @@ def truncate(self, precision_fractional):
def add(self, _self, other):
"""Add two fixed precision tensors together.
"""
+ if isinstance(other, int):
+ scaled_int = other * self.base ** self.precision_fractional
+ return getattr(_self, "add")(scaled_int)
+
if _self.is_wrapper and not other.is_wrapper:
# If we try to add a FPT>(wrap)>AST and a FPT>torch.tensor,
# we want to perform AST + torch.tensor
@@ -125,6 +129,10 @@ def __iadd__(self, other):
def sub(self, _self, other):
"""Subtracts a fixed precision tensor from another one.
"""
+ if isinstance(other, int):
+ scaled_int = other * self.base ** self.precision_fractional
+ return getattr(_self, "sub")(scaled_int)
+
if _self.is_wrapper and not other.is_wrapper:
# If we try to subtract a FPT>(wrap)>AST and a FPT>torch.tensor,
# we want to perform AST - torch.tensor
@@ -166,7 +174,10 @@ def mul(self, other):
self.precision_fractional == other.precision_fractional
), "In mul, all args should have the same precision_fractional"
- if self.child.is_wrapper and not other.child.is_wrapper:
+ if isinstance(other, int):
+ new_self = self.child
+ new_other = other * self.base ** self.precision_fractional
+ elif self.child.is_wrapper and not other.child.is_wrapper:
# If we try to multiply a FPT>(wrap)>AST with a FPT>torch.tensor),
# we want to perform AST * torch.tensor
new_self = self.child
| diff --git a/test/torch/tensors/test_additive_shared.py b/test/torch/tensors/test_additive_shared.py
--- a/test/torch/tensors/test_additive_shared.py
+++ b/test/torch/tensors/test_additive_shared.py
@@ -180,6 +180,24 @@ def test_mul(workers):
assert (z == (t * t)).all()
+def test_operate_with_integer_constants(workers):
+ bob, alice, james = (workers["bob"], workers["alice"], workers["james"])
+ x = th.tensor([2.0])
+ x_sh = x.fix_precision().share(alice, bob, crypto_provider=james)
+
+ r_sh = x_sh + 10
+ assert r_sh.get().float_prec() == x + 10
+
+ r_sh = x_sh - 7
+ assert r_sh.get().float_prec() == x - 7
+
+ r_sh = x_sh * 2
+ assert r_sh.get().float_prec() == x * 2
+
+ r_sh = x_sh / 2
+ assert r_sh.get().float_prec() == x / 2
+
+
def test_stack(workers):
bob, alice, james = (workers["bob"], workers["alice"], workers["james"])
t = torch.tensor([1.3, 2])
diff --git a/test/torch/tensors/test_precision.py b/test/torch/tensors/test_precision.py
--- a/test/torch/tensors/test_precision.py
+++ b/test/torch/tensors/test_precision.py
@@ -261,6 +261,25 @@ def test_torch_nn_functional_linear():
assert (result == expected).all()
+def test_operate_with_integer_constants():
+ x = torch.tensor([1.0])
+ x_fp = x.fix_precision()
+
+ r_fp = x_fp + 10
+ r = r_fp.float_precision()
+ assert r == x + 10
+
+ r_fp = x_fp - 7
+ r = r_fp.float_precision()
+ assert r == x - 7
+
+ r_fp = x_fp * 2
+ assert r_fp.float_precision() == x * 2
+
+ r_fp = x_fp / 5
+ assert r_fp.float_precision() == x / 5
+
+
def test_fixed_precision_and_sharing(workers):
bob, alice = (workers["bob"], workers["alice"])
| AttributeError: 'int' object has no attribute 'is_wrapper'
**Describe the bug**
Running 'Part 11 - Secure Deep Learning Classification.ipynb' encounters "AttributeError: 'int' object has no attribute 'is_wrapper'".
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'examples/tutorials/Part 11 - Secure Deep Learning Classification.ipynb'
2. Click on 'run all'
3. Scroll down to 'cell 17 test(args, model, private_test_loader)'
4. See error
**Expected behavior**
No error here :).
**Screenshots**

**Desktop (please complete the following information):**
On Google Colaboratory.
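**Additional context**
A minimal sketch of the behaviour the patch above enables, mirroring the added tests (the hook setup line is an assumption about the local environment):

```python
import torch as th
import syft as sy

hook = sy.TorchHook(th)  # assumed local setup

x_fp = th.tensor([1.0]).fix_precision()

# Before the patch, mixing a fixed precision tensor with a plain Python int
# raised: AttributeError: 'int' object has no attribute 'is_wrapper'.
# The patch scales the int by base ** precision_fractional first, so:
assert (x_fp + 10).float_precision() == th.tensor([11.0])
assert (x_fp - 7).float_precision() == th.tensor([-6.0])
assert (x_fp * 2).float_precision() == th.tensor([2.0])
```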
| Hey, indeed there is a bug in our last release 0.1.18 when operating with constant values; I'm working on a fix right now. | 2019-06-12T09:18:14 |
OpenMined/PySyft | 2,276 | OpenMined__PySyft-2276 | [
"2272"
] | 887e558fe094f7245421a23c9da65370fde2f121 | diff --git a/syft/frameworks/torch/tensors/interpreters/multi_pointer.py b/syft/frameworks/torch/tensors/interpreters/multi_pointer.py
--- a/syft/frameworks/torch/tensors/interpreters/multi_pointer.py
+++ b/syft/frameworks/torch/tensors/interpreters/multi_pointer.py
@@ -84,6 +84,12 @@ def shape(self) -> torch.Size:
return list(self.child.values())[0].shape
+ def dim(self) -> int:
+ """This method fixes the error that the result of dim was a list of ints
+ stored inside a multipointer tensor"""
+
+ return len(self.shape)
+
def get(self, sum_results: bool = False) -> torch.Tensor:
results = list()
| diff --git a/test/torch/tensors/test_multi_pointer.py b/test/torch/tensors/test_multi_pointer.py
--- a/test/torch/tensors/test_multi_pointer.py
+++ b/test/torch/tensors/test_multi_pointer.py
@@ -30,3 +30,12 @@ def test_multi_pointers(workers):
c = b.get()
assert len(c) == 2
assert (c[0] == th.tensor([2, 4, 6, 8, 10])).all
+
+
+def test_dim(workers):
+ bob = workers["bob"]
+ alice = workers["alice"]
+
+ a = th.tensor([1, 2, 3, 4, 5]).send(bob, alice)
+
+ assert a.dim() == 1
| Dim does not work with multipointer tensors
**Describe the bug**
Calling `dim` on a multipointer tensor returns a multipointer tensor whose children are all ints. The return type should be a plain int.
**To Reproduce**
Create a multipointer tensor and call `.dim()`.
**Expected behavior**
The value returned should be an int.
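A minimal reproduction, assuming two hooked `VirtualWorker`s named `bob` and `alice` (it mirrors the added test):

```python
import torch as th

a = th.tensor([1, 2, 3, 4, 5]).send(bob, alice)

# Buggy behaviour: a.dim() returned a multipointer tensor wrapping one int
# per worker. With the fix, dim is computed locally as len(self.shape).
assert a.dim() == 1
```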
| 2019-06-12T16:05:23 |
|
OpenMined/PySyft | 2,278 | OpenMined__PySyft-2278 | [
"2277"
] | ec351865f9c18d114ef457843a455e929ea609a2 | diff --git a/syft/frameworks/torch/tensors/interpreters/multi_pointer.py b/syft/frameworks/torch/tensors/interpreters/multi_pointer.py
--- a/syft/frameworks/torch/tensors/interpreters/multi_pointer.py
+++ b/syft/frameworks/torch/tensors/interpreters/multi_pointer.py
@@ -204,7 +204,7 @@ def simplify(tensor: "MultiPointerTensor") -> tuple:
chain = None
if hasattr(tensor, "child"):
- chain = sy.serde.simplify(tensor.child)
+ chain = sy.serde._simplify(tensor.child)
return (tensor.id, chain)
@staticmethod
| diff --git a/test/torch/tensors/test_multi_pointer.py b/test/torch/tensors/test_multi_pointer.py
--- a/test/torch/tensors/test_multi_pointer.py
+++ b/test/torch/tensors/test_multi_pointer.py
@@ -39,3 +39,15 @@ def test_dim(workers):
a = th.tensor([1, 2, 3, 4, 5]).send(bob, alice)
assert a.dim() == 1
+
+
+def test_simplify(workers):
+ bob = workers["bob"]
+ alice = workers["alice"]
+
+ a = th.tensor([1, 2, 3, 4, 5]).send(bob, alice)
+ ser = sy.serde.serialize(a)
+ detail = sy.serde.deserialize(ser).child
+ assert isinstance(detail, sy.MultiPointerTensor)
+ for key in a.child.child:
+ assert key in detail.child
| Splitting up serde broke serialization for multipointers
**Describe the bug**
```
@staticmethod
def simplify(tensor: "MultiPointerTensor") -> tuple:
"""
This function takes the attributes of a MultiPointerTensor and saves them in a tuple
Args:
tensor (MultiPointerTensor): a MultiPointerTensor
Returns:
tuple: a tuple holding the unique attributes of the additive shared tensor
Examples:
data = simplify(tensor)
"""
chain = None
if hasattr(tensor, "child"):
> chain = sy.serde.simplify(tensor.child)
E AttributeError: module 'syft.serde' has no attribute 'simplify'
```
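A sketch of how the failure surfaces, following the added test (assumes hooked workers `bob` and `alice`):

```python
import torch as th
import syft as sy

a = th.tensor([1, 2, 3, 4, 5]).send(bob, alice)  # wraps a MultiPointerTensor

# Serialization walks the chain into MultiPointerTensor.simplify, which
# still called the removed module-level helper instead of sy.serde._simplify:
ser = sy.serde.serialize(a)  # raised AttributeError before the one-line fix
assert isinstance(sy.serde.deserialize(ser).child, sy.MultiPointerTensor)
```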
| 2019-06-12T18:46:10 |
|
OpenMined/PySyft | 2,291 | OpenMined__PySyft-2291 | [
"2290"
] | 22afb21178f1618fc1b48631ff49636ec93a61fb | diff --git a/syft/serde/native_serde.py b/syft/serde/native_serde.py
--- a/syft/serde/native_serde.py
+++ b/syft/serde/native_serde.py
@@ -2,10 +2,206 @@
This file exists to provide one common place for all serialisation and simplify_ and _detail
for all native python objects.
"""
-from syft.workers import AbstractWorker
+from collections import OrderedDict
+from typing import Collection
+from typing import Dict
from typing import Tuple
+
import numpy
+from syft.workers import AbstractWorker
+from syft import serde
+
+
+# Simplify/Detail Collections (list, set, tuple, etc.)
+
+
+def _simplify_collection(my_collection: Collection) -> Collection:
+ """
+ This function is designed to search a collection for any objects
+ which may need to be simplified (i.e., torch tensors). It iterates
+ through each object in the collection and calls _simplify on it. Finally,
+ it returns the output collection as the same type as the input collection
+ so that the consuming serialization step knows the correct type info. The
+ reverse function to this function is _detail_collection, which undoes
+ the functionality of this function.
+
+ Args:
+ my_collection (Collection): a collection of python objects
+
+ Returns:
+ Collection: a collection of the same type as the input of simplified
+ objects.
+
+ """
+
+ # Step 0: get collection type for later use and initialize empty list
+ my_type = type(my_collection)
+ pieces = list()
+
+ # Step 1: serialize each part of the collection
+ for part in my_collection:
+ pieces.append(serde._simplify(part))
+
+ # Step 2: convert back to original type and return serialization
+ if my_type == set:
+ return pieces
+ return my_type(pieces)
+
+
+def _detail_collection_list(worker: AbstractWorker, my_collection: Collection) -> Collection:
+ """
+ This function is designed to operate in the opposite direction of
+ _simplify_collection. It takes a collection of simple python objects
+ and iterates through it to determine whether objects in the collection
+ need to be converted into more advanced types. In particular, it
+ converts binary objects into torch Tensors where appropriate.
+
+ Args:
+ worker: the worker doing the deserialization
+ my_collection (Collection): a collection of simple python objects (including binary).
+
+ Returns:
+ Collection: a collection of the same type as the input where the objects
+ in the collection have been detailed.
+ """
+
+ pieces = list()
+
+ # Step 1: deserialize each part of the collection
+ for part in my_collection:
+ try:
+ pieces.append(
+ serde._detail(worker, part).decode("utf-8")
+ ) # transform bytes back to string
+ except AttributeError:
+ pieces.append(serde._detail(worker, part))
+
+ return pieces
+
+
+def _detail_collection_set(worker: AbstractWorker, my_collection: Collection) -> Collection:
+ """
+ This function is designed to operate in the opposite direction of
+ _simplify_collection. It takes a collection of simple python objects
+ and iterates through it to determine whether objects in the collection
+ need to be converted into more advanced types. In particular, it
+ converts binary objects into torch Tensors where appropriate.
+
+ Args:
+ worker: the worker doing the deserialization
+ my_collection (Collection): a collection of simple python objects (including binary).
+
+ Returns:
+ Collection: a collection of the same type as the input where the objects
+ in the collection have been detailed.
+ """
+
+ pieces = list()
+
+ # Step 1: deserialize each part of the collection
+ for part in my_collection:
+ try:
+ pieces.append(
+ serde._detail(worker, part).decode("utf-8")
+ ) # transform bytes back to string
+ except AttributeError:
+ pieces.append(serde._detail(worker, part))
+ return set(pieces)
+
+
+def _detail_collection_tuple(worker: AbstractWorker, my_tuple: Tuple) -> Tuple:
+ """
+ This function is designed to operate in the opposite direction of
+ _simplify_collection. It takes a tuple of simple python objects
+ and iterates through it to determine whether objects in the collection
+ need to be converted into more advanced types. In particular, it
+ converts binary objects into torch Tensors where appropriate.
+ This is only applicable to tuples. They need special handling because
+ `msgpack` is encoding a tuple as a list.
+
+ Args:
+ worker: the worker doing the deserialization
+ my_tuple (Tuple): a collection of simple python objects (including binary).
+
+ Returns:
+ tuple: a collection of the same type as the input where the objects
+ in the collection have been detailed.
+ """
+
+ pieces = list()
+
+ # Step 1: deserialize each part of the collection
+ for part in my_tuple:
+ pieces.append(serde._detail(worker, part))
+
+ return tuple(pieces)
+
+
+def _simplify_dictionary(my_dict: Dict) -> Dict:
+ """
+ This function is designed to search a dict for any objects
+ which may need to be simplified (i.e., torch tensors). It iterates
+ through each key, value in the dict and calls _simplify on it. Finally,
+ it returns the output dict as the same type as the input dict
+ so that the consuming serialization step knows the correct type info. The
+ reverse function to this function is _detail_dictionary, which undoes
+ the functionality of this function.
+
+ Args:
+ my_dict: A dictionary of python objects.
+
+ Returns:
+ Dict: A dictionary of the same type as the input of simplified
+ objects.
+
+ """
+ pieces = list()
+ # for dictionaries we want to simplify both the key and the value
+ for key, value in my_dict.items():
+ pieces.append((serde._simplify(key), serde._simplify(value)))
+
+ return pieces
+
+
+def _detail_dictionary(worker: AbstractWorker, my_dict: Dict) -> Dict:
+ """
+ This function is designed to operate in the opposite direction of
+ _simplify_dictionary. It takes a dictionary of simple python objects
+ and iterates through it to determine whether objects in the collection
+ need to be converted into more advanced types. In particular, it
+ converts binary objects into torch Tensors where appropriate.
+
+ Args:
+ worker: the worker doing the deserialization
+ my_dict (Dict): a dictionary of simple python objects (including binary).
+
+ Returns:
+ tuple: a collection of the same type as the input where the objects
+ in the collection have been detailed.
+ """
+ pieces = {}
+ # for dictionaries we want to detail both the key and the value
+ for key, value in my_dict:
+ detailed_key = serde._detail(worker, key)
+ try:
+ detailed_key = detailed_key.decode("utf-8")
+ except AttributeError:
+ pass
+
+ detailed_value = serde._detail(worker, value)
+ try:
+ detailed_value = detailed_value.decode("utf-8")
+ except AttributeError:
+ pass
+
+ pieces[detailed_key] = detailed_value
+
+ return pieces
+
+
+# Simplify/Detail native types
+
def _simplify_str(obj: str) -> tuple:
return (obj.encode("utf-8"),)
@@ -15,9 +211,6 @@ def _detail_str(worker: AbstractWorker, str_tuple: tuple) -> str:
return str_tuple[0].decode("utf-8")
-# Range
-
-
def _simplify_range(my_range: range) -> Tuple[int, int, int]:
"""
This function extracts the start, stop and step from the range.
@@ -107,53 +300,17 @@ def _detail_slice(worker: AbstractWorker, my_slice: Tuple[int, int, int]) -> sli
return slice(my_slice[0], my_slice[1], my_slice[2])
-def _simplify_ndarray(my_array: numpy.ndarray) -> Tuple[bin, Tuple, str]:
- """
- This function gets the byte representation of the array
- and stores the dtype and shape for reconstruction
-
- Args:
- my_array (numpy.ndarray): a numpy array
-
- Returns:
- list: a list holding the byte representation, shape and dtype of the array
-
- Examples:
-
- arr_representation = _simplify_ndarray(numpy.random.random([1000, 1000])))
-
- """
- arr_bytes = my_array.tobytes()
- arr_shape = my_array.shape
- arr_dtype = my_array.dtype.name
-
- return (arr_bytes, arr_shape, arr_dtype)
-
-
-def _detail_ndarray(
- worker: AbstractWorker, arr_representation: Tuple[bin, Tuple, str]
-) -> numpy.ndarray:
- """
- This function reconstruct a numpy array from it's byte data, the shape and the dtype
- by first loading the byte data with the appropiate dtype and then reshaping it into the
- original shape
-
- Args:
- worker: the worker doing the deserialization
- arr_representation (tuple): a tuple holding the byte representation, shape
- and dtype of the array
-
- Returns:
- numpy.ndarray: a numpy array
-
- Examples:
- arr = _detail_ndarray(arr_representation)
-
- """
- res = numpy.frombuffer(arr_representation[0], dtype=arr_representation[2]).reshape(
- arr_representation[1]
- )
-
- assert type(res) == numpy.ndarray
-
- return res
+# Maps a type to a tuple containing its simplifier and detailer function
+# IMPORTANT: keep this structure sorted A-Z (by type name)
+MAP_NATIVE_SIMPLIFIERS_AND_DETAILERS = OrderedDict(
+ {
+ dict: (_simplify_dictionary, _detail_dictionary),
+ list: (_simplify_collection, _detail_collection_list),
+ range: (_simplify_range, _detail_range),
+ set: (_simplify_collection, _detail_collection_set),
+ slice: (_simplify_slice, _detail_slice),
+ str: (_simplify_str, _detail_str),
+ tuple: (_simplify_collection, _detail_collection_tuple),
+ type(Ellipsis): (_simplify_ellipsis, _detail_ellipsis),
+ }
+)
diff --git a/syft/serde/serde.py b/syft/serde/serde.py
--- a/syft/serde/serde.py
+++ b/syft/serde/serde.py
@@ -29,8 +29,9 @@
serialization process, it can override the functions _serialize_tensor and _deserialize_tensor
By default, we serialize using msgpack and compress using lz4.
-If different compressions are required, the worker can override the function _apply_compress_scheme
+If different compressions are required, the worker can override the function apply_compress_scheme
"""
+from collections import OrderedDict
import torch
import msgpack
import lz4
@@ -44,7 +45,7 @@
from syft.federated import TrainConfig
-from syft.workers import AbstractWorker #
+from syft.workers import AbstractWorker
from syft.workers import VirtualWorker
from syft.federated import Plan
@@ -59,47 +60,113 @@
from syft.frameworks.torch.tensors.interpreters import MultiPointerTensor
from syft.frameworks.torch import pointers
+from syft.serde.native_serde import MAP_NATIVE_SIMPLIFIERS_AND_DETAILERS
+from syft.serde.torch_serde import MAP_TORCH_SIMPLIFIERS_AND_DETAILERS
-from syft.serde.native_serde import (
- _simplify_str,
- _simplify_range,
- _simplify_ellipsis,
- _simplify_slice,
- _detail_str,
- _detail_range,
- _detail_ellipsis,
- _detail_slice,
+# Maps a type to a tuple containing its simplifier and detailer function
+MAP_TO_SIMPLIFIERS_AND_DETAILERS = OrderedDict(
+ list(MAP_NATIVE_SIMPLIFIERS_AND_DETAILERS.items())
+ + list(MAP_TORCH_SIMPLIFIERS_AND_DETAILERS.items())
)
-from syft.serde.torch_serde import (
- _detail_torch_tensor,
- _detail_torch_parameter,
- _detail_collection_tuple,
- _detail_collection_list,
- _detail_collection_set,
- _detail_dictionary,
- _detail_ndarray,
- _detail_torch_device,
- _force_full_detail,
- _detail_script_module,
- _simplify_torch_tensor,
- _simplify_torch_parameter,
- _simplify_collection,
- _simplify_dictionary,
- _simplify_ndarray,
- _simplify_torch_device,
- _force_full_simplify,
- _simplify_script_module,
-)
+# Objects that we can run force full simplify on
+MAP_TO_FORCE_FULL_SIMPLIFY = [VirtualWorker]
+
+# If an object implements its own simplifier and detailer function it should be stored in this list
+OBJ_SIMPLIFIER_AND_DETAILERS = [
+ AdditiveSharingTensor,
+ LoggingTensor,
+ MultiPointerTensor,
+ Plan,
+ pointers.PointerTensor,
+ pointers.ObjectWrapper,
+ TrainConfig,
+ VirtualWorker,
+]
+EXCEPTION_SIMPLIFIER_AND_DETAILERS = [GetNotPermittedError, ResponseSignatureError]
# COMPRESSION SCHEME INT CODES
NO_COMPRESSION = 40
LZ4 = 41
ZSTD = 42
+## SECTION: High Level Simplification Router
+def _force_full_simplify(obj: object) -> object:
+ current_type = type(obj)
+
+ if current_type in forced_full_simplifiers:
+
+ left = forced_full_simplifiers[current_type][0]
+
+ right = forced_full_simplifiers[current_type][1]
+
+ right = right(obj)
+
+ result = (left, right)
+ else:
+ result = _simplify(obj)
+
+ return result
-# High Level Public Functions (these are the ones you use)
+
+def _force_full_detail(worker: AbstractWorker, worker_tuple: tuple) -> tuple:
+ worker_id, _objects, auto_add = worker_tuple
+ worker_id = _detail(worker, worker_id)
+
+ result = sy.VirtualWorker(sy.hook, worker_id, auto_add=auto_add)
+ _objects = _detail(worker, _objects)
+ result._objects = _objects
+
+ # make sure they weren't accidentally double registered
+ for _, obj in _objects.items():
+ if obj.id in worker._objects:
+ del worker._objects[obj.id]
+
+ return result
+
+
+## SECTION: dynamically generate simplifiers and detailers
+def _generate_simplifiers_and_detailers():
+ """Generate simplifiers, forced full simplifiers and detailers."""
+ simplifiers = OrderedDict()
+ forced_full_simplifiers = OrderedDict()
+ detailers = []
+
+ def _add_simplifier_and_detailer(curr_type, simplifier, detailer, forced=False):
+ if detailer in detailers:
+ curr_index = detailers.index(detailer)
+ else:
+ curr_index = len(detailers)
+ detailers.append(detailer)
+
+ if forced:
+ forced_full_simplifiers[curr_type] = (curr_index, simplifier)
+ else:
+ simplifiers[curr_type] = (curr_index, simplifier)
+
+ # Register native and torch types
+ for curr_type in MAP_TO_SIMPLIFIERS_AND_DETAILERS:
+ simplifier, detailer = MAP_TO_SIMPLIFIERS_AND_DETAILERS[curr_type]
+ _add_simplifier_and_detailer(curr_type, simplifier, detailer)
+
+ # Register syft objects with custom simplify and detail methods
+ for syft_type in OBJ_SIMPLIFIER_AND_DETAILERS + EXCEPTION_SIMPLIFIER_AND_DETAILERS:
+ simplifier, detailer = syft_type.simplify, syft_type.detail
+ _add_simplifier_and_detailer(syft_type, simplifier, detailer)
+
+ # Add forced full simplifiers
+ for curr_type in MAP_TO_FORCE_FULL_SIMPLIFY:
+ _add_simplifier_and_detailer(
+ curr_type, _force_full_simplify, _force_full_detail, forced=True
+ )
+
+ return simplifiers, forced_full_simplifiers, detailers
+
+
+simplifiers, forced_full_simplifiers, detailers = _generate_simplifiers_and_detailers()
+
+## SECTION: High Level Public Functions (these are the ones you use)
def serialize(
obj: object,
simplified: bool = False,
@@ -183,7 +250,7 @@ def deserialize(binary: bin, worker: AbstractWorker = None, details=True) -> obj
worker (AbstractWorker): the worker which is acquiring the message content,
for example used to specify the owner of a tensor received(not obvious
for virtual workers)
- detail (bool): there are some cases where we need to perform the decompression
+ details (bool): there are some cases where we need to perform the decompression
and deserialization part, but we don't need to detail all the message.
This is the case for Plan workers for instance
@@ -215,7 +282,7 @@ def deserialize(binary: bin, worker: AbstractWorker = None, details=True) -> obj
return simple_objects
-# Chosen Compression Algorithm
+## SECTION: Chosen Compression Algorithm
def _apply_compress_scheme(decompressed_input_bin) -> tuple:
@@ -276,9 +343,7 @@ def _compress(decompressed_input_bin: bin) -> bin:
bin: a compressed binary
"""
-
compress_stream, compress_scheme = _apply_compress_scheme(decompressed_input_bin)
-
if len(compress_stream) < len(decompressed_input_bin):
return compress_scheme.to_bytes(1, byteorder="big") + compress_stream
else:
@@ -315,9 +380,6 @@ def _decompress(binary: bin) -> bin:
)
-# High Level Simplification Router
-
-
def _simplify(obj: object) -> object:
"""
This function takes an object as input and returns a simple
@@ -331,29 +393,24 @@ def _simplify(obj: object) -> object:
being sent.
Args:
- obj: an object which may need to be simplified
+ obj: an object which may need to be simplified.
Returns:
- obj: an simple Python object which msgpack can serialize
+ obj: an simple Python object which msgpack can serialize.
Raises:
ValueError: if `move_this` or `in_front_of_that` are not both single ASCII
characters.
-
"""
-
try:
# check to see if there is a simplifier
# for this type. If there is, run return
# the simplified object
current_type = type(obj)
-
result = (simplifiers[current_type][0], simplifiers[current_type][1](obj))
-
return result
except KeyError:
-
# if there is not a simplifier for this
# object, then the object is already a
# simple python object and we can just
@@ -361,105 +418,23 @@ def _simplify(obj: object) -> object:
return obj
-def _force_full_simplify(obj: object) -> object:
- current_type = type(obj)
-
- if current_type in forced_full_simplifiers:
-
- left = forced_full_simplifiers[current_type][0]
-
- right = forced_full_simplifiers[current_type][1]
-
- right = right(obj)
-
- result = (left, right)
- else:
- result = _simplify(obj)
-
- return result
-
-
-simplifiers = {
- torch.Tensor: [0, _simplify_torch_tensor],
- torch.nn.Parameter: [1, _simplify_torch_parameter],
- tuple: [2, _simplify_collection],
- list: [3, _simplify_collection],
- set: [4, _simplify_collection],
- dict: [5, _simplify_dictionary],
- range: [6, _simplify_range],
- numpy.ndarray: [7, _simplify_ndarray],
- slice: [8, _simplify_slice],
- type(Ellipsis): [9, _simplify_ellipsis],
- torch.device: [10, _simplify_torch_device],
- pointers.PointerTensor: [11, sy.PointerTensor.simplify],
- LoggingTensor: [12, sy.LoggingTensor.simplify],
- AdditiveSharingTensor: [13, sy.AdditiveSharingTensor.simplify],
- MultiPointerTensor: [14, sy.MultiPointerTensor.simplify],
- Plan: [15, sy.Plan.simplify],
- VirtualWorker: [16, sy.VirtualWorker.simplify],
- str: [18, _simplify_str],
- pointers.ObjectWrapper: [19, sy.ObjectWrapper.simplify],
- GetNotPermittedError: [20, sy.exceptions.GetNotPermittedError.simplify],
- ResponseSignatureError: [20, sy.exceptions.ResponseSignatureError.simplify],
- torch.jit.ScriptModule: [21, _simplify_script_module],
- torch.jit.TopLevelTracedModule: [
- 21,
- _simplify_script_module,
- ], # treat as torch.jit.ScriptModule
- TrainConfig: [22, sy.TrainConfig.simplify],
-}
-
-
-forced_full_simplifiers = {VirtualWorker: [19, _force_full_simplify]}
-
-
def _detail(worker: AbstractWorker, obj: object) -> object:
- """
- This function reverses the functionality of _simplify. Where applicable,
- it converts simple objects into more complex objects such as converting
- binary objects into torch tensors. Read _simplify for more information on
- why _simplify and _detail are needed.
+ """Reverses the functionality of _simplify.
+ Where applicable, it converts simple objects into more complex objects such
+ as converting binary objects into torch tensors. Read _simplify for more
+ information on why _simplify and _detail are needed.
Args:
worker: the worker which is acquiring the message content, for example
used to specify the owner of a tensor received(not obvious for
- virtual workers)
- obj: a simple Python object which msgpack deserialized
+ virtual workers).
+ obj: a simple Python object which msgpack deserialized.
Returns:
obj: a more complex Python object which msgpack would have had trouble
deserializing directly.
-
"""
-
if type(obj) in (list, tuple):
return detailers[obj[0]](worker, obj[1])
else:
return obj
-
-
-detailers = [
- _detail_torch_tensor,
- _detail_torch_parameter,
- _detail_collection_tuple,
- _detail_collection_list,
- _detail_collection_set,
- _detail_dictionary,
- _detail_range,
- _detail_ndarray,
- _detail_slice,
- _detail_ellipsis,
- _detail_torch_device,
- sy.PointerTensor.detail,
- sy.LoggingTensor.detail,
- sy.AdditiveSharingTensor.detail,
- sy.MultiPointerTensor.detail,
- sy.Plan.detail,
- sy.VirtualWorker.detail,
- _force_full_detail,
- _detail_str,
- sy.ObjectWrapper.detail,
- sy.exceptions.GetNotPermittedError.detail,
- _detail_script_module,
- sy.TrainConfig.detail,
-]
diff --git a/syft/serde/torch_serde.py b/syft/serde/torch_serde.py
--- a/syft/serde/torch_serde.py
+++ b/syft/serde/torch_serde.py
@@ -1,10 +1,9 @@
"""
This file exists to provide one common place for all serialisation and simplify_ and _detail
-for all tensors (Torch) and collections.
+for all tensors (Torch and Numpy).
"""
+from collections import OrderedDict
from tempfile import TemporaryFile
-from typing import Collection
-from typing import Dict
from typing import Tuple
import torch
@@ -13,7 +12,6 @@
import warnings
import syft
-import syft as sy
from syft.federated import TrainConfig
@@ -32,18 +30,6 @@
from syft.frameworks.torch import pointers
-from syft.serde.native_serde import (
- _simplify_slice,
- _simplify_ellipsis,
- _simplify_range,
- _simplify_str,
- _detail_slice,
- _detail_ellipsis,
- _detail_range,
- _detail_str,
-)
-
-
def _serialize_tensor(tensor) -> bin:
"""Serialize the tensor using as default Torch serialization strategy
This function can be overridden to provide different tensor serialization strategies
@@ -153,7 +139,7 @@ def _simplify_torch_tensor(tensor: torch.Tensor) -> bin:
# I think the pointer bug is is between here
if hasattr(tensor, "child"):
- chain = _simplify(tensor.child)
+ chain = syft.serde._simplify(tensor.child)
# and here... leaving a reerence here so i can find it later
# TODO fix pointer bug
@@ -212,7 +198,7 @@ def _detail_torch_tensor(worker: AbstractWorker, tensor_tuple: tuple) -> torch.T
tensor.description = description
if chain is not None:
- chain = detail(worker, chain)
+ chain = syft.serde._detail(worker, chain)
tensor.child = chain
tensor.is_wrapper = True
@@ -282,193 +268,7 @@ def _detail_torch_parameter(worker: AbstractWorker, param_tuple: tuple) -> torch
return param
-# Simplify/Detail Collections (list, set, tuple, etc.)
-
-
-def _simplify_collection(my_collection: Collection) -> Collection:
- """
- This function is designed to search a collection for any objects
- which may need to be simplified (i.e., torch tensors). It iterates
- through each object in the collection and calls _simplify on it. Finally,
- it returns the output collection as the same type as the input collection
- so that the consuming serialization step knows the correct type info. The
- reverse function to this function is _detail_collection, which undoes
- the functionality of this function.
-
- Args:
- my_collection (Collection): a collection of python objects
-
- Returns:
- Collection: a collection of the same type as the input of simplified
- objects.
-
- """
-
- # Step 0: get collection type for later use and itialize empty list
- my_type = type(my_collection)
- pieces = list()
-
- # Step 1: serialize each part of the collection
- for part in my_collection:
- pieces.append(_simplify(part))
-
- # Step 2: convert back to original type and return serialization
- if my_type == set:
- return pieces
- return my_type(pieces)
-
-
-def _detail_collection_list(worker: AbstractWorker, my_collection: Collection) -> Collection:
- """
- This function is designed to operate in the opposite direction of
- _simplify_collection. It takes a collection of simple python objects
- and iterates through it to determine whether objects in the collection
- need to be converted into more advanced types. In particular, it
- converts binary objects into torch Tensors where appropriate.
-
- Args:
- worker: the worker doing the deserialization
- my_collection (Collection): a collection of simple python objects (including binary).
-
- Returns:
- Collection: a collection of the same type as the input where the objects
- in the collection have been detailed.
- """
-
- pieces = list()
-
- # Step 1: deserialize each part of the collection
- for part in my_collection:
- try:
- pieces.append(detail(worker, part).decode("utf-8")) # transform bytes back to string
- except AttributeError:
- pieces.append(detail(worker, part))
-
- return pieces
-
-
-def _detail_collection_set(worker: AbstractWorker, my_collection: Collection) -> Collection:
- """
- This function is designed to operate in the opposite direction of
- _simplify_collection. It takes a collection of simple python objects
- and iterates through it to determine whether objects in the collection
- need to be converted into more advanced types. In particular, it
- converts binary objects into torch Tensors where appropriate.
-
- Args:
- worker: the worker doing the deserialization
- my_collection (Collection): a collection of simple python objects (including binary).
-
- Returns:
- Collection: a collection of the same type as the input where the objects
- in the collection have been detailed.
- """
-
- pieces = list()
-
- # Step 1: deserialize each part of the collection
- for part in my_collection:
- try:
- pieces.append(detail(worker, part).decode("utf-8")) # transform bytes back to string
- except AttributeError:
- pieces.append(detail(worker, part))
- return set(pieces)
-
-
-def _detail_collection_tuple(worker: AbstractWorker, my_tuple: Tuple) -> Tuple:
- """
- This function is designed to operate in the opposite direction of
- _simplify_collection. It takes a tuple of simple python objects
- and iterates through it to determine whether objects in the collection
- need to be converted into more advanced types. In particular, it
- converts binary objects into torch Tensors where appropriate.
- This is only applicable to tuples. They need special handling because
- `msgpack` is encoding a tuple as a list.
-
- Args:
- worker: the worker doing the deserialization
- my_tuple (Tuple): a collection of simple python objects (including binary).
-
- Returns:
- tuple: a collection of the same type as the input where the objects
- in the collection have been detailed.
- """
-
- pieces = list()
-
- # Step 1: deserialize each part of the collection
- for part in my_tuple:
- pieces.append(detail(worker, part))
-
- return tuple(pieces)
-
-
-# Dictionaries
-
-
-def _simplify_dictionary(my_dict: Dict) -> Dict:
- """
- This function is designed to search a dict for any objects
- which may need to be simplified (i.e., torch tensors). It iterates
- through each key, value in the dict and calls _simplify on it. Finally,
- it returns the output dict as the same type as the input dict
- so that the consuming serialization step knows the correct type info. The
- reverse function to this function is _detail_dictionary, which undoes
- the functionality of this function.
-
- Args:
- my_dict (Dict): a dictionary of python objects
-
- Returns:
- Dict: a dictionary of the same type as the input of simplified
- objects.
-
- """
- pieces = list()
- # for dictionaries we want to simplify both the key and the value
- for key, value in my_dict.items():
- pieces.append((_simplify(key), _simplify(value)))
-
- return pieces
-
-
-def _detail_dictionary(worker: AbstractWorker, my_dict: Dict) -> Dict:
- """
- This function is designed to operate in the opposite direction of
- _simplify_dictionary. It takes a dictionary of simple python objects
- and iterates through it to determine whether objects in the collection
- need to be converted into more advanced types. In particular, it
- converts binary objects into torch Tensors where appropriate.
-
- Args:
- worker: the worker doing the deserialization
- my_dict (Dict): a dictionary of simple python objects (including binary).
-
- Returns:
- tuple: a collection of the same type as the input where the objects
- in the collection have been detailed.
- """
- pieces = {}
- # for dictionaries we want to detail both the key and the value
- for key, value in my_dict:
- detailed_key = detail(worker, key)
- try:
- detailed_key = detailed_key.decode("utf-8")
- except AttributeError:
- pass
-
- detailed_value = detail(worker, value)
- try:
- detailed_value = detailed_value.decode("utf-8")
- except AttributeError:
- pass
-
- pieces[detailed_key] = detailed_value
-
- return pieces
-
-
-# numpy array
+# Numpy array
def _simplify_ndarray(my_array: numpy.ndarray) -> Tuple[bin, Tuple, str]:
@@ -531,30 +331,6 @@ def _detail_torch_device(worker: AbstractWorker, device_type: str) -> torch.devi
return torch.device(type=device_type)
-def _force_full_simplify(worker: AbstractWorker) -> tuple:
- """
-
- """
-
- return (_simplify(worker.id), _simplify(worker._objects), worker.auto_add)
-
-
-def _force_full_detail(worker: AbstractWorker, worker_tuple: tuple) -> tuple:
- worker_id, _objects, auto_add = worker_tuple
- worker_id = detail(worker, worker_id)
-
- result = sy.VirtualWorker(sy.hook, worker_id, auto_add=auto_add)
- _objects = detail(worker, _objects)
- result._objects = _objects
-
- # make sure they weren't accidentally double registered
- for _, obj in _objects.items():
- if obj.id in worker._objects:
- del worker._objects[obj.id]
-
- return result
-
-
def _simplify_script_module(obj: torch.jit.ScriptModule) -> str:
"""Strategy to serialize a script module using Torch.jit"""
return obj.save_to_buffer()
@@ -567,148 +343,15 @@ def _detail_script_module(worker: AbstractWorker, script_module_bin: str) -> tor
return loaded_module
-# High Level Simplification Router
-
-
-def _simplify(obj: object) -> object:
- """
- This function takes an object as input and returns a simple
- Python object which is supported by the chosen serialization
- method (such as JSON or msgpack). The reason we have this function
- is that some objects are either NOT supported by high level (fast)
- serializers OR the high level serializers don't support the fastest
- form of serialization. For example, PyTorch tensors have custom pickle
- functionality thus its better to pre-serialize PyTorch tensors using
- pickle and then serialize the binary in with the rest of the message
- being sent.
-
- Args:
- obj: an object which may need to be simplified
-
- Returns:
- obj: an simple Python object which msgpack can serialize
-
- Raises:
- ValueError: if `move_this` or `in_front_of_that` are not both single ASCII
- characters.
-
- """
-
- try:
- # check to see if there is a simplifier
- # for this type. If there is, run return
- # the simplified object
- current_type = type(obj)
- result = (simplifiers[current_type][0], simplifiers[current_type][1](obj))
- return result
-
- except KeyError:
-
- # if there is not a simplifier for this
- # object, then the object is already a
- # simple python object and we can just
- # return it
- return obj
-
-
-def _force_full_simplify(obj: object) -> object:
- current_type = type(obj)
-
- if current_type in forced_full_simplifiers:
-
- left = forced_full_simplifiers[current_type][0]
-
- right = forced_full_simplifiers[current_type][1]
-
- right = right(obj)
-
- result = (left, right)
- else:
- result = _simplify(obj)
-
- return result
-
-
-simplifiers = {
- torch.Tensor: [0, _simplify_torch_tensor],
- torch.nn.Parameter: [1, _simplify_torch_parameter],
- tuple: [2, _simplify_collection],
- list: [3, _simplify_collection],
- set: [4, _simplify_collection],
- dict: [5, _simplify_dictionary],
- range: [6, _simplify_range],
- numpy.ndarray: [7, _simplify_ndarray],
- slice: [8, _simplify_slice],
- type(Ellipsis): [9, _simplify_ellipsis],
- torch.device: [10, _simplify_torch_device],
- pointers.PointerTensor: [11, sy.PointerTensor.simplify],
- LoggingTensor: [12, sy.LoggingTensor.simplify],
- AdditiveSharingTensor: [13, sy.AdditiveSharingTensor.simplify],
- MultiPointerTensor: [14, sy.MultiPointerTensor.simplify],
- Plan: [15, sy.Plan.simplify],
- VirtualWorker: [16, sy.VirtualWorker.simplify],
- str: [18, _simplify_str],
- pointers.ObjectWrapper: [19, sy.ObjectWrapper.simplify],
- GetNotPermittedError: [20, sy.exceptions.GetNotPermittedError.simplify],
- ResponseSignatureError: [20, sy.exceptions.ResponseSignatureError.simplify],
- torch.jit.ScriptModule: [21, _simplify_script_module],
- torch.jit.TopLevelTracedModule: [
- 21,
- _simplify_script_module,
- ], # treat as torch.jit.ScriptModule
- TrainConfig: [22, sy.TrainConfig.simplify],
-}
-
-forced_full_simplifiers = {VirtualWorker: [17, _force_full_simplify]}
-
-
-def detail(worker: AbstractWorker, obj: object) -> object:
- """
- This function reverses the functionality of _simplify. Where applicable,
- it converts simple objects into more complex objects such as converting
- binary objects into torch tensors. Read _simplify for more information on
- why _simplify and _detail are needed.
-
- Args:
- worker: the worker which is acquiring the message content, for example
- used to specify the owner of a tensor received(not obvious for
- virtual workers)
- obj: a simple Python object which msgpack deserialized
-
- Returns:
- obj: a more complex Python object which msgpack would have had trouble
- deserializing directly.
-
- """
-
- if type(obj) in (list, tuple):
- return detailers[obj[0]](worker, obj[1])
- else:
- return obj
-
-
-detailers = [
- _detail_torch_tensor,
- _detail_torch_parameter,
- _detail_collection_tuple,
- _detail_collection_list,
- _detail_collection_set,
- _detail_dictionary,
- _detail_range,
- _detail_ndarray,
- _detail_slice,
- _detail_ellipsis,
- _detail_torch_device,
- sy.PointerTensor.detail,
- sy.LoggingTensor.detail,
- sy.AdditiveSharingTensor.detail,
- sy.MultiPointerTensor.detail,
- sy.Plan.detail,
- sy.VirtualWorker.detail,
- _force_full_detail,
- _detail_str,
- sy.ObjectWrapper.detail,
- sy.exceptions.GetNotPermittedError.detail,
- _detail_script_module,
- sy.TrainConfig.detail,
-]
+# Maps a type to a tuple containing its simplifier and detailer function
+# IMPORTANT: keep this structure sorted A-Z (by type name)
+MAP_TORCH_SIMPLIFIERS_AND_DETAILERS = OrderedDict(
+ {
+ numpy.ndarray: (_simplify_ndarray, _detail_ndarray),
+ torch.device: (_simplify_torch_device, _detail_torch_device),
+ torch.jit.ScriptModule: (_simplify_script_module, _detail_script_module),
+ torch.jit.TopLevelTracedModule: (_simplify_script_module, _detail_script_module),
+ torch.nn.Parameter: (_simplify_torch_parameter, _detail_torch_parameter),
+ torch.Tensor: (_simplify_torch_tensor, _detail_torch_tensor),
+ }
+)
| diff --git a/test/test_serde.py b/test/test_serde.py
--- a/test/test_serde.py
+++ b/test/test_serde.py
@@ -3,7 +3,9 @@
simple python types which are serializable by standard serialization tools.
For more on how/why this works, see serde.py directly.
"""
+from syft.serde import native_serde
from syft.serde import serde
+from syft.serde import torch_serde
import syft
from syft.exceptions import CompressionNotFoundException
@@ -24,7 +26,12 @@ def test_tuple_simplify():
for tuples so that the detailer knows how to interpret it."""
input = ("hello", "world")
- target = (2, ((18, (b"hello",)), (18, (b"world",))))
+ tuple_detail_index = serde.detailers.index(native_serde._detail_collection_tuple)
+ str_detail_index = serde.detailers.index(native_serde._detail_str)
+ target = (
+ tuple_detail_index,
+ ((str_detail_index, (b"hello",)), (str_detail_index, (b"world",))),
+ )
assert serde._simplify(input) == target
@@ -36,7 +43,9 @@ def test_list_simplify():
for lists so that the detailer knows how to interpret it."""
input = ["hello", "world"]
- target = (3, [(18, (b"hello",)), (18, (b"world",))])
+ list_detail_index = serde.detailers.index(native_serde._detail_collection_list)
+ str_detail_index = serde.detailers.index(native_serde._detail_str)
+ target = (list_detail_index, [(str_detail_index, (b"hello",)), (str_detail_index, (b"world",))])
assert serde._simplify(input) == target
@@ -48,7 +57,9 @@ def test_set_simplify():
for sets so that the detailer knows how to interpret it."""
input = set(["hello", "world"])
- target = (4, [(18, (b"hello",)), (18, (b"world",))])
+ set_detail_index = serde.detailers.index(native_serde._detail_collection_set)
+ str_detail_index = serde.detailers.index(native_serde._detail_str)
+ target = (set_detail_index, [(str_detail_index, (b"hello",)), (str_detail_index, (b"world",))])
assert serde._simplify(input)[0] == target[0]
assert set(serde._simplify(input)[1]) == set(target[1])
@@ -82,7 +93,7 @@ def test_string_simplify():
themselves, with no tuple/id necessary."""
input = "hello"
- target = (18, (b"hello",))
+ target = (serde.detailers.index(native_serde._detail_str), (b"hello",))
assert serde._simplify(input) == target
@@ -90,11 +101,16 @@ def test_dict_simplify():
"""This tests our ability to simplify dict objects.
This test is pretty simple since dicts just serialize to
- themselves, with a tuple wrapper with the correct ID (4)
+ themselves, with a tuple wrapper with the correct ID
for dicts so that the detailer knows how to interpret it."""
input = {"hello": "world"}
- target = (5, [((18, (b"hello",)), (18, (b"world",)))])
+ detail_dict_index = serde.detailers.index(native_serde._detail_dictionary)
+ detail_str_index = serde.detailers.index(native_serde._detail_str)
+ target = (
+ detail_dict_index,
+ [((detail_str_index, (b"hello",)), (detail_str_index, (b"world",)))],
+ )
assert serde._simplify(input) == target
@@ -106,7 +122,7 @@ def test_range_simplify():
for dicts so that the detailer knows how to interpret it."""
input = range(1, 3, 4)
- target = (6, (1, 3, 4))
+ target = (serde.detailers.index(native_serde._detail_range), (1, 3, 4))
assert serde._simplify(input) == target
@@ -130,7 +146,7 @@ def test_torch_tensor_simplify():
# make sure the object type ID is correct
# (0 for torch.Tensor)
- assert output[0] == 0
+ assert serde.detailers[output[0]] == torch_serde._detail_torch_tensor
# make sure inner type is correct
assert type(output[1]) == tuple
@@ -154,7 +170,7 @@ def test_ndarray_simplify():
output = serde._simplify(input)
# make sure simplified type ID is correct
- assert output[0] == 7
+ assert serde.detailers[output[0]] == torch_serde._detail_ndarray
# make sure serialized form is correct
assert type(output[1][0]) == bytes
@@ -164,9 +180,7 @@ def test_ndarray_simplify():
def test_ellipsis_simplify():
"""Make sure ellipsis simplifies correctly."""
-
- # the id indicating an ellipsis is here
- assert serde._simplify(Ellipsis)[0] == 9
+ assert serde.detailers[serde._simplify(Ellipsis)[0]] == native_serde._detail_ellipsis
# the simplified ellipsis (empty object)
assert serde._simplify(Ellipsis)[1] == b""
@@ -176,8 +190,7 @@ def test_torch_device_simplify():
"""Test the simplification of torch.device"""
device = torch.device("cpu")
- # the id indicating an torch.device is here
- assert serde._simplify(device)[0] == 10
+ assert serde.detailers[serde._simplify(device)[0]] == torch_serde._detail_torch_device
# the simplified torch.device
assert serde._simplify(device)[1] == "cpu"
@@ -311,7 +324,9 @@ def test_compressed_serde(compress_scheme):
else:
serde._apply_compress_scheme = serde.apply_no_compression
- arr = numpy.random.random((100, 100))
+ # using numpy.ones because numpy.random.random is not compressed.
+ arr = numpy.ones((100, 100))
+
arr_serialized = serde.serialize(arr)
arr_serialized_deserialized = serde.deserialize(arr_serialized)
@@ -374,7 +389,6 @@ def test_range_serde(compress):
_range = range(1, 2, 3)
range_serialized = serde.serialize(_range)
-
range_serialized_deserialized = serde.deserialize(range_serialized)
assert _range == range_serialized_deserialized
@@ -400,10 +414,16 @@ def test_list(compress):
assert _list == list_serialized_deserialized
# Test with a complex data structure
- tensor_one = Tensor(numpy.random.random((100, 100)))
- tensor_two = Tensor(numpy.random.random((100, 100)))
+ tensor_one = Tensor(numpy.ones((100, 100)))
+ tensor_two = Tensor(numpy.ones((100, 100)) * 2)
_list = (tensor_one, tensor_two)
+
list_serialized = serde.serialize(_list)
+ if compress:
+ assert list_serialized[0] == serde.LZ4
+ else:
+ assert list_serialized[0] == serde.NO_COMPRESSION
+
list_serialized_deserialized = serde.deserialize(list_serialized)
# `assert list_serialized_deserialized == _list` does not work, therefore it's split
# into 3 assertions
@@ -422,6 +442,7 @@ def test_set(compress):
# Test with integers
_set = set([1, 2])
set_serialized = serde.serialize(_set)
+
set_serialized_deserialized = serde.deserialize(set_serialized)
assert _set == set_serialized_deserialized
@@ -432,10 +453,16 @@ def test_set(compress):
assert _set == set_serialized_deserialized
# Test with a complex data structure
- tensor_one = Tensor(numpy.random.random((100, 100)))
- tensor_two = Tensor(numpy.random.random((100, 100)))
+ tensor_one = Tensor(numpy.ones((100, 100)))
+ tensor_two = Tensor(numpy.ones((100, 100)) * 2)
_set = (tensor_one, tensor_two)
+
set_serialized = serde.serialize(_set)
+ if compress:
+ assert set_serialized[0] == serde.LZ4
+ else:
+ assert set_serialized[0] == serde.NO_COMPRESSION
+
set_serialized_deserialized = serde.deserialize(set_serialized)
# `assert set_serialized_deserialized == _set` does not work, therefore it's split
# into 3 assertions
@@ -488,20 +515,6 @@ def test_float(compress):
assert y_serialized_deserialized == y
-def test_compressed_float():
- x = 0.5
- y = 1.5
-
- x_serialized = serde.serialize(x)
- x_serialized_deserialized = serde.deserialize(x_serialized)
-
- y_serialized = serde.serialize(y)
- y_serialized_deserialized = serde.deserialize(y_serialized)
-
- assert x_serialized_deserialized == x
- assert y_serialized_deserialized == y
-
-
@pytest.mark.parametrize(
"compress, compress_scheme",
[
@@ -524,8 +537,11 @@ def test_hooked_tensor(compress, compress_scheme):
else:
serde._apply_compress_scheme = serde.apply_no_compression
- t = Tensor(numpy.random.random((100, 100)))
+ t = Tensor(numpy.ones((100, 100)))
t_serialized = serde.serialize(t)
+ assert (
+ t_serialized[0] == compress_scheme if compress else t_serialized[0] == serde.NO_COMPRESSION
+ )
t_serialized_deserialized = serde.deserialize(t_serialized)
assert (t == t_serialized_deserialized).all()
@@ -555,12 +571,15 @@ def test_pointer_tensor_detail(id):
def test_numpy_tensor_serde():
+ serde._apply_compress_scheme = serde.apply_lz4_compression
+
serde._serialize_tensor = syft.serde.numpy_tensor_serializer
serde._deserialize_tensor = syft.serde.numpy_tensor_deserializer
- tensor = torch.tensor(numpy.random.random((10, 10)), requires_grad=False)
+ tensor = torch.tensor(numpy.ones((10, 10)), requires_grad=False)
tensor_serialized = serde.serialize(tensor)
+ assert tensor_serialized[0] != serde.NO_COMPRESSION
tensor_deserialized = serde.deserialize(tensor_serialized)
# Back to Pytorch serializer
| Dynamically generate the simplifiers and detailers lists
**Describe the bug**
The serde file is not manageable in its current format.
It's very easy to break `serde.py` unintentionally by updating the simplifiers and detailers lists incorrectly.
A fix was discussed at #2230 which consists of dynamically generating the lists during init.
**Pros**: easier to maintain code.
**Cons**: overhead during initialization.
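For orientation, the registration pattern the fix introduces, reduced to a sketch (names follow the patch above; the standalone signature is illustrative):

```python
from collections import OrderedDict

def generate_simplifiers_and_detailers(type_map):
    """Derive the integer codes from one ordered type map so the
    simplifier and detailer indices cannot drift apart by hand-editing."""
    simplifiers = OrderedDict()
    detailers = []
    for curr_type, (simplifier, detailer) in type_map.items():
        if detailer in detailers:
            curr_index = detailers.index(detailer)  # reuse shared detailers
        else:
            curr_index = len(detailers)
            detailers.append(detailer)
        simplifiers[curr_type] = (curr_index, simplifier)
    return simplifiers, detailers
```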
| 2019-06-16T03:38:11 |
|
OpenMined/PySyft | 2,298 | OpenMined__PySyft-2298 | [
"2297"
] | 3d0264762c899094e52816a7c1432c667f898a06 | diff --git a/syft/serde/serde.py b/syft/serde/serde.py
--- a/syft/serde/serde.py
+++ b/syft/serde/serde.py
@@ -69,10 +69,7 @@
+ list(MAP_TORCH_SIMPLIFIERS_AND_DETAILERS.items())
)
-# Objects that we can run force full simplify on
-MAP_TO_FORCE_FULL_SIMPLIFY = [VirtualWorker]
-
-# If an object implements its own simplifier and detailer function it should be stored in this list
+# If an object implements its own simplify and detail functions it should be stored in this list
OBJ_SIMPLIFIER_AND_DETAILERS = [
AdditiveSharingTensor,
LoggingTensor,
@@ -84,6 +81,10 @@
VirtualWorker,
]
+# If a object implements its own force_simplify and force_detail functions it should be stored in this list
+OBJ_FORCE_FULL_SIMPLIFIER_AND_DETAILERS = [VirtualWorker]
+
+
EXCEPTION_SIMPLIFIER_AND_DETAILERS = [GetNotPermittedError, ResponseSignatureError]
# COMPRESSION SCHEME INT CODES
@@ -96,11 +97,9 @@ def _force_full_simplify(obj: object) -> object:
current_type = type(obj)
if current_type in forced_full_simplifiers:
-
left = forced_full_simplifiers[current_type][0]
right = forced_full_simplifiers[current_type][1]
-
right = right(obj)
result = (left, right)
@@ -110,22 +109,6 @@ def _force_full_simplify(obj: object) -> object:
return result
-def _force_full_detail(worker: AbstractWorker, worker_tuple: tuple) -> tuple:
- worker_id, _objects, auto_add = worker_tuple
- worker_id = _detail(worker, worker_id)
-
- result = sy.VirtualWorker(sy.hook, worker_id, auto_add=auto_add)
- _objects = _detail(worker, _objects)
- result._objects = _objects
-
- # make sure they weren't accidentally double registered
- for _, obj in _objects.items():
- if obj.id in worker._objects:
- del worker._objects[obj.id]
-
- return result
-
-
## SECTION: dynamically generate simplifiers and detailers
def _generate_simplifiers_and_detailers():
"""Generate simplifiers, forced full simplifiers and detailers."""
@@ -155,17 +138,17 @@ def _add_simplifier_and_detailer(curr_type, simplifier, detailer, forced=False):
simplifier, detailer = syft_type.simplify, syft_type.detail
_add_simplifier_and_detailer(syft_type, simplifier, detailer)
- # Add forced full simplifiers
- for curr_type in MAP_TO_FORCE_FULL_SIMPLIFY:
- _add_simplifier_and_detailer(
- curr_type, _force_full_simplify, _force_full_detail, forced=True
- )
+ # Register syft objects with custom force_simplify and force_detail methods
+ for syft_type in OBJ_FORCE_FULL_SIMPLIFIER_AND_DETAILERS:
+ force_simplifier, force_detailer = syft_type.force_simplify, syft_type.force_detail
+ _add_simplifier_and_detailer(syft_type, force_simplifier, force_detailer, forced=True)
return simplifiers, forced_full_simplifiers, detailers
simplifiers, forced_full_simplifiers, detailers = _generate_simplifiers_and_detailers()
+
## SECTION: High Level Public Functions (these are the ones you use)
def serialize(
obj: object,
diff --git a/syft/workers/virtual.py b/syft/workers/virtual.py
--- a/syft/workers/virtual.py
+++ b/syft/workers/virtual.py
@@ -13,11 +13,27 @@ def _recv_msg(self, message: bin) -> bin:
return self.recv_msg(message)
@staticmethod
- def simplify(worker: AbstractWorker) -> tuple:
- """
+ def force_simplify(worker: AbstractWorker) -> tuple:
+ return (sy.serde._simplify(worker.id), sy.serde._simplify(worker._objects), worker.auto_add)
- """
+ @staticmethod
+ def force_detail(worker: AbstractWorker, worker_tuple: tuple) -> tuple:
+ worker_id, _objects, auto_add = worker_tuple
+ worker_id = sy.serde._detail(worker, worker_id)
+
+ result = sy.VirtualWorker(sy.hook, worker_id, auto_add=auto_add)
+ _objects = sy.serde._detail(worker, _objects)
+ result._objects = _objects
+ # make sure they weren't accidentally double registered
+ for _, obj in _objects.items():
+ if obj.id in worker._objects:
+ del worker._objects[obj.id]
+
+ return result
+
+ @staticmethod
+ def simplify(worker: AbstractWorker) -> tuple:
return (sy.serde._simplify(worker.id),)
@staticmethod
| diff --git a/test/test_serde.py b/test/test_serde.py
--- a/test/test_serde.py
+++ b/test/test_serde.py
@@ -553,8 +553,6 @@ def test_pointer_tensor(hook, workers):
)
t_serialized = serde.serialize(t)
t_serialized_deserialized = serde.deserialize(t_serialized)
- print(f"t.location - {t.location}")
- print(f"t_serialized_deserialized.location - {t_serialized_deserialized.location}")
assert t.id == t_serialized_deserialized.id
assert t.location.id == t_serialized_deserialized.location.id
assert t.id_at_location == t_serialized_deserialized.id_at_location
@@ -643,6 +641,35 @@ def foo(x):
assert foo.code == foo_received.code
+def test_serde_virtual_worker(hook):
+ virtual_worker = syft.VirtualWorker(hook=hook, id="deserialized_worker1")
+ # Populate worker
+ tensor1, tensor2 = torch.tensor([1.0, 2.0]), torch.tensor([0.0])
+ ptr1, ptr2 = tensor1.send(virtual_worker), tensor2.send(virtual_worker)
+
+ serialized_worker = serde.serialize(virtual_worker, force_full_simplification=False)
+ deserialized_worker = serde.deserialize(serialized_worker)
+
+ assert virtual_worker.id == deserialized_worker.id
+
+
+def test_full_serde_virtual_worker(hook):
+ virtual_worker = syft.VirtualWorker(hook=hook, id="deserialized_worker2")
+ # Populate worker
+ tensor1, tensor2 = torch.tensor([1.0, 2.0]), torch.tensor([0.0])
+ ptr1, ptr2 = tensor1.send(virtual_worker), tensor2.send(virtual_worker)
+
+ serialized_worker = serde.serialize(virtual_worker, force_full_simplification=True)
+
+ deserialized_worker = serde.deserialize(serialized_worker)
+
+ assert virtual_worker.id == deserialized_worker.id
+ assert virtual_worker.auto_add == deserialized_worker.auto_add
+ assert len(deserialized_worker._objects) == 2
+ assert tensor1.id in deserialized_worker._objects
+ assert tensor2.id in deserialized_worker._objects
+
+
def test_serde_object_wrapper_traced_module():
data = torch.tensor([[-1, 2.0], [0, 1.1], [-1, 2.1], [0, 1.2]])
| Test worker serialization
**Is your feature request related to a problem? Please describe.**
We should have tests for VirtualWorker serialization, since this is crucial for [Grid](https://github.com/OpenMined/Grid).
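In sketch form, the round trip such tests should lock in (the worker id and the `hook` object are illustrative):

```python
import torch
import syft

worker = syft.VirtualWorker(hook=hook, id="worker_to_serialize")
torch.tensor([1.0, 2.0]).send(worker)

# The default path only keeps the worker's identity...
light = syft.serde.serialize(worker, force_full_simplification=False)

# ...while force_full_simplification also carries the registered objects.
full = syft.serde.serialize(worker, force_full_simplification=True)
restored = syft.serde.deserialize(full)
assert restored.id == worker.id
```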
| 2019-06-18T16:15:16 |
|
OpenMined/PySyft | 2,305 | OpenMined__PySyft-2305 | [
"2304"
] | 7b6f9fb2b98865f4ad45f93d337970c885ee3534 | diff --git a/syft/frameworks/torch/tensors/interpreters/precision.py b/syft/frameworks/torch/tensors/interpreters/precision.py
--- a/syft/frameworks/torch/tensors/interpreters/precision.py
+++ b/syft/frameworks/torch/tensors/interpreters/precision.py
@@ -1,5 +1,7 @@
-import syft
import torch
+
+import syft
+from syft.workers import AbstractWorker
from syft.frameworks.torch.tensors.interpreters.abstract import AbstractTensor
from syft.frameworks.torch.overload_torch import overloaded
@@ -515,3 +517,60 @@ def get(self):
def share(self, *owners, field=None, crypto_provider=None):
self.child = self.child.share(*owners, field=field, crypto_provider=crypto_provider)
return self
+
+ @staticmethod
+ def simplify(tensor: "FixedPrecisionTensor") -> tuple:
+ """Takes the attributes of a FixedPrecisionTensor and saves them in a tuple.
+
+ Args:
+ tensor: a FixedPrecisionTensor.
+
+ Returns:
+ tuple: a tuple holding the unique attributes of the fixed precision tensor.
+ """
+ chain = None
+ if hasattr(tensor, "child"):
+ chain = syft.serde._simplify(tensor.child)
+
+ return (
+ syft.serde._simplify(tensor.id),
+ tensor.field,
+ tensor.base,
+ tensor.precision_fractional,
+ tensor.kappa,
+ syft.serde._simplify(tensor.tags),
+ syft.serde._simplify(tensor.description),
+ chain,
+ )
+
+ @staticmethod
+ def detail(worker: AbstractWorker, tensor_tuple: tuple) -> "FixedPrecisionTensor":
+ """
+ This function reconstructs a FixedPrecisionTensor given its attributes in the form of a tuple.
+ Args:
+ worker: the worker doing the deserialization
+ tensor_tuple: a tuple holding the attributes of the FixedPrecisionTensor
+ Returns:
+ FixedPrecisionTensor: a FixedPrecisionTensor
+ Examples:
+ fp_tensor = detail(data)
+ """
+
+ tensor_id, field, base, precision_fractional, kappa, tags, description, chain = tensor_tuple
+
+ tensor = FixedPrecisionTensor(
+ owner=worker,
+ id=syft.serde._detail(worker, tensor_id),
+ field=field,
+ base=base,
+ precision_fractional=precision_fractional,
+ kappa=kappa,
+ tags=syft.serde._detail(worker, tags),
+ description=syft.serde._detail(worker, description),
+ )
+
+ if chain is not None:
+ chain = syft.serde._detail(worker, chain)
+ tensor.child = chain
+
+ return tensor
diff --git a/syft/serde/serde.py b/syft/serde/serde.py
--- a/syft/serde/serde.py
+++ b/syft/serde/serde.py
@@ -56,6 +56,7 @@
from syft.frameworks.torch.tensors.decorators import LoggingTensor
+from syft.frameworks.torch.tensors.interpreters import FixedPrecisionTensor
from syft.frameworks.torch.tensors.interpreters import AdditiveSharingTensor
from syft.frameworks.torch.tensors.interpreters import MultiPointerTensor
from syft.frameworks.torch import pointers
@@ -72,6 +73,7 @@
# If a object implements its own simplify and detail functions it should be stored in this list
OBJ_SIMPLIFIER_AND_DETAILERS = [
AdditiveSharingTensor,
+ FixedPrecisionTensor,
LoggingTensor,
MultiPointerTensor,
Plan,
| diff --git a/test/test_serde.py b/test/test_serde.py
--- a/test/test_serde.py
+++ b/test/test_serde.py
@@ -606,6 +606,26 @@ def test_additive_sharing_tensor_serde(compress, workers):
)
[email protected]("compress", [True, False])
+def test_fixed_precision_tensor_serde(compress, workers):
+ alice, bob, james = workers["alice"], workers["bob"], workers["james"]
+
+ x = (
+ torch.tensor([[3.1, 4.3]])
+ .fix_prec(base=12, precision_fractional=5)
+ .share(alice, bob, crypto_provider=james)
+ )
+
+ serialized_x = serde.serialize(x)
+ deserialized_x = serde.deserialize(serialized_x)
+
+ assert x.id == deserialized_x.child.id
+ assert x.child.field == deserialized_x.child.field
+ assert x.child.kappa == deserialized_x.child.kappa
+ assert x.child.precision_fractional == deserialized_x.child.precision_fractional
+ assert x.child.base == deserialized_x.child.base
+
+
def test_serde_object_wrapper_int():
obj = 4
obj_wrapper = pointers.ObjectWrapper(obj, id=100)
| Serialize / Deserialize FixedPrecisionTensors
**Is your feature request related to a problem? Please describe.**
I want to be able to serialize/deserialize FixedPrecisionTensors.
Tests should be implemented.
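For reference, a minimal sketch of the round trip this enables, mirroring the test above (the exact serde entry points are assumed to be `sy.serde.serialize` / `sy.serde.deserialize`, as used in the tests):
```python
import torch
import syft as sy

hook = sy.TorchHook(torch)

x = torch.tensor([[3.1, 4.3]]).fix_prec(base=12, precision_fractional=5)
blob = sy.serde.serialize(x)    # calls FixedPrecisionTensor.simplify internally
y = sy.serde.deserialize(blob)  # calls FixedPrecisionTensor.detail internally
assert y.child.base == 12 and y.child.precision_fractional == 5
```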
| 2019-06-20T19:57:37 |
|
OpenMined/PySyft | 2,308 | OpenMined__PySyft-2308 | [
"2306"
] | e4044e67c8f1d72381de1fbeab42ae779f3f1ea0 | diff --git a/syft/federated/federated_client.py b/syft/federated/federated_client.py
--- a/syft/federated/federated_client.py
+++ b/syft/federated/federated_client.py
@@ -72,6 +72,9 @@ def fit(self, dataset_key: str, **kwargs):
if self.train_config is None:
raise ValueError("TrainConfig not defined.")
+ if dataset_key not in self.datasets:
+ raise ValueError("Dataset {} unknown.".format(dataset_key))
+
model = self.get_obj(self.train_config._model_id).obj
loss_fn = self.get_obj(self.train_config._loss_fn_id).obj
@@ -106,19 +109,21 @@ def _fit(self, model, dataset_key, loss_fn):
loss = None
iteration_count = 0
- for (data, target) in data_loader:
- # Set gradients to zero
- self.optimizer.zero_grad()
-
- # Update model
- output = model(data)
- loss = loss_fn(target=target, pred=output)
- loss.backward()
- self.optimizer.step()
-
- # Update and check iteration count
- iteration_count += 1
- if iteration_count >= self.train_config.max_nr_batches >= 0:
- break
+
+ for _ in range(self.train_config.epochs):
+ for (data, target) in data_loader:
+ # Set gradients to zero
+ self.optimizer.zero_grad()
+
+ # Update model
+ output = model(data)
+ loss = loss_fn(target=target, pred=output)
+ loss.backward()
+ self.optimizer.step()
+
+ # Update and check iteration count
+ iteration_count += 1
+ if iteration_count >= self.train_config.max_nr_batches >= 0:
+ break
return loss
| diff --git a/test/federated/test_federated_client.py b/test/federated/test_federated_client.py
--- a/test/federated/test_federated_client.py
+++ b/test/federated/test_federated_client.py
@@ -1,3 +1,5 @@
+import pytest
+
import torch
import syft as sy
@@ -55,7 +57,11 @@ def test_set_obj_other():
assert fed_client._objects[dummy_data.id] == dummy_data
-def test_fit():
[email protected](
+ "fit_dataset_key, epochs",
+ [("gaussian_mixture", 1), ("gaussian_mixture", 10), ("another_dataset", 1)],
+)
+def test_fit(fit_dataset_key, epochs):
data, target = utils.create_gaussian_mixture_toy_data(nr_samples=100)
fed_client = federated.FederatedClient()
@@ -100,6 +106,7 @@ def forward(self, x):
loss_fn_id=loss_id,
lr=0.05,
weight_decay=0.01,
+ epochs=epochs,
)
fed_client.set_obj(model_ow)
@@ -107,32 +114,22 @@ def forward(self, x):
fed_client.set_obj(train_config)
fed_client.optimizer = None
- for curr_round in range(12):
- loss = fed_client.fit(dataset_key=dataset_key)
+ loss = loss_before
+ for curr_round in range(3):
+ if fit_dataset_key == dataset_key:
+ loss = fed_client.fit(dataset_key=fit_dataset_key)
+ else:
+ with pytest.raises(ValueError):
+ loss = fed_client.fit(dataset_key=fit_dataset_key)
if PRINT_IN_UNITTESTS and curr_round % 4 == 0: # pragma: no cover
print("-" * 50)
print("Iteration %s: alice's loss: %s" % (curr_round, loss))
- new_model = fed_client.get_obj(model_id)
- pred = new_model.obj(data)
- loss_after = loss_fn(target=target, pred=pred)
- if PRINT_IN_UNITTESTS: # pragma: no cover:
- print("Loss after training: {}".format(loss_after))
-
- assert loss_after < loss_before
-
-
-def create_xor_data(nr_samples): # pragma: no cover
- with torch.no_grad():
- data = torch.tensor([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]], requires_grad=True)
- target = torch.tensor([1, 1, 0, 0], requires_grad=False)
-
- data_len = int(nr_samples / 4 + 1) * 4
- X = torch.zeros(data_len, 2, requires_grad=True)
- Y = torch.zeros(data_len, requires_grad=False)
-
- for i in range(int(data_len / 4)):
- X[i * 4 : (i + 1) * 4, :] = data
- Y[i * 4 : (i + 1) * 4] = target
+ if dataset_key == fit_dataset_key:
+ new_model = fed_client.get_obj(model_id)
+ pred = new_model.obj(data)
+ loss_after = loss_fn(target=target, pred=pred)
+ if PRINT_IN_UNITTESTS: # pragma: no cover:
+ print("Loss after training: {}".format(loss_after))
- return X, Y.long()
+ assert loss_after < loss_before
| TrainConfig parameter "epochs"
**TrainConfig parameter "epochs" doesn't have an effect.**
After changing epochs=1 to epochs=100, the worker still does only 1 epoch.
```
train_config = sy.TrainConfig(
    model=traced_model,
    loss_fn=loss_fn,
    batch_size=batch_size,
    shuffle=True,
    #max_nr_batches=max_nr_batches,
    epochs=100,
    lr=lr,
)
```
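With the patch above, `FederatedClient._fit` wraps the batch loop in `for _ in range(self.train_config.epochs)`. A sketch of the effect, assuming a client `fed_client` with a registered dataset as in the tests:
```python
loss = fed_client.fit(dataset_key="gaussian_mixture")
# fit() now makes train_config.epochs passes over the data loader;
# max_nr_batches (if >= 0) still caps the total number of batches,
# and an unknown dataset_key now raises a ValueError
```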
| 2019-06-22T20:47:26 |
|
OpenMined/PySyft | 2,328 | OpenMined__PySyft-2328 | [
"2319"
] | 7acc6a41ca5016cc30ae2553a005cade757ee2ea | diff --git a/syft/exceptions.py b/syft/exceptions.py
--- a/syft/exceptions.py
+++ b/syft/exceptions.py
@@ -86,8 +86,8 @@ def __init__(self, tensor_a, tensor_b, attr="a method"):
message = (
"You tried to call "
+ attr
- + " involving two tensors which "
- + "are not on the same machine! One tensor is on "
+ + " involving two tensors which"
+ + " are not on the same machine! One tensor is on "
+ str(tensor_a.location)
+ " while the other is on "
+ str(tensor_b.location)
@@ -98,7 +98,7 @@ def __init__(self, tensor_a, tensor_b, attr="a method"):
"You tried to call "
+ attr
+ " involving two tensors where one tensor is actually located"
- + "on another machine (is a PointerTensor). Call .get() on the PointerTensor or .send("
+ + " on another machine (is a PointerTensor). Call .get() on the PointerTensor or .send("
+ str(tensor_a.location.id)
+ ") on the other tensor.\n"
+ "\nTensor A: "
@@ -111,7 +111,7 @@ def __init__(self, tensor_a, tensor_b, attr="a method"):
"You tried to call "
+ attr
+ " involving two tensors where one tensor is actually located"
- + "on another machine (is a PointerTensor). Call .get() on the PointerTensor or .send("
+ + " on another machine (is a PointerTensor). Call .get() on the PointerTensor or .send("
+ str(tensor_b.location.id)
+ ") on the other tensor.\n"
+ "\nTensor A: "
| Cosmetic space to add for TensorsNotCollocatedException
**Describe the bug**
A single space is missing in the TensorsNotCollocatedException error message.
**To Reproduce**
1. initiate two tensors, one sent to a remote worker, one local
2. add the two tensors
3. get TensorsNotCollocatedException
```
bob = sy.VirtualWorker(hook, id="bob")
x = torch.tensor([1,2,3,4,5]).send(bob)
y = torch.tensor([1,1,1,1,1])
z = x+y
```
**Expected behavior**
" involving two tensors where one tensor is actually locatedon another machine (is a PointerTensor). Call .get() on the PointerTensor or .send("
becomes
" involving two tensors where one tensor is actually located on another machine (is a PointerTensor). Call .get() on the PointerTensor or .send("
`locatedon` becomes `located on`
**Screenshots**
[screenshot of error message](https://github.com/theoptips/udacity_project_submission/blob/master/Screen%20Shot%202019-06-28%20at%202.50.34%20PM.png)
[exception.py with proposed change](https://github.com/theoptips/PySyft/blob/dev/syft/exceptions.py)
[commit message with proposed change explained](https://github.com/theoptips/PySyft/commit/533da84afa6ac4071a58754e97c21ce1ca7056aa)
[exception.py changed line highlighted](https://github.com/theoptips/PySyft/commit/4b68c3c6fbe0c18cdf87dfe6ddc3c2071a71f1cc)
| Hey @theoptips,
Thanks for reporting this issue! Feel free to make a PR to fix it if you want to :)!
[WORKING on this issue]
Thank you for the opportunity!! As this is a good-first-issue, I would love to take a stab. I am in need of a contribution.
Is it okay if I reach out in Slack to ask questions?
I may need help with test case setup and passing TravisCI, but I noticed the test only checks for the TensorsNotCollocatedException exception, not the message, so I should be okay, I think?
|
OpenMined/PySyft | 2,353 | OpenMined__PySyft-2353 | [
"2352"
] | aff49fe813dc27aa7750f9ee41413c5763cec53e | diff --git a/syft/frameworks/torch/pointers/pointer_tensor.py b/syft/frameworks/torch/pointers/pointer_tensor.py
--- a/syft/frameworks/torch/pointers/pointer_tensor.py
+++ b/syft/frameworks/torch/pointers/pointer_tensor.py
@@ -195,6 +195,23 @@ def attr(self, attr_name):
def dim(self) -> int:
return len(self._shape)
+ def fix_prec(self, *args, **kwargs):
+ """
+ Send a command to remote worker to transform a tensor to fix_precision
+
+ Returns:
+ A pointer to an FixPrecisionTensor
+ """
+
+ # Send the command
+ command = ("fix_prec", self, args, kwargs)
+
+ response = self.owner.send_command(self.location, command)
+
+ return response
+
+ fix_precision = fix_prec
+
def share(self, *args, **kwargs):
"""
Send a command to remote worker to additively share a tensor
diff --git a/syft/frameworks/torch/tensors/interpreters/additive_shared.py b/syft/frameworks/torch/tensors/interpreters/additive_shared.py
--- a/syft/frameworks/torch/tensors/interpreters/additive_shared.py
+++ b/syft/frameworks/torch/tensors/interpreters/additive_shared.py
@@ -923,6 +923,10 @@ def simplify(tensor: "AdditiveSharingTensor") -> tuple:
chain = None
if hasattr(tensor, "child"):
chain = sy.serde._simplify(tensor.child)
+
+ # Don't delete the remote values of the shares at simplification
+ tensor.set_garbage_collect_data(False)
+
return (tensor.id, tensor.field, tensor.crypto_provider.id, chain)
@staticmethod
diff --git a/syft/frameworks/torch/tensors/interpreters/native.py b/syft/frameworks/torch/tensors/interpreters/native.py
--- a/syft/frameworks/torch/tensors/interpreters/native.py
+++ b/syft/frameworks/torch/tensors/interpreters/native.py
@@ -662,18 +662,22 @@ def float_prec_(self):
float_precision_ = float_prec_
def fix_prec(self, *args, **kwargs):
- base = kwargs.get("base", 10)
- prec_fractional = kwargs.get("precision_fractional", 3)
- max_precision = _get_maximum_precision()
- if self._requires_large_precision(max_precision, base, prec_fractional):
- return (
- syft.LargePrecisionTensor(*args, **kwargs)
- .on(self)
- .child.fix_large_precision()
- .wrap()
- )
+ if self.is_wrapper:
+ self.child = self.child.fix_prec(*args, **kwargs)
+ return self
else:
- return syft.FixedPrecisionTensor(*args, **kwargs).on(self).enc_fix_prec().wrap()
+ base = kwargs.get("base", 10)
+ prec_fractional = kwargs.get("precision_fractional", 3)
+ max_precision = _get_maximum_precision()
+ if self._requires_large_precision(max_precision, base, prec_fractional):
+ return (
+ syft.LargePrecisionTensor(*args, **kwargs)
+ .on(self)
+ .child.fix_large_precision()
+ .wrap()
+ )
+ else:
+ return syft.FixedPrecisionTensor(*args, **kwargs).on(self).enc_fix_prec().wrap()
fix_precision = fix_prec
| diff --git a/test/torch/tensors/test_additive_shared.py b/test/torch/tensors/test_additive_shared.py
--- a/test/torch/tensors/test_additive_shared.py
+++ b/test/torch/tensors/test_additive_shared.py
@@ -424,6 +424,34 @@ def test_fixed_precision_and_sharing(workers):
assert (y == (t + t)).all()
+def test_fixed_precision_and_sharing_on_pointer(workers):
+ bob, alice, james = (workers["bob"], workers["alice"], workers["james"])
+
+ t = torch.tensor([1, 2, 3, 4.0])
+ ptr = t.send(james)
+
+ x = ptr.fix_prec().share(bob, alice)
+
+ y = x + x
+
+ y = y.get().get().float_prec()
+ assert (y == (t + t)).all()
+
+
+def test_pointer_on_fixed_precision_and_sharing(workers):
+ bob, alice, james = (workers["bob"], workers["alice"], workers["james"])
+
+ t = torch.tensor([1, 2, 3, 4.0])
+
+ x = t.fix_prec().share(bob, alice)
+ x = x.send(james)
+
+ y = x + x
+
+ y = y.get().get().float_prec()
+ assert (y == (t + t)).all()
+
+
def test_get_item(workers):
alice, bob, james = workers["alice"], workers["bob"], workers["james"]
x = th.tensor([[3.1, 4.3]]).fix_prec().share(alice, bob, crypto_provider=james)
| Fix precision on pointers Error
**Describe the bug**
fix_precision on a pointer tensor fails because of a change in the fix_prec method which makes use of numpy ops.
**To Reproduce**
```python
#[classic imports]
x = torch.tensor([1.])
x_ptr = x.send(alice)
x_fp = x_ptr.fix_prec()
```
Or run Tutorial Part 10.

**Error**
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
~/code/PySyft/syft/frameworks/torch/hook/hook_args.py in register_response(attr, response, response_ids, owner)
656 # Load the utility function to register the response and transform tensors with pointers
--> 657 register_response_function = register_response_functions[attr_id]
658 # Try running it
KeyError: 'numpy'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
<ipython-input-1-95fc1ed4192b> in <module>
13 x = torch.tensor([1.])
14 x_ptr = x.send(alice)
---> 15 x_fp = x_ptr.fix_prec()
~/code/PySyft/syft/frameworks/torch/tensors/interpreters/native.py in fix_prec(self, *args, **kwargs)
666 prec_fractional = kwargs.get("precision_fractional", 3)
667 max_precision = _get_maximum_precision()
--> 668 if self._requires_large_precision(max_precision, base, prec_fractional):
669 return (
670 syft.LargePrecisionTensor(*args, **kwargs)
~/code/PySyft/syft/frameworks/torch/tensors/interpreters/native.py in _requires_large_precision(self, max_precision, base, precision_fractional)
691 # We need to use NumPy here as log2 is not yet implemented for LongTensor PyTorch objects
692 return np.any(
--> 693 np.log2(np.abs(self.clone().detach().numpy()) + 1) + base_fractional > max_precision
694 )
695
~/code/PySyft/syft/frameworks/torch/hook/hook.py in overloaded_native_method(self, *args, **kwargs)
675 # Send the new command to the appropriate class and get the response
676 method = getattr(new_self, method_name)
--> 677 response = method(*new_args, **new_kwargs)
678
679 # For inplace methods, just directly return self
~/code/PySyft/syft/frameworks/torch/hook/hook.py in overloaded_pointer_method(self, *args, **kwargs)
511 command = (attr, self, args, kwargs)
512
--> 513 response = owner.send_command(location, command)
514
515 return response
~/code/PySyft/syft/workers/base.py in send_command(self, recipient, message, return_ids)
425
426 try:
--> 427 ret_val = self.send_msg(codes.MSGTYPE.CMD, message, location=recipient)
428 except ResponseSignatureError as e:
429 ret_val = None
~/code/PySyft/syft/workers/base.py in send_msg(self, msg_type, message, location)
221
222 # Step 2: send the message and wait for a response
--> 223 bin_response = self._send_msg(bin_message, location)
224
225 # Step 3: deserialize the response
~/code/PySyft/syft/workers/virtual.py in _send_msg(self, message, location)
8 class VirtualWorker(BaseWorker, FederatedClient):
9 def _send_msg(self, message: bin, location: BaseWorker) -> bin:
---> 10 return location._recv_msg(message)
11
12 def _recv_msg(self, message: bin) -> bin:
~/code/PySyft/syft/workers/virtual.py in _recv_msg(self, message)
11
12 def _recv_msg(self, message: bin) -> bin:
---> 13 return self.recv_msg(message)
14
15 @staticmethod
~/code/PySyft/syft/workers/base.py in recv_msg(self, bin_message)
252 print(f"worker {self} received {sy.codes.code2MSGTYPE[msg_type]} {contents}")
253 # Step 1: route message to appropriate function
--> 254 response = self._message_router[msg_type](contents)
255
256 # Step 2: Serialize the message to simple python objects
~/code/PySyft/syft/workers/base.py in execute_command(self, message)
391 try:
392 response = sy.frameworks.torch.hook_args.register_response(
--> 393 command_name, response, list(return_ids), self
394 )
395 return response
~/code/PySyft/syft/frameworks/torch/hook/hook_args.py in register_response(attr, response, response_ids, owner)
664 register_response_functions[attr_id] = register_response_function
665 # Run it
--> 666 new_response = register_response_function(response, response_ids=response_ids, owner=owner)
667
668 # Remove the artificial tuple
~/code/PySyft/syft/frameworks/torch/hook/hook_args.py in <lambda>(x, **kwargs)
757 f = many_fold
758
--> 759 return lambda x, **kwargs: f(lambdas, x, **kwargs)
~/code/PySyft/syft/frameworks/torch/hook/hook_args.py in two_fold(lambdas, args, **kwargs)
514
515 def two_fold(lambdas, args, **kwargs):
--> 516 return lambdas[0](args[0], **kwargs), lambdas[1](args[1], **kwargs)
517
518
~/code/PySyft/syft/frameworks/torch/hook/hook_args.py in <lambda>(i, **kwargs)
735 if isinstance(r, (list, tuple)) # if the rule is a list or tuple.
736 # Last if not, rule is probably == 1 so use type to return the right transformation.
--> 737 else lambda i, **kwargs: register_tensor(i, **kwargs)
738 for a, r in zip(response, rules) # And do this for all the responses / rules provided
739 ]
~/code/PySyft/syft/frameworks/torch/hook/hook_args.py in register_tensor(tensor, owner, response_ids)
706 and each id is pop out when needed.
707 """
--> 708 tensor.owner = owner
709 try:
710 tensor.id = response_ids.pop(-1)
AttributeError: 'numpy.ndarray' object has no attribute 'owner'
```
| 2019-07-12T09:19:03 |
|
OpenMined/PySyft | 2,360 | OpenMined__PySyft-2360 | [
"2256"
] | 1a06f68d87217d35c6689e213cfc23a19eb1e257 | diff --git a/syft/generic/metrics.py b/syft/generic/metrics.py
new file mode 100644
--- /dev/null
+++ b/syft/generic/metrics.py
@@ -0,0 +1,78 @@
+# Some functions to aid monitoring network traffic
+
+import pyshark
+
+
+class NetworkMonitor:
+ """
+ Provides utility for the monitoring of network traffic, measuring the packets sent through
+ specific filters as passed by user.
+ """
+
+ @staticmethod
+ def get_packets(
+ timeout=50,
+ interface=None,
+ bpf_filter=None,
+ display_filter="tcp.port == 80",
+ tshark_path=None,
+ output_file=None,
+ ):
+ """
+ Returns the captured packets of the transmitted data using Wireshark.
+
+ Args:
+ timeout: An integer. Set for sniffing with tshark. Defaults to 50 seconds in this setup.
+ interface: A string. Name of the interface to sniff on.
+ bpf_filter: A string. The capture filter in bpf syntax 'tcp port 80'. Needs to be changed to match the filter for the traffic sent. Not to be confused with the display filters (e.g. tcp.port == 80). The former are much more limited and are used to restrict the size of a raw packet capture, whereas the latter are used to hide some packets from the packet list. More info can be found at https://wiki.wireshark.org/CaptureFilters.
+ display_filter: A string. Defaults to 'tcp.port == 80' (assuming this is the port of the 'WebsocketClientWorker'). Please see the notes for 'bpf_filter' for details regarding the differences. More info can be found at https://wiki.wireshark.org/DisplayFilters.
+ tshark_path: Path to the tshark binary. E.g. '/usr/local/bin/tshark'.
+ output_file: A string. Path, including the output file name, where the capture is to be saved. E.g. '/tmp/mycapture.cap'
+
+ Returns:
+ capture: A 'pyshark.capture.live_capture.LiveCapture' object of the packets sent over WebSockets.
+ length: An integer. The number of packets captured at the network interface.
+ """
+ capture_output = []
+ if interface is None:
+ raise Exception("Please provide the interface used.")
+ else:
+ capture = pyshark.LiveCapture(
+ interface=interface,
+ bpf_filter=bpf_filter,
+ tshark_path=tshark_path,
+ output_file=output_file,
+ )
+ capture.sniff(timeout=timeout)
+ length = len(capture)
+ return capture, length
+
+ @staticmethod
+ def read_packet(index=None, capture_input=None):
+ """
+ Reads the info of one packet returned by get_packets using pretty_print().
+
+ Args:
+ index: An integer. The index of the packet to be examined.
+
+ Returns:
+ pretty_print: The info of the packet chosen to be read.
+ """
+ if index is None:
+ raise Exception(
+ "Please choose an index within the total number of packets captured by get_packets."
+ )
+ elif capture_input is None:
+ raise Exception("Please input the capture_output from get_packets.")
+ elif not isinstance(index, int):
+ raise Exception("The index passed is not an integer.")
+ else:
+ length = len(capture_input)
+ if index < length:
+ try:
+ packet = capture_input[index]
+ return packet.pretty_print()
+ except:
+ raise Exception("Something went wrong when retrieving packet data.")
+ else:
+ raise Exception("The index given is not valid.")
| Monitoring network usage
This is part of #2235
**Description**
We need a tool to monitor how much information we share over the network, as SMPC is often said to consume a lot of bandwidth. This is very important to benchmark our implementation compared to similar existing works.
| One manual approach to monitor traffic sent over websockets is via Wireshark. But having it integrated in PySyft would indeed be cool!
Oh I didn't know we already had some solutions!
Would be great to have something also for VirtualWorkers since they are very practical for debugging, and at the worker scale to have a graph of bandwidth usage between workers
> Oh I didn't know we already had some solutions!
> Would be great to have something also for VirtualWorkers since they are very practical for debugging, and at the worker scale to have a graph of bandwidth usage between workers
Also it would be very useful to have a breakdown of the data transmitted as data points or models.
A hacky way to do this would be to add something to the send function which increments a counter based on the size of the message being sent
Hi, I am interested in this issue, but might need some help/pointers to start...
So I will approach the problem via Wireshark first...
Excellent idea!
If you want to do this manually you can also do it in the code:
In that case you want the worker to store the amount of data sent and received, i.e. these would be two attributes of the worker: each time some data is sent or received, you evaluate the size of the serialized data and add it to these attributes. To catch the event "data is sent or received" you will need to inspect how the module `serde` works (ser-ialize / de-serialize). It's more hacky, but it helps you understand the code.
> Excellent idea!
> If you want to do this manually you can also do it in the code:
> In that case you want the worker to store the amount of data sent and received, i.e. these would be two attributes of the worker: each time some data is sent or received, you evaluate the size of the serialized data and add it to these attributes. To catch the event "data is sent or received" you will need to inspect how the module `serde` works (ser-ialize / de-serialize). It's more hacky, but it helps you understand the code.
Thanks for the tip! This hacky way sounds really interesting and promising... Am looking into it. Hopefully a PR will follow soon. | 2019-07-16T16:04:05 |
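For reference, a minimal usage sketch of the `NetworkMonitor` utility added by this PR (the interface name and port are machine-specific assumptions):
```python
from syft.generic.metrics import NetworkMonitor

capture, n_packets = NetworkMonitor.get_packets(
    timeout=10,                  # sniff for 10 seconds
    interface="lo0",             # assumption: macOS loopback; use "lo" on Linux
    bpf_filter="tcp port 8777",  # assumption: port the websocket worker listens on
)
print(n_packets)
NetworkMonitor.read_packet(index=0, capture_input=capture)  # pretty-prints one packet
```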
|
OpenMined/PySyft | 2,373 | OpenMined__PySyft-2373 | [
"2368"
] | ed137c6189cf6d91b4d2def93cdc217c93a28948 | diff --git a/syft/frameworks/torch/tensors/interpreters/additive_shared.py b/syft/frameworks/torch/tensors/interpreters/additive_shared.py
--- a/syft/frameworks/torch/tensors/interpreters/additive_shared.py
+++ b/syft/frameworks/torch/tensors/interpreters/additive_shared.py
@@ -563,6 +563,12 @@ def sum(self, *args, **kwargs):
module.sum = sum
+ def dot(self, other):
+ """Overload torch.dot(x, y)"""
+ return self.mul(other).sum()
+
+ module.dot = dot
+
def mean(self, *args, **kwargs):
"""Overload torch.mean(x)"""
# We cannot directly use mean on Long tensors
diff --git a/syft/frameworks/torch/tensors/interpreters/precision.py b/syft/frameworks/torch/tensors/interpreters/precision.py
--- a/syft/frameworks/torch/tensors/interpreters/precision.py
+++ b/syft/frameworks/torch/tensors/interpreters/precision.py
@@ -393,6 +393,11 @@ def addmm(bias, input_tensor, weight):
module.addmm = addmm
+ def dot(self, other):
+ return self.__mul__(other).sum()
+
+ module.dot = dot
+
def conv2d(
input,
weight,
| diff --git a/test/torch/tensors/test_additive_shared.py b/test/torch/tensors/test_additive_shared.py
--- a/test/torch/tensors/test_additive_shared.py
+++ b/test/torch/tensors/test_additive_shared.py
@@ -622,6 +622,16 @@ def test_torch_mean(workers):
assert (s_keepdim == torch.tensor([[1.75], [6.75]])).all()
+def test_torch_dot(workers):
+ torch.manual_seed(121) # Truncation might not always work so we set the random seed
+ alice, bob, james = workers["alice"], workers["bob"], workers["james"]
+
+ x = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0]).fix_prec().share(alice, bob, crypto_provider=james)
+ y = torch.tensor([3.0, 3.0, 3.0, 3.0, 3.0]).fix_prec().share(alice, bob, crypto_provider=james)
+
+ assert torch.dot(x, y).get().float_prec() == 45
+
+
def test_unbind(workers):
alice, bob, james = workers["alice"], workers["bob"], workers["james"]
diff --git a/test/torch/tensors/test_precision.py b/test/torch/tensors/test_precision.py
--- a/test/torch/tensors/test_precision.py
+++ b/test/torch/tensors/test_precision.py
@@ -272,6 +272,15 @@ def test_torch_addmm():
assert (fp_result.float_precision() == torch.tensor([[10.0, 8.0]])).all()
+def test_torch_dot(workers):
+ alice, bob, james = workers["alice"], workers["bob"], workers["james"]
+
+ x = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0]).fix_prec()
+ y = torch.tensor([3.0, 3.0, 3.0, 3.0, 3.0]).fix_prec()
+
+ assert torch.dot(x, y).float_prec() == 45
+
+
def test_torch_conv2d(workers):
bob, alice, james = (workers["bob"], workers["alice"], workers["james"])
im = torch.Tensor(
diff --git a/test/workers/test_websocket_worker.py b/test/workers/test_websocket_worker.py
--- a/test/workers/test_websocket_worker.py
+++ b/test/workers/test_websocket_worker.py
@@ -114,7 +114,7 @@ def test_list_objects_remote(hook, start_proc):
kwargs = {"id": "fed", "host": "localhost", "port": 8765, "hook": hook}
process_remote_fed1 = start_proc(WebsocketServerWorker, **kwargs)
- time.sleep(0.1)
+ time.sleep(0.5)
kwargs = {"id": "fed", "host": "localhost", "port": 8765, "hook": hook}
local_worker = WebsocketClientWorker(**kwargs)
@@ -145,7 +145,7 @@ def test_objects_count_remote(hook, start_proc):
kwargs = {"id": "fed", "host": "localhost", "port": 8764, "hook": hook}
process_remote_worker = start_proc(WebsocketServerWorker, **kwargs)
- time.sleep(0.1)
+ time.sleep(0.5)
kwargs = {"id": "fed", "host": "localhost", "port": 8764, "hook": hook}
local_worker = WebsocketClientWorker(**kwargs)
@@ -176,7 +176,7 @@ def test_connect_close(hook, start_proc):
kwargs = {"id": "fed", "host": "localhost", "port": 8763, "hook": hook}
process_remote_worker = start_proc(WebsocketServerWorker, **kwargs)
- time.sleep(0.1)
+ time.sleep(0.5)
kwargs = {"id": "fed", "host": "localhost", "port": 8763, "hook": hook}
local_worker = WebsocketClientWorker(**kwargs)
@@ -211,7 +211,7 @@ def test_websocket_worker_multiple_output_response(hook, start_proc):
kwargs = {"id": "socket_multiple_output", "host": "localhost", "port": 8768, "hook": hook}
process_remote_worker = start_proc(WebsocketServerWorker, **kwargs)
- time.sleep(0.1)
+ time.sleep(0.5)
x = torch.tensor([1.0, 3, 2])
local_worker = WebsocketClientWorker(**kwargs)
| Add support of torch.dot for additive shared tensor
torch.dot(x, y) currently fails; we should add support for it, doing something like `(x * y).sum()`
Example of code that should run:
```python
import sys
import syft as sy
from syft.workers import WebsocketClientWorker
import torch
hook = sy.TorchHook(torch)
bob = sy.VirtualWorker(hook, id="bob")
alice = sy.VirtualWorker(hook, id="alice")
charlie = sy.VirtualWorker(hook, id="charlie")
print("Created clients")
data1 = torch.tensor([1,2,3,4,5])
d1r = data1.share(alice, bob, charlie)
print("Shared first tensor")
data2 = torch.tensor([3,3,3,3,3])
d2r = data2.share(alice, bob, charlie)
print("Shared second tensor")
res = (d1r * d2r).sum()
print(res.get())
plaintext_res = torch.dot(data1, data2)
print(plaintext_res)
```
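With the patch, the last two steps collapse into a direct call, since `dot` is implemented on the shared tensor as `self.mul(other).sum()`:
```python
res = torch.dot(d1r, d2r)
print(res.get())  # equals torch.dot(data1, data2)
```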
| 2019-07-20T12:29:04 |
|
OpenMined/PySyft | 2,387 | OpenMined__PySyft-2387 | [
"1905"
] | 09b6fc355d891434d52a40650a306c9ebc68f45a | diff --git a/syft/workers/websocket_server.py b/syft/workers/websocket_server.py
--- a/syft/workers/websocket_server.py
+++ b/syft/workers/websocket_server.py
@@ -8,6 +8,8 @@
import ssl
import sys
import tblib.pickling_support
+import socket
+import logging
tblib.pickling_support.install()
@@ -165,7 +167,11 @@ def start(self):
)
asyncio.get_event_loop().run_until_complete(start_server)
- asyncio.get_event_loop().run_forever()
+ print("Serving. Press CTRL-C to stop.")
+ try:
+ asyncio.get_event_loop().run_forever()
+ except KeyboardInterrupt:
+ logging.info("Websocket server stopped.")
def list_objects(self, *args):
return str(self._objects)
| OpenMined.org Demo Broken
On our homepage we link to colabs which implement a simple SocketWorker demo. As the new version of PySyft has no SocketWorker, the demo fails to work.
https://colab.research.google.com/drive/1-Jb_E_nDuBGHIJ_psI95k-ukh-P_aly-
| Hi @iamtrask I would like to help out on this issue if no one has been assigned to it or has volunteered to take on the task yet. And, I have just checked the PySyft repo with a keyword search and it returns no results for `SocketWorker`.
Yes, any help is welcome if the issue is still there :)
Check that you have the latest version of PySyft and then check out `syft/workers/websocket*` to find the workers.
Thanks for the prompt response and the tip to start! Sure, will work on it as soon as possible. | 2019-07-23T22:25:03 |
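For reference, a minimal server-side sketch of the code path this PR touches (host/port are arbitrary assumptions); with the patch, `start()` prints a serving message and shuts down cleanly on CTRL-C instead of raising `KeyboardInterrupt`:
```python
import torch
import syft as sy
from syft.workers.websocket_server import WebsocketServerWorker

hook = sy.TorchHook(torch)
server = WebsocketServerWorker(hook=hook, id="alice", host="localhost", port=8777)
server.start()  # "Serving. Press CTRL-C to stop."
```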
|
OpenMined/PySyft | 2,413 | OpenMined__PySyft-2413 | [
"2412"
] | 9b3696e8074bf09e28d15b345b429dea891e3386 | diff --git a/syft/federated/plan.py b/syft/federated/plan.py
--- a/syft/federated/plan.py
+++ b/syft/federated/plan.py
@@ -624,9 +624,8 @@ def simplify(plan: "Plan") -> tuple:
tuple: a tuple holding the unique attributes of the Plan object
"""
- readable_plan = sy.serde._simplify(plan.readable_plan)
return (
- readable_plan,
+ plan.readable_plan, # We're not simplifying because readable_plan is already simplified
sy.serde._simplify(plan.id),
sy.serde._simplify(plan.arg_ids),
sy.serde._simplify(plan.result_ids),
@@ -656,7 +655,7 @@ def detail(worker: AbstractWorker, plan_tuple: tuple) -> "Plan":
id=id,
arg_ids=arg_ids,
result_ids=result_ids,
- readable_plan=sy.serde._detail(worker, readable_plan),
+ readable_plan=readable_plan, # We're not detailing, see simplify() for details
is_built=is_built,
)
| We double simplify the Plan.readable_plan object, making the plan quite large
**Describe the bug**
At present, when we serialize a plan, we simplify the .readable_plan object, which has already been simplified. This leads to a much larger serialization than is necessary. We should modify the simplifier for plans to simply take the readable_plan object as is.
**To Reproduce**
```
@sy.func2plan()
def plan_double_abs(x):
    x = x + x
    x = torch.abs(x)
    return x
plan_double_abs.build(torch.tensor([1., -2.]))
print("This plan is clearly already simplified...")
print(plan_double_abs.readable_plan)
print("...which we can verify by detailing it...")
print(sy.serde._detail(sy.local_worker, plan_double_abs.readable_plan[0]))
print("and when we simplify again (as the serializer does, it gets HUGE")
print(sy.serde._simplify(plan_double_abs))
```
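The fix passes `plan.readable_plan` through unchanged in `Plan.simplify` and skips `_detail` on it in `Plan.detail`. A sketch of the resulting invariant, assuming `_simplify` wraps a plan as `(detailer_index, attrs)` with `readable_plan` as the first attribute:
```python
simplified = sy.serde._simplify(plan_double_abs)
assert simplified[1][0] == plan_double_abs.readable_plan  # passed through, not re-simplified
```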
| Make sure to message @cereallarceny when this is done because it's a breaking change to syft.js | 2019-07-30T12:36:50 |
|
OpenMined/PySyft | 2,437 | OpenMined__PySyft-2437 | [
"2434"
] | 83c9aad9293e492c8f195cdd4b9792e5be2fe708 | diff --git a/syft/workers/base.py b/syft/workers/base.py
--- a/syft/workers/base.py
+++ b/syft/workers/base.py
@@ -367,7 +367,7 @@ def execute_command(self, message: tuple) -> "pointers.PointerTensor":
# Handle methods
if _self is not None:
if type(_self) == int:
- _self = self.get_obj(_self)
+ _self = BaseWorker.get_obj(self, _self)
if _self is None:
return
if type(_self) == str and _self == "self":
| fix private tensor disclosure via execute_command
Private tensors aren't meant to be accessible from a remote client; however, execute_command was fetching any object by its id. This fix gets the object using the get_obj method, which doesn't return private tensors.
#2432
| It does the check here https://github.com/OpenMined/PySyft/blob/dev/syft/workers/base.py#L469
I have an old version of `dev`... -> like a noob
Ok this is all good, thanks for spotting it!
The multiple inheritance scheme might produce problems.
So BaseWorker defines the wanted get_obj behaviour.
However which method gets invoked in WebsocketServerWorker?
It inherits from
1) FederatedClient <- ObjectStorage and
2) VirtualWorker <- BaseWorker <- ObjectStorage
So which get_obj() function does it have?
@midokura-silvia do you think that we should specify the exact method it should call, keeping in mind that the private feature for tensors should be available for remote workers? I think that introducing the private feature implies using get_obj() everywhere we want to use an object, because doing otherwise may leave an exploitable flaw.
Yes, specifying that it should call the BaseWorker.get_obj() method would make it explicit and easier to understand.
could you please reopen the PR or should I make a new one?
I don't see a way to reopen the pull request. Easiest will be to create a new one and reference the initial pull request.
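A hypothetical sketch of why the explicit call matters here (class bodies are illustrative, not the real implementations):
```python
class ObjectStorage:
    def get_obj(self, obj_id):
        return self._objects[obj_id]  # plain lookup, no privacy check

class BaseWorker(ObjectStorage):
    def get_obj(self, obj_id):
        obj = super().get_obj(obj_id)
        # ... the real implementation can refuse to return private tensors here ...
        return obj

# Writing BaseWorker.get_obj(self, _self) in execute_command pins the call to the
# privacy-checking version, whatever get_obj a subclass's MRO would resolve to.
```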
|
OpenMined/PySyft | 2,440 | OpenMined__PySyft-2440 | [
"2422"
] | 06ce023225dd613d8fb14ab2046135b93ab22376 | diff --git a/syft/serde/native_serde.py b/syft/serde/native_serde.py
--- a/syft/serde/native_serde.py
+++ b/syft/serde/native_serde.py
@@ -16,51 +16,46 @@
# Simplify/Detail Collections (list, set, tuple, etc.)
-def _simplify_collection(my_collection: Collection) -> Collection:
+def _simplify_collection(my_collection: Collection) -> Tuple:
"""
This function is designed to search a collection for any objects
which may need to be simplified (i.e., torch tensors). It iterates
through each object in the collection and calls _simplify on it. Finally,
- it returns the output collection as the same type as the input collection
- so that the consuming serialization step knows the correct type info. The
- reverse function to this function is _detail_collection, which undoes
- the functionality of this function.
+ it returns the output as the tuple of simplified items of the input collection.
+ This function is used to simplify list, set, and tuple. The reverse function,
+ which undoes the functionality of this function is different for each of these types:
+ _detail_collection_list, _detail_collection_set, _detail_collection_tuple.
Args:
my_collection (Collection): a collection of python objects
Returns:
- Collection: a collection of the same type as the input of simplified
- objects.
+ Tuple: a tuple with simplified objects.
"""
- # Step 0: get collection type for later use and itialize empty list
- my_type = type(my_collection)
+ # Step 0: initialize empty list
pieces = list()
# Step 1: serialize each part of the collection
for part in my_collection:
pieces.append(serde._simplify(part))
- # Step 2: convert back to original type and return serialization
- if my_type == set:
- return pieces
-
+ # Step 2: return serialization as tuple of simplified items
return tuple(pieces)
-def _detail_collection_list(worker: AbstractWorker, my_collection: Collection) -> Collection:
+def _detail_collection_list(worker: AbstractWorker, my_collection: Tuple) -> Collection:
"""
This function is designed to operate in the opposite direction of
- _simplify_collection. It takes a collection of simple python objects
+ _simplify_collection. It takes a tuple of simple python objects
and iterates through it to determine whether objects in the collection
need to be converted into more advanced types. In particular, it
converts binary objects into torch Tensors where appropriate.
Args:
worker: the worker doing the deserialization
- my_collection (Collection): a collection of simple python objects (including binary).
+ my_collection (Tuple): a tuple of simple python objects (including binary).
Returns:
Collection: a collection of the same type as the input where the objects
@@ -81,17 +76,17 @@ def _detail_collection_list(worker: AbstractWorker, my_collection: Collection) -
return pieces
-def _detail_collection_set(worker: AbstractWorker, my_collection: Collection) -> Collection:
+def _detail_collection_set(worker: AbstractWorker, my_collection: Tuple) -> Collection:
"""
This function is designed to operate in the opposite direction of
- _simplify_collection. It takes a collection of simple python objects
+ _simplify_collection. It takes a tuple of simple python objects
and iterates through it to determine whether objects in the collection
need to be converted into more advanced types. In particular, it
converts binary objects into torch Tensors where appropriate.
Args:
worker: the worker doing the deserialization
- my_collection (Collection): a collection of simple python objects (including binary).
+ my_collection (Tuple): a tuple of simple python objects (including binary).
Returns:
Collection: a collection of the same type as the input where the objects
@@ -139,13 +134,12 @@ def _detail_collection_tuple(worker: AbstractWorker, my_tuple: Tuple) -> Tuple:
return tuple(pieces)
-def _simplify_dictionary(my_dict: Dict) -> Dict:
+def _simplify_dictionary(my_dict: Dict) -> Tuple:
"""
This function is designed to search a dict for any objects
which may need to be simplified (i.e., torch tensors). It iterates
through each key, value in the dict and calls _simplify on it. Finally,
- it returns the output dict as the same type as the input dict
- so that the consuming serialization step knows the correct type info. The
+ it returns the output tuple of tuples containing key/value pairs. The
reverse function to this function is _detail_dictionary, which undoes
the functionality of this function.
@@ -153,8 +147,8 @@ def _simplify_dictionary(my_dict: Dict) -> Dict:
my_dict: A dictionary of python objects.
Returns:
- Dict: A dictionary of the same type as the input of simplified
- objects.
+ Tuple: Tuple containing tuples of simplified key/value pairs from the
+ input dictionary.
"""
pieces = list()
@@ -162,10 +156,10 @@ def _simplify_dictionary(my_dict: Dict) -> Dict:
for key, value in my_dict.items():
pieces.append((serde._simplify(key), serde._simplify(value)))
- return pieces
+ return tuple(pieces)
-def _detail_dictionary(worker: AbstractWorker, my_dict: Dict) -> Dict:
+def _detail_dictionary(worker: AbstractWorker, my_dict: Tuple) -> Dict:
"""
This function is designed to operate in the opposite direction of
_simplify_dictionary. It takes a dictionary of simple python objects
@@ -175,10 +169,10 @@ def _detail_dictionary(worker: AbstractWorker, my_dict: Dict) -> Dict:
Args:
worker: the worker doing the deserialization
- my_dict (Dict): a dictionary of simple python objects (including binary).
+ my_dict (Tuple): a simplified dictionary of simple python objects (including binary).
Returns:
- tuple: a collection of the same type as the input where the objects
+ Dict: a collection of the same type as the input where the objects
in the collection have been detailed.
"""
pieces = {}
| diff --git a/test/test_serde.py b/test/test_serde.py
--- a/test/test_serde.py
+++ b/test/test_serde.py
@@ -59,7 +59,7 @@ def test_set_simplify():
input = set(["hello", "world"])
set_detail_index = serde.detailers.index(native_serde._detail_collection_set)
str_detail_index = serde.detailers.index(native_serde._detail_str)
- target = (set_detail_index, [(str_detail_index, (b"hello",)), (str_detail_index, (b"world",))])
+ target = (set_detail_index, ((str_detail_index, (b"hello",)), (str_detail_index, (b"world",))))
assert serde._simplify(input)[0] == target[0]
assert set(serde._simplify(input)[1]) == set(target[1])
@@ -109,7 +109,7 @@ def test_dict_simplify():
detail_str_index = serde.detailers.index(native_serde._detail_str)
target = (
detail_dict_index,
- [((detail_str_index, (b"hello",)), (detail_str_index, (b"world",)))],
+ (((detail_str_index, (b"hello",)), (detail_str_index, (b"world",))),),
)
assert serde._simplify(input) == target
| Everything should simplify as a tuple in serde._simplify
# Describe the bug
Currently, when you run `_simplify` on a dictionary or set, you are returned information contained within square brackets (`[]`). It would be a lot easier if we could always work with parens (`()`).
## To Reproduce & Expected behavior
**Dictionary**
_Input:_ `sy.serde._simplify({1: 'hello', 'key2': 999})`
_Output:_ `(0, [(1, (5, (b'hello',))), ((5, (b'key2',)), 999)])`
_Expected:_ `(0, ((1, (5, (b'hello',))), ((5, (b'key2',)), 999)))`
**Set**
_Input:_ `sy.serde._simplify({'apple', 'cherry', 'banana'})`
_Output:_ `(3, [(5, (b'banana',)), (5, (b'apple',)), (5, (b'cherry',))])`
_Expected:_ `(3, ((5, (b'banana',)), (5, (b'apple',)), (5, (b'cherry',))))`
## Screenshots
**Dictionary**
<img width="504" alt="Screen Shot 2019-08-01 at 1 06 06 PM" src="https://user-images.githubusercontent.com/1297930/62292376-2f991d80-b45e-11e9-9dda-74ae44c75914.png">
**Set**
<img width="590" alt="Screen Shot 2019-08-01 at 1 20 23 PM" src="https://user-images.githubusercontent.com/1297930/62292895-4ab85d00-b45f-11e9-9804-8623d05b1f51.png">
## Desktop
- OS: MacOS
- Version 10.14.6
## Additional context
I need this working for syft.js's Serde parser to work as expected without having to perform unique string replacement all over the place. The simpler Serde is in terms of structure, the easier it is to write ports of it to Javascript (syft.js), Kotlin (@mccorby), and other languages.
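A small sketch of the invariant this change enforces, which is what keeps the JS/Kotlin parsers simple (output structure as in the examples above):
```python
for value in [{1: "hello"}, {"apple", "banana"}, ["a", "b"]]:
    simplified = sy.serde._simplify(value)
    assert isinstance(simplified[1], tuple)  # containers always simplify to tuples
```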
| 2019-08-03T07:37:51 |
|
OpenMined/PySyft | 2,525 | OpenMined__PySyft-2525 | [
"2475"
] | a15f66e88f612420146353d89105ba9b49379620 | diff --git a/syft/frameworks/torch/crypto/beaver.py b/syft/frameworks/torch/crypto/beaver.py
new file mode 100644
--- /dev/null
+++ b/syft/frameworks/torch/crypto/beaver.py
@@ -0,0 +1,33 @@
+import torch
+from typing import Callable
+from syft.workers.abstract import AbstractWorker
+
+
+def request_triple(
+ crypto_provider: AbstractWorker,
+ cmd: Callable,
+ field: int,
+ a_size: tuple,
+ b_size: tuple,
+ locations: list,
+):
+ """Generates a multiplication triple and sends it to all locations.
+
+ Args:
+ crypto_provider: worker you would like to request the triple from
+ cmd: An equation in einsum notation.
+ field: An integer representing the field size.
+ a_size: A tuple which is the size that a should be.
+ b_size: A tuple which is the size that b should be.
+ locations: A list of workers where the triple should be shared between.
+
+ Returns:
+ A triple of AdditiveSharedTensors such that c_shared = cmd(a_shared, b_shared).
+ """
+ a = torch.randint(field, a_size)
+ b = torch.randint(field, b_size)
+ c = cmd(a, b)
+ a_shared = a.share(*locations, field=field, crypto_provider=crypto_provider).child
+ b_shared = b.share(*locations, field=field, crypto_provider=crypto_provider).child
+ c_shared = c.share(*locations, field=field, crypto_provider=crypto_provider).child
+ return a_shared, b_shared, c_shared
diff --git a/syft/frameworks/torch/crypto/spdz.py b/syft/frameworks/torch/crypto/spdz.py
--- a/syft/frameworks/torch/crypto/spdz.py
+++ b/syft/frameworks/torch/crypto/spdz.py
@@ -2,6 +2,7 @@
from typing import Callable
import syft as sy
from syft.workers.abstract import AbstractWorker
+from syft.frameworks.torch.crypto.beaver import request_triple
no_wrap = {"no_wrap": True}
@@ -25,7 +26,7 @@ def spdz_mul(cmd: Callable, x_sh, y_sh, crypto_provider: AbstractWorker, field:
locations = x_sh.locations
# Get triples
- a, b, a_mul_b = crypto_provider.generate_triple(cmd, field, x_sh.shape, y_sh.shape, locations)
+ a, b, a_mul_b = request_triple(crypto_provider, cmd, field, x_sh.shape, y_sh.shape, locations)
delta = x_sh - a
epsilon = y_sh - b
diff --git a/syft/workers/base.py b/syft/workers/base.py
--- a/syft/workers/base.py
+++ b/syft/workers/base.py
@@ -818,29 +818,6 @@ def deserialized_search(self, query_items: Tuple[str]) -> List["pointers.Pointer
"""
return self.search(*query_items)
- def generate_triple(
- self, cmd: Callable, field: int, a_size: tuple, b_size: tuple, locations: list
- ):
- """Generates a multiplication triple and sends it to all locations.
-
- Args:
- cmd: An equation in einsum notation.
- field: An integer representing the field size.
- a_size: A tuple which is the size that a should be.
- b_size: A tuple which is the size that b should be.
- locations: A list of workers where the triple should be shared between.
-
- Returns:
- A triple of AdditiveSharedTensors such that c_shared = cmd(a_shared, b_shared).
- """
- a = self.torch.randint(field, a_size)
- b = self.torch.randint(field, b_size)
- c = cmd(a, b)
- a_shared = a.share(*locations, field=field, crypto_provider=self).child
- b_shared = b.share(*locations, field=field, crypto_provider=self).child
- c_shared = c.share(*locations, field=field, crypto_provider=self).child
- return a_shared, b_shared, c_shared
-
def _get_msg(self, index):
"""Returns a decrypted message from msg_history. Mostly useful for testing.
| generate_triple in base.py
**Describe the bug**
Cryptography logic is inside of the generic worker class, which is not a proper decoupling of messaging infrastructure from cryptography protocols.
**To Reproduce**
Visit... https://github.com/OpenMined/PySyft/blob/dev/syft/workers/base.py#L825
**Expected behavior**
This method should instead live in a new file syft/frameworks/torch/crypto/beaver.py as a method called "request_triple" where you also pass in the worker you'd like to request the triple from.
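A usage sketch of the proposed helper (worker names and field size are assumptions):
```python
import torch
from syft.frameworks.torch.crypto.beaver import request_triple

a_sh, b_sh, c_sh = request_triple(
    crypto_provider,         # the worker the triple is requested from
    torch.mul,               # cmd such that c = cmd(a, b)
    field=2 ** 62,
    a_size=(2, 2),
    b_size=(2, 2),
    locations=[alice, bob],  # workers the shares are split between
)
```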
| I would like to work on it. | 2019-08-17T18:45:06 |
|
OpenMined/PySyft | 2,548 | OpenMined__PySyft-2548 | [
"2447"
] | 12f828df62c702f50c5d57339fe7436e2ed0a608 | diff --git a/syft/frameworks/torch/tensors/interpreters/native.py b/syft/frameworks/torch/tensors/interpreters/native.py
--- a/syft/frameworks/torch/tensors/interpreters/native.py
+++ b/syft/frameworks/torch/tensors/interpreters/native.py
@@ -412,6 +412,7 @@ def send(
if self._is_parameter():
if inplace:
+ self.is_wrapper = True
with torch.no_grad():
self.set_()
self.data = ptr
@@ -421,6 +422,7 @@ def send(
raise ValueError("Parameters can't accept no_wrap=True")
wrapper = torch.Tensor()
param_wrapper = torch.nn.Parameter(wrapper)
+ param_wrapper.is_wrapper = True
with torch.no_grad():
param_wrapper.set_()
param_wrapper.data = ptr
@@ -571,12 +573,13 @@ def get(self, *args, inplace: bool = False, **kwargs):
# Parameters use .data instead of children
# so we need to have special support to make sure
- # that Parmaeters operate inline (because they're
+ # that Parameters operate inline (because they're
# typically being managed inside of a model/optimizer
# so not using the same wrapper can cause the model/
# optimizer to lose track of where the actual weights
# are.
if isinstance(self, torch.nn.Parameter):
+ self.is_wrapper = tensor.data.is_wrapper
if inplace:
self.data = tensor.data
self.grad = tensor.grad
@@ -728,7 +731,14 @@ def share(
shared_tensor = self
if self.has_child():
- self.child = self.child.share(*owners, field=field, crypto_provider=crypto_provider)
+ kwargs = (
+ {"requires_grad": requires_grad}
+ if isinstance(self.child, syft.PointerTensor)
+ else {}
+ )
+ self.child = self.child.share(
+ *owners, field=field, crypto_provider=crypto_provider, **kwargs
+ )
if no_wrap:
return self.child
else:
@@ -742,7 +752,7 @@ def share(
if not no_wrap:
shared_tensor = shared_tensor.wrap()
- if requires_grad:
+ if requires_grad and not (self.is_wrapper and isinstance(self.child, syft.PointerTensor)):
shared_tensor = syft.AutogradTensor().on(shared_tensor)
return shared_tensor
| diff --git a/test/conftest.py b/test/conftest.py
--- a/test/conftest.py
+++ b/test/conftest.py
@@ -91,17 +91,25 @@ def workers(hook):
syft.frameworks.torch.hook.hook_args.hook_method_response_functions = {}
syft.frameworks.torch.hook.hook_args.get_tensor_type_functions = {}
- # Define 3 virtual workers
+ # Define 4 virtual workers
alice = syft.VirtualWorker(id="alice", hook=hook, is_client_worker=False)
bob = syft.VirtualWorker(id="bob", hook=hook, is_client_worker=False)
+ charlie = syft.VirtualWorker(id="charlie", hook=hook, is_client_worker=False)
james = syft.VirtualWorker(id="james", hook=hook, is_client_worker=False)
- workers = {"me": hook.local_worker, "alice": alice, "bob": bob, "james": james}
+ workers = {
+ "me": hook.local_worker,
+ "alice": alice,
+ "bob": bob,
+ "charlie": charlie,
+ "james": james,
+ }
yield workers
alice.remove_worker_from_local_worker_registry()
bob.remove_worker_from_local_worker_registry()
+ charlie.remove_worker_from_local_worker_registry()
james.remove_worker_from_local_worker_registry()
diff --git a/test/torch/tensors/test_autograd.py b/test/torch/tensors/test_autograd.py
--- a/test/torch/tensors/test_autograd.py
+++ b/test/torch/tensors/test_autograd.py
@@ -472,6 +472,53 @@ def test_backward_for_linear_model_on_additive_shared_with_autograd(workers):
assert (model.bias.grad == bias_grad).all()
+def test_remote_share_with_requires_grad(workers):
+ """
+ Test calling fix_precision and share(requires_grad=True) on pointers
+ to tensors and model
+ """
+ bob, alice, charlie, crypto_provider = (
+ workers["bob"],
+ workers["alice"],
+ workers["charlie"],
+ workers["james"],
+ )
+
+ t = torch.Tensor([3])
+ t = t.send(charlie)
+ t = t.fix_precision()
+ t = t.share(alice, bob, crypto_provider=crypto_provider, requires_grad=True)
+ t = t.get()
+
+ assert isinstance(t.child, AutogradTensor)
+
+ t = torch.Tensor([3])
+ t = t.fix_precision()
+ t = t.send(charlie)
+ t = t.share(alice, bob, crypto_provider=crypto_provider, requires_grad=True)
+ t = t.get()
+
+ assert isinstance(t.child, AutogradTensor)
+
+ model = nn.Linear(2, 1)
+ model.send(charlie)
+ model.fix_precision()
+ model.share(alice, bob, crypto_provider=crypto_provider, requires_grad=True)
+ model.get()
+
+ assert isinstance(model.weight.child, AutogradTensor)
+
+ # See Issue #2546
+
+ # model = nn.Linear(2, 1)
+ # model.fix_precision()
+ # model.send(charlie)
+ # model.share(alice, bob, crypto_provider=crypto_provider, requires_grad=True)
+ # model.get()
+ #
+ # assert isinstance(model.weight.child, AutogradTensor)
+
+
def test_encrypted_training_with_linear_model(workers):
"""
Test a minimal example of encrypted training using nn.Linear
| Unexpected behavior with AutogradTensors in remote workers
**Describe the bug**
Transforming a remote tensor into an `AutogradTensor` with `.share(*workers, requires_grad=True)` creates an `AutogradTensor` that points to a `Wrapper` instead of the opposite, as would be expected. We want this to behave correctly, as our objective is to work with data on remote servers (i.e. remote tensors) and do encrypted training using MPC with this data.
**To Reproduce**
The code below:
```
t = torch.Tensor([3]).send(bob) # t is a remote tensor located at the remote worker Bob
t = t.fix_precision().share(alice, jon, crypto_provider=crypto_provider, requires_grad=True)
t.get()
```
Gives this output:
```
AutogradTensor>(Wrapper)>FixedPrecisionTensor>[AdditiveSharingTensor]
-> [PointerTensor | me:48986511531 -> alice:2051851770]
-> [PointerTensor | me:48878799575 -> jon:38113753813]
*crypto provider: crypto_provider*
```
**Expected behavior**
When working with local tensors and sharing them for MPC with autograd we have a `Wrapper` at the top
```
t = t.fix_precision().share(alice, jon, crypto_provider=crypto_provider, requires_grad=True)
t
```
```
(Wrapper)>AutogradTensor>FixedPrecisionTensor>[AdditiveSharingTensor]
-> [PointerTensor | me:78465969403 -> alice:24923297626]
-> [PointerTensor | me:39395163768 -> jon:22928149946]
*crypto provider: crypto_provider*
```
**Desktop (please complete the following information):**
- OS: MacOS Mojave 10.14.4
- Version `syft==0.1.22a1`
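With the fix in `native.py` above, `requires_grad` is forwarded to the remote `share` call when the child is a `PointerTensor`, so the wrapper stays on top locally and the `AutogradTensor` is created on the remote side instead. A sketch mirroring the new test:
```python
t = torch.Tensor([3]).send(bob)
t = t.fix_precision().share(alice, jon, crypto_provider=crypto_provider, requires_grad=True)
t = t.get()
assert isinstance(t.child, sy.AutogradTensor)  # wrapper on top, as in the local case
```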
| 2019-08-22T18:17:18 |
|
OpenMined/PySyft | 2,549 | OpenMined__PySyft-2549 | [
"2503"
] | 7e14632b65f9bb95b73ef62b0678858704fc6b4c | diff --git a/syft/frameworks/torch/hook/hook_args.py b/syft/frameworks/torch/hook/hook_args.py
--- a/syft/frameworks/torch/hook/hook_args.py
+++ b/syft/frameworks/torch/hook/hook_args.py
@@ -55,6 +55,7 @@
"chunk",
"torch.functional.split",
"split",
+ "backward",
}
register_ambiguous_method(*ambiguous_methods)
diff --git a/syft/frameworks/torch/tensors/interpreters/autograd.py b/syft/frameworks/torch/tensors/interpreters/autograd.py
--- a/syft/frameworks/torch/tensors/interpreters/autograd.py
+++ b/syft/frameworks/torch/tensors/interpreters/autograd.py
@@ -255,10 +255,19 @@ def handle_func_command(cls, command):
def get(self):
"""Just a pass through. This is most commonly used when calling .get() on a
AutogradTensor which has also been shared."""
- self.child = self.child.get()
- # Remove the autograd node if a simple tensor is received
- if isinstance(self.child, torch.Tensor) and not self.child.is_wrapper:
- return self.child
+ tensor = self.child.get()
+
+ if isinstance(tensor, torch.Tensor):
+ # Remove the autograd node if a simple tensor is received
+ if not tensor.is_wrapper:
+ return tensor
+ # If it's a wrapper, then insert the autograd under the wrapper
+ else:
+ self.child = tensor.child
+ tensor.child = self
+ return tensor
+
+ self.child = tensor
return self
def float_precision(self):
| diff --git a/test/torch/hook/test_hook_args.py b/test/torch/hook/test_hook_args.py
--- a/test/torch/hook/test_hook_args.py
+++ b/test/torch/hook/test_hook_args.py
@@ -14,3 +14,103 @@ def test_build_rule_numpy():
arr = np.array([2.0, 3.0, 4.0])
result = hook_args.build_rule([arr, arr + 2, [2, 4, "string"]])
assert result == [1, 1, [0, 0, 0]]
+
+
+def test_backward_multiple_use(workers):
+ """
+ Test using backward() in different contexts (FL or Encrypted) within
+ the same session.
+ """
+ big_hospital, small_hospital, crypto_provider = (
+ workers["bob"],
+ workers["alice"],
+ workers["james"],
+ )
+
+ # A Toy Model
+ class Net(torch.nn.Module):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.fc = torch.nn.Linear(2, 1)
+
+ def forward(self, x):
+ x = self.fc(x)
+ return x
+
+ def federated():
+ # A Toy Dataset
+ data = torch.tensor([[0, 0], [0, 1], [1, 0], [1, 1.0]])
+ target = torch.tensor([[0], [0], [1], [1.0]])
+
+ model = Net()
+
+ model_weight = model.fc.weight.copy()
+
+ # Training Logic
+ opt = torch.optim.SGD(params=model.parameters(), lr=0.1)
+
+ data = data.send(big_hospital)
+ target = target.send(big_hospital)
+
+ # NEW) send model to correct worker
+ model.send(data.location)
+
+ # 1) erase previous gradients (if they exist)
+ opt.zero_grad()
+
+ # 2) make a prediction
+ pred = model(data)
+
+ # 3) calculate how much we missed
+ loss = ((pred - target) ** 2).sum()
+
+ # 4) figure out which weights caused us to miss
+ loss.backward()
+
+ # 5) change those weights
+ opt.step()
+
+ assert (model_weight - model.get().fc.weight).sum().abs() > 1.0e-3
+
+ def encrypted():
+ # A Toy Dataset
+ data2 = torch.tensor([[0, 0], [0, 1], [1, 0], [1, 1.0]])
+ target2 = torch.tensor([[0], [0], [1], [1.0]])
+
+ model2 = Net()
+
+ model2_weight = model2.fc.weight.copy()
+
+ # We encode everything
+ data2 = data2.fix_precision().share(
+ big_hospital, small_hospital, crypto_provider=crypto_provider, requires_grad=True
+ )
+ target2 = target2.fix_precision().share(
+ big_hospital, small_hospital, crypto_provider=crypto_provider, requires_grad=True
+ )
+ model2 = model2.fix_precision().share(
+ big_hospital, small_hospital, crypto_provider=crypto_provider, requires_grad=True
+ )
+
+ opt2 = torch.optim.SGD(params=model2.parameters(), lr=0.1).fix_precision()
+
+ # 1) erase previous gradients (if they exist)
+ opt2.zero_grad()
+
+ # 2) make a prediction
+ pred2 = model2(data2)
+
+ # 3) calculate how much we missed
+ loss2 = ((pred2 - target2) ** 2).sum()
+
+ # 4) figure out which weights caused us to miss
+ loss2.backward()
+
+ # 5) change those weights
+ opt2.step()
+
+ weight_diff = (model2_weight - model2.fc.weight.get().float_prec()).sum().abs()
+ assert weight_diff > 1.0e-3
+
+ federated()
+ encrypted()
diff --git a/test/torch/tensors/test_autograd.py b/test/torch/tensors/test_autograd.py
--- a/test/torch/tensors/test_autograd.py
+++ b/test/torch/tensors/test_autograd.py
@@ -473,6 +473,32 @@ def test_backward_for_linear_model_on_additive_shared_with_autograd(workers):
assert (model.bias.grad == bias_grad).all()
+def test_share_with_requires_grad(workers):
+ """
+ Test calling fix_precision and share(requires_grad=True) on tensors and model
+ """
+ bob, alice, charlie, crypto_provider = (
+ workers["bob"],
+ workers["alice"],
+ workers["charlie"],
+ workers["james"],
+ )
+
+ t = torch.Tensor([3.0])
+ t = t.fix_precision()
+ t = t.share(alice, bob, crypto_provider=crypto_provider, requires_grad=True)
+
+ assert t.is_wrapper and isinstance(t.child, AutogradTensor)
+
+ t = t.get()
+
+ assert t.is_wrapper and isinstance(t.child, AutogradTensor)
+
+ t = t.float_prec()
+
+ assert t == torch.Tensor([3.0])
+
+
def test_remote_share_with_requires_grad(workers):
"""
Test calling fix_precision and share(requires_grad=True) on pointers
| URGENT: bug in encrypted autograd
**Describe the bug**
For some reason, calling loss.backward() on a pointer using normal autograd then breaks our ability to call loss.backward() on an encrypted loss variable even on two totally separate examples.
**To Reproduce**
```
import torch
import torch as th
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import syft as sy
# Set everything up
hook = sy.TorchHook(torch)
big_hospital = sy.VirtualWorker(hook, id="big_hospital2")
small_hospital = sy.VirtualWorker(hook, id="small_hospital2")
crypto_provider = sy.VirtualWorker(hook, id="crypto_provider2")
# A Toy Model
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(2, 2)
self.fc2 = nn.Linear(2, 1)
def forward(self, x):
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
return x
def federated():
# A Toy Dataset
data = th.tensor([[0,0],[0,1],[1,0],[1,1.]])
target = th.tensor([[0],[0],[1],[1.]])
model = Net()
# Training Logic
opt = optim.SGD(params=model.parameters(),lr=0.1)
data = data.send(big_hospital)
target = target.send(big_hospital)
# NEW) send model to correct worker
model.send(data.location)
# 1) erase previous gradients (if they exist)
opt.zero_grad()
# 2) make a prediction
pred = model(data)
# 3) calculate how much we missed
loss = ((pred - target)**2).sum()
# 4) figure out which weights caused us to miss
loss.backward()
print("Done!")
def encrypted():
# A Toy Dataset
data2 = th.tensor([[0,0],[0,1],[1,0],[1,1.]])
target2 = th.tensor([[0],[0],[1],[1.]])
model2 = Net()
# We encode everything
data2 = data2.fix_precision().share(big_hospital, small_hospital, crypto_provider=crypto_provider, requires_grad=True)
target2 = target2.fix_precision().share(big_hospital, small_hospital, crypto_provider=crypto_provider, requires_grad=True)
model2 = model2.fix_precision().share(big_hospital, small_hospital, crypto_provider=crypto_provider, requires_grad=True)
opt2 = optim.SGD(params=model2.parameters(),lr=0.1).fix_precision()
# 1) erase previous gradients (if they exist)
opt2.zero_grad()
# 2) make a prediction
pred2 = model2(data2)
# 3) calculate how much we missed
loss2 = ((pred2 - target2)**2).sum()
# 4) figure out which weights caused us to miss
loss2.backward()
# # 5) change those weights
# opt2.step()
# # 6) print our progress
# print(loss2.get().float_precision())
print("Done")
run_broken = True
# make sure to re-start your jupyter notebook / environment with each test.
if(run_broken):
# Breaks
federated()
encrypted() # breaks here - something about loss2.backward() causes the federated() demo to break
else:
# Works fine
encrypted()
federated()
```
Throws error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-1-db6dbbeffaa2> in <module>()
100 # Breaks
101 federated()
--> 102 encrypted() # breaks here - something about loss2.backward() causes the federated() demo to break
103 else:
104 # Works fine
<ipython-input-1-db6dbbeffaa2> in encrypted()
84
85 # 4) figure out which weights caused us to miss
---> 86 loss2.backward()
87
88 # # 5) change those weights
/Users/atrask/anaconda/lib/python3.6/site-packages/syft-0.1.22a1-py3.6.egg/syft/frameworks/torch/hook/hook.py in overloaded_native_method(self, *args, **kwargs)
683 # Put back the wrappers where needed
684 response = syft.frameworks.torch.hook_args.hook_response(
--> 685 method_name, response, wrap_type=type(self), new_self=self
686 )
687
/Users/atrask/anaconda/lib/python3.6/site-packages/syft-0.1.22a1-py3.6.egg/syft/frameworks/torch/hook/hook_args.py in hook_response(attr, response, wrap_type, wrap_args, new_self)
243 response_hook_function = hook_method_response_functions[attr_id]
244 # Try running it
--> 245 new_response = response_hook_function(response)
246
247 except (IndexError, KeyError, AssertionError): # Update the function in cas of an error
/Users/atrask/anaconda/lib/python3.6/site-packages/syft-0.1.22a1-py3.6.egg/syft/frameworks/torch/hook/hook_args.py in <lambda>(x)
502 f = many_fold
503
--> 504 return lambda x: f(lambdas, x)
505
506
/Users/atrask/anaconda/lib/python3.6/site-packages/syft-0.1.22a1-py3.6.egg/syft/frameworks/torch/hook/hook_args.py in two_fold(lambdas, args, **kwargs)
520
521 def two_fold(lambdas, args, **kwargs):
--> 522 return lambdas[0](args[0], **kwargs), lambdas[1](args[1], **kwargs)
523
524
/Users/atrask/anaconda/lib/python3.6/site-packages/syft-0.1.22a1-py3.6.egg/syft/frameworks/torch/hook/hook_args.py in <lambda>(i)
480 if isinstance(r, (list, tuple)) # if the rule is a list or tuple.
481 # Last if not, rule is probably == 1 so use type to return the right transformation.
--> 482 else lambda i: backward_func[wrap_type](i, **wrap_args)
483 for a, r in zip(response, rules) # And do this for all the responses / rules provided
484 ]
/Users/atrask/anaconda/lib/python3.6/site-packages/syft-0.1.22a1-py3.6.egg/syft/frameworks/torch/hook/hook_args.py in <lambda>(i)
73 backward_func = {
74 TorchTensor: lambda i: i.wrap(),
---> 75 torch.Tensor: lambda i: i.wrap(),
76 torch.nn.Parameter: lambda i: torch.nn.Parameter(data=i),
77 PointerTensor: lambda i: i,
AttributeError: 'NoneType' object has no attribute 'wrap'
```
**Additional context**
latest version of PySyft from PyPI ('0.1.22a1')
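A conceptual sketch of the suspected mechanism, based only on the traceback above: `hook_args` caches per-signature transformation functions, so a function cached during the plain federated run can be replayed on the encrypted run against an incompatible response structure. The cache name below mirrors the traceback; everything else is illustrative, not the real PySyft code.
```python
# Hypothetical stand-in for the caching in syft.frameworks.torch.hook.hook_args.
hook_method_response_functions = {}

def hook_response(attr_id, build_response_fn, response):
    # The first call for this attr_id builds and caches the transformation...
    if attr_id not in hook_method_response_functions:
        hook_method_response_functions[attr_id] = build_response_fn()
    # ...and later calls reuse it, even if the response structure no longer
    # matches, which can surface as "'NoneType' object has no attribute 'wrap'".
    return hook_method_response_functions[attr_id](response)
```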
| Is this the error you get?
```
File "PySyft/syft/frameworks/torch/hook/hook_args.py", line 75, in <lambda>
torch.Tensor: lambda i: i.wrap()
AttributeError: 'NoneType' object has no attribute 'wrap'
```
Yup
I will try to see what happens if we force the function not to be taken from the cached dictionary in the hook. There might be a conflict there.
You rock. Thank you @midokura-silvia
```
# Try this
federated()
sy.frameworks.torch.hook.hook_args.hook_method_args_functions = {}
sy.frameworks.torch.hook.hook_args.hook_method_response_functions = {}
sy.frameworks.torch.hook.hook_args.get_tensor_type_functions = {}
encrypted() # runs through
``` | 2019-08-23T08:11:31 |
OpenMined/PySyft | 2,612 | OpenMined__PySyft-2612 | [
"2604"
] | 6feeb9357b83615f5acc9cde6d05ff4a61263eaa | diff --git a/syft/frameworks/torch/tensors/interpreters/large_precision.py b/syft/frameworks/torch/tensors/interpreters/large_precision.py
--- a/syft/frameworks/torch/tensors/interpreters/large_precision.py
+++ b/syft/frameworks/torch/tensors/interpreters/large_precision.py
@@ -74,7 +74,8 @@ def _create_internal_representation(self):
# floor is applied otherwise, long float is not accurate
self_scaled = np.vectorize(math.floor)(self_scaled)
- self_scaled %= self.field
+ # https://github.com/numpy/numpy/issues/6464
+ self_scaled = np.remainder(self_scaled, np.array(self.field), casting="unsafe")
# self_scaled can be an array of floats. As multiplying an array of int with an int
# still gives an array of int, I think it should be because self.child is a float tensor at this point.
| diff --git a/test/torch/tensors/test_large_precision.py b/test/torch/tensors/test_large_precision.py
--- a/test/torch/tensors/test_large_precision.py
+++ b/test/torch/tensors/test_large_precision.py
@@ -283,6 +283,14 @@ def test_share_sub(workers):
assert torch.all(torch.eq(expected, y))
+def test_storage():
+ x = torch.tensor([1.0, 2.0, 3.0])
+ enlarged = x.fix_prec(storage="large")
+ restored = enlarged.float_precision()
+ # And now x and restored must be the same
+ assert torch.all(torch.eq(x, restored))
+
+
# def test_share_mul(workers):
# alice, bob, james = (workers["alice"], workers["bob"], workers["james"])
#
| Fix precision with large storage fails
**Describe the bug**
The following operation fails:
`lpt = torch.Tensor([-2., 4]).fix_prec(storage="large")`
**To Reproduce**
Steps to reproduce the behavior:
Use the above expression in a test and see the result:
`TypeError`
**Expected behavior**
The expression should not crash
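For extra context, here is a minimal, hypothetical sketch of the kind of casting failure involved; the merged fix replaces the in-place `%=` with an explicit `np.remainder(..., casting="unsafe")` call (see https://github.com/numpy/numpy/issues/6464). The dtypes below are illustrative only, not the exact ones used inside `LargePrecisionTensor`.
```python
import numpy as np

# Rough sketch of the failure mode (illustrative dtypes, not the real ones).
scaled = np.array([1, 2])  # integer array
field = 1.5                # a modulus numpy cannot cast back "same_kind"-safely

try:
    scaled %= field        # in-place ufunc applies "same_kind" casting rules
except TypeError as e:
    print("in-place remainder fails:", e)

# The explicit ufunc call with casting="unsafe" avoids the cast check:
result = np.remainder(scaled, np.array(field), casting="unsafe")
print(result)
```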
| 2019-09-11T06:22:15 |
|
OpenMined/PySyft | 2,641 | OpenMined__PySyft-2641 | [
"2092"
] | b7c8b3d3d587056f376c38dd07462a84f8305564 | diff --git a/syft/frameworks/torch/differential_privacy/pate.py b/syft/frameworks/torch/differential_privacy/pate.py
--- a/syft/frameworks/torch/differential_privacy/pate.py
+++ b/syft/frameworks/torch/differential_privacy/pate.py
@@ -278,15 +278,15 @@ def perform_analysis(teacher_preds, indices, noise_eps, delta=1e-5, moments=8, b
def tensors_to_literals(tensor_list):
"""Converts list of torch tensors to list of integers/floats. Fix for not having the functionality which converts list of tensors to tensors
-
+
Args:
-
+
tensor_list[List]: List of torch tensors
-
+
Returns:
-
+
literal_list[List]: List of floats/integers
-
+
"""
literal_list = []
@@ -334,14 +334,14 @@ def logmgf_exact_torch(q, priv_eps, l):
def compute_q_noisy_max_torch(counts, noise_eps):
"""Returns ~ Pr[outcome != winner].
Args:
-
+
counts: a list of scores
noise_eps: privacy parameter for noisy_max
-
+
Returns:
-
+
q: the probability that outcome is different from true winner.
-
+
"""
if type(counts) != torch.tensor:
@@ -349,9 +349,7 @@ def compute_q_noisy_max_torch(counts, noise_eps):
counts = torch.tensor(tensors_to_literals(counts), dtype=torch.float)
_, winner = counts.max(0)
- counts_normalized = noise_eps * (
- torch.tensor(counts, dtype=torch.float) - torch.tensor(counts[winner], dtype=torch.float)
- )
+ counts_normalized = noise_eps * (counts.clone().detach().type(torch.float) - counts[winner])
counts_normalized = tensors_to_literals(counts_normalized)
counts_rest = torch.tensor(
@@ -387,7 +385,7 @@ def sens_at_k_torch(counts, noise_eps, l, k):
"""Return sensitivity at distane k.
Args:
-
+
counts: an array of scores
noise_eps: noise parameter used
l: moment whose sensitivity is being computed
@@ -419,7 +417,7 @@ def sens_at_k_torch(counts, noise_eps, l, k):
def smooth_sens_torch(counts, noise_eps, l, beta):
"""Compute beta-smooth sensitivity.
-
+
Args:
counts: array of scors
noise_eps: noise parameter
| diff --git a/test/torch/differential_privacy/test_pate.py b/test/torch/differential_privacy/test_pate.py
--- a/test/torch/differential_privacy/test_pate.py
+++ b/test/torch/differential_privacy/test_pate.py
@@ -58,5 +58,5 @@ def test_torch_ref_match():
preds, indices, noise_eps=0.1, delta=1e-5
)
- assert torch.isclose(data_dep_eps, torch.tensor(data_dep_eps_ref.item()))
- assert torch.isclose(data_ind_eps, torch.tensor(data_ind_eps_ref.item()))
+ assert torch.isclose(data_dep_eps, torch.tensor(data_dep_eps_ref))
+ assert torch.isclose(data_ind_eps, torch.tensor(data_ind_eps_ref))
| Build Warnings
#1801 seems to have introduced new build warnings which we should address.
```
======================================================================================== warnings summary =========================================================================================
test/torch/tensors/test_poly.py::testSigmoid
/Users/atrask/Laboratory/openmined/PySyft/syft/frameworks/torch/hook.py:743: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
current_tensor = hook_self.torch.native_tensor(*args, **kwargs)
test/torch/tensors/test_poly.py::testExp
/Users/atrask/Laboratory/openmined/PySyft/syft/frameworks/torch/hook.py:743: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
current_tensor = hook_self.torch.native_tensor(*args, **kwargs)
test/torch/tensors/test_poly.py::testtanh
/Users/atrask/Laboratory/openmined/PySyft/syft/frameworks/torch/hook.py:743: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
current_tensor = hook_self.torch.native_tensor(*args, **kwargs)
test/torch/tensors/test_poly.py::testinterpolate
/Users/atrask/Laboratory/openmined/PySyft/syft/frameworks/torch/hook.py:743: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
current_tensor = hook_self.torch.native_tensor(*args, **kwargs)
test/torch/tensors/test_poly.py::testcustomfunction
/Users/atrask/Laboratory/openmined/PySyft/syft/frameworks/torch/hook.py:743: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
current_tensor = hook_self.torch.native_tensor(*args, **kwargs)
test/torch/tensors/test_poly.py::testrandomfunction
/Users/atrask/Laboratory/openmined/PySyft/syft/frameworks/torch/hook.py:743: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
current_tensor = hook_self.torch.native_tensor(*args, **kwargs)
test/torch/tensors/test_poly.py::testlogfunction
/Users/atrask/Laboratory/openmined/PySyft/syft/frameworks/torch/hook.py:743: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
current_tensor = hook_self.torch.native_tensor(*args, **kwargs)
test/torch/tensors/test_poly.py::testexptaylor
/Users/atrask/Laboratory/openmined/PySyft/syft/frameworks/torch/hook.py:743: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
current_tensor = hook_self.torch.native_tensor(*args, **kwargs)
test/torch/tensors/test_poly.py::testsigmoidtaylor
/Users/atrask/Laboratory/openmined/PySyft/syft/frameworks/torch/hook.py:743: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
current_tensor = hook_self.torch.native_tensor(*args, **kwargs)
-- Docs: http://doc.pytest.org/en/latest/warnings.html
```
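These warnings all point at the same pattern; here is a minimal sketch of how the warning is triggered and of the `clone().detach()` idiom that the warning message itself recommends (and that the fix adopts in `pate.py`). The values are illustrative.
```python
import warnings
import torch

counts = torch.tensor([3.0, 1.0, 5.0])

# Re-wrapping an existing tensor with torch.tensor(...) triggers the warning:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    copy = torch.tensor(counts, dtype=torch.float)
    print(len(caught))  # typically 1: the copy-construct UserWarning

# The recommended pattern, warning-free:
copy = counts.clone().detach().type(torch.float)
```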
| Oops, I will work on this @iamtrask
They are now almost all removed, except one | 2019-10-04T07:51:18 |
OpenMined/PySyft | 2,659 | OpenMined__PySyft-2659 | [
"2537"
] | ff080c1aff7ef99f5a0411b97ca1e3ac924b9ae4 | diff --git a/syft/frameworks/torch/tensors/interpreters/additive_shared.py b/syft/frameworks/torch/tensors/interpreters/additive_shared.py
--- a/syft/frameworks/torch/tensors/interpreters/additive_shared.py
+++ b/syft/frameworks/torch/tensors/interpreters/additive_shared.py
@@ -325,6 +325,9 @@ def add(self, shares: dict, other):
- a torch tensor
- a constant
"""
+ if isinstance(other, int):
+ other = torch.LongTensor([other])
+
if isinstance(other, (torch.LongTensor, torch.IntTensor)):
# if someone passes a torch tensor, we share it and keep the dict
other = other.share(
@@ -356,10 +359,8 @@ def add(self, shares: dict, other):
return new_shares
- def __add__(self, other, **kwargs):
- """Adds two tensors. Forwards command to add. See add() for more details."""
-
- return self.add(other, **kwargs)
+ __add__ = add
+ __radd__ = add
@overloaded.method
def sub(self, shares: dict, other):
@@ -374,6 +375,9 @@ def sub(self, shares: dict, other):
- a constant
"""
+ if isinstance(other, int):
+ other = torch.LongTensor([other])
+
if isinstance(other, (torch.LongTensor, torch.IntTensor)):
# if someone passes a torch tensor, we share it and keep the dict
other = other.share(
@@ -405,9 +409,10 @@ def sub(self, shares: dict, other):
return new_shares
- def __sub__(self, *args, **kwargs):
- """Subtracts two tensors. Forwards command to sub. See .sub() for details."""
- return self.sub(*args, **kwargs)
+ __sub__ = sub
+
+ def __rsub__(self, other):
+ return (self - other) * -1
def _private_mul(self, other, equation: str):
"""Abstractly Multiplies two tensors
diff --git a/syft/frameworks/torch/tensors/interpreters/precision.py b/syft/frameworks/torch/tensors/interpreters/precision.py
--- a/syft/frameworks/torch/tensors/interpreters/precision.py
+++ b/syft/frameworks/torch/tensors/interpreters/precision.py
@@ -1,12 +1,12 @@
import torch
import syft
-from syft.workers.abstract import AbstractWorker
+from syft.frameworks.torch.tensors.interpreters.additive_shared import AdditiveSharingTensor
from syft.generic.frameworks.hook import hook_args
+from syft.generic.frameworks.overload import overloaded
from syft.generic.pointers.multi_pointer import MultiPointerTensor
from syft.generic.tensor import AbstractTensor
-from syft.frameworks.torch.tensors.interpreters.additive_shared import AdditiveSharingTensor
-from syft.generic.frameworks.overload import overloaded
+from syft.workers.abstract import AbstractWorker
class FixedPrecisionTensor(AbstractTensor):
@@ -142,8 +142,8 @@ def truncate(self, precision_fractional, check_sign=True):
def add(self, _self, other):
"""Add two fixed precision tensors together.
"""
- if isinstance(other, int):
- scaled_int = other * self.base ** self.precision_fractional
+ if isinstance(other, (int, float)):
+ scaled_int = int(other * self.base ** self.precision_fractional)
return getattr(_self, "add")(scaled_int)
if isinstance(_self, AdditiveSharingTensor) and isinstance(other, torch.Tensor):
@@ -161,6 +161,7 @@ def add(self, _self, other):
return response
__add__ = add
+ __radd__ = add
def add_(self, value_or_tensor, tensor=None):
if tensor is None:
@@ -182,8 +183,8 @@ def __iadd__(self, other):
def sub(self, _self, other):
"""Subtracts a fixed precision tensor from another one.
"""
- if isinstance(other, int):
- scaled_int = other * self.base ** self.precision_fractional
+ if isinstance(other, (int, float)):
+ scaled_int = int(other * self.base ** self.precision_fractional)
return getattr(_self, "sub")(scaled_int)
if isinstance(_self, AdditiveSharingTensor) and isinstance(other, torch.Tensor):
@@ -202,6 +203,9 @@ def sub(self, _self, other):
__sub__ = sub
+ def __rsub__(self, other):
+ return (self - other) * -1
+
def sub_(self, value_or_tensor, tensor=None):
if tensor is None:
result = self.sub(value_or_tensor)
@@ -225,7 +229,7 @@ def t(self, _self, *args, **kwargs):
def mul_and_div(self, other, cmd):
"""
- Hook manually mul and div to add the trucation/rescaling part
+ Hook manually mul and div to add the truncation/rescaling part
which is inherent to these operations in the fixed precision setting
"""
changed_sign = False
@@ -238,6 +242,10 @@ def mul_and_div(self, other, cmd):
if isinstance(other, (int, torch.Tensor, AdditiveSharingTensor)):
new_self = self.child
new_other = other
+ elif isinstance(other, float):
+ raise NotImplementedError(
+ "Can't multiply or divide a FixedPrecisionTensor with a float value"
+ )
elif isinstance(self.child, (AdditiveSharingTensor, MultiPointerTensor)) and isinstance(
other.child, torch.Tensor
@@ -374,14 +382,17 @@ def pow(self, power):
This uses the following trick:
- Divide power by 2 and multiply base to itself (if the power is even)
- Decrement power by 1 to make it even and then follow the first step
+
+ Args:
+ power (int): the exponent supposed to be an integer > 0
"""
base = self
- result = 1
+ result = None
while power > 0:
# If power is odd
if power % 2 == 1:
- result = result * base
+ result = result * base if result is not None else base
# Divide the power by 2
power = power // 2
@@ -443,6 +454,109 @@ def matmul(self, *args, **kwargs):
__matmul__ = matmul
mm = matmul
+ # Approximations:
+ def inverse(self, iterations=8):
+ """
+ Computes an approximation of the matrix inversion using Newton-Schulz
+ iterations
+ """
+ # TODO: should we add non-approximate version if self.child is a pure tensor?
+
+ assert len(self.shape) >= 2, "Can't compute inverse on non-matrix"
+ assert self.shape[-1] == self.shape[-2], "Must be batches of square matrices"
+
+ inverse = (0.1 * torch.eye(self.shape[-1])).fix_prec(**self.get_class_attributes()).child
+
+ for _ in range(iterations):
+ inverse = 2 * inverse - inverse @ self @ inverse
+
+ return inverse
+
+ def exp(self, iterations=8):
+ """
+ Approximates the exponential function using a limit approximation:
+ exp(x) = \lim_{n -> infty} (1 + x / n) ^ n
+
+ Here we compute exp by choosing n = 2 ** d for some large d equal to
+ iterations. We then compute (1 + x / n) once and square `d` times.
+
+ Args:
+ iterations (int): number of iterations for limit approximation
+
+ Ref: https://github.com/LaRiffle/approximate-models
+ """
+ return (1 + self / 2 ** iterations) ** (2 ** iterations)
+
+ def sigmoid(self, method="exp"):
+ """
+ Approximates the sigmoid function
+
+ Args:
+ self: the fixed precision tensor
+ method (str): (default = "exp")
+ "exp": Use the exponential approximation and the sigmoid definition
+ sigmoid(x) = 1 / (1 + exp(-x))
+ "maclaurin": Use the Maclaurin / Taylor approximation, with polynomial
+ interpolation of degree 5 over [-8,8]
+ NOTE: This method is faster but not as precise as "exp"
+ Ref: https://mortendahl.github.io/2017/04/17/private-deep-learning-with-mpc/#approximating-sigmoid
+ """
+
+ if method == "exp":
+ # Inverse can only be used on matrices
+ if len(self.shape) == 1:
+ one = self * 0 + 1
+ result = one / (1 + (self * -1).exp())
+ else:
+ result = (1 + (self * -1).exp()).inverse()
+
+ elif method == "maclaurin":
+ weights = (
+ torch.tensor([0.5, 1.91204779e-01, -4.58667307e-03, 4.20690803e-05])
+ .fix_precision(**self.get_class_attributes())
+ .child
+ )
+ degrees = [0, 1, 3, 5]
+
+ # initiate with term of degree 0 to avoid errors with tensor ** 0
+ one = self * 0 + 1
+ result = one * weights[0]
+ for i, d in enumerate(degrees[1:]):
+ result += (self ** d) * weights[i + 1]
+
+ return result
+
+ def log(self, iterations=2, exp_iterations=8):
+ """Approximates the natural logarithm using 8th order modified Householder iterations.
+ Recall that Householder method is an algorithm to solve a non linear equation f(x) = 0.
+ Here f: x -> 1 - C * exp(-x) with C = self
+
+ Iterations are computed by:
+ y_0 = some constant
+ h = 1 - self * exp(-y_n)
+ y_{n+1} = y_n - h * (1 + h / 2 + h^2 / 3 + h^3 / 6 + h^4 / 5 + h^5 / 7)
+
+ Args:
+ iterations (int): number of iterations for 6th order modified
+ Householder approximation.
+ exp_iterations (int): number of iterations for limit approximation of exp
+
+ Ref: https://github.com/LaRiffle/approximate-models
+ """
+
+ y = self / 31 + 1.59 - 20 * (-2 * self - 1.4).exp(iterations=exp_iterations)
+
+ # 6th order Householder iterations
+ for i in range(iterations):
+ h = [1 - self * (-y).refresh().exp(iterations=exp_iterations)]
+ for i in range(1, 5):
+ h.append(h[-1] * h[0])
+
+ y -= h[0] * (1 + h[0] / 2 + h[1] / 3 + h[2] / 4 + h[3] / 5 + h[4] / 6)
+
+ return y
+
+ # Binary ops
@overloaded.method
def __gt__(self, _self, other):
result = _self.__gt__(other)
@@ -506,28 +620,26 @@ def addmm(bias, input_tensor, weight):
module.addmm = addmm
- def sigmoid(tensor):
- """
- Overloads torch.sigmoid to be able to use MPC
- Approximation with polynomial interpolation of degree 5 over [-8,8]
- Ref: https://mortendahl.github.io/2017/04/17/private-deep-learning-with-mpc/#approximating-sigmoid
- """
+ def inverse(self):
+ return self.inverse()
- weights = [0.5, 1.91204779e-01, -4.58667307e-03, 4.20690803e-05]
- degrees = [0, 1, 3, 5]
+ module.inverse = inverse
- max_degree = degrees[-1]
- max_idx = degrees.index(max_degree)
+ def exp(tensor):
+ return tensor.exp()
- # initiate with term of degree 0 to avoid errors with tensor ** 0
- result = (tensor * 0 + 1) * torch.tensor(weights[0]).fix_precision().child
- for w, d in zip(weights[1:max_idx], degrees[1:max_idx]):
- result += (tensor ** d) * torch.tensor(w).fix_precision().child
+ module.exp = exp
- return result
+ def sigmoid(tensor):
+ return tensor.sigmoid()
module.sigmoid = sigmoid
+ def log(tensor):
+ return tensor.log()
+
+ module.log = log
+
def tanh(tensor):
"""
Overloads torch.tanh to be able to use MPC
| diff --git a/test/efficiency_tests/assertions.py b/test/efficiency_tests/assertions.py
--- a/test/efficiency_tests/assertions.py
+++ b/test/efficiency_tests/assertions.py
@@ -16,7 +16,7 @@ def wrapper(*args, **kwargs):
t0 = time.time()
func(*args, **kwargs)
dt = time.time() - t0
- assert dt < max_time
+ assert dt < max_time, f"Test run in {round(dt, 2)} > {round(max_time, 2)} s"
return wrapper
diff --git a/test/efficiency_tests/test_activations_time.py b/test/efficiency_tests/test_activations_time.py
--- a/test/efficiency_tests/test_activations_time.py
+++ b/test/efficiency_tests/test_activations_time.py
@@ -5,7 +5,7 @@
@pytest.mark.parametrize("activation", ["tanh", "sigmoid"])
-@assert_time(max_time=1)
+@assert_time(max_time=10)
def test_activation(activation, hook, workers):
activation_func = torch.tanh if activation == "tanh" else torch.sigmoid
diff --git a/test/torch/tensors/test_additive_shared.py b/test/torch/tensors/test_additive_shared.py
--- a/test/torch/tensors/test_additive_shared.py
+++ b/test/torch/tensors/test_additive_shared.py
@@ -169,6 +169,28 @@ def test_add(workers):
assert (z == (t + t)).all()
+ # with constant integer
+ t = torch.tensor([1.0, -2.0, 3.0])
+ x = t.fix_prec().share(alice, bob, crypto_provider=james)
+ c = 4
+
+ z = (x + c).get().float_prec()
+ assert (z == (t + c)).all()
+
+ z = (c + x).get().float_prec()
+ assert (z == (c + t)).all()
+
+ # with constant float
+ t = torch.tensor([1.0, -2.0, 3.0])
+ x = t.fix_prec().share(alice, bob, crypto_provider=james)
+ c = 4.2
+
+ z = (x + c).get().float_prec()
+ assert ((z - (t + c)) < 10e-3).all()
+
+ z = (c + x).get().float_prec()
+ assert ((z - (c + t)) < 10e-3).all()
+
def test_sub(workers):
bob, alice, james = (workers["bob"], workers["alice"], workers["james"])
@@ -211,6 +233,28 @@ def test_sub(workers):
assert (z == (u - t)).all()
+ # with constant integer
+ t = torch.tensor([1.0, -2.0, 3.0])
+ x = t.fix_prec().share(alice, bob, crypto_provider=james)
+ c = 4
+
+ z = (x - c).get().float_prec()
+ assert (z == (t - c)).all()
+
+ z = (c - x).get().float_prec()
+ assert (z == (c - t)).all()
+
+ # with constant float
+ t = torch.tensor([1.0, -2.0, 3.0])
+ x = t.fix_prec().share(alice, bob, crypto_provider=james)
+ c = 4.2
+
+ z = (x - c).get().float_prec()
+ assert ((z - (t - c)) < 10e-3).all()
+
+ z = (c - x).get().float_prec()
+ assert ((z - (c - t)) < 10e-3).all()
+
def test_mul(workers):
torch.manual_seed(121) # Truncation might not always work so we set the random seed
diff --git a/test/torch/tensors/test_precision.py b/test/torch/tensors/test_precision.py
--- a/test/torch/tensors/test_precision.py
+++ b/test/torch/tensors/test_precision.py
@@ -3,6 +3,7 @@
import torch.nn as nn
import torch.nn.functional as F
+from test.efficiency_tests.assertions import assert_time
from syft.frameworks.torch.tensors.interpreters.precision import FixedPrecisionTensor
@@ -59,19 +60,6 @@ def test_fix_prec_inplace_registration(hook):
assert hook.local_worker.get_obj(x.id) == torch.tensor([1.0]).fix_precision()
-def test_add_method():
-
- t = torch.tensor([0.1, 0.2, 0.3])
- x = t.fix_prec()
-
- y = x + x
-
- assert (y.child.child == torch.LongTensor([200, 400, 600])).all()
- y = y.float_prec()
-
- assert (y == t + t).all()
-
-
@pytest.mark.parametrize("method", ["t", "matmul"])
@pytest.mark.parametrize("parameter", [False, True])
def test_methods_for_linear_module(method, parameter):
@@ -96,6 +84,17 @@ def test_methods_for_linear_module(method, parameter):
def test_torch_add(workers):
bob, alice, james = (workers["bob"], workers["alice"], workers["james"])
+ # Method syntax
+ x = torch.tensor([0.1, 0.2, 0.3]).fix_prec()
+
+ y = x + x
+
+ assert (y.child.child == torch.LongTensor([200, 400, 600])).all()
+ y = y.float_prec()
+
+ assert (y == torch.tensor([0.2, 0.4, 0.6])).all()
+
+ # Function syntax
x = torch.tensor([0.1, 0.2, 0.3]).fix_prec()
y = torch.add(x, x)
@@ -120,15 +119,39 @@ def test_torch_add(workers):
assert (y == torch.tensor([40.0, -20.0, 20.0])).all()
- # with AST
+ # with AdditiveSharingTensor
t = torch.tensor([1.0, -2.0, 3.0])
x = t.fix_prec()
y = t.fix_prec().share(bob, alice, crypto_provider=james)
- z = torch.add(y, x).get().float_prec()
+ z = torch.add(x, y).get().float_prec()
+ assert (z == torch.add(t, t)).all()
+ z = torch.add(y, x).get().float_prec()
assert (z == torch.add(t, t)).all()
+ # with constant integer
+ t = torch.tensor([1.0, -2.0, 3.0])
+ x = t.fix_prec()
+ c = 4
+
+ z = (x + c).float_prec()
+ assert (z == (t + c)).all()
+
+ z = (c + x).float_prec()
+ assert (z == (c + t)).all()
+
+ # with constant float
+ t = torch.tensor([1.0, -2.0, 3.0])
+ x = t.fix_prec()
+ c = 4.2
+
+ z = (x + c).float_prec()
+ assert ((z - (t + c)) < 10e-3).all()
+
+ z = (c + x).float_prec()
+ assert ((z - (c + t)) < 10e-3).all()
+
def test_torch_add_():
x = torch.tensor([0.1, 0.2, 0.3]).fix_prec()
@@ -151,27 +174,6 @@ def test_torch_add_():
assert (y == torch.tensor([0.15, 0.3, 0.45])).all()
-def test_torch_sub_():
- x = torch.tensor([0.1, 0.2, 0.3]).fix_prec()
-
- y = x.sub_(x)
-
- assert (y.child.child == torch.LongTensor([0, 0, 0])).all()
- y = y.float_prec()
-
- assert (y == torch.tensor([0, 0, 0.0])).all()
-
- x = torch.tensor([0.1, 0.2, 0.3]).fix_prec()
- lr = torch.tensor(0.5).fix_prec()
-
- y = x.sub_(lr, x)
-
- assert (y.child.child == torch.LongTensor([50, 100, 150])).all()
- y = y.float_prec()
-
- assert (y == torch.tensor([0.05, 0.1, 0.15])).all()
-
-
def test_torch_sub(workers):
bob, alice, james = (workers["bob"], workers["alice"], workers["james"])
@@ -185,7 +187,7 @@ def test_torch_sub(workers):
assert (z == torch.tensor([0.4, 0.6, 1.0])).all()
- # with AST
+ # with AdditiveSharingTensor
tx = torch.tensor([1.0, -2.0, 3.0])
ty = torch.tensor([0.1, 0.2, 0.3])
x = tx.fix_prec()
@@ -197,6 +199,49 @@ def test_torch_sub(workers):
assert (z1 == torch.sub(ty, tx)).all()
assert (z2 == torch.sub(tx, ty)).all()
+ # with constant integer
+ t = torch.tensor([1.0, -2.0, 3.0])
+ x = t.fix_prec()
+ c = 4
+
+ z = (x - c).float_prec()
+ assert (z == (t - c)).all()
+
+ z = (c - x).float_prec()
+ assert (z == (c - t)).all()
+
+ # with constant float
+ t = torch.tensor([1.0, -2.0, 3.0])
+ x = t.fix_prec()
+ c = 4.2
+
+ z = (x - c).float_prec()
+ assert ((z - (t - c)) < 10e-3).all()
+
+ z = (c - x).float_prec()
+ assert ((z - (c - t)) < 10e-3).all()
+
+
+def test_torch_sub_():
+ x = torch.tensor([0.1, 0.2, 0.3]).fix_prec()
+
+ y = x.sub_(x)
+
+ assert (y.child.child == torch.LongTensor([0, 0, 0])).all()
+ y = y.float_prec()
+
+ assert (y == torch.tensor([0, 0, 0.0])).all()
+
+ x = torch.tensor([0.1, 0.2, 0.3]).fix_prec()
+ lr = torch.tensor(0.5).fix_prec()
+
+ y = x.sub_(lr, x)
+
+ assert (y.child.child == torch.LongTensor([50, 100, 150])).all()
+ y = y.float_prec()
+
+ assert (y == torch.tensor([0.05, 0.1, 0.15])).all()
+
def test_torch_mul(workers):
bob, alice, james = (workers["bob"], workers["alice"], workers["james"])
@@ -343,6 +388,117 @@ def test_torch_dot(workers):
assert torch.dot(x, y).float_prec() == 45
+@assert_time(max_time=6)
+def test_torch_inverse_approx(workers):
+ """
+ Test the approximate inverse with different tolerance depending on
+ the precision_fractional considered
+ """
+ alice, bob, james = workers["alice"], workers["bob"], workers["james"]
+
+ fix_prec_tolerance = {3: 1 / 100, 4: 1 / 100, 5: 1 / 100}
+
+ for prec_frac, tolerance in fix_prec_tolerance.items():
+ for t in [
+ torch.tensor([[0.4, -0.1], [-0.4, 2.0]]),
+ torch.tensor([[1, -0.6], [0.4, 4.0]]),
+ torch.tensor([[1, 0.2], [0.4, 4.0]]),
+ ]:
+ t_sh = t.fix_precision(precision_fractional=prec_frac).share(
+ alice, bob, crypto_provider=james
+ )
+ r_sh = t_sh.inverse()
+ r = r_sh.get().float_prec()
+ t = t.inverse()
+ diff = (r - t).abs().max()
+ norm = (r + t).abs().max() / 2
+
+ assert (diff / (tolerance * norm)) < 1
+
+
+@assert_time(max_time=10)
+def test_torch_exp_approx(workers):
+ """
+ Test the approximate exponential with different tolerance depending on
+ the precision_fractional considered
+ """
+ alice, bob, james = workers["alice"], workers["bob"], workers["james"]
+
+ fix_prec_tolerance = {3: 20 / 100, 4: 5 / 100, 5: 5 / 100}
+
+ for prec_frac, tolerance in fix_prec_tolerance.items():
+ cumsum = torch.zeros(5)
+ for i in range(10):
+ t = torch.tensor([0.0, 1, 2, 3, 4])
+ t_sh = t.fix_precision(precision_fractional=prec_frac).share(
+ alice, bob, crypto_provider=james
+ )
+ r_sh = t_sh.exp()
+ r = r_sh.get().float_prec()
+ t = t.exp()
+ diff = (r - t).abs()
+ norm = (r + t) / 2
+ cumsum += diff / (tolerance * norm)
+
+ cumsum /= 10
+ assert (cumsum < 1).all()
+
+
+@assert_time(max_time=40)
+def test_torch_sigmoid_approx(workers):
+ """
+ Test the approximate sigmoid with different tolerance depending on
+ the precision_fractional considered
+ """
+ alice, bob, james = workers["alice"], workers["bob"], workers["james"]
+
+ fix_prec_tolerance_by_method = {
+ "exp": {3: 5 / 100, 4: 1 / 100, 5: 1 / 100},
+ "maclaurin": {3: 7 / 100, 4: 15 / 100, 5: 15 / 100},
+ }
+
+ for method, fix_prec_tolerance in fix_prec_tolerance_by_method.items():
+ for prec_frac, tolerance in fix_prec_tolerance.items():
+ t = torch.tensor(range(-10, 10)) * 0.5
+ t_sh = t.fix_precision(precision_fractional=prec_frac).share(
+ alice, bob, crypto_provider=james
+ )
+ r_sh = t_sh.sigmoid(method=method)
+ r = r_sh.get().float_prec()
+ t = t.sigmoid()
+ diff = (r - t).abs().max()
+ norm = (r + t).abs().max() / 2
+
+ assert (diff / (tolerance * norm)) < 1
+
+
+@assert_time(max_time=45)
+def test_torch_log_approx(workers):
+ """
+ Test the approximate logarithm with different tolerance depending on
+ the precision_fractional considered
+ """
+ alice, bob, james = workers["alice"], workers["bob"], workers["james"]
+ fix_prec_tolerance = {3: 100 / 100, 4: 3 / 100, 5: 2 / 100}
+
+ for prec_frac, tolerance in fix_prec_tolerance.items():
+ cumsum = torch.zeros(9)
+ for i in range(10):
+ t = torch.tensor([0.1, 0.5, 2, 5, 10, 20, 50, 100, 250])
+ t_sh = t.fix_precision(precision_fractional=prec_frac).share(
+ alice, bob, crypto_provider=james
+ )
+ r_sh = t_sh.log()
+ r = r_sh.get().float_prec()
+ t = t.log()
+ diff = (r - t).abs()
+ norm = (r + t) / 2
+ cumsum += diff / (tolerance * norm)
+
+ cumsum /= 10
+ assert (cumsum.abs() < 1).all()
+
+
def test_torch_conv2d(workers):
bob, alice, james = (workers["bob"], workers["alice"], workers["james"])
im = torch.Tensor(
| Exponential implementation in SMPC
**Suggestion to implement exp in SMPC**
The benefit of this is to allow computation of functions that are derived from exponentials, such as sigmoid.
Let's say you split the tensor x into shares x0 and x1. You want to compute exp(x).
But exp(x) = exp(x0 + x1) = exp(x0) * exp(x1).
So you can compute exp(x0) and exp(x1) locally, **additively share** these values, perform an encrypted mul, and then you obtain shares of exp(x0) * exp(x1) = exp(x).
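A plaintext sanity check of the identity (over the reals; real additive shares live in a finite field, where the wrap-around complicates things, as the discussion below shows):
```python
import torch

x = torch.tensor([0.5, -1.2])
x0 = torch.rand_like(x)  # one pretend share
x1 = x - x0              # the other share, so that x0 + x1 == x

# exp(x0 + x1) == exp(x0) * exp(x1)
assert torch.allclose(torch.exp(x), torch.exp(x0) * torch.exp(x1))
```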
| Guess I will take this up :)
@LaRiffle please notice that when doing operations in SMPC we are working with fixed_precision tensors. Exponential is not defined for fixed_precision tensors; we would need a polynomial approximation or other numerical methods that require several computations (including division, which is really slow right now)...
Why the naive local-exp approach won't work (each share is a huge random field element, so exp overflows):
```python
x = torch.tensor([[3, 4]])
x_sh = x.share(alice, bob, crypto_provider=crypto_provider)
x0, x1 = x_sh.child.child['alice'], x_sh.child.child['bob']
x0 = x0.float()
x1 = x1.float()
exp_x0, exp_x1 = [torch.exp(x0), torch.exp(x1)]
print(alice._objects[exp_x0.id_at_location], bob._objects[exp_x1.id_at_location])
```
output
```python
(tensor([[inf, inf]]), tensor([[inf, inf]]))
```
Computers are so weak.
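(A quick local illustration of the saturation, assuming shares roughly uniform over a 64-bit-sized field:)
```python
import torch
torch.exp(torch.tensor(1.0e18))  # tensor(inf): far beyond floating-point range
```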
Ok so let's not give up here!
## Claim 1
> If I can compute privately exp of a bit then I'm good for any exp(integer)
Idea:
```python
e = torch.exp(torch.tensor(1.))
# this...
e**5
# can be written
e**(1*2**2 + 0*2**1 + 1*2**0)
# or also
(e**1)**(2**2) * (e**0)**(2**1) * (e**1)**(2**0)
```
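A local (non-private) sketch of the square-and-multiply loop that Claim 1 implies; the `pow` implementation in the merged patch uses essentially the same loop:
```python
import torch

def exp_int(x, n):
    """Compute x ** n from the bits of n using only multiplications,
    which is what makes the bit-decomposition idea workable under MPC."""
    result = None
    base = x
    while n > 0:
        if n % 2 == 1:                      # current bit of n is 1
            result = base if result is None else result * base
        base = base * base                  # square for the next bit
        n //= 2
    return result

e = torch.exp(torch.tensor(1.0))
assert torch.allclose(exp_int(e, 5), e ** 5)
```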
## Claim 2
> I can compute privately exp of a single bit
```python
# Take a bit (0 or 1, here 0 for example) and share it in a *binary* field
x = torch.tensor([0])
x_sh = x.share(alice, bob, crypto_provider=crypto_provider, field=2)
# Access shares
x0, x1 = x_sh.child.child['alice'], x_sh.child.child['bob']
x0 = x0.float()
x1 = x1.float()
print(alice._objects[x0.id_at_location], bob._objects[x1.id_at_location])
# Compute privately the wrap field bit, which decrypts to 1 iff x0+x1 >= 2
x0_sh = x0.fix_precision().share(alice, bob, crypto_provider=charlie).get()
x1_sh = x1.fix_precision().share(alice, bob, crypto_provider=charlie).get()
wrap_field = x0_sh * x1_sh
# Compute exp of shares
exp_x0, exp_x1 = [torch.exp(x0), torch.exp(x1)]
alice._objects[exp_x0.id_at_location], bob._objects[exp_x1.id_at_location]
# Share the exp of shares
exp_x0_sh = exp_x0.fix_precision().share(alice, bob, crypto_provider=charlie).get()
exp_x1_sh = exp_x1.fix_precision().share(alice, bob, crypto_provider=charlie).get()
# Apply exp(x0 + x1) = exp(x0) * exp(x1) formula + a wrapping correction if needed
one = torch.tensor([1.]).fix_precision()
inv_exp_field_size = torch.exp(-torch.tensor([2.])).fix_precision()
exp_sh = exp_x0_sh * exp_x1_sh * (wrap_field * (inv_exp_field_size - one) + one)
# Open and get 1.0
exp_sh.get().float_prec()
```
Now, how practical this is is another question...
From my first implementation, the price to pay should be around 1.5s for a float value, which can be amortized when using vectors
@LaRiffle I suggest you take a look at the [SCALE-MAMBA documentation](https://homes.esat.kuleuven.be/~nsmart/SCALE/Documentation.pdf). Note that some of those protocols might be slower versions that are used to support any `n`, and hence there may be better protocols for the two-party case.
@mortendahl we were discussing SCALE-MAMBA last week, @LaRiffle and I were thinking about trying to "hook" it for some MPC operations...
@andrelmfarias @LaRiffle not sure I understand what you mean by hooking here...?
We were thinking that maybe we could "hook" some operations from their github repository: https://github.com/KULeuven-COSIC/SCALE-MAMBA
To use their protocols without needing to implement them ourselves in PySyft.
I don't know if it's clear... | 2019-10-11T16:55:00 |
OpenMined/PySyft | 2,692 | OpenMined__PySyft-2692 | [
"2689"
] | b0b8a13524150002524c15c6c283b4b14de62d40 | diff --git a/syft/workers/base.py b/syft/workers/base.py
--- a/syft/workers/base.py
+++ b/syft/workers/base.py
@@ -1,4 +1,6 @@
from abc import abstractmethod
+from contextlib import contextmanager
+
import logging
from typing import Callable
from typing import List
@@ -202,6 +204,14 @@ def _recv_msg(self, message: bin):
"""
raise NotImplementedError # pragma: no cover
+ @contextmanager
+ def registration_enabled(self):
+ self.is_client_worker = False
+ try:
+ yield self
+ finally:
+ self.is_client_worker = True
+
def remove_worker_from_registry(self, worker_id):
"""Removes a worker from the dictionary of known workers.
Args:
| diff --git a/test/message/test_plan.py b/test/message/test_plan.py
--- a/test/message/test_plan.py
+++ b/test/message/test_plan.py
@@ -27,24 +27,22 @@ def plan_abs(data):
def test_stateful_plan_built_automatically(hook):
- hook.local_worker.is_client_worker = False
+ with hook.local_worker.registration_enabled():
- @sy.func2plan(args_shape=[(1,)], state=(th.tensor([1.0]),))
- def foo(x, state):
- bias, = state.read()
- x = x * 2
- return x + bias
-
- assert isinstance(foo.__str__(), str)
- assert len(foo.readable_plan) > 0
- assert foo.is_built
+ @sy.func2plan(args_shape=[(1,)], state=(th.tensor([1.0]),))
+ def foo(x, state):
+ bias, = state.read()
+ x = x * 2
+ return x + bias
- t = th.tensor([1.0, 2])
- x = foo(t)
+ assert isinstance(foo.__str__(), str)
+ assert len(foo.readable_plan) > 0
+ assert foo.is_built
- assert (x == th.tensor([3.0, 5])).all()
+ t = th.tensor([1.0, 2])
+ x = foo(t)
- hook.local_worker.is_client_worker = True
+ assert (x == th.tensor([3.0, 5])).all()
def test_plan_build():
@@ -62,20 +60,18 @@ def plan_abs(data):
def test_stateful_plan_build(hook):
- hook.local_worker.is_client_worker = False
+ with hook.local_worker.registration_enabled():
- @sy.func2plan(state=(th.tensor([1.0]),))
- def foo(x, state):
- bias, = state.read()
- x = x * 2
- return x + bias
-
- t = th.tensor([1.0, 2])
- x = foo(t)
+ @sy.func2plan(state=(th.tensor([1.0]),))
+ def foo(x, state):
+ bias, = state.read()
+ x = x * 2
+ return x + bias
- assert (x == th.tensor([3.0, 5])).all()
+ t = th.tensor([1.0, 2])
+ x = foo(t)
- hook.local_worker.is_client_worker = True
+ assert (x == th.tensor([3.0, 5])).all()
def test_plan_built_automatically_with_any_dimension():
@@ -136,59 +132,55 @@ def forward(self, x):
def test_plan_method_execute_locally(hook):
- hook.local_worker.is_client_worker = False
-
- class Net(sy.Plan):
- def __init__(self):
- super(Net, self).__init__()
- self.fc1 = nn.Linear(2, 3)
- self.fc2 = nn.Linear(3, 2)
- self.fc3 = nn.Linear(2, 1)
+ with hook.local_worker.registration_enabled():
- def forward(self, x):
- x = F.relu(self.fc1(x))
- x = self.fc2(x)
- x = self.fc3(x)
- return F.log_softmax(x, dim=0)
+ class Net(sy.Plan):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.fc1 = nn.Linear(2, 3)
+ self.fc2 = nn.Linear(3, 2)
+ self.fc3 = nn.Linear(2, 1)
- model = Net()
+ def forward(self, x):
+ x = F.relu(self.fc1(x))
+ x = self.fc2(x)
+ x = self.fc3(x)
+ return F.log_softmax(x, dim=0)
- model.build(th.tensor([1.0, 2]))
+ model = Net()
- # Call one time
- assert model(th.tensor([1.0, 2])) == 0
+ model.build(th.tensor([1.0, 2]))
- # Call one more time
- assert model(th.tensor([1.0, 2.1])) == 0
+ # Call one time
+ assert model(th.tensor([1.0, 2])) == 0
- hook.local_worker.is_client_worker = True
+ # Call one more time
+ assert model(th.tensor([1.0, 2.1])) == 0
def test_stateful_plan_method_execute_locally(hook):
- hook.local_worker.is_client_worker = False
+ with hook.local_worker.registration_enabled():
- class Net(sy.Plan):
- def __init__(self):
- super(Net, self).__init__()
- self.fc1 = nn.Linear(2, 1)
- self.bias = th.tensor([1000.0])
-
- def forward(self, x):
- x = self.fc1(x)
- return F.log_softmax(x, dim=0) + self.bias
+ class Net(sy.Plan):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.fc1 = nn.Linear(2, 1)
+ self.bias = th.tensor([1000.0])
- model = Net()
+ def forward(self, x):
+ x = self.fc1(x)
+ return F.log_softmax(x, dim=0) + self.bias
- model.build(th.tensor([1.0, 2]))
+ model = Net()
- # Call one time
- assert model(th.tensor([1.0, 2])) == th.tensor([1000.0])
+ model.build(th.tensor([1.0, 2]))
- # Call one more time
- assert model(th.tensor([1.0, 2.1])) == th.tensor([1000.0])
+ # Call one time
+ assert model(th.tensor([1.0, 2])) == th.tensor([1000.0])
- hook.local_worker.is_client_worker = True
+ # Call one more time
+ assert model(th.tensor([1.0, 2.1])) == th.tensor([1000.0])
def test_plan_multiple_send(workers):
@@ -217,30 +209,28 @@ def plan_abs(data):
def test_stateful_plan_multiple_send(hook, workers):
me, bob, alice = workers["me"], workers["bob"], workers["alice"]
- hook.local_worker.is_client_worker = False
+ with hook.local_worker.registration_enabled():
- @sy.func2plan(args_shape=[(1,)], state=(th.tensor([1.0]),))
- def plan_abs(x, state):
- bias, = state.read()
- x = x.abs()
- return x + bias
-
- plan_ptr = plan_abs.send(bob)
- x_ptr = th.tensor([-1.0, 7, 3]).send(bob)
- p = plan_ptr(x_ptr)
- res = p.get()
+ @sy.func2plan(args_shape=[(1,)], state=(th.tensor([1.0]),))
+ def plan_abs(x, state):
+ bias, = state.read()
+ x = x.abs()
+ return x + bias
- assert (res == th.tensor([2.0, 8, 4])).all()
+ plan_ptr = plan_abs.send(bob)
+ x_ptr = th.tensor([-1.0, 7, 3]).send(bob)
+ p = plan_ptr(x_ptr)
+ res = p.get()
- # Test get / send plan
- plan_ptr = plan_abs.send(alice)
+ assert (res == th.tensor([2.0, 8, 4])).all()
- x_ptr = th.tensor([-1.0, 2, 3]).send(alice)
- p = plan_ptr(x_ptr)
- res = p.get()
- assert (res == th.tensor([2.0, 3, 4])).all()
+ # Test get / send plan
+ plan_ptr = plan_abs.send(alice)
- hook.local_worker.is_client_worker = True
+ x_ptr = th.tensor([-1.0, 2, 3]).send(alice)
+ p = plan_ptr(x_ptr)
+ res = p.get()
+ assert (res == th.tensor([2.0, 3, 4])).all()
def test_plan_built_on_class(hook):
@@ -248,50 +238,48 @@ def test_plan_built_on_class(hook):
Test class Plans and plan send / get / send
"""
- hook.local_worker.is_client_worker = False
+ with hook.local_worker.registration_enabled():
- x11 = th.tensor([-1, 2.0]).tag("input_data")
- x21 = th.tensor([-1, 2.0]).tag("input_data")
+ x11 = th.tensor([-1, 2.0]).tag("input_data")
+ x21 = th.tensor([-1, 2.0]).tag("input_data")
- device_1 = sy.VirtualWorker(hook, id="device_1", data=(x11,))
- device_2 = sy.VirtualWorker(hook, id="device_2", data=(x21,))
+ device_1 = sy.VirtualWorker(hook, id="device_1", data=(x11,))
+ device_2 = sy.VirtualWorker(hook, id="device_2", data=(x21,))
- class Net(sy.Plan):
- def __init__(self):
- super(Net, self).__init__()
- self.fc1 = nn.Linear(2, 3)
- self.fc2 = nn.Linear(3, 1)
-
- self.bias = th.tensor([1000.0])
+ class Net(sy.Plan):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.fc1 = nn.Linear(2, 3)
+ self.fc2 = nn.Linear(3, 1)
- def forward(self, x):
- x = F.relu(self.fc1(x))
- x = self.fc2(x)
- return F.log_softmax(x, dim=0) + self.bias
+ self.bias = th.tensor([1000.0])
- net = Net()
+ def forward(self, x):
+ x = F.relu(self.fc1(x))
+ x = self.fc2(x)
+ return F.log_softmax(x, dim=0) + self.bias
- # build
- net.build(th.tensor([1, 2.0]))
+ net = Net()
- net_ptr = net.send(device_1)
- pointer_to_data = device_1.search("input_data")[0]
- pointer_to_result = net_ptr(pointer_to_data)
+ # build
+ net.build(th.tensor([1, 2.0]))
- result = pointer_to_result.get()
- assert isinstance(result, th.Tensor)
- assert result == th.tensor([1000.0])
+ net_ptr = net.send(device_1)
+ pointer_to_data = device_1.search("input_data")[0]
+ pointer_to_result = net_ptr(pointer_to_data)
- net_ptr = net.send(device_2)
+ result = pointer_to_result.get()
+ assert isinstance(result, th.Tensor)
+ assert result == th.tensor([1000.0])
- pointer_to_data = device_2.search("input_data")[0]
- pointer_to_result = net_ptr(pointer_to_data)
+ net_ptr = net.send(device_2)
- result = pointer_to_result.get()
- assert isinstance(result, th.Tensor)
- assert result == th.tensor([1000.0])
+ pointer_to_data = device_2.search("input_data")[0]
+ pointer_to_result = net_ptr(pointer_to_data)
- hook.local_worker.is_client_worker = True
+ result = pointer_to_result.get()
+ assert isinstance(result, th.Tensor)
+ assert result == th.tensor([1000.0])
def test_multiple_workers(workers):
@@ -316,367 +304,348 @@ def plan_abs(data):
def test_stateful_plan_multiple_workers(hook, workers):
me, bob, alice = workers["me"], workers["bob"], workers["alice"]
- hook.local_worker.is_client_worker = False
-
- @sy.func2plan(args_shape=[(1,)], state=(th.tensor([1]),))
- def plan_abs(x, state):
- bias, = state.read()
- x = x.abs()
- return x + bias
+ with hook.local_worker.registration_enabled():
- plan_ptr = plan_abs.send(bob, alice)
- x_ptr = th.tensor([-1, 7, 3]).send(bob)
- p = plan_ptr(x_ptr)
- x_abs = p.get()
- assert (x_abs == th.tensor([2, 8, 4])).all()
+ @sy.func2plan(args_shape=[(1,)], state=(th.tensor([1]),))
+ def plan_abs(x, state):
+ bias, = state.read()
+ x = x.abs()
+ return x + bias
- x_ptr = th.tensor([-1, 9, 3]).send(alice)
- p = plan_ptr(x_ptr)
- x_abs = p.get()
- assert (x_abs == th.tensor([2, 10, 4])).all()
+ plan_ptr = plan_abs.send(bob, alice)
+ x_ptr = th.tensor([-1, 7, 3]).send(bob)
+ p = plan_ptr(x_ptr)
+ x_abs = p.get()
+ assert (x_abs == th.tensor([2, 8, 4])).all()
- hook.local_worker.is_client_worker = True
+ x_ptr = th.tensor([-1, 9, 3]).send(alice)
+ p = plan_ptr(x_ptr)
+ x_abs = p.get()
+ assert (x_abs == th.tensor([2, 10, 4])).all()
def test_fetch_plan(hook, workers):
alice = workers["alice"]
- hook.local_worker.is_client_worker = False
-
- @sy.func2plan(args_shape=[(1,)])
- def plan(data):
- return data * 3
+ with hook.local_worker.registration_enabled():
- plan.send(alice)
+ @sy.func2plan(args_shape=[(1,)])
+ def plan(data):
+ return data * 3
- # Fetch plan
- fetched_plan = plan.owner.fetch_plan(plan.id, alice)
+ plan.send(alice)
- # Execute it locally
- x = th.tensor([-1.0, 2, 3])
- assert (plan(x) == th.tensor([-3.0, 6, 9])).all()
- assert (fetched_plan(x) == th.tensor([-3.0, 6, 9])).all()
- assert fetched_plan.forward is None
- assert fetched_plan.is_built
+ # Fetch plan
+ fetched_plan = plan.owner.fetch_plan(plan.id, alice)
- hook.local_worker.is_client_worker = True
+ # Execute it locally
+ x = th.tensor([-1.0, 2, 3])
+ assert (plan(x) == th.tensor([-3.0, 6, 9])).all()
+ assert (fetched_plan(x) == th.tensor([-3.0, 6, 9])).all()
+ assert fetched_plan.forward is None
+ assert fetched_plan.is_built
@pytest.mark.parametrize("is_func2plan", [True, False])
def test_fetch_stateful_plan(hook, is_func2plan, workers):
- hook.local_worker.is_client_worker = False
- if is_func2plan:
+ with hook.local_worker.registration_enabled():
+ if is_func2plan:
- @sy.func2plan(args_shape=[(1,)], state=(th.tensor([1.0]),))
- def plan(data, state):
- bias, = state.read()
- return data * bias
+ @sy.func2plan(args_shape=[(1,)], state=(th.tensor([1.0]),))
+ def plan(data, state):
+ bias, = state.read()
+ return data * bias
- else:
+ else:
- class Net(sy.Plan):
- def __init__(self):
- super(Net, self).__init__()
- self.fc1 = nn.Linear(1, 1)
+ class Net(sy.Plan):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.fc1 = nn.Linear(1, 1)
- def forward(self, x):
- return self.fc1(x)
+ def forward(self, x):
+ return self.fc1(x)
- plan = Net()
- plan.build(th.tensor([1.2]))
-
- alice = workers["alice"]
- plan_ptr = plan.send(alice)
+ plan = Net()
+ plan.build(th.tensor([1.2]))
- # Fetch plan
- fetched_plan = plan.owner.fetch_plan(plan_ptr.id_at_location, alice)
+ alice = workers["alice"]
+ plan_ptr = plan.send(alice)
- # Execute it locally
- x = th.tensor([-1.26])
- assert th.all(th.eq(fetched_plan(x), plan(x)))
- # assert fetched_plan.state.state_ids != plan.state.state_ids #TODO
+ # Fetch plan
+ fetched_plan = plan.owner.fetch_plan(plan_ptr.id_at_location, alice)
- # Make sure fetched_plan is using the readable_plan
- assert fetched_plan.forward is None
- assert fetched_plan.is_built
+ # Execute it locally
+ x = th.tensor([-1.26])
+ assert th.all(th.eq(fetched_plan(x), plan(x)))
+ # assert fetched_plan.state.state_ids != plan.state.state_ids #TODO
- # Make sure plan is using the blueprint: forward
- assert plan.forward is not None
+ # Make sure fetched_plan is using the readable_plan
+ assert fetched_plan.forward is None
+ assert fetched_plan.is_built
- hook.local_worker.is_client_worker = True
+ # Make sure plan is using the blueprint: forward
+ assert plan.forward is not None
@pytest.mark.parametrize("is_func2plan", [True, False])
def test_fetch_stateful_plan_remote(hook, is_func2plan, start_remote_worker):
- hook.local_worker.is_client_worker = False
- server, remote_proxy = start_remote_worker(
- id="test_fetch_stateful_plan_remote_{}".format(is_func2plan), hook=hook, port=8802
- )
+ with hook.local_worker.registration_enabled():
+ server, remote_proxy = start_remote_worker(
+ id="test_fetch_stateful_plan_remote_{}".format(is_func2plan), hook=hook, port=8802
+ )
- if is_func2plan:
+ if is_func2plan:
- @sy.func2plan(args_shape=[(1,)], state=(th.tensor([3.0]),))
- def plan(data, state):
- bias, = state.read()
- return data * bias
+ @sy.func2plan(args_shape=[(1,)], state=(th.tensor([3.0]),))
+ def plan(data, state):
+ bias, = state.read()
+ return data * bias
- else:
+ else:
- class Net(sy.Plan):
- def __init__(self):
- super(Net, self).__init__()
- self.fc1 = nn.Linear(1, 1)
+ class Net(sy.Plan):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.fc1 = nn.Linear(1, 1)
- def forward(self, x):
- return self.fc1(x)
-
- plan = Net()
- plan.build(th.tensor([1.2]))
+ def forward(self, x):
+ return self.fc1(x)
- x = th.tensor([-1.26])
- expected = plan(x)
- plan_ptr = plan.send(remote_proxy)
+ plan = Net()
+ plan.build(th.tensor([1.2]))
- # Fetch plan
- fetched_plan = plan.owner.fetch_plan(plan_ptr.id_at_location, remote_proxy)
+ x = th.tensor([-1.26])
+ expected = plan(x)
+ plan_ptr = plan.send(remote_proxy)
- # Execute it locally
- assert th.all(th.eq(fetched_plan(x), expected))
- # assert fetched_plan.state.state_ids != plan.state.state_ids #TODO
+ # Fetch plan
+ fetched_plan = plan.owner.fetch_plan(plan_ptr.id_at_location, remote_proxy)
- # Make sure fetched_plan is using the readable_plan
- assert fetched_plan.forward is None
- assert fetched_plan.is_built
+ # Execute it locally
+ assert th.all(th.eq(fetched_plan(x), expected))
+ # assert fetched_plan.state.state_ids != plan.state.state_ids #TODO
- # Make sure plan is using the blueprint: forward
- assert plan.forward is not None
+ # Make sure fetched_plan is using the readable_plan
+ assert fetched_plan.forward is None
+ assert fetched_plan.is_built
- remote_proxy.close()
- server.terminate()
+ # Make sure plan is using the blueprint: forward
+ assert plan.forward is not None
- hook.local_worker.is_client_worker = True
+ remote_proxy.close()
+ server.terminate()
def test_binding_fix_precision_plan(hook):
"""Here we make sure the attributes of a plan are still bound to state elements when calling fix_precision"""
- hook.local_worker.is_client_worker = False
+ with hook.local_worker.registration_enabled():
- class Net(sy.Plan):
- def __init__(self):
- super(Net, self).__init__()
- self.fc1 = nn.Linear(1, 1)
-
- def forward(self, x):
- return self.fc1(x)
+ class Net(sy.Plan):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.fc1 = nn.Linear(1, 1)
- plan = Net()
- plan.build(th.tensor([1.2]))
- original_weight = plan.fc1.weight.clone()
+ def forward(self, x):
+ return self.fc1(x)
- plan.fix_precision()
- weight_id = plan.fc1.weight.id
- hook.local_worker.get_obj(weight_id).float_prec_()
+ plan = Net()
+ plan.build(th.tensor([1.2]))
+ original_weight = plan.fc1.weight.clone()
- assert (plan.fc1.weight - original_weight) < 10e-2
+ plan.fix_precision()
+ weight_id = plan.fc1.weight.id
+ hook.local_worker.get_obj(weight_id).float_prec_()
- hook.local_worker.is_client_worker = True
+ assert (plan.fc1.weight - original_weight) < 10e-2
def test_binding_encrypted_plan(hook, workers):
"""Here we make sure the attributes of a plan are still bound to state elements when calling fix_prec + share"""
- hook.local_worker.is_client_worker = False
-
- alice, bob, charlie, james = (
- workers["alice"],
- workers["bob"],
- workers["charlie"],
- workers["james"],
- )
+ with hook.local_worker.registration_enabled():
- class Net(sy.Plan):
- def __init__(self):
- super(Net, self).__init__()
- self.fc1 = nn.Linear(1, 1)
+ alice, bob, charlie, james = (
+ workers["alice"],
+ workers["bob"],
+ workers["charlie"],
+ workers["james"],
+ )
- def forward(self, x):
- return self.fc1(x)
+ class Net(sy.Plan):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.fc1 = nn.Linear(1, 1)
- plan = Net()
- plan.build(th.tensor([1.2]))
- original_weight = plan.fc1.weight.clone()
+ def forward(self, x):
+ return self.fc1(x)
- plan.fix_precision().share(alice, bob, crypto_provider=charlie)
- weight_id = plan.fc1.weight.id
- hook.local_worker.get_obj(weight_id).get_().float_prec_()
+ plan = Net()
+ plan.build(th.tensor([1.2]))
+ original_weight = plan.fc1.weight.clone()
- assert (plan.fc1.weight - original_weight) < 10e-2
+ plan.fix_precision().share(alice, bob, crypto_provider=charlie)
+ weight_id = plan.fc1.weight.id
+ hook.local_worker.get_obj(weight_id).get_().float_prec_()
- hook.local_worker.is_client_worker = True
+ assert (plan.fc1.weight - original_weight) < 10e-2
@pytest.mark.parametrize("is_func2plan", [True, False])
def test_fetch_encrypted_stateful_plan(hook, is_func2plan, workers):
# TODO: this test is not working properly with remote workers.
# We need to investigate why this might be the case.
- hook.local_worker.is_client_worker = False
- alice, bob, charlie, james = (
- workers["alice"],
- workers["bob"],
- workers["charlie"],
- workers["james"],
- )
+ with hook.local_worker.registration_enabled():
+ alice, bob, charlie, james = (
+ workers["alice"],
+ workers["bob"],
+ workers["charlie"],
+ workers["james"],
+ )
- if is_func2plan:
+ if is_func2plan:
- @sy.func2plan(args_shape=[(1,)], state=(th.tensor([3.0]),))
- def plan(data, state):
- bias, = state.read()
- return data * bias
+ @sy.func2plan(args_shape=[(1,)], state=(th.tensor([3.0]),))
+ def plan(data, state):
+ bias, = state.read()
+ return data * bias
- else:
+ else:
- class Net(sy.Plan):
- def __init__(self):
- super(Net, self).__init__()
- self.fc1 = nn.Linear(1, 1)
+ class Net(sy.Plan):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.fc1 = nn.Linear(1, 1)
- def forward(self, x):
- return self.fc1(x)
+ def forward(self, x):
+ return self.fc1(x)
- plan = Net()
- plan.build(th.tensor([1.2]))
-
- x = th.tensor([-1.0])
- expected = plan(x)
+ plan = Net()
+ plan.build(th.tensor([1.2]))
- plan.fix_precision().share(alice, bob, crypto_provider=charlie)
- ptr_plan = plan.send(james)
+ x = th.tensor([-1.0])
+ expected = plan(x)
- # Fetch plan
- fetched_plan = plan.owner.fetch_plan(ptr_plan.id_at_location, james)
+ plan.fix_precision().share(alice, bob, crypto_provider=charlie)
+ ptr_plan = plan.send(james)
- # Execute the fetch plan
- x = th.tensor([-1.0])
- x_sh = x.fix_precision().share(alice, bob, crypto_provider=charlie)
- decrypted = fetched_plan(x_sh).get().float_prec()
+ # Fetch plan
+ fetched_plan = plan.owner.fetch_plan(ptr_plan.id_at_location, james)
- # Compare with local plan
- assert th.all(decrypted - expected.detach() < 1e-2)
- # assert fetched_plan.state.state_ids != plan.state.state_ids #TODO
+ # Execute the fetch plan
+ x = th.tensor([-1.0])
+ x_sh = x.fix_precision().share(alice, bob, crypto_provider=charlie)
+ decrypted = fetched_plan(x_sh).get().float_prec()
- # Make sure fetched_plan is using the readable_plan
- assert fetched_plan.forward is None
- assert fetched_plan.is_built
+ # Compare with local plan
+ assert th.all(decrypted - expected.detach() < 1e-2)
+ # assert fetched_plan.state.state_ids != plan.state.state_ids #TODO
- # Make sure plan is using the blueprint: forward
- assert plan.forward is not None
+ # Make sure fetched_plan is using the readable_plan
+ assert fetched_plan.forward is None
+ assert fetched_plan.is_built
- hook.local_worker.is_client_worker = True
+ # Make sure plan is using the blueprint: forward
+ assert plan.forward is not None
@pytest.mark.parametrize("is_func2plan", [True, False])
def test_fecth_plan_multiple_times(hook, is_func2plan, workers):
- hook.local_worker.is_client_worker = False
-
- alice, bob, charlie, james = (
- workers["alice"],
- workers["bob"],
- workers["charlie"],
- workers["james"],
- )
- if is_func2plan:
+ with hook.local_worker.registration_enabled():
+ alice, bob, charlie, james = (
+ workers["alice"],
+ workers["bob"],
+ workers["charlie"],
+ workers["james"],
+ )
- @sy.func2plan(args_shape=[(1,)], state=(th.tensor([3.0]),))
- def plan(data, state):
- bias, = state.read()
- return data * bias
+ if is_func2plan:
- else:
+ @sy.func2plan(args_shape=[(1,)], state=(th.tensor([3.0]),))
+ def plan(data, state):
+ bias, = state.read()
+ return data * bias
- class Net(sy.Plan):
- def __init__(self):
- super(Net, self).__init__()
- self.fc1 = nn.Linear(1, 1)
+ else:
- def forward(self, x):
- return self.fc1(x)
+ class Net(sy.Plan):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.fc1 = nn.Linear(1, 1)
- plan = Net()
- plan.build(th.tensor([1.2]))
+ def forward(self, x):
+ return self.fc1(x)
- plan_pointer = plan.fix_precision().share(alice, bob, crypto_provider=charlie).send(james)
+ plan = Net()
+ plan.build(th.tensor([1.2]))
- # Fetch plan
- fetched_plan = plan_pointer.owner.fetch_plan(plan_pointer.id_at_location, james, copy=True)
+ plan_pointer = plan.fix_precision().share(alice, bob, crypto_provider=charlie).send(james)
- # Execute the fetch plan
- x = th.tensor([-1.0])
- x_sh = x.fix_precision().share(alice, bob, crypto_provider=charlie)
- decrypted1 = fetched_plan(x_sh).get().float_prec()
+ # Fetch plan
+ fetched_plan = plan_pointer.owner.fetch_plan(plan_pointer.id_at_location, james, copy=True)
- # 2. Re-fetch Plan
- fetched_plan = plan_pointer.owner.fetch_plan(plan_pointer.id_at_location, james, copy=True)
+ # Execute the fetch plan
+ x = th.tensor([-1.0])
+ x_sh = x.fix_precision().share(alice, bob, crypto_provider=charlie)
+ decrypted1 = fetched_plan(x_sh).get().float_prec()
- # Execute the fetch plan
- x = th.tensor([-1.0])
- x_sh = x.fix_precision().share(alice, bob, crypto_provider=charlie)
- decrypted2 = fetched_plan(x_sh).get().float_prec()
+ # 2. Re-fetch Plan
+ fetched_plan = plan_pointer.owner.fetch_plan(plan_pointer.id_at_location, james, copy=True)
- assert th.all(decrypted1 - decrypted2 < 1e-2)
+ # Execute the fetch plan
+ x = th.tensor([-1.0])
+ x_sh = x.fix_precision().share(alice, bob, crypto_provider=charlie)
+ decrypted2 = fetched_plan(x_sh).get().float_prec()
- hook.local_worker.is_client_worker = True
+ assert th.all(decrypted1 - decrypted2 < 1e-2)
def test_fetch_plan_remote(hook, start_remote_worker):
- hook.local_worker.is_client_worker = False
-
- server, remote_proxy = start_remote_worker(id="test_fetch_plan_remote", hook=hook, port=8803)
-
- @sy.func2plan(args_shape=[(1,)], state=(th.tensor([1.0]),))
- def plan_mult_3(data, state):
- bias, = state.read()
- return data * 3 + bias
+ with hook.local_worker.registration_enabled():
+ server, remote_proxy = start_remote_worker(
+ id="test_fetch_plan_remote", hook=hook, port=8803
+ )
- plan_mult_3.send(remote_proxy)
+ @sy.func2plan(args_shape=[(1,)], state=(th.tensor([1.0]),))
+ def plan_mult_3(data, state):
+ bias, = state.read()
+ return data * 3 + bias
- # Fetch plan
- fetched_plan = plan_mult_3.owner.fetch_plan(plan_mult_3.id, remote_proxy)
+ plan_mult_3.send(remote_proxy)
- # Execute it locally
- x = th.tensor([-1.0, 2, 3])
- assert (plan_mult_3(x) == th.tensor([-2.0, 7, 10])).all()
- assert (fetched_plan(x) == th.tensor([-2.0, 7, 10])).all()
- assert fetched_plan.forward is None
- assert fetched_plan.is_built
+ # Fetch plan
+ fetched_plan = plan_mult_3.owner.fetch_plan(plan_mult_3.id, remote_proxy)
- remote_proxy.close()
- server.terminate()
+ # Execute it locally
+ x = th.tensor([-1.0, 2, 3])
+ assert (plan_mult_3(x) == th.tensor([-2.0, 7, 10])).all()
+ assert (fetched_plan(x) == th.tensor([-2.0, 7, 10])).all()
+ assert fetched_plan.forward is None
+ assert fetched_plan.is_built
- hook.local_worker.is_client_worker = True
+ remote_proxy.close()
+ server.terminate()
def test_plan_serde(hook):
- hook.local_worker.is_client_worker = False
-
- @sy.func2plan(args_shape=[(1, 3)])
- def my_plan(data):
- x = data * 2
- y = (x - 2) * 10
- return x + y
+ with hook.local_worker.registration_enabled():
- serialized_plan = serialize(my_plan)
- deserialized_plan = deserialize(serialized_plan)
+ @sy.func2plan(args_shape=[(1, 3)])
+ def my_plan(data):
+ x = data * 2
+ y = (x - 2) * 10
+ return x + y
- x = th.tensor([-1, 2, 3])
- assert (deserialized_plan(x) == th.tensor([-42, 24, 46])).all()
+ serialized_plan = serialize(my_plan)
+ deserialized_plan = deserialize(serialized_plan)
- hook.local_worker.is_client_worker = True
+ x = th.tensor([-1, 2, 3])
+ assert (deserialized_plan(x) == th.tensor([-42, 24, 46])).all()
def test_execute_plan_remotely(hook, start_remote_worker):
@@ -711,114 +680,110 @@ def my_plan(data):
def test_execute_plan_module_remotely(hook, start_remote_worker):
"""Test plan execution remotely."""
- hook.local_worker.is_client_worker = False
-
- class Net(sy.Plan):
- def __init__(self):
- super(Net, self).__init__()
- self.fc1 = nn.Linear(2, 3)
- self.fc2 = nn.Linear(3, 2)
+ with hook.local_worker.registration_enabled():
- self.bias = th.tensor([1000.0])
+ class Net(sy.Plan):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.fc1 = nn.Linear(2, 3)
+ self.fc2 = nn.Linear(3, 2)
- def forward(self, x):
- x = F.relu(self.fc1(x))
- x = self.fc2(x)
- return F.log_softmax(x, dim=0) + self.bias
+ self.bias = th.tensor([1000.0])
- net = Net()
+ def forward(self, x):
+ x = F.relu(self.fc1(x))
+ x = self.fc2(x)
+ return F.log_softmax(x, dim=0) + self.bias
- x = th.tensor([-1, 2.0])
- local_res = net(x)
- assert not net.is_built
+ net = Net()
- net.build(x)
+ x = th.tensor([-1, 2.0])
+ local_res = net(x)
+ assert not net.is_built
- server, remote_proxy = start_remote_worker(id="test_plan_worker_2", port=8799, hook=hook)
+ net.build(x)
- plan_ptr = net.send(remote_proxy)
- x_ptr = x.send(remote_proxy)
- ptr = plan_ptr(x_ptr)
- assert isinstance(ptr, FrameworkTensor) and ptr.is_wrapper
- remote_res = ptr.get()
+ server, remote_proxy = start_remote_worker(id="test_plan_worker_2", port=8799, hook=hook)
- assert (remote_res == local_res).all()
+ plan_ptr = net.send(remote_proxy)
+ x_ptr = x.send(remote_proxy)
+ ptr = plan_ptr(x_ptr)
+ assert isinstance(ptr, FrameworkTensor) and ptr.is_wrapper
+ remote_res = ptr.get()
- # delete remote object before websocket connection termination
- del x_ptr
+ assert (remote_res == local_res).all()
- remote_proxy.close()
- server.terminate()
+ # delete remote object before websocket connection termination
+ del x_ptr
- hook.local_worker.is_client_worker = True
+ remote_proxy.close()
+ server.terminate()
def test_train_plan_locally_and_then_send_it(hook, start_remote_worker):
"""Test training a plan locally and then executing it remotely."""
- hook.local_worker.is_client_worker = False
-
- # Create toy model
- class Net(sy.Plan):
- def __init__(self):
- super(Net, self).__init__()
- self.fc1 = nn.Linear(2, 3)
- self.fc2 = nn.Linear(3, 2)
+ with hook.local_worker.registration_enabled():
- def forward(self, x):
- x = F.relu(self.fc1(x))
- x = self.fc2(x)
- return F.log_softmax(x, dim=0)
+ # Create toy model
+ class Net(sy.Plan):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.fc1 = nn.Linear(2, 3)
+ self.fc2 = nn.Linear(3, 2)
- net = Net()
+ def forward(self, x):
+ x = F.relu(self.fc1(x))
+ x = self.fc2(x)
+ return F.log_softmax(x, dim=0)
- # Create toy data
- x = th.tensor([-1, 2.0])
- y = th.tensor([1.0])
+ net = Net()
- # Train Model
- opt = optim.SGD(params=net.parameters(), lr=0.01)
- previous_loss = None
+ # Create toy data
+ x = th.tensor([-1, 2.0])
+ y = th.tensor([1.0])
- for _ in range(5):
- # 1) erase previous gradients (if they exist)
- opt.zero_grad()
+ # Train Model
+ opt = optim.SGD(params=net.parameters(), lr=0.01)
+ previous_loss = None
- # 2) make a prediction
- pred = net(x)
+ for _ in range(5):
+ # 1) erase previous gradients (if they exist)
+ opt.zero_grad()
- # 3) calculate how much we missed
- loss = ((pred - y) ** 2).sum()
+ # 2) make a prediction
+ pred = net(x)
- # 4) figure out which weights caused us to miss
- loss.backward()
+ # 3) calculate how much we missed
+ loss = ((pred - y) ** 2).sum()
- # 5) change those weights
- opt.step()
+ # 4) figure out which weights caused us to miss
+ loss.backward()
- if previous_loss is not None:
- assert loss < previous_loss
+ # 5) change those weights
+ opt.step()
- previous_loss = loss
+ if previous_loss is not None:
+ assert loss < previous_loss
- local_res = net(x)
- net.build(x)
+ previous_loss = loss
- server, remote_proxy = start_remote_worker(id="test_plan_worker_3", port=8800, hook=hook)
+ local_res = net(x)
+ net.build(x)
- plan_ptr = net.send(remote_proxy)
- x_ptr = x.send(remote_proxy)
- remote_res = plan_ptr(x_ptr).get()
+ server, remote_proxy = start_remote_worker(id="test_plan_worker_3", port=8800, hook=hook)
- assert (remote_res == local_res).all()
+ plan_ptr = net.send(remote_proxy)
+ x_ptr = x.send(remote_proxy)
+ remote_res = plan_ptr(x_ptr).get()
- # delete remote object before websocket connection termination
- del x_ptr
+ assert (remote_res == local_res).all()
- remote_proxy.close()
- server.terminate()
+ # delete remote object before websocket connection termination
+ del x_ptr
- hook.local_worker.is_client_worker = True
+ remote_proxy.close()
+ server.terminate()
# def test_replace_worker_ids_two_strings(hook):
diff --git a/test/torch/tensors/test_precision.py b/test/torch/tensors/test_precision.py
--- a/test/torch/tensors/test_precision.py
+++ b/test/torch/tensors/test_precision.py
@@ -31,13 +31,11 @@ def test_encode_decode(workers, parameter):
def test_fix_prec_registration(hook):
- hook.local_worker.is_client_worker = False
+ with hook.local_worker.registration_enabled():
+ x = torch.tensor([1.0])
+ x_fpt = x.fix_precision()
- x = torch.tensor([1.0])
- x_fpt = x.fix_precision()
- assert hook.local_worker.get_obj(x.id) == x
-
- hook.local_worker.is_client_worker = True
+ assert hook.local_worker.get_obj(x.id) == x
def test_inplace_encode_decode(workers):
@@ -54,13 +52,11 @@ def test_inplace_encode_decode(workers):
def test_fix_prec_inplace_registration(hook):
- hook.local_worker.is_client_worker = False
-
- x = torch.tensor([1.0])
- x.fix_precision_()
- assert hook.local_worker.get_obj(x.id) == torch.tensor([1.0]).fix_precision()
- hook.local_worker.is_client_worker = True
+ with hook.local_worker.registration_enabled():
+ x = torch.tensor([1.0])
+ x.fix_precision_()
+ assert hook.local_worker.get_obj(x.id) == torch.tensor([1.0]).fix_precision()
def test_add_method():
diff --git a/test/workers/test_base.py b/test/workers/test_base.py
--- a/test/workers/test_base.py
+++ b/test/workers/test_base.py
@@ -88,3 +88,10 @@ def test_execute_command_self(hook):
assert response == "bob_mocked_function"
bob.mocked_function.assert_called()
+
+
+def test_enable_registration_with_ctx(hook):
+ assert hook.local_worker.is_client_worker == True
+ with hook.local_worker.registration_enabled():
+    assert hook.local_worker.is_client_worker == False
+ assert hook.local_worker.is_client_worker == True
| Allow worker registration of objects within python context
**Is your feature request related to a problem? Please describe.**
Tons of tests and code samples using plans currently look like this:
```python
hook.local_worker.is_client_worker = False
# my awesome code
hook.local_worker.is_client_worker = True
```
**Describe the solution you'd like**
```python
with self.owner.registration_enabled():
# my awesome code
```
**Describe alternatives you've considered**
Enabling registration all the time :O
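One possible shape for such a context manager (a minimal sketch, assuming the flag only needs to be toggled and restored; the actual `BaseWorker` method may differ):
```python
from contextlib import contextmanager

class BaseWorker:
    def __init__(self):
        self.is_client_worker = True  # client workers skip object registration

    @contextmanager
    def registration_enabled(self):
        # Temporarily allow registration, restoring the previous
        # state even if the wrapped block raises.
        previous = self.is_client_worker
        self.is_client_worker = False
        try:
            yield self
        finally:
            self.is_client_worker = previous
```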
| I can start looking at this one | 2019-10-22T17:52:59 |
OpenMined/PySyft | 2,759 | OpenMined__PySyft-2759 | [
"2705"
] | 242085f9ce05529315f49176091dbe36512026c0 | diff --git a/syft/frameworks/torch/tensors/interpreters/additive_shared.py b/syft/frameworks/torch/tensors/interpreters/additive_shared.py
--- a/syft/frameworks/torch/tensors/interpreters/additive_shared.py
+++ b/syft/frameworks/torch/tensors/interpreters/additive_shared.py
@@ -198,7 +198,7 @@ def generate_shares(secret, n_workers, field, random_type):
random_shares = [random_type(secret.shape) for _ in range(n_workers - 1)]
for share in random_shares:
- share.random_(field)
+ share.random_(int(-field / 2), int(field / 2) - 1)
shares = []
for i in range(n_workers):
| Transform shares to support negative values
For AdditiveSharing and FixedPrecision, if the field is 2**64, the values of the shares should lie in `[-2**63, 2**63-1]` and not in `[0, 2**64]`. This allows for automatic wrapping for LongTensor and `field=2**63`.
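A minimal sketch of signed share generation (a hypothetical helper mirroring the patch above; it assumes int64 arithmetic wraps around, which is what provides the modular reduction):
```python
import torch

def generate_signed_shares(secret, n_workers, field=2 ** 64):
    # Draw n-1 shares uniformly from [-field/2, field/2 - 1]; the last share
    # is whatever makes the sum wrap back (mod field) to the secret.
    random_shares = [torch.zeros_like(secret) for _ in range(n_workers - 1)]
    for share in random_shares:
        share.random_(int(-field / 2), int(field / 2) - 1)
    last_share = secret - sum(random_shares)
    return random_shares + [last_share]

shares = generate_signed_shares(torch.tensor([5, -7]), n_workers=3)
```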
| I would like to work on this. | 2019-11-26T20:12:35 |
|
OpenMined/PySyft | 2,810 | OpenMined__PySyft-2810 | [
"2768"
] | 3c18add93208915a183d3162b6fdff4596c513c4 | diff --git a/syft/serde/torch_serde.py b/syft/serde/torch_serde.py
--- a/syft/serde/torch_serde.py
+++ b/syft/serde/torch_serde.py
@@ -241,6 +241,11 @@ def _detail_torch_tensor(worker: AbstractWorker, tensor_tuple: tuple) -> torch.T
hook=syft.torch.hook, obj=tensor, owner=worker, id=tensor_id, init_args=[], init_kwargs={}
)
+ if chain is not None:
+ chain = syft.serde._detail(worker, chain)
+ tensor.child = chain
+ tensor.is_wrapper = True
+
if tags is not None:
tags = list(tags)
@@ -257,11 +262,6 @@ def _detail_torch_tensor(worker: AbstractWorker, tensor_tuple: tuple) -> torch.T
description = description.decode("utf-8")
tensor.description = description
- if chain is not None:
- chain = syft.serde._detail(worker, chain)
- tensor.child = chain
- tensor.is_wrapper = True
-
return tensor
| diff --git a/test/torch/tensors/test_additive_shared.py b/test/torch/tensors/test_additive_shared.py
--- a/test/torch/tensors/test_additive_shared.py
+++ b/test/torch/tensors/test_additive_shared.py
@@ -873,3 +873,17 @@ def forward(self, x):
sh_data = torch.zeros((1, 1, 28, 28)).fix_precision().share(alice, bob, crypto_provider=james)
assert torch.allclose(sh_model(sh_data).get().float_prec(), model(data), atol=1e-2)
+
+
+def test_correct_tag_and_description_after_send(workers):
+ bob, alice, james = (workers["bob"], workers["alice"], workers["james"])
+
+ x = torch.tensor([1, 2, 3]).share(alice, bob, james)
+ x.tags = ["tag_additive_test1", "tag_additive_test2"]
+ x.description = "description_additive_test"
+
+ pointer_x = x.send(alice)
+
+ assert alice.search("tag_additive_test1")
+ assert alice.search("tag_additive_test2")
+ assert alice.search("description_additive_test")
| search over tagged AdditiveSharingTensor
**Describe the bug**
It is not possible to search for a tagged AdditiveSharingTensor pointer.
**To Reproduce**
Steps to reproduce the behavior:
1. Share data over workers
```
x_alice = torch.tensor([15])
x_alice.tags = ["x_alice"]
ptr_x_alice = x_alice.send(alice)
encrypted_x_alice = x_alice.share(alice, bob, charlie)
encrypted_x_alice.tags = ["enc_x_alice"]
ptr_enc_x_alice = encrypted_x_alice.send(alice)
```
2. Search over the previously tagged data
```
>>> alice.search("x_alice")
[(Wrapper)>[PointerTensor | me:76577566073 -> alice:97888217298]]
>>> alice.search("enc_x_alice")
[]
```
In addition, I can check that the pointer has been sent because:
```
print(alice.list_objects_remote())
{97888217298: tensor([15])
Tags: x_alice
Shape: torch.Size([1]), 30882456864: tensor([2550562729003484850]), 94380707077: (Wrapper)>[AdditiveSharingTensor]
-> tensor([2550562729003484850])
-> [PointerTensor | alice:26890155426 -> bob:40329843004]
-> [PointerTensor | alice:83545414165 -> charlie:2297723963]
*crypto provider: me*}
```
**Expected behavior**
```
>>> alice.search("enc_x_alice")
(Wrapper)>[AdditiveSharingTensor]
-> tensor([2550562729003484850])
-> [PointerTensor | alice:26890155426 -> bob:40329843004]
-> [PointerTensor | alice:83545414165 -> charlie:2297723963]
	*crypto provider: me*
```
**Desktop (please complete the following information):**
- Ubuntu 18.04
- requirements.txt:
- syft[udacity]
| Looking into this | 2019-12-10T20:35:32 |
OpenMined/PySyft | 2,836 | OpenMined__PySyft-2836 | [
"2834"
] | afab0d81762de3d019b798e309120b3b43d7182b | diff --git a/syft/serde/__init__.py b/syft/serde/__init__.py
--- a/syft/serde/__init__.py
+++ b/syft/serde/__init__.py
@@ -2,5 +2,5 @@
from syft.serde.torch_serde import *
from syft.serde.serde import _simplify
from syft.serde.serde import _detail
-from syft.serde.serde import _compress
-from syft.serde.serde import _decompress
+from syft.serde.compression import _compress
+from syft.serde.compression import _decompress
diff --git a/syft/serde/compression.py b/syft/serde/compression.py
new file mode 100644
--- /dev/null
+++ b/syft/serde/compression.py
@@ -0,0 +1,128 @@
+"""
+This file exists to provide one common place for all compression methods used in
+simplifying and serializing PySyft objects.
+"""
+
+import lz4
+from lz4 import ( # noqa: F401
+ frame,
+) # needed as otherwise we will get: module 'lz4' has no attribute 'frame'
+import zstd
+
+from syft.exceptions import CompressionNotFoundException
+
+# COMPRESSION SCHEME INT CODES
+NO_COMPRESSION = 40
+LZ4 = 41
+ZSTD = 42
+scheme_to_bytes = {
+ NO_COMPRESSION: NO_COMPRESSION.to_bytes(1, byteorder="big"),
+ LZ4: LZ4.to_bytes(1, byteorder="big"),
+ ZSTD: ZSTD.to_bytes(1, byteorder="big"),
+}
+
+## SECTION: chosen Compression Algorithm
+
+
+def _apply_compress_scheme(decompressed_input_bin) -> tuple:
+ """
+ Apply the selected compression scheme.
+ By default is used LZ4
+
+ Args:
+ decompressed_input_bin: the binary to be compressed
+ """
+ return apply_lz4_compression(decompressed_input_bin)
+
+
+def apply_lz4_compression(decompressed_input_bin) -> tuple:
+ """
+ Apply LZ4 compression to the input
+
+ Args:
+ decompressed_input_bin: the binary to be compressed
+
+ Returns:
+ a tuple (compressed_result, LZ4)
+ """
+ return lz4.frame.compress(decompressed_input_bin), LZ4
+
+
+def apply_zstd_compression(decompressed_input_bin) -> tuple:
+ """
+ Apply ZSTD compression to the input
+
+ Args:
+ decompressed_input_bin: the binary to be compressed
+
+ Returns:
+ a tuple (compressed_result, ZSTD)
+ """
+
+ return zstd.compress(decompressed_input_bin), ZSTD
+
+
+def apply_no_compression(decompressed_input_bin) -> tuple:
+ """
+ No compression is applied to the input
+
+ Args:
+ decompressed_input_bin: the binary
+
+ Returns:
+        a tuple (the binary, NO_COMPRESSION)
+ """
+
+ return decompressed_input_bin, NO_COMPRESSION
+
+
+def _compress(decompressed_input_bin: bin) -> bin:
+ """
+ This function compresses a binary using the function _apply_compress_scheme
+ if the input has been already compressed in some step, it will return it as it is
+
+ Args:
+ decompressed_input_bin (bin): binary to be compressed
+
+ Returns:
+ bin: a compressed binary
+
+ """
+ compress_stream, compress_scheme = _apply_compress_scheme(decompressed_input_bin)
+ try:
+ z = scheme_to_bytes[compress_scheme] + compress_stream
+ return z
+ except KeyError:
+ raise CompressionNotFoundException(
+ f"Compression scheme not found for compression code: {str(compress_scheme)}"
+ )
+
+
+def _decompress(binary: bin) -> bin:
+ """
+ This function decompresses a binary using the scheme defined in the first byte of the input
+
+ Args:
+ binary (bin): a compressed binary
+
+ Returns:
+ bin: decompressed binary
+
+ """
+
+ # check the 1-byte header to check the compression scheme used
+ compress_scheme = binary[0]
+
+ # remove the 1-byte header from the input stream
+ binary = binary[1:]
+ # 1) Decompress or return the original stream
+ if compress_scheme == LZ4:
+ return lz4.frame.decompress(binary)
+ elif compress_scheme == ZSTD:
+ return zstd.decompress(binary)
+ elif compress_scheme == NO_COMPRESSION:
+ return binary
+ else:
+ raise CompressionNotFoundException(
+ f"Compression scheme not found for compression code: {str(compress_scheme)}"
+ )
diff --git a/syft/serde/serde.py b/syft/serde/serde.py
--- a/syft/serde/serde.py
+++ b/syft/serde/serde.py
@@ -35,15 +35,11 @@
from typing import Callable
import inspect
-import lz4
-from lz4 import ( # noqa: F401
- frame,
-) # needed as otherwise we will get: module 'lz4' has no attribute 'frame'
import msgpack
-import zstd
import syft
from syft import dependency_check
+
from syft.federated.train_config import TrainConfig
from syft.frameworks.torch.tensors.decorators.logging import LoggingTensor
from syft.frameworks.torch.tensors.interpreters.precision import FixedPrecisionTensor
@@ -71,11 +67,11 @@
from syft.messaging.message import ForceObjectDeleteMessage
from syft.messaging.message import SearchMessage
from syft.messaging.message import PlanCommandMessage
+from syft.serde import compression
from syft.serde.native_serde import MAP_NATIVE_SIMPLIFIERS_AND_DETAILERS
from syft.workers.abstract import AbstractWorker
from syft.workers.base import BaseWorker
-from syft.exceptions import CompressionNotFoundException
from syft.exceptions import GetNotPermittedError
from syft.exceptions import ResponseSignatureError
@@ -144,16 +140,6 @@
# in https://github.com/OpenMined/proto
EXCEPTION_SIMPLIFIER_AND_DETAILERS = [GetNotPermittedError, ResponseSignatureError]
-# COMPRESSION SCHEME INT CODES
-NO_COMPRESSION = 40
-LZ4 = 41
-ZSTD = 42
-scheme_to_bytes = {
- NO_COMPRESSION: NO_COMPRESSION.to_bytes(1, byteorder="big"),
- LZ4: LZ4.to_bytes(1, byteorder="big"),
- ZSTD: ZSTD.to_bytes(1, byteorder="big"),
-}
-
## SECTION: High Level Simplification Router
def _force_full_simplify(worker: AbstractWorker, obj: object) -> object:
"""To force a full simplify generally if the usual _simplify is not suitable.
@@ -300,7 +286,7 @@ def _serialize_msgpack_binary(
# otherwise we output the compressed stream with header set to '1'
# even if compressed flag is set to false by the caller we
# output the input stream as it is with header set to '0'
- return _compress(binary)
+ return compression._compress(binary)
def _serialize_msgpack(
@@ -344,7 +330,7 @@ def _deserialize_msgpack_binary(binary: bin, worker: AbstractWorker = None) -> o
worker = syft.framework.hook.local_worker
# 1) Decompress the binary if needed
- binary = _decompress(binary)
+ binary = compression._decompress(binary)
# 2) Deserialize
# This function converts the binary into the appropriate python
@@ -436,113 +422,6 @@ def deserialize(
return strategy(binary, worker)
-## SECTION: chosen Compression Algorithm
-
-
-def _apply_compress_scheme(decompressed_input_bin) -> tuple:
- """
- Apply the selected compression scheme.
- By default is used LZ4
-
- Args:
- decompressed_input_bin: the binary to be compressed
- """
- return apply_lz4_compression(decompressed_input_bin)
-
-
-def apply_lz4_compression(decompressed_input_bin) -> tuple:
- """
- Apply LZ4 compression to the input
-
- Args:
- decompressed_input_bin: the binary to be compressed
-
- Returns:
- a tuple (compressed_result, LZ4)
- """
- return lz4.frame.compress(decompressed_input_bin), LZ4
-
-
-def apply_zstd_compression(decompressed_input_bin) -> tuple:
- """
- Apply ZSTD compression to the input
-
- Args:
- decompressed_input_bin: the binary to be compressed
-
- Returns:
- a tuple (compressed_result, ZSTD)
- """
-
- return zstd.compress(decompressed_input_bin), ZSTD
-
-
-def apply_no_compression(decompressed_input_bin) -> tuple:
- """
- No compression is applied to the input
-
- Args:
- decompressed_input_bin: the binary
-
- Returns:
- a tuple (the binary, LZ4)
- """
-
- return decompressed_input_bin, NO_COMPRESSION
-
-
-def _compress(decompressed_input_bin: bin) -> bin:
- """
- This function compresses a binary using the function _apply_compress_scheme
- if the input has been already compressed in some step, it will return it as it is
-
- Args:
- decompressed_input_bin (bin): binary to be compressed
-
- Returns:
- bin: a compressed binary
-
- """
- compress_stream, compress_scheme = _apply_compress_scheme(decompressed_input_bin)
- try:
- z = scheme_to_bytes[compress_scheme] + compress_stream
- return z
- except KeyError:
- raise CompressionNotFoundException(
- f"Compression scheme not found for compression code: {str(compress_scheme)}"
- )
-
-
-def _decompress(binary: bin) -> bin:
- """
- This function decompresses a binary using the scheme defined in the first byte of the input
-
- Args:
- binary (bin): a compressed binary
-
- Returns:
- bin: decompressed binary
-
- """
-
- # check the 1-byte header to check the compression scheme used
- compress_scheme = binary[0]
-
- # remove the 1-byte header from the input stream
- binary = binary[1:]
- # 1) Decompress or return the original stream
- if compress_scheme == LZ4:
- return lz4.frame.decompress(binary)
- elif compress_scheme == ZSTD:
- return zstd.decompress(binary)
- elif compress_scheme == NO_COMPRESSION:
- return binary
- else:
- raise CompressionNotFoundException(
- f"Compression scheme not found for compression code: {str(compress_scheme)}"
- )
-
-
def _simplify(worker: AbstractWorker, obj: object, **kwargs) -> object:
"""
This function takes an object as input and returns a simple
| diff --git a/test/test_serde.py b/test/test_serde.py
--- a/test/test_serde.py
+++ b/test/test_serde.py
@@ -13,6 +13,7 @@
from syft.frameworks.torch.tensors.interpreters.additive_shared import AdditiveSharingTensor
from syft.generic.pointers.object_wrapper import ObjectWrapper
from syft.generic.pointers.pointer_tensor import PointerTensor
+from syft.serde import compression
from syft.serde import native_serde
from syft.serde import serde
from syft.serde import torch_serde
@@ -321,9 +322,9 @@ def test_pointer_tensor_simplify(workers):
@pytest.mark.parametrize("compress", [True, False])
def test_torch_Tensor(compress):
if compress:
- syft.serde._apply_compress_scheme = serde.apply_lz4_compression
+ compression._apply_compress_scheme = compression.apply_lz4_compression
else:
- syft.serde._apply_compress_scheme = serde.apply_no_compression
+ compression._apply_compress_scheme = compression.apply_no_compression
t = Tensor(numpy.random.random((100, 100)))
t_serialized = serde.serialize(t)
@@ -340,9 +341,9 @@ def test_torch_Tensor_convenience(compress):
directly on the tensor itself. This tests to makes sure it
works correctly."""
if compress:
- serde._apply_compress_scheme = serde.apply_lz4_compression
+ compression._apply_compress_scheme = compression.apply_lz4_compression
else:
- serde._apply_compress_scheme = serde.apply_no_compression
+ compression._apply_compress_scheme = compression.apply_no_compression
t = Tensor(numpy.random.random((100, 100)))
t_serialized = t.serialize()
@@ -354,9 +355,9 @@ def test_torch_Tensor_convenience(compress):
def test_tuple(compress):
# Test with a simple datatype
if compress:
- serde._apply_compress_scheme = serde.apply_lz4_compression
+ compression._apply_compress_scheme = compression.apply_lz4_compression
else:
- serde._apply_compress_scheme = serde.apply_no_compression
+ compression._apply_compress_scheme = compression.apply_no_compression
tuple = (1, 2)
tuple_serialized = serde.serialize(tuple)
@@ -379,9 +380,9 @@ def test_tuple(compress):
@pytest.mark.parametrize("compress", [True, False])
def test_bytearray(compress):
if compress:
- serde._apply_compress_scheme = serde.apply_lz4_compression
+ compression._apply_compress_scheme = compression.apply_lz4_compression
else:
- serde._apply_compress_scheme = serde.apply_no_compression
+ compression._apply_compress_scheme = compression.apply_no_compression
bytearr = bytearray("This is a teststring", "utf-8")
bytearr_serialized = serde.serialize(bytearr)
@@ -397,9 +398,9 @@ def test_bytearray(compress):
@pytest.mark.parametrize("compress", [True, False])
def test_ndarray_serde(compress):
if compress:
- serde._apply_compress_scheme = serde.apply_lz4_compression
+ compression._apply_compress_scheme = compression.apply_lz4_compression
else:
- serde._apply_compress_scheme = serde.apply_no_compression
+ compression._apply_compress_scheme = compression.apply_no_compression
arr = numpy.random.random((100, 100))
arr_serialized = serde.serialize(arr)
@@ -408,30 +409,34 @@ def test_ndarray_serde(compress):
assert numpy.array_equal(arr, arr_serialized_deserialized)
[email protected]("compress_scheme", [serde.LZ4, serde.ZSTD, serde.NO_COMPRESSION])
[email protected](
+ "compress_scheme", [compression.LZ4, compression.ZSTD, compression.NO_COMPRESSION]
+)
def test_compress_decompress(compress_scheme):
- if compress_scheme == serde.LZ4:
- serde._apply_compress_scheme = serde.apply_lz4_compression
- elif compress_scheme == serde.ZSTD:
- serde._apply_compress_scheme = serde.apply_zstd_compression
+ if compress_scheme == compression.LZ4:
+ compression._apply_compress_scheme = compression.apply_lz4_compression
+ elif compress_scheme == compression.ZSTD:
+ compression._apply_compress_scheme = compression.apply_zstd_compression
else:
- serde._apply_compress_scheme = serde.apply_no_compression
+ compression._apply_compress_scheme = compression.apply_no_compression
original = msgpack.dumps([1, 2, 3])
- compressed = serde._compress(original)
- decompressed = serde._decompress(compressed)
+ compressed = compression._compress(original)
+ decompressed = compression._decompress(compressed)
assert type(compressed) == bytes
assert original == decompressed
[email protected]("compress_scheme", [serde.LZ4, serde.ZSTD, serde.NO_COMPRESSION])
[email protected](
+ "compress_scheme", [compression.LZ4, compression.ZSTD, compression.NO_COMPRESSION]
+)
def test_compressed_serde(compress_scheme):
- if compress_scheme == serde.LZ4:
- serde._apply_compress_scheme = serde.apply_lz4_compression
- elif compress_scheme == serde.ZSTD:
- serde._apply_compress_scheme = serde.apply_zstd_compression
+ if compress_scheme == compression.LZ4:
+ compression._apply_compress_scheme = compression.apply_lz4_compression
+ elif compress_scheme == compression.ZSTD:
+ compression._apply_compress_scheme = compression.apply_zstd_compression
else:
- serde._apply_compress_scheme = serde.apply_no_compression
+ compression._apply_compress_scheme = compression.apply_no_compression
# using numpy.ones because numpy.random.random is not compressed.
arr = numpy.ones((100, 100))
@@ -446,9 +451,9 @@ def test_compressed_serde(compress_scheme):
def test_dict(compress):
# Test with integers
if compress:
- serde._apply_compress_scheme = serde.apply_lz4_compression
+ compression._apply_compress_scheme = compression.apply_lz4_compression
else:
- serde._apply_compress_scheme = serde.apply_no_compression
+ compression._apply_compress_scheme = compression.apply_no_compression
_dict = {1: 1, 2: 2, 3: 3}
dict_serialized = serde.serialize(_dict)
dict_serialized_deserialized = serde.deserialize(dict_serialized)
@@ -476,9 +481,9 @@ def test_dict(compress):
@pytest.mark.parametrize("compress", [True, False])
def test_range_serde(compress):
if compress:
- serde._apply_compress_scheme = serde.apply_lz4_compression
+ compression._apply_compress_scheme = compression.apply_lz4_compression
else:
- serde._apply_compress_scheme = serde.apply_no_compression
+ compression._apply_compress_scheme = compression.apply_no_compression
_range = range(1, 2, 3)
@@ -491,9 +496,9 @@ def test_range_serde(compress):
@pytest.mark.parametrize("compress", [True, False])
def test_list(compress):
if compress:
- serde._apply_compress_scheme = serde.apply_lz4_compression
+ compression._apply_compress_scheme = compression.apply_lz4_compression
else:
- serde._apply_compress_scheme = serde.apply_no_compression
+ compression._apply_compress_scheme = compression.apply_no_compression
# Test with integers
_list = [1, 2]
@@ -514,9 +519,9 @@ def test_list(compress):
list_serialized = serde.serialize(_list)
if compress:
- assert list_serialized[0] == serde.LZ4
+ assert list_serialized[0] == compression.LZ4
else:
- assert list_serialized[0] == serde.NO_COMPRESSION
+ assert list_serialized[0] == compression.NO_COMPRESSION
list_serialized_deserialized = serde.deserialize(list_serialized)
# `assert list_serialized_deserialized == _list` does not work, therefore it's split
@@ -529,9 +534,9 @@ def test_list(compress):
@pytest.mark.parametrize("compress", [True, False])
def test_set(compress):
if compress:
- serde._apply_compress_scheme = serde.apply_lz4_compression
+ compression._apply_compress_scheme = compression.apply_lz4_compression
else:
- serde._apply_compress_scheme = serde.apply_no_compression
+ compression._apply_compress_scheme = compression.apply_no_compression
# Test with integers
_set = set([1, 2])
@@ -553,9 +558,9 @@ def test_set(compress):
set_serialized = serde.serialize(_set)
if compress:
- assert set_serialized[0] == serde.LZ4
+ assert set_serialized[0] == compression.LZ4
else:
- assert set_serialized[0] == serde.NO_COMPRESSION
+ assert set_serialized[0] == compression.NO_COMPRESSION
set_serialized_deserialized = serde.deserialize(set_serialized)
# `assert set_serialized_deserialized == _set` does not work, therefore it's split
@@ -568,9 +573,9 @@ def test_set(compress):
@pytest.mark.parametrize("compress", [True, False])
def test_slice(compress):
if compress:
- serde._apply_compress_scheme = serde.apply_lz4_compression
+ compression._apply_compress_scheme = compression.apply_lz4_compression
else:
- serde._apply_compress_scheme = serde.apply_no_compression
+ compression._apply_compress_scheme = compression.apply_no_compression
s = slice(0, 100, 2)
x = numpy.random.rand(100)
@@ -592,9 +597,9 @@ def test_slice(compress):
@pytest.mark.parametrize("compress", [True, False])
def test_float(compress):
if compress:
- serde._apply_compress_scheme = serde.apply_lz4_compression
+ compression._apply_compress_scheme = compression.apply_lz4_compression
else:
- serde._apply_compress_scheme = serde.apply_no_compression
+ compression._apply_compress_scheme = compression.apply_no_compression
x = 0.5
y = 1.5
@@ -612,36 +617,38 @@ def test_float(compress):
@pytest.mark.parametrize(
"compress, compress_scheme",
[
- (True, serde.LZ4),
- (False, serde.LZ4),
- (True, serde.ZSTD),
- (False, serde.ZSTD),
- (True, serde.NO_COMPRESSION),
- (False, serde.NO_COMPRESSION),
+ (True, compression.LZ4),
+ (False, compression.LZ4),
+ (True, compression.ZSTD),
+ (False, compression.ZSTD),
+ (True, compression.NO_COMPRESSION),
+ (False, compression.NO_COMPRESSION),
],
)
def test_hooked_tensor(compress, compress_scheme):
if compress:
- if compress_scheme == serde.LZ4:
- serde._apply_compress_scheme = serde.apply_lz4_compression
- elif compress_scheme == serde.ZSTD:
- serde._apply_compress_scheme = serde.apply_zstd_compression
+ if compress_scheme == compression.LZ4:
+ compression._apply_compress_scheme = compression.apply_lz4_compression
+ elif compress_scheme == compression.ZSTD:
+ compression._apply_compress_scheme = compression.apply_zstd_compression
else:
- serde._apply_compress_scheme = serde.apply_no_compression
+ compression._apply_compress_scheme = compression.apply_no_compression
else:
- serde._apply_compress_scheme = serde.apply_no_compression
+ compression._apply_compress_scheme = compression.apply_no_compression
t = Tensor(numpy.ones((100, 100)))
t_serialized = serde.serialize(t)
assert (
- t_serialized[0] == compress_scheme if compress else t_serialized[0] == serde.NO_COMPRESSION
+ t_serialized[0] == compress_scheme
+ if compress
+ else t_serialized[0] == compression.NO_COMPRESSION
)
t_serialized_deserialized = serde.deserialize(t_serialized)
assert (t == t_serialized_deserialized).all()
def test_pointer_tensor(hook, workers):
- serde._apply_compress_scheme = serde.apply_no_compression
+ compression._apply_compress_scheme = compression.apply_no_compression
t = PointerTensor(
id=1000, location=workers["alice"], owner=workers["alice"], id_at_location=12345
)
@@ -663,7 +670,7 @@ def test_pointer_tensor_detail(id):
def test_numpy_tensor_serde():
- serde._apply_compress_scheme = serde.apply_lz4_compression
+ compression._apply_compress_scheme = compression.apply_lz4_compression
serde._serialize_tensor = syft.serde.numpy_tensor_serializer
serde._deserialize_tensor = syft.serde.numpy_tensor_deserializer
@@ -671,7 +678,7 @@ def test_numpy_tensor_serde():
tensor = torch.tensor(numpy.ones((10, 10)), requires_grad=False)
tensor_serialized = serde.serialize(tensor)
- assert tensor_serialized[0] != serde.NO_COMPRESSION
+ assert tensor_serialized[0] != compression.NO_COMPRESSION
tensor_deserialized = serde.deserialize(tensor_serialized)
# Back to Pytorch serializer
| Extract compression from `serde` to separate module
**Is your feature request related to a problem? Please describe.**
Compression and serialization are currently intertwined with each other in the same module.
**Describe the solution you'd like**
In order to pave the way for alternate serialization approaches (e.g. Protobuf), it would be helpful to extract compression out of `serde.py` and into its own module.
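After the extraction, a round trip through the new module would look roughly like this (a sketch, using the API shown in the patch above):
```python
import msgpack
from syft.serde import compression

original = msgpack.dumps([1, 2, 3])
blob = compression._compress(original)

assert blob[0] == compression.LZ4  # the 1-byte header records the scheme used
assert compression._decompress(blob) == original
```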
**Describe alternatives you've considered**
- _Combine all serialization approaches in the same module._ Seems potentially messy.
- _Put each serialization approach in its own module and duplicate compression._ Would work, but harder to maintain.
| 2019-12-18T14:16:33 |
|
OpenMined/PySyft | 3,017 | OpenMined__PySyft-3017 | [
"3005"
] | d650a47e7547509dcdf87e7afb3aebcbf9e37167 | diff --git a/docs/conf.py b/docs/conf.py
deleted file mode 100644
--- a/docs/conf.py
+++ /dev/null
@@ -1,187 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Configuration file for the Sphinx documentation builder.
-#
-# This file does only contain a selection of the most common options. For a
-# full list see the documentation:
-# http://www.sphinx-doc.org/en/master/config
-
-# -- Path setup --------------------------------------------------------------
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-#
-# import os
-# import sys
-# sys.path.insert(0, os.path.abspath('.'))
-
-
-# -- Project information -----------------------------------------------------
-
-project = "PySyft"
-copyright = "2019, OpenMinedContributors"
-author = "Andrew Trask"
-
-# The short X.Y version
-version = "0.2.3a1"
-# The full version, including alpha/beta/rc tags
-release = "0.2.3a1"
-
-
-# -- General configuration ---------------------------------------------------
-
-# If your documentation needs a minimal Sphinx version, state it here.
-#
-# needs_sphinx = '1.0'
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-# ones.
-extensions = [
- "sphinx.ext.autodoc",
- "sphinx.ext.doctest",
- "sphinx.ext.todo",
- "sphinx.ext.coverage",
- "sphinx.ext.mathjax",
- "sphinx.ext.napoleon",
- "sphinx.ext.ifconfig",
- "sphinx.ext.viewcode",
- "sphinx_markdown_builder",
-]
-
-# Add any paths that contain templates here, relative to this directory.
-templates_path = ["_templates"]
-
-# The suffix(es) of source filenames.
-# You can specify multiple suffix as a list of string:
-#
-# source_suffix = ['.rst', '.md']
-source_suffix = ".rst"
-
-# The master toctree document.
-master_doc = "index"
-
-# The language for content autogenerated by Sphinx. Refer to documentation
-# for a list of supported languages.
-#
-# This is also used if you do content translation via gettext catalogs.
-# Usually you set "language" from the command line for these cases.
-language = None
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-# This pattern also affects html_static_path and html_extra_path.
-exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
-
-# The name of the Pygments (syntax highlighting) style to use.
-pygments_style = None
-
-
-# -- Options for HTML output -------------------------------------------------
-
-# The theme to use for HTML and HTML Help pages. See the documentation for
-# a list of builtin themes.
-#
-html_theme = "alabaster"
-
-# Theme options are theme-specific and customize the look and feel of a theme
-# further. For a list of options available for each theme, see the
-# documentation.
-#
-# html_theme_options = {}
-
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ["_static"]
-
-# Custom sidebar templates, must be a dictionary that maps document names
-# to template names.
-#
-# The default sidebars (for documents that don't match any pattern) are
-# defined by theme itself. Builtin themes are using these templates by
-# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
-# 'searchbox.html']``.
-#
-# html_sidebars = {}
-
-
-# -- Options for HTMLHelp output ---------------------------------------------
-
-# Output file base name for HTML help builder.
-htmlhelp_basename = "PySyftdoc"
-
-
-# -- Options for LaTeX output ------------------------------------------------
-
-latex_elements = {
- # The paper size ('letterpaper' or 'a4paper').
- #
- # 'papersize': 'letterpaper',
- # The font size ('10pt', '11pt' or '12pt').
- #
- # 'pointsize': '10pt',
- # Additional stuff for the LaTeX preamble.
- #
- # 'preamble': '',
- # Latex figure (float) alignment
- #
- # 'figure_align': 'htbp',
-}
-
-# Grouping the document tree into LaTeX files. List of tuples
-# (source start file, target name, title,
-# author, documentclass [howto, manual, or own class]).
-latex_documents = [(master_doc, "PySyft.tex", "PySyft Documentation", "Andrew Trask", "manual")]
-
-
-# -- Options for manual page output ------------------------------------------
-
-# One entry per manual page. List of tuples
-# (source start file, name, description, authors, manual section).
-man_pages = [(master_doc, "pysyft", "PySyft Documentation", [author], 1)]
-
-
-# -- Options for Texinfo output ----------------------------------------------
-
-# Grouping the document tree into Texinfo files. List of tuples
-# (source start file, target name, title, author,
-# dir menu entry, description, category)
-texinfo_documents = [
- (
- master_doc,
- "PySyft",
- "PySyft Documentation",
- author,
- "PySyft",
- "One line description of project.",
- "Miscellaneous",
- )
-]
-
-
-# -- Options for Epub output -------------------------------------------------
-
-# Bibliographic Dublin Core info.
-epub_title = project
-
-# The unique identifier of the text. This can be a ISBN number
-# or the project homepage.
-#
-# epub_identifier = ''
-
-# A unique identification for the text.
-#
-# epub_uid = ''
-
-# A list of files that should not be packed into the epub file.
-epub_exclude_files = ["search.html"]
-
-
-# -- Extension configuration -------------------------------------------------
-
-# -- Options for todo extension ----------------------------------------------
-
-# If true, `todo` and `todoList` produce output, else they produce nothing.
-todo_include_todos = True
diff --git a/docs/source/conf.py b/docs/source/conf.py
new file mode 100644
--- /dev/null
+++ b/docs/source/conf.py
@@ -0,0 +1,73 @@
+# Configuration file for the Sphinx documentation builder.
+#
+# This file only contains a selection of the most common options. For a full
+# list see the documentation:
+# https://www.sphinx-doc.org/en/master/usage/configuration.html
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+import sys
+
+import sphinx_rtd_theme
+
+
+sys.path.insert(0, os.path.abspath("../.."))
+
+# -- Project information -----------------------------------------------------
+
+project = "PySyft"
+copyright = "2020, Andrew Trask"
+author = "Andrew Trask"
+
+# The full version, including alpha/beta/rc tags
+release = "0.2.3a1"
+
+
+# -- General configuration ---------------------------------------------------
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = ["sphinx.ext.napoleon", "autoapi.extension", "sphinx_rtd_theme"]
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ["_templates"]
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = []
+
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+html_theme = "sphinx_rtd_theme"
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = []
+
+
+def setup(app):
+ app.add_stylesheet("css/theme.css")
+
+
+# -- Extension configuration -------------------------------------------------
+
+# AutoApi
+autoapi_root = "api"
+autoapi_type = "python"
+autoapi_dirs = ["../../syft"]
+
+# Napoleon settings
+napoleon_google_docstring = True
+napoleon_numpy_docstring = False
| Automated Sphinx documentation for the module syft
**Is your feature request related to a problem? Please describe.**
I believe it would be helpful if there were readily available, accessible, and always up-to-date documentation covering the interfaces and sub-modules of the `syft` module in one place.
**Describe the solution you'd like**
Ideally this documentation would be automatically generated by parsing the existing pydocs and producing the corresponding HTML pages / PDF document.
The docs would be linked to a CI/CD testing environment to ensure that:
* each new feature/bug fix added to the stable branch does not break the documentation.
* the newest documentation version is made available to the domain where it's hosted (e.g. readthedocs)
* there's some version control to the documentation as well.
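With a `conf.py` like the one in this patch, the site can be built locally, for example by driving Sphinx from Python (a sketch; it assumes `sphinx`, `sphinx-rtd-theme`, and `sphinx-autoapi` are installed and the `docs/source` layout above is in place):
```python
from sphinx.cmd.build import main as sphinx_build

# Equivalent to running: sphinx-build -b html docs/source docs/build/html
sphinx_build(["-b", "html", "docs/source", "docs/build/html"])
```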
**Describe alternatives you've considered**
* A "private" readthedocs provided via an existing infrastructure administered by someone inside OpenMined, in case there's some reason not to use the public one.
* Skip readthedocs and provide the documentation on a site administered by someone inside OpenMined
**Additional context**
Examples of existing documentation using this solution:
* https://web3py.readthedocs.io/en/stable/
* https://docs.scipy.org/doc/numpy/reference/
* https://pandas.pydata.org/pandas-docs/stable/index.html
| 2020-02-05T19:14:32 |
||
OpenMined/PySyft | 3,150 | OpenMined__PySyft-3150 | [
"3134"
] | 6dd3d13734ed08e391fb282a31a56a0d0e640aa2 | diff --git a/syft/serde/compression.py b/syft/serde/compression.py
--- a/syft/serde/compression.py
+++ b/syft/serde/compression.py
@@ -7,18 +7,15 @@
from lz4 import ( # noqa: F401
frame,
) # needed as otherwise we will get: module 'lz4' has no attribute 'frame'
-import zstd
from syft.exceptions import CompressionNotFoundException
# COMPRESSION SCHEME INT CODES
NO_COMPRESSION = 40
LZ4 = 41
-ZSTD = 42
scheme_to_bytes = {
NO_COMPRESSION: NO_COMPRESSION.to_bytes(1, byteorder="big"),
LZ4: LZ4.to_bytes(1, byteorder="big"),
- ZSTD: ZSTD.to_bytes(1, byteorder="big"),
}
## SECTION: chosen Compression Algorithm
@@ -48,20 +45,6 @@ def apply_lz4_compression(decompressed_input_bin) -> tuple:
return lz4.frame.compress(decompressed_input_bin), LZ4
-def apply_zstd_compression(decompressed_input_bin) -> tuple:
- """
- Apply ZSTD compression to the input
-
- Args:
- decompressed_input_bin: the binary to be compressed
-
- Returns:
- a tuple (compressed_result, ZSTD)
- """
-
- return zstd.compress(decompressed_input_bin), ZSTD
-
-
def apply_no_compression(decompressed_input_bin) -> tuple:
"""
No compression is applied to the input
@@ -118,8 +101,6 @@ def _decompress(binary: bin) -> bin:
# 1) Decompress or return the original stream
if compress_scheme == LZ4:
return lz4.frame.decompress(binary)
- elif compress_scheme == ZSTD:
- return zstd.decompress(binary)
elif compress_scheme == NO_COMPRESSION:
return binary
else:
| diff --git a/test/serde/msgpack/test_msgpack_serde.py b/test/serde/msgpack/test_msgpack_serde.py
--- a/test/serde/msgpack/test_msgpack_serde.py
+++ b/test/serde/msgpack/test_msgpack_serde.py
@@ -416,14 +416,10 @@ def test_ndarray_serde(compress):
assert numpy.array_equal(arr, arr_serialized_deserialized)
[email protected](
- "compress_scheme", [compression.LZ4, compression.ZSTD, compression.NO_COMPRESSION]
-)
[email protected]("compress_scheme", [compression.LZ4, compression.NO_COMPRESSION])
def test_compress_decompress(compress_scheme):
if compress_scheme == compression.LZ4:
compression._apply_compress_scheme = compression.apply_lz4_compression
- elif compress_scheme == compression.ZSTD:
- compression._apply_compress_scheme = compression.apply_zstd_compression
else:
compression._apply_compress_scheme = compression.apply_no_compression
@@ -434,14 +430,10 @@ def test_compress_decompress(compress_scheme):
assert original == decompressed
[email protected](
- "compress_scheme", [compression.LZ4, compression.ZSTD, compression.NO_COMPRESSION]
-)
[email protected]("compress_scheme", [compression.LZ4, compression.NO_COMPRESSION])
def test_compressed_serde(compress_scheme):
if compress_scheme == compression.LZ4:
compression._apply_compress_scheme = compression.apply_lz4_compression
- elif compress_scheme == compression.ZSTD:
- compression._apply_compress_scheme = compression.apply_zstd_compression
else:
compression._apply_compress_scheme = compression.apply_no_compression
@@ -626,8 +618,6 @@ def test_float(compress):
[
(True, compression.LZ4),
(False, compression.LZ4),
- (True, compression.ZSTD),
- (False, compression.ZSTD),
(True, compression.NO_COMPRESSION),
(False, compression.NO_COMPRESSION),
],
@@ -636,8 +626,6 @@ def test_hooked_tensor(compress, compress_scheme):
if compress:
if compress_scheme == compression.LZ4:
compression._apply_compress_scheme = compression.apply_lz4_compression
- elif compress_scheme == compression.ZSTD:
- compression._apply_compress_scheme = compression.apply_zstd_compression
else:
compression._apply_compress_scheme = compression.apply_no_compression
else:
| Remove ZSTD
**Is your feature request related to a problem? Please describe.**
ZSTD is used for compression in our serde process. However, we don't need extra compression as we move to Protobuf.
ZSTD is also a common source of installation problems for PySyft, requiring various hacks to work around.
**Describe the solution you'd like**
Remove ZSTD dependency.
This will require removing the ZSTD tests and its use in serde.
**Describe alternatives you've considered**
Protobuf covers compression.
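Since `_apply_compress_scheme` stays pluggable after ZSTD is gone, callers can still pick between the remaining schemes (a sketch mirroring how the tests select one):
```python
from syft.serde import compression

# Keep the LZ4 default...
compression._apply_compress_scheme = compression.apply_lz4_compression

# ...or turn compression off without touching the serde pipeline.
compression._apply_compress_scheme = compression.apply_no_compression
```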
| Hi @mccorby , I would like to work on this.
So, I guess I would need to remove the ZSTD functionalities in the `test` folder right? Is that it? Or do I need to remove ZSTD from the `compression.py` file in `PySyft/syft/serde/` too?
We'll probably [still want compression](https://eng.uber.com/trip-data-squeeze-json-encoding-compression/) with Protobuf, but we can probably use zlib instead of zstd (if that's any easier to install.)
Could we proceed with two steps? First remove ZSTD, then add another compression tool?
Sounds good to me!
@r0cketr1kky : Ideally, ZSTD should be removed from any place where it's used. This should include:
- Production code
- Tests
- Requirement files
And double check that it's not present either in any notebook (I think it's not).
Special care has to be taken when removing it from the production code (`serde`) as we want to leave the door open to have some other compression tool.
Hey, @mccorby I would like to work on this, is this issue taken up by someone?
@r0cketr1kky are you working on this? | 2020-03-06T20:17:53 |
OpenMined/PySyft | 3,179 | OpenMined__PySyft-3179 | [
"3141"
] | 0561a5d21c1787afeeb9753a0487f0c91d38a07d | diff --git a/syft/federated/floptimizer.py b/syft/federated/floptimizer.py
new file mode 100644
--- /dev/null
+++ b/syft/federated/floptimizer.py
@@ -0,0 +1,31 @@
+"""to maintain a list of optimizer objects,
+one for each worker and use them in the appropriate context"""
+import copy
+
+
+class Optims:
+ """to create a list of optimizer objects"""
+
+ def __init__(self, workers, optim):
+ """
+ args:
+ workers: list of worker ids
+            optim: a pytorch optimizer instance (its state is copied for each worker)
+ """
+ self.optim = optim
+ self.workers = workers
+ self.optimizers = {}
+ for worker in workers:
+ self.optimizers[str(worker)] = copy.copy(self.optim)
+ self.optimizers[str(worker)].load_state_dict((self.optim).state_dict())
+
+ def get_optim(self, worker):
+ """returns optimizer for worker
+ args:
+ worker: worker id
+ """
+ return self.optimizers[str(worker)]
+
+ def count(self):
+ """returns the number of optimizer objects"""
+ return len(self.workers)
| FLOptimizer
**Is your feature request related to a problem? Please describe.**
Some optimizers (such as Adam) do not work with Federated Learning by default because they maintain a cache of the N most recent gradients. For example, take this demo (https://github.com/OpenMined/PySyft/blob/master/examples/tutorials/Part%2002%20-%20Intro%20to%20Federated%20Learning.ipynb), replace the optimizer with Adam, and you'll hit a bug.
The solution is to have a separate optimizer for each worker in the FL process, which you use at the appropriate time.
**Describe the solution you'd like**
Obviously this would be really annoying to have to write by hand every time... so please create an FLOptimizer which will, under the hood, maintain a list of optimizer objects, one for each worker, and use them in the appropriate context.
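Usage might then look like this (a sketch assuming the helper in the patch above is available as `syft.federated.floptimizer.Optims`; the worker ids and toy model are illustrative, not part of the final API):
```python
import torch
import torch.nn as nn
import torch.optim as optim
from syft.federated.floptimizer import Optims

model = nn.Linear(2, 1)
workers = ["alice", "bob"]  # hypothetical worker ids
optims = Optims(workers, optim=optim.Adam(params=model.parameters(), lr=0.1))

for worker in workers:
    opt = optims.get_optim(worker)  # each worker keeps its own Adam state
    opt.zero_grad()
    loss = model(torch.rand(4, 2)).sum()
    loss.backward()
    opt.step()
```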
| I would like to take this @iamtrask.
Thank you! I assigned you @roshray :D
@roshray please DM me on slack when you're done - we have someone who needs this feature asap
Sure @iamtrask
is this issue fixed? or @roshray still working on it?
I would like to work on this issue @iamtrask, if it's still open. Do you want to work on this together @ratmcu?
Can I take up this issue? | 2020-03-11T13:55:35 |
|
OpenMined/PySyft | 3,195 | OpenMined__PySyft-3195 | [
"2500"
] | e459e38449abca624aa83d3c0e55c2d121c9d006 | diff --git a/syft/frameworks/torch/tensors/interpreters/precision.py b/syft/frameworks/torch/tensors/interpreters/precision.py
--- a/syft/frameworks/torch/tensors/interpreters/precision.py
+++ b/syft/frameworks/torch/tensors/interpreters/precision.py
@@ -845,6 +845,35 @@ def linear(*args):
module.linear = linear
+ def dropout(input, p=0.5, training=True, inplace=False):
+ """
+        The dropout class calls functional dropout, so we overload the functional
+        version so that it also works for the dropout layer class.
+ Ref: https://stackoverflow.com/questions/54109617/implementing-dropout-from-scratch
+ """
+ if training:
+ binomial = torch.distributions.binomial.Binomial(probs=1 - p)
+
+ # we must convert the normal tensor to fixed precision before multiplication
+ noise = (
+ (
+ binomial.sample(input.shape).type(torch.FloatTensor)
+ * (1.0 / (1.0 - p))
+ )
+ .fix_prec(**input.get_class_attributes())
+ .child
+ )
+
+ if inplace:
+ input = input * noise
+ return input
+
+ return input * noise
+
+ return input
+
+ module.dropout = dropout
+
module.conv2d = conv2d
module.functional = functional
| diff --git a/test/torch/tensors/test_precision.py b/test/torch/tensors/test_precision.py
--- a/test/torch/tensors/test_precision.py
+++ b/test/torch/tensors/test_precision.py
@@ -589,6 +589,29 @@ def test_torch_nn_functional_linear():
assert (result == expected).all()
+def test_torch_nn_functional_dropout(workers):
+ # Only for precision tensor
+ a = torch.rand((20, 20))
+ x = a.fix_prec()
+
+ train_output = F.dropout(x, p=0.5, training=True, inplace=False)
+ assert (train_output.float_prec() == 0).sum() > 0
+
+ test_output = F.dropout(x, p=0.5, training=False, inplace=False)
+ # should return the same input
+ assert ((test_output == x).float_prec() == 1).all()
+
+ # For AST with precision
+ bob, alice, james = (workers["bob"], workers["alice"], workers["james"])
+ x = a.fix_prec().share(alice, bob, crypto_provider=james)
+
+ train_output = F.dropout(x, p=0.5, training=True, inplace=False)
+ assert (train_output.get().float_prec() == 0).sum() > 0
+
+ test_output = F.dropout(x, p=0.5, training=False, inplace=False)
+ assert ((test_output == x).get().float_prec() == 1).all()
+
+
def test_operate_with_integer_constants():
x = torch.tensor([1.0])
x_fp = x.fix_precision()
| Dropout is currently not supported for SMPC
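With this patch, `F.dropout` also works on fixed precision (and additively shared) tensors; a minimal sketch mirroring the added test, assuming the usual `sy.TorchHook(torch)` setup:
```python
import torch
import torch.nn.functional as F

x = torch.rand(20, 20).fix_prec()

out = F.dropout(x, p=0.5, training=True)   # roughly half the entries are zeroed
assert (out.float_prec() == 0).sum() > 0

out = F.dropout(x, p=0.5, training=False)  # eval mode returns the input unchanged
assert ((out == x).float_prec() == 1).all()
```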
| Hey, can I work on this?
I just assigned you :) | 2020-03-14T18:20:20 |
OpenMined/PySyft | 3,199 | OpenMined__PySyft-3199 | [
"3184"
] | a3f6a1c3ce6beef35dccf73954f213ccf739ecfb | diff --git a/syft/execution/plan.py b/syft/execution/plan.py
--- a/syft/execution/plan.py
+++ b/syft/execution/plan.py
@@ -232,9 +232,15 @@ def add_placeholder(self, tensor, node_type=None):
f"Please use instead torch.tensor(..., dtype=torch.int32) for example."
)
placeholder.tags.add(f"#input-{self._tmp_args_ids.index(tensor.id)}")
+ if tensor.id in self._tmp_result_ids:
+ placeholder.tags.add(f"#output-{self._tmp_result_ids.index(tensor.id)}")
+
elif node_type == "output":
if tensor.id in self._tmp_result_ids:
placeholder.tags.add(f"#output-{self._tmp_result_ids.index(tensor.id)}")
+
+ if tensor.id in self._tmp_args_ids:
+ placeholder.tags.add(f"#input-{self._tmp_result_ids.index(tensor.id)}")
else:
raise ValueError("node_type should be 'input' or 'output'.")
@@ -318,6 +324,9 @@ def build(self, *args):
results = (results,) if not isinstance(results, tuple) else results
self._tmp_result_ids = [t.id for t in results if isinstance(t, FrameworkTensor)]
+ for arg in args:
+ self.replace_with_placeholders(arg, node_type="input")
+
for log in sy.hook.trace.logs:
command, response = log
command_placeholders = self.replace_with_placeholders(command, node_type="input")
| diff --git a/test/execution/test_plan.py b/test/execution/test_plan.py
--- a/test/execution/test_plan.py
+++ b/test/execution/test_plan.py
@@ -1096,3 +1096,33 @@ def foo(x, state):
y = torchscript_plan(t)
assert (y == th.tensor([3.0, 5])).all()
+
+
+def test_plan_input_usage(hook):
+ x11 = th.tensor([-1, 2.0]).tag("input_data")
+ x12 = th.tensor([1, -2.0]).tag("input_data2")
+
+ device_1 = sy.VirtualWorker(hook, id="test_dev_1", data=(x11, x12))
+
+ @sy.func2plan()
+ def plan_test_1(x, y):
+ return x
+
+ @sy.func2plan()
+ def plan_test_2(x, y):
+ return y
+
+ pointer_to_data_1 = device_1.search("input_data")[0]
+ pointer_to_data_2 = device_1.search("input_data2")[0]
+
+ plan_test_1.build(th.tensor([1.0, -2.0]), th.tensor([1, 2]))
+ pointer_plan = plan_test_1.send(device_1)
+ pointer_to_result = pointer_plan(pointer_to_data_1, pointer_to_data_2)
+ result = pointer_to_result.get()
+ assert (result == x11).all()
+
+ plan_test_2.build(th.tensor([1.0, -2.0]), th.tensor([1, 2]))
+ pointer_plan = plan_test_2.send(device_1)
+ pointer_to_result = pointer_plan(pointer_to_data_1, pointer_to_data_2)
+ result = pointer_to_result.get()
+ assert (result == x12).all
| Improve Plans flexibility
**Is your feature request related to a problem? Please describe.**
Plans currently don't work exactly like normal functions in the following scenarios:
- if some inputs are not used in the computations
- if some inputs are not used and are returned in the output
- if the input/output contains nested objects like lists or tuples
**Describe the solution you'd like**
We want to support this by default.
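A sketch of the first scenario after this change, adapted from the added test (`device`, `ptr_x` and `ptr_y` stand for a remote worker and pointers to its data):
```python
import torch as th
import syft as sy

@sy.func2plan()
def first_of_two(x, y):
    return x  # y is received but never used in the computation

first_of_two.build(th.tensor([1.0, -2.0]), th.tensor([1, 2]))

ptr_plan = first_of_two.send(device)
result = ptr_plan(ptr_x, ptr_y).get()  # equals the tensor behind ptr_x
```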
| I will have a look at this. | 2020-03-15T17:29:16 |
OpenMined/PySyft | 3,200 | OpenMined__PySyft-3200 | [
"3180"
] | a3f6a1c3ce6beef35dccf73954f213ccf739ecfb | diff --git a/syft/frameworks/torch/hook/hook.py b/syft/frameworks/torch/hook/hook.py
--- a/syft/frameworks/torch/hook/hook.py
+++ b/syft/frameworks/torch/hook/hook.py
@@ -589,10 +589,11 @@ def module_is_missing_grad(model):
def create_grad_objects(model):
"""Assigns gradient to model parameters if not assigned"""
for p in model.parameters():
- o = p.sum()
- o.backward()
- if p.grad is not None:
- p.grad -= p.grad
+ if p.requires_grad: # check if the object requires a grad object
+ o = p.sum()
+ o.backward()
+ if p.grad is not None:
+ p.grad -= p.grad
def module_send_(nn_self, *dest, force_send=False, **kwargs):
"""Overloads torch.nn instances so that they could be sent to other workers"""
| diff --git a/test/torch/hook/test_hook.py b/test/torch/hook/test_hook.py
--- a/test/torch/hook/test_hook.py
+++ b/test/torch/hook/test_hook.py
@@ -1,8 +1,6 @@
import pytest
import torch
-
-
-# import syft
+import syft
@pytest.mark.skipif(not torch.cuda.is_available(), reason="cuda not available")
@@ -60,3 +58,41 @@ def test_param_data(): # pragma: no cover
param.data = data2
assert (param.data == data2).all()
assert param.is_cuda
+
+
+def test_send_frozen():
+ hook = syft.TorchHook(torch)
+ worker = syft.VirtualWorker(hook, id="worker")
+
+ d_in, h, d_out = 1000, 100, 10
+
+ model = torch.nn.Sequential(
+ torch.nn.Linear(d_in, h), torch.nn.ReLU(), torch.nn.Linear(h, d_out)
+ )
+
+ for param in model.parameters():
+ param.requires_grad = False
+
+ model.send(worker)
+
+
+def test_send_partially_frozen():
+ hook = syft.TorchHook(torch)
+ worker = syft.VirtualWorker(hook, id="worker")
+
+ d_in, h1, h2, d_out = 1000, 1000, 100, 10
+
+ model = torch.nn.Sequential(
+ torch.nn.Linear(d_in, h1),
+ torch.nn.ReLU(),
+ torch.nn.Linear(h1, h2),
+ torch.nn.ReLU(),
+ torch.nn.Linear(h2, d_out),
+ )
+
+ for layer_idx, param in enumerate(model.parameters()):
+ if layer_idx > 2: # freezing the first two layers
+ pass
+ param.requires_grad = False
+
+ model.send(worker)
| Runtime Error asking all parameters to have requires_grad=True
**Describe the bug**
I'm trying to fine-tune an AlexNet model: I've set the parameters of the model (except for the final layer) to requires_grad=False and created a new classification layer with the desired number of outputs. However, the .send() function keeps throwing a runtime error `RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn`:
```
import syft
import torch
from torchvision import models
import torch.nn as nn
hook = syft.TorchHook(torch)
worker = syft.VirtualWorker(hook, id="worker")
model = models.alexnet(pretrained=True)
for param in model.parameters():
param.requires_grad=False
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 3)
model.send(worker)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-15-a250859d9a13> in <module>
----> 1 model.send(worker)
~/implementation/PyGrid/gateway/src/syft/syft/frameworks/torch/hook/hook.py in module_send_(nn_self, force_send, *dest, **kwargs)
608
609 if module_is_missing_grad(nn_self):
--> 610 create_grad_objects(nn_self)
611
612 for p in nn_self.parameters():
~/implementation/PyGrid/gateway/src/syft/syft/frameworks/torch/hook/hook.py in create_grad_objects(model)
600 for p in model.parameters():
601 o = p.sum()
--> 602 o.backward()
603 if p.grad is not None:
604 p.grad -= p.grad
~/implementation/PyGrid/gateway/src/syft/syft/generic/frameworks/hook/trace.py in trace_wrapper(*args, **kwargs)
81 syft.hook.trace.logs.append((command, response))
82 else:
---> 83 response = func(*args, **kwargs)
84
85 return response
~/implementation/PyGrid/gateway/src/syft/syft/generic/frameworks/hook/hook.py in overloaded_native_method(self, *args, **kwargs)
436 except BaseException as e:
437 # we can make some errors more descriptive with this method
--> 438 raise route_method_exception(e, self, args, kwargs)
439
440 else: # means that there is a wrapper to remove
~/implementation/PyGrid/gateway/src/syft/syft/generic/frameworks/hook/hook.py in overloaded_native_method(self, *args, **kwargs)
432
433 try:
--> 434 response = method(*args, **kwargs)
435
436 except BaseException as e:
~/anaconda3/lib/python3.7/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
193 products. Defaults to ``False``.
194 """
--> 195 torch.autograd.backward(self, gradient, retain_graph, create_graph)
196
197 def register_hook(self, hook):
~/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
97 Variable._execution_engine.run_backward(
98 tensors, grad_tensors, retain_graph, create_graph,
---> 99 allow_unreachable=True) # allow_unreachable flag
100
101
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
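For reference, the fix above skips the dummy backward pass for parameters with `requires_grad=False`, so sending a (partially) frozen model works; a sketch mirroring the added test:
```python
import torch
import syft

hook = syft.TorchHook(torch)
worker = syft.VirtualWorker(hook, id="worker")

model = torch.nn.Sequential(
    torch.nn.Linear(1000, 100), torch.nn.ReLU(), torch.nn.Linear(100, 10)
)
for param in model.parameters():
    param.requires_grad = False  # fully frozen model

model.send(worker)  # no longer raises the RuntimeError above
```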
| @avinath1998 If you are trying to change the whole **model.classifier** block, then your input dimension (which you put 2) is wrong. The input dimension of **model.classifier** should be 256x6x6.
refer: https://github.com/pytorch/vision/blob/master/torchvision/models/alexnet.py
```
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(256 * 6 * 6, 4096),
nn.ReLU(inplace=True),
nn.Dropout(),
nn.Linear(4096, 4096),
nn.ReLU(inplace=True),
nn.Linear(4096, num_classes),
)
```
@imraniac Oh shoot, I only added this as an example and didn't see that. Regardless, the error still comes up; I've updated the issue accordingly.
Have you updated the issue with the example? It should be 256 * 6 * 6 and not 224.
@imraniac updated accordingly, still doesn't work, the error occurs
I will check this.
@tudorcebere, assigned you | 2020-03-15T22:15:25 |
OpenMined/PySyft | 3,205 | OpenMined__PySyft-3205 | [
"3131"
] | e4c703e8b037bee29c0dde7a272d1a97095ef481 | diff --git a/syft/frameworks/torch/tensors/interpreters/precision.py b/syft/frameworks/torch/tensors/interpreters/precision.py
--- a/syft/frameworks/torch/tensors/interpreters/precision.py
+++ b/syft/frameworks/torch/tensors/interpreters/precision.py
@@ -449,6 +449,10 @@ def matmul(self, *args, **kwargs):
__matmul__ = matmul
mm = matmul
+ def reciprocal(self):
+ ones = self * 0 + 1
+ return ones / self
+
# Approximations:
def inverse(self, iterations=8):
"""
@@ -482,45 +486,93 @@ def exp(self, iterations=8):
"""
return (1 + self / 2 ** iterations) ** (2 ** iterations)
- def sigmoid(self, method="exp"):
+ def sign(self):
+ return (self > 0) + (self < 0) * (-1)
+
+ @staticmethod
+ def _sigmoid_exp(tensor):
"""
- Approximates the sigmoid function
+ Implementation taken from FacebookResearch - CrypTen project
+
+ Compute the sigmoid using the exp approximation
+ sigmoid(x) = 1 / (1 + exp(-x))
+
+ For stability:
+ sigmoid(x) = (sigmoid(|x|) - 0.5) * sign(x) + 0.5
+
+ Ref: https://timvieira.github.io/blog/post/2014/02/11/exp-normalize-trick/#numerically_stable_sigmoid_function
Args:
- self: the fixed precision tensor
- method (str): (default = "exp")
- "exp": Use the exponential approximation and the sigmoid definition
- sigmoid(x) = 1 / (1 + exp(-x))
- "maclaurin": Use the Maclaurin / Taylor approximation, with polynomial
- interpolation of degree 5 over [-8,8]
- NOTE: This method is faster but not as precise as "exp"
- Ref: https://mortendahl.github.io/2017/04/17/private-deep-learning-with-mpc/#approximating-sigmoid
- """
-
- if method == "exp":
- # Inverse can only be used on matrices
- if len(self.shape) == 1:
- one = self * 0 + 1
- result = one / (1 + (self * -1).exp())
- else:
- result = (1 + (self * -1).exp()).inverse()
+ tensor (tensor): values where sigmoid should be approximated
+ """
- elif method == "maclaurin":
- weights = (
- torch.tensor([0.5, 1.91204779e-01, -4.58667307e-03, 4.20690803e-05])
- .fix_precision(**self.get_class_attributes())
- .child
- )
- degrees = [0, 1, 3, 5]
+ sign = tensor.sign()
+
+ # Make sure the elements are all positive
+ x = tensor * sign
+ ones = tensor * 0 + 1
+ half = ones.div(2)
+ result = (ones + (-ones * x).exp()).reciprocal()
+ return (result - half) * sign + half
+
+ @staticmethod
+ def _sigmoid_maclaurin(tensor):
+ """
+ Approximates the sigmoid function using Maclaurin, with polynomial
+ interpolation of degree 5 over [-8,8]
+ NOTE: This method is faster but not as precise as "exp"
+ Ref: https://mortendahl.github.io/2017/04/17/private-deep-learning-with-mpc/#approximating-sigmoid
+
+ Args:
+ tensor (tensor): values where sigmoid should be approximated
+ """
+
+ weights = (
+ torch.tensor([0.5, 1.91204779e-01, -4.58667307e-03, 4.20690803e-05])
+ .fix_precision(**tensor.get_class_attributes())
+ .child
+ )
+ degrees = [0, 1, 3, 5]
- # initiate with term of degree 0 to avoid errors with tensor ** 0
- one = self * 0 + 1
- result = one * weights[0]
- for i, d in enumerate(degrees[1:]):
- result += (self ** d) * weights[i + 1]
+ # initiate with term of degree 0 to avoid errors with tensor ** 0
+ one = tensor * 0 + 1
+ result = one * weights[0]
+ for i, d in enumerate(degrees[1:]):
+ result += (tensor ** d) * weights[i + 1]
return result
+ @staticmethod
+ def _sigmoid_chebyshev(tensor, maxval: int = 6, terms: int = 32):
+ """
+ Implementation taken from FacebookResearch - CrypTen project
+ Computes the sigmoid function as
+ sigmoid(x) = (tanh(x /2) + 1) / 2
+
+ Tanh is approximated using chebyshev polynomials
+ Args:
+ maxval (int): interval width used for tanh chebyshev polynomials
+ terms (int): highest degree of Chebyshev polynomials for tanh.
+ Must be even and at least 6.
+ """
+ tanh_approx = tensor._tanh_chebyshev(tensor.div(2), maxval, terms)
+
+ return tanh_approx.div(2) + 0.5
+
+ def sigmoid(tensor, method="exp"):
+ """
+ Approximates the sigmoid function using a given method
+
+ Args:
+ tensor: the fixed precision tensor
+ method (str): (default = "chebyshev")
+ Possible values: "exp", "maclaurin", "chebyshev"
+ """
+
+ sigmoid_f = getattr(tensor, f"_sigmoid_{method}")
+
+ return sigmoid_f(tensor)
+
def log(self, iterations=2, exp_iterations=8):
"""Approximates the natural logarithm using 8th order modified Householder iterations.
Recall that Householder method is an algorithm to solve a non linear equation f(x) = 0.
@@ -560,6 +612,7 @@ def _tanh_chebyshev(tensor, maxval: int = 6, terms: int = 32):
where c_i is the ith Chebyshev series coefficient and P_i is ith polynomial.
The approximation is truncated to +/-1 outside [-maxval, maxval].
Args:
+ tensor (tensor): values where the tanh needs to be approximated
maxval (int): interval width used for computing chebyshev polynomials
terms (int): highest degree of Chebyshev polynomials.
Must be even and at least 6.
@@ -593,9 +646,12 @@ def _tanh_chebyshev(tensor, maxval: int = 6, terms: int = 32):
@staticmethod
def _tanh_sigmoid(tensor):
"""
- Compute the tanh using the sigmoid
+ Compute the tanh using the sigmoid approximation
+ Args:
+ tensor (tensor): values where tanh should be approximated
"""
+
return 2 * torch.sigmoid(2 * tensor) - 1
def tanh(tensor, method="chebyshev"):
| diff --git a/test/conftest.py b/test/conftest.py
--- a/test/conftest.py
+++ b/test/conftest.py
@@ -17,6 +17,20 @@
from syft.workers.websocket_server import WebsocketServerWorker
+def pytest_sessionstart(session):
+ session.failed_tests = set()
+
+
+def pytest_runtest_makereport(item, call): # pragma: no cover
+ if call.excinfo is not None and item.originalname:
+ item.session.failed_tests.add(item.originalname)
+
+
+def pytest_runtest_setup(item): # pragma: no cover
+ if item.originalname in item.session.failed_tests:
+ pytest.skip("previous test failed (%s)" % item.name)
+
+
def _start_proc(participant, dataset: str = None, **kwargs): # pragma: no cover
"""Helper function for spinning up a websocket participant."""
diff --git a/test/torch/tensors/test_precision.py b/test/torch/tensors/test_precision.py
--- a/test/torch/tensors/test_precision.py
+++ b/test/torch/tensors/test_precision.py
@@ -414,114 +414,110 @@ def test_torch_inverse_approx(workers):
assert (diff / (tolerance * norm)) < 1
-def test_torch_exp_approx(workers):
[email protected]("prec_frac, tolerance", [(3, 20 / 100), (4, 5 / 100), (5, 4 / 100)])
+def test_torch_exp_approx(prec_frac, tolerance, workers):
"""
Test the approximate exponential with different tolerance depending on
the precision_fractional considered
"""
alice, bob, james = workers["alice"], workers["bob"], workers["james"]
- fix_prec_tolerance = {3: 20 / 100, 4: 5 / 100, 5: 5 / 100}
-
- for prec_frac, tolerance in fix_prec_tolerance.items():
- cumsum = torch.zeros(5)
- for i in range(10):
- t = torch.tensor([0.0, 1, 2, 3, 4])
- t_sh = t.fix_precision(precision_fractional=prec_frac).share(
- alice, bob, crypto_provider=james
- )
- r_sh = t_sh.exp()
- r = r_sh.get().float_prec()
- t = t.exp()
- diff = (r - t).abs()
- norm = (r + t) / 2
- cumsum += diff / (tolerance * norm)
-
- cumsum /= 10
- assert (cumsum < 1).all()
-
-
-def test_torch_sigmoid_approx(workers):
+ cumsum = torch.zeros(5)
+ for i in range(10):
+ t = torch.tensor([0.0, 1, 2, 3, 4])
+ t_sh = t.fix_precision(precision_fractional=prec_frac).share(
+ alice, bob, crypto_provider=james
+ )
+ r_sh = t_sh.exp()
+ r = r_sh.get().float_prec()
+ t = t.exp()
+ diff = (r - t).abs()
+ norm = (r + t) / 2
+ cumsum += diff / (tolerance * norm)
+
+ cumsum /= 10
+ assert (cumsum < 1).all()
+
+
[email protected](
+ "method, prec_frac, tolerance",
+ [
+ ("chebyshev", 3, 6 / 100),
+ ("chebyshev", 4, 1 / 1000),
+ ("exp", 3, 6 / 100),
+ ("exp", 4, 1 / 100),
+ ("maclaurin", 3, 7 / 100),
+ ("maclaurin", 4, 15 / 100),
+ ],
+)
+def test_torch_sigmoid_approx(method, prec_frac, tolerance, workers):
"""
Test the approximate sigmoid with different tolerance depending on
the precision_fractional considered
"""
alice, bob, james = workers["alice"], workers["bob"], workers["james"]
- fix_prec_tolerance_by_method = {
- "exp": {3: 6 / 100, 4: 1 / 100, 5: 1 / 100},
- "maclaurin": {3: 7 / 100, 4: 15 / 100, 5: 15 / 100},
- }
-
- for method, fix_prec_tolerance in fix_prec_tolerance_by_method.items():
- for prec_frac, tolerance in fix_prec_tolerance.items():
- t = torch.tensor(range(-10, 10)) * 0.5
- t_sh = t.fix_precision(precision_fractional=prec_frac).share(
- alice, bob, crypto_provider=james
- )
- r_sh = t_sh.sigmoid(method=method)
- r = r_sh.get().float_prec()
- t = t.sigmoid()
- diff = (r - t).abs().max()
- norm = (r + t).abs().max() / 2
-
- assert (diff / (tolerance * norm)) < 1
-
-
-def test_torch_tanh_approx(workers):
+ t = torch.tensor(range(-10, 10)) * 0.5
+ t_sh = t.fix_precision(precision_fractional=prec_frac).share(alice, bob, crypto_provider=james)
+ r_sh = t_sh.sigmoid(method=method)
+ r = r_sh.get().float_prec()
+ t = t.sigmoid()
+ diff = (r - t).abs().max()
+ norm = (r + t).abs().max() / 2
+
+ assert (diff / (tolerance * norm)) < 1
+
+
[email protected](
+ "method, prec_frac, tolerance",
+ [
+ ("chebyshev", 3, 3 / 100),
+ ("chebyshev", 4, 2 / 100),
+ ("sigmoid", 3, 10 / 100),
+ ("sigmoid", 4, 5 / 100),
+ ],
+)
+def test_torch_tanh_approx(method, prec_frac, tolerance, workers):
"""
Test the approximate tanh with different tolerance depending on
the precision_fractional considered
"""
alice, bob, james = workers["alice"], workers["bob"], workers["james"]
- fix_prec_tolerance_by_method = {
- "chebyshev": {3: 3 / 100, 4: 3 / 100, 5: 3 / 100},
- "sigmoid": {3: 10 / 100, 4: 15 / 100, 5: 15 / 100},
- }
+ t = torch.tensor(range(-10, 10)) * 0.5
+ t_sh = t.fix_precision(precision_fractional=prec_frac).share(alice, bob, crypto_provider=james)
+ r_sh = t_sh.tanh(method)
+ r = r_sh.get().float_prec()
+ t = t.tanh()
+ diff = (r - t).abs().max()
+ norm = (r + t).abs().max() / 2
- for method, fix_prec_tolerance in fix_prec_tolerance_by_method.items():
- for prec_frac, tolerance in fix_prec_tolerance.items():
- t = torch.tensor(range(-6, 6)) * 0.5
- t_sh = t.fix_precision(precision_fractional=prec_frac).share(
- alice, bob, crypto_provider=james
- )
- r_sh = t_sh.tanh(method)
- r = r_sh.get().float_prec()
- t = t.tanh()
- print(method, prec_frac, tolerance)
- print(r)
- print(t)
- diff = (r - t).abs().max()
- norm = (r + t).abs().max() / 2
-
- assert (diff / (tolerance * norm)) < 1
+ assert (diff / (tolerance * norm)) < 1
-def test_torch_log_approx(workers):
[email protected]("prec_frac, tolerance", [(3, 100 / 100), (4, 3 / 100),])
+def test_torch_log_approx(prec_frac, tolerance, workers):
"""
Test the approximate logarithm with different tolerance depending on
the precision_fractional considered
"""
alice, bob, james = workers["alice"], workers["bob"], workers["james"]
- fix_prec_tolerance = {3: 100 / 100, 4: 3 / 100, 5: 2 / 100}
-
- for prec_frac, tolerance in fix_prec_tolerance.items():
- cumsum = torch.zeros(9)
- for i in range(10):
- t = torch.tensor([0.1, 0.5, 2, 5, 10, 20, 50, 100, 250])
- t_sh = t.fix_precision(precision_fractional=prec_frac).share(
- alice, bob, crypto_provider=james
- )
- r_sh = t_sh.log()
- r = r_sh.get().float_prec()
- t = t.log()
- diff = (r - t).abs()
- norm = (r + t) / 2
- cumsum += diff / (tolerance * norm)
- cumsum /= 10
- assert (cumsum.abs() < 1).all()
+ cumsum = torch.zeros(9)
+ for i in range(10):
+ t = torch.tensor([0.1, 0.5, 2, 5, 10, 20, 50, 100, 250])
+ t_sh = t.fix_precision(precision_fractional=prec_frac).share(
+ alice, bob, crypto_provider=james
+ )
+ r_sh = t_sh.log()
+ r = r_sh.get().float_prec()
+ t = t.log()
+ diff = (r - t).abs()
+ norm = (r + t) / 2
+ cumsum += diff / (tolerance * norm)
+
+ cumsum /= 10
+ assert (cumsum.abs() < 1).all()
def test_torch_conv2d(workers):
| Sigmoid in SMPC failing for some shapes
**Describe the bug**
Examples to illustrate:
Working
```python
a = torch.Tensor([1,2,3])
a = a.fix_precision().share(bob, alice, crypto_provider = crypto_provider, requires_grad = True)
torch.nn.Sigmoid()(a)
```
Not working
```python
a = torch.Tensor([[1,2,3],[4,5,6]])
a = a.fix_precision().share(bob, alice, crypto_provider = crypto_provider, requires_grad = True)
torch.nn.Sigmoid()(a)
```
Error: `AssertionError: Must be batches of square matrices`
(same with `requires_grad=False`)
The error seems linked to the tensor shape.
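With the patch above, `sigmoid` no longer goes through the square-matrix `inverse` path and exposes `exp`, `maclaurin` and `chebyshev` approximations; a sketch using the workers from the snippet above:
```python
a = torch.Tensor([[1, 2, 3], [4, 5, 6]])
a = a.fix_precision().share(bob, alice, crypto_provider=crypto_provider)

for method in ("exp", "maclaurin", "chebyshev"):
    r = a.sigmoid(method=method)  # no square-matrix restriction any more
    print(method, r.get().float_prec())
```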
| It worked before by the way. Like a month ago
Might need to add a test for this.
I can take it @LaRiffle | 2020-03-16T22:45:25 |
OpenMined/PySyft | 3,256 | OpenMined__PySyft-3256 | [
"3214"
] | d1bfd6dc0d7148afe9cad8611debdbf58eb490c0 | diff --git a/syft/execution/placeholder.py b/syft/execution/placeholder.py
--- a/syft/execution/placeholder.py
+++ b/syft/execution/placeholder.py
@@ -30,12 +30,10 @@ def instantiate(self, tensor):
Add a tensor as a child attribute. All operations on the placeholder will be also
executed on this child tensor.
- We remove wrappers or Placeholders if is there are any.
+ We remove Placeholders if is there are any.
"""
if isinstance(tensor, PlaceHolder):
self.child = tensor.child
- elif tensor.is_wrapper:
- self.instantiate(tensor.child)
else:
self.child = tensor
return self
| diff --git a/test/execution/test_plan.py b/test/execution/test_plan.py
--- a/test/execution/test_plan.py
+++ b/test/execution/test_plan.py
@@ -7,6 +7,7 @@
import torch.optim as optim
import syft as sy
+from itertools import starmap
from syft.generic.pointers.pointer_tensor import PointerTensor
from syft.generic.frameworks.types import FrameworkTensor
from syft.execution.plan import Plan
@@ -641,6 +642,13 @@ def forward(self, x):
assert th.all(decrypted - expected.detach() < 1e-2)
# assert fetched_plan.state.state_placeholders != plan.state.state_placeholders #TODO
+ assert all(
+ starmap(
+ lambda fetched_tensor, tensor: (fetched_tensor == tensor).get(),
+ zip(fetched_plan.state.tensors(), plan.state.tensors()),
+ )
+ )
+
# Make sure fetched_plan is using the readable_plan
assert fetched_plan.forward is None
assert fetched_plan.is_built
| Fix parameter serialization
In some situations, parameters are not serialized properly. I suspect this is due to our implementation of parameter.data.
Here is one example:
```python
class Net(sy.Plan):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(1, 1)
def forward(self, x):
return self.fc1(x)
plan = Net()
plan.build(th.tensor([1.2]))
x = th.tensor([-1.0])
expected = plan(x)
plan.fix_precision().share(alice, bob, crypto_provider=charlie)
print(plan.state.tensors())
ptr_plan = plan.send(james)
# Fetch plan
fetched_plan = plan.owner.fetch_plan(ptr_plan.id_at_location, james)
print('***')
print(fetched_plan.state.tensors())
```
Output
```
[Parameter containing:
(Wrapper)>FixedPrecisionTensor>[AdditiveSharingTensor]
-> [PointerTensor | me:94226517866 -> alice:74685210613]
-> [PointerTensor | me:30028513485 -> bob:91228892047]
*crypto provider: charlie*, Parameter containing:
(Wrapper)>FixedPrecisionTensor>[AdditiveSharingTensor]
-> [PointerTensor | me:16955185561 -> alice:5015164314]
-> [PointerTensor | me:77573712688 -> bob:21883177159]
*crypto provider: charlie*]
***
[FixedPrecisionTensor>[AdditiveSharingTensor]
-> [PointerTensor | me:94226517866 -> alice:74685210613]
-> [PointerTensor | me:30028513485 -> bob:91228892047]
*crypto provider: charlie*, FixedPrecisionTensor>[AdditiveSharingTensor]
-> [PointerTensor | me:16955185561 -> alice:5015164314]
-> [PointerTensor | me:77573712688 -> bob:21883177159]
*crypto provider: charlie*]
```
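The fix above keeps wrappers intact in `PlaceHolder.instantiate`, so the fetched state now matches the original; the added test checks this roughly as follows (reusing `plan` and `fetched_plan` from the snippet above):
```python
for fetched, original in zip(fetched_plan.state.tensors(), plan.state.tensors()):
    assert (fetched == original).get()
```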
| I can take a look | 2020-03-26T14:57:58 |
OpenMined/PySyft | 3,271 | OpenMined__PySyft-3271 | [
"3261"
] | 1b0faa3ede31d481c8b535998dc92b229439f02e | diff --git a/syft/frameworks/torch/tensors/interpreters/native.py b/syft/frameworks/torch/tensors/interpreters/native.py
--- a/syft/frameworks/torch/tensors/interpreters/native.py
+++ b/syft/frameworks/torch/tensors/interpreters/native.py
@@ -813,11 +813,11 @@ def fix_prec(self, *args, storage="auto", field_type="int100", no_wrap: bool = F
kwargs["owner"] = self.owner
if self.is_wrapper:
- self.child = self.child.fix_prec(*args, **kwargs)
+ child = self.child.fix_prec(*args, **kwargs)
if no_wrap:
- return self.child
+ return child
else:
- return self
+ return child.wrap()
base = kwargs.get("base", 10)
prec_fractional = kwargs.get("precision_fractional", 3)
| diff --git a/test/torch/pointers/test_pointer_tensor.py b/test/torch/pointers/test_pointer_tensor.py
--- a/test/torch/pointers/test_pointer_tensor.py
+++ b/test/torch/pointers/test_pointer_tensor.py
@@ -401,17 +401,23 @@ def test_raising_error_when_item_func_called(workers):
def test_fix_prec_on_pointer_tensor(workers):
"""
Ensure .fix_precision() works as expected.
+ Also check that fix_precision() is not inplace.
"""
bob = workers["bob"]
tensor = torch.tensor([1, 2, 3, 4.0])
ptr = tensor.send(bob)
- ptr = ptr.fix_precision()
+ ptr_fp = ptr.fix_precision()
+
remote_tensor = bob._objects[ptr.id_at_location]
+ remote_fp_tensor = bob._objects[ptr_fp.id_at_location]
+
+ # check that fix_precision is not inplace
+ assert (remote_tensor == tensor).all()
assert isinstance(ptr.child, PointerTensor)
- assert isinstance(remote_tensor.child, FixedPrecisionTensor)
+ assert isinstance(remote_fp_tensor.child, FixedPrecisionTensor)
def test_fix_prec_on_pointer_of_pointer(workers):
| fix_precision() is inplace when applied to a pointer tensor.
**Description of the bug**
The method fix_precision() applied to pointers is 'inplace'. This should not be the case.
In the following code:
```
a = torch.Tensor([2., 3.]).send(bob)
a.fix_precision()
a = a.get()
print(type(a))
```
The variable `a` after calling `get()` is a fixed precision tensor, which is a bug, because the original variable `a` defined as `torch.Tensor([2., 3.])` is not a fixed precision tensor.
However, this bug does not exist when `a` is not a pointer:
```
a = torch.Tensor([2., 3.])
a.fix_precision()
print(type(a))
```
`a` here is not a fixed precision tensor, which is the desired behavior.
**Desktop:**
- OS: Archlinux
- Version 0.3.2
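Expected behaviour after the patch above, mirroring the updated test (`bob` is a remote worker as above):
```python
tensor = torch.tensor([1, 2, 3, 4.0])
ptr = tensor.send(bob)

ptr_fp = ptr.fix_precision()  # returns a new pointer to a fixed precision tensor

remote = bob._objects[ptr.id_at_location]
assert (remote == tensor).all()  # the original remote tensor is untouched
```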
| I can take this!!
Hey @AlanAboudib @karlhigley @LaRiffle ,
so if I understand this correctly,
```python
x = th.Tensor([1.,2.,3.]).send(bob)
x_fx = x.fix_precision()
x = x_fx.get()
```
x_fx is a `PointerTensor` and expected behavior is to be a `FixedPrecisionTensor` with `PointerTensor` as child?
Moreover when we .get() the x_fx, it returns `FixedPrecisionTensor`, which I think is a correct behavior, as we haven't called float_precision yet.
not exactly, fix_precision() should be applied anyway on the remote value when called on a pointer (and it works fine)
the issue is that currently we have this for pointers:
```
a = torch.Tensor([2., 3.]).send(bob)
a_fp = a.fix_precision()
# a_fp is a pointer to a fixed precision, but a is too!
```
while for non pointers:
```
a = torch.Tensor([2., 3.])
a_fp = a.fix_precision()
# a_fp is a wrapper onto a fixed precision, but a is not!
```
So for pointers, `fix_precision` behaves as `fix_precision_` while it shouldn't | 2020-03-29T14:41:59 |
OpenMined/PySyft | 3,343 | OpenMined__PySyft-3343 | [
"3316"
] | c41433301bfa1b156a6fe8b3eac226983a56007d | diff --git a/syft/frameworks/torch/tensors/interpreters/additive_shared.py b/syft/frameworks/torch/tensors/interpreters/additive_shared.py
--- a/syft/frameworks/torch/tensors/interpreters/additive_shared.py
+++ b/syft/frameworks/torch/tensors/interpreters/additive_shared.py
@@ -155,6 +155,7 @@ def get(self):
shares.append(share.get())
else:
shares.append(share)
+ self.owner.de_register_obj(share)
res_field = sum(shares) % self.field
@@ -247,7 +248,7 @@ def reconstruct(self):
"""
workers = self.locations
- ptr_to_sh = self.wrap().send(workers[0], **no_wrap)
+ ptr_to_sh = self.copy().wrap().send(workers[0], **no_wrap)
pointer = ptr_to_sh.remote_get()
pointers = [pointer]
@@ -1033,6 +1034,7 @@ def simplify(worker: AbstractWorker, tensor: "AdditiveSharingTensor") -> tuple:
chain = sy.serde.msgpack.serde._simplify(worker, tensor.child)
# Don't delete the remote values of the shares at simplification
+ garbage_collect = tensor.get_garbage_collect_data()
tensor.set_garbage_collect_data(False)
return (
@@ -1040,6 +1042,7 @@ def simplify(worker: AbstractWorker, tensor: "AdditiveSharingTensor") -> tuple:
tensor.field,
sy.serde.msgpack.serde._simplify(worker, tensor.crypto_provider.id),
chain,
+ garbage_collect,
)
@staticmethod
@@ -1055,7 +1058,7 @@ def detail(worker: AbstractWorker, tensor_tuple: tuple) -> "AdditiveSharingTenso
shared_tensor = detail(data)
"""
- tensor_id, field, crypto_provider, chain = tensor_tuple
+ tensor_id, field, crypto_provider, chain, garbage_collect = tensor_tuple
crypto_provider = sy.serde.msgpack.serde._detail(worker, crypto_provider)
tensor = AdditiveSharingTensor(
@@ -1069,6 +1072,8 @@ def detail(worker: AbstractWorker, tensor_tuple: tuple) -> "AdditiveSharingTenso
chain = sy.serde.msgpack.serde._detail(worker, chain)
tensor.child = chain
+ tensor.set_garbage_collect_data(garbage_collect)
+
return tensor
@staticmethod
diff --git a/syft/workers/base.py b/syft/workers/base.py
--- a/syft/workers/base.py
+++ b/syft/workers/base.py
@@ -542,13 +542,12 @@ def execute_communication_action(self, action: CommunicationAction) -> PointerTe
else:
obj = self.get_obj(obj_id)
response = source_worker.send(obj, *destinations, **kwargs_)
-
- response = hook_args.register_response("send", response, [sy.ID_PROVIDER.pop()], self)
-
- # @lariffle: We only remove remote objects when the operations are inplace
- # otherwise we could have stale pointers which we really want to avoid.
- # TODO: needs more discussion
- if kwargs_.get("inplace"):
+ if kwargs_.get("requires_grad", False):
+ response = hook_args.register_response(
+ "send", response, [sy.ID_PROVIDER.pop()], self
+ )
+ else:
+ response.garbage_collect_data = False
self.rm_obj(obj_id)
return response
| diff --git a/test/serde/serde_helpers.py b/test/serde/serde_helpers.py
--- a/test/serde/serde_helpers.py
+++ b/test/serde/serde_helpers.py
@@ -534,6 +534,7 @@ def compare(detailed, original):
msgpack.serde._simplify(
kwargs["workers"]["serde_worker"], ast.child
), # (dict of AbstractTensor) simplified chain
+ ast.get_garbage_collect_data(),
),
),
"cmp_detailed": compare,
diff --git a/test/torch/nn/test_nn.py b/test/torch/nn/test_nn.py
--- a/test/torch/nn/test_nn.py
+++ b/test/torch/nn/test_nn.py
@@ -7,6 +7,23 @@
import syft.frameworks.torch.nn as syft_nn
+def test_nn_linear(workers):
+ torch.manual_seed(121) # Truncation might not always work so we set the random seed
+ bob, alice, james = (workers["bob"], workers["alice"], workers["james"])
+ t = torch.tensor([[1.0, 2]])
+ x = t.fix_prec().share(bob, alice, crypto_provider=james)
+ model = nn.Linear(2, 1)
+ model.weight = nn.Parameter(torch.tensor([[-1.0, 2]]))
+ model.bias = nn.Parameter(torch.tensor([[-1.0]]))
+ model.fix_precision().share(bob, alice, crypto_provider=james)
+
+ y = model(x)
+
+ assert len(alice._objects) == 4 # x, y, weight, bias
+ assert len(bob._objects) == 4
+ assert y.get().float_prec() == torch.tensor([[2.0]])
+
+
def test_conv2d(workers):
"""
Test the nn.Conv2d module to ensure that it produces the exact same
diff --git a/test/torch/pointers/test_pointer_tensor.py b/test/torch/pointers/test_pointer_tensor.py
--- a/test/torch/pointers/test_pointer_tensor.py
+++ b/test/torch/pointers/test_pointer_tensor.py
@@ -289,7 +289,7 @@ def test_move(workers):
x.move(alice)
- assert x.id_at_location in bob._objects
+ assert x.id_at_location not in bob._objects
assert x.id_at_location in alice._objects
x = torch.tensor([1.0, 2, 3, 4, 5], requires_grad=True).send(bob)
@@ -299,7 +299,7 @@ def test_move(workers):
x.move(alice)
- assert x.id_at_location in bob._objects
+ assert x.id_at_location not in bob._objects
assert x.id_at_location in alice._objects
alice.clear_objects()
@@ -554,3 +554,15 @@ def test_setting_back_grad_to_origin_after_move(workers):
z.backward()
assert (x.grad == th.tensor([4.0, 4.0, 4.0, 4.0, 4.0])).all()
+
+
+def test_iadd(workers):
+ alice = workers["alice"]
+ a = torch.ones(1, 5)
+ b = torch.ones(1, 5)
+ a_pt = a.send(alice)
+ b_pt = b.send(alice)
+
+ b_pt += a_pt
+
+ assert len(alice._objects) == 2
diff --git a/test/torch/tensors/test_additive_shared.py b/test/torch/tensors/test_additive_shared.py
--- a/test/torch/tensors/test_additive_shared.py
+++ b/test/torch/tensors/test_additive_shared.py
@@ -493,21 +493,6 @@ def test_roll(workers):
assert (res2.get() == torch.roll(t, (1, 2), dims=(0, 1))).all()
-def test_nn_linear(workers):
- torch.manual_seed(121) # Truncation might not always work so we set the random seed
- bob, alice, james = (workers["bob"], workers["alice"], workers["james"])
- t = torch.tensor([[1.0, 2]])
- x = t.fix_prec().share(bob, alice, crypto_provider=james)
- model = nn.Linear(2, 1)
- model.weight = nn.Parameter(torch.tensor([[-1.0, 2]]))
- model.bias = nn.Parameter(torch.tensor([[-1.0]]))
- model.fix_precision().share(bob, alice, crypto_provider=james)
-
- y = model(x)
-
- assert y.get().float_prec() == torch.tensor([[2.0]])
-
-
def test_matmul(workers):
torch.manual_seed(121) # Truncation might not always work so we set the random seed
bob, alice, james = (workers["bob"], workers["alice"], workers["james"])
@@ -859,3 +844,37 @@ def test_correct_tag_and_description_after_send(workers):
assert me.request_search("tag_additive_test1", location=alice)
assert me.request_search("tag_additive_test2", location=alice)
+
+
+def test_garbage_collect_reconstruct(workers):
+ bob, alice, james, me = (workers["bob"], workers["alice"], workers["james"], workers["me"])
+ a = torch.ones(1, 5)
+ a_sh = a.encrypt(workers=[alice, bob], crypto_provider=james)
+ a_recon = a_sh.child.child.reconstruct()
+
+ assert len(alice._objects) == 2
+ assert len(bob._objects) == 2
+
+
+def test_garbage_collect_move(workers):
+ bob, alice, me = (workers["bob"], workers["alice"], workers["me"])
+ a = torch.ones(1, 5).send(alice)
+ b = a.copy().move(bob)
+
+ assert len(alice._objects) == 1
+ assert len(bob._objects) == 1
+
+
+def test_garbage_collect_mul(workers):
+ bob, alice, james, me = (workers["bob"], workers["alice"], workers["james"], workers["me"])
+ a = torch.ones(1, 5)
+ b = torch.ones(1, 5)
+
+ a = a.encrypt(workers=[alice, bob], crypto_provider=james)
+ b = b.encrypt(workers=[alice, bob], crypto_provider=james)
+
+ for _ in range(3):
+ c = a * b
+
+ assert len(alice._objects) == 3
+ assert len(bob._objects) == 3
| Garbage collection: memory leak in SMPC
**Describe the bug**
Garbage collection is not working as expected in SMPC
**To Reproduce**
```python
import torch as th
import syft as sy
hook = sy.TorchHook(th)
bob = sy.VirtualWorker(hook, id="bob")
alice = sy.VirtualWorker(hook, id="alice")
crypto_provider = sy.VirtualWorker(hook, id="james")
torch = th
syft = sy
class Classifier(torch.nn.Module):
def __init__(self, in_features, out_features):
super(Classifier, self).__init__()
self.fc = torch.nn.Linear(in_features, out_features)
def forward(self, x):
logits = self.fc(x)
return logits
classifier = Classifier(in_features = 5, out_features = 2)
bob.clear_objects()
classifier = classifier.fix_prec().share(bob, alice, crypto_provider = crypto_provider)
for i in range(3):
a = torch.ones(1,5)
b = a.fix_prec().share(bob, alice, crypto_provider = crypto_provider)
classifier(b)
print(len(bob._objects))
```
**Expected behavior**
The number of objects in bob's store shouldn't increase, but currently it does (quite a lot).
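The patch above de-registers stale shares so repeated operations no longer accumulate objects; a reduced check in the spirit of the added tests, reusing the workers from the snippet above:
```python
a = torch.ones(1, 5).encrypt(workers=[alice, bob], crypto_provider=crypto_provider)
b = torch.ones(1, 5).encrypt(workers=[alice, bob], crypto_provider=crypto_provider)

for _ in range(3):
    c = a * b  # intermediate shares are garbage-collected each iteration

assert len(alice._objects) == 3  # shares of a, b and the last c only
assert len(bob._objects) == 3
```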
| I'd like to take up this issue!
And, very importantly, note that if you comment out `classifier(b)` the problem is no longer there.
Thanks @Syzygianinfern0!
Let us know if you're blocked. I've put the issue under "High Priority"
Suggestion: you might want to try first if simple multiplication has a GC issue :)
> Thanks @Syzygianinfern0!
> Let us know if you're blocked. I've put the issue under "High Priority"
Sure. I'll ask for help if stuck.
> Suggestion: you might want to try first if simple multiplication has a GC issue :)
Looks like the normal arithmetic operations don't do that. Just the ones with the more complicated operations do it. Maybe the intermediate tensors during the computation (the ones preserved for the backprop) are the ones with the problem. | 2020-04-12T13:37:35 |
OpenMined/PySyft | 3,370 | OpenMined__PySyft-3370 | [
"3359"
] | 6d2db912d88723b0d6b62fec4471c3780035dcc7 | diff --git a/syft/workers/base.py b/syft/workers/base.py
--- a/syft/workers/base.py
+++ b/syft/workers/base.py
@@ -1178,15 +1178,13 @@ def serializer(self, workers=None) -> codes.TENSOR_SERIALIZATION:
'torch': serialization will only work between workers that support PyTorch
(more to come: 'tensorflow', 'numpy', etc)
"""
- if workers is not None:
- if not isinstance(workers, list):
- workers = [workers]
- else:
+ if workers is None:
workers = [w for w in self._known_workers.values() if isinstance(w, AbstractWorker)]
- # self is not referenced in self._known_workers when auto_add=False
- if self not in workers:
- workers.append(self)
+ if not isinstance(workers, list):
+ workers = [workers]
+
+ workers.append(self)
frameworks = set()
for worker in workers:
| `test_serde_simplify` is flaky
**Describe the bug**
`test_serde_simplify` frequently fails and appears to apply the wrong tensor serialization format (sometimes, but not always).
**To Reproduce**
Run the test (or full suite) until it fails.
**Expected behavior**
The test should reliably pass.
**Screenshots**
```
2020-04-14T18:01:22.2134777Z =================================== FAILURES ===================================
2020-04-14T18:01:22.2135402Z __________________________ test_serde_simplify[State] __________________________
2020-04-14T18:01:22.2135862Z
2020-04-14T18:01:22.2137173Z cls = <class 'syft.execution.state.State'>
2020-04-14T18:01:22.2138492Z workers = {'alice': <VirtualWorker id:alice #objects:0>, 'bob': <VirtualWorker id:bob #objects:0>, 'charlie': <VirtualWorker id:charlie #objects:0>, 'james': <VirtualWorker id:james #objects:0>, ...}
2020-04-14T18:01:22.2139188Z hook = <syft.frameworks.torch.hook.hook.TorchHook object at 0x7f343eeafe80>
2020-04-14T18:01:22.2140173Z start_remote_worker = <function start_remote_worker.<locals>._start_remote_worker at 0x7f343cc6aae8>
2020-04-14T18:01:22.2140630Z
2020-04-14T18:01:22.2141200Z @pytest.mark.parametrize("cls", samples)
2020-04-14T18:01:22.2142456Z def test_serde_simplify(cls, workers, hook, start_remote_worker):
2020-04-14T18:01:22.2143904Z """Checks that simplified structures match expected"""
2020-04-14T18:01:22.2144442Z _samples = samples[cls](
2020-04-14T18:01:22.2144954Z workers=workers,
2020-04-14T18:01:22.2145475Z hook=hook,
2020-04-14T18:01:22.2146062Z start_remote_worker=start_remote_worker,
2020-04-14T18:01:22.2146588Z port=9001,
2020-04-14T18:01:22.2147100Z id="simplify",
2020-04-14T18:01:22.2147606Z )
2020-04-14T18:01:22.2148108Z for sample in _samples:
2020-04-14T18:01:22.2148710Z obj, expected_simplified_obj = sample.get("value"), sample.get("simplified")
2020-04-14T18:01:22.2149270Z _simplify = (
2020-04-14T18:01:22.2149785Z msgpack.serde._simplify
2020-04-14T18:01:22.2150289Z if not sample.get("forced", False)
2020-04-14T18:01:22.2150818Z else msgpack.serde._force_full_simplify
2020-04-14T18:01:22.2151338Z )
2020-04-14T18:01:22.2151890Z serde_worker = syft.hook.local_worker
2020-04-14T18:01:22.2152420Z serde_worker.framework = sample.get("framework", torch)
2020-04-14T18:01:22.2153946Z simplified_obj = _simplify(syft.hook.local_worker, obj)
2020-04-14T18:01:22.2154461Z
2020-04-14T18:01:22.2154872Z if sample.get("cmp_simplified", None):
2020-04-14T18:01:22.2155283Z # Custom simplified objects comparison function.
2020-04-14T18:01:22.2155742Z assert sample.get("cmp_simplified")(simplified_obj, expected_simplified_obj) is True
2020-04-14T18:01:22.2156593Z else:
2020-04-14T18:01:22.2157000Z > assert simplified_obj == expected_simplified_obj
2020-04-14T18:01:22.2157445Z E assert (22, ((1, ((4...one, ...)))))) == (22, ((1, ((4...one, ...))))))
2020-04-14T18:01:22.2170190Z E At index 1 diff: ((1, ((48, ((56, (22214076726,)), (3, ((5, (b'state1',)),)), None, None)), (48, ((56, (96071220832,)), (3, ((5, (b'state2',)),)), None, None)))), (1, ((14, (9826749836, b'\x80\x02\x8a\nl\xfc\x9cF\xf9 j\xa8P\x19.\x80\x02M\xe9\x03.\x80\x02}q\x00(X\x10\x00\x00\x00protocol_versionq\x01M\xe9\x03X\r\x00\x00\x00little_endianq\x02\x88X\n\x00\x00\x00type_sizesq\x03}q\x04(X\x05\x00\x00\x00shortq\x05K\x02X\x03\x00\x00\x00intq\x06K\x04X\x04\x00\x00\x00longq\x07K\x04uu.\x80\x02ctorch._utils\n_rebuild_tensor_v2\nq\x00((X\x07\x00\x00\x00storageq\x01ctorch\nFloatStorage\nq\x02X\x0e\x00\x00\x0094299942048352q\x03X\x03\x00\x00\x00cpuq\x04K\tNtq\x05QK\x00K\x03K\x03\x86q\x06K\x03K\x01\x86q\x07\x89ccollections\nOrderedDict\nq\x08)Rq\ttq\nRq\x0b.\x80\x02]q\x00X\x0e\x00\x00\x0094299942048352q\x01a.\t\x00\x00\x00\x00\x00\x00\x00#\x95{?\xf6\x19\x17\xbe\x9c\xa3\xec=\x01\x8b,<\x11-[\xbf\xa1d\xea\xbf\xb8\xcd5\xbe\xac\xbd\xb9\xbf\xb0JQ?', None, None, None, None, (5, (b'torch',)), None, None)), (14, (49440533498, b"\x80\x02\x8a\nl\xfc\x9cF\xf9 j\xa8P\x19.\x80\x02M\xe9\x03.\x80\x02}q\x00(X\x10\x00\x00\x00protocol_versionq\x01M\xe9\x03X\r\x00\x00\x00little_endianq\x02\x88X\n\x00\x00\x00type_sizesq\x03}q\x04(X\x05\x00\x00\x00shortq\x05K\x02X\x03\x00\x00\x00intq\x06K\x04X\x04\x00\x00\x00longq\x07K\x04uu.\x80\x02ctorch._utils\n_rebuild_tensor_v2\nq\x00((X\x07\x00\x00\x00storageq\x01ctorch\nFloatStorage\nq\x02X\x0e\x00\x00\x0094299946743552q\x03X\x03\x00\x00\x00cpuq\x04K\tNtq\x05QK\x00K\x03K\x03\x86q\x06K\x03K\x01\x86q\x07\x89ccollections\nOrderedDict\nq\x08)Rq\ttq\nRq\x0b.\x80\x02]q\x00X\x0e\x00\x00\x0094299946743552q\x01a.\t\x00\x00\x00\x00\x00\x00\x00p=D??*'\xbf\x11\x07\x97\xbdx\xa2\xd0=\x91\xb3v?\xad\xee->\xfd\xbc\xbc?\xef\x87\xe7\xbe\xa4\x1c\x00@", None, None, None, None, (5, (b'torch',)), None, None))))) != ((1, ((48, ((56, (22214076726,)), (3, ((5, (b'state1',)),)), None, None)), (48, ((56, (96071220832,)), (3, ((5, (b'state2',)),)), None, None)))), (1, ((14, (9826749836, (6, ((6, (3, 3)), (5, (b'float32',)), (1, (0.9827443957328796, -0.1475599706172943, 0.11554643511772156, 0.010531187988817692, -0.8561564087867737, -1.8311959505081177, -0.1775425672531128, -1.4511008262634277, 0.8175458908081055)))), None, None, None, None, (5, (b'all',)), None, None)), (14, (49440533498, (6, ((6, (3, 3)), (5, (b'float32',)), (1, (0.7665624618530273, -0.6529883742332458, -0.07374394685029984, 0.10187238454818726, 0.9636774659156799, 0.16985578835010529, 1.4745174646377563, -0.4522089660167694, 2.0017480850219727)))), None, None, None, None, (5, (b'all',)), None, None)))))
2020-04-14T18:01:22.2171445Z E Full diff:
2020-04-14T18:01:22.2171865Z E (
2020-04-14T18:01:22.2172266Z E 22,
2020-04-14T18:01:22.2172660Z E ((1,
2020-04-14T18:01:22.2173053Z E ((48,
2020-04-14T18:01:22.2173436Z E ((56,
2020-04-14T18:01:22.2173834Z E (22214076726,)),
2020-04-14T18:01:22.2174238Z E (3,
2020-04-14T18:01:22.2174632Z E ((5,
2020-04-14T18:01:22.2175404Z E (b'state1',)),)),
2020-04-14T18:01:22.2175833Z E None,
2020-04-14T18:01:22.2176226Z E None)),
2020-04-14T18:01:22.2176620Z E (48,
2020-04-14T18:01:22.2177026Z E ((56,
2020-04-14T18:01:22.2177473Z E (96071220832,)),
2020-04-14T18:01:22.2177643Z E (3,
2020-04-14T18:01:22.2177828Z E ((5,
2020-04-14T18:01:22.2178170Z E (b'state2',)),)),
2020-04-14T18:01:22.2178369Z E None,
2020-04-14T18:01:22.2178554Z E None)))),
2020-04-14T18:01:22.2178739Z E (1,
2020-04-14T18:01:22.2178928Z E ((14,
2020-04-14T18:01:22.2179113Z E (9826749836,
2020-04-14T18:01:22.2179489Z E + b'\x80\x02\x8a\nl\xfc\x9cF\xf9 j\xa8P\x19.\x80\x02M\xe9\x03.\x80\x02}'
2020-04-14T18:01:22.2179912Z E + b'q\x00(X\x10\x00\x00\x00protocol_versionq\x01M\xe9\x03X\r\x00\x00\x00li'
2020-04-14T18:01:22.2180297Z E + b'ttle_endianq\x02\x88X\n\x00\x00\x00type_sizesq\x03}q\x04(X'
2020-04-14T18:01:22.2180762Z E + b'\x05\x00\x00\x00shortq\x05K\x02X\x03\x00\x00\x00intq\x06K\x04X\x04\x00'
2020-04-14T18:01:22.2181171Z E + b'\x00\x00longq\x07K\x04uu.\x80\x02ctorch._utils\n_rebuild_tensor_v2\n'
2020-04-14T18:01:22.2181542Z E + b'q\x00((X\x07\x00\x00\x00storageq\x01ctorch\nFloatStorage\nq\x02'
2020-04-14T18:01:22.2181933Z E + b'X\x0e\x00\x00\x0094299942048352q\x03X\x03\x00\x00\x00cpuq\x04K\tNtq'
2020-04-14T18:01:22.2182528Z E + b'\x05QK\x00K\x03K\x03\x86q\x06K\x03K\x01\x86q\x07\x89ccollections\nOrde'
2020-04-14T18:01:22.2182979Z E + b'redDict\nq\x08)Rq\ttq\nRq\x0b.\x80\x02]q\x00X\x0e\x00\x00\x00942999420'
2020-04-14T18:01:22.2183422Z E + b'48352q\x01a.\t\x00\x00\x00\x00\x00\x00\x00#\x95{?\xf6\x19\x17'
2020-04-14T18:01:22.2183887Z E + b'\xbe\x9c\xa3\xec=\x01\x8b,<\x11-[\xbf\xa1d\xea\xbf\xb8\xcd5'
2020-04-14T18:01:22.2184281Z E + b'\xbe\xac\xbd\xb9\xbf\xb0JQ?',
2020-04-14T18:01:22.2184668Z E - (6,
2020-04-14T18:01:22.2185014Z E - ((6,
2020-04-14T18:01:22.2185394Z E - (3,
2020-04-14T18:01:22.2185742Z E - 3)),
2020-04-14T18:01:22.2186104Z E - (5,
2020-04-14T18:01:22.2186487Z E - (b'float32',)),
2020-04-14T18:01:22.2186851Z E - (1,
2020-04-14T18:01:22.2187323Z E - (0.9827443957328796,
2020-04-14T18:01:22.2187690Z E - -0.1475599706172943,
2020-04-14T18:01:22.2188096Z E - 0.11554643511772156,
2020-04-14T18:01:22.2188465Z E - 0.010531187988817692,
2020-04-14T18:01:22.2188894Z E - -0.8561564087867737,
2020-04-14T18:01:22.2189260Z E - -1.8311959505081177,
2020-04-14T18:01:22.2189665Z E - -0.1775425672531128,
2020-04-14T18:01:22.2190034Z E - -1.4511008262634277,
2020-04-14T18:01:22.2190421Z E - 0.8175458908081055)))),
2020-04-14T18:01:22.2190649Z E None,
2020-04-14T18:01:22.2190860Z E None,
2020-04-14T18:01:22.2191066Z E None,
2020-04-14T18:01:22.2191256Z E None,
2020-04-14T18:01:22.2191485Z E (5,
2020-04-14T18:01:22.2191833Z E - (b'all',)),
2020-04-14T18:01:22.2192094Z E ? ^^^
2020-04-14T18:01:22.2192454Z E + (b'torch',)),
2020-04-14T18:01:22.2192664Z E ? ^^^^^
2020-04-14T18:01:22.2192875Z E None,
2020-04-14T18:01:22.2193084Z E None)),
2020-04-14T18:01:22.2193295Z E (14,
2020-04-14T18:01:22.2193506Z E (49440533498,
2020-04-14T18:01:22.2193939Z E + b'\x80\x02\x8a\nl\xfc\x9cF\xf9 j\xa8P\x19.\x80\x02M\xe9\x03.\x80\x02}'
2020-04-14T18:01:22.2194376Z E + b'q\x00(X\x10\x00\x00\x00protocol_versionq\x01M\xe9\x03X\r\x00\x00\x00li'
2020-04-14T18:01:22.2194829Z E + b'ttle_endianq\x02\x88X\n\x00\x00\x00type_sizesq\x03}q\x04(X'
2020-04-14T18:01:22.2195269Z E + b'\x05\x00\x00\x00shortq\x05K\x02X\x03\x00\x00\x00intq\x06K\x04X\x04\x00'
2020-04-14T18:01:22.2195729Z E + b'\x00\x00longq\x07K\x04uu.\x80\x02ctorch._utils\n_rebuild_tensor_v2\n'
2020-04-14T18:01:22.2196719Z E + b'q\x00((X\x07\x00\x00\x00storageq\x01ctorch\nFloatStorage\nq\x02'
2020-04-14T18:01:22.2197205Z E + b'X\x0e\x00\x00\x0094299946743552q\x03X\x03\x00\x00\x00cpuq\x04K\tNtq'
2020-04-14T18:01:22.2197642Z E + b'\x05QK\x00K\x03K\x03\x86q\x06K\x03K\x01\x86q\x07\x89ccollections\nOrde'
2020-04-14T18:01:22.2198217Z E + b'redDict\nq\x08)Rq\ttq\nRq\x0b.\x80\x02]q\x00X\x0e\x00\x00\x00942999467'
2020-04-14T18:01:22.2198682Z E + b"43552q\x01a.\t\x00\x00\x00\x00\x00\x00\x00p=D??*'\xbf\x11\x07\x97"
2020-04-14T18:01:22.2199134Z E + b'\xbdx\xa2\xd0=\x91\xb3v?\xad\xee->\xfd\xbc\xbc?\xef\x87\xe7'
2020-04-14T18:01:22.2199514Z E + b'\xbe\xa4\x1c\x00@',
2020-04-14T18:01:22.2199880Z E - (6,
2020-04-14T18:01:22.2200242Z E - ((6,
2020-04-14T18:01:22.2200612Z E - (3,
2020-04-14T18:01:22.2200979Z E - 3)),
2020-04-14T18:01:22.2201498Z E - (5,
2020-04-14T18:01:22.2201919Z E - (b'float32',)),
2020-04-14T18:01:22.2202312Z E - (1,
2020-04-14T18:01:22.2202708Z E - (0.7665624618530273,
2020-04-14T18:01:22.2203142Z E - -0.6529883742332458,
2020-04-14T18:01:22.2203545Z E - -0.07374394685029984,
2020-04-14T18:01:22.2203979Z E - 0.10187238454818726,
2020-04-14T18:01:22.2204406Z E - 0.9636774659156799,
2020-04-14T18:01:22.2204821Z E - 0.16985578835010529,
2020-04-14T18:01:22.2205233Z E - 1.4745174646377563,
2020-04-14T18:01:22.2205629Z E - -0.4522089660167694,
2020-04-14T18:01:22.2206066Z E - 2.0017480850219727)))),
2020-04-14T18:01:22.2206388Z E None,
2020-04-14T18:01:22.2206613Z E None,
2020-04-14T18:01:22.2206856Z E None,
2020-04-14T18:01:22.2207062Z E None,
2020-04-14T18:01:22.2207320Z E (5,
2020-04-14T18:01:22.2207701Z E - (b'all',)),
2020-04-14T18:01:22.2207959Z E ? ^^^
2020-04-14T18:01:22.2208337Z E + (b'torch',)),
2020-04-14T18:01:22.2208586Z E ? ^^^^^
2020-04-14T18:01:22.2208813Z E None,
2020-04-14T18:01:22.2209023Z E None))))),
2020-04-14T18:01:22.2209267Z E )
2020-04-14T18:01:22.2209433Z
2020-04-14T18:01:22.2209663Z test/serde/msgpack/test_msgpack_serde_full.py:166: AssertionError
```
**Additional context**
This started being an issue when the order of the test suite was randomized (specifically to shake out flaky tests like this.)
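The root cause was in `BaseWorker.serializer`: when an explicit worker list was passed, the local worker was never appended, so the set of shared frameworks (and hence the chosen strategy, `torch` vs `all`, visible in the diff output above) depended on which workers happened to be known at call time. The patch above always appends `self`; condensed from the diff:
```python
def serializer(self, workers=None):
    if workers is None:
        workers = [w for w in self._known_workers.values() if isinstance(w, AbstractWorker)]
    if not isinstance(workers, list):
        workers = [workers]
    workers.append(self)  # the local worker always takes part in the decision
```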
| 2020-04-17T14:46:59 |
||
OpenMined/PySyft | 3,392 | OpenMined__PySyft-3392 | [
"3389"
] | 63af4f2a487d5e96d3a85d256dff13ecd8cbdbb5 | diff --git a/syft/frameworks/torch/mpc/securenn.py b/syft/frameworks/torch/mpc/securenn.py
--- a/syft/frameworks/torch/mpc/securenn.py
+++ b/syft/frameworks/torch/mpc/securenn.py
@@ -70,6 +70,9 @@ def flip(x, dim, dtype):
"""
Reverse the order of the elements in a tensor
"""
+ assert (
+ x.dtype != "custom"
+ ), "`custom` dtype shares are unsupported in SecureNN, use dtype = `long` or `int` instead"
indices = torch.arange(x.shape[dim] - 1, -1, -1).type(dtype)
if hasattr(x, "child") and isinstance(x.child, dict):
@@ -141,6 +144,10 @@ def select_share(alpha_sh, x_sh, y_sh):
Return:
z_sh = (1 - alpha_sh) * x_sh + alpha_sh * y_sh
"""
+ assert (
+ alpha_sh.dtype == x_sh.dtype == y_sh.dtype != "custom"
+ ), "`custom` dtype shares are unsupported in SecureNN, use dtype = `long` or `int` instead"
+
alice, bob = alpha_sh.locations
crypto_provider = alpha_sh.crypto_provider
L = alpha_sh.field
@@ -343,6 +350,9 @@ def share_convert(a_sh):
An additive sharing tensor with shares in field L-1
"""
assert isinstance(a_sh, sy.AdditiveSharingTensor)
+ assert (
+ a_sh.dtype != "custom"
+ ), "`custom` dtype shares are unsupported in SecureNN, use dtype = `long` or `int` instead"
workers = a_sh.locations
crypto_provider = a_sh.crypto_provider
@@ -441,6 +451,9 @@ def relu_deriv(a_sh):
1 if Dec(a_sh) > 0
encrypted in an AdditiveSharingTensor
"""
+ assert (
+ a_sh.dtype != "custom"
+ ), "`custom` dtype shares are unsupported in SecureNN, use dtype = `long` or `int` instead"
alice, bob = a_sh.locations
crypto_provider = a_sh.crypto_provider
@@ -477,6 +490,9 @@ def relu(a_sh):
Dec(a_sh) > 0
encrypted in an AdditiveSharingTensor
"""
+ assert (
+ a_sh.dtype != "custom"
+ ), "`custom` dtype shares are unsupported in SecureNN, use dtype = `long` or `int` instead"
alice, bob = a_sh.locations
crypto_provider = a_sh.crypto_provider
@@ -499,6 +515,9 @@ def division(x_sh, y_sh, bit_len_max=None):
Returns:
element-wise integer division of x_sh by y_sh
"""
+ assert (
+ x_sh.dtype == y_sh.dtype != "custom"
+ ), "`custom` dtype shares are unsupported in SecureNN, use dtype = `long` or `int` instead"
alice, bob = x_sh.locations
crypto_provider = x_sh.crypto_provider
L = x_sh.field
@@ -556,6 +575,10 @@ def maxpool(x_sh):
maximum value as an AdditiveSharingTensor
index of this value in the flattened tensor as an AdditiveSharingTensor
"""
+ assert (
+ x_sh.dtype != "custom"
+ ), "`custom` dtype shares are unsupported in SecureNN, use dtype = `long` or `int` instead"
+
if x_sh.is_wrapper:
x_sh = x_sh.child
alice, bob = x_sh.locations
@@ -606,6 +629,10 @@ def maxpool_deriv(x_sh):
an AdditiveSharingTensor of the same shape as x_sh full of zeros except for
a 1 at the position of the max value
"""
+ assert (
+ x_sh.dtype != "custom"
+ ), "`custom` dtype shares are unsupported in SecureNN, use dtype = `long` or `int` instead"
+
alice, bob = x_sh.locations
crypto_provider = x_sh.crypto_provider
L = x_sh.field
@@ -653,6 +680,9 @@ def maxpool2d(a_sh, kernel_size: int = 1, stride: int = 1, padding: int = 0):
stride: the stride of the window
padding: implicit zero padding to be added on both sides
"""
+ assert (
+ a_sh.dtype != "custom"
+ ), "`custom` dtype shares are unsupported in SecureNN, use dtype = `long` or `int` instead"
assert len(a_sh.shape) == 4
# Change to tuple if not one
diff --git a/syft/frameworks/torch/tensors/interpreters/additive_shared.py b/syft/frameworks/torch/tensors/interpreters/additive_shared.py
--- a/syft/frameworks/torch/tensors/interpreters/additive_shared.py
+++ b/syft/frameworks/torch/tensors/interpreters/additive_shared.py
@@ -64,12 +64,15 @@ def __init__(
raise ValueError("Field cannot be None for custom dtype")
self.field = field
self.torch_dtype = torch.int32 if field <= 2 ** 32 else torch.int64
- elif dtype == "long":
+ elif dtype == "long" or dtype == "int64":
self.field = 2 ** 64
self.torch_dtype = torch.int64
- elif dtype == "int":
+ self.dtype = "long"
+ elif dtype == "int" or dtype == "int32":
self.field = 2 ** 32
self.torch_dtype = torch.int32
+ self.dtype = "int"
+
else:
if dtype is not None:
raise ValueError("Invalid dtype value: " + dtype)
| AST dtype arg should only accept long and int
**Is your feature request related to a problem? Please describe.**
Currently the dtype arg for initialising AdditiveSharingTensor accepts None, "long", "int" and "custom". But as an attempt at standardisation, we don't want the user to be able to call it with the arg "custom"; it is there only for internal operations (like SecureNN).
**Describe the solution you'd like**
Raise a ValueError with a response along the lines of "Invalid value for dtype arg, use long or int"
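A hedged summary of the new `__init__` behaviour, as a standalone helper mirroring the patched logic (illustrative only, not part of the patch):
```python
def normalise_dtype(dtype):
    """Mirrors the dtype handling added above (illustrative only)."""
    if dtype in ("long", "int64"):
        return "long", 2 ** 64  # torch.int64 shares
    if dtype in ("int", "int32"):
        return "int", 2 ** 32   # torch.int32 shares
    if dtype == "custom":
        # internal use only: a field must be supplied explicitly, and the
        # SecureNN entry points above now assert against custom-dtype shares
        raise ValueError("Field cannot be None for custom dtype")
    raise ValueError("Invalid dtype value: " + str(dtype))
```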
| 2020-04-23T08:24:26 |
||
OpenMined/PySyft | 3,403 | OpenMined__PySyft-3403 | [
"3398"
] | bc2acbd67c9dc7e61240e64186c87aeb23ede180 | diff --git a/syft/frameworks/torch/torch_attributes.py b/syft/frameworks/torch/torch_attributes.py
--- a/syft/frameworks/torch/torch_attributes.py
+++ b/syft/frameworks/torch/torch_attributes.py
@@ -1,10 +1,8 @@
+import re
from types import ModuleType
-from typing import Union
-from typing import Callable
-from typing import Any
-from syft.generic.frameworks.attributes import FrameworkAttributes
from syft.frameworks.torch.tensors.interpreters.native import TorchTensor
+from syft.generic.frameworks.attributes import FrameworkAttributes
class TorchAttributes(FrameworkAttributes):
@@ -135,6 +133,12 @@ def __init__(self, torch: ModuleType, hook: ModuleType) -> None:
# Dict {method_name: <is_inplace:bool>
self.inplace_methods = {}
+ self._inplace_pattern = re.compile(r"(^__i(?!nit|mport|ter).+_)|^((?!^_+).+[^_])_$")
+ # Positives:
+ # __iadd__, share_
+
+ # Negatives:
+ # __init__, __import__, __iter__, __foo__, __bar_foo
def is_inplace_method(self, method_name):
"""Determine if a method is inplace or not.
@@ -150,6 +154,7 @@ def is_inplace_method(self, method_name):
try:
return self.inplace_methods[method_name]
except KeyError:
- is_inplace = method_name[-1] == "_" and "__" not in method_name
+ is_inplace = True if re.search(self._inplace_pattern, method_name) else False
+
self.inplace_methods[method_name] = is_inplace
return is_inplace
diff --git a/syft/generic/frameworks/hook/hook_args.py b/syft/generic/frameworks/hook/hook_args.py
--- a/syft/generic/frameworks/hook/hook_args.py
+++ b/syft/generic/frameworks/hook/hook_args.py
@@ -717,6 +717,11 @@ def register_tensor(
response_ids: List of ids where the tensor should be stored
and each id is pop out when needed.
"""
+ # This method often leads to re-registration of tensors
+ # hence creating two copies of the same info. The older tensor
+ # is left hanging and is never deleted. De-Registering the original
+ # tensor (if-exists) before registration addresses this problem.
+ owner.de_register_obj(tensor) # Doesn't raise Exceptions if absent on owner
tensor.owner = owner
try:
tensor.id = response_ids.pop(-1)
| diff --git a/test/torch/test_hook.py b/test/torch/test_hook.py
--- a/test/torch/test_hook.py
+++ b/test/torch/test_hook.py
@@ -1,13 +1,14 @@
"""Tests relative to verifying the hook process behaves properly."""
+import re
+
import pytest
import torch
import torch.nn as nn
import torch.nn.functional as F
import syft
-from syft.generic.pointers.pointer_tensor import PointerTensor
-
from syft.exceptions import RemoteObjectFoundError
+from syft.generic.pointers.pointer_tensor import PointerTensor
def test___init__(hook):
@@ -15,6 +16,25 @@ def test___init__(hook):
assert hook.torch.__version__ == torch.__version__
+def test_torch_inplace_method():
+ positives = ["__iadd__", "__imul__", "__idiv__", "share_", "get_", "encrypt_"]
+ negatives = [
+ "__add__",
+ "__init__",
+ "__str__",
+ "share",
+ "get",
+ "encrypt",
+ "__foo",
+ "_bar",
+ "baz__",
+ ]
+ for pos in positives:
+ assert re.search(syft.framework._inplace_pattern, pos)
+ for neg in negatives:
+ assert not re.search(syft.framework._inplace_pattern, neg)
+
+
def test_torch_attributes():
with pytest.raises(RuntimeError):
syft.framework._command_guard("false_command")
| Memory leak in the argmax() method of AST
This code causes a memory leak, but only on the first worker in the worker list passed to `encrypt()`. In this example that is 'alice'; the second worker, 'bob', does not suffer from any leaks.
The reason is the `argmax()` method:
```
import torch as th
import torch.optim as optim
import syft as sy
hook = sy.TorchHook(th) #, verbose=True)
bob = sy.VirtualWorker(hook, id="bob") #, verbose=True)
alice = sy.VirtualWorker(hook, id="alice") #, verbose=True)
crypto_provider = sy.VirtualWorker(hook, id="james") #, verbose=True)
a = th.ones(1, 5)
a = a.encrypt(workers=[alice, bob], crypto_provider=crypto_provider, requires_grad=True)
for i in range(3):
print("=" * 10 + f"Iter{i + 1}" + "=" * 10)
print(f"Alice: {len(alice._objects)}")
print(f"Bob: {len(bob._objects)}")
a.argmax(dim=1)
print(f"Alice: {len(alice._objects)}")
print(f"Bob: {len(bob._objects)}")
print("~" * 10 + "Done" + "~" * 10)
```
Here is the output (Only Alice leaks):
```
==========Iter1==========
Alice: 1
Bob: 1
Alice: 13
Bob: 1
==========Iter2==========
Alice: 13
Bob: 1
Alice: 25
Bob: 1
==========Iter3==========
Alice: 25
Bob: 1
Alice: 37
Bob: 1
~~~~~~~~~~Done~~~~~~~~~~
```
If you comment out `a.argmax(dim=1)`, no memory leaks are observed:
```
==========Iter1==========
Alice: 1
Bob: 1
Alice: 1
Bob: 1
==========Iter2==========
Alice: 1
Bob: 1
Alice: 1
Bob: 1
==========Iter3==========
Alice: 1
Bob: 1
Alice: 1
Bob: 1
~~~~~~~~~~Done~~~~~~~~~~
```
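For context, the fix in the patch above boils down to de-registering any stale copy of a tensor before it is re-registered, so the old entry no longer lingers in `_objects`. A minimal sketch of that idea, simplified from `register_tensor` in `hook_args.py`:
```python
def register_tensor(tensor, owner, response_ids):
    # De-register the original tensor (if it exists) before
    # registration, so no orphaned copy is left on the owner.
    owner.de_register_obj(tensor)  # does not raise if absent
    tensor.owner = owner
    tensor.id = response_ids.pop(-1)
```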
| 2020-04-24T17:18:22 |
|
OpenMined/PySyft | 3,406 | OpenMined__PySyft-3406 | [
"3385"
] | 7feea7b1e38ca2b23548a0b44f5a5d41946c1802 | diff --git a/syft/frameworks/crypten/hook/hook.py b/syft/frameworks/crypten/hook/hook.py
--- a/syft/frameworks/crypten/hook/hook.py
+++ b/syft/frameworks/crypten/hook/hook.py
@@ -1,6 +1,8 @@
from functools import wraps
+import copy
import syft
+from syft.execution.plan import Plan
import crypten
import torch as th
@@ -28,3 +30,154 @@ def unhook_plan_building():
for method_name in methods_to_hook:
method = getattr(crypten, f"native_{method_name}")
setattr(crypten, method_name, method)
+
+
+def hook_crypten():
+ from syft.frameworks.crypten import load as crypten_load
+
+ setattr(crypten, "load", crypten_load)
+
+
+def hook_crypten_module():
+ """Overloading crypten.nn.Module with PySyft functionality, the primary module
+ responsible for core ML functionality such as Neural network layers and
+ loss functions.
+ It is important to note that all the operations are actually in-place.
+ """
+ import crypten
+
+ def _check_encrypted(model):
+ if model.encrypted:
+ raise RuntimeError("Crypten model must be unencrypted to run PySyft operations")
+
+ crypten.nn.Module._check_encrypted = _check_encrypted
+
+ def module_is_missing_grad(model):
+ """Checks if all the parameters in the model have been assigned a gradient"""
+ for p in model.parameters():
+ if p.grad is None:
+ return True
+ return False
+
+ def create_grad_objects(model):
+ """Assigns gradient to model parameters if not assigned"""
+ for p in model.parameters():
+ if p.requires_grad: # check if the object requires a grad object
+ o = p.sum()
+ o.backward()
+ if p.grad is not None:
+ p.grad -= p.grad
+
+ def module_send_(nn_self, *dest, force_send=False, **kwargs):
+ """Overloads crypten.nn instances so that they could be sent to other workers"""
+ nn_self._check_encrypted()
+
+ if module_is_missing_grad(nn_self):
+ create_grad_objects(nn_self)
+
+ for p in nn_self.parameters():
+ p.send_(*dest, **kwargs)
+
+ if isinstance(nn_self.forward, Plan):
+ nn_self.forward.send(*dest, force=force_send)
+
+ return nn_self
+
+ crypten.nn.Module.send = module_send_
+ crypten.nn.Module.send_ = module_send_
+
+ def module_move_(nn_self, destination):
+ nn_self._check_encrypted()
+ params = list(nn_self.parameters())
+ for p in params:
+ p.move(destination)
+
+ crypten.nn.Module.move = module_move_
+
+ def module_get_(nn_self):
+ """Overloads crypten.nn instances with get method so that parameters could be sent back to owner"""
+ nn_self._check_encrypted()
+ for p in nn_self.parameters():
+ p.get_()
+
+ if isinstance(nn_self.forward, Plan):
+ nn_self.forward.get()
+
+ return nn_self
+
+ crypten.nn.Module.get_ = module_get_
+ crypten.nn.Module.get = module_get_
+
+ def module_share_(nn_self, *args, **kwargs):
+ """Overloads share for crypten.nn.Module."""
+ # TODO: add .data and .grad to syft tensors
+ nn_self._check_encrypted()
+ if module_is_missing_grad(nn_self):
+ create_grad_objects(nn_self)
+
+ for p in nn_self.parameters():
+ p.share_(*args, **kwargs)
+
+ return nn_self
+
+ crypten.nn.Module.share_ = module_share_
+ crypten.nn.Module.share = module_share_
+
+ def module_fix_precision_(nn_self, *args, **kwargs):
+ """Overloads fix_precision for crypten.nn.Module."""
+ nn_self._check_encrypted()
+ if module_is_missing_grad(nn_self):
+ create_grad_objects(nn_self)
+
+ for p in nn_self.parameters():
+ p.fix_precision_(*args, **kwargs)
+
+ return nn_self
+
+ crypten.nn.Module.fix_precision_ = module_fix_precision_
+ crypten.nn.Module.fix_precision = module_fix_precision_
+ crypten.nn.Module.fix_prec = module_fix_precision_
+
+ def module_float_precision_(nn_self):
+ """Overloads float_precision for crypten.nn.Module, convert fix_precision
+ parameters to normal float parameters"""
+ # TODO: add .data and .grad to syft tensors
+ # if module_is_missing_grad(nn_self):
+ # create_grad_objects(nn_self)
+ nn_self._check_encrypted()
+ for p in nn_self.parameters():
+ p.float_precision_()
+
+ return nn_self
+
+ crypten.nn.Module.float_precision_ = module_float_precision_
+ crypten.nn.Module.float_precision = module_float_precision_
+ crypten.nn.Module.float_prec = module_float_precision_
+
+ def module_copy(nn_self):
+ """Returns a copy of a crypten.nn.Module"""
+ nn_self._check_encrypted()
+ return copy.deepcopy(nn_self)
+
+ crypten.nn.Module.copy = module_copy
+
+ @property
+ def owner(nn_self):
+ nn_self._check_encrypted()
+ for p in nn_self.parameters():
+ return p.owner
+
+ crypten.nn.Module.owner = owner
+
+ @property
+ def location(nn_self):
+ nn_self._check_encrypted()
+ try:
+ for p in nn_self.parameters():
+ return p.location
+ except AttributeError:
+ raise AttributeError(
+ "Module has no attribute location, did you already send it to some location?"
+ )
+
+ crypten.nn.Module.location = location
diff --git a/syft/frameworks/torch/hook/hook.py b/syft/frameworks/torch/hook/hook.py
--- a/syft/frameworks/torch/hook/hook.py
+++ b/syft/frameworks/torch/hook/hook.py
@@ -36,6 +36,8 @@
if dependency_check.crypten_available:
import crypten
+from syft.frameworks.crypten.hook.hook import hook_crypten
+from syft.frameworks.crypten.hook.hook import hook_crypten_module
class TorchHook(FrameworkHook):
@@ -215,8 +217,8 @@ def __init__(
# Hook the Crypten module
if dependency_check.crypten_available:
- self._hook_crypten()
- self._hook_crypten_module()
+ hook_crypten()
+ hook_crypten_module()
# Add the local_worker to syft so that it can be found if the hook is
# called several times
@@ -477,155 +479,6 @@ def _hook_torch_module(self):
self._perform_function_overloading(module_name, torch_module, func)
- def _hook_crypten(self):
- from syft.frameworks.crypten import load as crypten_load
-
- setattr(crypten, "load", crypten_load)
-
- def _hook_crypten_module(self):
- """Overloading crypten.nn.Module with PySyft functionality, the primary module
- responsible for core ML functionality such as Neural network layers and
- loss functions.
- It is important to note that all the operations are actually in-place.
- """
- import crypten
-
- def _check_encrypted(model):
- if model.encrypted:
- raise RuntimeError("Crypten model must be unencrypted to run PySyft operations")
-
- crypten.nn.Module._check_encrypted = _check_encrypted
-
- def module_is_missing_grad(model):
- """Checks if all the parameters in the model have been assigned a gradient"""
- for p in model.parameters():
- if p.grad is None:
- return True
- return False
-
- def create_grad_objects(model):
- """Assigns gradient to model parameters if not assigned"""
- for p in model.parameters():
- if p.requires_grad: # check if the object requires a grad object
- o = p.sum()
- o.backward()
- if p.grad is not None:
- p.grad -= p.grad
-
- def module_send_(nn_self, *dest, force_send=False, **kwargs):
- """Overloads crypten.nn instances so that they could be sent to other workers"""
- nn_self._check_encrypted()
-
- if module_is_missing_grad(nn_self):
- create_grad_objects(nn_self)
-
- for p in nn_self.parameters():
- p.send_(*dest, **kwargs)
-
- if isinstance(nn_self.forward, Plan):
- nn_self.forward.send(*dest, force=force_send)
-
- return nn_self
-
- crypten.nn.Module.send = module_send_
- crypten.nn.Module.send_ = module_send_
-
- def module_move_(nn_self, destination):
- nn_self._check_encrypted()
- params = list(nn_self.parameters())
- for p in params:
- p.move(destination)
-
- crypten.nn.Module.move = module_move_
-
- def module_get_(nn_self):
- """Overloads crypten.nn instances with get method so that parameters could be sent back to owner"""
- nn_self._check_encrypted()
- for p in nn_self.parameters():
- p.get_()
-
- if isinstance(nn_self.forward, Plan):
- nn_self.forward.get()
-
- return nn_self
-
- crypten.nn.Module.get_ = module_get_
- crypten.nn.Module.get = module_get_
-
- def module_share_(nn_self, *args, **kwargs):
- """Overloads share for crypten.nn.Module."""
- # TODO: add .data and .grad to syft tensors
- nn_self._check_encrypted()
- if module_is_missing_grad(nn_self):
- create_grad_objects(nn_self)
-
- for p in nn_self.parameters():
- p.share_(*args, **kwargs)
-
- return nn_self
-
- crypten.nn.Module.share_ = module_share_
- crypten.nn.Module.share = module_share_
-
- def module_fix_precision_(nn_self, *args, **kwargs):
- """Overloads fix_precision for crypten.nn.Module."""
- nn_self._check_encrypted()
- if module_is_missing_grad(nn_self):
- create_grad_objects(nn_self)
-
- for p in nn_self.parameters():
- p.fix_precision_(*args, **kwargs)
-
- return nn_self
-
- crypten.nn.Module.fix_precision_ = module_fix_precision_
- crypten.nn.Module.fix_precision = module_fix_precision_
- crypten.nn.Module.fix_prec = module_fix_precision_
-
- def module_float_precision_(nn_self):
- """Overloads float_precision for crypten.nn.Module, convert fix_precision
- parameters to normal float parameters"""
- # TODO: add .data and .grad to syft tensors
- # if module_is_missing_grad(nn_self):
- # create_grad_objects(nn_self)
- nn_self._check_encrypted()
- for p in nn_self.parameters():
- p.float_precision_()
-
- return nn_self
-
- crypten.nn.Module.float_precision_ = module_float_precision_
- crypten.nn.Module.float_precision = module_float_precision_
- crypten.nn.Module.float_prec = module_float_precision_
-
- def module_copy(nn_self):
- """Returns a copy of a crypten.nn.Module"""
- nn_self._check_encrypted()
- return copy.deepcopy(nn_self)
-
- crypten.nn.Module.copy = module_copy
-
- @property
- def owner(nn_self):
- nn_self._check_encrypted()
- for p in nn_self.parameters():
- return p.owner
-
- crypten.nn.Module.owner = owner
-
- @property
- def location(nn_self):
- nn_self._check_encrypted()
- try:
- for p in nn_self.parameters():
- return p.location
- except AttributeError:
- raise AttributeError(
- "Module has no attribute location, did you already send it to some location?"
- )
-
- crypten.nn.Module.location = location
-
def _get_hooked_additive_shared_method(hook_self, attr):
"""
Hook a method to send it multiple remote workers
diff --git a/syft/grid/abstract_grid.py b/syft/grid/abstract_grid.py
--- a/syft/grid/abstract_grid.py
+++ b/syft/grid/abstract_grid.py
@@ -8,6 +8,8 @@
from abc import ABC, abstractmethod
+from syft.workers.node_client import NodeClient
+
class AbstractGrid(ABC):
def __init__(self):
| Move CrypTen Hook functionality to a different file
**Is your feature request related to a problem? Please describe.**
Currently, the functionality that hooks the CrypTen module lives inside ```TorchHook```. We should keep framework-specific hooking logic separate.
**Describe the solution you'd like**
We should move this functionality to a separate file, then import and call it from ```TorchHook```.
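After the refactor, the call site in ```TorchHook``` shrinks to just importing and invoking the extracted functions, mirroring the patch above:
```python
from syft.frameworks.crypten.hook.hook import hook_crypten
from syft.frameworks.crypten.hook.hook import hook_crypten_module

# Hook the CrypTen module only when the dependency is available
if dependency_check.crypten_available:
    hook_crypten()
    hook_crypten_module()
```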
| I would like to work on this issue. | 2020-04-25T18:51:40 |
|
OpenMined/PySyft | 3,426 | OpenMined__PySyft-3426 | [
"2996"
] | 5dcce074495e4183f31ae9f5154b640eb46d8fd3 | diff --git a/syft/federated/fl_client.py b/syft/federated/fl_client.py
new file mode 100644
--- /dev/null
+++ b/syft/federated/fl_client.py
@@ -0,0 +1,26 @@
+from urllib.parse import urlparse
+from syft.grid.grid_client import GridClient
+from syft.federated.fl_job import FLJob
+
+
+class FLClient:
+ def __init__(self, url, auth_token, verbose=False):
+ self.url = url
+ self.auth_token = auth_token
+ self.worker_id = None
+
+ url_fragments = urlparse(url)
+ self.grid_client = GridClient(id="", address=url_fragments.netloc, secure=not verbose,)
+
+ def new_job(self, model_name, model_version) -> FLJob:
+ if self.worker_id is None:
+ auth_response = self.grid_client.authenticate(self.auth_token)
+ self.worker_id = auth_response["data"]["worker_id"]
+
+ job = FLJob(
+ fl_client=self,
+ grid_client=self.grid_client,
+ model_name=model_name,
+ model_version=model_version,
+ )
+ return job
diff --git a/syft/federated/fl_job.py b/syft/federated/fl_job.py
new file mode 100644
--- /dev/null
+++ b/syft/federated/fl_job.py
@@ -0,0 +1,96 @@
+from syft.grid.grid_client import GridClient
+from syft.grid.grid_client import GridError
+from syft.execution.state import State
+from syft.execution.placeholder import PlaceHolder
+
+
+class EventEmitter:
+ def __init__(self):
+ self.listeners = {}
+ pass
+
+ def add_listener(self, event_name, fn):
+ if event_name not in self.listeners:
+ self.listeners[event_name] = []
+ self.listeners[event_name].append(fn)
+
+ def trigger(self, event_name, *args, **kwargs):
+ if event_name in self.listeners:
+ for fn in self.listeners[event_name]:
+ fn(*args, **kwargs)
+
+
+class FLJob(EventEmitter):
+ EVENT_ACCEPTED = "accepted"
+ EVENT_REJECTED = "rejected"
+ EVENT_ERROR = "error"
+
+ def __init__(self, fl_client, grid_client: GridClient, model_name: str, model_version: str):
+ super().__init__()
+ self.fl_client = fl_client
+ self.grid_client = grid_client
+ self.model_name = model_name
+ self.model_version = model_version
+
+ self.model = None
+ self.plans = {}
+ self.protocols = {}
+ self.cycle_params = {}
+ self.client_config = {}
+
+ def _init_cycle(self, cycle_params: dict):
+ self.cycle_params = cycle_params
+ self.client_config = cycle_params["client_config"]
+
+ worker_id = self.fl_client.worker_id
+ request_key = cycle_params["request_key"]
+
+ # Load model
+ self.model = self.grid_client.get_model(worker_id, request_key, cycle_params["model_id"])
+
+ # Load plans
+ for plan_name, plan_id in cycle_params["plans"].items():
+ self.plans[plan_name] = self.grid_client.get_plan(
+ worker_id, request_key, plan_id, GridClient.PLAN_TYPE_TORCHSCRIPT
+ )
+
+ # Load protocols
+ for protocol_name, protocol_id in cycle_params["protocols"].items():
+ self.protocols[protocol_name] = self.grid_client.get_protocol(
+ worker_id, request_key, protocol_id
+ )
+
+ def start(self):
+ try:
+ speed_info = self.grid_client.get_connection_speed(self.fl_client.worker_id)
+ cycle_request_response = self.grid_client.cycle_request(
+ worker_id=self.fl_client.worker_id,
+ model_name=self.model_name,
+ model_version=self.model_version,
+ speed_info=speed_info,
+ )
+ cycle_params = cycle_request_response["data"]
+
+ if cycle_params["status"] == GridClient.CYCLE_STATUS_ACCEPTED:
+ self._init_cycle(cycle_params)
+ self.trigger(self.EVENT_ACCEPTED, self)
+ elif cycle_params["status"] == GridClient.CYCLE_STATUS_REJECTED:
+ self.trigger(self.EVENT_REJECTED, self, cycle_params.get("timeout", None))
+ except GridError as e:
+ self.trigger(self.EVENT_ERROR, self, f"Grid communication error: {e.error}")
+
+ def report(self, updated_model_params: list):
+ # Calc params diff
+ orig_params = self.model.tensors()
+ diff_params = [orig_params[i] - updated_model_params[i] for i in range(len(orig_params))]
+
+ # Wrap diff in State
+ diff_ph = [PlaceHolder().instantiate(t) for t in diff_params]
+ diff = State(state_placeholders=diff_ph)
+
+ response = self.grid_client.report(
+ worker_id=self.fl_client.worker_id,
+ request_key=self.cycle_params["request_key"],
+ diff=diff,
+ )
+ return response
diff --git a/syft/grid/grid_client.py b/syft/grid/grid_client.py
--- a/syft/grid/grid_client.py
+++ b/syft/grid/grid_client.py
@@ -1,16 +1,34 @@
import json
import binascii
+import base64
import websocket
import websockets
+import requests
import syft as sy
from syft.serde import protobuf
+from syft.execution.state import State
+from syft_proto.execution.v1.plan_pb2 import Plan as PlanPB
+from syft_proto.execution.v1.state_pb2 import State as StatePB
+from syft_proto.execution.v1.protocol_pb2 import Protocol as ProtocolPB
+
TIMEOUT_INTERVAL = 60
+class GridError(BaseException):
+ def __init__(self, error, status):
+ self.status = status
+ self.error = error
+
+
class GridClient:
+ CYCLE_STATUS_ACCEPTED = "accepted"
+ CYCLE_STATUS_REJECTED = "rejected"
+ PLAN_TYPE_LIST = "list"
+ PLAN_TYPE_TORCHSCRIPT = "torchscript"
+
def __init__(self, id: str, address: str, secure: bool = False):
self.id = id
self.address = address
@@ -19,11 +37,15 @@ def __init__(self, id: str, address: str, secure: bool = False):
self.serialize_worker = sy.VirtualWorker(hook=None)
@property
- def url(self):
+ def ws_url(self):
return f"wss://{self.address}" if self.secure else f"ws://{self.address}"
+ @property
+ def http_url(self):
+ return f"https://{self.address}" if self.secure else f"http://{self.address}"
+
def connect(self):
- args_ = {"max_size": None, "timeout": TIMEOUT_INTERVAL, "url": self.url}
+ args_ = {"max_size": None, "timeout": TIMEOUT_INTERVAL, "url": self.ws_url}
self.ws = websocket.create_connection(**args_)
@@ -34,8 +56,32 @@ def _send_msg(self, message: dict) -> dict:
Returns:
response (dict) : response payload.
"""
+ if self.ws is None or not self.ws.connected:
+ self.connect()
+
self.ws.send(json.dumps(message))
- return json.loads(self.ws.recv())
+ json_response = json.loads(self.ws.recv())
+
+ # print("REQ", message)
+ # print("RES", json_response)
+
+ error = json_response["data"].get("error", None)
+ if error is not None:
+ raise GridError(error, None)
+
+ return json_response
+
+ def _send_http_req(self, method, path: str, params: dict = None, body: bytes = None):
+ if method == "GET":
+ res = requests.get(self.http_url + path, params)
+ elif method == "POST":
+ res = requests.post(self.http_url + path, body)
+
+ if not res.ok:
+ raise GridError("HTTP response is not OK", res.status_code)
+
+ response = res.content
+ return response
def _serialize(self, obj):
"""Serializes object to protobuf"""
@@ -48,6 +94,12 @@ def _serialize_object(self, obj):
serialized_object[k] = binascii.hexlify(self._serialize(v)).decode()
return serialized_object
+ def _unserialize(self, serialized_obj, obj_protobuf_type):
+ pb = obj_protobuf_type()
+ pb.ParseFromString(serialized_obj)
+ serialization_worker = sy.VirtualWorker(hook=None, auto_add=False)
+ return protobuf.serde._unbufferize(serialization_worker, pb)
+
def close(self):
self.ws.shutdown()
@@ -79,3 +131,64 @@ def host_federated_training(
}
return self._send_msg(message)
+
+ def authenticate(self, auth_token):
+ message = {
+ "type": "federated/authenticate",
+ "data": {"auth_token": auth_token},
+ }
+
+ return self._send_msg(message)
+
+ def cycle_request(self, worker_id, model_name, model_version, speed_info):
+ message = {
+ "type": "federated/cycle-request",
+ "data": {
+ "worker_id": worker_id,
+ "model": model_name,
+ "version": model_version,
+ **speed_info,
+ },
+ }
+ return self._send_msg(message)
+
+ def get_model(self, worker_id, request_key, model_id):
+ params = {
+ "worker_id": worker_id,
+ "request_key": request_key,
+ "model_id": model_id,
+ }
+ serialized_model = self._send_http_req("GET", "/federated/get-model", params)
+ return self._unserialize(serialized_model, StatePB)
+
+ def get_plan(self, worker_id, request_key, plan_id, receive_operations_as):
+ params = {
+ "worker_id": worker_id,
+ "request_key": request_key,
+ "plan_id": plan_id,
+ "receive_operations_as": receive_operations_as,
+ }
+ serialized_plan = self._send_http_req("GET", "/federated/get-plan", params)
+ return self._unserialize(serialized_plan, PlanPB)
+
+ def get_protocol(self, worker_id, request_key, protocol_id):
+ params = {
+ "worker_id": worker_id,
+ "request_key": request_key,
+ "plan_id": protocol_id,
+ }
+ serialized_protocol = self._send_http_req("GET", "/federated/get-protocol", params)
+ return self._unserialize(serialized_protocol, ProtocolPB)
+
+ def report(self, worker_id: str, request_key: str, diff: State):
+ diff_serialized = self._serialize(diff)
+ diff_base64 = base64.b64encode(diff_serialized).decode("ascii")
+ params = {
+ "type": "federated/report",
+ "data": {"worker_id": worker_id, "request_key": request_key, "diff": diff_base64,},
+ }
+ return self._send_msg(params)
+
+ def get_connection_speed(self, worker_id):
+ # TODO
+ return {"ping": 5, "download": 100, "upload": 100}
| GSoC Project: Build a federated learning worker for PySyft
Based on the [Federated Learning roadmap](https://github.com/OpenMined/Roadmap/blob/master/web_and_mobile_team/projects/federated_learning.md) we need to build a worker library for static federated learning in PySyft. The worker will be based on the other 3 worker libraries that already exist: [syft.js](https://github.com/OpenMined/syft.js) (web), [SwiftSyft](https://github.com/OpenMined/SwiftSyft) (iOS), and [KotlinSyft](https://github.com/OpenMined/KotlinSyft) (Android). The API will be a mirror image of the other federated learning worker libraries, allowing for environments other than a web browser or mobile device to be supported: IoT, Raspberry Pi, desktop application, Windows phone, etc.
**Requirements:**
- Willing to work with the Web & Mobile team and be mentored by @vkkhare, the author of KotlinSyft
- Familiarity with Websockets
- Familiarity with WebRTC
- Good knowledge of PySyft
- Intimate knowledge of Python
- History of developing libraries (in any language)
- Good knowledge and understanding of the problems and challenges described in the [Federated Learning roadmap](https://github.com/OpenMined/Roadmap/blob/master/web_and_mobile_team/projects/federated_learning.md) (questions can be directed to @cereallarceny)
**Difficulty:** High
This is quite a large project and would more or less allow for static federated learning to be performed in any environment. _A federated learning system of this breadth has not previously been developed, anywhere._ Make no mistake, this is a big project, but it also has very far-reaching ramifications for federated learning as a whole. Fortunately, the web and mobile team has already built 3 federated learning worker libraries in the past, so you'll be inheriting the wisdom, patterns, and roadmap that have basically been finalized by your new colleagues. If you're not afraid of hard projects with a pretty large payoff, this is the one for you.
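To give a feel for the client-side API this worker introduces, here is a usage sketch based on the `FLClient`/`FLJob` classes in the patch above (the grid URL, auth token, and model name are placeholders):
```python
from syft.federated.fl_client import FLClient

client = FLClient(url="ws://localhost:5000", auth_token="<token>")
job = client.new_job("my_model", "1.0")

# Event callbacks mirror the other three worker libraries
job.add_listener(job.EVENT_ACCEPTED, lambda job: print("cycle accepted"))
job.add_listener(job.EVENT_REJECTED, lambda job, timeout: print("rejected, retry in", timeout))
job.add_listener(job.EVENT_ERROR, lambda job, error: print("error:", error))

job.start()
```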
| 2020-04-29T23:23:24 |
||
OpenMined/PySyft | 3,441 | OpenMined__PySyft-3441 | [
"3431"
] | 63af4f2a487d5e96d3a85d256dff13ecd8cbdbb5 | diff --git a/syft/frameworks/torch/nn/rnn.py b/syft/frameworks/torch/nn/rnn.py
--- a/syft/frameworks/torch/nn/rnn.py
+++ b/syft/frameworks/torch/nn/rnn.py
@@ -184,107 +184,88 @@ def __init__(
# TODO: implement a nn.Dropout class for PySyft
# Link to issue: https://github.com/OpenMined/PySyft/issues/2500
- # Build RNN layers
- self.rnn_forward = nn.ModuleList()
- for layer in range(self.num_layers):
- if layer == 0:
- self.rnn_forward.append(base_cell(input_size, hidden_size, bias, nonlinearity))
- else:
- self.rnn_forward.append(base_cell(hidden_size, hidden_size, bias, nonlinearity))
+ # Build RNN forward layers
+ sizes = [input_size, *(hidden_size for _ in range(self.num_layers - 1))]
+ self.rnn_forward = nn.ModuleList(
+ (base_cell(sz, hidden_size, bias, nonlinearity) for sz in sizes)
+ )
+ # Build RNN backward layers, if needed
if self.bidirectional:
- self.rnn_backward = nn.ModuleList()
- for layer in range(self.num_layers):
- if layer == 0:
- self.rnn_backward.append(base_cell(input_size, hidden_size, bias, nonlinearity))
- else:
- self.rnn_backward.append(
- base_cell(hidden_size, hidden_size, bias, nonlinearity)
- )
+ self.rnn_backward = nn.ModuleList(
+ (base_cell(sz, hidden_size, bias, nonlinearity) for sz in sizes)
+ )
- def forward(self, x, h=None):
- # If batch_first == True, swap axis with seq_len
- # At the end of the process we swap it back to the original structure
+ def forward(self, x, hc=None):
+ # If batch_first == True, swap batch with seq_len
+ # At the end of the procedure we swap it back to the original structure
if self.batch_first:
- x, h = self._swap_axis(x, h)
+ x = x.transpose(0, 1)
+
+ # If hc is not None, hc is either a Tensor (RNNCell or GRUCell hidden state),
+ # or a 2-tuple of Tensors (LSTMCell hidden and cell states).
+ # For convenience, we make hc always listy so that:
+ # hc[0] is the hidden state
+ # hc[1] if it exists, is the cell state
+ # At the end of the procedure, we swap it back to the original structure
+ if hc is None:
+ # Initialize hc
+ hc = [self._init_hidden(x) for _ in range(2 if self.is_lstm else 1)]
+ else:
+ # Standardize hc per comment above
+ if not self.is_lstm:
+ hc = [hc]
- # If it is LSTM, get hidden and cell states
- if h is not None and self.is_lstm:
- h, c = h
+ # As we did to x above, we swap back at the end of the procedure
+ if self.batch_first:
+ hc = [item.transpose(0, 1) for item in hc]
batch_size = x.shape[1]
seq_len = x.shape[0]
- # Initiate states if needed
- if h is None:
- h = self._init_hidden(x)
- c = self._init_hidden(x) if self.is_lstm else None
-
- # If bidirectional==True, split states in two, each one for each direction
+ # If bidirectional==True, split states in two, one for each direction
if self.bidirectional:
- h = h.contiguous().view(self.num_layers, 2, batch_size, self.hidden_size)
- h_for = h[:, 0, :, :]
- h_back = h[:, 1, :, :]
- if self.is_lstm:
- c = c.contiguous().view(self.num_layers, 2, batch_size, self.hidden_size)
- c_for = c[:, 0, :, :]
- c_back = c[:, 1, :, :]
- else:
- c_for = c_back = None
-
+ hc = [
+ item.contiguous().view(self.num_layers, 2, batch_size, self.hidden_size)
+ for item in hc
+ ]
+ hc_fwd = [item[:, 0, :, :] for item in hc]
+ hc_back = [item[:, 1, :, :] for item in hc]
else:
- h_for = h
- c_for = c if self.is_lstm else None
+ hc_fwd = hc
# Run through rnn in the forward direction
output = x.new(seq_len, batch_size, self.hidden_size).zero_()
for t in range(seq_len):
- h_for, c_for = self._apply_time_step(x, h_for, c_for, t)
- output[t, :, :] = h_for[-1, :, :]
-
- hidden = h_for
- cell = c_for # None if it is not an LSTM
+ hc_fwd = self._apply_time_step(x, hc_fwd, t)
+ output[t, :, :] = hc_fwd[0][-1, :, :]
# Run through rnn in the backward direction if bidirectional==True
if self.bidirectional:
output_back = x.new(seq_len, batch_size, self.hidden_size).zero_()
for t in range(seq_len - 1, -1, -1):
- h_back, c_back = self._apply_time_step(x, h_back, c_back, t, reverse_direction=True)
- output_back[t, :, :] = h_back[-1, :, :]
+ hc_back = self._apply_time_step(x, hc_back, t, reverse_direction=True)
+ output_back[t, :, :] = hc_back[0][-1, :, :]
# Concatenate both directions
output = torch.cat((output, output_back), dim=-1)
- hidden = torch.cat((hidden, h_back), dim=0)
- if self.is_lstm:
- cell = torch.cat((cell, c_back), dim=0)
+ hidden = [
+ torch.cat((hid_item, back_item), dim=0)
+ for hid_item, back_item in zip(hc_fwd, hc_back)
+ ]
+ else:
+ hidden = hc_fwd
# If batch_first == True, swap axis back to get original structure
if self.batch_first:
- output = torch.transpose(output, 0, 1)
- hidden = torch.transpose(hidden, 0, 1)
- if self.is_lstm:
- cell = torch.transpose(cell, 0, 1)
+ output = output.transpose(0, 1)
+ hidden = [item.transpose(0, 1) for item in hidden]
- hidden = (hidden, cell) if self.is_lstm else hidden
+ # Reshape hidden to the original shape of hc
+ hidden = tuple(hidden) if self.is_lstm else hidden[0]
return output, hidden
- def _swap_axis(self, x, h):
- """
- This method swap the axes for batch_size and seq_len. It is used when
- batch_first==True.
- """
- x = torch.transpose(x, 0, 1)
- if h is not None:
- if self.is_lstm:
- h, c = h
- h = torch.transpose(h, 0, 1)
- c = torch.transpose(c, 0, 1)
- h = (h, c)
- else:
- h = torch.transpose(h, 0, 1)
- return x, h
-
def _init_hidden(self, input):
"""
This method initializes a hidden state when no hidden state is provided
@@ -309,34 +290,24 @@ def _init_hidden(self, input):
h = h.share(*owners, crypto_provider=crypto_provider)
return h
- def _apply_time_step(self, x, h, c, t, reverse_direction=False):
+ def _apply_time_step(self, x, hc, t, reverse_direction=False):
"""
Apply RNN layers at time t, given input and previous hidden states
"""
rnn_layers = self.rnn_backward if reverse_direction else self.rnn_forward
- h_next = h.new(h.shape).zero_()
- c_next = c.new(c.shape).zero_() if self.is_lstm else None
+ hc = torch.stack([*hc])
+ hc_next = torch.zeros_like(hc)
for layer in range(self.num_layers):
- if layer == 0:
- if self.is_lstm:
- h_next[layer, :, :], c_next[layer, :, :] = rnn_layers[layer](
- x[t, :, :], (h[layer, :, :], c[layer, :, :])
- )
- else:
- h_next[layer, :, :] = rnn_layers[layer](x[t, :, :], h[layer, :, :])
+ inp = x[t, :, :] if layer == 0 else hc_next[0][layer - 1, :, :].clone()
+
+ if self.is_lstm:
+ hc_next[:, layer, :, :] = torch.stack(rnn_layers[layer](inp, hc[:, layer, :, :]))
else:
- if self.is_lstm:
- h_next[layer, :, :], c_next[layer, :, :] = rnn_layers[layer](
- h_next[layer - 1, :, :].clone(), (h[layer, :, :], c[layer, :, :])
- )
- else:
- h_next[layer, :, :] = rnn_layers[layer](
- h_next[layer - 1, :, :].clone(), h[layer, :, :]
- )
-
- return h_next, c_next
+ hc_next[0][layer, :, :] = rnn_layers[layer](inp, hc[0][layer, :, :])
+
+ return hc_next
class RNN(RNNBase):
| Reduce complexity of ```forward``` from rnn
Split the code from ```forward``` into two separate functions and remove the ```noqa: C901```.
**Describe alternatives you've considered**
Simplify the function such that no split is required.
**Additional context**
Code quality:
```
2020-04-29T13:13:32.6026063Z ./syft/frameworks/torch/nn/rnn.py:205:5: C901 'RNNBase.forward' is too complex (12)
2020-04-29T13:13:32.6026250Z def forward(self, x, h=None):
```
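For reference, most of the branching comes from the per-layer `if layer == 0` special case when building the layers; precomputing the input sizes removes it, e.g.:
```python
# Layer 0 consumes input_size, every later layer consumes hidden_size
sizes = [input_size] + [hidden_size] * (num_layers - 1)
rnn_forward = nn.ModuleList(
    base_cell(sz, hidden_size, bias, nonlinearity) for sz in sizes
)
```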
| Hey, is it okay if I start working on this one?
Hey, sure. Assigned the issue to you. | 2020-05-03T15:06:43 |
|
OpenMined/PySyft | 3,442 | OpenMined__PySyft-3442 | [
"3435"
] | 822eb2552ef0f2495dd5803648e6e3795b0bb94f | diff --git a/syft/generic/pointers/pointer_tensor.py b/syft/generic/pointers/pointer_tensor.py
--- a/syft/generic/pointers/pointer_tensor.py
+++ b/syft/generic/pointers/pointer_tensor.py
@@ -274,6 +274,9 @@ def move(self, destination: AbstractWorker, requires_grad: bool = False):
if self.owner.id == destination.id:
return self.get()
+ if self.location.id == destination.id:
+ return self
+
ptr = self.remote_send(destination, requires_grad=requires_grad)
# We make the pointer point at the remote value. As the id doesn't change,
| diff --git a/test/torch/pointers/test_pointer_tensor.py b/test/torch/pointers/test_pointer_tensor.py
--- a/test/torch/pointers/test_pointer_tensor.py
+++ b/test/torch/pointers/test_pointer_tensor.py
@@ -329,6 +329,12 @@ def test_move(workers):
z = y.move(me)
assert (z == t).all()
+ # Move object to same location
+ alice.clear_objects()
+ t = torch.tensor([1.0, 2, 3, 4, 5]).send(bob)
+ t = t.move(bob)
+ assert torch.all(torch.eq(t.get(), torch.tensor([1.0, 2, 3, 4, 5])))
+
def test_combine_pointers(workers):
"""
| Move() on same machine
Dear all,
I was not sure whether to classify this as a "bug" or a "new feature"; please re-label it in case this is wrong.
Is the behavior of the move() method intended to throw an error when trying to move an object to a virtual worker that already possesses the object?
When this happens, an `ObjectNotFoundError` is raised:
_ObjectNotFoundError: Object "56137057453" not found on worker!!! You just tried to interact with an object ID:56137057453 on <VirtualWorker id:manager_alice #objects:12> which does not exist!!! Use .send() and .get() on all your tensors to make sure they're on the same machines. If you think this tensor does exist, check the ._objects dictionary on the worker and see for yourself!!! The most common reason this error happens is because someone calls .get() on the object's pointer without realizing it (which deletes the remote object and sends it to the pointer). Check your code to make sure you haven't already called .get() on this pointer!!!_
Of course one can handle this in the control logic by checking object ownership before calling the move method, but I think it would be nice if move() simply ignored the request when one tries to move an object to a worker that already possesses it.
When looping over multiple workers, this would simplify the control logic.
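In other words, the desired behaviour would be:
```python
t = torch.tensor([1.0, 2, 3, 4, 5]).send(bob)
t = t.move(bob)  # already on bob: should be a no-op, not an error
assert torch.all(torch.eq(t.get(), torch.tensor([1.0, 2, 3, 4, 5])))
```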
| 2020-05-03T16:01:05 |
|
OpenMined/PySyft | 3,521 | OpenMined__PySyft-3521 | [
"3490"
] | 94134c3bc8d038913eb6842240627ea95b3dd4dc | diff --git a/syft/execution/protocol.py b/syft/execution/protocol.py
--- a/syft/execution/protocol.py
+++ b/syft/execution/protocol.py
@@ -19,6 +19,7 @@
from syft.generic.frameworks.types import FrameworkLayerModule
from syft.generic.object import AbstractObject
from syft.workers.abstract import AbstractWorker
+from syft.workers.virtual import VirtualWorker
from syft_proto.execution.v1.protocol_pb2 import Protocol as ProtocolPB
@@ -32,12 +33,16 @@ class func2protocol(object):
This class should be used only as a decorator.
"""
- def __init__(self, args_shape=None):
+ def __init__(self, roles: list = [], args_shape: dict = {}):
self.args_shape = args_shape
+ self.role_names = roles
def __call__(self, protocol_function):
# create the roles present in decorator
- roles = {role_id: Role() for role_id in self.args_shape.keys()}
+ roles = {
+ role_id: Role(worker=VirtualWorker(id=role_id, hook=sy.local_worker.hook))
+ for role_id in self.role_names
+ }
protocol = Protocol(
name=protocol_function.__name__,
@@ -47,19 +52,16 @@ def __call__(self, protocol_function):
owner=sy.local_worker,
)
- # Build the protocol automatically
- # TODO We can always build automatically, can't we? Except if workers doesn't have
- # tensors yet in store. Do we handle that?
- if self.args_shape:
- try:
- protocol.build()
- except TypeError as e:
- raise ValueError(
- "Automatic build using @func2protocol failed!\nCheck that:\n"
- " - you have provided the correct number of shapes in args_shape\n"
- " - you have no simple numbers like int or float as args. If you do "
- "so, please consider using a tensor instead."
- )
+ try:
+ protocol.build()
+ except TypeError as e:
+ raise ValueError(
+ "Automatic build using @func2protocol failed!\nCheck that:\n"
+ " - you have provided the correct number of shapes in args_shape\n"
+ " - you have no simple numbers like int or float as args. If you do "
+ "so, please consider using a tensor instead."
+ )
+
return protocol
@@ -139,7 +141,7 @@ def build(self):
self.toggle_tracing(True)
self.is_building = True
- results = self.forward(self.roles)
+ results = self.forward(*self.roles.values())
# Disable tracing
self.toggle_tracing(False)
diff --git a/syft/execution/role.py b/syft/execution/role.py
--- a/syft/execution/role.py
+++ b/syft/execution/role.py
@@ -12,10 +12,10 @@
from syft.execution.placeholder_id import PlaceholderId
from syft.execution.state import State
from syft.generic.frameworks.types import FrameworkTensor
+from syft.serde.syft_serializable import SyftSerializable
from syft.workers.abstract import AbstractWorker
from syft_proto.execution.v1.role_pb2 import Role as RolePB
-from syft.serde.syft_serializable import SyftSerializable
class Role(SyftSerializable):
@@ -25,16 +25,16 @@ class Role(SyftSerializable):
def __init__(
self,
+ id: Union[str, int] = None,
+ worker: AbstractWorker = None,
state: State = None,
actions: List[Action] = None,
placeholders: Dict[Union[str, int], PlaceHolder] = None,
input_placeholder_ids: Tuple[int, str] = None,
output_placeholder_ids: Tuple[int, str] = None,
- # General kwargs
- id: Union[str, int] = None,
):
self.id = id or sy.ID_PROVIDER.pop()
-
+ self.worker = worker
self.actions = actions or []
# All placeholders
| diff --git a/test/execution/test_protocol.py b/test/execution/test_protocol.py
--- a/test/execution/test_protocol.py
+++ b/test/execution/test_protocol.py
@@ -2,16 +2,28 @@
import torch as th
import syft as sy
+from syft.execution.role import Role
-def test_trace_communication_actions(workers):
- bob = workers["bob"]
+def test_func2protocol_creates_roles():
+ @sy.func2protocol(roles=["alice", "bob"])
+ def protocol(alice, bob):
+ tensor = alice.fetch(th.tensor([1]))
- @sy.func2protocol(args_shape={"alice": ((1,),)})
- def protocol(roles):
- tensor = roles["alice"].fetch(th.tensor([1]))
+ return tensor
+
+ assert protocol.is_built
+ assert len(protocol.roles) == 2
+ assert isinstance(protocol.roles["alice"], Role)
+ assert isinstance(protocol.roles["bob"], Role)
- tensor.send(bob)
+
+def test_trace_communication_actions_send():
+ @sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),)})
+ def protocol(alice, bob):
+ tensor = alice.fetch(th.tensor([1]))
+
+ tensor.send(bob.worker)
return tensor
traced_actions = protocol.roles["alice"].actions
@@ -21,14 +33,12 @@ def protocol(roles):
assert "send" in [action.name for action in traced_actions]
-def test_trace_communication_actions_get(workers):
- bob = workers["bob"]
+def test_trace_communication_actions_get():
+ @sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),)})
+ def protocol(alice, bob):
+ tensor = alice.fetch(th.tensor([1]))
- @sy.func2protocol(args_shape={"alice": ((1,),)})
- def protocol(roles):
- tensor = roles["alice"].fetch(th.tensor([1]))
-
- ptr = tensor.send(bob)
+ ptr = tensor.send(bob.worker)
res = ptr.get()
return res
@@ -39,15 +49,13 @@ def protocol(roles):
assert "get" in [action.name for action in traced_actions]
-def test_trace_communication_actions_send(workers):
- alice, bob = workers["alice"], workers["bob"]
-
- @sy.func2protocol(args_shape={"alice": ((1,),)})
- def protocol(roles):
- tensor = roles["alice"].fetch(th.tensor([1]))
+def test_trace_communication_actions_ptr_send():
+ @sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),)})
+ def protocol(alice, bob):
+ tensor = alice.fetch(th.tensor([1]))
- ptr = tensor.send(bob)
- res = ptr.send(alice)
+ ptr = tensor.send(bob.worker)
+ res = ptr.send(alice.worker)
return res
traced_actions = protocol.roles["alice"].actions
@@ -57,15 +65,13 @@ def protocol(roles):
assert "send" in [action.name for action in traced_actions]
-def test_trace_communication_actions_move(workers):
- alice, bob = workers["alice"], workers["bob"]
+def test_trace_communication_actions_move():
+ @sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),)})
+ def protocol(alice, bob):
+ tensor = alice.fetch(th.tensor([1]))
- @sy.func2protocol(args_shape={"alice": ((1,),)})
- def protocol(roles):
- tensor = roles["alice"].fetch(th.tensor([1]))
-
- ptr = tensor.send(bob)
- res = ptr.move(alice)
+ ptr = tensor.send(bob.worker)
+ res = ptr.move(alice.worker)
return res
traced_actions = protocol.roles["alice"].actions
@@ -75,16 +81,14 @@ def protocol(roles):
assert "move" in [action.name for action in traced_actions]
-def test_trace_communication_actions_share(workers):
- alice, bob = workers["alice"], workers["bob"]
-
- @sy.func2protocol(args_shape={"alice": ((1,),)})
- def protocol(roles):
- tensor = roles["alice"].fetch(th.tensor([1]))
+def test_trace_communication_actions_share():
+ @sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),)})
+ def protocol(alice, bob):
+ tensor = alice.fetch(th.tensor([1]))
- ptr = tensor.send(bob)
+ ptr = tensor.send(bob.worker)
ptr = ptr.fix_prec()
- res = ptr.share(alice, bob)
+ res = ptr.share(alice.worker, bob.worker)
return res
traced_actions = protocol.roles["alice"].actions
@@ -94,16 +98,14 @@ def protocol(roles):
assert "share" in [action.name for action in traced_actions]
-def test_trace_communication_actions_share_(workers):
- alice, bob = workers["alice"], workers["bob"]
+def test_trace_communication_actions_share_():
+ @sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),)})
+ def protocol(alice, bob):
+ tensor = alice.fetch(th.tensor([1]))
- @sy.func2protocol(args_shape={"alice": ((1,),)})
- def protocol(roles):
- tensor = roles["alice"].fetch(th.tensor([1]))
-
- ptr = tensor.send(bob)
+ ptr = tensor.send(bob.worker)
ptr = ptr.fix_prec()
- res = ptr.share_(alice, bob)
+ res = ptr.share_(alice.worker, bob.worker)
return res
traced_actions = protocol.roles["alice"].actions
@@ -113,15 +115,13 @@ def protocol(roles):
assert "share_" in [action.name for action in traced_actions]
-def test_trace_communication_actions_remote_send(workers):
- alice, bob = workers["alice"], workers["bob"]
-
- @sy.func2protocol(args_shape={"alice": ((1,),)})
- def protocol(roles):
- tensor = roles["alice"].fetch(th.tensor([1]))
+def test_trace_communication_actions_remote_send():
+ @sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),)})
+ def protocol(alice, bob):
+ tensor = alice.fetch(th.tensor([1]))
- ptr = tensor.send(bob)
- res = ptr.remote_send(alice)
+ ptr = tensor.send(bob.worker)
+ res = ptr.remote_send(alice.worker)
return res
traced_actions = protocol.roles["alice"].actions
@@ -131,14 +131,12 @@ def protocol(roles):
assert "remote_send" in [action.name for action in traced_actions]
-def test_trace_communication_actions_mid_get(workers):
- bob = workers["bob"]
+def test_trace_communication_actions_mid_get():
+ @sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),)})
+ def protocol(alice, bob):
+ tensor = alice.fetch(th.tensor([1]))
- @sy.func2protocol(args_shape={"alice": ((1,),)})
- def protocol(roles):
- tensor = roles["alice"].fetch(th.tensor([1]))
-
- ptr = tensor.send(bob)
+ ptr = tensor.send(bob.worker)
res = ptr.mid_get()
return res
@@ -149,14 +147,12 @@ def protocol(roles):
assert "mid_get" in [action.name for action in traced_actions]
-def test_trace_communication_actions_remote_get(workers):
- alice, bob = workers["alice"], workers["bob"]
-
- @sy.func2protocol(args_shape={"alice": ((1,),)})
- def protocol(roles):
- tensor = roles["alice"].fetch(th.tensor([1]))
+def test_trace_communication_actions_remote_get():
+ @sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),)})
+ def protocol(alice, bob):
+ tensor = alice.fetch(th.tensor([1]))
- ptr = tensor.send(bob).send(alice)
+ ptr = tensor.send(bob.worker).send(alice.worker)
res = ptr.remote_get()
return res
@@ -167,15 +163,11 @@ def protocol(roles):
assert "remote_get" in [action.name for action in traced_actions]
-def test_create_roles_from_decorator(workers):
-
- roles_args_shape = {"alice": ((1,),), "bob": ((1,),)}
-
- @sy.func2protocol(args_shape=roles_args_shape)
- def protocol(roles):
- # fetch tensors from stores
- tensor1 = roles["alice"].fetch(th.tensor([1]))
- tensor2 = roles["bob"].fetch(th.tensor([1]))
+def test_create_roles_from_decorator():
+ @sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),), "bob": ((1,),)})
+ def protocol(alice, bob):
+ tensor1 = alice.fetch(th.tensor([1]))
+ tensor2 = bob.fetch(th.tensor([2]))
t1plus = tensor1 + 1
t2plus = tensor2 + 1
@@ -187,12 +179,11 @@ def protocol(roles):
assert "bob" in protocol.roles
-def test_multi_role_tracing(workers):
- @sy.func2protocol(args_shape={"alice": ((1,),), "bob": ((1,),)})
- def protocol(roles):
- # fetch tensors from stores
- tensor1 = roles["alice"].fetch(th.tensor([1]))
- tensor2 = roles["bob"].fetch(th.tensor([1]))
+def test_multi_role_tracing():
+ @sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),), "bob": ((1,),)})
+ def protocol(alice, bob):
+ tensor1 = alice.fetch(th.tensor([1]))
+ tensor2 = bob.fetch(th.tensor([2]))
t1plus = tensor1 + 1
t2plus = tensor2 + 1
@@ -208,12 +199,12 @@ def protocol(roles):
assert len(protocol.roles["bob"].actions) == 1
-def test_multi_role_execution(workers):
- @sy.func2protocol(args_shape={"alice": ((1,), (1,)), "bob": ((1,),)})
- def protocol(roles):
- tensor1 = roles["alice"].fetch(th.tensor([1]))
- tensor2 = roles["bob"].fetch(th.tensor([2]))
- tensor3 = roles["alice"].fetch(th.tensor([3]))
+def test_multi_role_execution():
+ @sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,), (1,)), "bob": ((1,),)})
+ def protocol(alice, bob):
+ tensor1 = alice.fetch(th.tensor([1]))
+ tensor2 = bob.fetch(th.tensor([2]))
+ tensor3 = alice.fetch(th.tensor([3]))
res1 = tensor2
res2 = tensor1 + tensor3
@@ -231,12 +222,11 @@ def protocol(roles):
assert (dict_res["alice"][0] == th.tensor([4])).all()
-def test_copy(workers):
- @sy.func2protocol(args_shape={"alice": ((1,),), "bob": ((1,),)})
- def protocol(roles):
- # fetch tensors from stores
- tensor1 = roles["alice"].fetch(th.tensor([1]))
- tensor2 = roles["bob"].fetch(th.tensor([1]))
+def test_copy():
+ @sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),), "bob": ((1,),)})
+ def protocol(alice, bob):
+ tensor1 = alice.fetch(th.tensor([1]))
+ tensor2 = bob.fetch(th.tensor([2]))
t1plus = tensor1 + 1
t2plus = tensor2 + 1
diff --git a/test/serde/serde_helpers.py b/test/serde/serde_helpers.py
--- a/test/serde/serde_helpers.py
+++ b/test/serde/serde_helpers.py
@@ -972,12 +972,10 @@ def make_protocol(**kwargs):
alice = kwargs["workers"]["alice"]
bob = kwargs["workers"]["bob"]
- @syft.func2protocol(args_shape={"alice": ((1,),), "bob": ((1,),)})
- def protocol(roles):
- # fetch tensors from stores
- # TODO fix fetch once we have the real implementation of it
- tensor1 = roles["alice"].fetch(torch.tensor([1]))
- tensor2 = roles["bob"].fetch(torch.tensor([1]))
+ @syft.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),), "bob": ((1,),)})
+ def protocol(alice, bob):
+ tensor1 = alice.fetch(torch.tensor([1]))
+ tensor2 = bob.fetch(torch.tensor([1]))
t1plus = tensor1 + 1
t2plus = tensor2 + 1
| Create temporary `VirtualWorkers` for tracing in each `Role` of a `Protocol`
**Is your feature request related to a problem? Please describe.**
`Protocols` currently require `Workers` to exist before tracing, which hampers the future assignment of multiple `Workers` to a `Role`.
**Describe the solution you'd like**
When creating `Roles` in `@sy.func2protocol`, create a `VirtualWorker` for each `Role` and assign it to an attribute on that `Role`.
**Describe alternatives you've considered**
* ???
**Additional context**
Depends on #3488.
| 2020-05-13T16:19:52 |
|
OpenMined/PySyft | 3,522 | OpenMined__PySyft-3522 | [
"3492"
] | 7d4f6239ae273af3740422989cdf8c2319370734 | diff --git a/syft/execution/role.py b/syft/execution/role.py
--- a/syft/execution/role.py
+++ b/syft/execution/role.py
@@ -11,6 +11,7 @@
from syft.execution.placeholder import PlaceHolder
from syft.execution.placeholder_id import PlaceholderId
from syft.execution.state import State
+from syft.execution.tracing import FrameworkWrapper
from syft.generic.frameworks.types import FrameworkTensor
from syft.serde.syft_serializable import SyftSerializable
from syft.workers.abstract import AbstractWorker
@@ -34,7 +35,8 @@ def __init__(
output_placeholder_ids: Tuple[int, str] = None,
):
self.id = id or sy.ID_PROVIDER.pop()
- self.worker = worker
+ self.worker = worker or sy.local_worker
+
self.actions = actions or []
# All placeholders
@@ -47,6 +49,10 @@ def __init__(
self.state = state or State()
self.tracing = False
+ for name, package in framework_packages.items():
+ tracing_wrapper = FrameworkWrapper(package=package, role=self, owner=self.worker)
+ setattr(self, name, tracing_wrapper)
+
def input_placeholders(self):
return [self.placeholders[id_] for id_ in self.input_placeholder_ids]
| diff --git a/test/execution/test_protocol.py b/test/execution/test_protocol.py
--- a/test/execution/test_protocol.py
+++ b/test/execution/test_protocol.py
@@ -8,7 +8,7 @@
def test_func2protocol_creates_roles():
@sy.func2protocol(roles=["alice", "bob"])
def protocol(alice, bob):
- tensor = alice.fetch(th.tensor([1]))
+ tensor = alice.torch.tensor([1])
return tensor
@@ -18,10 +18,25 @@ def protocol(alice, bob):
assert isinstance(protocol.roles["bob"], Role)
+def test_framework_methods_traced_by_role():
+ @sy.func2protocol(roles=["alice", "bob"])
+ def protocol(alice, bob):
+ tensor1 = alice.torch.rand([4, 4])
+ tensor2 = bob.torch.rand([4, 4])
+
+ return tensor1, tensor2
+
+ assert protocol.is_built
+
+ for role in protocol.roles.values():
+ assert len(role.actions) == 1
+ assert "torch.rand" in [action.name for action in role.actions]
+
+
def test_trace_communication_actions_send():
@sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),)})
def protocol(alice, bob):
- tensor = alice.fetch(th.tensor([1]))
+ tensor = alice.torch.tensor([1])
tensor.send(bob.worker)
return tensor
@@ -29,14 +44,14 @@ def protocol(alice, bob):
traced_actions = protocol.roles["alice"].actions
assert protocol.is_built
- assert len(traced_actions) == 1
+ assert len(traced_actions) == 2
assert "send" in [action.name for action in traced_actions]
def test_trace_communication_actions_get():
@sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),)})
def protocol(alice, bob):
- tensor = alice.fetch(th.tensor([1]))
+ tensor = alice.torch.tensor([1])
ptr = tensor.send(bob.worker)
res = ptr.get()
@@ -45,14 +60,14 @@ def protocol(alice, bob):
traced_actions = protocol.roles["alice"].actions
assert protocol.is_built
- assert len(traced_actions) == 2
+ assert len(traced_actions) == 3
assert "get" in [action.name for action in traced_actions]
def test_trace_communication_actions_ptr_send():
@sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),)})
def protocol(alice, bob):
- tensor = alice.fetch(th.tensor([1]))
+ tensor = alice.torch.tensor([1])
ptr = tensor.send(bob.worker)
res = ptr.send(alice.worker)
@@ -61,14 +76,14 @@ def protocol(alice, bob):
traced_actions = protocol.roles["alice"].actions
assert protocol.is_built
- assert len(traced_actions) == 2
+ assert len(traced_actions) == 3
assert "send" in [action.name for action in traced_actions]
def test_trace_communication_actions_move():
@sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),)})
def protocol(alice, bob):
- tensor = alice.fetch(th.tensor([1]))
+ tensor = alice.torch.tensor([1])
ptr = tensor.send(bob.worker)
res = ptr.move(alice.worker)
@@ -77,14 +92,14 @@ def protocol(alice, bob):
traced_actions = protocol.roles["alice"].actions
assert protocol.is_built
- assert len(traced_actions) == 2
+ assert len(traced_actions) == 3
assert "move" in [action.name for action in traced_actions]
def test_trace_communication_actions_share():
@sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),)})
def protocol(alice, bob):
- tensor = alice.fetch(th.tensor([1]))
+ tensor = alice.torch.tensor([1])
ptr = tensor.send(bob.worker)
ptr = ptr.fix_prec()
@@ -94,14 +109,14 @@ def protocol(alice, bob):
traced_actions = protocol.roles["alice"].actions
assert protocol.is_built
- assert len(traced_actions) == 3
+ assert len(traced_actions) == 4
assert "share" in [action.name for action in traced_actions]
def test_trace_communication_actions_share_():
@sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),)})
def protocol(alice, bob):
- tensor = alice.fetch(th.tensor([1]))
+ tensor = alice.torch.tensor([1])
ptr = tensor.send(bob.worker)
ptr = ptr.fix_prec()
@@ -111,14 +126,14 @@ def protocol(alice, bob):
traced_actions = protocol.roles["alice"].actions
assert protocol.is_built
- assert len(traced_actions) == 3
+ assert len(traced_actions) == 4
assert "share_" in [action.name for action in traced_actions]
def test_trace_communication_actions_remote_send():
@sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),)})
def protocol(alice, bob):
- tensor = alice.fetch(th.tensor([1]))
+ tensor = alice.torch.tensor([1])
ptr = tensor.send(bob.worker)
res = ptr.remote_send(alice.worker)
@@ -127,14 +142,14 @@ def protocol(alice, bob):
traced_actions = protocol.roles["alice"].actions
assert protocol.is_built
- assert len(traced_actions) == 2
+ assert len(traced_actions) == 3
assert "remote_send" in [action.name for action in traced_actions]
def test_trace_communication_actions_mid_get():
@sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),)})
def protocol(alice, bob):
- tensor = alice.fetch(th.tensor([1]))
+ tensor = alice.torch.tensor([1])
ptr = tensor.send(bob.worker)
res = ptr.mid_get()
@@ -143,14 +158,14 @@ def protocol(alice, bob):
traced_actions = protocol.roles["alice"].actions
assert protocol.is_built
- assert len(traced_actions) == 2
+ assert len(traced_actions) == 3
assert "mid_get" in [action.name for action in traced_actions]
def test_trace_communication_actions_remote_get():
@sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),)})
def protocol(alice, bob):
- tensor = alice.fetch(th.tensor([1]))
+ tensor = alice.torch.tensor([1])
ptr = tensor.send(bob.worker).send(alice.worker)
res = ptr.remote_get()
@@ -159,15 +174,15 @@ def protocol(alice, bob):
traced_actions = protocol.roles["alice"].actions
assert protocol.is_built
- assert len(traced_actions) == 3
+ assert len(traced_actions) == 4
assert "remote_get" in [action.name for action in traced_actions]
def test_create_roles_from_decorator():
@sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),), "bob": ((1,),)})
def protocol(alice, bob):
- tensor1 = alice.fetch(th.tensor([1]))
- tensor2 = bob.fetch(th.tensor([2]))
+ tensor1 = alice.torch.tensor([1])
+ tensor2 = bob.torch.tensor([2])
t1plus = tensor1 + 1
t2plus = tensor2 + 1
@@ -182,8 +197,8 @@ def protocol(alice, bob):
def test_multi_role_tracing():
@sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),), "bob": ((1,),)})
def protocol(alice, bob):
- tensor1 = alice.fetch(th.tensor([1]))
- tensor2 = bob.fetch(th.tensor([2]))
+ tensor1 = alice.torch.tensor([1])
+ tensor2 = bob.torch.tensor([2])
t1plus = tensor1 + 1
t2plus = tensor2 + 1
@@ -195,16 +210,16 @@ def protocol(alice, bob):
assert protocol.is_built
assert len(protocol.roles) == 2
- assert len(protocol.roles["alice"].actions) == 1
- assert len(protocol.roles["bob"].actions) == 1
+ assert len(protocol.roles["alice"].actions) == 2
+ assert len(protocol.roles["bob"].actions) == 2
def test_multi_role_execution():
@sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,), (1,)), "bob": ((1,),)})
def protocol(alice, bob):
- tensor1 = alice.fetch(th.tensor([1]))
- tensor2 = bob.fetch(th.tensor([2]))
- tensor3 = alice.fetch(th.tensor([3]))
+ tensor1 = alice.torch.tensor([1])
+ tensor2 = bob.torch.tensor([2])
+ tensor3 = alice.torch.tensor([3])
res1 = tensor2
res2 = tensor1 + tensor3
@@ -225,8 +240,8 @@ def protocol(alice, bob):
def test_copy():
@sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),), "bob": ((1,),)})
def protocol(alice, bob):
- tensor1 = alice.fetch(th.tensor([1]))
- tensor2 = bob.fetch(th.tensor([2]))
+ tensor1 = alice.torch.tensor([1])
+ tensor2 = bob.torch.tensor([2])
t1plus = tensor1 + 1
t2plus = tensor2 + 1
| Make it possible for `Protocols` to generate tensors on specific `Roles`/`Workers`
**Is your feature request related to a problem? Please describe.**
When tracing and executing `Protocols`, it's not yet possible to locally generate new tensors (e.g. with `torch.rand()`) and have them registered on the correct `Role`/`Workers`.
**Describe the solution you'd like**
Give each `Role` a `torch` attribute that contains a `FrameworkWrapper` around the actual `torch` module for tracing into that `Role`.
**Describe alternatives you've considered**
* I suppose we could instead add an additional `kwarg` (specifying which `Role` to trace into) to all `torch` methods and strip it out in the `FrameworkWrapper` before forwarding to `torch`.
**Additional context**
This may depend on #3490 (not sure.)
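A minimal sketch of the resulting tracing API, following the updated tests in this record's diff (role names and argument shapes are taken from those tests):
```
import syft as sy

@sy.func2protocol(roles=["alice", "bob"], args_shape={"alice": ((1,),)})
def protocol(alice, bob):
    # Creating the tensor through the Role's torch wrapper traces it
    # as an action on alice's Role.
    tensor = alice.torch.tensor([1])
    ptr = tensor.send(bob.worker)
    return ptr
```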
| 2020-05-13T16:54:30 |
|
OpenMined/PySyft | 3,525 | OpenMined__PySyft-3525 | [
"3464"
] | 7d4f6239ae273af3740422989cdf8c2319370734 | diff --git a/syft/workers/base.py b/syft/workers/base.py
--- a/syft/workers/base.py
+++ b/syft/workers/base.py
@@ -182,8 +182,6 @@ def __init__(
# storage object for crypto primitives
self.crypto_store = PrimitiveStorage(owner=self)
- # declare the plans used for crypto computations
- sy.frameworks.torch.mpc.fss.initialize_crypto_plans(self)
def register_obj(self, obj):
self.object_store.register_obj(self, obj)
| diff --git a/test/torch/nn/test_nn.py b/test/torch/nn/test_nn.py
--- a/test/torch/nn/test_nn.py
+++ b/test/torch/nn/test_nn.py
@@ -19,8 +19,8 @@ def test_nn_linear(workers):
y = model(x)
- assert len(alice.object_store._objects) == 10 # x, y, weight, bias
- assert len(bob.object_store._objects) == 10
+ assert len(alice.object_store._objects) == 4 # x, y, weight, bias
+ assert len(bob.object_store._objects) == 4
assert y.get().float_prec() == torch.tensor([[2.0]])
diff --git a/test/torch/pointers/test_pointer_tensor.py b/test/torch/pointers/test_pointer_tensor.py
--- a/test/torch/pointers/test_pointer_tensor.py
+++ b/test/torch/pointers/test_pointer_tensor.py
@@ -574,4 +574,4 @@ def test_iadd(workers):
b_pt += a_pt
- assert len(alice.object_store._objects) == 8
+ assert len(alice.object_store._objects) == 2
diff --git a/test/torch/tensors/test_additive_shared.py b/test/torch/tensors/test_additive_shared.py
--- a/test/torch/tensors/test_additive_shared.py
+++ b/test/torch/tensors/test_additive_shared.py
@@ -679,6 +679,8 @@ def test_eq(workers, protocol):
)
if protocol == "fss":
+ for worker in workers.values():
+ syft.frameworks.torch.mpc.fss.initialize_crypto_plans(worker)
me.crypto_store.provide_primitives(["fss_eq"], [alice, bob], n_instances=6)
args = (alice, bob)
@@ -710,6 +712,8 @@ def test_comp(workers, protocol):
)
if protocol == "fss":
+ for worker in workers.values():
+ syft.frameworks.torch.mpc.fss.initialize_crypto_plans(worker)
me.crypto_store.provide_primitives(
["xor_add_couple", "fss_eq", "fss_comp"], [alice, bob], n_instances=50
)
@@ -772,6 +776,8 @@ def test_max(workers, protocol):
)
if protocol == "fss":
+ for worker in workers.values():
+ syft.frameworks.torch.mpc.fss.initialize_crypto_plans(worker)
me.crypto_store.provide_primitives(
["xor_add_couple", "fss_eq", "fss_comp"], [alice, bob], n_instances=16
)
@@ -805,6 +811,8 @@ def test_argmax(workers, protocol):
)
if protocol == "fss":
+ for worker in workers.values():
+ syft.frameworks.torch.mpc.fss.initialize_crypto_plans(worker)
me.crypto_store.provide_primitives(
["xor_add_couple", "fss_eq", "fss_comp"], [alice, bob], n_instances=32
)
@@ -1044,8 +1052,8 @@ def test_garbage_collect_reconstruct(workers):
a_sh = a.encrypt(workers=[alice, bob], crypto_provider=james)
a_recon = a_sh.child.child.reconstruct()
- assert len(alice.object_store._objects) == 8
- assert len(bob.object_store._objects) == 8
+ assert len(alice.object_store._objects) == 2
+ assert len(bob.object_store._objects) == 2
def test_garbage_collect_move(workers):
@@ -1053,8 +1061,8 @@ def test_garbage_collect_move(workers):
a = torch.ones(1, 5).send(alice)
b = a.copy().move(bob)
- assert len(alice.object_store._objects) == 7
- assert len(bob.object_store._objects) == 7
+ assert len(alice.object_store._objects) == 1
+ assert len(bob.object_store._objects) == 1
def test_garbage_collect_mul(workers):
@@ -1068,5 +1076,5 @@ def test_garbage_collect_mul(workers):
for _ in range(3):
c = a * b
- assert len(alice.object_store._objects) == 9
- assert len(bob.object_store._objects) == 9
+ assert len(alice.object_store._objects) == 3
+ assert len(bob.object_store._objects) == 3
diff --git a/test/torch/tensors/test_autograd.py b/test/torch/tensors/test_autograd.py
--- a/test/torch/tensors/test_autograd.py
+++ b/test/torch/tensors/test_autograd.py
@@ -802,7 +802,7 @@ def forward(self, x):
alice, bob, crypto_provider=crypto_provider, requires_grad=True
)
opt = optim.SGD(params=model.parameters(), lr=0.1).fix_precision()
- num_objs = 17
+ num_objs = 11
prev_loss = float("inf")
for i in range(3):
preds = classifier(a)
| New PySyft workers are created with FSS `Plans` in their object storage
**Describe the bug**
When a new worker is created, it comes with some `Plans` already loaded into the object storage:
```
{67822474598: <Plan Plan id:67822474598 owner:bob Tags: #fss_eq_plan_1 built>,
42739784794: <Plan Plan id:42739784794 owner:bob Tags: #fss_eq_plan_2 built>,
98376427733: <Plan Plan id:98376427733 owner:bob Tags: #fss_comp_plan_1 built>,
48876976143: <Plan Plan id:48876976143 owner:bob Tags: #fss_comp_plan_2 built>,
22552634480: <Plan Plan id:22552634480 owner:bob Tags: #xor_add_1 built>,
96804360037: <Plan Plan id:96804360037 owner:bob Tags: #xor_add_2 built>}
```
**To Reproduce**
```
import torch
import syft
hook = syft.TorchHook(torch)
bob = syft.VirtualWorker(id="bob", hook=hook, is_client_worker=False)
bob._objects
```
**Expected behavior**
New workers should start with empty object stores.
**Additional context**
Probably a result of the recent Function Secret Sharing PR.
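With the fix, a quick check of the expected behavior (names follow the PR's test patch; `initialize_crypto_plans` must now be invoked explicitly when FSS primitives are needed):
```
import torch
import syft

hook = syft.TorchHook(torch)
bob = syft.VirtualWorker(id="bob", hook=hook, is_client_worker=False)
assert len(bob.object_store._objects) == 0  # the store now starts empty

# Opt in explicitly before running FSS-based computations:
syft.frameworks.torch.mpc.fss.initialize_crypto_plans(bob)
```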
| I confirm this bug as well | 2020-05-13T21:58:36 |
OpenMined/PySyft | 3,535 | OpenMined__PySyft-3535 | [
"3523"
] | 728e8158c85b1e4603c11efe7c24180d9f7a6f81 | diff --git a/syft/workers/base.py b/syft/workers/base.py
--- a/syft/workers/base.py
+++ b/syft/workers/base.py
@@ -15,7 +15,7 @@
from syft.execution.communication import CommunicationAction
from syft.generic.frameworks.hook import hook_args
from syft.generic.frameworks.remote import Remote
-from syft.generic.frameworks.types import FrameworkTensorType
+from syft.generic.frameworks.types import FrameworkTensorType, framework_packages
from syft.generic.frameworks.types import FrameworkTensor
from syft.generic.frameworks.types import FrameworkShape
from syft.generic.object_storage import ObjectStore
@@ -1273,3 +1273,12 @@ def force_detail(worker: AbstractWorker, worker_tuple: tuple) -> tuple:
worker.object_store.rm_obj(obj.id)
return result
+
+ @classmethod
+ def is_framework_supported(cls, framework: str) -> bool:
+ """
+ Returns True if framework is supported, else returns False.
+ :param framework: string
+ :return: True/False
+ """
+ return framework.lower() in framework_packages
| diff --git a/test/workers/test_base.py b/test/workers/test_base.py
--- a/test/workers/test_base.py
+++ b/test/workers/test_base.py
@@ -130,3 +130,10 @@ def test_send_command_not_whitelisted(hook, workers):
with pytest.raises(AttributeError):
getattr(attr, method_not_exist)
+
+
+def test_is_framework_supported(hook):
+ worker = sy.VirtualWorker(hook, id="worker")
+ assert worker.is_framework_supported("torch") == True
+ assert sy.VirtualWorker.is_framework_supported("torch") == True
+ assert worker.is_framework_supported("mock_framework") == False
| Method to check if support for framework exists
**Is your feature request related to a problem? Please describe.**
Have a method that checks whether a worker supports a specific framework.
**Describe the solution you'd like**
A method in the ```BaseWorker``` which receives a framework name and returns ```True``` or ```False``` depending on whether support for that framework exists.
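Usage as exercised by the PR's test (the method also works on the class itself, no instance required):
```
import torch
import syft as sy

hook = sy.TorchHook(torch)
worker = sy.VirtualWorker(hook, id="worker")
assert worker.is_framework_supported("torch")
assert sy.VirtualWorker.is_framework_supported("torch")
assert not worker.is_framework_supported("mock_framework")
```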
| 2020-05-14T20:26:07 |
|
OpenMined/PySyft | 3,582 | OpenMined__PySyft-3582 | [
"3577"
] | 501476ff01ae8f69ad12ddac4226291b4d52e46b | diff --git a/syft/grid/network.py b/syft/grid/network.py
--- a/syft/grid/network.py
+++ b/syft/grid/network.py
@@ -18,7 +18,7 @@
class Network(threading.Thread):
""" Grid Network class to operate in background processing grid requests
and handling multiple peer connections with different nodes.
-
+
"""
# Events called by the grid monitor to health checking and signaling webrtc connections.
@@ -31,7 +31,7 @@ class Network(threading.Thread):
def __init__(self, node_id: str, **kwargs):
""" Create a new thread to send/receive messages from the grid service.
-
+
Args:
node_id: ID used to identify this peer.
"""
@@ -59,8 +59,8 @@ def stop(self):
self._ws.shutdown()
def _update_node_infos(self, node_id: str):
- """ Create a new virtual worker to store/compute datasets owned by this peer.
-
+ """ Create a new virtual worker to store/compute datasets owned by this peer.
+
Args:
node_id: ID used to identify this peer.
"""
@@ -79,8 +79,8 @@ def _listen(self):
self._ws.send(json.dumps(response))
def _handle_messages(self, message):
- """ Route and process the messages received from the websocket connection.
-
+ """ Route and process the messages received from the websocket connection.
+
Args:
message : message to be processed.
"""
@@ -99,14 +99,11 @@ def id(self):
def connect(self, destination_id: str):
""" Create a webrtc connection between this peer and the destination peer by using the grid network
to forward the webrtc connection request protocol.
-
+
Args:
destination_id : Id used to identify the peer to be connected.
"""
- webrtc_request = {
- MSG_FIELD.TYPE: NODE_EVENTS.WEBRTC_SCOPE,
- MSG_FIELD.FROM: self.id,
- }
+ webrtc_request = {MSG_FIELD.TYPE: NODE_EVENTS.WEBRTC_SCOPE, MSG_FIELD.FROM: self.id}
forward_payload = {
MSG_FIELD.TYPE: GRID_EVENTS.FORWARD,
@@ -121,8 +118,8 @@ def connect(self, destination_id: str):
return self._connection_handler.get(destination_id)
def disconnect(self, destination_id: str):
- """ Disconnect with some peer connected previously.
-
+ """ Disconnect with some peer connected previously.
+
Args:
destination_id: Id used to identify the peer to be disconnected.
"""
@@ -132,7 +129,7 @@ def disconnect(self, destination_id: str):
def host_dataset(self, dataset):
""" Host dataset using the virtual worker defined previously.
-
+
Args:
dataset: Dataset to be hosted.
"""
@@ -147,10 +144,7 @@ def host_model(self, model):
def _join(self):
""" Send a join request to register this peer on the grid network. """
# Join into the network
- join_payload = {
- MSG_FIELD.TYPE: GRID_EVENTS.JOIN,
- MSG_FIELD.NODE_ID: self._worker.id,
- }
+ join_payload = {MSG_FIELD.TYPE: GRID_EVENTS.JOIN, MSG_FIELD.NODE_ID: self._worker.id}
self._ws.send(json.dumps(join_payload))
response = json.loads(self._ws.recv())
self.available = True
@@ -164,3 +158,10 @@ def __repr__(self):
list(self._worker.models.keys()),
list(self._connection_handler.nodes),
)
+
+ @property
+ def peers(self):
+ """
+ Get WebRTCManager object
+ """
+ return self._connection_handler
diff --git a/syft/grid/nodes_manager.py b/syft/grid/nodes_manager.py
--- a/syft/grid/nodes_manager.py
+++ b/syft/grid/nodes_manager.py
@@ -35,7 +35,7 @@ def process_answer(self, destination: str, content: str):
def process_offer(self, destination: str, content: str):
""" Create a thread to process a webrtc offer connection. """
self._connections[destination] = WebRTCConnection(
- self._grid, self.worker, destination, self._connections, WebRTCConnection.ANSWER,
+ self._grid, self.worker, destination, self._connections, WebRTCConnection.ANSWER
)
self._connections[destination].set_msg(content)
self._connections[destination].start()
@@ -43,6 +43,17 @@ def process_offer(self, destination: str, content: str):
def start_offer(self, destination: str):
""" Create a new thread to offer a webrtc connection. """
self._connections[destination] = WebRTCConnection(
- self._grid, self.worker, destination, self._connections, WebRTCConnection.OFFER,
+ self._grid, self.worker, destination, self._connections, WebRTCConnection.OFFER
)
self._connections[destination].start()
+
+ def __getitem__(self, key):
+ """
+ Args:
+ key: Node ID
+
+ Returns:
+ Return a peer connection reference by its ID.
+ """
+
+ return self.get(key)
| Overload __getitem__ to be able to index into connected nodes
**Is your feature request related to a problem? Please describe.**
When I call
```
me = sy.grid.register()
```
I want to be able to select connected nodes by just calling
```
me['another_node']
```
instead of having to use a private variable:
```
me._connection_handler.get("another_node")
```
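As merged, the indexing goes through the new `peers` property (a `WebRTCManager` whose `__getitem__` delegates to `get`); the node id below is a placeholder:
```
me = sy.grid.register()
me.connect("another_node")
node = me.peers["another_node"]  # same as me._connection_handler.get("another_node")
```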
| 2020-05-21T14:49:22 |
||
OpenMined/PySyft | 3,588 | OpenMined__PySyft-3588 | [
"3574"
] | d1e0b7c7649a1b639737fb05bdc4abd14e2c547a | diff --git a/syft/grid/__init__.py b/syft/grid/__init__.py
--- a/syft/grid/__init__.py
+++ b/syft/grid/__init__.py
@@ -1,13 +1,12 @@
from .network import Network
+import uuid
DEFAULT_NETWORK_URL = "ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com"
-def register(node_id: str, **kwargs):
+def register(**kwargs):
""" Add this process as a new peer registering it in the grid network.
- Args:
- node_id: Id used to identify this node.
Returns:
peer: Peer Network instance.
"""
@@ -16,6 +15,8 @@ def register(node_id: str, **kwargs):
else:
args = kwargs
- peer = Network(node_id, **args)
+ peer_id = str(uuid.uuid4())
+ peer = Network(peer_id, **args)
peer.start()
+
return peer
| Disable manual register() ids in syft.grid.register()
**Is your feature request related to a problem? Please describe.**
It is a security risk for people to specify their own IDs given that GridNetwork will let you connect to anyone whose id you already know. Thus, we should disable the ability for people to specify their own ID and replace it with a randomly generated hash.
This hash should be printed with clear instructions ("Send this to whomever you'd like to connect with") when register() is called.
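A sketch of the resulting API; per the patch, the id is generated internally with `uuid.uuid4()` and `register()` no longer accepts a `node_id` argument:
```
peer = sy.grid.register()
# internally: peer_id = str(uuid.uuid4()); peer = Network(peer_id, **args)
```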
| 2020-05-21T17:51:13 |
||
OpenMined/PySyft | 3,589 | OpenMined__PySyft-3589 | [
"3573"
] | 6e07e0d4710bf205328eb475ba39fc0648360c08 | diff --git a/syft/grid/__init__.py b/syft/grid/__init__.py
--- a/syft/grid/__init__.py
+++ b/syft/grid/__init__.py
@@ -1,4 +1,5 @@
from .network import Network
+import sys
import uuid
DEFAULT_NETWORK_URL = "ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com"
@@ -16,7 +17,32 @@ def register(**kwargs):
args = kwargs
peer_id = str(uuid.uuid4())
+ sys.stdout.write(
+ "Connecting to OpenGrid (" + "\033[94m" + DEFAULT_NETWORK_URL + "\033[0m" + ") ... "
+ )
peer = Network(peer_id, **args)
+
+ sys.stdout.write("\033[92m" + "OK" + "\033[0m" + "\n")
+ sys.stdout.write("Peer ID: " + peer_id + "\n")
+
+ sys.stdout.write(
+ "\033[93m" + "DISCLAIMER" + "\033[0m"
+ ":"
+ + "\033[1m"
+ + " OpenGrid is an experimental feature currently in alpha. Do not use this to protect real-world data.\n"
+ + "\033[0m"
+ )
+
+ sys.stdout.write("Where to get help: \n")
+ sys.stdout.write(
+ " - Join our slack (https://slack.openmined.org) and ask for help in the #lib_syft channel.\n"
+ )
+ sys.stdout.write(
+ " - File a Github Issue: https://github.com/OpenMined/PySyft and add the string '#opengrid' in the issue title.\n"
+ )
+ sys.stdout.write(
+ " - Want to join in our development team? Apply here: https://forms.gle/wcH1vxzvPyDSbSVW6\n"
+ )
peer.start()
return peer
| sy.grid.register() should print useful information
**Is your feature request related to a problem? Please describe.**
When registering a node on OpenGrid, we want to convey some information to the user using sys.stdout.write()
A few things we thought of adding:
- Information: connecting to opengrid...etc.
- Information: Can I connect to the main grid node... graceful error message if you can't.
- Disclaimer: OpenGrid is an experimental feature currently in alpha. Do not use this to protect real-world data.
- Where to get Help:
- Join our slack (slack.openmined.org) and ask for help in the #lib_syft channel.
- File a Github Issue: https://github.com/OpenMined/PySyft and add the string "#opengrid" in the issue title.
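For illustration, the merged patch writes output along these lines (ANSI color codes omitted; the peer id is a placeholder):
```
Connecting to OpenGrid (ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com) ... OK
Peer ID: <uuid>
DISCLAIMER: OpenGrid is an experimental feature currently in alpha. Do not use this to protect real-world data.
Where to get help:
 - Join our slack (https://slack.openmined.org) and ask for help in the #lib_syft channel.
 - File a Github Issue: https://github.com/OpenMined/PySyft and add the string '#opengrid' in the issue title.
 - Want to join in our development team? Apply here: https://forms.gle/wcH1vxzvPyDSbSVW6
```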
| It would also be great if this information included a "want to join development? Apply to join the PyGrid team!" and there's a link. | 2020-05-21T18:58:16 |
|
OpenMined/PySyft | 3,591 | OpenMined__PySyft-3591 | [
"3576"
] | f97e562e1a56c4b37752838490474471e63aa85f | diff --git a/syft/grid/__init__.py b/syft/grid/__init__.py
--- a/syft/grid/__init__.py
+++ b/syft/grid/__init__.py
@@ -4,13 +4,25 @@
DEFAULT_NETWORK_URL = "ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com"
+_registered_peer = None
+
def register(**kwargs):
""" Add this process as a new peer registering it in the grid network.
-
+
Returns:
peer: Peer Network instance.
"""
+ global _registered_peer
+
+ if isinstance(_registered_peer, Network):
+ sys.stdout.write(
+ "\033[93m" + "WARNING" + "\033[0m"
+ ":" + f" You are already a registered peer!\n{_registered_peer}\n"
+ )
+
+ return _registered_peer
+
try:
if not kwargs:
args = {"max_size": None, "timeout": 444, "url": DEFAULT_NETWORK_URL}
@@ -22,7 +34,7 @@ def register(**kwargs):
"Connecting to OpenGrid (" + "\033[94m" + args["url"] + "\033[0m" + ") ... "
)
- peer = Network(peer_id, **args)
+ _registered_peer = Network(peer_id, **args)
sys.stdout.write("\033[92m" + "OK" + "\033[0m" + "\n")
sys.stdout.write("Peer ID: " + peer_id + "\n")
@@ -45,8 +57,11 @@ def register(**kwargs):
sys.stdout.write(
" - Want to join in our development team? Apply here: https://forms.gle/wcH1vxzvPyDSbSVW6\n"
)
- peer.start()
- return peer
+
+ _registered_peer.start()
+
+ return _registered_peer
+
except Exception as e:
sys.stdout.write("\033[91m" + "FAIL" + "\033[0m" + "\n")
sys.stdout.write("You were not able to register your node.\n")
| Calling syft.grid.register() twice should throw an informative error
**Is your feature request related to a problem? Please describe.**
If you call syft.grid.register() twice in the same Python runtime, it should raise an error explaining that this isn't allowed and that you should restart the Python runtime and try again.
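Note the merged patch warns and returns the existing peer instead of raising; a sketch of that behavior:
```
peer = sy.grid.register()
again = sy.grid.register()  # prints "WARNING: You are already a registered peer!"
assert again is peer        # the existing Network instance is returned
```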
| 2020-05-21T22:46:57 |
||
OpenMined/PySyft | 3,592 | OpenMined__PySyft-3592 | [
"3569"
] | 7f0fb986456a8a9c5cf448b0cbc290e2c6ccb212 | diff --git a/syft/frameworks/torch/tensors/interpreters/native.py b/syft/frameworks/torch/tensors/interpreters/native.py
--- a/syft/frameworks/torch/tensors/interpreters/native.py
+++ b/syft/frameworks/torch/tensors/interpreters/native.py
@@ -786,13 +786,13 @@ def float_prec_(self):
float_precision_ = float_prec_
- def private_tensor(self, *args, allowed_users: Tuple[str], no_wrap: bool = False, **kwargs):
+ def private_tensor(self, *args, allowed_users: List[str], no_wrap: bool = False, **kwargs):
"""
Convert a tensor or syft tensor to private tensor
Args:
*args (tuple): args to transmit to the private tensor.
- allowed_users (tuple): Tuple of allowed users.
+ allowed_users (list): List of allowed users.
no_wrap (bool): if True, we don't add a wrapper on top of the private tensor
**kwargs (dict): kwargs to transmit to the private tensor
"""
@@ -804,7 +804,7 @@ def private_tensor(self, *args, allowed_users: Tuple[str], no_wrap: bool = False
self.child = (
syft.PrivateTensor(*args, **kwargs)
.on(self.child, wrap=False)
- .register_credentials(allowed_users)
+ .register_credentials(tuple(allowed_users))
)
if no_wrap:
return self.child
@@ -814,7 +814,7 @@ def private_tensor(self, *args, allowed_users: Tuple[str], no_wrap: bool = False
private_tensor = (
syft.PrivateTensor(*args, **kwargs)
.on(self, wrap=False)
- .register_credentials(allowed_users)
+ .register_credentials(tuple(allowed_users))
)
if not no_wrap:
private_tensor = private_tensor.wrap()
| diff --git a/test/serde/serde_helpers.py b/test/serde/serde_helpers.py
--- a/test/serde/serde_helpers.py
+++ b/test/serde/serde_helpers.py
@@ -1294,7 +1294,7 @@ def compare(detailed, original):
# syft.frameworks.torch.tensors.interpreters.private.PrivateTensor
def make_privatetensor(**kwargs):
t = torch.tensor([1, 2, 3])
- pt = t.private_tensor(allowed_users=("test",))
+ pt = t.private_tensor(allowed_users=["test"])
pt.tag("tag1")
pt.describe("private")
pt = pt.child
diff --git a/test/torch/tensors/test_private.py b/test/torch/tensors/test_private.py
--- a/test/torch/tensors/test_private.py
+++ b/test/torch/tensors/test_private.py
@@ -25,7 +25,7 @@ def test_native_private_tensor_method():
Test native's private_tensor method.
"""
x_tensor = torch.Tensor([1, 2, 3])
- private_x = x_tensor.private_tensor(allowed_users=("testing",))
+ private_x = x_tensor.private_tensor(allowed_users=["testing"])
assert isinstance(private_x, torch.Tensor)
assert isinstance(private_x.child, PrivateTensor)
assert isinstance(private_x.child.child, torch.Tensor)
@@ -52,7 +52,7 @@ def __eq__(self, other):
second_allowed_user = UserAuthMockup("second_user", "password")
unallowed_user = UserAuthMockup("example", "password")
- private_x = x_tensor.private_tensor(allowed_users=(allowed_user, second_allowed_user))
+ private_x = x_tensor.private_tensor(allowed_users=[allowed_user, second_allowed_user])
assert private_x.allow(allowed_user)
assert private_x.allow(second_allowed_user)
assert not private_x.allow(unallowed_user)
@@ -62,7 +62,7 @@ def test_send_method(workers):
bob = workers["bob"]
x_tensor = torch.tensor([4, 5, 6, 7, 8])
- private_x = x_tensor.private_tensor(allowed_users=("User",))
+ private_x = x_tensor.private_tensor(allowed_users=["User"])
# Try to call send() without credentials
with pytest.raises(SendNotPermittedError):
@@ -80,7 +80,7 @@ def test_get_method(workers):
bob = workers["bob"]
x_tensor = torch.Tensor([1, 2, 3])
- private_x = x_tensor.private_tensor(allowed_users=("User",))
+ private_x = x_tensor.private_tensor(allowed_users=["User"])
private_x_pointer = private_x.send(bob, user="User")
@@ -99,7 +99,7 @@ def test_get_method(workers):
def test_private_tensor_registration(hook):
with hook.local_worker.registration_enabled():
x = torch.tensor([1.0])
- private_x = x.private_tensor(allowed_users=("User",))
+ private_x = x.private_tensor(allowed_users=["User"])
assert hook.local_worker.get_obj(x.id) == x
@@ -108,7 +108,7 @@ def test_allowed_to_get():
x = torch.tensor([1, 2, 3, 4, 5, 6])
assert x.allow("User") # Public tensors always return true.
- private_x = x.private_tensor(allowed_users=("User",))
+ private_x = x.private_tensor(allowed_users=["User"])
assert private_x.allow("User") # It Returns true to previously registered user.
assert not private_x.allow("AnotherUser") # It Returns False to non previously registered user.
@@ -116,7 +116,7 @@ def test_allowed_to_get():
def test_add_method():
t = torch.tensor([0.1, 0.2, 0.3])
- x = t.private_tensor(allowed_users=("User",))
+ x = t.private_tensor(allowed_users=["User"])
y = x + x
@@ -141,7 +141,7 @@ def test_methods_for_linear_module(method, parameter):
fp_tensor = tensor.fix_precision()
# ADD Private Tensor at wrapper stack
- private_fp_tensor = fp_tensor.private_tensor(allowed_users=("User",)) # ADD Private Layer
+ private_fp_tensor = fp_tensor.private_tensor(allowed_users=["User"]) # ADD Private Layer
if method != "t":
fp_result = getattr(private_fp_tensor, method)(private_fp_tensor)
@@ -157,7 +157,7 @@ def test_torch_add():
x = torch.tensor([0.1, 0.2, 0.3]).fix_prec()
# ADD Private Tensor at wrapper stack
- x = x.private_tensor(allowed_users=("User",))
+ x = x.private_tensor(allowed_users=["User"])
y = torch.add(x, x)
@@ -176,8 +176,8 @@ def test_torch_add():
y = torch.tensor([0.4, -0.5, -0.6]).fix_prec()
# ADD Private Tensor at wrapper stack
- x = x.private_tensor(allowed_users=("UserCredential",))
- y = y.private_tensor(allowed_users=("UserCredential",))
+ x = x.private_tensor(allowed_users=["UserCredential"])
+ y = y.private_tensor(allowed_users=["UserCredential"])
z = torch.add(x, y)
z_fp = z.float_prec()
@@ -194,8 +194,8 @@ def test_torch_sub():
y = torch.tensor([0.1, 0.2, 0.3]).fix_prec()
# ADD Private Tensor at wrapper stack
- x = x.private_tensor(allowed_users=("User",))
- y = y.private_tensor(allowed_users=("User",))
+ x = x.private_tensor(allowed_users=["User"])
+ y = y.private_tensor(allowed_users=["User"])
z = torch.sub(x, y)
@@ -214,7 +214,7 @@ def test_torch_mul():
x = torch.tensor([2.113]).fix_prec(precision_fractional=2)
# ADD Private Tensor at wrapper stack
- x = x.private_tensor(allowed_users=("User",))
+ x = x.private_tensor(allowed_users=["User"])
y = torch.mul(x, x)
@@ -234,8 +234,8 @@ def test_torch_mul():
y = torch.tensor([-0.113]).fix_prec()
# ADD Private Tensor at wrapper stack
- x = x.private_tensor(allowed_users=("User",))
- y = y.private_tensor(allowed_users=("User",))
+ x = x.private_tensor(allowed_users=["User"])
+ y = y.private_tensor(allowed_users=["User"])
z = torch.mul(x, y)
@@ -251,7 +251,7 @@ def test_torch_mul():
x = torch.tensor([11.0]).fix_prec(field=2 ** 16, precision_fractional=2)
# ADD Private Tensor at wrapper stack
- x = x.private_tensor(allowed_users=("User",))
+ x = x.private_tensor(allowed_users=["User"])
y = torch.mul(x, x)
@@ -264,8 +264,8 @@ def test_torch_mul():
y = torch.tensor([-0.113]).fix_prec()
# ADD Private Tensor at wrapper stack
- x = x.private_tensor(allowed_users=("User",))
- y = y.private_tensor(allowed_users=("User",))
+ x = x.private_tensor(allowed_users=["User"])
+ y = y.private_tensor(allowed_users=["User"])
z = torch.mul(x, y + y)
z = z.float_prec()
@@ -278,7 +278,7 @@ def test_operate_with_integer_constants():
x_fp = x.fix_precision()
# PrivateTensor at wrapper stack.
- x_fp = x_fp.private_tensor(allowed_users=("User",))
+ x_fp = x_fp.private_tensor(allowed_users=["User"])
# ADD
r_fp = x_fp + 10
| .private_tensor() requires a tuple when it should ask for a list and cast it to a tuple
**Describe the new feature**
```
th.tensor([1,2,3,4]).private_tensor(allowed_users=("Ionesio",))
```
This is what I currently have to do to initialize a private tensor. However, the notation for a single-element tuple in Python is clunky because you have to write parentheses plus a trailing comma.
```
th.tensor([1,2,3,4]).private_tensor(allowed_users=["Ionesio"])
```
This is what I'd like to do instead; the list can be cast to a tuple in the backend.
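With this PR, exactly that works; the list is converted internally via `tuple(allowed_users)` before the credentials are registered:
```
import torch as th

x = th.tensor([1, 2, 3, 4]).private_tensor(allowed_users=["Ionesio"])
```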
| What is the reason it should be cast to a tuple? immutable? I tend to use sets whenever I need to use the `in` operator. like:
```
if user in allowed_users:
``` | 2020-05-22T01:10:17 |
OpenMined/PySyft | 3,596 | OpenMined__PySyft-3596 | [
"3570"
] | eb0b87da9747e921643366491f5f8e0aa2cbc4ce | diff --git a/syft/frameworks/torch/tensors/interpreters/native.py b/syft/frameworks/torch/tensors/interpreters/native.py
--- a/syft/frameworks/torch/tensors/interpreters/native.py
+++ b/syft/frameworks/torch/tensors/interpreters/native.py
@@ -802,7 +802,7 @@ def private_tensor(self, *args, allowed_users: List[str], no_wrap: bool = False,
if self.is_wrapper:
self.child = (
- syft.PrivateTensor(*args, **kwargs)
+ syft.PrivateTensor(tags=self.tags, *args, **kwargs)
.on(self.child, wrap=False)
.register_credentials(tuple(allowed_users))
)
@@ -812,7 +812,7 @@ def private_tensor(self, *args, allowed_users: List[str], no_wrap: bool = False,
return self
private_tensor = (
- syft.PrivateTensor(*args, **kwargs)
+ syft.PrivateTensor(tags=self.tags, *args, **kwargs)
.on(self, wrap=False)
.register_credentials(tuple(allowed_users))
)
diff --git a/syft/frameworks/torch/tensors/interpreters/private.py b/syft/frameworks/torch/tensors/interpreters/private.py
--- a/syft/frameworks/torch/tensors/interpreters/private.py
+++ b/syft/frameworks/torch/tensors/interpreters/private.py
@@ -57,16 +57,16 @@ def allow(self, user) -> bool:
"""
return user in self.allowed_users
- def register_credentials(self, users: Tuple[str]) -> "PrivateTensor":
+ def register_credentials(self, users: List[str]) -> "PrivateTensor":
""" Register a new user credential(s) into the list of allowed users to get this tensor.
Args:
- users (tuple): Credential(s) to be registered.
+ users (list): Credential(s) to be registered.
"""
if not hasattr(self, "allowed_users"):
self.allowed_users = tuple()
- self.allowed_users = self.allowed_users + users
+ self.allowed_users = self.allowed_users + tuple(users)
return self
diff --git a/syft/grid/network.py b/syft/grid/network.py
--- a/syft/grid/network.py
+++ b/syft/grid/network.py
@@ -2,6 +2,7 @@
import websocket
import json
from syft.codes import NODE_EVENTS, GRID_EVENTS, MSG_FIELD
+from syft.frameworks.torch.tensors.interpreters.private import PrivateTensor
from syft.grid.nodes_manager import WebRTCManager
from syft.grid.peer_events import (
_monitor,
@@ -146,7 +147,13 @@ def host_dataset(self, dataset):
Args:
dataset: Dataset to be hosted.
"""
- return dataset.send(self._worker)
+ allowed_users = None
+
+ # By default the peer should be allowed to access its own private tensors.
+ if dataset.is_wrapper and type(dataset.child) == PrivateTensor:
+ dataset.child.register_credentials([self._worker.id])
+
+ return dataset.send(self._worker, user=self._worker.id)
def host_model(self, model):
""" Host model using the virtual worker defined previously. """
| I want to be able to send private tensors to my own grid worker
**Is your feature request related to a problem? Please describe.**
```
me = sy.grid.register("Andrew")
b = th.tensor([1,2,3,4]).tag("#hello").private_tensor(allowed_users=("Ionesio","Andrew"))
me.host_dataset(b)
```
This raises an error when it should allow me to host data on my own node.
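With the fix in `Network.host_dataset`, the hosting peer's own worker id is registered as an allowed user before sending, so the snippet above succeeds; roughly:
```
me = sy.grid.register()
b = th.tensor([1, 2, 3, 4]).tag("#hello").private_tensor(allowed_users=["Ionesio"])
me.host_dataset(b)
# internally: b.child.register_credentials([me._worker.id]); b.send(me._worker, user=me._worker.id)
```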
| 2020-05-23T05:09:52 |
||
OpenMined/PySyft | 3,599 | OpenMined__PySyft-3599 | [
"3329"
] | ec520f52deb693c8501052903aad3ae1589aea1a | diff --git a/syft/frameworks/torch/tensors/interpreters/native.py b/syft/frameworks/torch/tensors/interpreters/native.py
--- a/syft/frameworks/torch/tensors/interpreters/native.py
+++ b/syft/frameworks/torch/tensors/interpreters/native.py
@@ -1,5 +1,6 @@
from typing import Union, List
import weakref
+import warnings
import torch
@@ -1033,14 +1034,12 @@ def encrypt(self, protocol="mpc", **kwargs):
"Encryption and Secure Multi-Party Computation"
)
- def decrypt(self, protocol="mpc", **kwargs):
+ def decrypt(self, **kwargs):
"""
This method will decrypt each value in the tensor using Multi Party
Computation (default) or Paillier Homomorphic Encryption
Args:
- protocol (str): Currently supports 'mpc' for Multi Party
- Computation and 'paillier' for Paillier Homomorphic Encryption
**kwargs:
With Respect to MPC accepts:
None
@@ -1049,19 +1048,23 @@ def decrypt(self, protocol="mpc", **kwargs):
private_key (phe.paillier.PaillierPrivateKey): Can be obtained using
```public_key, private_key = sy.frameworks.torch.he.paillier.keygen()```
Returns:
- An decrypted version of the Tensor following the protocol specified
+ A decrypted version of the Tensor following the protocol guessed from its type
Raises:
NotImplementedError: If protocols other than the ones mentioned above are queried
"""
- if protocol.lower() == "mpc":
+ protocol = kwargs.get("protocol", None)
+ if protocol:
+ warnings.warn("protocol should no longer be used in decrypt")
+
+ if isinstance(self.child, (syft.FixedPrecisionTensor, syft.AutogradTensor)):
x_encrypted = self.copy()
x_decrypted = x_encrypted.get().float_prec()
return x_decrypted
- elif protocol.lower() == "paillier":
+ elif isinstance(self.child, PaillierTensor):
# self.copy() not required as PaillierTensor's decrypt method is not inplace
private_key = kwargs.get("private_key")
return self.child.decrypt(private_key)
| diff --git a/test/torch/tensors/test_tensor.py b/test/torch/tensors/test_tensor.py
--- a/test/torch/tensors/test_tensor.py
+++ b/test/torch/tensors/test_tensor.py
@@ -7,3 +7,18 @@ def test_init():
tensor_extension = torch.Tensor()
assert tensor_extension.id is not None
assert tensor_extension.owner is not None
+
+
+def test_decrypt_mpc(workers):
+ alice = workers.get("alice")
+ bob = workers.get("bob")
+ cp = workers.get("charlie")
+ t = torch.tensor(73)
+ # without grad
+ t_encrypted = t.encrypt(protocol="mpc", workers=[alice, bob], crypto_provider=cp)
+ assert t_encrypted.decrypt() == t
+ # with grad
+ t_encrypted = t.encrypt(
+ protocol="mpc", workers=[alice, bob], crypto_provider=cp, requires_grad=True
+ )
+ assert t_encrypted.decrypt() == t
| Call decrypt() without specifying the protocol
**Is your feature request related to a problem? Please describe.**
When we call decrypt on a tensor, we shouldn't always have to provide the protocol if it can be inferred otherwise.
**Describe the solution you'd like**
I think that the current protocol that encrypted the tensor can be checked by looking at the child's type.
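That is what the PR implements: `decrypt()` now inspects `self.child` (a `FixedPrecisionTensor`/`AutogradTensor` implies MPC; a `PaillierTensor` implies Paillier). Usage from the added test, with `alice`, `bob` and a crypto provider `cp` being existing workers:
```
t = torch.tensor(73)
t_enc = t.encrypt(protocol="mpc", workers=[alice, bob], crypto_provider=cp)
assert t_enc.decrypt() == t  # no protocol argument needed anymore
```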
| I'll take this up!
It would be great if you can wait till the CKKSTensor is merged to avoid any merge conflict
Sure | 2020-05-23T13:07:44 |
OpenMined/PySyft | 3,659 | OpenMined__PySyft-3659 | [
"3543"
] | 2a7b8ec710a2377f78485030ec4008618e4df166 | diff --git a/syft/generic/frameworks/hook/hook_args.py b/syft/generic/frameworks/hook/hook_args.py
--- a/syft/generic/frameworks/hook/hook_args.py
+++ b/syft/generic/frameworks/hook/hook_args.py
@@ -216,7 +216,7 @@ def hook_response(attr, response, wrap_type, wrap_args={}, new_self=None):
"""
# inline methods should just return new_self
- if "__i" == attr[0:3]:
+ if "__i" == attr[0:3] and attr != "__iter__":
return new_self
# TODO: Why do we need to cast it in a tuple? this is a (small) time waste
diff --git a/syft/generic/object_storage.py b/syft/generic/object_storage.py
--- a/syft/generic/object_storage.py
+++ b/syft/generic/object_storage.py
@@ -161,3 +161,9 @@ def register_tags(self, obj):
for tag in obj.tags:
self._tag_to_object_ids[tag].add(obj.id)
+
+ def __len__(self):
+ """
+ Return the number of objects in the store
+ """
+ return len(self._objects)
diff --git a/syft/generic/pointers/pointer_tensor.py b/syft/generic/pointers/pointer_tensor.py
--- a/syft/generic/pointers/pointer_tensor.py
+++ b/syft/generic/pointers/pointer_tensor.py
@@ -407,6 +407,9 @@ def item(self) -> None:
def __eq__(self, other):
return self.eq(other)
+ def __iter__(self):
+ return (self[idx] for idx in range(self.shape[0]))
+
@staticmethod
def simplify(worker: AbstractWorker, ptr: "PointerTensor") -> tuple:
"""
| diff --git a/test/generic/pointers/test_pointer_tensor.py b/test/generic/pointers/test_pointer_tensor.py
--- a/test/generic/pointers/test_pointer_tensor.py
+++ b/test/generic/pointers/test_pointer_tensor.py
@@ -585,3 +585,36 @@ def test_inplace_ops_on_remote_long_tensor(workers):
p.get_()
assert p == torch.LongTensor([4])
+
+
+def test_iterable_pointer(workers):
+ alice = workers["alice"]
+
+ t = torch.Tensor([[1, 2], [4, 5], [7, 8]])
+
+ p = t.send(alice)
+
+ assert len(alice.object_store) == 1
+ for idx, tensor in enumerate(p):
+ assert len(alice.object_store) == 2
+ assert isinstance(tensor, PointerTensor)
+ assert torch.all(tensor.get() == t[idx])
+
+ assert len(alice.object_store) == 1
+
+ l = []
+ for idx, tensor in enumerate(p):
+ l.append(tensor)
+
+ assert len(alice.object_store) == 4
+
+ del l
+ del tensor
+
+ assert len(alice.object_store) == 1
+ for idx, tensor in enumerate(p[:, 1]):
+
+ # Should be 3 because p[:, 1] will create another tensor on alice side
+ assert len(alice.object_store) == 3
+ assert isinstance(tensor, PointerTensor)
+ assert torch.all(tensor.get() == t[:, 1][idx])
| Iterate over wrapper of tensor pointer failed
**Describe the bug**
I would like to iterate over a torch tensor that has been sent to a WebsocketClientWorker, but the pointer is not iterable.
**To Reproduce**
```
import torch
import syft as sy

hook = sy.TorchHook(torch)
bob = sy.VirtualWorker(hook, id="bob")
data = torch.tensor([[0,0],[0,1],[1,0],[1,1.]], requires_grad=True)
bob_data = data.send(bob)
for i in data:
    print(i)
for i in bob_data:
    print(i)
```
**Expected behavior**
It should list the tensors like the following:
> tensor([0., 0.], grad_fn=\<SelectBackward\>)
> tensor([0., 1.], grad_fn=\<SelectBackward\>)
> tensor([1., 0.], grad_fn=\<SelectBackward\>)
> tensor([1., 1.], grad_fn=\<SelectBackward\>)
**Screenshots**
However, it reports the following error:
> TypeError: iter() returned non-iterator of type 'Tensor'
**Desktop (please complete the following information):**
- OS: Windows 10 and Ubuntu 18.04
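For reference, the merged fix adds `PointerTensor.__iter__` (a generator of `self[idx]` over the first dimension), so iteration yields pointers that still have to be fetched; a sketch with `alice` as an existing worker:
```
t = torch.Tensor([[1, 2], [4, 5], [7, 8]])
p = t.send(alice)
for row_ptr in p:         # each item is a PointerTensor to one row on alice
    print(row_ptr.get())  # fetch the row locally
```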
| Hmm, looks like this happens because our tensor types like `PointerTensor` don't implement the iterable interface for Python classes. That might be a missing feature, but if we implemented it, I don't think it would have the behavior you're expecting, since the resulting tensors would still be on the worker `bob` and not local.
As written, it would probably print something like:
```
[PointerTensor | me:12345 -> bob:23456]
[PointerTensor | me:34567 -> bob:45678]
[PointerTensor | me:56789 -> bob:67890]
[PointerTensor | me:78901 -> bob:89012]
```
To get the output you expect above, you'd need to fetch the results from `bob` to your local worker with something like:
```
for i in bob_data:
    print(i.get())
```
@gmuraru, what do you think about this? Is this something we can do with the current implementation of `PointerTensor`? Are there things I'm not thinking of that would make this difficult?
Currently, we allow indexing. We can do a ```bob_data[0]``` and it will create a ```PointerTensor``` on our machine that will point to the data on ```bob```'s machine. (like you specified @karlhigley).
What we can do to implement the `for` loop: simply get the ```shape``` of the ```PointerTensor``` and then use the indexing functionality we already have.
To get a fast implementation:
```
shape = bob_data.shape
for i in range(shape[0]):
    print(bob_data[i].get())  # index remotely, then fetch
```
What can be a use-case for doing an iteration on a ```PointerTensor```? (you can replace the behavior with a ```get``` and then iterate over it)
Could we implement the iterable interface on `PointerTensor` to build that behavior in?
Sure. Started doing it :D | 2020-06-01T19:27:17 |
OpenMined/PySyft | 3,672 | OpenMined__PySyft-3672 | [
"3671"
] | 2aea65cfbf28f3d95ed940f3346e0be1912b961b | diff --git a/syft/frameworks/torch/he/fv/decryptor.py b/syft/frameworks/torch/he/fv/decryptor.py
--- a/syft/frameworks/torch/he/fv/decryptor.py
+++ b/syft/frameworks/torch/he/fv/decryptor.py
@@ -1,3 +1,4 @@
+import copy
from numpy.polynomial import polynomial as poly
@@ -32,7 +33,7 @@ def decrypt(self, encrypted):
"""
# Calculate [c0 + c1 * sk + c2 * sk^2 ...]_q
- temp_product_modq = self._mul_ct_sk(encrypted.data)
+ temp_product_modq = self._mul_ct_sk(copy.deepcopy(encrypted.data))
# Divide scaling variant using BEHZ FullRNS techniques
result = self._context.rns_tool.decrypt_scale_and_round(temp_product_modq)
| diff --git a/test/torch/tensors/test_fv.py b/test/torch/tensors/test_fv.py
--- a/test/torch/tensors/test_fv.py
+++ b/test/torch/tensors/test_fv.py
@@ -356,3 +356,20 @@ def test_fv_encryption_decrption_standard_seq_level(
encryptor = Encryptor(ctx, keys[1]) # keys[1] = public_key
decryptor = Decryptor(ctx, keys[0]) # keys[0] = secret_key
assert integer == encoder.decode(decryptor.decrypt(encryptor.encrypt(encoder.encode(integer))))
+
+
+def test_fv_encryption_decrption_without_changing_parameters():
+ ctx = Context(EncryptionParams(1024, CoeffModulus().create(1024, [30, 30]), 1024))
+ keys = KeyGenerator(ctx).keygen()
+ encoder = IntegerEncoder(ctx)
+ encryptor = Encryptor(ctx, keys[1]) # keys[1] = public_key
+ decryptor = Decryptor(ctx, keys[0]) # keys[0] = secret_key
+ values = [0, 1, -1, 100, -100, 1000]
+ for value in values:
+ # Checking simple encryption-decryption with same parameters.
+ assert value == encoder.decode(decryptor.decrypt(encryptor.encrypt(encoder.encode(value))))
+
+ # Checking the decryption of same ciphertext 3 times (checking for ciphertext deepcopy).
+ ct = encryptor.encrypt(encoder.encode(value))
+ for _ in range(3):
+ assert value == encoder.decode(decryptor.decrypt(ct))
| FV ciphertext data changes during decryption.
## Description
During decryption, the ciphertext was only shallow-copied, so the decryption routine mutated the ciphertext in place and its original value was lost; a check of the fixed behavior follows the reproduction steps below.
## How to Reproduce
1. Create a ciphertext
2. Decrypt that ciphertext
3. Retry to decrypt the same ciphertext (wrong result)
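The fix deep-copies the ciphertext inside `Decryptor.decrypt`; the PR's regression test boils down to this check (with `encoder`, `encryptor` and `decryptor` set up as in the tests):
```
ct = encryptor.encrypt(encoder.encode(100))
for _ in range(3):
    # ct is no longer mutated, so repeated decryption stays correct
    assert encoder.decode(decryptor.decrypt(ct)) == 100
```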
| 2020-06-05T17:53:18 |
|
OpenMined/PySyft | 3,674 | OpenMined__PySyft-3674 | [
"3656"
] | 27c6799a2bdde2a27023bf07e29c34f50ff33995 | diff --git a/syft/frameworks/torch/he/fv/evaluator.py b/syft/frameworks/torch/he/fv/evaluator.py
new file mode 100644
--- /dev/null
+++ b/syft/frameworks/torch/he/fv/evaluator.py
@@ -0,0 +1,87 @@
+import copy
+
+from syft.frameworks.torch.he.fv.util.operations import poly_add_mod
+from syft.frameworks.torch.he.fv.util.operations import multiply_add_plain_with_delta
+from syft.frameworks.torch.he.fv.ciphertext import CipherText
+from syft.frameworks.torch.he.fv.plaintext import PlainText
+
+
+class Evaluator:
+ def __init__(self, context):
+ self.context = context
+ self.coeff_modulus = context.param.coeff_modulus
+ self.plain_modulus = context.param.plain_modulus
+
+ def add(self, op1, op2):
+ """Adds two operands using FV scheme.
+
+ Args:
+ op1 (Ciphertext/Plaintext): First argument.
+ op2 (Ciphertext/Plaintext): Second argument.
+
+ Returns:
+ If both arguments are Plaintext elements then the result will be a Plaintext object
+ otherwise a Ciphertext object with value equivalent to the result of addition
+ operation of two provided arguments.
+ """
+ if isinstance(op1, CipherText) and isinstance(op2, CipherText):
+ return self._add_cipher_cipher(op1, op2)
+
+ elif isinstance(op1, PlainText) and isinstance(op2, PlainText):
+ return self._add_plain_plain(op1, op2)
+
+ elif isinstance(op1, PlainText) and isinstance(op2, CipherText):
+ return self._add_plain_cipher(op1, op2)
+
+ elif isinstance(op1, CipherText) and isinstance(op2, PlainText):
+ return self._add_plain_cipher(op2, op1)
+
+ else:
+ raise TypeError(f"Addition Operation not supported between {type(op1)} and {type(op2)}")
+
+ def _add_cipher_cipher(self, ct1, ct2):
+ """Adds two ciphertexts.
+
+ Args:
+ ct1 (Ciphertext): First argument.
+ ct2 (Ciphertext): Second argument.
+
+ Returns:
+ A Ciphertext object with value equivalent to result of addition of two provided
+ arguments.
+ """
+ ct1, ct2 = copy.deepcopy(ct1.data), copy.deepcopy(ct2.data)
+ result = ct2 if len(ct2) > len(ct1) else ct1
+
+ for i in range(min(len(ct1), len(ct2))):
+ for j in range(len(self.coeff_modulus)):
+ result[i][j] = poly_add_mod(ct1[i][j], ct2[i][j], self.coeff_modulus[j])
+
+ return CipherText(result)
+
+ def _add_plain_cipher(self, pt, ct):
+ """Adds a ciphertext and a plaintext.
+
+ Args:
+ pt (Plaintext): First argument.
+ ct (Ciphertext): Second argument.
+ Returns:
+ A Ciphertext object with value equivalent to result of addition of two provided
+ arguments.
+ """
+ ct = copy.deepcopy(ct)
+ return multiply_add_plain_with_delta(ct, pt, self.context)
+
+ def _add_plain_plain(self, pt1, pt2):
+ """Adds two plaintext objects.
+
+ Args:
+ pt1 (Plaintext): First argument.
+ pt2 (Plaintext): Second argument.
+
+ Returns:
+ A Plaintext object with value equivalent to result of addition of two provided
+ arguments.
+ """
+ pt1, pt2 = copy.deepcopy(pt1), copy.deepcopy(pt2)
+ return PlainText(poly_add_mod(pt1.data, pt2.data, self.plain_modulus))
diff --git a/syft/frameworks/torch/he/fv/util/operations.py b/syft/frameworks/torch/he/fv/util/operations.py
--- a/syft/frameworks/torch/he/fv/util/operations.py
+++ b/syft/frameworks/torch/he/fv/util/operations.py
@@ -60,6 +60,15 @@ def invert_mod(value, modulus):
def poly_add_mod(op1, op2, modulus):
"""return addition of two polynomials with all coefficients of
polynomial %q(coefficient modulus)"""
+
+ # For non-same-size polynomials we have to shift the polynomials because numpy considers the right
+ # side as the lower order of the polynomial and we consider the right side as the higher order.
+ if len(op1) != len(op2):
+ if len(op1) > len(op2):
+ op2 = op2 + [0] * (len(op1) - len(op2))
+ else:
+ op1 = op1 + [0] * (len(op2) - len(op1))
+
return np.mod(np.polyadd(op1, op2), modulus).tolist()
@@ -152,10 +161,10 @@ def xgcd(x, y):
def multiply_add_plain_with_delta(phase, message, context):
- """Add message (PlainText) into phase.
+ """Add message into phase.
Args:
- phase: phase is pre-computed carrier polynomial where we can add message data.
+ phase (Ciphertext): phase is pre-computed carrier polynomial where we can add message data.
message (Plaintext): A plaintext representation of integer data to be encrypted.
context (Context): Context for extracting encryption parameters.
| diff --git a/test/torch/tensors/test_fv.py b/test/torch/tensors/test_fv.py
--- a/test/torch/tensors/test_fv.py
+++ b/test/torch/tensors/test_fv.py
@@ -19,6 +19,7 @@
from syft.frameworks.torch.he.fv.util.operations import reverse_bit
from syft.frameworks.torch.he.fv.encryptor import Encryptor
from syft.frameworks.torch.he.fv.decryptor import Decryptor
+from syft.frameworks.torch.he.fv.evaluator import Evaluator
@pytest.mark.parametrize(
@@ -373,3 +374,63 @@ def test_fv_encryption_decrption_without_changing_parameters():
ct = encryptor.encrypt(encoder.encode(value))
for _ in range(3):
assert value == encoder.decode(decryptor.decrypt(ct))
+
+
[email protected](
+ "int1, int2", [(0, 0), (-1, 1), (100, -10), (1000, 100), (-1000, 100), (-100, -100)]
+)
+def test_fv_add_cipher_cipher(int1, int2):
+ ctx = Context(EncryptionParams(1024, CoeffModulus().create(1024, [30, 30]), 1024))
+ keys = KeyGenerator(ctx).keygen()
+ encoder = IntegerEncoder(ctx)
+ encryptor = Encryptor(ctx, keys[1]) # keys[1] = public_key
+ decryptor = Decryptor(ctx, keys[0]) # keys[0] = secret_key
+ evaluator = Evaluator(ctx)
+
+ op1 = encryptor.encrypt(encoder.encode(int1))
+ op2 = encryptor.encrypt(encoder.encode(int2))
+ assert (
+ int1 + int2
+ == encoder.decode(decryptor.decrypt(evaluator._add_cipher_cipher(op1, op2)))
+ == encoder.decode(decryptor.decrypt(evaluator.add(op1, op2)))
+ == encoder.decode(decryptor.decrypt(evaluator.add(op2, op1)))
+ )
+
+
[email protected](
+ "int1, int2", [(0, 0), (-1, 1), (100, -10), (1000, 100), (-1000, 100), (-100, -100)]
+)
+def test_fv_add_plain_cipher(int1, int2):
+ ctx = Context(EncryptionParams(1024, CoeffModulus().create(1024, [30, 30]), 1024))
+ keys = KeyGenerator(ctx).keygen()
+ encoder = IntegerEncoder(ctx)
+ encryptor = Encryptor(ctx, keys[1]) # keys[1] = public_key
+ decryptor = Decryptor(ctx, keys[0]) # keys[0] = secret_key
+ evaluator = Evaluator(ctx)
+
+ op1 = encoder.encode(int1)
+ op2 = encryptor.encrypt(encoder.encode(int2))
+
+ assert (
+ int1 + int2
+ == encoder.decode(decryptor.decrypt(evaluator._add_plain_cipher(op1, op2)))
+ == encoder.decode(decryptor.decrypt(evaluator.add(op1, op2)))
+ == encoder.decode(decryptor.decrypt(evaluator.add(op2, op1)))
+ )
+
+
[email protected](
+ "int1, int2", [(0, 0), (-1, 1), (100, -10), (1000, 100), (-1000, 100), (-100, -100)]
+)
+def test_fv_add_plain_plain(int1, int2):
+ ctx = Context(EncryptionParams(1024, CoeffModulus().create(1024, [30, 30]), 1024))
+ encoder = IntegerEncoder(ctx)
+ evaluator = Evaluator(ctx)
+ op1 = encoder.encode(int1)
+ op2 = encoder.encode(int2)
+ assert (
+ int1 + int2
+ == encoder.decode(evaluator._add_plain_plain(op1, op2))
+ == encoder.decode(evaluator.add(op1, op2))
+ == encoder.decode(evaluator.add(op2, op1))
+ )
| Implement Addition operation for FV HE Scheme
## Feature Description
The addition operations of the FV scheme need to be implemented:
1. **Add** two ciphertext objects.
2. **Add** a ciphertext and a plaintext object.
In both cases, the result should be returned as a ciphertext object.
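Usage of the new `Evaluator.add`, following the PR's tests (imports as in that test file):
```
ctx = Context(EncryptionParams(1024, CoeffModulus().create(1024, [30, 30]), 1024))
keys = KeyGenerator(ctx).keygen()
encoder = IntegerEncoder(ctx)
encryptor = Encryptor(ctx, keys[1])  # public key
decryptor = Decryptor(ctx, keys[0])  # secret key

evaluator = Evaluator(ctx)
ct = encryptor.encrypt(encoder.encode(100))
pt = encoder.encode(-10)
result = evaluator.add(ct, pt)  # cipher + plain -> cipher; argument order is symmetric
assert encoder.decode(decryptor.decrypt(result)) == 90
```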
| 2020-06-06T11:30:56 |
|
OpenMined/PySyft | 3,684 | OpenMined__PySyft-3684 | [
"3683"
] | 9713bedf20530f70e4d5a0be005643ec45c66fec | diff --git a/syft/frameworks/torch/he/fv/util/operations.py b/syft/frameworks/torch/he/fv/util/operations.py
--- a/syft/frameworks/torch/he/fv/util/operations.py
+++ b/syft/frameworks/torch/he/fv/util/operations.py
@@ -66,6 +66,15 @@ def poly_add_mod(op1, op2, modulus):
def poly_mul_mod(op1, op2, modulus):
"""return multiplication of two polynomials with all coefficients of
polynomial %q(coefficient modulus) and result polynomial % t(polynomial modulus)"""
+
+ # For non-same-size polynomials we have to shift the polynomials because numpy considers the right
+ # side as the lower order of the polynomial and we consider the right side as the higher order.
+ if len(op1) != len(op2):
+ if len(op1) > len(op2):
+ op2 = op2 + [0] * (len(op1) - len(op2))
+ else:
+ op1 = op1 + [0] * (len(op2) - len(op1))
+
poly_mod = np.array([1] + [0] * (len(op1) - 1) + [1])
result = (
poly.polydiv(
| diff --git a/test/torch/tensors/test_fv.py b/test/torch/tensors/test_fv.py
--- a/test/torch/tensors/test_fv.py
+++ b/test/torch/tensors/test_fv.py
@@ -125,6 +125,8 @@ def test_CoeffModulus_bfv_default(poly_modulus, SeqLevelType, result):
([0, 0], [0, 0], 3, [0, 0]),
([1, 2, 3, 4], [2, 3, 4, 5], 3, [0, 2, 1, 0]),
([1, 2, 3, 4], [2, 3, 4, 5], 1, [0, 0, 0, 0]),
+ ([1, 2, 3, 4, 5], [1, -4], 3, [1, 2, 0, 2, 1]),
+ ([4, 4], [-4, -4, -4, -4], 4, [0, 0, 0, 0]),
],
)
def test_poly_add_mod(op1, op2, mod, result):
@@ -133,7 +135,12 @@ def test_poly_add_mod(op1, op2, mod, result):
@pytest.mark.parametrize(
"op1, op2, mod, result",
- [([1, 1], [2, 1], 5, [1, 3]), ([1, 2, 3, 4], [2, 3, 4, 5], 5, [3, 1, 1])],
+ [
+ ([1, 1], [2, 1], 5, [1, 3]),
+ ([1, 2, 3, 4], [2, 3, 4, 5], 5, [3, 1, 1]),
+ ([1, 2, 3, 4, 5], [1, -4], 3, [0, 1, 1, 1, 1]),
+ ([4, 4], [-4, -4, -4, -4], 4, [0]),
+ ],
)
def test_poly_mul_mod(op1, op2, mod, result):
print("test poly_mul_mod : ", poly_mul_mod(op1, op2, mod))
| Update polynomial operations for non-same-size polynomials.
## Description
Fix `poly_mul_mod` for two polynomial arguments of different sizes.
The polynomial operations are currently performed with the wrong orientation; fixed results are shown after the reproduction steps below.
## How to Reproduce
1. Apply `poly_mul_mod` to two polynomials of different sizes.
2. The result is incorrect.
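After the fix, the shorter operand is zero-padded on the right (the higher-order side in this representation); two cases from the PR's tests:
```
from syft.frameworks.torch.he.fv.util.operations import poly_add_mod, poly_mul_mod

assert poly_mul_mod([1, 2, 3, 4, 5], [1, -4], 3) == [0, 1, 1, 1, 1]
assert poly_add_mod([1, 2, 3, 4, 5], [1, -4], 3) == [1, 2, 0, 2, 1]
```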
| 2020-06-08T16:52:16 |
|
OpenMined/PySyft | 3,708 | OpenMined__PySyft-3708 | [
"3687"
] | eb2e9147aaab76e43b23d7846065f53f4a485492 | diff --git a/syft/frameworks/crypten/__init__.py b/syft/frameworks/crypten/__init__.py
--- a/syft/frameworks/crypten/__init__.py
+++ b/syft/frameworks/crypten/__init__.py
@@ -3,10 +3,47 @@
import crypten.communicator as comm
import crypten
+from syft.workers.base import BaseWorker
+
+
+RANK_TO_WORKER_ID = {
+ # Contains translation dictionaries for every computation.
+ # cid (computation id): {rank_to_worker_id dictionary for a specific computation}
+}
+CID = None
+
+
+def get_worker_from_rank(rank: int, cid: int = None) -> BaseWorker:
+ """Find the worker running CrypTen party with specific rank in a certain computation.
+
+ Args:
+ rank: rank of the CrypTen party.
+ cid: CrypTen computation id.
+
+ Returns:
+ BaseWorker corresponding to cid and rank.
+ """
+ if cid is None:
+ if CID is None:
+ # Neither CID have been set appropriately nor cid have been passed
+ raise ValueError("cid must be set.")
+ cid = CID
+
+ rank_to_worker_id = RANK_TO_WORKER_ID.get(cid, None)
+ if rank_to_worker_id is None:
+ raise RuntimeError(
+ "CrypTen computation not initiated properly, computation_id doesn't match any rank-to-"
+ "worker_id translation table"
+ )
+ return syft.local_worker._get_worker_based_on_id(rank_to_worker_id[rank])
+
def load(tag: str, src: int, **kwargs):
if src == comm.get().get_rank():
- worker = syft.local_worker.get_worker_from_rank(src)
+ if CID is None:
+ raise RuntimeError("CrypTen computation id is not set.")
+
+ worker = get_worker_from_rank(src)
results = worker.search(tag)
# Make sure there is only one result
diff --git a/syft/frameworks/crypten/context.py b/syft/frameworks/crypten/context.py
--- a/syft/frameworks/crypten/context.py
+++ b/syft/frameworks/crypten/context.py
@@ -13,8 +13,12 @@
def _launch(
- func, rank, world_size, master_addr, master_port, queue, func_args, func_kwargs
+ cid, func, rank, world_size, master_addr, master_port, queue, func_args, func_kwargs
): # pragma: no cover
+
+ # set CrypTen computation id
+ sy.frameworks.crypten.CID = cid
+
communicator_args = {
"RANK": rank,
"WORLD_SIZE": world_size,
@@ -34,19 +38,20 @@ def _launch(
queue.put(return_value)
-def _new_party(func, rank, world_size, master_addr, master_port, func_args, func_kwargs):
+def _new_party(cid, func, rank, world_size, master_addr, master_port, func_args, func_kwargs):
queue = multiprocessing.Queue()
process = multiprocessing.Process(
target=_launch,
- args=(func, rank, world_size, master_addr, master_port, queue, func_args, func_kwargs),
+ args=(cid, func, rank, world_size, master_addr, master_port, queue, func_args, func_kwargs),
)
return process, queue
-def run_party(func, rank, world_size, master_addr, master_port, func_args, func_kwargs):
+def run_party(cid, func, rank, world_size, master_addr, master_port, func_args, func_kwargs):
"""Start crypten party localy and run computation.
Args:
+ cid (int): CrypTen computation id.
func (function): computation to be done.
rank (int): rank of the crypten party.
world_size (int): number of crypten parties involved in the computation.
@@ -60,7 +65,7 @@ def run_party(func, rank, world_size, master_addr, master_port, func_args, func_
"""
process, queue = _new_party(
- func, rank, world_size, master_addr, master_port, func_args, func_kwargs
+ cid, func, rank, world_size, master_addr, master_port, func_args, func_kwargs
)
was_initialized = DistributedCommunicator.is_initialized()
if was_initialized:
@@ -130,8 +135,6 @@ def wrapper(*args, **kwargs):
rank_to_worker_id = dict(zip(range(0, len(workers)), [worker.id for worker in workers]))
- sy.local_worker.rank_to_worker_id = rank_to_worker_id
-
# TODO: run ttp in a specified worker
# if crypten.mpc.ttp_required():
# ttp_process, _ = _new_party(
@@ -189,8 +192,6 @@ def wrapper(*args, **kwargs):
for thread in threads:
thread.join()
- del sy.local_worker.rank_to_worker_id
-
return return_values
return wrapper
diff --git a/syft/frameworks/crypten/message_handler.py b/syft/frameworks/crypten/message_handler.py
--- a/syft/frameworks/crypten/message_handler.py
+++ b/syft/frameworks/crypten/message_handler.py
@@ -1,28 +1,21 @@
-import types
+import syft
from syft.messaging.message import CryptenInitPlan
from syft.messaging.message import CryptenInitJail
from syft.messaging.message import ObjectMessage
+from syft.frameworks import crypten as syft_crypten
from syft.frameworks.crypten.context import run_party
-
from syft.frameworks.crypten.jail import JailRunner
from syft.frameworks.crypten import utils
-from syft.workers.base import BaseWorker
from syft.generic.abstract.message_handler import AbstractMessageHandler
-def get_worker_from_rank(worker: BaseWorker, rank: int) -> BaseWorker:
- assert hasattr(worker, "rank_to_worker_id"), "First need to call run_crypten_party"
- return worker._get_worker_based_on_id(worker.rank_to_worker_id[rank])
-
-
class CryptenMessageHandler(AbstractMessageHandler):
def __init__(self, object_store, worker):
super().__init__(object_store)
self.worker = worker
- setattr(worker, "get_worker_from_rank", types.MethodType(get_worker_from_rank, worker))
def init_routing_table(self):
return {
@@ -41,7 +34,10 @@ def run_crypten_party_plan(self, msg: CryptenInitPlan) -> ObjectMessage:
An ObjectMessage containing the return value of the crypten function computed.
"""
- self.worker.rank_to_worker_id, world_size, master_addr, master_port = msg.crypten_context
+ rank_to_worker_id, world_size, master_addr, master_port = msg.crypten_context
+
+ cid = syft.ID_PROVIDER.pop()
+ syft_crypten.RANK_TO_WORKER_ID[cid] = rank_to_worker_id
# TODO Change this, we need a way to handle multiple plan definitions
plans = self.worker.search("crypten_plan")
@@ -49,10 +45,13 @@ def run_crypten_party_plan(self, msg: CryptenInitPlan) -> ObjectMessage:
plan = plans[0].get()
- rank = self._current_rank()
+ rank = self._current_rank(rank_to_worker_id)
assert rank is not None
- return_value = run_party(plan, rank, world_size, master_addr, master_port, (), {})
+ return_value = run_party(cid, plan, rank, world_size, master_addr, master_port, (), {})
+ # remove rank-to-id translation dict
+ del syft_crypten.RANK_TO_WORKER_ID[cid]
+
return ObjectMessage(return_value)
def run_crypten_party_jail(self, msg: CryptenInitJail):
@@ -66,23 +65,31 @@ def run_crypten_party_jail(self, msg: CryptenInitJail):
An ObjectMessage containing the return value of the crypten function computed.
"""
- self.worker.rank_to_worker_id, world_size, master_addr, master_port = msg.crypten_context
+ rank_to_worker_id, world_size, master_addr, master_port = msg.crypten_context
+
+ cid = syft.ID_PROVIDER.pop()
+ syft_crypten.RANK_TO_WORKER_ID[cid] = rank_to_worker_id
ser_func = msg.jail_runner
onnx_model = msg.model
crypten_model = None if onnx_model is None else utils.onnx_to_crypten(onnx_model)
jail_runner = JailRunner.detail(ser_func, model=crypten_model)
- rank = self._current_rank()
+ rank = self._current_rank(rank_to_worker_id)
assert rank is not None
- return_value = run_party(jail_runner, rank, world_size, master_addr, master_port, (), {})
+ return_value = run_party(
+ cid, jail_runner, rank, world_size, master_addr, master_port, (), {}
+ )
+ # remove rank-to-id translation dict
+ del syft_crypten.RANK_TO_WORKER_ID[cid]
+
return ObjectMessage(return_value)
- def _current_rank(self):
+ def _current_rank(self, rank_to_worker_id):
"""Returns current rank based on worker_id."""
rank = None
- for r, worker_id in self.worker.rank_to_worker_id.items():
+ for r, worker_id in rank_to_worker_id.items():
if worker_id == self.worker.id:
rank = r
break
| diff --git a/test/crypten/test_context.py b/test/crypten/test_context.py
--- a/test/crypten/test_context.py
+++ b/test/crypten/test_context.py
@@ -186,7 +186,7 @@ def party(): # pragma: no cover
t = crypten.cryptensor(expected)
return t.get_plain_text()
- t = run_party(party, 0, 1, "127.0.0.1", 15463, (), {})
+ t = run_party(None, party, 0, 1, "127.0.0.1", 15463, (), {})
result = utils.unpack_values(t)
assert result == expected
| Rank to Worker_ID translation should be linked to a specific CrypTen computation
## Description
Server workers are intended to run multiple queries from clients in parallel, so starting multiple CrypTen computations on the same worker at a time should be possible. However, due to the current way of storing the rank-to-worker-id translation, starting a new computation will overwrite the old translation dictionary and lead to undefined behavior.
## How to Reproduce
1. Start a server worker supporting CrypTen messages
2. Start two consecutive and long CrypTen computations with different workers involved
## Expected Behavior
Running parallel CrypTen computations should work normally, as if they were run sequentially.
## Additional Context
Also see #3516
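For reference, a minimal sketch of the direction the patch above takes, keying the translation by a per-computation id instead of storing it on the worker (`rank_to_worker_id`, `plan`, `rank`, and the address arguments are assumed to be in scope):
```python
# Sketch: per-computation rank translation, mirroring the patch above.
import syft
from syft.frameworks import crypten as syft_crypten
from syft.frameworks.crypten.context import run_party

cid = syft.ID_PROVIDER.pop()                             # unique computation id
syft_crypten.RANK_TO_WORKER_ID[cid] = rank_to_worker_id  # no shared state to clobber
try:
    result = run_party(cid, plan, rank, world_size, master_addr, master_port, (), {})
finally:
    del syft_crypten.RANK_TO_WORKER_ID[cid]              # drop this run's translation
```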
| 2020-06-15T09:11:51 |
|
OpenMined/PySyft | 3,759 | OpenMined__PySyft-3759 | [
"3701"
] | 3dac9dda90deae86406dba97d3186d08a4c0f753 | diff --git a/syft/frameworks/torch/he/fv/evaluator.py b/syft/frameworks/torch/he/fv/evaluator.py
--- a/syft/frameworks/torch/he/fv/evaluator.py
+++ b/syft/frameworks/torch/he/fv/evaluator.py
@@ -1,6 +1,7 @@
import copy
from syft.frameworks.torch.he.fv.util.operations import poly_add_mod
+from syft.frameworks.torch.he.fv.util.operations import negate_mod
from syft.frameworks.torch.he.fv.util.operations import multiply_add_plain_with_delta
from syft.frameworks.torch.he.fv.ciphertext import CipherText
from syft.frameworks.torch.he.fv.plaintext import PlainText
@@ -39,6 +40,24 @@ def add(self, op1, op2):
else:
raise TypeError(f"Addition Operation not supported between {type(op1)} and {type(op2)}")
+ def negate(self, ct):
+ """Negate a cipher i.e -(ct_value)
+
+ Args:
+ ct (Ciphertext): Ciphertext to be negated.
+
+ Returns:
+ A Ciphertext object with value equivalent to result of -(ct_value).
+ """
+ result = copy.deepcopy(ct.data)
+
+ for i in range(len(result)):
+ for j in range(len(result[i])):
+ for k in range(len(result[i][j])):
+ result[i][j][k] = negate_mod(ct.data[i][j][k], self.coeff_modulus[j])
+
+ return CipherText(result)
+
def _add_cipher_cipher(self, ct1, ct2):
"""Adds two ciphertexts.
| diff --git a/test/torch/tensors/test_fv.py b/test/torch/tensors/test_fv.py
--- a/test/torch/tensors/test_fv.py
+++ b/test/torch/tensors/test_fv.py
@@ -441,3 +441,15 @@ def test_fv_add_plain_plain(int1, int2):
== encoder.decode(evaluator.add(op1, op2))
== encoder.decode(evaluator.add(op2, op1))
)
+
+
[email protected]("val", [(0), (-1), (100), (-1000), (-123), (99), (0xFFF), (0xFFFFFF)])
+def test_fv_negate_cipher(val):
+ ctx = Context(EncryptionParams(1024, CoeffModulus().create(1024, [30, 30]), 1024))
+ keys = KeyGenerator(ctx).keygen()
+ encoder = IntegerEncoder(ctx)
+ evaluator = Evaluator(ctx)
+ encryptor = Encryptor(ctx, keys[1]) # keys[1] = public_key
+ decryptor = Decryptor(ctx, keys[0]) # keys[0] = secret_key
+ op = encryptor.encrypt(encoder.encode(val))
+ assert -val == encoder.decode(decryptor.decrypt(evaluator.negate(op)))
| Implement Negation operation for FV HE Scheme
## Feature Description
The negation operation of the FV scheme needs to be implemented.
It should negate a ciphertext object and return the result in ciphertext form.
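A minimal usage sketch, mirroring the test above (the FV imports from `syft.frameworks.torch.he.fv` are omitted):
```python
# Sketch: encrypt a value, negate it homomorphically, then decrypt.
ctx = Context(EncryptionParams(1024, CoeffModulus().create(1024, [30, 30]), 1024))
secret_key, public_key = KeyGenerator(ctx).keygen()
encoder = IntegerEncoder(ctx)
ct = Encryptor(ctx, public_key).encrypt(encoder.encode(100))
neg_ct = Evaluator(ctx).negate(ct)  # encrypts -100
assert encoder.decode(Decryptor(ctx, secret_key).decrypt(neg_ct)) == -100
```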
| 2020-06-21T05:15:21 |
|
OpenMined/PySyft | 3,775 | OpenMined__PySyft-3775 | [
"3702"
] | 5ad0534f99f10d67c267ecf4b66b4871388ea773 | diff --git a/syft/frameworks/torch/he/fv/evaluator.py b/syft/frameworks/torch/he/fv/evaluator.py
--- a/syft/frameworks/torch/he/fv/evaluator.py
+++ b/syft/frameworks/torch/he/fv/evaluator.py
@@ -1,12 +1,39 @@
import copy
+from enum import Enum
from syft.frameworks.torch.he.fv.util.operations import poly_add_mod
from syft.frameworks.torch.he.fv.util.operations import negate_mod
+from syft.frameworks.torch.he.fv.util.operations import poly_sub_mod
+from syft.frameworks.torch.he.fv.util.operations import poly_negate_mod
from syft.frameworks.torch.he.fv.util.operations import multiply_add_plain_with_delta
+from syft.frameworks.torch.he.fv.util.operations import multiply_sub_plain_with_delta
from syft.frameworks.torch.he.fv.ciphertext import CipherText
from syft.frameworks.torch.he.fv.plaintext import PlainText
+class ParamTypes(Enum):
+ """Enumeration for type checking of parameters."""
+
+ CTCT = 1
+ PTPT = 2
+ CTPT = 3
+ PTCT = 4
+
+
+def _typecheck(op1, op2):
+ """Check the type of parameters used and return correct enum type."""
+ if isinstance(op1, CipherText) and isinstance(op2, CipherText):
+ return ParamTypes.CTCT
+ elif isinstance(op1, PlainText) and isinstance(op2, PlainText):
+ return ParamTypes.PTPT
+ elif isinstance(op1, CipherText) and isinstance(op2, PlainText):
+ return ParamTypes.CTPT
+ elif isinstance(op1, PlainText) and isinstance(op2, CipherText):
+ return ParamTypes.PTCT
+ else:
+ return None
+
+
class Evaluator:
def __init__(self, context):
self.context = context
@@ -17,29 +44,59 @@ def add(self, op1, op2):
"""Adds two operands using FV scheme.
Args:
- op1 (Ciphertext/Plaintext): First argument.
- op2 (Ciphertext/Plaintext): Second argument.
+ op1 (Ciphertext/Plaintext): First polynomial argument (Augend).
+ op2 (Ciphertext/Plaintext): Second polynomial argument (Addend).
Returns:
If both arguments are Plaintext elements then the result will be a Plaintext object
otherwise a Ciphertext object with value equivalent to the result of addition
operation of two provided arguments.
"""
- if isinstance(op1, CipherText) and isinstance(op2, CipherText):
+
+ param_type = _typecheck(op1, op2)
+
+ if param_type == ParamTypes.CTCT:
return self._add_cipher_cipher(op1, op2)
- elif isinstance(op1, PlainText) and isinstance(op2, PlainText):
+ elif param_type == ParamTypes.PTPT:
return self._add_plain_plain(op1, op2)
- elif isinstance(op1, PlainText) and isinstance(op2, CipherText):
- return self._add_plain_cipher(op1, op2)
+ elif param_type == ParamTypes.CTPT:
+ return self._add_cipher_plain(op1, op2)
- elif isinstance(op1, CipherText) and isinstance(op2, PlainText):
- return self._add_plain_cipher(op2, op1)
+ elif param_type == ParamTypes.PTCT:
+ return self._add_cipher_plain(op2, op1)
else:
raise TypeError(f"Addition Operation not supported between {type(op1)} and {type(op2)}")
+ def sub(self, op1, op2):
+ """Subtracts two operands using FV scheme.
+
+ Args:
+ op1 (Ciphertext/Plaintext): First polynomial argument (Minuend).
+ op2 (Ciphertext/Plaintext): Second polynomial argument (Subtrahend).
+
+ Returns:
+ A ciphertext object with the value equivalent to the result of the subtraction
+ of two operands.
+ """
+ param_type = _typecheck(op1, op2)
+
+ if param_type == ParamTypes.CTCT:
+ return self._sub_cipher_cipher(op1, op2)
+
+ elif param_type == ParamTypes.CTPT:
+ return self._sub_cipher_plain(op1, op2)
+
+ elif param_type == ParamTypes.PTCT:
+ return self._sub_cipher_plain(op2, op1)
+
+ else:
+ raise TypeError(
+ f"Subtraction Operation not supported between {type(op1)} and {type(op2)}"
+ )
+
def negate(self, ct):
"""Negate a cipher i.e -(ct_value)
@@ -62,8 +119,8 @@ def _add_cipher_cipher(self, ct1, ct2):
"""Adds two ciphertexts.
Args:
- ct1 (Ciphertext): First argument.
- ct2 (Ciphertext): Second argument.
+ ct1 (Ciphertext): First polynomial argument (Augend).
+ ct2 (Ciphertext): Second polynomial argument (Addend).
Returns:
A Ciphertext object with value equivalent to result of addition of two provided
@@ -78,12 +135,13 @@ def _add_cipher_cipher(self, ct1, ct2):
return CipherText(result)
- def _add_plain_cipher(self, pt, ct):
- """Adds a ciphertext and a plaintext.
+ def _add_cipher_plain(self, ct, pt):
+ """Add a plaintext into a ciphertext.
Args:
- pt (Plaintext): First argument.
- ct (Ciphertext): Second argument.
+ ct (Ciphertext): First polynomial argument (Augend).
+ pt (Plaintext): Second polynomial argument (Addend).
+
Returns:
A Ciphertext object with value equivalent to result of addition of two provided
arguments.
@@ -95,8 +153,8 @@ def _add_plain_plain(self, pt1, pt2):
"""Adds two plaintexts object.
Args:
- pt1 (Plaintext): First argument.
- pt2 (Plaintext): Second argument.
+ pt1 (Plaintext): First polynomial argument (Augend).
+ pt2 (Plaintext): Second polynomial argument (Addend).
Returns:
A Plaintext object with value equivalent to result of addition of two provided
@@ -104,3 +162,42 @@ def _add_plain_plain(self, pt1, pt2):
"""
pt1, pt2 = copy.deepcopy(pt1), copy.deepcopy(pt2)
return PlainText(poly_add_mod(pt1.data, pt2.data, self.plain_modulus))
+
+ def _sub_cipher_plain(self, ct, pt):
+ """Subtract a plaintext from a ciphertext.
+
+ Args:
+ ct (Ciphertext): First polynomial argument (Minuend).
+ pt (Plaintext): Second polynomial argument (Subtrahend).
+
+ Returns:
+ A Ciphertext object with value equivalent to result of subtraction of two provided
+ arguments.
+ """
+ ct = copy.deepcopy(ct)
+ return multiply_sub_plain_with_delta(ct, pt, self.context)
+
+ def _sub_cipher_cipher(self, ct1, ct2):
+ """Subtract two ciphertexts.
+
+ Args:
+ ct1 (Ciphertext): First polynomial argument (Minuend).
+ ct2 (Ciphertext): Second polynomial argument (Subtrahend).
+
+ Returns:
+ A Ciphertext object with value equivalent to result of subtraction of two provided
+ arguments.
+ """
+ ct1, ct2 = copy.deepcopy(ct1.data), copy.deepcopy(ct2.data)
+ result = ct2 if len(ct2) > len(ct1) else ct1
+ min_size, max_size = min(len(ct1), len(ct2)), max(len(ct1), len(ct2))
+
+ for i in range(min_size):
+ for j in range(len(self.coeff_modulus)):
+ result[i][j] = poly_sub_mod(ct1[i][j], ct2[i][j], self.coeff_modulus[j])
+
+ for i in range(min_size + 1, max_size):
+ for j in range(len(self.coeff_modulus)):
+ result[i][j] = poly_negate_mod(result[i][j], self.coeff_modulus[j])
+
+ return CipherText(result)
diff --git a/syft/frameworks/torch/he/fv/util/operations.py b/syft/frameworks/torch/he/fv/util/operations.py
--- a/syft/frameworks/torch/he/fv/util/operations.py
+++ b/syft/frameworks/torch/he/fv/util/operations.py
@@ -57,9 +57,16 @@ def invert_mod(value, modulus):
return gcd_tuple[1]
-def poly_add_mod(op1, op2, modulus):
- """return addition of two polynomials with all coefficients of
- polynomial %q(coefficient modulus)"""
+def poly_add_mod(op1, op2, coeff_mod):
+ """Add two polynomails and modulo every coefficient with coeff_mod.
+
+ Args:
+ op1 (list): First Polynomial (Augend).
+ op2 (list): Second Polynomial (Addend).
+
+ Returns:
+ A list with polynomial coefficients.
+ """
# For polynomials of different sizes we have to pad the shorter one because numpy considers the
# right side the lower order of the polynomial while we consider it the higher order.
@@ -69,12 +76,40 @@ def poly_add_mod(op1, op2, modulus):
else:
op1 = op1 + [0] * (len(op2) - len(op1))
- return np.mod(np.polyadd(op1, op2), modulus).tolist()
+ return np.mod(np.polyadd(op1, op2), coeff_mod).tolist()
+
+def poly_sub_mod(op1, op2, coeff_mod):
+ """Subtract two polynomails and modulo every coefficient with coeff_mod.
-def poly_mul_mod(op1, op2, modulus):
- """return multiplication of two polynomials with all coefficients of
- polynomial %q(coefficient modulus) and result polynomial % t(polynomial modulus)"""
+ Args:
+ op1 (list): First Polynomial (Minuend).
+ op2 (list): Second Polynomial (Subtrahend).
+
+ Returns:
+ A list with polynomial coefficients.
+ """
+ # For polynomials of different sizes we have to pad the shorter one because numpy considers the
+ # right side the lower order of the polynomial while we consider it the higher order.
+ if len(op1) != len(op2):
+ if len(op1) > len(op2):
+ op2 = op2 + [0] * (len(op1) - len(op2))
+ else:
+ op1 = op1 + [0] * (len(op2) - len(op1))
+
+ return np.mod(np.polysub(op1, op2), coeff_mod).tolist()
+
+
+def poly_mul_mod(op1, op2, coeff_mod):
+ """Multiply two polynomails and modulo every coefficient with coeff_mod.
+
+ Args:
+ op1 (list): First Polynomial (Multiplicand).
+ op2 (list): Second Polynomial (Multiplier).
+
+ Returns:
+ A list with polynomial coefficients.
+ """
# For polynomials of different sizes we have to pad the shorter one because numpy considers the
# right side the lower order of the polynomial while we consider it the higher order.
@@ -87,26 +122,34 @@ def poly_mul_mod(op1, op2, modulus):
poly_mod = np.array([1] + [0] * (len(op1) - 1) + [1])
result = (
poly.polydiv(
- poly.polymul(np.array(op1, dtype="object"), np.array(op2, dtype="object")) % modulus,
+ poly.polymul(np.array(op1, dtype="object"), np.array(op2, dtype="object")) % coeff_mod,
poly_mod,
)[1]
- % modulus
+ % coeff_mod
).tolist()
return [round(x) for x in result]
-def poly_negate_mod(op, modulus):
- """returns negative of polynomial i.e (-1 * op)"""
+def poly_negate_mod(op, coeff_mod):
+ """Negate polynomail and modulo every coefficient with coeff_mod.
+
+ Args:
+ op (list): Polynomial to negate.
+
+ Returns:
+ A list with polynomial coefficients.
+ """
coeff_count = len(op)
result = [0] * coeff_count
for i in range(coeff_count):
- if modulus == 0:
+ if coeff_mod == 0:
raise ValueError("Modulus cannot be 0")
- if op[i] >= modulus:
+ if op[i] >= coeff_mod:
raise OverflowError("operand cannot be greater than modulus")
non_zero = op[i] != 0
- result[i] = (modulus - op[i]) & (-int(non_zero))
+ result[i] = (coeff_mod - op[i]) & (-int(non_zero))
return result
@@ -169,28 +212,55 @@ def xgcd(x, y):
return [x, prev_a, prev_b]
-def multiply_add_plain_with_delta(phase, message, context):
+def multiply_add_plain_with_delta(ct, pt, context):
"""Add message into phase.
Args:
- phase (Ciphertext): phase is pre-computed carrier polynomial where we can add message data.
- message (Plaintext): A plaintext representation of integer data to be encrypted.
+ ct (Ciphertext): ct is pre-computed carrier polynomial where we can add pt data.
+ pt (Plaintext): A plaintext representation of integer data to be encrypted.
+ context (Context): Context for extracting encryption parameters.
+
+ Returns:
+ A Ciphertext object with the encrypted result of encryption process.
+ """
+ coeff_modulus = context.param.coeff_modulus
+ pt = pt.data
+ plain_coeff_count = len(pt)
+ delta = context.coeff_div_plain_modulus
+ ct0, ct1 = ct.data # here ct = pk * u * e
+
+ # Coefficients of plain m multiplied by coeff_modulus q, divided by plain_modulus t,
+ # and rounded to the nearest integer (rounded up in case of a tie). Equivalent to
+ for i in range(plain_coeff_count):
+ for j in range(len(coeff_modulus)):
+ temp = round(delta[j] * pt[i]) % coeff_modulus[j]
+ ct0[j][i] = (ct0[j][i] + temp) % coeff_modulus[j]
+
+ return CipherText([ct0, ct1]) # ct0 = pk0 * u * e + delta * pt
+
+
+def multiply_sub_plain_with_delta(ct, pt, context):
+ """Subtract plaintext from ciphertext.
+
+ Args:
+ ct (Ciphertext): ct is pre-computed carrier polynomial where we can add message data.
+ pt (Plaintext): A plaintext representation of integer data to be encrypted.
context (Context): Context for extracting encryption parameters.
Returns:
A Ciphertext object with the encrypted result of encryption process.
"""
coeff_modulus = context.param.coeff_modulus
- message = message.data
- plain_coeff_count = len(message)
+ pt = pt.data
+ plain_coeff_count = len(pt)
delta = context.coeff_div_plain_modulus
- phase0, phase1 = phase.data # here phase = pk * u * e
+ ct0, ct1 = ct.data # here ct = pk * u * e
# Coefficients of plain m multiplied by coeff_modulus q, divided by plain_modulus t,
# and rounded to the nearest integer (rounded up in case of a tie). Equivalent to
for i in range(plain_coeff_count):
for j in range(len(coeff_modulus)):
- temp = round(delta[j] * message[i]) % coeff_modulus[j]
- phase0[j][i] = (phase0[j][i] + temp) % coeff_modulus[j]
+ temp = round(delta[j] * pt[i]) % coeff_modulus[j]
+ ct0[j][i] = (ct0[j][i] - temp) % coeff_modulus[j]
- return CipherText([phase0, phase1]) # phase0 = pk0 * u * e + delta * m
+ return CipherText([ct0, ct1]) # ct0 = pk0 * u * e - delta * pt
| diff --git a/test/torch/tensors/test_fv.py b/test/torch/tensors/test_fv.py
--- a/test/torch/tensors/test_fv.py
+++ b/test/torch/tensors/test_fv.py
@@ -407,7 +407,7 @@ def test_fv_add_cipher_cipher(int1, int2):
@pytest.mark.parametrize(
"int1, int2", [(0, 0), (-1, 1), (100, -10), (1000, 100), (-1000, 100), (-100, -100)]
)
-def test_fv_add_plain_cipher(int1, int2):
+def test_fv_add_cipher_plain(int1, int2):
ctx = Context(EncryptionParams(1024, CoeffModulus().create(1024, [30, 30]), 1024))
keys = KeyGenerator(ctx).keygen()
encoder = IntegerEncoder(ctx)
@@ -415,12 +415,12 @@ def test_fv_add_plain_cipher(int1, int2):
decryptor = Decryptor(ctx, keys[0]) # keys[0] = secret_key
evaluator = Evaluator(ctx)
- op1 = encoder.encode(int1)
- op2 = encryptor.encrypt(encoder.encode(int2))
+ op1 = encryptor.encrypt(encoder.encode(int1))
+ op2 = encoder.encode(int2)
assert (
int1 + int2
- == encoder.decode(decryptor.decrypt(evaluator._add_plain_cipher(op1, op2)))
+ == encoder.decode(decryptor.decrypt(evaluator._add_cipher_plain(op1, op2)))
== encoder.decode(decryptor.decrypt(evaluator.add(op1, op2)))
== encoder.decode(decryptor.decrypt(evaluator.add(op2, op1)))
)
@@ -453,3 +453,46 @@ def test_fv_negate_cipher(val):
decryptor = Decryptor(ctx, keys[0]) # keys[0] = secret_key
op = encryptor.encrypt(encoder.encode(val))
assert -val == encoder.decode(decryptor.decrypt(evaluator.negate(op)))
+
+
[email protected](
+ "int1, int2", [(0, 0), (-1, 1), (100, -10), (1000, 100), (-1000, 100), (-100, -100)]
+)
+def test_fv_sub_cipher_cipher(int1, int2):
+ ctx = Context(EncryptionParams(1024, CoeffModulus().create(1024, [30, 30]), 1024))
+ keys = KeyGenerator(ctx).keygen()
+ encoder = IntegerEncoder(ctx)
+ encryptor = Encryptor(ctx, keys[1]) # keys[1] = public_key
+ decryptor = Decryptor(ctx, keys[0]) # keys[0] = secret_key
+ evaluator = Evaluator(ctx)
+
+ op1 = encryptor.encrypt(encoder.encode(int1))
+ op2 = encryptor.encrypt(encoder.encode(int2))
+ assert (
+ int1 - int2
+ == encoder.decode(decryptor.decrypt(evaluator._sub_cipher_cipher(op1, op2)))
+ == encoder.decode(decryptor.decrypt(evaluator.sub(op1, op2)))
+ == -encoder.decode(decryptor.decrypt(evaluator.sub(op2, op1)))
+ )
+
+
[email protected](
+ "int1, int2", [(0, 0), (-1, 1), (100, -10), (1000, 100), (-1000, 100), (-100, -100)]
+)
+def test_fv_sub_cipher_plain(int1, int2):
+ ctx = Context(EncryptionParams(1024, CoeffModulus().create(1024, [30, 30]), 1024))
+ keys = KeyGenerator(ctx).keygen()
+ encoder = IntegerEncoder(ctx)
+ encryptor = Encryptor(ctx, keys[1]) # keys[1] = public_key
+ decryptor = Decryptor(ctx, keys[0]) # keys[0] = secret_key
+ evaluator = Evaluator(ctx)
+
+ op1 = encryptor.encrypt(encoder.encode(int1))
+ op2 = encoder.encode(int2)
+
+ assert (
+ int1 - int2
+ == encoder.decode(decryptor.decrypt(evaluator._sub_cipher_plain(op1, op2)))
+ == encoder.decode(decryptor.decrypt(evaluator.sub(op1, op2)))
+ == encoder.decode(decryptor.decrypt(evaluator.sub(op2, op1)))
+ )
| Implement Subtraction operation for FV HE Scheme
## Feature Description
The subtraction operations of the FV scheme need to be implemented.
It should subtract two ciphertext objects.
It should subtract a plaintext object from a ciphertext.
In both cases, the result should be returned as a ciphertext object.
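A minimal usage sketch, reusing an `encoder`, `encryptor`, `decryptor`, and `evaluator` set up as in the tests above:
```python
# Sketch: cipher - cipher and cipher - plain both decrypt to the plain difference.
ct1 = encryptor.encrypt(encoder.encode(100))
ct2 = encryptor.encrypt(encoder.encode(-10))
pt2 = encoder.encode(-10)
assert encoder.decode(decryptor.decrypt(evaluator.sub(ct1, ct2))) == 110
assert encoder.decode(decryptor.decrypt(evaluator.sub(ct1, pt2))) == 110
```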
| 2020-06-23T14:23:55 |
|
OpenMined/PySyft | 3,815 | OpenMined__PySyft-3815 | [
"3429"
] | 6ce9b3544826665d4d9c98bd15e79f750dece218 | diff --git a/syft/exceptions.py b/syft/exceptions.py
--- a/syft/exceptions.py
+++ b/syft/exceptions.py
@@ -360,25 +360,21 @@ class TranslationUnavailableError(Exception):
pass
-def route_method_exception(exception, self, args_, kwargs_): # noqa: C901
+def route_method_exception(exception, self, args_, kwargs_):
try:
- if self.is_wrapper:
- if isinstance(self.child, sy.PointerTensor):
- if len(args_) > 0:
- if not args_[0].is_wrapper:
- return TensorsNotCollocatedException(self, args_[0])
- elif isinstance(args_[0].child, sy.PointerTensor):
- if self.location != args_[0].child.location:
- return TensorsNotCollocatedException(self, args_[0])
+ if self.is_wrapper and isinstance(self.child, sy.PointerTensor) and len(args_) > 0:
+ if not args_[0].is_wrapper:
+ return TensorsNotCollocatedException(self, args_[0])
+ elif isinstance(args_[0].child, sy.PointerTensor):
+ if self.location != args_[0].child.location:
+ return TensorsNotCollocatedException(self, args_[0])
# if self is a normal tensor
- elif isinstance(self, FrameworkTensor):
- if len(args_) > 0:
- if args_[0].is_wrapper:
- if isinstance(args_[0].child, sy.PointerTensor):
- return TensorsNotCollocatedException(self, args_[0])
- elif isinstance(args_[0], sy.PointerTensor):
- return TensorsNotCollocatedException(self, args_[0])
+ elif isinstance(self, FrameworkTensor) and len(args_) > 0:
+ if args_[0].is_wrapper and isinstance(args_[0].child, sy.PointerTensor):
+ return TensorsNotCollocatedException(self, args_[0])
+ elif isinstance(args_[0], sy.PointerTensor):
+ return TensorsNotCollocatedException(self, args_[0])
except: # noqa: E722
""
return exception
| Reduce complexity of route_method_exception
Split the code from ```route_method_exception``` into two separate functions and remove the ```noqa: C901```.
**Describe alternatives you've considered**
Simplify the function such that no split is required.
**Additional context**
Code quality:
```
2020-04-29T13:13:32.5920184Z ./syft/exceptions.py:359:1: C901 'route_method_exception' is too complex (14)
2020-04-29T13:13:32.5920476Z def route_method_exception(exception, self, args_, kwargs_):
2020-04-29T13:13:32.5920733Z ^
```
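The patch above takes the alternative route: instead of splitting the function, it flattens the nested conditionals. A toy, runnable illustration of the pattern (all names here are made up for the example):
```python
def route(is_wrapper, child_is_pointer, has_args):
    # Before: `if is_wrapper: if child_is_pointer: if has_args: ...` adds one
    # decision point per nested `if`; merging them with `and` keeps the same
    # reachable branches and brings the count back under flake8's C901 limit.
    if is_wrapper and child_is_pointer and has_args:
        return "TensorsNotCollocatedException"
    return "original exception"

print(route(True, True, True))   # TensorsNotCollocatedException
print(route(True, False, True))  # original exception
```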
| Hey, can I work on this issue?
Is the issue still open? I would like to work on it, @gmuraru.
Hey. If the issue is still present, you can grab it :D
Can you guide me through how to reproduce the issue?
This issue has been marked stale because it has been open 30 days with no activity. Leave a comment or remove the `stale` label to unmark it. Otherwise, this will be closed in 7 days.
@gmuraru sir, I tried to solve the mentioned problem
```
2020-04-29T13:13:32.5920184Z ./syft/exceptions.py:359:1: C901 'route_method_exception' is too complex (14)
2020-04-29T13:13:32.5920476Z def route_method_exception(exception, self, args_, kwargs_):
2020-04-29T13:13:32.5920733Z ^
```
But instead of converting ```route_method_exception``` into two separate methods, I just reduced the ```if...elif``` statements.
It seems to work properly: it passes both the Flake8 check (after removing C901) and pytest.
Will this change work instead of separating the method? | 2020-07-04T10:06:58 |
|
OpenMined/PySyft | 3,977 | OpenMined__PySyft-3977 | [
"3662"
] | 80155352ad99265742d03165aa78bfb279810549 | diff --git a/syft/frameworks/torch/he/fv/evaluator.py b/syft/frameworks/torch/he/fv/evaluator.py
--- a/syft/frameworks/torch/he/fv/evaluator.py
+++ b/syft/frameworks/torch/he/fv/evaluator.py
@@ -11,6 +11,7 @@
from syft.frameworks.torch.he.fv.util.operations import multiply_sub_plain_with_delta
from syft.frameworks.torch.he.fv.ciphertext import CipherText
from syft.frameworks.torch.he.fv.plaintext import PlainText
+from syft.frameworks.torch.he.fv.relin_keys import RelinKey
class ParamTypes(Enum):
@@ -381,3 +382,103 @@ def _mul_plain_plain(self, pt1, pt2):
"""
pt1, pt2 = pt1.data, pt2.data
return PlainText(poly_mul_mod(pt1, pt2, self.plain_modulus, self.poly_modulus))
+
+ def relin(self, ct, key):
+ """Relinearize the provided ciphertext and decrease its size by one.
+ (cannot be applied to a ciphertext of size 2)
+
+ Args:
+ ct (CipherText): ciphertext of size 3 to relinearize.
+ key (RelinKey): relinearization key generated with the key generator.
+
+ Returns:
+ A ciphertext of size 2 with the same encrypted value.
+ """
+ if len(ct.data) == 2:
+ raise Warning("Ciphertext of size 2 does not need to relinearize.")
+ if len(ct.data) > 3:
+ raise Warning(f"Ciphertext of size {len(ct.data)} cannot be relinearized.")
+
+ return self._switch_key_inplace(copy.deepcopy(ct), key)
+
+ def _switch_key_inplace(self, ct, key):
+
+ if not isinstance(key, RelinKey):
+ raise RuntimeError("Relinearization key is invalid")
+
+ param_id = ct.param_id
+ ct = ct.data
+ key_vector = key.data
+ context_data = self.context.context_data_map[param_id]
+ key_context = self.context.context_data_map[self.context.key_param_id]
+
+ coeff_modulus = context_data.param.coeff_modulus
+ decomp_mod_count = len(coeff_modulus)
+ key_mod = key_context.param.coeff_modulus
+ key_mod_count = len(key_mod)
+ rns_mod_count = decomp_mod_count + 1
+
+ target = ct[-1] # Last component of ciphertext
+
+ modswitch_factors = key_context.rns_tool.inv_q_last_mod_q
+
+ for i in range(decomp_mod_count):
+
+ local_small_poly_0 = copy.deepcopy(target[i])
+
+ temp_poly = [[[0] for x in range(rns_mod_count)], [[0] for x in range(rns_mod_count)]]
+
+ for j in range(rns_mod_count):
+ index = key_mod_count - 1 if j == decomp_mod_count else j
+
+ if key_mod[i] <= key_mod[index]:
+ local_small_poly_1 = copy.deepcopy(local_small_poly_0)
+ else:
+ local_small_poly_1 = [x % key_mod[index] for x in local_small_poly_0]
+
+ for k in range(2):
+ local_small_poly_2 = poly_mul_mod(
+ local_small_poly_1,
+ key_vector[i][k][index],
+ key_mod[index],
+ self.poly_modulus,
+ )
+ temp_poly[k][j] = poly_add_mod(
+ local_small_poly_2, temp_poly[k][j], key_mod[index], self.poly_modulus
+ )
+
+ # Results are now stored in temp_poly[k]
+ # Modulus switching should be performed
+ for k in range(2):
+ temp_poly_ptr = temp_poly[k][decomp_mod_count]
+ temp_last_poly_ptr = temp_poly[k][decomp_mod_count]
+
+ temp_poly_ptr = [x % key_mod[-1] for x in temp_poly_ptr]
+
+ # Add (p-1)/2 to change from flooring to rounding.
+ half = key_mod[-1] >> 1
+ temp_last_poly_ptr = [(x + half) % key_mod[-1] for x in temp_last_poly_ptr]
+
+ encrypted_ptr = ct[k]
+ for j in range(decomp_mod_count):
+ temp_poly_ptr = temp_poly[k][j]
+
+ temp_poly_ptr = [x % key_mod[j] for x in temp_poly_ptr]
+ local_small_poly = [x % key_mod[j] for x in temp_last_poly_ptr]
+ half_mod = half % key_mod[j]
+
+ local_small_poly = [(x - half_mod) % key_mod[j] for x in local_small_poly]
+
+ # ((ct mod qi) - (ct mod qk)) mod qi
+ temp_poly_ptr = poly_sub_mod(
+ temp_poly_ptr, local_small_poly, key_mod[j], self.poly_modulus
+ )
+
+ # qk^(-1) * ((ct mod qi) - (ct mod qk)) mod qi
+ temp_poly_ptr = [(x * modswitch_factors[j]) % key_mod[j] for x in temp_poly_ptr]
+
+ encrypted_ptr[j] = poly_add_mod(
+ temp_poly_ptr, encrypted_ptr[j], key_mod[j], self.poly_modulus
+ )
+
+ return CipherText(ct[0:2], param_id)
diff --git a/syft/frameworks/torch/he/fv/key_generator.py b/syft/frameworks/torch/he/fv/key_generator.py
--- a/syft/frameworks/torch/he/fv/key_generator.py
+++ b/syft/frameworks/torch/he/fv/key_generator.py
@@ -3,6 +3,7 @@
from syft.frameworks.torch.he.fv.util.rlwe import encrypt_symmetric
from syft.frameworks.torch.he.fv.secret_key import SecretKey
from syft.frameworks.torch.he.fv.public_key import PublicKey
+from syft.frameworks.torch.he.fv.relin_keys import RelinKeys
class KeyGenerator:
@@ -16,11 +17,12 @@ class KeyGenerator:
def __init__(self, context):
if not isinstance(context, Context):
- raise ValueError("invalid context")
+ raise RuntimeError("invalid context")
self.public_key = None
self.secret_key = None
self.context = context
+ self.relin_key_generator = None
def keygen(self):
"""Generate the secret key and public key.
@@ -44,3 +46,18 @@ def _generate_pk(self):
self.context, self.context.key_param_id, self.secret_key.data
)
self.public_key = PublicKey(public_key.data)
+
+ def get_relin_keys(self):
+ """Generate a relinearization key.
+
+ Returns:
+ A relinearization key.
+ """
+ if self.relin_key_generator is None:
+ if self.secret_key is None:
+ raise RuntimeError("cannot generate relinearization key for unspecified secret key")
+
+ self.relin_key_generator = RelinKeys(self.context, self.secret_key)
+
+ # generate keys.
+ return self.relin_key_generator._generate_relin_key()
diff --git a/syft/frameworks/torch/he/fv/relin_keys.py b/syft/frameworks/torch/he/fv/relin_keys.py
new file mode 100644
--- /dev/null
+++ b/syft/frameworks/torch/he/fv/relin_keys.py
@@ -0,0 +1,45 @@
+from syft.frameworks.torch.he.fv.util.operations import poly_mul_mod
+from syft.frameworks.torch.he.fv.util.rlwe import encrypt_symmetric
+from syft.frameworks.torch.he.fv.util.operations import poly_add_mod
+
+
+class RelinKey:
+ def __init__(self, data):
+ self.data = data
+
+
+class RelinKeys:
+ def __init__(self, context, sk):
+ self._context = context
+ key_param = context.context_data_map[context.key_param_id].param
+ self._coeff_modulus = key_param.coeff_modulus
+ self._poly_modulus = key_param.poly_modulus
+ self._secret_key = sk.data
+ self._secret_key_power_2 = self._get_sk_power_2(sk.data)
+
+ def _generate_relin_key(self):
+ return RelinKey(self._generate_one_kswitch_key(self._secret_key_power_2))
+
+ def _generate_one_kswitch_key(self, sk_power_2):
+ decomp_mod_count = len(self._coeff_modulus) - 1
+ result = [0] * (decomp_mod_count)
+ for i in range(decomp_mod_count):
+ result[i] = encrypt_symmetric(
+ self._context, self._context.key_param_id, self._secret_key
+ ).data
+ factor = self._coeff_modulus[-1] % self._coeff_modulus[i]
+
+ temp = [(x * factor) for x in sk_power_2[i]]
+
+ result[i][0][i] = poly_add_mod(
+ result[i][0][i], temp, self._coeff_modulus[i], self._poly_modulus
+ )
+ return result
+
+ def _get_sk_power_2(self, sk):
+ sk_power_2 = []
+ for i in range(len(self._coeff_modulus)):
+ sk_power_2.append(
+ poly_mul_mod(sk[i], sk[i], self._coeff_modulus[i], self._poly_modulus)
+ )
+ return sk_power_2
| diff --git a/test/torch/tensors/test_fv.py b/test/torch/tensors/test_fv.py
--- a/test/torch/tensors/test_fv.py
+++ b/test/torch/tensors/test_fv.py
@@ -676,3 +676,47 @@ def test_rns_tool_fastbconv_sk(poly_len, coeff_mod, plain_mod, input, output):
rns_tool = RNSTool(enc_param)
result = rns_tool.fastbconv_sk(input)
assert result == output
+
+
[email protected](
+ "val1, val2", [(0, 0), (1, 1), (-1, 1), (100, -1), (1000, 1), (-1000, -1), (-99, 0)],
+)
+def test_fv_relin(val1, val2):
+ ctx = Context(EncryptionParams(128, CoeffModulus().create(128, [40, 40]), 64))
+ keygenerator = KeyGenerator(ctx)
+ keys = keygenerator.keygen()
+ relin_key = keygenerator.get_relin_keys()
+ encoder = IntegerEncoder(ctx)
+ encryptor = Encryptor(ctx, keys[1]) # keys[1] = public_key
+ decryptor = Decryptor(ctx, keys[0]) # keys[0] = secret_key
+ evaluator = Evaluator(ctx)
+
+ op1 = encryptor.encrypt(encoder.encode(val1))
+ op2 = encryptor.encrypt(encoder.encode(val2))
+ temp_prod = evaluator.mul(op1, op2)
+ relin_prod = evaluator.relin(temp_prod, relin_key)
+ assert len(temp_prod.data) - 1 == len(relin_prod.data)
+ assert val1 * val2 == encoder.decode(decryptor.decrypt(relin_prod))
+
+
[email protected](
+ "val1, val2", [(-1, 1)],
+)
+def test_fv_relin_exceptions(val1, val2):
+ ctx = Context(EncryptionParams(128, CoeffModulus().create(128, [40, 40]), 64))
+ keygenerator = KeyGenerator(ctx)
+ keys = keygenerator.keygen()
+ relin_key = keygenerator.get_relin_keys()
+ encoder = IntegerEncoder(ctx)
+ encryptor = Encryptor(ctx, keys[1]) # keys[1] = public_key
+ evaluator = Evaluator(ctx)
+
+ op1 = encryptor.encrypt(encoder.encode(val1))
+ op2 = encryptor.encrypt(encoder.encode(val2))
+ temp_prod = evaluator.mul(op1, op2)
+
+ with pytest.raises(Warning):
+ evaluator.relin(op1, relin_key) # Ciphertext size 2
+
+ with pytest.raises(Exception):
+ evaluator.relin(evaluator.mul(temp_prod, val1), relin_key) # Ciphertext size 4
| Implement Relinearization keys for FV scheme
## Feature Description
**Relinearization keys** need to be implemented for the FV scheme.
These keys are useful for reducing the number of polynomials produced by the multiplication of two ciphertexts.
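A minimal usage sketch, mirroring the test above (`ctx`, `encoder`, `decryptor`, and `evaluator` are set up as usual, with `ct1`/`ct2` encrypting `val1`/`val2`):
```python
# Sketch: multiplying two ciphertexts grows the result to 3 polynomials;
# relinearization shrinks it back to 2 without changing the plaintext.
keygen = KeyGenerator(ctx)
secret_key, public_key = keygen.keygen()
relin_key = keygen.get_relin_keys()
prod = evaluator.mul(ct1, ct2)            # len(prod.data) == 3
prod = evaluator.relin(prod, relin_key)   # len(prod.data) == 2
assert encoder.decode(decryptor.decrypt(prod)) == val1 * val2
```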
| This issue has been marked stale because it has been open 30 days with no activity. Leave a comment or remove the `stale` label to unmark it. Otherwise, this will be closed in 7 days. | 2020-08-10T11:27:21 |
OpenMined/PySyft | 4,007 | OpenMined__PySyft-4007 | [
"3099"
] | 5a7657bc5618f9916408a4994a03ea35326f1e8d | diff --git a/syft/grid/utils/autoscale/gcloud.py b/syft/grid/utils/autoscale/gcloud.py
--- a/syft/grid/utils/autoscale/gcloud.py
+++ b/syft/grid/utils/autoscale/gcloud.py
@@ -5,6 +5,11 @@
import terrascript.data
import terrascript.provider
import terrascript.resource
+
+# syft dependencies
+from syft.grid.clients.data_centric_fl_client import DataCentricFLClient
+
+# terraform utils
from utils.script import terraform_script
from utils.notebook import terraform_notebook
@@ -25,7 +30,9 @@ def __init__(self, credentials, project_id, region):
self.provider = "google"
self.config = terrascript.Terrascript()
self.config += terrascript.provider.google(
- credentials=self.credentials, project=self.project_id, region=self.region
+ credentials=self.credentials,
+ project=self.project_id,
+ region=self.region,
)
with open("main.tf.json", "w") as main_config:
json.dump(self.config, main_config, indent=2, sort_keys=False)
@@ -107,9 +114,9 @@ def create_gridnetwork(self, name, machine_type, zone, apply=True):
self.expose_port(name="pygrid", apply=False)
image = terrascript.data.google_compute_image(
- name + "pytorch",
- family="pytorch-latest-gpu-debian-10",
- project="deeplearning-platform-release",
+ name + "container-optimized-os",
+ family="cos-81-lts",
+ project="cos-cloud",
)
self.config += image
@@ -125,11 +132,14 @@ def create_gridnetwork(self, name, machine_type, zone, apply=True):
},
metadata_startup_script="""
#!/bin/bash
- apt-get update
- apt-get -y upgrade
- sudo -i bash -c 'pip install git+https://github.com/OpenMined/PyGridNetwork.git'
- sudo -i bash -c 'echo Starting PyGridNetwork & \
- python -m gridnetwork --port=80 --start_local_db'""",
+ sleep 30;
+ docker pull openmined/grid-network:production;
+ docker run \
+ -e PORT=80 \
+ -e DATABASE_URL=sqlite:///databasenode.db \
+ --name gridnetwork\
+ -p 80:80 \
+ -d openmined/grid-network:production;""",
)
self.config += node
@@ -158,15 +168,17 @@ def create_gridnode(self, name, machine_type, zone, gridnetwork_name=None, apply
outputs = json.load(out)["outputs"]
gridnetwork_ip = outputs[gridnetwork_name + "-ip"]["value"]
- pygrid_network_address = "--gateway_url=http://" + gridnetwork_ip if gridnetwork_ip else ""
+ pygrid_network_address = "=http://" + gridnetwork_ip if gridnetwork_ip else ""
+ host = "curl ifconfig.co" if gridnetwork_ip else "hostname -I"
image = terrascript.data.google_compute_image(
- name + "pytorch",
- family="pytorch-latest-gpu-debian-10",
- project="deeplearning-platform-release",
+ name + "container-optimized-os",
+ family="cos-81-lts",
+ project="cos-cloud",
)
self.config += image
+ # HOST environment variable is set to internal IP address
node = terrascript.resource.google_compute_instance(
name,
name=name,
@@ -176,13 +188,17 @@ def create_gridnode(self, name, machine_type, zone, gridnetwork_name=None, apply
network_interface={"network": "default", "access_config": {}},
metadata_startup_script=f"""
#!/bin/bash
- apt-get update
- apt-get -y upgrade
- sudo -i bash -c 'pip install notebook==5.7.8'
- sudo -i bash -c 'pip install git+https://github.com/OpenMined/PyGridNode.git'
- sudo -i bash -c 'echo Starting Node {name} \
- joined with PyGridNetwork at {gridnetwork_ip} & \
- python -m gridnode --id={name} --port=80 {pygrid_network_address}'""",
+ sleep 30;
+ docker pull openmined/grid-node:production;
+ docker run \
+ -e NODE_ID={name} \
+ -e HOST = "$({host})" \
+ -e PORT=80 \
+ -e NETWORK={pygrid_network_address} \
+ -e DATABASE_URL=sqlite:///databasenode.db \
+ --name gridnode \
+ -p 80:80 \
+ -d openmined/grid-node:production;""",
)
self.config += node
with open("main.tf.json", "w") as main_config:
@@ -211,19 +227,22 @@ def create_cluster(
zone: zone of your GCP project
reserve_ip_name: name of the reserved ip created using reserve_ip
target_size: number of workers to be created (N workers + 1 master)
- eviction_policy: "delete" to teardown the cluster after calling .sweep() else None
+ eviction_policy: "delete" to teardown the cluster after calling
+ cluster.sweep() else None
apply: to call terraform apply at the end
"""
- self.expose_port("pygrid", apply=False)
+ self.expose_port("pygrid", ports=[80, 3000], apply=False)
with open("terraform.tfstate", "r") as out:
outputs = json.load(out)["outputs"]
+
gridnetwork_ip = outputs[reserve_ip_name + "-ip"]["value"]
+ pygrid_network_address = "http://" + gridnetwork_ip
image = terrascript.data.google_compute_image(
- name + "pytorch",
- family="pytorch-latest-gpu-debian-10",
- project="deeplearning-platform-release",
+ name + "container-optimized-os",
+ family="cos-81-lts",
+ project="cos-cloud",
)
self.config += image
@@ -234,16 +253,29 @@ def create_cluster(
zone=zone,
boot_disk={"initialize_params": {"image": "${" + image.self_link + "}"}},
network_interface={"network": "default", "access_config": {"nat_ip": gridnetwork_ip}},
- metadata_startup_script="""
+ metadata_startup_script=f"""
#!/bin/bash
- apt-get update
- apt-get -y upgrade
- sudo -i bash -c 'pip install git+https://github.com/OpenMined/PyGridNetwork.git'
- sudo -i bash -c 'echo Starting PyGridNetwork & \
- python -m gridnetwork --port=80 --start_local_db'""",
+ sleep 30;
+ docker pull openmined/grid-network:production && \
+ docker run \
+ -e PORT=80 \
+ -e DATABASE_URL=sqlite:///databasenode.db \
+ --name gridnetwork \
+ -p 80:80 \
+ -d openmined/grid-network:production;
+ docker pull openmined/grid-node:production;
+ docker run \
+ -e NODE_ID={name+"-network-node"} \
+ -e HOST="$(hostname -I)" \
+ -e PORT=3000 \
+ -e NETWORK={pygrid_network_address} \
+ -e DATABASE_URL=sqlite:///databasenode.db \
+ --name gridnode \
+ -p 3000:3000 \
+ -d openmined/grid-node:production;""",
)
- pygrid_network_address = "http://" + gridnetwork_ip
+ # HOST environment variable is set to external IP address
instance_template = terrascript.resource.google_compute_instance_template(
name + "-template",
name=name + "-template",
@@ -252,14 +284,17 @@ def create_cluster(
network_interface={"network": "default", "access_config": {}},
metadata_startup_script=f"""
#!/bin/bash
- apt-get update
- apt-get -y upgrade
- sudo -i bash -c 'pip install notebook==5.7.8'
- sudo -i bash -c 'pip install git+https://github.com/OpenMined/PyGridNode.git'
- sudo -i bash -c 'echo Starting Node {name} \
- joined with PyGridNetwork at {pygrid_network_address} & \
- python -m gridnode --id={name} --port=80 \
- --gateway_url={pygrid_network_address}'""",
+ sleep 30;
+ docker pull openmined/grid-node:production;
+ docker run \
+ -e NODE_ID={name} \
+ -e HOST="$(curl ifconfig.co)" \
+ -e PORT=80 \
+ -e NETWORK={pygrid_network_address} \
+ -e DATABASE_URL=sqlite:///databasenode.db \
+ --name gridnode \
+ -p 80:80 \
+ -d openmined/grid-node:production;""",
lifecycle={"create_before_destroy": True},
)
self.config += instance_template
@@ -281,7 +316,12 @@ def create_cluster(
else:
terraform_script.apply()
- return Cluster(name, self.provider, gridnetwork_ip, eviction_policy=eviction_policy)
+ return Cluster(
+ name,
+ self.provider,
+ gridnetwork_ip,
+ eviction_policy=eviction_policy,
+ )
def compute_instance(self, name, machine_type, zone, image_family, apply=True):
"""
@@ -329,22 +369,55 @@ def __init__(self, name, provider, gridnetwork_ip, eviction_policy=None):
name: name of the cluster
provider: terrafrom provider for the cluster
gridnetwork_ip: ip of grid network instance
- eviction_policy: "delete" to teardown the cluster after calling .sweep() else None
+ eviction_policy: "delete" to teardown the cluster after calling
+ cluster.sweep() else None
"""
self.name = name
self.provider = provider
self.gridnetwork_ip = gridnetwork_ip
+ self.gridnetwork_node_ip = gridnetwork_ip + ":3000"
self.master = name + "-master"
self.cluster = name + "-cluster"
self.template = name + "-template"
self.eviction_policy = eviction_policy
+ self.network_node = None
self.config = None
- def sweep(self, apply=True):
+ def sweep(
+ self,
+ model,
+ hook,
+ model_id=None,
+ mpc=False,
+ allow_download=False,
+ allow_remote_inference=False,
+ apply=True,
+ ):
"""
args:
+ model : A jit model or Syft Plan.
+ hook : A PySyft hook
+ model_id (str): An integer/string representing the model id.
+ If it isn't provided and the model is a Plan we use model.id,
+ if the model is a jit model we raise an exception.
+ allow_download (bool) : Allow to copy the model to run it locally.
+ allow_remote_inference (bool) : Allow to run remote inferences.
apply: to call terraform apply at the end
"""
+ print("Connecting to network-node")
+ self.network_node = DataCentricFLClient(hook, self.gridnetwork_node_ip)
+ print("Sending model to node")
+ self.network_node.serve_model(
+ model=model,
+ model_id=model_id,
+ mpc=mpc,
+ allow_download=allow_download,
+ allow_remote_inference=allow_remote_inference,
+ )
+ print("Model sent, disconnecting node now")
+ self.network_node.close()
+ print("Node disconnected")
+
with open("main.tf.json", "r") as main_config:
self.config = json.load(main_config)
| Implement Auto-Scaling on Google Cloud
### Description
In this project, you will implement the functionality necessary to automatically spin up Google Cloud machines, load a PyGrid instance, run a training job, and tear down the instance upon completion (depositing the results into another long-running instance). The primary feature will be the ability to run a "hyperparameter sweep", as exemplified below.
```python
parameters = {"alpha": [0, 0.01, 0.02], "batch_size": [32, 64]}
gcloud = gr.GoogleCloud(credentials)
cluster = gcloud.LazyCluster(n_machines=10,
type="n-series",
priority="low",
eviction_policy="Delete",
max_price_usd=0.55,
reboot_max_price_usd=0.65)
cluster.sweep(model=model,
parameters=parameters,
optim=optim.SGD,
results_node=grid['my node']
)
```
### Context
I will be mentoring this project as part of the [Google Summer of Code](https://summerofcode.withgoogle.com/)
Also, this year we have some plans for PyGrid, along with our other projects, including a focus on production: Deployment on Cloud Providers (GCP, Amazon, Azure). You can check out the PyGrid Roadmap [here](https://github.com/OpenMined/Roadmap/tree/master/pygrid_team).
### Required Skills:
- Knowledge of [PyTorch](https://pytorch.org/) and Deep Learning
- Familiarity (or willingness to become familiar) with [PySyft](https://github.com/OpenMined/PySyft)
- Familiarity (or willingness to become familiar) with [Google Cloud's Infrastructure](https://cloud.google.com/docs)
**Difficulty**: Beginner
While this is a sizeable project, the core functionality is relatively straightforward (provision machines, install and start PyGrid servers, and run training scripts). Furthermore, there are plenty of tutorials on how to run PyGrid servers and how to run Google Cloud machines.
If you are new to PyGrid and want to learn more about it, try out the tutorials. A comprehensive list of tutorials can be found [here](https://github.com/OpenMined/PySyft/tree/master/examples/tutorials/grid). These tutorials cover how to create a PyGrid node and what operations you can perform.
### Useful Links
- PyTorch - https://pytorch.org/blog
- PySyft - https://github.com/OpenMined/PySyft
- PyGrid - https://github.com/OpenMined/PyGrid
- GCP Docs - https://cloud.google.com/docs
- GCloud APIs for Python - https://cloud.google.com/python/docs/reference
- Google APIs (Python Client) - https://github.com/googleapis/google-api-python-client
- Google Summer of Code - https://summerofcode.withgoogle.com
- [Timeline](https://developers.google.com/open-source/gsoc/timeline)
- [Student Guide](https://google.github.io/gsocguides/student/)
- [OpenMined Org.](https://summerofcode.withgoogle.com/organizations/5561599089180672)
_List of all GSoC project ideas [here](https://docs.google.com/document/d/1OA8277RB9ZN8u7Wgn6NqZK7tzixnZbSlkfulyqoo2G4/edit?usp=sharing)._
| Note: we want to allow for automatic node teardown after calling .sweep() as an option - where it deposits the results into a "main cluster node". In other words, if I spin up a cluster of 10 workers, this is ACTUALLY a cluster of 10 workers and 1 master, where the master can store resources produced by the 10 workers.
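For reference, the merged patch above exposes exactly this through `eviction_policy`; a sketch, where the machine type, zone, and names are placeholders, and `model`/`hook` are assumed to exist:
```python
# Sketch: a cluster whose resources are torn down after cluster.sweep().
cluster = gcloud.create_cluster(
    name="my-cluster",
    machine_type="n1-standard-1",   # placeholder machine type
    zone="us-central1-a",           # placeholder zone
    reserve_ip_name="pygrid-ip",    # reserved earlier with reserve_ip
    target_size=10,                 # 10 workers + 1 master
    eviction_policy="delete",       # teardown after cluster.sweep()
)
cluster.sweep(model, hook)          # serves the model, then evicts the cluster
```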
I'd like to work on this project
Cool, @carlodavid012! I saw that you already know PySyft. I also think that basic PyGrid concepts can be useful. Feel free to chat with me if you need anything!
I am working on it.
I have no experience related to Cloud, but I am trying to learn PySyft.
I really want to contribute to this project.
Please, can you help me?
Great! There are a lot of interested people. I think you could also talk to each other about the issue. :rocket: :nerd_face:
Is there any particular channel for this project, or should I message in the GSoC Slack group?
Yess @neeravjain24! Don't worry if you're not familiar with GCloud. There's a lot of tutorials, docs and codelabs to learn. I also recommend taking a look at the PyGrid tutorial. The links are in the issue description.
GCloud links:
https://cloud.google.com/getting-started
https://codelabs.developers.google.com/cloud/
> Is there any particular channel for this project, or should I message in the GSoC Slack group?
You can use the #gsoc channel :slightly_smiling_face:
I would like to work on this project and I am willing to learn PySyft. Also, I am learning about GCP already.
I'd like to contribute to this project
Great, guys. I'm thinking of creating a slack channel for this issue. It is even better to discuss it there too. What do you think? :slightly_smiling_face:
Hey guys,
I am pursuing B.Tech from University of Delhi and am fully interested in contributing to this project. I have also started working on this.
This issue has been marked stale because it has been open 30 days with no activity. Leave a comment or remove the `stale` label to unmark it. Otherwise, this will be closed in 7 days. | 2020-08-13T14:41:48 |
|
OpenMined/PySyft | 4,031 | OpenMined__PySyft-4031 | [
"3998"
] | 7ac20bd98cf5c7acde2d96cb9af99252266cca45 | diff --git a/benchmarks/frameworks/torch/mpc/scripts/benchmark_sample_data.py b/benchmarks/frameworks/torch/mpc/scripts/benchmark_sample_data.py
new file mode 100644
--- /dev/null
+++ b/benchmarks/frameworks/torch/mpc/scripts/benchmark_sample_data.py
@@ -0,0 +1,4 @@
+# this is sample data for benchmark of sigmoid function
+
+# data format ('method_name', precision value)
+benchmark_data_sigmoid = [("chebyshev", 4), ("maclaurin", 4), ("exp", 4)]
diff --git a/benchmarks/frameworks/torch/mpc/scripts/benchmark_sigmoid.py b/benchmarks/frameworks/torch/mpc/scripts/benchmark_sigmoid.py
new file mode 100644
--- /dev/null
+++ b/benchmarks/frameworks/torch/mpc/scripts/benchmark_sigmoid.py
@@ -0,0 +1,100 @@
+import torch
+import timeit
+import matplotlib.pyplot as plt
+
+from benchmarks.frameworks.torch.mpc.scripts.workers_initialization import workers, hook
+from benchmarks.frameworks.torch.mpc.scripts.benchmark_sample_data import benchmark_data_sigmoid
+
+
+def benchmark_sigmoid(method, prec_frac, workers):
+ """
+ This function approximates the sigmoid function using a given method.
+
+ Args:
+ method (str): the name of the method for approximation
+ prec_frac (int): precision value
+ workers (dict): workers used for sharing data
+
+ Returns:
+ diff (float): the difference between the syft and the torch sigmoid values
+
+ """
+ alice, bob, james = workers["alice"], workers["bob"], workers["james"]
+
+ t = torch.tensor([1.23212])
+ t_sh = t.fix_precision(precision_fractional=prec_frac).share(alice, bob, crypto_provider=james)
+ r_sh = t_sh.sigmoid(method=method)
+ r = r_sh.get().float_prec()
+ t = t.sigmoid()
+ # Calculation of the difference between FPT and normal sigmoid (error)
+ diff = (r - t).abs().max()
+ return diff.item()
+
+
+def sigmoid_approximation_plot(benchmark_data_sigmoid):
+ """
+ This function plots the graph for the various sigmoid approximation benchmarks, namely
+ 'chebyshev', 'maclaurin', 'exp'.
+
+ Args:
+ benchmark_data_sigmoid (list): the benchmark configurations as (method, precision) pairs
+
+ Returns:
+ sigmoid_function_approximations_benchmark (png): plotted graph in the graphs directory
+ """
+
+ # initializing workers
+ worker = workers(hook())
+
+ # initializing graph plot
+ fig, ax1 = plt.subplots()
+ ax2 = ax1.twinx()
+
+ # list for handling graph data
+ x_data = []
+ y_data = []
+ y2_data = []
+
+ for data in benchmark_data_sigmoid:
+
+ # getting value from benchmark_data_sigmoid
+ method, prec_frac = data
+
+ for precision_value in range(1, (prec_frac + 1)):
+
+ # temporary list for calculating average execution time and error
+ temp_time_taken = []
+ temp_error = []
+
+ for i in range(10):
+ start_time = timeit.default_timer()
+ error = benchmark_sigmoid(method, precision_value, worker)
+ time_taken = timeit.default_timer() - start_time
+ temp_time_taken.append(time_taken)
+ temp_error.append(error)
+
+ final_time_taken = sum(temp_time_taken) / len(temp_time_taken)
+ final_time_taken *= 1000
+ final_error = sum(temp_error) / len(temp_error)
+ x_data.append(precision_value)
+ y_data.append(final_time_taken)
+ y2_data.append(final_error)
+
+ ax1.plot(x_data, y_data, label=method, linestyle="-")
+ ax2.plot(x_data, y2_data, label=method, linestyle="--")
+ x_data.clear()
+ y_data.clear()
+ y2_data.clear()
+
+ # plotting of the data
+ ax1.set_xlabel("Precision Value")
+ ax1.set_ylabel("Execution Time (ms)")
+ ax2.set_ylabel("Error")
+ ax1.legend(bbox_to_anchor=(1, 1.3), loc="upper right", title="Method", fontsize="small")
+ ax2.legend(bbox_to_anchor=(0, 1.3), loc="upper left", title="Error", fontsize="small")
+ plt.tight_layout()
+ plt.savefig("../graphs/sigmoid_function_approximations_benchmark.png")
+
+
+# calling sigmoid_approximation_plot function
+sigmoid_approximation_plot(benchmark_data_sigmoid)
diff --git a/benchmarks/frameworks/torch/mpc/scripts/workers_initialization.py b/benchmarks/frameworks/torch/mpc/scripts/workers_initialization.py
new file mode 100644
--- /dev/null
+++ b/benchmarks/frameworks/torch/mpc/scripts/workers_initialization.py
@@ -0,0 +1,35 @@
+import torch
+import syft
+from syft import TorchHook
+from syft.generic.frameworks.hook import hook_args
+
+
+def hook():
+ hook = TorchHook(torch)
+ return hook
+
+
+def workers(hook):
+ """
+ This function defines virtual workers to be used in benchmarking functions.
+ """
+
+ # Reset the hook and the local worker
+ syft.local_worker.clear_objects()
+ hook_args.hook_method_args_functions = {}
+ hook_args.hook_method_response_functions = {}
+ hook_args.register_response_functions = {}
+ hook_args.get_tensor_type_functions = {}
+
+ # Define virtual workers
+ alice = syft.VirtualWorker(id="alice", hook=hook, is_client_worker=False)
+ bob = syft.VirtualWorker(id="bob", hook=hook, is_client_worker=False)
+ james = syft.VirtualWorker(id="james", hook=hook, is_client_worker=False)
+
+ workers = {
+ "me": hook.local_worker,
+ "alice": alice,
+ "bob": bob,
+ "james": james,
+ }
+ return workers
| Benchmark sigmoid function approximation methods
## What?
Benchmark the ```sigmoid``` function from the ```FPT``` (Fixed Precision Tensor) using the different approximations implemented: ```exp```, ```chebyshev```, ```maclaurin```
The final output should be a graph or graphs (png images) where:
* X-axis - represents the precision used
* Y-axis - the time to compute the approximation
We can have multiple lines, each one representing an approximation method.
Some more possibilities:
* Y-axis (or Y2-Axis) - delta (difference) between the values - our method vs the one implemented in ```torch```
* X-axis - number of iterations used for the Chebyshev or the number of terms
See the [Epic](https://github.com/OpenMined/PySyft/issues/3997) for more details regarding where to place the graphs.
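A minimal sketch of one measurement, using the `benchmark_sigmoid` helper from the patch above (the `worker` dict comes from the `workers` initialization helper, also in the patch):
```python
# Sketch: time one method at one precision and report the error vs torch.
import timeit

start = timeit.default_timer()
error = benchmark_sigmoid("chebyshev", 3, worker)
elapsed_ms = (timeit.default_timer() - start) * 1000
print(f"chebyshev @ precision 3: {elapsed_ms:.2f} ms, max error {error:.5f}")
```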
| Hi @gmuraru, I would like to work on this!
Done :D | 2020-08-17T17:05:25 |
|
OpenMined/PySyft | 4,035 | OpenMined__PySyft-4035 | [
"3982"
] | 9e32521888192390e99363a0da2ed4b46a39a9e1 | diff --git a/syft/frameworks/torch/tensors/interpreters/native.py b/syft/frameworks/torch/tensors/interpreters/native.py
--- a/syft/frameworks/torch/tensors/interpreters/native.py
+++ b/syft/frameworks/torch/tensors/interpreters/native.py
@@ -945,7 +945,7 @@ def share(
shared_tensor = syft.AutogradTensor().on(shared_tensor, wrap=False)
if not no_wrap:
- shared_tensor = shared_tensor.wrap()
+ shared_tensor = shared_tensor.wrap(type=self.dtype)
return shared_tensor
| diff --git a/test/torch/tensors/test_additive_shared.py b/test/torch/tensors/test_additive_shared.py
--- a/test/torch/tensors/test_additive_shared.py
+++ b/test/torch/tensors/test_additive_shared.py
@@ -41,6 +41,7 @@ def test_share_get(workers, protocol, dtype, n_workers):
t = torch.tensor([1, 2, 3])
x = t.share(*share_holders[:n_workers], **kwargs)
+ assert t.dtype == x.dtype
x = x.get()
assert (x == t).all()
| Invalid dtype returned when MPC is applied to other dtype tensors
## Description
When MPC sharing is applied to an int tensor, the result should stay int, but a float dtype is returned.
## How to Reproduce
```python
x = torch.tensor([1, 2, 3])
print(x.dtype) # torch.int64
x = x.share(bob, alice, crypto_provider=theo)
print(x.dtype) # torch.float32 # should be torch.int64
print(x.get().dtype) # torch.int64
```
## Expected Behavior
should be `torch.int64`
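With the one-line fix above (`wrap(type=self.dtype)`), a quick check (workers set up as in the reproduction):
```python
# Sketch: the shared wrapper now reports the original dtype.
x = torch.tensor([1, 2, 3]).share(bob, alice, crypto_provider=theo)
assert x.dtype == torch.int64
assert x.get().dtype == torch.int64
```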
## System Information
- OS: macOS
- OS Version: Catalina
- Language Version: Python3.7
- Package Manager Version: Conda 4.8.3
| 2020-08-18T12:19:03 |
|
OpenMined/PySyft | 4,044 | OpenMined__PySyft-4044 | [
"4036"
] | 8f5bd18330b744d1a23e655732ae932b354341b0 | diff --git a/syft/frameworks/torch/tensors/interpreters/precision.py b/syft/frameworks/torch/tensors/interpreters/precision.py
--- a/syft/frameworks/torch/tensors/interpreters/precision.py
+++ b/syft/frameworks/torch/tensors/interpreters/precision.py
@@ -559,7 +559,7 @@ def _sigmoid_exp(tensor):
x = tensor * sign
ones = tensor * 0 + 1
half = ones.div(2)
- result = (ones + (-ones * x).exp()).reciprocal()
+ result = (ones + (-ones * x).exp()).reciprocal(method="division")
return (result - half) * sign + half
@staticmethod
| Flaky exp test
## Description
The tolerance for the ```exp``` needs to be increased since it is failing randomly (because of the random order in which we run the tests).
## How to Reproduce
```
____________________ test_torch_sigmoid_approx[exp-3-0.065] ____________________
method = 'exp', prec_frac = 3, tolerance = 0.065
workers = {'alice': <VirtualWorker id:alice #objects:1>, 'bob': <VirtualWorker id:bob #objects:1>, 'charlie': <VirtualWorker id:charlie #objects:0>, 'james': <VirtualWorker id:james #objects:0>, ...}
@pytest.mark.parametrize(
"method, prec_frac, tolerance",
[
("chebyshev", 3, 6 / 100),
("chebyshev", 4, 1 / 1000),
("exp", 3, 6.5 / 100),
("exp", 4, 1 / 100),
("maclaurin", 3, 7 / 100),
("maclaurin", 4, 15 / 100),
],
)
def test_torch_sigmoid_approx(method, prec_frac, tolerance, workers):
"""
Test the approximate sigmoid with different tolerance depending on
the precision_fractional considered
"""
alice, bob, james = workers["alice"], workers["bob"], workers["james"]
t = torch.tensor(range(-10, 10)) * 0.5
t_sh = t.fix_precision(precision_fractional=prec_frac).share(alice, bob, crypto_provider=james)
r_sh = t_sh.sigmoid(method=method)
r = r_sh.get().float_prec()
t = t.sigmoid()
diff = (r - t).abs().max()
norm = (r + t).abs().max() / 2
> assert (diff / (tolerance * norm)) < 1
E assert (tensor(1.2798e+13) / (0.065 * tensor(6.3991e+12))) < 1
```
## Expected Behavior
The test should be passing
## Screenshots
If applicable, add screenshots to help explain your problem.
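For reference, the fix above only pins the reciprocal inside `_sigmoid_exp` to the exact `division` method; the identity that function evaluates can be checked in plain torch (sketch):
```python
import torch

def sigmoid_exp(t):
    sign = t.sign()
    x = t * sign                            # |t|
    result = (1 + (-x).exp()).reciprocal()  # sigmoid(|t|)
    return (result - 0.5) * sign + 0.5      # reflect back for negative t

t = torch.linspace(-5, 5, 11)
assert torch.allclose(sigmoid_exp(t), t.sigmoid())
```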
| Hey @gmuraru I would like to work on this!
Assigned it to you! But I think this will require more research since it seems that the value is pretty high and probably we do something behind the scenes which breaks the ```exp```. A simple increase in the tolerance would not do the job. | 2020-08-19T14:28:04
|
OpenMined/PySyft | 4,064 | OpenMined__PySyft-4064 | [
"4000"
] | 1868f268cefa53d21863e40f48bfb4415d1fc79a | diff --git a/benchmarks/frameworks/torch/mpc/scripts/benchmark_ast.py b/benchmarks/frameworks/torch/mpc/scripts/benchmark_ast.py
new file mode 100644
--- /dev/null
+++ b/benchmarks/frameworks/torch/mpc/scripts/benchmark_ast.py
@@ -0,0 +1,381 @@
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import timeit
+import syft
+import matplotlib.pyplot as plt
+
+from benchmarks.frameworks.torch.mpc.scripts.workers_initialization import workers, hook
+from benchmarks.frameworks.torch.mpc.scripts.benchmark_sample_data import (
+ b_data_share_get,
+ b_data_max_pool2d,
+ b_data_avg_pool2d,
+ b_data_batch_norm,
+)
+
+
+def benchmark_share_get(workers, protocol, dtype, n_workers):
+ """
+    This function benchmarks the share/get functions.
+
+ Args:
+ workers (dict): workers used for sharing data
+ protocol (str): the name of the protocol
+ dtype (int): data type
+ n_workers (int): number of workers
+
+ """
+ alice, bob, charlie, james = (
+ workers["alice"],
+ workers["bob"],
+ workers["charlie"],
+ workers["james"],
+ )
+
+ share_holders = [alice, bob, charlie]
+ kwargs = dict(protocol=protocol, crypto_provider=james, dtype=dtype)
+
+ t = torch.tensor([1, 2, 3])
+
+ x = t.share(*share_holders[:n_workers], **kwargs)
+ x = x.get()
+
+
+def benchmark_share_get_plot(b_data_share_get):
+ """
+ This function plots the graph for various protocols benchmarks for additive
+ shared tensors
+
+ Args:
+        b_data_share_get (list): the (dtype, n_workers) configurations to benchmark
+
+ Returns:
+ benchmark_share_get.png (png): plotted graph in graph/ast_benchmarks directory
+ """
+ # initializing workers
+ worker = workers(hook())
+
+ # available protocols
+ protocols = ["snn", "fss"]
+
+ # initializing graph plot
+ fig, ax = plt.subplots()
+
+ for protocol in protocols:
+
+ # list for handling graph data
+ x_data = []
+ y_data = []
+
+ for data in b_data_share_get:
+
+ # getting value from b_data_share_get
+ dtype, n_workers = data
+
+ # temporary list for calculating average execution time and error
+ temp_time_taken = []
+
+ for i in range(10):
+ start_time = timeit.default_timer()
+ benchmark_share_get(worker, protocol, dtype, n_workers)
+ time_taken = timeit.default_timer() - start_time
+ temp_time_taken.append(time_taken)
+
+ final_time_taken = sum(temp_time_taken) / len(temp_time_taken)
+ final_time_taken *= 1000
+ x_data.append(dtype + str(" / ") + str(n_workers))
+ y_data.append(final_time_taken)
+
+ ax.plot(x_data, y_data, label=protocol, linestyle="-")
+ x_data.clear()
+ y_data.clear()
+
+ ax.set_xlabel("dtype / n_workers")
+ ax.set_ylabel("Execution Time (ms)")
+ ax.legend(bbox_to_anchor=(1, 1.22), loc="upper right", title="Protocols", fontsize="small")
+ plt.tight_layout()
+ plt.savefig("../graphs/ast_benchmarks/benchmark_share_get.png")
+ # plt.show()
+
+
+# calling benchmark_share_get_plot function
+benchmark_share_get_plot(b_data_share_get)
+
+
+def benchmark_max_pool2d(workers, protocol, prec_frac):
+ """
+    This function benchmarks the max_pool2d function.
+
+ Args:
+ workers (dict): workers used for sharing data
+ protocol (str): the name of the protocol
+ prec_frac (int): the precision value (upper limit)
+ """
+
+ me, alice, bob, crypto_provider = (
+ workers["me"],
+ workers["alice"],
+ workers["bob"],
+ workers["james"],
+ )
+
+ args = (alice, bob)
+ kwargs = dict(crypto_provider=crypto_provider, protocol=protocol)
+
+ m = 4
+ t = torch.tensor(list(range(3 * 7 * m * m))).float().reshape(3, 7, m, m)
+ x = t.fix_prec(precision_fractional=prec_frac).share(*args, **kwargs)
+
+ # using maxpool optimization for kernel_size=2
+ expected = F.max_pool2d(t, kernel_size=2)
+ result = F.max_pool2d(x, kernel_size=2).get().float_prec()
+
+ # # without
+ # expected = F.max_pool2d(t, kernel_size=3)
+ # result = F.max_pool2d(x, kernel_size=3).get().float_prec()
+
+
+def benchmark_max_pool2d_plot(b_data_max_pool2d):
+ """
+ This function plots the graph for various protocols benchmarks for
+ max_pool2d.
+
+ Args:
+        b_data_max_pool2d (list): list of protocols to benchmark
+
+ Returns:
+ benchmark_max_pool2d.png (png): plotted graph in graph/ast_benchmarks directory
+ """
+
+ # initializing workers
+ worker = workers(hook())
+
+ # getting data (protocols)
+ protocols = b_data_max_pool2d
+
+ # initializing graph plot
+ fig, ax = plt.subplots()
+
+ for protocol in protocols:
+
+ # list for handling graph data
+ x_data = []
+ y_data = []
+
+ for prec_frac in range(1, 5):
+ temp_time_taken = []
+
+ for i in range(10):
+ start_time = timeit.default_timer()
+ benchmark_max_pool2d(worker, protocol, prec_frac)
+ time_taken = timeit.default_timer() - start_time
+ temp_time_taken.append(time_taken)
+
+ final_time_taken = sum(temp_time_taken) / len(temp_time_taken)
+ final_time_taken *= 1000
+ y_data.append(final_time_taken)
+ x_data.append(prec_frac)
+
+ ax.plot(x_data, y_data, label=protocol, linestyle="-")
+ x_data.clear()
+ y_data.clear()
+
+ # plotting of the data
+ plt.title("Benchmark max_pool2d")
+ ax.set_xlabel("Precision Value")
+ ax.set_ylabel("Execution Time (ms)")
+ ax.legend(bbox_to_anchor=(1, 1.3), loc="upper right", title="Protocol", fontsize="small")
+ plt.tight_layout()
+ plt.savefig("../graphs/ast_benchmarks/benchmark_max_pool2d.png")
+
+
+# calling benchmark_max_pool2d_plot
+benchmark_max_pool2d_plot(b_data_max_pool2d)
+
+
+def benchmark_avg_pool2d(workers, protocol, prec_frac):
+ """
+    This function benchmarks the avg_pool2d function.
+
+ Args:
+ workers (dict): workers used for sharing data
+ protocol (str): the name of the protocol
+ prec_frac (int): the precision value (upper limit)
+ """
+
+ me, alice, bob, crypto_provider = (
+ workers["me"],
+ workers["alice"],
+ workers["bob"],
+ workers["james"],
+ )
+
+ args = (alice, bob)
+ kwargs = dict(crypto_provider=crypto_provider, protocol=protocol)
+
+ m = 4
+ t = torch.tensor(list(range(3 * 7 * m * m))).float().reshape(3, 7, m, m)
+ x = t.fix_prec(precision_fractional=prec_frac).share(*args, **kwargs)
+
+ # using maxpool optimization for kernel_size=2
+ expected = F.avg_pool2d(t, kernel_size=2)
+ result = F.avg_pool2d(x, kernel_size=2).get().float_prec()
+
+ # # without
+ # expected = F.avg_pool2d(t, kernel_size=3)
+ # result = F.avg_pool2d(x, kernel_size=3).get().float_prec()
+
+
+def benchmark_avg_pool2d_plot(b_data_avg_pool2d):
+ """
+ This function plots the graph for various protocols benchmarks for
+ avg_pool2d.
+
+ Args:
+        b_data_avg_pool2d (list): list of protocols to benchmark
+
+ Returns:
+ benchmark_avg_pool2d.png (png): plotted graph in graph/ast_benchmarks directory
+ """
+
+ # initializing workers
+ worker = workers(hook())
+
+ # getting data (protocols)
+ protocols = b_data_avg_pool2d
+
+ # initializing graph plot
+ fig, ax = plt.subplots()
+
+ for protocol in protocols:
+
+ # list for handling graph data
+ x_data = []
+ y_data = []
+
+ for prec_frac in range(1, 5):
+ temp_time_taken = []
+
+ for i in range(10):
+ start_time = timeit.default_timer()
+ benchmark_avg_pool2d(worker, protocol, prec_frac)
+ time_taken = timeit.default_timer() - start_time
+ temp_time_taken.append(time_taken)
+
+ final_time_taken = sum(temp_time_taken) / len(temp_time_taken)
+ final_time_taken *= 1000
+ y_data.append(final_time_taken)
+ x_data.append(prec_frac)
+
+ ax.plot(x_data, y_data, label=protocol, linestyle="-")
+ x_data.clear()
+ y_data.clear()
+
+ # plotting of the data
+ plt.title("Benchmark avg_pool2d")
+ ax.set_xlabel("Precision Value")
+ ax.set_ylabel("Execution Time (ms)")
+ ax.legend(bbox_to_anchor=(1, 1.3), loc="upper right", title="Protocol", fontsize="small")
+ plt.tight_layout()
+ plt.savefig("../graphs/ast_benchmarks/benchmark_avg_pool2d.png")
+ # plt.show()
+
+
+# calling benchmark_avg_pool2d_plot
+benchmark_avg_pool2d_plot(b_data_avg_pool2d)
+
+
+def benchmark_batch_norm(workers, protocol, training, prec_frac):
+ """
+ This function benchmarks batch_norm function.
+
+ Args:
+ workers (dict): workers used for sharing data
+ protocol (str): the name of the protocol
+ training (bool): training or eval mode
+ prec_frac (int): the precision value (upper limit)
+ """
+
+ me, alice, bob, crypto_provider = (
+ workers["me"],
+ workers["alice"],
+ workers["bob"],
+ workers["james"],
+ )
+
+ args = (alice, bob)
+ syft.local_worker.clients = args
+ kwargs = dict(crypto_provider=crypto_provider, protocol=protocol)
+
+ model = nn.BatchNorm2d(4, momentum=0)
+ if training:
+ model.train()
+ else:
+ model.eval()
+
+ x = torch.rand(1, 4, 5, 5)
+ expected = model(x)
+
+ model.fix_prec(precision_fractional=prec_frac).share(*args, **kwargs)
+ x = x.fix_prec(precision_fractional=prec_frac).share(*args, **kwargs)
+ y = model(x)
+ predicted = y.get().float_prec()
+
+
+def benchmark_batch_norm_plot(b_data_batch_norm):
+ """
+ This function plots the graph for various protocols benchmarks for
+ batch_norm.
+
+ Args:
+        b_data_batch_norm (list): list of protocols to benchmark
+
+ Returns:
+ benchmark_batch_norm.png (png): plotted graph in graph/ast_benchmarks directory
+ """
+
+ # initializing workers
+ worker = workers(hook())
+
+ # getting data (protocols)
+ protocols = b_data_batch_norm
+
+ # initializing graph plot
+ fig, ax = plt.subplots()
+
+ for protocol in protocols:
+
+ # list for handling graph data
+ x_data = []
+ y_data = []
+
+ for prec_frac in range(1, 5):
+ temp_time_taken = []
+
+ for i in range(10):
+ start_time = timeit.default_timer()
+ benchmark_batch_norm(worker, protocol, True, prec_frac)
+ time_taken = timeit.default_timer() - start_time
+ temp_time_taken.append(time_taken)
+
+ final_time_taken = sum(temp_time_taken) / len(temp_time_taken)
+ final_time_taken *= 1000
+ y_data.append(final_time_taken)
+ x_data.append(prec_frac)
+
+ ax.plot(x_data, y_data, label=protocol, linestyle="-")
+ x_data.clear()
+ y_data.clear()
+
+ # plotting of the data
+ plt.title("benchmark_batch_norm")
+ ax.set_xlabel("Precision Value")
+ ax.set_ylabel("Execution Time (ms)")
+ ax.legend(bbox_to_anchor=(1, 1.3), loc="upper right", title="Protocol", fontsize="small")
+ plt.tight_layout()
+ plt.savefig("../graphs/ast_benchmarks/benchmark_batch_norm.png")
+ # plt.show()
+
+
+# calling benchmark_batch_norm_plot
+benchmark_batch_norm_plot(b_data_batch_norm)
diff --git a/benchmarks/frameworks/torch/mpc/scripts/benchmark_sample_data.py b/benchmarks/frameworks/torch/mpc/scripts/benchmark_sample_data.py
--- a/benchmarks/frameworks/torch/mpc/scripts/benchmark_sample_data.py
+++ b/benchmarks/frameworks/torch/mpc/scripts/benchmark_sample_data.py
@@ -7,3 +7,19 @@
benchmark_data_sigmoid = [("chebyshev", 4), ("maclaurin", 4), ("exp", 4)]
benchmark_data_tanh = [("chebyshev", 4), ("sigmoid", 4)]
+
+############################################################
+# Benchmark Data Additive Sharing Tensors #
+###########################################################
+
+# data format for benchmark_share_get_plot: (dtype, n_workers)
+b_data_share_get = [("int", 2), ("long", 2), ("int", 3), ("long", 3)]
+
+# data format for benchmark_max_pool2d_plot: (list) (['protocols'])
+b_data_max_pool2d = ["snn", "fss"]
+
+# data format for benchmark_avg_pool2d_plot: (list) (['protocols'])
+b_data_avg_pool2d = ["snn", "fss"]
+
+# data format for benchmark_batch_norm_plot: (list) (['protocols'])
+b_data_batch_norm = ["snn", "fss"]
diff --git a/benchmarks/frameworks/torch/mpc/scripts/workers_initialization.py b/benchmarks/frameworks/torch/mpc/scripts/workers_initialization.py
--- a/benchmarks/frameworks/torch/mpc/scripts/workers_initialization.py
+++ b/benchmarks/frameworks/torch/mpc/scripts/workers_initialization.py
@@ -25,11 +25,12 @@ def workers(hook):
alice = syft.VirtualWorker(id="alice", hook=hook, is_client_worker=False)
bob = syft.VirtualWorker(id="bob", hook=hook, is_client_worker=False)
james = syft.VirtualWorker(id="james", hook=hook, is_client_worker=False)
-
+ charlie = syft.VirtualWorker(id="charlie", hook=hook, is_client_worker=False)
workers = {
"me": hook.local_worker,
"alice": alice,
"bob": bob,
+ "charlie": charlie,
"james": james,
}
return workers
| Benchmark AST operations
## What?
Benchmark the ```AST``` operations using the already implemented protocols (```fss``` and ```snn```) - some candidate operations are ```relu```, ```conv2d```, ```batchnorm```.
A good idea for a more real-life scenario would be to use ```GridNodes``` from [PyGrid](https://github.com/OpenMined/PyGrid)
But, we can start with ```VirtualWorkers``` and then move to the Grid.
The final output should be a graph or graphs (png images) where:
* X-axis - the precision used for the ```FPT```
* Y-axis - the time to compute the operation
We can have multiple lines, each one representing an approximation method.
Some more possibilities:
* Y-axis (or Y2-Axis) - delta (difference) between the values - our method vs the one implemented in ```torch```
* X-axis - number of iterations used for the Chebyshev or the number of terms
See the [Epic](https://github.com/OpenMined/PySyft/issues/3997) to check more details regarding where to place the graphs.
| I'll take this up :) | 2020-08-22T10:56:20 |
|
OpenMined/PySyft | 4,065 | OpenMined__PySyft-4065 | [
"4048"
] | 7fe9f79ebbc2df5ee7990c0f1fd80e8afcda02fc | diff --git a/syft/frameworks/torch/tensors/interpreters/precision.py b/syft/frameworks/torch/tensors/interpreters/precision.py
--- a/syft/frameworks/torch/tensors/interpreters/precision.py
+++ b/syft/frameworks/torch/tensors/interpreters/precision.py
@@ -468,6 +468,19 @@ def matmul(self, *args, **kwargs):
__matmul__ = matmul
mm = matmul
+ def signum(self):
+ """
+ Calculation of signum function for a given tensor
+ """
+ sgn = (self > 0) - (self < 0)
+ return sgn
+
+ def modulus(self):
+ """
+ Calculation of modulus for a given tensor
+ """
+ return self.signum() * self
+
def reciprocal(self, method="NR", nr_iters=10):
r"""
Calculate the reciprocal using the algorithm specified in the method args.
@@ -488,15 +501,17 @@ def reciprocal(self, method="NR", nr_iters=10):
"""
if method.lower() == "nr":
- result = 3 * (0.5 - self).exp() + 0.003
+ new_self = self.modulus()
+ result = 3 * (0.5 - new_self).exp() + 0.003
for i in range(nr_iters):
- result = 2 * result - result * result * self
- return result
+ result = 2 * result - result * result * new_self
+ return result * self.signum()
elif method.lower() == "division":
ones = self * 0 + 1
return ones / self
elif method.lower() == "log":
- return (-self.log()).exp()
+ new_self = self.modulus()
+ return (-new_self.log()).exp() * self.signum()
else:
raise ValueError(f"Invalid method {method} given for reciprocal function")
| diff --git a/test/torch/tensors/test_precision.py b/test/torch/tensors/test_precision.py
--- a/test/torch/tensors/test_precision.py
+++ b/test/torch/tensors/test_precision.py
@@ -106,7 +106,7 @@ def test_methods_for_linear_module(method, parameter):
def test_reciprocal(workers):
bob, alice, james = (workers["bob"], workers["alice"], workers["james"])
- tensor = torch.tensor([1.0, 2.0, 3.0])
+ tensor = torch.tensor([-2.0, 6.0, 2.0, 3.0, -5.0, -0.5])
x = tensor.fix_prec()
result = x.reciprocal(method="division").float_prec()
assert torch.isclose(tensor.reciprocal(), result, rtol=1e-2).all()
| Reciprocal test enhancement: negative numbers
## Description
Test the ```reciprocal``` method from precision using negative numbers
## Expected Behavior
All tests are passing
## Screenshots
If applicable, add screenshots to help explain your problem.
## Additional Context
It might require changes to the reciprocal method
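The patch above handles this with a sign-symmetry trick: compute the reciprocal on the modulus, where the NR and log approximations are valid, then fold the sign back in. The idea in plain torch (sketch):
```python
import torch

x = torch.tensor([-2.0, 6.0, 2.0, 3.0, -5.0, -0.5])
sign = (x > 0).float() - (x < 0).float()   # signum, as in the patch
recip = sign * (sign * x).reciprocal()     # 1 / |x|, re-signed
assert torch.allclose(recip, x.reciprocal())
```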
| I guess the Log and NR methods are a problem. Division works fine. Will look into the issue
Adding you :)
> I guess the Log and NR methods are a problem. Division works fine. Will look into the issue
Also, it might be worth checking out how [CrypTen](https://github.com/facebookresearch/CrypTen) handles this - whether they take negative values into consideration.
If not, one idea (it might not be the greatest) is to use symmetry. | 2020-08-22T13:44:20 |
OpenMined/PySyft | 4,708 | OpenMined__PySyft-4708 | [
"4677"
] | 0d601f9bcb41dd6aaed16cf39dd75f6f0fb63930 | diff --git a/src/syft/lib/torch/__init__.py b/src/syft/lib/torch/__init__.py
--- a/src/syft/lib/torch/__init__.py
+++ b/src/syft/lib/torch/__init__.py
@@ -12,7 +12,7 @@
from ...ast.globals import Globals
from .allowlist import allowlist
-TORCH_VERSION = version.parse(torch.__version__)
+TORCH_VERSION = version.parse(torch.__version__.split("+")[0])
def get_return_type(support_dict: Union[str, Dict[str, str]]) -> str:
| diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml
--- a/.github/workflows/tests.yml
+++ b/.github/workflows/tests.yml
@@ -8,19 +8,21 @@ on:
paths:
- "**.py"
- "setup.cfg"
+ - ".github/workflows/tests.yml"
pull_request:
types: [opened, synchronize, reopened]
paths:
- "**.py"
- "setup.cfg"
+ - ".github/workflows/tests.yml"
jobs:
python-tests:
strategy:
max-parallel: 24
matrix:
- os: [ubuntu-latest, macos-latest]
+ os: [ubuntu-latest, macos-latest, windows-latest]
python-version: [3.8, 3.7, 3.6]
torch-version: [1.5.0, 1.5.1, 1.6.0]
@@ -49,6 +51,14 @@ jobs:
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}-
+ - uses: actions/cache@v2
+ if: startsWith(runner.os, 'Windows')
+ with:
+ path: '%LOCALAPPDATA%\pip\Cache'
+ key: ${{ runner.os }}-pip-${{ matrix.python-version }}-${{ hashFiles('**/requirements.txt') }}
+ restore-keys: |
+ ${{ runner.os }}-pip-${{ matrix.python-version }}-
+
- name: Cache packages
uses: actions/cache@v2
id: cache-reqs
@@ -61,7 +71,8 @@ jobs:
pip install bandit
bandit -r src -ll
- - name: Run normal tests without coverage
+ - name: Install pytorch Linux/MacOS
+ if: startsWith(runner.os, 'Windows') != true
env:
TORCH_VERSION: ${{ matrix.torch-version }}
run: |
@@ -75,8 +86,25 @@ jobs:
then
TORCHVISION_VERSION="0.7"
fi
- pip install torch==$TORCH_VERSION
+ pip install torch==${TORCH_VERSION}
pip install torchvision==${TORCHVISION_VERSION}
+
+ - name: Install pytorch Windows
+ if: startsWith(runner.os, 'Windows')
+ env:
+ TORCH_VERSION: ${{ matrix.torch-version }}
+ run: |
+ If ($env:TORCH_VERSION -eq "1.5.0") {
+ $env:TORCHVISION_VERSION="0.6.0"
+ } Elseif ( $env:TORCH_VERSION -eq "1.5.1" ) {
+ $env:TORCHVISION_VERSION="0.6.1"
+ } Elseif ($env:TORCH_VERSION -eq "1.6.0") {
+ $env:TORCHVISION_VERSION="0.7"
+ }
+ pip install torch==$env:TORCH_VERSION+cpu torchvision==$env:TORCHVISION_VERSION+cpu -f https://download.pytorch.org/whl/torch_stable.html
+
+ - name: Run normal tests without coverage
+ run: |
pip install -r requirements.txt
pip install -e .
pip freeze | grep torch
@@ -105,14 +133,6 @@ jobs:
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}-
- - uses: actions/cache@v2
- if: startsWith(runner.os, 'macOS')
- with:
- path: ~/Library/Caches/pip
- key: ${{ runner.os }}-pip-${{ matrix.python-version }}-${{ hashFiles('**/requirements.txt') }}
- restore-keys: |
- ${{ runner.os }}-pip-${{ matrix.python-version }}-
-
- name: Cache packages
uses: actions/cache@v2
id: cache-reqs
@@ -150,21 +170,12 @@ jobs:
python-version: ${{ matrix.python-version }}
- uses: actions/cache@v2
- if: startsWith(runner.os, 'Linux')
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}-${{ hashFiles('**/requirements.txt') }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}-
- - uses: actions/cache@v2
- if: startsWith(runner.os, 'macOS')
- with:
- path: ~/Library/Caches/pip
- key: ${{ runner.os }}-pip-${{ matrix.python-version }}-${{ hashFiles('**/requirements.txt') }}
- restore-keys: |
- ${{ runner.os }}-pip-${{ matrix.python-version }}-
-
- name: Cache packages
uses: actions/cache@v2
id: cache-reqs
@@ -183,7 +194,7 @@ jobs:
strategy:
max-parallel: 24
matrix:
- os: [ubuntu-latest, macos-latest]
+ os: [ubuntu-latest, macos-latest, windows-latest]
python-version: [3.8, 3.7, 3.6]
torch-version: [1.5.0, 1.5.1, 1.6.0]
diff --git a/tests/syft/grid/connections/webrtc_test.py b/tests/syft/grid/connections/webrtc_test.py
--- a/tests/syft/grid/connections/webrtc_test.py
+++ b/tests/syft/grid/connections/webrtc_test.py
@@ -20,7 +20,8 @@ def get_signing_key() -> SigningKey:
return SigningKey(bytes.fromhex(key))
-def test_init_without_event_loop() -> None:
[email protected]
+async def test_init_without_event_loop() -> None:
nest_asyncio.apply()
domain = Domain(name="test")
diff --git a/tests/syft/lib/allowlist_report.py b/tests/syft/lib/allowlist_report.py
--- a/tests/syft/lib/allowlist_report.py
+++ b/tests/syft/lib/allowlist_report.py
@@ -25,7 +25,7 @@
# syft absolute
from syft.lib.torch import allowlist # noqa: E402
-TORCH_VERSION = version.parse(th.__version__)
+TORCH_VERSION = version.parse(th.__version__.split("+")[0])
py_ver = sys.version_info
PYTHON_VERSION = version.parse(f"{py_ver.major}.{py_ver.minor}")
OS_NAME = platform.system().lower()
diff --git a/tests/syft/lib/allowlist_test.py b/tests/syft/lib/allowlist_test.py
--- a/tests/syft/lib/allowlist_test.py
+++ b/tests/syft/lib/allowlist_test.py
@@ -33,7 +33,7 @@
from syft.lib.torch.tensor_util import TORCH_STR_DTYPE
from syft.lib.util import full_name_with_qualname
-TORCH_VERSION = version.parse(th.__version__)
+TORCH_VERSION = version.parse(th.__version__.split("+")[0])
py_ver = sys.version_info
PYTHON_VERSION = version.parse(f"{py_ver.major}.{py_ver.minor}")
OS_NAME = platform.system().lower()
| Add Windows to CI
## Description
Add Windows to the CI tests as a separate step for, say, Python 3.8 and torch==1.6.0 initially, just to get things working. Then, if it works, expand to all versions to see any potential issues.
## Definition of Done
This ticket is done when we know what does and doesn't run on Windows in CI from the current "fast" tests and the new "slow" tests. Post a screenshot and link to CI here when it's running.
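One subtlety the patch also fixes: Windows CPU wheels report a local version tag (e.g. `1.6.0+cpu`), which breaks naive version comparisons - hence the `split("+")[0]` before parsing. A quick sketch of why:
```python
from packaging import version

assert version.parse("1.6.0+cpu") != version.parse("1.6.0")
assert version.parse("1.6.0+cpu".split("+")[0]) == version.parse("1.6.0")
```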
| 2020-10-25T17:49:30 |
|
OpenMined/PySyft | 4,752 | OpenMined__PySyft-4752 | [
"4705"
] | eb4d348e19ee0356343a841e19f8df978cf9170f | diff --git a/syft/frameworks/torch/tensors/interpreters/additive_shared.py b/syft/frameworks/torch/tensors/interpreters/additive_shared.py
--- a/syft/frameworks/torch/tensors/interpreters/additive_shared.py
+++ b/syft/frameworks/torch/tensors/interpreters/additive_shared.py
@@ -929,7 +929,7 @@ def __gt__(self, other):
@crypto_protocol("fss")
def __gt__(self, other):
- return (other + 1) <= self
+ return -self <= -(other + 1)
def ge(self, other):
return (self - other).positive()
| diff --git a/test/torch/tensors/test_additive_shared.py b/test/torch/tensors/test_additive_shared.py
--- a/test/torch/tensors/test_additive_shared.py
+++ b/test/torch/tensors/test_additive_shared.py
@@ -1325,3 +1325,62 @@ def test_garbage_collect_mul(workers):
assert len(alice.object_store._objects) == 3
assert len(bob.object_store._objects) == 3
+
+
[email protected]("protocol", ["fss"])
[email protected]("force_preprocessing", [True, False])
+def test_comp_ast_fpt(workers, protocol, force_preprocessing):
+ me, alice, bob, crypto_provider = (
+ workers["me"],
+ workers["alice"],
+ workers["bob"],
+ workers["james"],
+ )
+
+ if force_preprocessing:
+ me.crypto_store.provide_primitives("fss_comp", [alice, bob], n_instances=50)
+
+ args = (alice, bob)
+ kwargs = {"protocol": protocol, "crypto_provider": crypto_provider}
+
+ # for x as AST and y as FPT
+ # we currently support this set of operation only for fss protocol
+ t1 = torch.tensor([-2.1, 1.8])
+ t2 = torch.tensor([-3.1, 0.3])
+ x = t1.fix_prec().share(*args, **kwargs)
+ y = t2.fix_prec()
+
+ assert ((x >= y).get().float_prec() == (t1 >= t2)).all()
+ assert ((x <= y).get().float_prec() == (t1 <= t2)).all()
+ assert ((x > y).get().float_prec() == (t1 > t2)).all()
+ assert ((x < y).get().float_prec() == (t1 < t2)).all()
+
+ t1 = torch.tensor([[-2.1, 1.8], [-1.1, -0.7]])
+ t2 = torch.tensor([[-3.1, 0.3], [-1.1, 0.3]])
+ x = t1.fix_prec().share(*args, **kwargs)
+ y = t2.fix_prec()
+
+ assert ((x >= y).get().float_prec() == (t1 >= t2)).all()
+ assert ((x <= y).get().float_prec() == (t1 <= t2)).all()
+ assert ((x > y).get().float_prec() == (t1 > t2)).all()
+ assert ((x < y).get().float_prec() == (t1 < t2)).all()
+
+
[email protected]("protocol", ["fss"])
+def test_eq_ast_fpt(workers, protocol):
+ me, alice, bob, crypto_provider = (
+ workers["me"],
+ workers["alice"],
+ workers["bob"],
+ workers["james"],
+ )
+
+ args = (alice, bob)
+ kwargs = {"protocol": protocol, "crypto_provider": crypto_provider}
+
+ # for x as AST and y as FPT
+ # we currently support this set of operation only for fss protocol
+ x = torch.tensor([-3.1]).fix_prec().share(*args, **kwargs)
+ y = torch.tensor([-3.1]).fix_prec()
+
+ assert (x == y).get().float_prec()
| Comparison between FPT and AST doesn't always work
## Description
Some cases of handling comparison between FixedPrecision and AdditiveSharingTensor are supported, but some are not. We should systematize this.
## How to Reproduce
```python
t1 = torch.tensor([1.2, 1]).fix_precision().share(*workers, crypto_provider=crypto_provider, protocol="fss")
t2 = torch.tensor([1.2, 1]).fix_precision()
t1 > t2 # FAILS but t1 < t2 works
```
## Stacktrace
```
AttributeError Traceback (most recent call last)
<ipython-input-10-c55d3fcd7179> in <module>
2 t2 = torch.tensor([1.2, 1]).fix_precision()#.share(*workers, crypto_provider=crypto_provider, protocol="fss", requires_grad=True)
3
----> 4 t1 > t2
~/code/PySyft/syft/generic/frameworks/hook/hook.py in overloaded_native_method(self, *args, **kwargs)
218 # Send the new command to the appropriate class and get the response
219 method = getattr(new_self, method_name)
--> 220 response = method(*new_args, **new_kwargs)
221
222 # For inplace methods, just directly return self
~/code/PySyft/syft/generic/frameworks/overload.py in _hook_method_args(self, *args, **kwargs)
25
26 # Send it to the appropriate class and get the response
---> 27 response = attr(self, new_self, *new_args, **new_kwargs)
28
29 # Put back SyftTensor on the tensors found in the response
~/code/PySyft/syft/frameworks/torch/tensors/interpreters/precision.py in __gt__(self, _self, other)
821 def __gt__(self, _self, other):
822 print("FPT gt", _self, other)
--> 823 result = _self.__gt__(other)
824 return result.type(self.torch_dtype) * self.base ** self.precision_fractional
825
~/code/PySyft/syft/frameworks/torch/mpc/__init__.py in method(self, *args, **kwargs)
33 def method(self, *args, **kwargs):
34 f = protocol_store[(name, self.protocol)]
---> 35 return f(self, *args, **kwargs)
36
37 return method
~/code/PySyft/syft/frameworks/torch/tensors/interpreters/additive_shared.py in __gt__(self, other)
938 @crypto_protocol("fss")
939 def __gt__(self, other):
--> 940 return (other + 1) <= self
941
942 def ge(self, other):
~/code/PySyft/syft/generic/frameworks/hook/hook.py in overloaded_native_method(self, *args, **kwargs)
156 # arguments
157 if not isinstance(args[0].child, PointerTensor):
--> 158 self = type(args[0].child)().on(self, wrap=True)
159 args = [args[0]]
160 return overloaded_native_method(self, *args, **kwargs)
AttributeError: 'dict' object has no attribute 'on'
```
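The fix rewrites `(other + 1) <= self` as `-self <= -(other + 1)`, which keeps the AdditiveSharingTensor on the left-hand side of the `<=` so its own protocol method handles the mixed AST/FPT operands (the stacktrace above shows the hook failing when the plain FPT ends up on the left). The integer identity it relies on can be checked directly (sketch):
```python
import torch

a = torch.tensor([-3, 0, 2, 7])
b = torch.tensor([-3, 1, 2, 5])
# a > b  <=>  b + 1 <= a  <=>  -a <= -(b + 1)  for integers
assert ((a > b) == (-a <= -(b + 1))).all()
```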
| 2020-11-01T16:14:26 |
|
OpenMined/PySyft | 4,787 | OpenMined__PySyft-4787 | [
"4762"
] | 7ef6b39d921b800b00c2d50ffffb1c0288471a1b | diff --git a/src/syft/core/node/common/service/obj_search_permission_service.py b/src/syft/core/node/common/service/obj_search_permission_service.py
--- a/src/syft/core/node/common/service/obj_search_permission_service.py
+++ b/src/syft/core/node/common/service/obj_search_permission_service.py
@@ -19,6 +19,7 @@
from .....proto.core.node.common.service.object_search_permission_update_message_pb2 import (
ObjectSearchPermissionUpdateMessage as ObjectSearchPermissionUpdateMessage_PB,
)
+from ....common.group import VerifyAll
from ....common.message import ImmediateSyftMessageWithoutReply
from ....common.serde.deserialize import _deserialize
from ....common.uid import UID
@@ -33,7 +34,7 @@ class ObjectSearchPermissionUpdateMessage(ImmediateSyftMessageWithoutReply):
def __init__(
self,
add_instead_of_remove: bool,
- target_verify_key: VerifyKey,
+ target_verify_key: Optional[VerifyKey],
target_object_id: UID,
address: Address,
msg_id: Optional[UID] = None,
@@ -63,7 +64,9 @@ def _object2proto(self) -> ObjectSearchPermissionUpdateMessage_PB:
return ObjectSearchPermissionUpdateMessage_PB(
msg_id=self.id.serialize(),
address=self.address.serialize(),
- target_verify_key=bytes(self.target_verify_key),
+ target_verify_key=bytes(self.target_verify_key)
+ if self.target_verify_key
+ else None,
target_object_id=self.target_object_id.serialize(),
add_instead_of_remove=self.add_instead_of_remove,
)
@@ -88,7 +91,9 @@ def _proto2object(
return ObjectSearchPermissionUpdateMessage(
msg_id=_deserialize(blob=proto.msg_id),
address=_deserialize(blob=proto.address),
- target_verify_key=VerifyKey(proto.target_verify_key),
+ target_verify_key=VerifyKey(proto.target_verify_key)
+ if proto.target_verify_key
+ else None,
target_object_id=_deserialize(blob=proto.target_object_id),
add_instead_of_remove=proto.add_instead_of_remove,
)
@@ -122,10 +127,15 @@ def process(
msg: ObjectSearchPermissionUpdateMessage,
verify_key: VerifyKey,
) -> None:
+ target_verify_key = msg.target_verify_key or VerifyAll
if msg.add_instead_of_remove:
- node.store[msg.target_object_id].search_permissions[verify_key] = msg.id
+ node.store[msg.target_object_id].search_permissions[
+ target_verify_key
+ ] = msg.id
else:
- node.store[msg.target_object_id].search_permissions.pop(verify_key, None)
+ node.store[msg.target_object_id].search_permissions.pop(
+ target_verify_key, None
+ )
@staticmethod
def message_handler_types() -> List[Type[ObjectSearchPermissionUpdateMessage]]:
diff --git a/src/syft/core/node/common/service/obj_search_service.py b/src/syft/core/node/common/service/obj_search_service.py
--- a/src/syft/core/node/common/service/obj_search_service.py
+++ b/src/syft/core/node/common/service/obj_search_service.py
@@ -198,10 +198,9 @@ def process(
for obj in node.store.get_objects_of_type(obj_type=object):
# if this tensor allows anyone to search for it, then one of its keys
# has an All() class in it.
- contains_all_in_permissions = False
- for key in obj.search_permissions.keys():
- if isinstance(key, VerifyAll):
- contains_all_in_permissions = True
+ contains_all_in_permissions = any(
+ key is VerifyAll for key in obj.search_permissions.keys()
+ )
if (
verify_key in obj.search_permissions.keys()
diff --git a/src/syft/core/pointer/pointer.py b/src/syft/core/pointer/pointer.py
--- a/src/syft/core/pointer/pointer.py
+++ b/src/syft/core/pointer/pointer.py
@@ -89,6 +89,7 @@
# third party
from google.protobuf.reflection import GeneratedProtocolMessageType
from loguru import logger
+from nacl.signing import VerifyKey
# syft absolute
import syft as sy
@@ -105,6 +106,9 @@
GarbageCollectObjectAction,
)
from ..node.common.action.get_object_action import GetObjectAction
+from ..node.common.service.obj_search_permission_service import (
+ ObjectSearchPermissionUpdateMessage,
+)
from ..store.storeable_object import StorableObject
@@ -440,6 +444,21 @@ def request(
# escape the while loop
return status
+ def make_searchable(self, target_verify_key: Optional[VerifyKey] = None) -> None:
+ """Make the object pointed at searchable for other people. If target_verify_key is not specified, the
+ object will be searchable by anyone.
+
+ :param target_verify_key: The verify_key of the client to which we want to give search permission.
+ :type target_verify_key: Optional[VerifyKey]
+ """
+ msg = ObjectSearchPermissionUpdateMessage(
+ add_instead_of_remove=True,
+ target_verify_key=target_verify_key,
+ target_object_id=self.id_at_location,
+ address=self.client.address,
+ )
+ self.client.send_immediate_msg_without_reply(msg=msg)
+
def check_access(self, node: AbstractNode, request_id: UID) -> any: # type: ignore
"""Method that checks the status of an already made request. There are three possible
outcomes when requesting access:
| diff --git a/tests/syft/core/node/common/service/obj_search_permission_service_test.py b/tests/syft/core/node/common/service/obj_search_permission_service_test.py
--- a/tests/syft/core/node/common/service/obj_search_permission_service_test.py
+++ b/tests/syft/core/node/common/service/obj_search_permission_service_test.py
@@ -52,7 +52,9 @@ def test_object_search_permissons_update_execute_add() -> None:
)
assert (
- bob_phone.store[ptr.id_at_location].search_permissions[bob_phone.verify_key]
+ bob_phone.store[ptr.id_at_location].search_permissions[
+ bob_phone_client.verify_key
+ ]
== msg.id
)
@@ -77,6 +79,6 @@ def test_object_search_permissons_update_execute_remove() -> None:
)
assert (
- bob_phone.verify_key
+ bob_phone_client.verify_key
not in bob_phone.store[ptr.id_at_location].search_permissions
)
diff --git a/tests/syft/core/pointer/__init__.py b/tests/syft/core/pointer/__init__.py
new file mode 100644
diff --git a/tests/syft/core/pointer/pointer_test.py b/tests/syft/core/pointer/pointer_test.py
new file mode 100644
--- /dev/null
+++ b/tests/syft/core/pointer/pointer_test.py
@@ -0,0 +1,25 @@
+# third party
+import pytest
+import torch as th
+
+# syft absolute
+import syft as sy
+
+
[email protected]("with_verify_key", [True, False])
+def test_make_searchable(with_verify_key: bool) -> None:
+ bob = sy.VirtualMachine(name="Bob")
+ root_client = bob.get_root_client()
+ client = bob.get_client()
+
+ ten = th.tensor([1, 2])
+ ptr = ten.send(root_client)
+
+ assert len(client.store) == 0
+
+ if with_verify_key:
+ ptr.make_searchable(target_verify_key=client.verify_key)
+ else:
+ ptr.make_searchable()
+
+ assert len(client.store) == 1
| Ability to Toggle Searchability on Data
## Description
Some data can't be set to searchable because the method only exists on send, and other situations would benefit from the ability to toggle it if the user has the correct permissions.
## Definition of Done
All Pointers can be used to toggle the searchability flag through a method assuming the user has the permission to do so.
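A usage sketch mirroring the new test (the `target_verify_key` argument is optional - omitting it makes the object searchable by anyone):
```python
import syft as sy
import torch as th

bob = sy.VirtualMachine(name="Bob")
root_client = bob.get_root_client()
client = bob.get_client()

ptr = th.tensor([1, 2]).send(root_client)
ptr.make_searchable(target_verify_key=client.verify_key)  # or just ptr.make_searchable()
assert len(client.store) == 1
```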
| 2020-11-06T09:52:21 |
|
OpenMined/PySyft | 4,801 | OpenMined__PySyft-4801 | [
"4616"
] | 5dd308e4506660bcb20f77c41d620dd2ea4a23b2 | diff --git a/syft/generic/object_storage.py b/syft/generic/object_storage.py
--- a/syft/generic/object_storage.py
+++ b/syft/generic/object_storage.py
@@ -6,6 +6,7 @@
from syft.generic.frameworks.types import FrameworkTensorType
from syft.generic.abstract.tensor import AbstractTensor
from syft.workers.abstract import AbstractWorker
+import torch
class ObjectStore:
@@ -62,7 +63,8 @@ def de_register_obj(self, obj: object, _recurse_torch_objs: bool = True):
more complex and needs to be explored. Is not supported at the
moment.
"""
- if hasattr(obj, "id"):
+ has_id = hasattr(obj, "_id") if isinstance(obj, torch.Tensor) else hasattr(obj, "id")
+ if has_id:
self.rm_obj(obj.id)
if hasattr(obj, "_owner"):
del obj._owner
| diff --git a/test/generic/pointers/test_pointer_tensor.py b/test/generic/pointers/test_pointer_tensor.py
--- a/test/generic/pointers/test_pointer_tensor.py
+++ b/test/generic/pointers/test_pointer_tensor.py
@@ -398,6 +398,18 @@ def test_remote_T(workers):
assert (bob_xT.get() == x.T).all()
+def test_remote_svd(workers):
+ """Test pointer.svd() functionality"""
+ bob = workers["bob"]
+ x = th.rand(5, 3)
+ local_u, local_s, local_v = x.svd()
+ bob_x = x.send(bob)
+ bob_u, bob_s, bob_v = bob_x.svd()
+ assert (local_u == bob_u.get()).all()
+ assert (local_s == bob_s.get()).all()
+ assert (local_v == bob_v.get()).all()
+
+
def test_remote_function_with_multi_ouput(workers):
"""
Functions like .split return several tensors, registration and response
| SVD is returning 4 pointers instead of 3
## Description
When sending a tensor to a worker and performing SVD, the call returns four pointers instead of three. Also, the third one is not gettable. Through experimentation, I have had to work around the issue using `U, s, _, V = x.svd()`.
## How to Reproduce
```python
import torch
import syft as sy
hook = sy.TorchHook(torch)
bob = sy.VirtualWorker(hook, id='bob')
x = torch.rand(250, 84).send(bob) # Synthetic tensor
x.svd()
# Output:
# ((Wrapper)>[PointerTensor | me:88822589827 -> bob:10423896311],
# (Wrapper)>[PointerTensor | me:22528885369 -> bob:34285527022],
# (Wrapper)>[PointerTensor | me:46709676193 -> bob:67244907535],
# (Wrapper)>[PointerTensor | me:235847656 -> bob:15738446586])
```
## Expected Behavior
Should return **three** pointers: `U, s, V = x.svd()`
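A round-trip check mirroring the regression test added for this fix (assuming the `bob` worker from the reproduction above):
```python
x = torch.rand(5, 3)
u, s, v = x.svd()                # local
bu, bs, bv = x.send(bob).svd()   # remote: exactly three pointers now
assert (u == bu.get()).all() and (s == bs.get()).all() and (v == bv.get()).all()
```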
## System Information
- Official released Docker container
- Same for pip package:
- Ubuntu 18.04.5 LTS (Bionic Beaver)
- Python 3.6.9
| this is pretty weird - I ran this code in debug mode and stepped into every function; things seemed correct while computing the response (3 tensors were returned), yet in the end I got this error (which shows things were correct - 3 tensors returned as expected):
```
Exception has occurred: ValueError
not enough values to unpack (expected 4, got 3)
File "/home/nilansh/Anton/OpenMined/PySyft/debug.py", line 8, in <module>
U, s, _, V = x.svd()
```
but it returns 4 values when run as normal Python code.
@LaRiffle
I want to work on it. | 2020-11-10T07:58:00
OpenMined/PySyft | 4,859 | OpenMined__PySyft-4859 | [
"4804"
] | 1d865a8bd50bc038d8c3959ada2029fd40bf1c4e | diff --git a/syft/frameworks/torch/tensors/interpreters/additive_shared.py b/syft/frameworks/torch/tensors/interpreters/additive_shared.py
--- a/syft/frameworks/torch/tensors/interpreters/additive_shared.py
+++ b/syft/frameworks/torch/tensors/interpreters/additive_shared.py
@@ -1,6 +1,7 @@
import math
import torch
import warnings
+import logging
import syft as sy
from syft.frameworks.torch.mpc import crypto_protocol
@@ -344,6 +345,10 @@ def generate_shares(self, secret, n_workers, random_type):
"""
random_type = torch.LongTensor if random_type == torch.int64 else torch.IntTensor
if not isinstance(secret, random_type):
+ if secret.device == torch.device("cuda"):
+ logging.warning("CUDA tensors were automatically shifted to CPU before encryption.")
+ # implicit shifting to CPU happens in the next line - explicit change of type
+
secret = secret.type(random_type)
random_shares = [random_type(secret.shape) for _ in range(n_workers - 1)]
diff --git a/syft/frameworks/torch/tensors/interpreters/paillier.py b/syft/frameworks/torch/tensors/interpreters/paillier.py
--- a/syft/frameworks/torch/tensors/interpreters/paillier.py
+++ b/syft/frameworks/torch/tensors/interpreters/paillier.py
@@ -1,6 +1,7 @@
import syft as sy
import numpy as np
import torch as th
+import logging
from syft.generic.abstract.tensor import AbstractTensor
from syft.generic.frameworks.hook import hook_args
@@ -49,6 +50,9 @@ def encrypt_(self, public_key):
*public_key a public key created using
syft.frameworks.torch.he.paillier.keygen()
"""
+ if self.child.device == th.device("cuda"):
+ logging.warning("CUDA tensors were automatically shifted to CPU before encryption.")
+ # implicit shifting to CPU happens in the next line through '.tolist()'
inputs = self.child.flatten().tolist()
new_child = sy.pool().map(public_key.encrypt, inputs)
| Print warning if calling fix_precision() on a CUDA-tensor
## Description
Currently, calling fix_precision() on a CUDA tensor will automatically first shift that tensor to the CPU and then call fix_precision() on it. It would be useful to issue a warning whenever this happens, so that the user is told about the implicit shift to the CPU, which they might not have intended.
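The silent move in question is easy to see in isolation - converting a CUDA tensor to a CPU tensor type relocates it without any message (sketch; requires a CUDA device):
```python
import torch

t = torch.tensor([1.0, 2.0], device="cuda")
shifted = t.type(torch.LongTensor)  # torch.LongTensor is the CPU variant
print(shifted.device)               # cpu - the move happens silently
```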
## Are you interested in working on this improvement yourself?
If nobody is interested in the next week I could do it.
## Additional Context
As discussed with @gmuraru and @LaRiffle
| 2020-11-25T18:07:49 |
||
OpenMined/PySyft | 4,923 | OpenMined__PySyft-4923 | [
"4913"
] | fe71a27a745d205981e44177d092d326f7e7c1af | diff --git a/src/syft/core/pointer/pointer.py b/src/syft/core/pointer/pointer.py
--- a/src/syft/core/pointer/pointer.py
+++ b/src/syft/core/pointer/pointer.py
@@ -68,7 +68,7 @@
requested_object = data_ptr_domain_1.id_at_location
# getting the request id
- message_request_id = domain_1_client.request_queue.get_request_id_from_object_id(
+ message_request_id = domain_1_client.requests.get_request_id_from_object_id(
object_id=requested_object
)
| Small correction in Sample Code Block of sy.core.pointer.pointer
## Where?
Where are you looking to add documentation? Which file? Which feature?
In the PySyft/src/syft/core/pointer directory, there is a Python file, pointer.py.
The example code in the Python file can be viewed by executing the following commands:
```python
import syft as sy
sy.core.pointer.pointer?
```
The example code in the output of the above command is as follows:
Example:
```python
# creating the data holder domain
domain_1 = Domain(name="Data holder domain")
# creating dummy data
tensor = th.tensor([1, 2, 3])
# creating the data holder client
domain_1_client = domain_1.get_root_client()
# sending the data to the client and receiving a pointer of that data.
data_ptr_domain_1 = tensor.send(domain_1_client)  # or tensor.send_to(domain_1_client)
# creating the data user domain
domain_2 = Domain(name="Data user domain")
# creating a request to access the data
data_ptr_domain_1.request(
    name="My Request", reason="I'd like to see this pointer"
)
# getting the remote id of the object
requested_object = data_ptr_domain_1.id_at_location
message_request_id = domain_1_client.request_queue.get_request_id_from_object_id(
    object_id=requested_object
)
# the data holder accepts the request
domain_1.requests[0].owner_client_if_available = domain_1_client
domain_1.requests[0].accept()
# the data user checks if the data holder approved his request
response = data_ptr_domain_1.check_access(node=domain_2, request_id=message_request_id)
```
**The following line is where the issue is:**
```python
# getting the request id
message_request_id = domain_1_client.request_queue.get_request_id_from_object_id(
    object_id=requested_object
)
```
**Error Generated by the line is:**
```
AttributeError: 'DomainClient' object has no attribute 'request_queue'
```
**Solution**: Replacing "request_queue" with "requests" solves the issue.
| Can I do the corrections in the documentation for this issue?
@rajatrc1705 good spot, thank you! | 2020-12-16T05:29:28
|
OpenMined/PySyft | 4,991 | OpenMined__PySyft-4991 | [
"4527"
] | 3f29129f31a5266069254889635dc4c944728575 | diff --git a/src/syft/grid/example_nodes/network.py b/src/syft/grid/example_nodes/network.py
--- a/src/syft/grid/example_nodes/network.py
+++ b/src/syft/grid/example_nodes/network.py
@@ -9,6 +9,7 @@
"""
# stdlib
import os
+import sys
# third party
import flask
@@ -77,15 +78,25 @@ def process_network_msgs() -> flask.Response:
def run() -> None:
global network
- print("====================================")
- print("========== NODE ROOT KEY ===========")
- print("====================================")
+
+ IP_MODE = os.getenv("IP_MODE", "IPV4") # default to ipv4
+ if len(sys.argv) > 1:
+ IP_MODE = sys.argv[1]
+
+ IP_MODE = "IPV6" if IP_MODE == "IPV6" else "IPV4"
# this signing_key is to aid in local development and is not used in the real
# PyGrid implementation
+ HOST = "0.0.0.0" if IP_MODE == "IPV4" else "::" # nosec
PORT = os.getenv("PORT", 5000)
- print(f"Starting Node on PORT: {PORT}")
+
+ print("====================================")
+ print("========== NODE ROOT KEY ===========")
+ print("====================================")
print(network.signing_key.encode(encoder=HexEncoder).decode("utf-8"), "\n")
- app.run(host="0.0.0.0", port=int(PORT)) # nosec
+
+ print(f"Using {IP_MODE} and listening on port {PORT}")
+
+ app.run(host=HOST, port=int(PORT))
run()
| Does the framework support IPv6 networks?
Is this framework suitable for an IPv6 network environment?
| Maybe @IonesioJunior @cereallarceny can provide some insight here?
This issue has been marked stale because it has been open 30 days with no activity. Leave a comment or remove the `stale` label to unmark it. Otherwise, this will be closed in 7 days.
I have checked this in 0.3.0 and it seems that while the Flask server runs on IPv6, the sy.duet() joining function fails to connect.
We will investigate further.
This issue has been marked stale because it has been open 30 days with no activity. Leave a comment or remove the `stale` label to unmark it. Otherwise, this will be closed in 7 days.
I would like to work on this issue. I have only recently started exploring PySyft so I may need some hints or help to get started.
Could you assign this to me?
@rajatrc1705 Your help would be greatly appreciated. I would do the following:
- Run the local network signaling server with:
```
$ syft-network
```
- Start jupyter notebooks and load up any example like MNIST or even a basic connection
- Connect via your local network server using the instructions here: https://github.com/OpenMined/PySyft/tree/master/examples/duet/#host-a-network
```
duet = sy.launch_duet(network_url="http://127.0.0.1:5000")
```
Then once it's working, modify the script and the code to get it working over IPv6.
Please note however, people often offer to help with issues but never get around to submitting a PR, as such if there is no Draft PR showing some initial attempt after a week, then if someone else wants to take this issue I will have to reassign it. I hope you can understand. If you have any questions feel free to ping me on Slack.
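For reference, the eventual patch makes the example network server bind to `::` when started in `IPV6` mode; a client would then connect using a bracketed IPv6 literal in the URL - a minimal sketch (assuming the default port 5000 and the IPv6 loopback `[::1]`):
```python
import syft as sy

duet = sy.launch_duet(network_url="http://[::1]:5000")
```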
Thank you. Yes, I will make sure to keep you informed about my progress on this issue. | 2021-01-04T17:00:02 |
|
OpenMined/PySyft | 5,055 | OpenMined__PySyft-5055 | [
"5052"
] | 2694c0822e0b84b48ad821150ff0985c28d03179 | diff --git a/src/syft/ast/klass.py b/src/syft/ast/klass.py
--- a/src/syft/ast/klass.py
+++ b/src/syft/ast/klass.py
@@ -290,6 +290,16 @@ def send(
if id_ is None:
id_ = UID()
which_obj.id = id_
+
+ tags = sorted(set(tags), key=tags.index) # keep order of original
+ obj_tags = getattr(which_obj, "tags", [])
+ # if `tags` is passed in, use it; else, use obj_tags
+ tags = tags if tags else obj_tags
+
+ obj_description = getattr(which_obj, "description", "")
+ # if `description` is passed in, use it; else, use obj_description
+ description = description if description else obj_description
+
which_obj.tags = tags
which_obj.description = description
@@ -324,7 +334,7 @@ def send(
def create_storable_object_attr_convenience_methods(outer_self: Any) -> None:
def tag(self: Any, *tags: Tuple[Any, ...]) -> object:
- self.tags = list(tags)
+ self.tags = sorted(set(tags), key=tags.index) # keep order of original
return self
def describe(self: Any, description: str) -> object:
| diff --git a/tests/syft/core/pointer/pointer_test.py b/tests/syft/core/pointer/pointer_test.py
--- a/tests/syft/core/pointer/pointer_test.py
+++ b/tests/syft/core/pointer/pointer_test.py
@@ -69,3 +69,41 @@ def test_searchable_property() -> None:
ptr.searchable = False
assert len(client.store) == 0
+
+
+def test_tags() -> None:
+ bob = sy.VirtualMachine(name="Bob")
+ root_client = bob.get_root_client()
+
+ ten = th.tensor([1, 2])
+
+ ten = ten.tag("tag1", "tag1", "other")
+ assert ten.tags == ["tag1", "other"]
+
+ # .send without `tags` passed in
+ ptr = ten.send(root_client)
+ assert ptr.tags == ["tag1", "other"]
+
+ # .send with `tags` passed in
+ ptr = ten.send(root_client, tags=["tag2", "tag2", "other"])
+ assert ten.tags == ["tag2", "other"]
+ assert ptr.tags == ["tag2", "other"]
+
+
+def test_description() -> None:
+ bob = sy.VirtualMachine(name="Bob")
+ root_client = bob.get_root_client()
+
+ ten = th.tensor([1, 2])
+
+ ten = ten.describe("description 1")
+ assert ten.description == "description 1"
+
+ # .send without `description` passed in
+ ptr = ten.send(root_client)
+ assert ptr.description == "description 1"
+
+ # .send with `description` passed in
+ ptr = ten.send(root_client, description="description 2")
+ assert ten.description == "description 2"
+ assert ptr.description == "description 2"
| Re-add .tag() to Pointer
## Description
Until we figure out which way we want to go with the API we should keep the .tag() method to prevent breaking compatibility with existing docs and examples.
For all instances except C types in libs like PSI, users should be able to use either or both of:
```
obj.tag("first").tag("second").send(duet, searchable=True)
```
or
```
obj.send(duet, tags=["first", "second"], searchable=True)
```
If the tags arg is supplied to .send it should overwrite any previous tags, but if it's empty we will use the previously set ones.
## Definition of Done
The old functionality of .tag still functions and the PSI .send(tags=[]) style still works as well.
A test which makes sure both of these work as expected is added and passes.
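The patch dedupes tags with `sorted(set(tags), key=tags.index)`, which drops repeats while preserving the caller's ordering (sketch):
```python
tags = ["tag1", "tag1", "other"]
assert sorted(set(tags), key=tags.index) == ["tag1", "other"]
```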
| 2021-01-22T05:24:24 |
|
OpenMined/PySyft | 5,122 | OpenMined__PySyft-5122 | [
"4798",
"4796",
"4798"
] | aa6eab23766e4254a19d08dc80f2c63d594a749d | diff --git a/src/syft/lib/torch/__init__.py b/src/syft/lib/torch/__init__.py
--- a/src/syft/lib/torch/__init__.py
+++ b/src/syft/lib/torch/__init__.py
@@ -12,7 +12,7 @@
from . import parameter # noqa: 401
from . import uppercase_tensor # noqa: 401
from ...ast.globals import Globals
-from ...logger import critical
+from ...logger import info
from .allowlist import allowlist
TORCH_VERSION = version.parse(torch.__version__.split("+")[0])
@@ -64,7 +64,7 @@ def create_torch_ast(client: Any = None) -> Globals:
path=method, framework_reference=torch, return_type_name=return_type
)
else:
- critical(f"Skipping {method} not supported in {TORCH_VERSION}")
+ info(f"Skipping {method} not supported in {TORCH_VERSION}")
for klass in ast.classes:
klass.create_pointer_class()
| Add DCGAN example Duet Notebooks
## Description
Add two notebooks which reflect the DCGAN example split into DO (Data Owner) and DS (Data Scientist):
https://github.com/pytorch/examples/blob/master/dcgan/
## Definition of Done
The partially runnable DCGAN example notebooks should be in the examples/duet/dcgan folder and a README.md should be added in the parent examples/duet directory with a link to the original example and our notebook.
Add Super Resolution Example Duet Notebooks
## Description
Add two notebooks which reflect the Super Resolution example split into DO (Data Owner) and DS (Data Scientist):
https://github.com/pytorch/examples/blob/master/super_resolution/
## Definition of Done
The partially runnable Super Resolution example notebooks should be in the examples/duet/super_resolution folder and a README.md should be added in the parent examples/duet directory with a link to the original example and our notebook.
| @madhavajay I am new to this project. I have gone through the tutorials but didn't come across what Duets are; I would like to contribute to this issue
Hi @ayush12gupta Yes, sorry the Duet stuff is in the code but only referenced from the Duet README currently.
https://github.com/OpenMined/PySyft/tree/dev/examples/duet
This task is to create 2 notebooks for DCGAN the same way we have for MNIST.
You will see the MNIST notebooks in the README.md above and the DCGAN folder already created in examples.
However, please be aware that it's entirely possible that all the functionality required to complete the DCGAN notebooks might not exist yet, but an initial attempt is a great way to figure out what's missing, and all contributions are greatly appreciated.
If you have any questions feel free to jump into the Slack Channel.
This issue has been marked stale because it has been open 30 days with no activity. Leave a comment or remove the `stale` label to unmark it. Otherwise, this will be closed in 7 days.
Hi, @madhavajay I'm working on this issue, and I almost finished the implementation for MNIST (PR #5071 ). I tested my notebook on both my local laptop and Google Colab, and both of them worked. Then, I have a few questions.
1. Should I support other datasets like the original example? It seems that remote_torchvision.datasets currently supports only MNIST.
2. I had to change the structure of the model a bit because pysyft doesn't support transform.resize. Is it OK?
@Koukyosyumei this looks great! Thank you so much for your PR. I will be reviewing this tomorrow. As @tudorcebere says are you able to fix the conflicts?
@madhavajay
Thanks for your reply! Actually, I'm not familiar with git. Could you show me how to resolve the conflict?
@Koukyosyumei you should be able to do:
```
$ git checkout 0.4
$ git pull
$ git checkout feature_4798
$ git merge 0.4
```
At this point it's going to tell you there are conflicts.
You need to carefully edit the conflicted files and merge the code in places where it doesn't match.
Check out guides like this one:
https://www.atlassian.com/git/tutorials/using-branches/merge-conflicts
Alternatively, if all you have changed are Notebooks then it might be easier to just copy the notebooks out and create a fresh branch from the latest 0.4 and commit them as a single commit.
@madhavajay
Thank you for your instruction! I somehow solved the conflicts.
This issue has been marked stale because it has been open 30 days with no activity. Leave a comment or remove the `stale` label to unmark it. Otherwise, this will be closed in 7 days.
@madhavajay
Hi, I am working on this issue, and I have almost done it. But I already have a PR (#5071 ) to fix #4798. So, should I wait for #5071 to be finished, or may I create another PR for this issue? Also, one possible plan is committing the notebooks for this issue within #5071, which I made for the DCGAN issue. I would like to hear your opinion because I'm not familiar with contributing to open source projects. Thank you in advance. | 2021-02-08T03:11:33
|
OpenMined/PySyft | 5,169 | OpenMined__PySyft-5169 | [
"5148"
] | 2bf4abdb182e91088014f8b72b77063e8649a358 | diff --git a/src/syft/core/common/object.py b/src/syft/core/common/object.py
--- a/src/syft/core/common/object.py
+++ b/src/syft/core/common/object.py
@@ -37,9 +37,8 @@ def __init__(self, id: Optional[UID] = None):
primary purpose of this class. It also sets the 'as_wrapper' flag
for the 'Serializable' superclass.
- :param id: an override which can be used to set an ID for this object
- manually. This is probably only used for deserialization.
- :type id: UID
+ Args:
+ id: an override which can be used to set an ID for this object
"""
@@ -58,8 +57,8 @@ def id(self) -> UID:
developers of Syft from modifying .id attributes after an object
has been initialized.
- :return: returns the unique id of the object
- :rtype: UID
+ Returns:
+ returns the unique id of the object
"""
return self._id
@@ -70,10 +69,11 @@ def __eq__(self, other: Any) -> bool:
comparing whether they have the same .id objects. These objects
come with their own __eq__ function which we assume to be correct.
- :param other: this is the other ObjectWithIDs to be compared with
- :type other: Any (note this must be Any or __eq__ fails on other types)
- :return: returns True/False based on whether the objects are the same
- :rtype: bool
+ Args:
+ other: this is the other ObjectWithIDs to be compared with
+
+ Returns:
+ True/False based on whether the objects are the same
"""
try:
@@ -82,33 +82,39 @@ def __eq__(self, other: Any) -> bool:
return False
def __repr__(self) -> str:
- """Returns a human-readable version of the ObjectWithID
-
+ """
Return a human-readable representation of the ObjectWithID with brackets
so that it can be easily spotted when nested inside of the human-
- readable representations of other objects."""
+ readable representations of other objects.
+
+ Returns:
+ a human-readable version of the ObjectWithID
+
+ """
no_dash = str(self.id.value).replace("-", "")
return f"<{type(self).__name__}: {no_dash}>"
def repr_short(self) -> str:
- """Returns a SHORT human-readable version of SpecificLocation
-
+ """
Return a SHORT human-readable version of the ID which
makes it print nicer when embedded (often alongside other
- UID objects) within other object __repr__ methods."""
+ UID objects) within other object __repr__ methods.
+
+ Returns:
+ a SHORT human-readable version of SpecificLocation
+ """
return f"<{type(self).__name__}:{self.id.repr_short()}>"
def _object2proto(self) -> ObjectWithID_PB:
- """Returns a protobuf serialization of self.
-
+ """
As a requirement of all objects which inherit from Serializable,
this method transforms the current object into the corresponding
Protobuf object so that it can be further serialized.
- :return: returns a protobuf object
- :rtype: ObjectWithID_PB
+ Returns:
+ a protobuf object that is the serialization of self.
.. note::
This method is purely an internal method. Please use object.serialize() or one of
@@ -124,8 +130,11 @@ def _proto2object(proto: ObjectWithID_PB) -> "ObjectWithID":
As a requirement of all objects which inherit from Serializable,
this method transforms a protobuf object into an instance of this class.
- :return: returns an instance of ObjectWithID
- :rtype: ObjectWithID
+ Args:
+ proto: a protobuf object that we wish to convert to instance of this class
+
+ Returns:
+ an instance of ObjectWithID
.. note::
This method is purely an internal method. Please use syft.deserialize()
@@ -147,8 +156,8 @@ def get_protobuf_schema() -> GeneratedProtocolMessageType:
it takes whatever type is returned from this method and adds an attribute to it
with the type of this class attached to it. See the MetaSerializable class for details.
- :return: the type of protobuf object which corresponds to this class.
- :rtype: GeneratedProtocolMessageType
+ Returns:
+ the type of protobuf object which corresponds to this class.
"""
| Fix all darglint docs warnings
## Description
Currently there are 419 warnings:
```
$ flake8 src tests | wc -l
419
```
We can progressively improve this with multiple PRs.
If you are interested, please run the checker on the `src` and `tests` folders and try to fix the warnings until there are none.
Some will require adding docs that don't exist while others are just fixes.
This will likely require multiple PRs, so if you are interested, open a PR with a few fixes and we will go from there.
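For illustration, a darglint-clean Google-style docstring (the same style the patch above converts to) might look like the sketch below; the function itself is hypothetical:
```python
def scale(value: float, factor: float = 2.0) -> float:
    """Multiply a value by a scaling factor.

    darglint checks that the Args/Returns sections below match the
    actual signature and return annotation of the function.

    Args:
        value: the number to scale
        factor: the multiplier applied to value

    Returns:
        the scaled value
    """
    return value * factor
```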
## Definition of Done
All docstring warnings are fixed and CI has the darglint checker turned on to fail the build.
| @madhavajay I have opened a PR to fix darglint warnings for `src/syft/utils.py`. What I wish to know is whether you want a separate PR for each file or just for `src/syft` or `tests/syft` | 2021-02-17T05:23:03 |
|
OpenMined/PySyft | 5,245 | OpenMined__PySyft-5245 | [
"5208"
] | 6bfa546f0bf0379e911c9f35f61647744eb162da | diff --git a/src/syft/core/node/domain/service/request_handler_service.py b/src/syft/core/node/domain/service/request_handler_service.py
--- a/src/syft/core/node/domain/service/request_handler_service.py
+++ b/src/syft/core/node/domain/service/request_handler_service.py
@@ -117,6 +117,7 @@ def get_protobuf_schema() -> GeneratedProtocolMessageType:
return UpdateRequestHandlerMessage_PB
+@bind_protobuf
class GetAllRequestHandlersMessage(ImmediateSyftMessageWithReply):
def __init__(
self, address: Address, reply_to: Address, msg_id: Optional[UID] = None
@@ -188,6 +189,7 @@ def get_protobuf_schema() -> GeneratedProtocolMessageType:
return GetAllRequestHandlersMessage_PB
+@bind_protobuf
class GetAllRequestHandlersResponseMessage(ImmediateSyftMessageWithoutReply):
def __init__(
self,
diff --git a/src/syft/util.py b/src/syft/util.py
--- a/src/syft/util.py
+++ b/src/syft/util.py
@@ -34,7 +34,7 @@ def validate_type(_object: object, _type: type, optional: bool = False) -> Any:
def validate_field(_object: object, _field: str) -> Any:
object = getattr(_object, _field, None)
- if object:
+ if object is not None:
return object
traceback_and_raise(f"Object {_object} has no {_field} field set.")
| Handler Serde
## Description
The `duet.requests.handler` return value is broken currently due to serialization errors.
```
TypeError: You tried to deserialize an unsupported type. This can be caused by several reasons. Either you are actively writing Syft code and forgot to create one, or you are trying to deserialize an object which was serialized using a different version of Syft and the object you tried to deserialize is not supported in this version.
```
## How to Reproduce
Run: `duet.requests.handler`
## Expected Behavior
No error
## Additional Context
We will be refactoring this anyway.
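For background, the `validate_field` change in the patch above exists because a plain truthiness check rejects valid falsy values. A minimal illustration of the pitfall (not PySyft code):
```python
class Handler:
    retry = 0  # a perfectly valid field that happens to be falsy

value = getattr(Handler(), "retry", None)

if value:  # buggy: 0 is falsy, so this wrongly treats the field as unset
    print("field set:", value)

if value is not None:  # correct: only a genuinely missing field yields None
    print("field set:", value)  # prints: field set: 0
```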
| want to work on this | 2021-03-03T04:24:00 |
|
OpenMined/PySyft | 5,288 | OpenMined__PySyft-5288 | [
"5273"
] | 81649b58ec66e36928548a9a0e1c05a31d729a21 | diff --git a/src/syft/__init__.py b/src/syft/__init__.py
--- a/src/syft/__init__.py
+++ b/src/syft/__init__.py
@@ -72,7 +72,7 @@
# Convenience Objects
from syft.lib import lib_ast # noqa: F401
-from syft.lib import load_lib # noqa: F401
+from syft.lib import load # noqa: F401
from syft.lib.torch.module import Module # noqa: F401
# syft relative
diff --git a/src/syft/lib/__init__.py b/src/syft/lib/__init__.py
--- a/src/syft/lib/__init__.py
+++ b/src/syft/lib/__init__.py
@@ -5,6 +5,7 @@
from typing import Any as TypeAny
from typing import Dict as TypeDict
from typing import Optional
+import warnings
# third party
from packaging import version
@@ -17,6 +18,7 @@
from ..lib.torchvision import create_torchvision_ast
from ..logger import critical
from ..logger import traceback_and_raise
+from ..logger import warning
from .misc import create_union_ast
@@ -25,6 +27,16 @@ class VendorLibraryImportException(Exception):
def vendor_requirements_available(vendor_requirements: TypeDict[str, TypeAny]) -> bool:
+ """
+ Check whether torch or python version is supported
+
+ Args:
+ vendor_requirements: dictionary containing version of python or torch to be supported
+
+ Returns:
+ True if system supports all vendor requirements
+
+ """
# see if python version is supported
if "python" in vendor_requirements:
python_reqs = vendor_requirements["python"]
@@ -80,6 +92,13 @@ def vendor_requirements_available(vendor_requirements: TypeDict[str, TypeAny]) -
def _load_lib(lib: str, options: TypeDict[str, TypeAny] = {}) -> None:
+ """
+ Load and Update Node with given library module
+
+ Args:
+ lib: name of library to load and update Node with
+ options: external requirements for loading library successfully
+ """
_ = importlib.import_module(lib)
vendor_ast = importlib.import_module(f"syft.lib.{lib}")
PACKAGE_SUPPORT = getattr(vendor_ast, "PACKAGE_SUPPORT", None)
@@ -99,7 +118,14 @@ def _load_lib(lib: str, options: TypeDict[str, TypeAny] = {}) -> None:
lib_ast.loaded_lib_constructors[lib] = update_ast
-def load_lib(lib: str, options: TypeDict[str, TypeAny] = {}) -> None:
+def load(lib: str, options: TypeDict[str, TypeAny] = {}) -> None:
+ """
+ Load and Update Node with given library module
+
+ Args:
+ lib: name of library to load and update Node with
+ options: external requirements for loading library successfully
+ """
try:
_load_lib(lib=lib, options=options)
except VendorLibraryImportException as e:
@@ -108,8 +134,34 @@ def load_lib(lib: str, options: TypeDict[str, TypeAny] = {}) -> None:
critical(f"Unable to load package support for: {lib}. {e}")
+def load_lib(lib: str, options: TypeDict[str, TypeAny] = {}) -> None:
+ """
+ Load and Update Node with given library module
+ load_lib() is deprecated please use load() in the future
+
+ Args:
+ lib: name of library to load and update Node with
+ options: external requirements for loading library successfully
+
+ """
+ msg = "load_lib() is deprecated please use load() in the future"
+ warning(msg, print=True)
+ warnings.warn(msg, DeprecationWarning)
+ load(lib=lib, options=options)
+
+
# now we need to load the relevant frameworks onto the node
def create_lib_ast(client: Optional[Any] = None) -> Globals:
+ """
+ Create AST and load the relevant frameworks onto the node
+
+ Args:
+ client: VM client onto whom the frameworks need to be loaded
+
+ Returns:
+ AST for client of type Globals
+
+ """
python_ast = create_python_ast(client=client)
torch_ast = create_torch_ast(client=client)
torchvision_ast = create_torchvision_ast(client=client)
| diff --git a/.github/workflows/pr_tests.yml b/.github/workflows/pr_tests.yml
--- a/.github/workflows/pr_tests.yml
+++ b/.github/workflows/pr_tests.yml
@@ -169,6 +169,7 @@ jobs:
- name: Run supported library tests
run: |
+ pip install -r requirements.libs.deps.txt
pip list
pytest -m libs --co
pytest -m libs -n auto -k "not tenseal" --suppress-no-test-exit-code
diff --git a/tests/syft/api/deprecated_test.py b/tests/syft/api/deprecated_test.py
--- a/tests/syft/api/deprecated_test.py
+++ b/tests/syft/api/deprecated_test.py
@@ -4,6 +4,7 @@
# syft absolute
import syft as sy
+from syft.lib import load_lib
def test_searchable_pointable() -> None:
@@ -21,3 +22,8 @@ def test_searchable_pointable() -> None:
with pytest.deprecated_call():
x_ptr.searchable = False
assert x_ptr.searchable is False
+
+
+def test_load_lib_deprecated() -> None:
+ with pytest.deprecated_call():
+ assert load_lib("tenseal") is None
diff --git a/tests/syft/lib/opacus/opacus_test.py b/tests/syft/lib/opacus/opacus_test.py
--- a/tests/syft/lib/opacus/opacus_test.py
+++ b/tests/syft/lib/opacus/opacus_test.py
@@ -7,7 +7,7 @@
@pytest.mark.vendor(lib="opacus")
def test_remote_engine_simple() -> None:
- sy.load_lib("opacus")
+ sy.load("opacus")
data_owner = sy.VirtualMachine().get_root_client()
remote_opacus = data_owner.opacus
diff --git a/tests/syft/lib/psi/psi_test.py b/tests/syft/lib/psi/psi_test.py
--- a/tests/syft/lib/psi/psi_test.py
+++ b/tests/syft/lib/psi/psi_test.py
@@ -13,15 +13,15 @@ def test_psi(loadlib_before_client: bool, reveal_intersection: bool) -> None:
# third party
import openmined_psi as psi
- # it should work when call load_lib before or after create clients
+ # it should work when call load before or after create clients
if loadlib_before_client:
- sy.load_lib("openmined_psi")
+ sy.load("openmined_psi")
server_vm = sy.VirtualMachine().get_root_client()
client_vm = sy.VirtualMachine().get_root_client()
else:
server_vm = sy.VirtualMachine().get_root_client()
client_vm = sy.VirtualMachine().get_root_client()
- sy.load_lib("openmined_psi")
+ sy.load("openmined_psi")
# server send reveal_intersection
s_reveal_intersection = reveal_intersection
diff --git a/tests/syft/lib/pydp/client_pydp_test.py b/tests/syft/lib/pydp/client_pydp_test.py
--- a/tests/syft/lib/pydp/client_pydp_test.py
+++ b/tests/syft/lib/pydp/client_pydp_test.py
@@ -7,7 +7,7 @@
@pytest.mark.vendor(lib="pydp")
def test_pydp() -> None:
- sy.load_lib("pydp")
+ sy.load("pydp")
bob = sy.VirtualMachine(name="Bob")
client = bob.get_root_client()
x_ptr = client.pydp.algorithms.laplacian.BoundedMean(1, 1, 50)
@@ -23,7 +23,7 @@ def test_pydp() -> None:
@pytest.mark.vendor(lib="pydp")
def test_pydp_functions() -> None:
- sy.load_lib("pydp")
+ sy.load("pydp")
bob = sy.VirtualMachine(name="Bob")
client = bob.get_root_client()
x_ptr = client.pydp.algorithms.laplacian.BoundedMean(1, 1, 50)
diff --git a/tests/syft/lib/sympc/sympc_test.py b/tests/syft/lib/sympc/sympc_test.py
--- a/tests/syft/lib/sympc/sympc_test.py
+++ b/tests/syft/lib/sympc/sympc_test.py
@@ -19,7 +19,7 @@ def test_load_sympc() -> None:
from sympc.session import SessionManager
from sympc.tensor import MPCTensor
- sy.load_lib("sympc")
+ sy.load("sympc")
session = Session(parties=[alice_client, bob_client])
SessionManager.setup_mpc(session)
diff --git a/tests/syft/lib/tenseal/duet_test.py b/tests/syft/lib/tenseal/duet_test.py
--- a/tests/syft/lib/tenseal/duet_test.py
+++ b/tests/syft/lib/tenseal/duet_test.py
@@ -20,7 +20,7 @@
from ...grid.duet.signaling_server_test import run
ts = pytest.importorskip("tenseal")
-sy.load_lib("tenseal")
+sy.load("tenseal")
set_start_method("spawn", force=True)
PORT = 21000
@@ -48,7 +48,7 @@ def do(ct_size: int, batch_size: int) -> None:
# syft absolute
import syft as sy
- sy.load_lib("tenseal")
+ sy.load("tenseal")
sy.logger.add(sys.stderr, "ERROR")
duet = sy.launch_duet(loopback=True, network_url=f"http://127.0.0.1:{PORT}/")
@@ -79,7 +79,7 @@ def ds(ct_size: int, batch_size: int) -> None:
# syft absolute
import syft as sy
- sy.load_lib("tenseal")
+ sy.load("tenseal")
sy.logger.add(sys.stderr, "ERROR")
duet = sy.join_duet(loopback=True, network_url=f"http://127.0.0.1:{PORT}/")
diff --git a/tests/syft/lib/tenseal/tenseal_bfvvector_test.py b/tests/syft/lib/tenseal/tenseal_bfvvector_test.py
--- a/tests/syft/lib/tenseal/tenseal_bfvvector_test.py
+++ b/tests/syft/lib/tenseal/tenseal_bfvvector_test.py
@@ -11,7 +11,7 @@
from .utils_test import decrypt
ts = pytest.importorskip("tenseal")
-sy.load_lib("tenseal")
+sy.load("tenseal")
@pytest.fixture(scope="function")
diff --git a/tests/syft/lib/tenseal/tenseal_ckkstensor_test.py b/tests/syft/lib/tenseal/tenseal_ckkstensor_test.py
--- a/tests/syft/lib/tenseal/tenseal_ckkstensor_test.py
+++ b/tests/syft/lib/tenseal/tenseal_ckkstensor_test.py
@@ -11,7 +11,7 @@
from .utils_test import decrypt
ts = pytest.importorskip("tenseal")
-sy.load_lib("tenseal")
+sy.load("tenseal")
def _almost_equal(vec1: Any, vec2: Any, precision_pow_ten: int = 1) -> None:
diff --git a/tests/syft/lib/tenseal/tenseal_ckksvector_test.py b/tests/syft/lib/tenseal/tenseal_ckksvector_test.py
--- a/tests/syft/lib/tenseal/tenseal_ckksvector_test.py
+++ b/tests/syft/lib/tenseal/tenseal_ckksvector_test.py
@@ -12,7 +12,7 @@
from .utils_test import decrypt
ts = pytest.importorskip("tenseal")
-sy.load_lib("tenseal")
+sy.load("tenseal")
def _almost_equal(vec1: Sequence, vec2: Sequence, precision_pow_ten: int = 1) -> None:
diff --git a/tests/syft/lib/tenseal/tenseal_context_test.py b/tests/syft/lib/tenseal/tenseal_context_test.py
--- a/tests/syft/lib/tenseal/tenseal_context_test.py
+++ b/tests/syft/lib/tenseal/tenseal_context_test.py
@@ -8,7 +8,7 @@
import syft as sy
ts = pytest.importorskip("tenseal")
-sy.load_lib("tenseal")
+sy.load("tenseal")
@pytest.fixture(scope="function")
| Rename load_lib to load
## Description
- [x] Rename `sy.load_lib` to `sy.load`.
Leave old method but pass through to new method and add DeprecationWarning like here:
https://github.com/OpenMined/PySyft/blob/fb4f99028338d4e59fb702f1f6ec60f1e16bc660/src/syft/ast/klass.py#L322
- [x] Add test to `api/deprecated_test.py`
- [x] Look into auto loading libraries using a post import hook (see the sketch after this list):
https://stackoverflow.com/questions/40623889/post-import-hooks-in-python-3
https://github.com/GrahamDumpleton/wrapt
https://www.youtube.com/watch?v=u7oj-ghfhUk
- [x] Update notebooks
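A rough sketch of the post-import-hook idea using wrapt — exploratory only, not part of the merged patch, and the hooked library name is just an example:
```python
import wrapt  # third-party library linked above


# If a user imports tenseal themselves, automatically load Syft's
# support for it instead of requiring an explicit sy.load call.
@wrapt.when_imported("tenseal")
def _auto_load(module):
    import syft as sy

    sy.load("tenseal")
```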
## Definition of Done
`sy.load_lib` is deprecated but still functional and replaced with `sy.load`
| Hi @madhavajay can I work on this?
@avinsit123 could you also open a PR in [SyMPC](https://github.com/OpenMined/SyMPC/blob/main/src/sympc/__init__.py#L23) for changing the name?
Sure @gmuraru, I will do that after completing the PR here if that's fine!!
Hello @avinsit123 !
Open a PR and let's talk directly on that. :100: | 2021-03-10T14:25:04 |
OpenMined/PySyft | 5,312 | OpenMined__PySyft-5312 | [
"5304"
] | d7f6f1e0c12719d5ed67c431cdc1fd26f5afda9b | diff --git a/src/syft/util.py b/src/syft/util.py
--- a/src/syft/util.py
+++ b/src/syft/util.py
@@ -589,10 +589,10 @@ def inherit_tags(
def get_root_data_path() -> Path:
# get the PySyft / data directory to share datasets between notebooks
- here = Path(os.path.dirname(os.path.realpath("__file__")))
- while os.path.basename(here).lower() != "pysyft" and here != here.parent:
- here = here.parent
+ # on Linux and MacOS the directory is: ~/.syft/data"
+ # on Windows the directory is: C:/Users/$USER/.syft/data
+
+ data_dir = Path.home() / ".syft" / "data"
- data_dir = here / "data"
os.makedirs(data_dir, exist_ok=True)
return data_dir
| Change get_root_data_path to use HOME dir
## Description
We probably need to change `get_root_data_path` in utils to use some kind of HOME dir that's cross-platform.
The `Path` class in `pathlib` library includes a `Path.home` so hopefully that works on all 3 platforms.
The goal is that when Syft Examples are used, their datasets are downloaded to a common shared location.
This could be something like `~/.syft/data` or something similar.
Once this is done the other notebooks which download datasets should also be updated to use this common `get_root_data_path` method like the SuperResolution example.
## Definition of Done
The `get_root_data_path` method uses a common location like user HOME dir, so that it will work as a pip package as well as editable source during `dev`.
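As a usage sketch, any example notebook could then resolve a shared dataset location like this (the dataset file name is illustrative):
```python
from syft.util import get_root_data_path

# Resolves to ~/.syft/data on Linux/macOS (C:/Users/$USER/.syft/data on
# Windows) and creates the directory if it does not exist yet.
data_dir = get_root_data_path()
dataset_path = data_dir / "my_dataset.tgz"
```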
| So instead of creating a directory as `../pysyft/data`, use a universal directory `~/.syft/data` or `~/.pysyft/data`.
I am new to this repository and a GSOC aspirant. I will create a PR to resolve this issue. | 2021-03-16T15:41:34 |
|
OpenMined/PySyft | 5,330 | OpenMined__PySyft-5330 | [
"5315"
] | bb5f4f7ded32ff8496b577bea4c3e7c1db57ccae | diff --git a/src/syft/core/common/environment.py b/src/syft/core/common/environment.py
--- a/src/syft/core/common/environment.py
+++ b/src/syft/core/common/environment.py
@@ -14,7 +14,8 @@
from packaging import version
NOTEBOOK_VERSION = version.parse(notebook.__version__.split("+")[0])
- if NOTEBOOK_VERSION < version.parse("6.0.0"):
+ if NOTEBOOK_VERSION < version.parse("6.0.0") and "google.colab" not in sys.modules:
+ # google.colab check to fix issue #5315
raise Exception(
"Your Jupyter Notebook is too old. Please upgrade to version 6 or higher."
)
| Check and Fix notebook / jupyter client warning on Colab
## Description
This needs to be checked on Colab, since it seems Colab has its own outdated versions of notebook and jupyter-client as well.
https://github.com/OpenMined/PySyft/issues/4915
## Definition of Done
Fix for Colab if possible.
| `import syft as sy` shows this exception
```
Exception: Your Jupyter Notebook is too old. Please upgrade to version 6 or higher.
```
and running `pip install --upgrade notebook` shows this error
```
ERROR: google-colab 1.0.0 has requirement notebook~=5.3.0; python_version >= "3.0", but you'll have notebook 6.2.0 which is incompatible.
ERROR: google-colab 1.0.0 has requirement tornado~=5.1.0; python_version >= "3.0", but you'll have tornado 6.1 which is incompatible.
```
alongside this warning
```
WARNING: Upgrading ipython, ipykernel, tornado, prompt-toolkit or pyzmq can
cause your runtime to repeatedly crash or behave in unexpected ways and is not
recommended. If your runtime won't connect or execute code, you can reset it
with "Factory reset runtime" from the "Runtime" menu.
```
although the exception goes away after restarting runtime as the notebook is updated.
So, what exactly is the definition of the fix?
@ArtistBanda well, there was a bug with notebook 5.x where it wouldn't work, but I think it's unrelated to Colab, so in this instance we could allow the `Exception: Your Jupyter Notebook is too old` check to pass if the code is running in Colab for now, which is detected by doing this:
```python
if "google.colab" in sys.modules:
``` | 2021-03-19T18:24:03 |
|
OpenMined/PySyft | 5,332 | OpenMined__PySyft-5332 | [
"5324"
] | f30720c9b2c8a0c0f65cba0c3e5d4bdd19118797 | diff --git a/src/syft/lib/PIL/__init__.py b/src/syft/lib/PIL/__init__.py
new file mode 100644
--- /dev/null
+++ b/src/syft/lib/PIL/__init__.py
@@ -0,0 +1,48 @@
+# stdlib
+import functools
+from typing import Any as TypeAny
+from typing import List as TypeList
+from typing import Tuple as TypeTuple
+
+# third party
+import PIL
+
+# syft relative
+from . import image # noqa: 401
+from ...ast import add_classes
+from ...ast import add_methods
+from ...ast import add_modules
+from ...ast.globals import Globals
+from ..util import generic_update_ast
+
+LIB_NAME = "PIL"
+PACKAGE_SUPPORT = {"lib": LIB_NAME}
+
+
+def create_ast(client: TypeAny = None) -> Globals:
+ ast = Globals(client)
+
+ modules: TypeList[TypeTuple[str, TypeAny]] = [
+ ("PIL", PIL),
+ ("PIL.Image", PIL.Image),
+ ]
+
+ classes: TypeList[TypeTuple[str, str, TypeAny]] = [
+ ("PIL.Image.Image", "PIL.Image.Image", PIL.Image.Image)
+ ]
+
+ methods: TypeList[TypeTuple[str, str]] = []
+
+ add_modules(ast, modules)
+ add_classes(ast, classes)
+ add_methods(ast, methods)
+
+ for klass in ast.classes:
+ klass.create_pointer_class()
+ klass.create_send_method()
+ klass.create_storable_object_attr_convenience_methods()
+
+ return ast
+
+
+update_ast = functools.partial(generic_update_ast, LIB_NAME, create_ast)
diff --git a/src/syft/lib/PIL/image.py b/src/syft/lib/PIL/image.py
new file mode 100644
--- /dev/null
+++ b/src/syft/lib/PIL/image.py
@@ -0,0 +1,35 @@
+# third party
+import PIL
+import numpy as np
+import torch
+import torchvision
+
+# syft relative
+from ...generate_wrapper import GenerateWrapper
+from ...lib.torch.tensor_util import protobuf_tensor_deserializer
+from ...lib.torch.tensor_util import protobuf_tensor_serializer
+from ...proto.lib.torch.tensor_pb2 import TensorData
+
+
+def object2proto(obj: PIL.Image.Image) -> TensorData:
+ image_tensor = torch.Tensor(np.array(obj))
+ tensor_proto = protobuf_tensor_serializer(image_tensor)
+
+ return tensor_proto
+
+
+def proto2object(proto: TensorData) -> PIL.Image.Image:
+ image_tensor = protobuf_tensor_deserializer(proto)
+ if image_tensor.dim() == 3:
+ image_tensor = image_tensor.permute(2, 0, 1)
+ image_obj = torchvision.transforms.functional.to_pil_image(image_tensor)
+ return image_obj
+
+
+GenerateWrapper(
+ wrapped_type=PIL.Image.Image,
+ import_path="PIL.Image.Image",
+ protobuf_scheme=TensorData,
+ type_object2proto=object2proto,
+ type_proto2object=proto2object,
+)
diff --git a/src/syft/lib/torchvision/allowlist.py b/src/syft/lib/torchvision/allowlist.py
--- a/src/syft/lib/torchvision/allowlist.py
+++ b/src/syft/lib/torchvision/allowlist.py
@@ -466,18 +466,12 @@
"min_version": "0.8.0"
# Torch 1.6 expects input to be PIL image, so minimum version as 0.7 (Torch 1.7.0)
}
-
-# TODO: Fix when we have PIL support
-# Issue: https://github.com/OpenMined/PySyft/issues/5324
-# Following takes PIL image as input, currently not supported
-# allowlist["torchvision.transforms.functional.to_grayscale"] = {
-# "return_type" : "torch.Tensor",
-# "test_parameters" : "(npy_array)"
-# }
-
-# Following converts image to PIL image, currently not supported
-# allowlist["torchvision.transforms.functional.to_pil_image"] = "PIL.Image.Image"
-
+allowlist["torchvision.transforms.functional.to_grayscale"] = {
+ "return_type": "PIL.Image.Image"
+}
+allowlist["torchvision.transforms.functional.to_pil_image"] = {
+ "return_type": "PIL.Image.Image",
+}
allowlist["torchvision.transforms.functional.to_tensor"] = {
"return_type": "torch.Tensor",
}
| diff --git a/tests/syft/lib/PIL/__init__.py b/tests/syft/lib/PIL/__init__.py
new file mode 100644
diff --git a/tests/syft/lib/PIL/pil_test.py b/tests/syft/lib/PIL/pil_test.py
new file mode 100644
--- /dev/null
+++ b/tests/syft/lib/PIL/pil_test.py
@@ -0,0 +1,44 @@
+# third party
+import pytest
+
+# syft absolute
+import syft as sy
+from syft.grid.duet.ui import LOGO_URL
+
+
[email protected](lib="PIL")
+def test_send_and_get() -> None:
+ # third party
+ import PIL
+
+ sy.load("PIL")
+
+ data_owner = sy.VirtualMachine().get_root_client()
+
+ im = PIL.Image.open(LOGO_URL)
+ remote_im = im.send(data_owner)
+ received_im = remote_im.get()
+
+ assert PIL.ImageChops.difference(im, received_im).getbbox() is None
+
+
[email protected](lib="PIL")
+def test_remote_create() -> None:
+ # third party
+ import PIL
+ import numpy as np
+ import torch
+
+ sy.load("PIL")
+
+ data_owner = sy.VirtualMachine().get_root_client()
+ remote_torchvision = data_owner.torchvision
+
+ im = PIL.Image.open(LOGO_URL)
+ im_array = np.array(im)
+ im_tensor = torch.Tensor(im_array).permute(2, 0, 1)
+ remote_tensor = im_tensor.send(data_owner)
+ remote_im = remote_torchvision.transforms.functional.to_pil_image(remote_tensor)
+ received_im = remote_im.get()
+
+ assert PIL.ImageChops.difference(im, received_im).getbbox() is None
| Add PIL.Image.Image for torchvision
## Description
Add `PIL.Image.Image` return type support for `torchvision`.
## Definition of Done
`torchvision/allowlist.py` can enable `to_pil_image` and `to_grayscale`.
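Once enabled, remote usage could look roughly like the condensed sketch below (adapted from the test above; the image path is illustrative):
```python
import PIL.Image
import numpy as np
import torch

import syft as sy

sy.load("PIL")
client = sy.VirtualMachine().get_root_client()

im = PIL.Image.open("logo.png")  # any local RGB image
tensor = torch.Tensor(np.array(im)).permute(2, 0, 1)
remote_tensor = tensor.send(client)

# to_pil_image now returns a pointer to a PIL.Image.Image on the VM
remote_im = client.torchvision.transforms.functional.to_pil_image(remote_tensor)
received_im = remote_im.get()
```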
| 2021-03-20T03:49:22 |